id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
25203 | https://en.wikipedia.org/wiki/Quilting | Quilting | Quilting is the process of joining a minimum of three layers of fabric together either through stitching manually using a needle and thread, or mechanically with a sewing machine or specialised longarm quilting system. An array of stitches is passed through all layers of the fabric to create a three-dimensional padded surface. The three layers are typically referred to as the top fabric or quilt top, batting or insulating material, and the backing.
Quilting varies from a purely functional fabric joinery technique to highly elaborate, decorative three-dimensional surface treatments. A wide variety of textile products are traditionally associated with quilting, including bed coverings, home furnishings, garments and costumes, wall hangings, artistic objects, and cultural artifacts.
A quilter can employ a wide range of effects that contribute to the quality and utility of the final quilted material. To create these effects, the quilter manipulates elements such as material type and thickness, stitch length and style, pattern design, piecing, and cutting. Two-dimensional effects such as optical illusions can be achieved through aesthetic choices regarding colour, texture, and print. Three-dimensional and sculptural components of quilted material can be manipulated and enhanced further by embellishment, which may include appliqué, embroidery techniques such as shisha mirror work, and the inclusion of other objects or elements such as pearls, beads, buttons, and sequins. Some quilters create or dye their own fabrics. In contemporary artistic quilting, quilters sometimes use new and experimental materials such as plastics, paper, natural fibers, and plants.
Quilting can be considered one of the first examples of upcycling, as quilters have historically made extensive use of remnants and offcuts for the creation of new products.
History
Early quilting
The origin of the term 'quilt' is linked to the Latin word culcita, meaning a bolster, cushion, or stuffed sack. The word came into the English language from the French word cuilte. The first use of the term seems to have been in England in the 13th century.
The sewing techniques of piecing, appliqué, and quilting have been used to create clothing and furnishings in various parts of the world for several millennia, and a wide range of unique quilting styles and techniques have evolved around the globe.
The earliest known quilted garment is depicted on the carved ivory figure of a Pharaoh dating from the ancient Egyptian First Dynasty. In 1924 archaeologists discovered a quilted floor covering in Mongolia, estimated to date between 100 BC and 200 AD.
In Europe, quilting has been part of the needlework tradition since about the fifth century. Early objects contained Egyptian cotton, which may indicate that Egyptian and Mediterranean trade provided a conduit for the technique. However, quilted objects were relatively rare in Europe until approximately the twelfth century, when quilted bedding and other items appeared after the return of the Crusaders from the Middle East. The medieval quilted gambeson, aketon and arming doublet were garments worn under or instead of chain mail or plate armor. These later developed into the quilted doublet worn as part of European male clothing from the fourteenth to seventeenth century. The earliest known surviving European bed quilt is the Tristan quilt, which was made in late-fourteenth century Italy from linen padded with wool. The blocks across its center are scenes from the legend of Tristan. The quilt is held in the Victoria and Albert Museum in London.
American quilts
In American Colonial times, quilts were predominantly whole-cloth quilts—a single piece of fabric layered with batting and backing held together with fine needlework quilting. Broderie perse quilts were popular during this time, and the majority of pieced or appliquéd quilts made during the 1770–1800 period were medallion-style quilts (quilts with a central ornamental panel and one or more borders). Patchwork quilting in America dates to the 1770s, the decade the United States gained its independence from England. These late-eighteenth- and nineteenth-century patchwork quilts often mixed wool, silk, linen, and cotton in the same piece, as well as mixing large-scale (often chintz) and small-scale (often calico) patterns. In North America, worn-out clothes were used to create new quilts, and in these quilts the internal batting layer was made up of old blankets or older quilts.
During American pioneer days, foundation piecing became popular. Paper was cut into shapes and used as a pattern; each individual piece of cut fabric was basted around the paper pattern. Paper was a scarce commodity in the early American West, so women would save letters from home, postcards, newspaper clippings, and catalogs to use as patterns. The paper not only served as a pattern but as an insulator. Paper found between the layers of these old quilts has become a primary source of information about pioneer life.
Quilts made without any insulation or batting were referred to as summer quilts. They were not made for warmth, but to keep the chill off during cooler summer evenings.
African-American quilts
There is a long tradition of African-American quilting beginning with quilts made by enslaved Africans, both for themselves and for the people who enslaved them. The style of these quilts was determined largely by time period and region, rather than race, and the documented slave-made quilts generally resemble those made by white women in their region. After 1865 and the end of slavery in the United States, African-Americans began to develop their own distinctive style of quilting.
Harriet Powers, an African American woman born into slavery, made two famous "story quilts" and was one of the many African-American quilters who contributed to the development of quilting in the United States. This style of African-American quilting was characterized by its bright colors, organization in a strip arrangement, and asymmetrical patterns.
The first nationwide recognition of African-American quilt-making came when the Gee's Bend quilting community of Alabama was celebrated in an exhibition that opened in 2002 and traveled to many museums, including the Smithsonian. Gee's Bend is a small, isolated community of African-Americans in southern Alabama with a quilt-making tradition that goes back several generations and is characterized by pattern improvisation, multiple patterning, bright and contrasting colors, visual motion, and a lack of rules. The contributions made by Harriet Powers and other quilters of Gee's Bend, Alabama have been recognized by the US Postal Service with a series of stamps. Many of the quilters of Gee's Bend also participated in the Freedom Quilting Bee, a quilting co-op created by some of the African-American women of Wilcox County, Alabama. Some of the founding and influential members included Estelle Witherspoon, Willie Abrams, Lucy Mingo, Minder Pettway Coleman, and Aolar Mosely. The communal nature of the quilting process (and how it can bring together women of varied races and backgrounds) was honored in the series of stamps. Community and storytelling are common themes in African-American quilts.
Beginning with the children's story Sweet Clara and the Freedom Quilt (1989), a legend has developed that enslaved people used quilts as a means to share and transmit secret messages to escape slavery and travel the Underground Railroad. Consensus among historians is that there is no sound basis for this belief, and no documented mention among the thousands of slave narratives or other contemporary records.
Contemporary quilters such as Faith Ringgold utilize quilt making to tell stories and make political statements about the African-American experience. Ringgold, originally a painter, began quilting in order to stray away from Western art practices. Her famous "story quilts" utilize mixed media, painting, and quilting. One of her most famous quilts, Tar Beach 2 (1990), depicts the story of a young African-American girl flying around Harlem in New York City.
Bisa Butler, another modern African-American quilter, celebrates Black life with her vibrant, quilted portraits of both everyday people and notable historical figures. Her quilts are now preserved in the permanent collections at the National Museum of African American History and Culture, the Art Institute of Chicago, and about a dozen other art museums.
Amish quilts
Another American group to develop a distinct style of quilting were the Amish. Typically, these quilts use only solid fabrics, are pieced from geometric shapes, do not contain appliqué, and construction is simple (corners are butted, rather than mitered, for instance) and done entirely by hand. Amish quilters also tend to use simple patterns: Lancaster County Amish are known for their Diamond-in-a-Square and Bars patterns, while other communities use patterns such as Brick, Streak of Lightning, Chinese Coins, and Log Cabins, and midwestern communities are known for their repeating block patterns. Borders and color choice also vary by community. For example, Lancaster quilts feature wide borders with lavish quilting. Midwestern quilts feature narrower borders to balance the fancier piecing.
Native American quilts
Some Native Americans are thought to have learned quilting through observation of white settlers; others learned it from missionaries who taught quilting to Native American women along with other homemaking skills. Native American women quickly developed their own unique style, the Lone Star design (also called the Star of Bethlehem), a variation on Morning Star designs that had been featured on Native American clothing and other items for centuries. These quilts often featured floral appliqué framing the star design. Star quilts have become an important part of many Plains Indian ceremonies, replacing buffalo robes traditionally given away at births, marriages, tribal elections, and other ceremonies. Pictorial quilts, created with appliqué, were also common.
Another distinctive style of Native American quilting is Seminole piecing, created by Seminoles living in the Florida Everglades. The style evolved out of a need for cloth (the closest town was often a week's journey away). Women would make strips by sewing the remnants of fabric rolls together, then sew these into larger pieces to make clothing. Eventually the style began to be used not just for clothing but for quilts as well. In 1900, with the introduction of sewing machines and readily available fabric in Seminole communities, the patterns became much more elaborate, and the style continues to be in use today, both by Seminole women and by others who have copied and adapted their designs and techniques.
Hawaiian quilting
"Hawaiian quilting was well established by the beginning of the twentieth century. Hawaiian women learned to quilt from the wives of missionaries from New England in the 1820s. Though they learned both pieced work and applique, by the 1870s they had adapted applique techniques to create a uniquely Hawaiian mode of expression. The classic Hawaiian quilt design is a large, bold, curvilinear appliqué pattern that covers much of the surface of the quilt, with the symmetrical design cut from only one piece of fabric."
South Asian quilting
There are two primary forms of quilting that originate in South Asia: Nakshi Kantha and Ralli. Nakshi Kantha quilts originated in India and are typically made of scraps and worn-out fabric stitched together with old sari threads using kantha embroidery stitches. "The layers of cloth were spread on the ground, held in place with weights at the edges, and sewn together with rows of large basting stitches. The cloth was then folded and worked on whenever there was time." The first recorded kantha are more than 500 years old.
Ralli quilts are traditionally made in Pakistan, western India, and the surrounding area. They are made by every sector of society including Hindu and Muslim women, women of different castes, and women from different towns or villages or tribes, with the colors and designs varying among these groups. The name comes from ralanna, a word meaning to mix or connect. Quilt tops were designed and pieced by one woman using scraps of hand-dyed cotton. This cotton often comes from old dresses or shawls. Once pieced, the quilt top is placed on a reed mat with the other layers and sewn together using thick, colored thread in straight parallel lines by members of the designer's family and community.
East Asian quilting
Quilting in Japan, until the 20th century, generally covered local bast fibers with more valuable cotton cloth. The rectangular nature of Japanese cloth articles encouraged rectangle-based patterns. Sashiko stitching has now also developed purely decorative forms.
Swedish quilting
Quilting originated in Sweden in the fifteenth century with heavily stitched and appliquéd quilts made for the very wealthy. These quilts, created from silk, wool, and felt, were intended to be both decorative and functional and were found in churches and in the homes of nobility. Imported cotton first appeared in Sweden in 1870, and began to appear in Swedish quilts soon after along with scraps of wool, silk, and linen. As the availability of cotton increased and its price went down, quilting became widespread among all classes of Swedish society. Wealthier quilters used wool batting while others used linen scraps, rags, or paper mixed with animal hair. In general, these quilts were simple and narrow, made by both men and women. The biggest influence on Swedish quilting in this time period is thought to have come from America as Swedish immigrants to the United States returned to their home country when conditions there improved.
Art quilting
During the late 20th century, art quilts became popular for their aesthetic and artistic qualities rather than for functionality; these quilts may be displayed on a wall or table instead of being used on a bed. "It is believed that decorative quilting came to Europe and Asia during the Crusades (A.D. 1100–1300), a likely idea because textile arts were more developed in China and India than in the West."
American artist Judy Chicago stated in a 1981 interview that, were it not for sexism in the visual arts, the art world, and broader society, quilting would be more widely regarded as a form of high art.
Modern quilting
In the early 21st century, modern quilting became a more prominent area of quilting. Modern quilting follows a distinct aesthetic style that draws inspiration from modern architecture, art, and design while using traditional quilt-making techniques. Modern quilts are different from art quilts in that they are made to be used. Modern quilts are also influenced by the Quilters of Gee's Bend, Amish quilts, Nancy Crow, Denyse Schmidt, Gwen Marston, Yoshiko Jinzenji, Bill Kerr and Weeks Ringle.
The Modern Quilt Guild has attempted to define modern quilting. The characteristics of a modern quilt may include: the use of bold colors and prints, high contrast and graphic areas of solid color, improvisational piecing, minimalism, expansive negative space, and alternate grid work.
The Modern Quilt Guild, a non-profit corporation with 14,000 members in more than 200 member guilds in 39 countries, fosters modern quilting via local guilds, workshops, webinars, and QuiltCon—an annual modern quilting conference and convention. The founding Modern Quilt Guild formed in October 2009 in Los Angeles.
QuiltCon features a quilt show with 400+ quilts, quilt vendors, lectures, and quilting workshops and classes. The first QuiltCon was February 21–24, 2013 in Austin, TX. QuiltCon 2020 was held in Austin, Texas, February 20–23, 2020 and featured 400 juried modern quilts from quilters around the world.
Quilting in fashion and design
Unusual quilting designs have increasingly become popular as decorative textiles. As industrial sewing technology became more precise and flexible, quilting using exotic fabrics and embroidery began to appear in home furnishings in the early 21st century.
Quilt blocks
The quilt block is traditionally a sub-unit composed of several pieces of fabric sewn together. The quilt blocks are repeated, or sometimes alternated with plain blocks, to form the overall design of a quilt. Barbara Brackman has documented over 4000 different quilt block patterns from the early 1830s to the 1970s in the Encyclopedia of Pieced Quilt Patterns. Some of the simpler designs for quilt blocks include the Nine-Patch, Shoo Fly, Churn Dash, and the Prairie Queen.
Most geometric quilt block designs fit into a "grid", which is the number of squares a pattern block is divided into. The five categories into which most square patterns fall are Four Patch, Nine Patch, Five-Patch, Seven-Patch, and Eight-Pointed Star. Each block can be subdivided into multiples: a Four-Patch can be constructed of 16 or 64 squares, for example.
A simple Nine Patch is made by sewing five patterned or dark pieces (patches) to four light square pieces in alternating order. These nine sewn squares make one block.
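As a rough illustration of the arrangement just described (a sketch added for clarity, not part of the original article), the following Python snippet prints a basic Nine Patch layout, with five dark patches and four light patches alternating in a 3×3 grid:

```python
# Illustrative sketch only: a Nine Patch block as a 3x3 grid in which dark ("D")
# and light ("L") patches alternate, so the corners and center are dark (5 dark, 4 light).
def nine_patch(dark="D", light="L"):
    return [
        [dark if (row + col) % 2 == 0 else light for col in range(3)]
        for row in range(3)
    ]

for row in nine_patch():
    print(" ".join(row))
# D L D
# L D L
# D L D
```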
The Shoo Fly varies from this Nine Patch by dividing each of the four corner pieces into a light and dark triangle.
Another variation develops when one square piece is divided into two equal rectangles in the basic Nine Patch design. The Churn Dash block combines the triangles and rectangle to expand the Nine Patch.
The Prairie Queen block combines two large-scale triangles in each corner section, while the middle sections use four squares each. The center piece is one full-size square. Each of the nine sections has the same overall measurement, so the sections fit together.
The number of patterns possible by subdividing Four-, Five-, Seven-, Nine-Patches and Eight-Pointed Stars and using triangles instead of squares in the small subdivisions is almost endless.
Quilting techniques
Many types of quilting exist today. The two most widely used are hand-quilting and machine quilting.
Hand quilting is the process of using a needle and thread to sew a running stitch by hand across the entire area to be quilted. This binds the layers together. A quilting frame or hoop is often used to assist in holding the piece being quilted off the quilter's lap. A quilter can make one stitch at a time by first driving the needle through the fabric from the right side, then pushing it back up through the material from the wrong side to complete the stitch; this is called a stab stitch. Another option is called a rocking stitch, where the quilter has one hand, usually with a finger wearing a thimble, on top of the quilt, while the other hand is located beneath the piece to push the needle back up. A third option is called "loading the needle" and involves doing four or more stitches before pulling the needle through the cloth. Hand quilting is still practiced by the Amish and Mennonites within the United States and Canada, and is enjoying a resurgence worldwide.
Machine quilting is the process of using a home sewing machine or a longarm machine to sew the layers together. With the home sewing machine, the layers are tacked together before quilting. This involves laying the top, batting, and backing out on a flat surface and either pinning (using large safety pins) or tacking the layers together. Longarm quilting involves placing the layers to be quilted on a special frame. The frame has bars on which the layers are rolled, keeping these together without the need for tacking or pinning. These frames are used with a professional sewing machine mounted on a platform. The platform rides along tracks so that the machine can be moved across the layers on the frame. A longarm machine is moved across the fabric. In contrast, the fabric is moved through a home sewing machine.
Tying is another technique of fastening the three layers together. This is done primarily on quilts that are made to be used and are needed quickly. The process of tying the quilt is done with yarns or multiple strands of thread. Square knots are used to finish off the ties so that the quilt may be washed and used without fear of the knots coming undone. This technique is commonly called "tacking." In the Midwest, tacked bed covers are referred to as comforters.
Quilting is now taught in some American schools. It is also taught at senior centers around the U.S., but quilters of all ages attend classes. Similar workshops and classes are also available in other countries through guilds and community colleges.
Quilting tools
Contemporary quilters use a wide range of quilting designs and styles, from ancient and ethnic to post-modern futuristic patterns. There is no one single school or style that dominates the quilt-making world.
Sewing machines can be used in the process of piecing together a quilt top. Some quilters also use a home sewing machine for quilting together the layers of the quilt, as well as binding the final product. While most home sewing machines can be used to quilt layers together, having a wide throat (the space to the right of the needle mechanism) is useful to manipulate a bulky quilt through the machine when the throat is both high and long.
Fabric markers can be used to mark where cuts should be made in the fabric. Marks from specialist fabric markers wash out of fabrics.
Quilting rulers are usually square or rectangular measuring instruments with length measurement and degree angle markings along multiple edges.
Longarm quilting machines can be used to make larger quilts. Larger machines can be leveraged so that the quilter does not have to hold the fabric. Some specialist quilt shops offer longarm services.
Machine quilting needles are very sharp in order to readily pierce layers of quilt and properly sew together the quilt top, batting and backing.
Hand quilting needles are traditionally called betweens and are generally smaller and stronger than normal sewing needles. They have a very small eye, which prevents an extra bump at the head of the needle when the thread is pulled through.
Pins can be used in many different combinations to achieve similar results.
Thimbles provide protection to fingertips.
Specialist quilting threads come in many types, including different weights of thread and different materials. Cotton, polyester, and nylon threads are used in different forms of quilting.
Rotary cutters revolutionized quilt-making when they appeared in the late 1970s. Rotary cutters simplify the process of cutting even slices of fabric.
Basting spray is a temporary aerosol glue that can be used to spray the layers of a quilt together, so it stays in place whilst being sewn. It is a specially formulated glue that will not clog a sewing machine, and is a much quicker basting method than hand basting.
Quilting templates/patterns come in many varieties and are generally considered the basis of the structure of the quilt, like a blueprint for a house.
Bias binding or bias tape can be made from strips of quilt fabric or purchased as quilt binding. It is used in the last stage of making a quilt, and is a method of covering the edges of the quilt.
Specialty styles
Foundation piecing – also known as paper-piecing – sewing pieces of fabric onto a temporary or permanent foundation
Shadow or echo quilting – Hawaiian quilting, where quilting is done around an appliquéd piece on the quilt top, then the quilting is echoed again and again around the previous quilting line.
Ralli quilting – Pakistani and Indian quilting, often associated with the Sindh (Pakistan) and Gujarat (India) regions.
Sashiko stitching – Basic running stitch worked in heavy, white cotton thread usually on dark indigo colored fabric. It was originally used by the working classes to stitch layers together for warmth.
Trapunto quilting – stuffed quilting, often associated with Italy.
Machine trapunto quilting – a process of using water-soluble thread and an extra layer of batting to achieve trapunto design and then sandwiching the quilt and re-sewing the design with regular cotton thread.
Shadow trapunto – This involves quilting a design in fine lawn and filling some of the spaces in the pattern with small lengths of colored wool.
Tivaevae or tifaifai – A distinct art from the Cook Islands.
Watercolor quilting – A sophisticated form of scrap quilting whereby uniform sizes of various prints are arranged and sewn to create a picture or design. | Technology | Techniques_2 | null |
25211 | https://en.wikipedia.org/wiki/Quantum%20chemistry | Quantum chemistry | Quantum chemistry, also called molecular quantum mechanics, is a branch of physical chemistry focused on the application of quantum mechanics to chemical systems, particularly towards the quantum-mechanical calculation of electronic contributions to physical and chemical properties of molecules, materials, and solutions at the atomic level. These calculations include systematically applied approximations intended to make calculations computationally feasible while still capturing as much information as possible about important contributions to the computed wave functions as well as to observable properties such as structures, spectra, and thermodynamic properties. Quantum chemistry is also concerned with the computation of quantum effects on molecular dynamics and chemical kinetics.
Chemists rely heavily on spectroscopy through which information regarding the quantization of energy on a molecular scale can be obtained. Common methods are infra-red (IR) spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and scanning probe microscopy. Quantum chemistry may be applied to the prediction and verification of spectroscopic data as well as other experimental data.
Many quantum chemistry studies are focused on the electronic ground state and excited states of individual atoms and molecules as well as the study of reaction pathways and transition states that occur during chemical reactions. Spectroscopic properties may also be predicted. Typically, such studies assume the electronic wave function is adiabatically parameterized by the nuclear positions (i.e., the Born–Oppenheimer approximation). A wide variety of approaches are used, including semi-empirical methods, density functional theory, Hartree–Fock calculations, quantum Monte Carlo methods, and coupled cluster methods.
Understanding electronic structure and molecular dynamics through the development of computational solutions to the Schrödinger equation is a central goal of quantum chemistry. Progress in the field depends on overcoming several challenges, including the need to increase the accuracy of the results for small molecular systems, and to also increase the size of large molecules that can be realistically subjected to computation, which is limited by scaling considerations — the computation time increases as a power of the number of atoms.
History
Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom. However, a 1927 article of Walter Heitler (1904–1981) and Fritz London is often recognized as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. However, prior to this a critical conceptual framework was provided by Gilbert N. Lewis in his 1916 paper The Atom and the Molecule, wherein Lewis developed the first working model of valence electrons. Important contributions were also made by Yoshikatsu Sugiura and S.C. Wang. A series of articles by Linus Pauling, written throughout the 1930s, integrated the work of Heitler, London, Sugiura, Wang, Lewis, and John C. Slater on the concept of valence and its quantum-mechanical basis into a new theoretical framework. Many chemists were introduced to the field of quantum chemistry by Pauling's 1939 text The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry, wherein he summarized this work (referred to widely now as valence bond theory) and explained quantum mechanics in a way which could be followed by chemists. The text soon became a standard text at many universities. In 1937, Hans Hellmann appears to have been the first to publish a book on quantum chemistry, in the Russian and German languages.
In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. In addition to the investigators mentioned above, important progress and critical contributions were made in the early years of this field by Irving Langmuir, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Hans Hellmann, Maria Goeppert Mayer, Erich Hückel, Douglas Hartree, John Lennard-Jones, and Vladimir Fock.
Electronic structure
The electronic structure of an atom or molecule is the quantum state of its electrons. The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian, usually making use of the Born–Oppenheimer (B–O) approximation. This is called determining the electronic structure of the molecule. An exact solution for the non-relativistic Schrödinger equation can only be obtained for the hydrogen atom (though exact solutions for the bound state energies of the hydrogen molecular ion within the B-O approximation have been identified in terms of the generalized Lambert W function). Since all other atomic and molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved analytically and so approximate and/or computational solutions must be sought. The process of seeking computational solutions to these problems is part of the field known as computational chemistry.
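For reference, the electronic problem described above is conventionally written as the clamped-nuclei (Born–Oppenheimer) Schrödinger equation; the notation below is the standard textbook form rather than something quoted from this article:

```latex
% Electronic Schrodinger equation with fixed nuclear positions R (Born-Oppenheimer approximation).
% r denotes the electronic coordinates; Z_A are nuclear charges; m_e is the electron mass.
\hat{H}_{\mathrm{el}}\,\Psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})
  = E_{\mathrm{el}}(\mathbf{R})\,\Psi_{\mathrm{el}}(\mathbf{r};\mathbf{R}),
\qquad
\hat{H}_{\mathrm{el}}
  = -\sum_{i} \frac{\hbar^{2}}{2 m_{e}}\nabla_{i}^{2}
    - \sum_{i,A} \frac{Z_{A} e^{2}}{4\pi\varepsilon_{0}\,\lvert \mathbf{r}_{i}-\mathbf{R}_{A}\rvert}
    + \sum_{i<j} \frac{e^{2}}{4\pi\varepsilon_{0}\,\lvert \mathbf{r}_{i}-\mathbf{r}_{j}\rvert}
```

The electronic energy as a function of the nuclear positions, together with the nuclear–nuclear repulsion, defines the potential energy surface used in the chemical dynamics sections below.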
Valence bond theory
As mentioned above, Heitler and London's method was extended by Slater and Pauling to become the valence-bond (VB) method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds. It focuses on how the atomic orbitals of an atom combine to give individual chemical bonds when a molecule is formed, incorporating the two key concepts of orbital hybridization and resonance.
Molecular orbital theory
An alternative approach to valence bond theory was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund–Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This approach is the conceptual basis of the Hartree–Fock method and further post-Hartree–Fock methods.
Density functional theory
The Thomas–Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory (DFT). Modern-day DFT uses the Kohn–Sham method, where the density functional is split into four terms: the Kohn–Sham kinetic energy, an external potential, and exchange and correlation energies. A large part of the focus on developing DFT is on improving the exchange and correlation terms. Though this method is less developed than post-Hartree–Fock methods, its significantly lower computational requirements (scaling typically no worse than n³ with respect to n basis functions, for the pure functionals) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability and often comparable accuracy to MP2 and CCSD(T) (post-Hartree–Fock methods) has made it one of the most popular methods in computational chemistry.
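One common way of writing the Kohn–Sham decomposition mentioned above (standard notation, with the classical Coulomb term shown explicitly and exchange and correlation collected into a single E_xc) is:

```latex
% Kohn-Sham total energy as a functional of the electron density rho(r).
% T_s: kinetic energy of the non-interacting reference system; v_ext: external (nuclear) potential;
% E_H: classical Coulomb (Hartree) energy; E_xc: exchange-correlation energy, the term that
% practical density functionals approximate.
E[\rho] = T_{s}[\rho]
        + \int v_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r}
        + E_{\mathrm{H}}[\rho]
        + E_{\mathrm{xc}}[\rho]
```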
Chemical dynamics
A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum dynamics, whereas its solution within the semiclassical approximation is called semiclassical dynamics. Purely classical simulations of molecular motion are referred to as molecular dynamics (MD). Another approach to dynamics is a hybrid framework known as mixed quantum-classical dynamics; yet another hybrid framework uses the Feynman path integral formulation to add quantum corrections to molecular dynamics, which is called path integral molecular dynamics. Statistical approaches, using for example classical and quantum Monte Carlo methods, are also possible and are particularly useful for describing equilibrium distributions of states.
Adiabatic chemical dynamics
In adiabatic dynamics, interatomic interactions are represented by single scalar potentials called potential energy surfaces. This is the Born–Oppenheimer approximation introduced by Born and Oppenheimer in 1927. Pioneering applications of this in chemistry were performed by Rice and Ramsperger in 1927 and Kassel in 1928, and generalized into the RRKM theory in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable simple estimates of unimolecular reaction rates from a few characteristics of the potential surface.
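For context, the transition state theory referred to above leads to the Eyring rate expression, shown here in its standard form (not derived in this article):

```latex
% Eyring (transition-state theory) rate constant. kappa is a transmission coefficient,
% k_B the Boltzmann constant, h Planck's constant, R the gas constant, and
% Delta G^ddagger the Gibbs energy of activation.
k = \kappa\,\frac{k_{\mathrm{B}} T}{h}\,
    \exp\!\left(-\frac{\Delta G^{\ddagger}}{R T}\right)
```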
Non-adiabatic chemical dynamics
Non-adiabatic dynamics consists of taking the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule) into account. The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau–Zener transition. Their formula allows the transition probability between two adiabatic potential curves in the neighborhood of an avoided crossing to be calculated. Spin-forbidden reactions are one type of non-adiabatic reactions where at least one change in spin state occurs when progressing from reactant to product.
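The Landau–Zener formula mentioned above is usually quoted in the following form, giving the probability of a diabatic passage through an avoided crossing traversed at constant velocity:

```latex
% Landau-Zener probability of a diabatic transition. H_12 is the electronic coupling between
% the two diabatic states, v the nuclear velocity along the coupling coordinate, and
% F_1, F_2 the slopes of the two diabatic potential curves at the crossing point.
P_{\mathrm{diabatic}} = \exp\!\left(-\frac{2\pi\,\lvert H_{12}\rvert^{2}}{\hbar\, v\,\lvert F_{1}-F_{2}\rvert}\right)
```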
| Physical sciences | Chemistry: General | null |
25213 | https://en.wikipedia.org/wiki/QWERTY | QWERTY | QWERTY is a keyboard layout for Latin-script alphabets. The name comes from the order of the first six keys on the top letter row of the keyboard: Q, W, E, R, T, Y. The QWERTY design is based on a layout included in the Sholes and Glidden typewriter sold via E. Remington and Sons from 1874. QWERTY became popular with the success of the Remington No. 2 of 1878 and remains in ubiquitous use.
History
The QWERTY layout was devised and created in the early 1870s by Christopher Latham Sholes, a newspaper editor and printer who lived in Kenosha, Wisconsin. In October 1867, Sholes filed a patent application for his early writing machine he developed with the assistance of his friends Carlos Glidden and Samuel W. Soulé.
The first model constructed by Sholes used a piano-like keyboard with two rows of characters arranged alphabetically as shown below:
- 3 5 7 9 N O P Q R S T U V W X Y Z
2 4 6 8 . A B C D E F G H I J K L M
Sholes struggled for the next five years to perfect his invention, making many trial-and-error rearrangements of the original machine's alphabetical key arrangement. The study of bigram (letter-pair) frequency by educator Amos Densmore, brother of the financial backer James Densmore, is believed to have influenced the array of letters, although this contribution has been called into question. Others suggest instead that the letter groupings evolved from telegraph operators' feedback.
In November 1868 he changed the arrangement of the latter half of the alphabet, N to Z, right-to-left. In April 1870 he arrived at a four-row, upper case keyboard approaching the modern QWERTY standard, moving six vowel letters, A, E, I, O, U, and Y, to the upper row as follows:
2 3 4 5 6 7 8 9 -
A E I . ? Y U O ,
B C D F G H J K L M
Z X W V T S R Q P N
In 1873 Sholes's backer, James Densmore, successfully sold the manufacturing rights for the Sholes & Glidden Type-Writer to E. Remington and Sons. The keyboard layout was finalized within a few months by Remington's mechanics and was ultimately presented:
2 3 4 5 6 7 8 9 - ,
Q W E . T Y I U O P
Z S D F G H J K L M
A X & C V B N ? ; R
After purchasing the device, Remington made several adjustments, creating a keyboard with essentially the modern QWERTY layout. These adjustments included placing the "R" key in the place previously allotted to the period key. Apocryphal claims that this change was made to let salesmen impress customers by pecking out the brand name "TYPE WRITER QUOTE" from one keyboard row are not formally substantiated. Vestiges of the original alphabetical layout remained in the "home row" sequence DFGHJKL.
The modern ANSI layout is:
1 2 3 4 5 6 7 8 9 0 - =
Q W E R T Y U I O P [ ] \
A S D F G H J K L ; '
Z X C V B N M , . /
The QWERTY layout became popular with the success of the Remington No. 2 of 1878, the first typewriter to include both upper and lower case letters, using a shift key.
One popular but possibly apocryphal explanation for the QWERTY arrangement is that it was designed to reduce the likelihood of internal clashing of typebars by placing commonly used combinations of letters farther from each other inside the machine.
Differences from modern layout
Substituting characters
The QWERTY layout depicted in Sholes's 1878 patent is slightly different from the modern layout, most notably in the absence of the numerals 0 and 1, with each of the remaining numerals shifted one position to the left of their modern counterparts. The letter M is located at the end of the third row to the right of the letter L rather than on the fourth row to the right of the N, the letters X and C are reversed, and most punctuation marks are in different positions or are missing entirely. 0 and 1 were omitted to simplify the design and reduce the manufacturing and maintenance costs; they were chosen specifically because they were "redundant" and could be recreated using other keys. Typists who learned on these machines learned the habit of using the uppercase letter I (or lowercase letter L) for the digit one, and the uppercase O for the zero.
The 0 key was added and standardized in its modern position early in the history of the typewriter, but the 1 and exclamation point were left off some typewriter keyboards into the 1970s.
Combined characters
In early designs, some characters were produced by printing two symbols with the carriage in the same position. For instance, the exclamation point, which shares a key with the numeral 1 on post-mechanical keyboards, could be reproduced by using a three-stroke combination of an apostrophe, a backspace, and a period. A semicolon (;) was produced by printing a comma (,) over a colon (:). As the backspace key is slow in simple mechanical typewriters (the carriage was heavy and optimized to move in the opposite direction), a more professional approach was to block the carriage by pressing and holding the space bar while printing all characters that needed to be in a shared position. To make this possible, the carriage was designed to advance only after releasing the space bar.
In the era of mechanical typewriters, combined characters such as é and õ were created by the use of dead keys for the diacritics (′, ~), which did not move the paper forward. Thus the ′ and e would be printed at the same location on the paper, creating é.
Contemporaneous alternatives
There were no particular technological requirements for the QWERTY layout, since at the time there were ways to make a typewriter without the "up-stroke" typebar mechanism that had required it to be devised. Not only were there rival machines with "down-stroke" and "front stroke" positions that gave a visible printing point, the problem of typebar clashes could be circumvented completely: examples include Thomas Edison's 1872 electric print-wheel device which later became the basis for Teletype machines; Lucien Stephen Crandall's typewriter (the second to come onto the American market in 1883) whose type was arranged on a cylindrical sleeve; the Hammond typewriter of 1885 which used a semi-circular "type-shuttle" of hardened rubber (later light metal); and the Blickensderfer typewriter of 1893 which used a type wheel. The early Blickensderfer's "Ideal" keyboard was also non-QWERTY, instead having the sequence "DHIATENSOR" in the home row, these 10 letters being capable of composing 70% of the words in the English language.
Properties
Alternating hands while typing is a desirable trait in a keyboard design. While one hand types a letter, the other hand can prepare to type the next letter, making the process faster and more efficient. In the QWERTY layout many more words can be spelled using only the left hand than the right hand. Thousands of English words can be spelled using only the left hand, while only a couple of hundred words can be typed using only the right hand (the three most frequent letters in the English language, E, T, and A, are all typed with the left hand). In addition, more typing strokes are done with the left hand in the QWERTY layout. This is helpful for left-handed people but disadvantageous for right-handed people.
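This left-hand/right-hand split can be checked mechanically. The short Python sketch below is purely illustrative; the hand assignment is the conventional touch-typing split of the letter keys, not something specified in this article:

```python
# Illustrative only: classify which hand types each letter on a standard QWERTY layout,
# using the conventional touch-typing split of the letter keys between the two hands.
LEFT_HAND = set("qwertasdfgzxcvb")
RIGHT_HAND = set("yuiophjklnm")

def hands_used(word):
    """Return the set of hands ('L' and/or 'R') needed to type a word."""
    hands = set()
    for ch in word.lower():
        if ch in LEFT_HAND:
            hands.add("L")
        elif ch in RIGHT_HAND:
            hands.add("R")
    return hands

print(hands_used("stewardesses"))  # {'L'} -- typed entirely with the left hand
print(hands_used("monopoly"))      # {'R'} -- typed entirely with the right hand
print(hands_used("keyboard"))      # both hands are needed
```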
Contrary to popular belief, the QWERTY layout was not designed to slow the typist down, but rather to speed up typing. Indeed, there is evidence that, aside from the issue of jamming, placing often-used keys farther apart increases typing speed, because it encourages alternation between the hands. (On the other hand, in the German keyboard the Z has been moved between the T and the U to help type the frequent digraphs TZ and ZU in that language.) Almost every word in the English language contains at least one vowel letter, but on the QWERTY keyboard only the vowel letter A is on the home row, which requires the typist's fingers to leave the home row for most words.
A feature much less commented on than the order of the keys is that the keys do not form a rectangular grid, but rather each column slants diagonally. This is because of the mechanical linkages – each key is attached to a lever, and hence the offset prevents the levers from running into each other – and has been retained in most electronic keyboards. Some keyboards, such as the Kinesis or TypeMatrix, retain the QWERTY layout but arrange the keys in vertical columns, to reduce unnecessary lateral finger motion.
Computer keyboards
The first computer terminals such as the Teletype were typewriters that could produce and be controlled by various computer codes. These used the QWERTY layouts and added keys such as escape which had special meanings to computers. Later keyboards added function keys and arrow keys. Since the standardization of personal computers and Windows after the 1980s, most full-sized computer keyboards have followed this standard. This layout has a separate numeric keypad for data entry at the right, 12 function keys across the top, and a cursor section to the right and center with keys for Insert, Delete, Home, End, Page Up, and Page Down, with cursor arrows in an inverted-T shape.
Diacritical marks
QWERTY was designed for English, a language with accents ('diacritics') appearing only in a few words of foreign origin. The standard US keyboard has no provision for these at all; the need was later met by the so-called "US-International" keyboard mapping, which uses "dead keys" to type accents without having to add more physical keys. (The same principle is used in the standard US keyboard layout for macOS, but in a different way). Most European (including UK) keyboards for PCs have an AltGr key ('Alternative Graphics' key, replacing the right Alt key) that enables easy access to the most common diacritics used in the territory where sold. For example, the default keyboard mapping for the UK/Ireland keyboard has the diacritics used in Irish but these are rarely printed on the keys; but to type the accents used in Welsh and Scots Gaelic requires the use of a "UK Extended" keyboard mapping and the dead key or compose key method. This arrangement applies to Windows, ChromeOS and Linux; macOS computers have different techniques. The US International and UK Extended mappings provide many of the diacritics needed for students of other European languages.
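The dead-key idea has a direct counterpart in modern text handling: a base letter followed by a combining accent can be normalized into a single precomposed character. The Python sketch below is a general illustration of that mechanism, not a description of any particular keyboard driver:

```python
import unicodedata

# A dead-key style sequence: the letter "e" followed by a combining acute accent (U+0301).
sequence = "e\u0301"

# Unicode normalization form NFC composes the pair into the precomposed character U+00E9.
composed = unicodedata.normalize("NFC", sequence)

print(composed)                        # é
print(len(sequence), len(composed))    # 2 1
print(unicodedata.name(composed))      # LATIN SMALL LETTER E WITH ACUTE
```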
Other keys and characters
Some QWERTY keyboards have alt codes, in which holding Alt while inputting a sequence of numbers on a numeric keypad allows the entry of special characters. For example, one such code results in ú (a Latin lowercase letter u with an acute accent).
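Classic alt codes are effectively numeric indexes into a legacy code page, so the mapping can be reproduced by decoding the corresponding byte value. The Python sketch below uses example values chosen for illustration rather than taken from the text above; by convention, codes typed without a leading zero refer to the OEM code page and codes with a leading zero refer to the Windows ANSI code page:

```python
# Illustrative example values: both of these historically produce "ú" on Western European
# Windows systems -- Alt+163 via the OEM code page (CP850), Alt+0250 via Windows-1252.
examples = [("Alt+163", 163, "cp850"), ("Alt+0250", 250, "cp1252")]

for label, value, codepage in examples:
    char = bytes([value]).decode(codepage)
    print(f"{label} ({codepage}) -> {char}")
```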
Specific language variants
Minor changes to the arrangement are made for other languages. There are a large number of different keyboard layouts used for different languages written in Latin script. They can be divided into three main families according to where the Q, A, Z, M, and Y keys are placed on the keyboard. These are usually named after the first six letters, for example the QWERTY layout and the AZERTY layout.
This section also covers keyboard layouts that include additional symbols for other languages. These differ from layouts that were designed with the goal of being usable for multiple languages (see Multilingual variants).
The following sections give general descriptions of QWERTY keyboard variants along with details specific to certain operating systems. The emphasis is on Microsoft Windows.
English
Canada
English-speaking Canadians have traditionally used the same keyboard layout as in the United States, unless they are in a position where they have to write French on a regular basis. French-speaking Canadians, by contrast, have favoured the Canadian French keyboard layout (see French (Canada), below).
The CSA keyboard is the official multilingual keyboard layout of Canada.
United Kingdom
The United Kingdom and Ireland use a keyboard layout based on the 48-key version defined in the (now withdrawn) British Standard BS 4822. It is very similar to that of the United States, but has an AltGr key and a larger Enter key, includes £ and € signs and some rarely used EBCDIC symbols (¬, ¦), and uses different positions for the characters @, ", #, ~, \, and |.
The BS 4822:1994 standard did not make any use of the AltGr key and lacked support for any non-ASCII characters other than £ and ¬. It also assigned a key for the non-ASCII character broken bar ¦, but lacked one for the far more commonly used ASCII character vertical bar |. It also lacked support for various diacritics used in the Welsh alphabet and the Scots Gaelic alphabet, and is also missing the letter yogh, ȝ, used very rarely in the Scots language. Therefore, various manufacturers have modified or extended the BS 4822 standard:
The B00 key (to the left of Z), shifted, results in the vertical bar | on some systems (e.g. Windows UK/Ireland keyboard layout and Linux/X11 UK/Ireland keyboard layout), rather than the broken bar ¦ assigned by BS 4822 and provided in some systems (e.g. IBM OS/2 UK166 keyboard layout)
The E00 key (to the left of 1) with AltGr provides either the vertical bar | (OS/2's UK166 keyboard layout, Linux/X11 UK keyboard layout) or the broken bar ¦ (Microsoft Windows UK/Ireland keyboard layout)
Support for the diacritics needed for Scots Gaelic and Welsh was added to Windows and ChromeOS using a "UK-extended" setting (see below); Linux and X11 systems have an explicit or reassigned key for this purpose.
UK Apple keyboard
The British version of the Apple Keyboard does not use the standard UK layout. Instead, some older versions have the US layout (see below) with a few differences: the # sign is reached by Option+3 and the £ sign by Shift+3, the opposite of the US layout. The € sign is also present and is typed with Option+2. Umlauts are reached by typing Option+U and then the vowel, and ß is reached by typing Option+S.
Newer Apple "British" keyboards use a layout that is relatively unlike either the US or traditional UK keyboard. It uses an elongated return key, a shortened left with and in the newly created position, and in the upper left of the keyboard are and instead of the traditional EBCDIC codes. The middle-row key that fits inside the key has and .
United States
The arrangement of the character input keys and the keys contained in this layout is specified in the US national standard ANSI-INCITS 154-1988 (R1999) (formerly ANSI X3.154-1988 (R1999)), where this layout is called "ASCII keyboard". The complete US keyboard layout, as it is usually found, also contains the usual function keys in accordance with the international standard ISO/IEC 9995-2, although this is not explicitly required by the US American national standard.
US keyboards are used not only in the United States, but also in many other English-speaking places (except the UK and Ireland), including India, Australia, Anglophone Canada, Hong Kong, New Zealand, South Africa, Malaysia, Singapore, the Philippines, and Indonesia, which use the same 26-letter alphabet as English. In many other English-speaking jurisdictions (e.g., Canada, Australia, the Caribbean nations, Hong Kong, Malaysia, India, Pakistan, Bangladesh, Singapore, New Zealand, and South Africa), local spelling sometimes conforms more closely to British English usage, although these nations decided to use a US English keyboard layout. Until Windows 8, when Microsoft separated the settings, this had the undesirable side effect of also setting the language to US English, rather than the local orthography.
The US keyboard layout has a second Alt key instead of the AltGr key and does not use any dead keys; this makes it inefficient for all but a handful of languages (unless the 'US-International' keyboard mapping is used, see below). On the other hand, the US keyboard layout (or the similar UK layout) is occasionally used by programmers in countries where the keys frequently used in programming are located in less convenient positions on the locally customary layout.
On some keyboards the Enter key is bigger than usual and also takes up part of the row above, more or less the area of the traditional location of the backslash key. In these cases the backslash is located in alternative places. It can be situated one row above the default location, at the right end of the number row. Sometimes it is placed one row below its traditional position, next to the Enter key on the home row (in these cases the Enter key is narrower than usual on the row of the backslash's default location). It may also be two rows below its default position, to the right of a narrower-than-usual right Shift key.
Arabic
Two keyboard layouts that are based on Qwerty are used in Arabic-speaking countries. Microsoft designate them as Arabic (101) and Arabic (102).
In both, the number row is identical to the American layout, besides being mirrored and not including the key to the left of 1.
The key on the right side of the keyboard is also the same. could also be produced by shifting the key on the left side of the keyboard. are produced by shifting the same keys, but is mirrored to . In Arabic (102) it's true also for {} which are again mirrored.
Finally, instead of being the normal output of their keys, are produced by shifting the same keys.
Czech
The typewriter came to the Czech-speaking area in the late 19th century, when it was part of Austria-Hungary where German was the dominant language of administration. Therefore, Czech typewriters have the QWERTZ layout.
However, with the introduction of imported computers, especially since the 1990s, the QWERTY keyboard layout is frequently used for computer keyboards. The Czech QWERTY layout differs from QWERTZ in that characters missing from the Czech keyboard are accessible with AltGr on the same keys where they are located on an American keyboard. On Czech QWERTZ keyboards, the positions of these characters accessed through AltGr differ.
Danish
Both the Danish and Norwegian keyboards include dedicated keys for the letters Å/å, Æ/æ and Ø/ø, but the placement is a little different, as the Æ and Ø keys are swapped on the Norwegian layout. (The Finnish–Swedish keyboard is also largely similar to the Norwegian layout, but the Ø and Æ are replaced with Ö and Ä. On some systems, the Danish keyboard may allow typing Ö/ö and Ä/ä by holding the AltGr or Option key while striking Ø and Æ, respectively.) Computers with Windows are commonly sold with ÖØÆ and ÄÆØ printed on the two keys, allowing the same computer hardware to be sold in Denmark, Finland, Norway and Sweden, with different operating system settings.
Dutch (Netherlands)
Though it is seldom used (most Dutch keyboards use the US International layout), the Dutch layout uses QWERTY but has additions for the € sign, the diaeresis (¨), and the braces ({ }) as well as different locations for other symbols. An older version contained a single-stroke key for the Dutch character IJ/ij, which is usually typed by the combination of I and J. In the 1990s, there was a version with the now-obsolete florin sign (Dutch: guldenteken) for IBM PCs.
In Flanders (the Dutch-speaking part of Belgium), "AZERTY" keyboards are used instead, due to influence from the French-speaking part of Belgium.
| Technology | Media and communication: Basics | null |
25220 | https://en.wikipedia.org/wiki/Quantum%20computing | Quantum computing | A quantum computer is a computer that exploits quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. Theoretically a large-scale quantum computer could break some widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.
The basic unit of information in quantum computing, the qubit (or "quantum bit"), serves the same function as the bit in classical computing. However, unlike a classical bit, which can be in one of two states (a binary), a qubit can exist in a superposition of its two "basis" states, a state that is in an abstract sense "between" the two basis states. When measuring a qubit, the result is a probabilistic output of a classical bit. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently and quickly.
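The amplitude picture can be made concrete with a few lines of linear algebra. The NumPy sketch below is illustrative only and is not tied to any particular quantum computing framework: it places a qubit in an equal superposition with a Hadamard gate, reads off the measurement probabilities via the Born rule, and then applies the gate again so that interference returns the qubit to its initial state.

```python
import numpy as np

# Basis state |0> as a column vector of amplitudes.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ ket0
print(np.abs(superposed) ** 2)   # [0.5 0.5]  -- Born rule: probability = |amplitude|^2

# Applying H again makes the |1> amplitudes cancel (destructive interference),
# so a measurement now yields 0 with certainty.
back = H @ superposed
print(np.abs(back) ** 2)         # [1. 0.]  (up to floating-point rounding)
```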
Quantum computers are not yet practical for real work. Physically engineering high-quality qubits has proven challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. National governments have invested heavily in experimental research that aims to develop scalable qubits with longer coherence times and lower error rates. Example implementations include superconductors (which isolate an electrical current by eliminating electrical resistance) and ion traps (which confine a single atomic particle using electromagnetic fields).
In principle, a classical computer can solve the same computational problems as a quantum computer, given enough time. Quantum advantage comes in the form of time complexity rather than computability, and quantum complexity theory shows that some quantum algorithms are exponentially more efficient than the best-known classical algorithms. A large-scale quantum computer could in theory solve computational problems unsolvable by a classical computer in any reasonable amount of time. This concept of extra ability has been called "quantum supremacy". While such claims have drawn significant attention to the discipline, near-term practical use cases remain limited.
History
For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory developed in the 1920s to explain perplexing physical phenomena observed at atomic scales, and digital computers emerged in the following decades to replace human computers for tedious calculations. Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography, and quantum physics was essential for nuclear physics used in the Manhattan Project.
As physicists applied quantum mechanical models to computational problems and swapped digital bits for qubits, the fields of quantum mechanics and computer science began to converge. In 1980, Paul Benioff introduced the quantum Turing machine, which uses quantum theory to describe a simplified computer.
When digital computers became faster, physicists faced an exponential increase in overhead when simulating quantum dynamics, prompting Yuri Manin and Richard Feynman to independently suggest that hardware based on quantum phenomena might be more efficient for computer simulation.
In a 1984 paper, Charles Bennett and Gilles Brassard applied quantum theory to cryptography protocols and demonstrated that quantum key distribution could enhance information security.
Quantum algorithms then emerged for solving oracle problems, such as Deutsch's algorithm in 1985, the Bernstein–Vazirani algorithm in 1993, and Simon's algorithm in 1994.
These algorithms did not solve practical problems, but demonstrated mathematically that one could gain more information by querying a black box with a quantum state in superposition, sometimes referred to as quantum parallelism.
Peter Shor built on these results with his 1994 algorithm for breaking the widely used RSA and Diffie–Hellman encryption protocols, which drew significant attention to the field of quantum computing. In 1996, Grover's algorithm established a quantum speedup for the widely applicable unstructured search problem. The same year, Seth Lloyd proved that quantum computers could simulate quantum systems without the exponential overhead present in classical simulations, validating Feynman's 1982 conjecture.
Over the years, experimentalists have constructed small-scale quantum computers using trapped ions and superconductors.
In 1998, a two-qubit quantum computer demonstrated the feasibility of the technology, and subsequent experiments have increased the number of qubits and reduced error rates.
In 2019, Google AI and NASA announced that they had achieved quantum supremacy with a 54-qubit machine, performing a computation that is impossible for any classical computer. However, the validity of this claim is still being actively researched.
Quantum information processing
Computer engineers typically describe a modern computer's operation in terms of classical electrodynamics.
Within these "classical" computers, some components (such as semiconductors and random number generators) may rely on quantum behavior, but these components are not isolated from their environment, so any quantum information quickly decoheres.
While programmers may depend on probability theory when designing a randomized algorithm, quantum mechanical notions like superposition and interference are largely irrelevant for program analysis.
Quantum programs, in contrast, rely on precise control of coherent quantum systems. Physicists describe these systems mathematically using linear algebra. Complex numbers model probability amplitudes, vectors model quantum states, and matrices model the operations that can be performed on these states. Programming a quantum computer is then a matter of composing operations in such a way that the resulting program computes a useful result in theory and is implementable in practice.
As physicist Charlie Bennett describes the relationship between quantum and classical computers,
Quantum information
Just as the bit is the basic concept of classical information theory, the qubit is the fundamental unit of quantum information. The same term qubit is used to refer to an abstract mathematical model and to any physical system that is represented by that model. A classical bit, by definition, exists in either of two physical states, which can be denoted 0 and 1. A qubit is also described by a state, and two states, often written |0⟩ and |1⟩, serve as the quantum counterparts of the classical states 0 and 1. However, the quantum states |0⟩ and |1⟩ belong to a vector space, meaning that they can be multiplied by constants and added together, and the result is again a valid quantum state. Such a combination is known as a superposition of |0⟩ and |1⟩.
A two-dimensional vector mathematically represents a qubit state. Physicists typically use Dirac notation for quantum mechanical linear algebra, writing |ψ⟩ for a vector labeled ψ. Because a qubit is a two-state system, any qubit state takes the form α|0⟩ + β|1⟩, where |0⟩ and |1⟩ are the standard basis states, and α and β are the probability amplitudes, which are in general complex numbers. If either α or β is zero, the qubit is effectively a classical bit; when both are nonzero, the qubit is in superposition. Such a quantum state vector acts similarly to a (classical) probability vector, with one key difference: unlike probabilities, probability amplitudes are not necessarily positive numbers. Negative amplitudes allow for destructive wave interference.
When a qubit is measured in the standard basis, the result is a classical bit. The Born rule describes the norm-squared correspondence between amplitudes and probabilities: when measuring a qubit α|0⟩ + β|1⟩, the state collapses to |0⟩ with probability |α|², or to |1⟩ with probability |β|².
Any valid qubit state has coefficients α and β such that |α|² + |β|² = 1.
As an example, measuring the qubit 1/√2 |0⟩ + 1/√2 |1⟩ would produce either |0⟩ or |1⟩ with equal probability.
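As an illustration of the amplitude-probability correspondence described above, the following minimal Python/NumPy sketch (the variable names and the sampling step are illustrative, not from the source) represents a qubit as a two-component complex vector and applies the Born rule:

    import numpy as np

    # Basis states |0> and |1> as two-component complex vectors.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # Equal superposition 1/sqrt(2)|0> + 1/sqrt(2)|1>.
    alpha = beta = 1 / np.sqrt(2)
    psi = alpha * ket0 + beta * ket1

    # Born rule: measurement probabilities are squared magnitudes of amplitudes.
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    assert np.isclose(p0 + p1, 1.0)          # |alpha|^2 + |beta|^2 = 1

    # Simulating many standard-basis measurements gives roughly 50/50 outcomes.
    rng = np.random.default_rng(0)
    samples = rng.choice([0, 1], size=10_000, p=[p0, p1])
    print(p0, p1, samples.mean())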
Each additional qubit doubles the dimension of the state space.
As an example, the vector 1/√2 |00⟩ + 1/√2 |01⟩ represents a two-qubit state, a tensor product of the qubit |0⟩ with the qubit 1/√2 |0⟩ + 1/√2 |1⟩.
This vector inhabits a four-dimensional vector space spanned by the basis vectors |00⟩, |01⟩, |10⟩, and |11⟩.
The Bell state 1/√2 |00⟩ + 1/√2 |11⟩ is impossible to decompose into the tensor product of two individual qubits; the two qubits are entangled because neither qubit has a state vector of its own.
In general, the vector space for an n-qubit system is 2^n-dimensional, and this makes it challenging for a classical computer to simulate a quantum one: representing a 100-qubit system requires storing 2^100 classical values.
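A short NumPy sketch can make the tensor-product construction and the exponential growth of the state space concrete; the specific states below are illustrative choices, not taken from the source:

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)

    # Tensor product of |0> with (|0> + |1>)/sqrt(2) gives (|00> + |01>)/sqrt(2).
    two_qubit = np.kron(ket0, plus)
    print(two_qubit)     # amplitudes in the basis |00>, |01>, |10>, |11>

    # The Bell state (|00> + |11>)/sqrt(2) has no such product decomposition.
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    # Each added qubit doubles the vector length: 2**n amplitudes for n qubits.
    for n in (2, 10, 30, 100):
        print(n, 2 ** n)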
Unitary operators
The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the matrix
X = [ 0 1 ]
    [ 1 0 ]
Mathematically, the application of such a logic gate to a quantum state vector is modelled with matrix multiplication. Thus X|0⟩ = |1⟩ and X|1⟩ = |0⟩.
The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit while leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are |00⟩, |01⟩, |10⟩, and |11⟩.
The controlled NOT (CNOT) gate can then be represented using the following matrix:
CNOT = [ 1 0 0 0 ]
       [ 0 1 0 0 ]
       [ 0 0 0 1 ]
       [ 0 0 1 0 ]
As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit.
In summary, quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.
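The gate action described above can be checked directly with matrix arithmetic. The sketch below (NumPy, illustrative only and not part of the source) applies the X and CNOT matrices to basis states:

    import numpy as np

    X = np.array([[0, 1],
                  [1, 0]], dtype=complex)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    print(X @ ket0)                  # X|0> = |1>
    print(X @ ket1)                  # X|1> = |0>

    # CNOT flips the second qubit only when the first qubit is |1>.
    ket10 = np.kron(ket1, ket0)      # |10>
    print(CNOT @ ket10)              # -> |11>

    # Applying X to the first qubit only, leaving the second untouched.
    X_on_first = np.kron(X, np.eye(2))
    print(X_on_first @ np.kron(ket0, ket0))   # |00> -> |10>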
Quantum parallelism
Quantum parallelism is the heuristic that quantum computers can be thought of as evaluating a function for multiple input values simultaneously. This can be achieved by preparing a quantum system in a superposition of input states and applying a unitary transformation that encodes the function to be evaluated. The resulting state encodes the function's output values for all input values in the superposition, allowing for the computation of multiple outputs simultaneously. This property is key to the speedup of many quantum algorithms. However, "parallelism" in this sense is insufficient to speed up a computation, because the measurement at the end of the computation gives only one value. To be useful, a quantum algorithm must also incorporate some other conceptual ingredient.
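One way to see this heuristic and its limitation is a toy state-vector simulation: a reversible oracle U_f with U_f|x⟩|y⟩ = |x⟩|y⊕f(x)⟩ is applied to a uniform superposition of inputs, yet a final measurement still returns only a single (x, f(x)) pair. The sketch below is illustrative; the function f and all names are assumptions, not from the source:

    import numpy as np

    def oracle_matrix(f, n):
        # Permutation matrix for U_f |x>|y> = |x>|y XOR f(x)>,
        # with n input qubits (high bits) and one output qubit (low bit).
        dim = 2 ** (n + 1)
        U = np.zeros((dim, dim))
        for x in range(2 ** n):
            for y in (0, 1):
                U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
        return U

    n = 2
    f = lambda x: x % 2                      # arbitrary example function

    # Uniform superposition over all 2**n inputs, output qubit in |0>.
    inputs = np.full(2 ** n, 1 / np.sqrt(2 ** n))
    state = np.kron(inputs, np.array([1.0, 0.0]))

    # One application of U_f evaluates f on every branch of the superposition...
    state = oracle_matrix(f, n) @ state

    # ...but measuring collapses to a single random (x, f(x)) pair.
    probs = np.abs(state) ** 2
    outcome = np.random.default_rng(1).choice(len(state), p=probs)
    x, fx = outcome >> 1, outcome & 1
    print(x, fx, fx == f(x))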
Quantum programming
There are a number of models of computation for quantum computing, distinguished by the basic elements in which the computation is decomposed.
Gate array
A quantum gate array decomposes computation into a sequence of few-qubit quantum gates.
Any quantum computation (which is, in the above formalism, any unitary matrix of size 2^n × 2^n over n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem.
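As a small illustration of composing gates from such a set, the sketch below (NumPy, illustrative only) builds a two-qubit circuit from a single-qubit Hadamard gate followed by CNOT, which maps |00⟩ to the entangled Bell state (|00⟩ + |11⟩)/√2:

    import numpy as np

    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard (single-qubit)
    I = np.eye(2, dtype=complex)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # Circuit: H on the first qubit, then CNOT with the first qubit as control.
    circuit = CNOT @ np.kron(H, I)

    ket00 = np.array([1, 0, 0, 0], dtype=complex)
    print(np.round(circuit @ ket00, 3))   # [0.707 0 0 0.707] = (|00> + |11>)/sqrt(2)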
Measurement-based quantum computing
A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation.
Adiabatic quantum computing
An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution.
Neuromorphic quantum computing
Neuromorphic quantum computing (abbreviated as ‘n.quantum computing’) is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional computing approaches and do not follow the von Neumann architecture. Both construct a system (a circuit) that represents the physical problem at hand and then leverage the respective physical properties of the system to seek the “minimum”. Neuromorphic quantum computing and quantum computing share similar physical properties during computation.
Topological quantum computing
A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.
Quantum Turing machine
A quantum Turing machine is the quantum analog of a Turing machine. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
Noisy intermediate-scale quantum computing
The threshold theorem shows how increasing the number of qubits can mitigate errors, yet fully fault-tolerant quantum computing remains "a rather distant dream". According to some researchers, noisy intermediate-scale quantum (NISQ) machines may have specialized uses in the near future, but noise in quantum gates limits their reliability.
Scientists at Harvard University successfully created "quantum circuits" that correct errors more efficiently than alternative methods, which may potentially remove a major obstacle to practical quantum computers. The Harvard research team was supported by MIT, QuEra Computing, Caltech, and Princeton University and funded by DARPA's Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program.
Quantum cryptography and cybersecurity
Quantum computing has significant potential applications in the fields of cryptography and cybersecurity. Quantum cryptography, which relies on the principles of quantum mechanics, offers the possibility of secure communication channels that are resistant to eavesdropping. Quantum key distribution (QKD) protocols, such as BB84, enable the secure exchange of cryptographic keys between parties, ensuring the confidentiality and integrity of communication. Moreover, quantum random number generators (QRNGs) can produce high-quality random numbers, which are essential for secure encryption.
However, quantum computing also poses challenges to traditional cryptographic systems. Shor's algorithm, a quantum algorithm for integer factorization, could potentially break widely used public-key cryptography schemes like RSA, which rely on the difficulty of factoring large numbers. Post-quantum cryptography, which involves the development of cryptographic algorithms that are resistant to attacks by both classical and quantum computers, is an active area of research aimed at addressing this concern.
Ongoing research in quantum cryptography and post-quantum cryptography is crucial for ensuring the security of communication and data in the face of evolving quantum computing capabilities. Advances in these fields, such as the development of new QKD protocols, the improvement of QRNGs, and the standardization of post-quantum cryptographic algorithms, will play a key role in maintaining the integrity and confidentiality of information in the quantum era.
Communication
Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys. When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping.
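The sifting logic behind protocols such as BB84 can be sketched with a purely classical toy simulation; everything below (names, sizes, and the simplified measurement model) is an illustrative assumption rather than a faithful model of quantum hardware:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000

    # Sender: random key bits and random encoding bases (0 or 1).
    alice_bits = rng.integers(0, 2, n)
    alice_bases = rng.integers(0, 2, n)

    # Receiver: measures each qubit in a randomly chosen basis.
    bob_bases = rng.integers(0, 2, n)

    # Toy measurement model: same basis -> correct bit, otherwise a random bit.
    random_bits = rng.integers(0, 2, n)
    bob_bits = np.where(bob_bases == alice_bases, alice_bits, random_bits)

    # Sifting: publicly compare bases and keep only the matching positions.
    keep = alice_bases == bob_bases
    key_alice, key_bob = alice_bits[keep], bob_bits[keep]

    # Roughly n/2 shared bits; an eavesdropper measuring in random bases would
    # introduce ~25% disagreements in the sifted key, which the parties can detect.
    print(len(key_alice), np.array_equal(key_alice, key_bob))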
Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware (such as quantum repeaters), hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing.
Algorithms
Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.
Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, which is a restricted model where lower bounds are much easier to prove and doesn't necessarily translate to speedups for practical problems.
Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely.
Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems.
Simulation of quantum systems
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider. In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer.
About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry (even though naturally occurring organisms also produce ammonia). Quantum simulations might be used to understand this process and increase the energy efficiency of production. It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process by the mid-2020s, although some have predicted it will take longer.
Post-quantum cryptography
A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could solve this problem exponentially faster using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).
Search problems
The most well-known example of a problem that allows for a polynomial quantum speedup is unstructured search, which involves finding a marked item out of a list of N items in a database. This can be solved by Grover's algorithm using O(√N) queries to the database, quadratically fewer than the Ω(N) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Many examples of provable quantum speedups for query problems are based on Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions, and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees.
Problems that can be efficiently addressed with Grover's algorithm have the following properties:
There is no searchable structure in the collection of possible answers,
The number of possible answers to check is the same as the number of inputs to the algorithm, and
There exists a Boolean function that evaluates each input and determines whether it is the correct answer.
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies.
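The square-root scaling can be seen in a small state-vector simulation of Grover's algorithm; the database size and the marked index below are arbitrary illustrative choices, not from the source:

    import numpy as np

    n = 4                      # qubits
    N = 2 ** n                 # size of the unstructured search space
    marked = 11                # index of the marked item (arbitrary)

    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition

    oracle = np.eye(N)
    oracle[marked, marked] = -1               # phase-flip the marked item

    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

    # About (pi/4) * sqrt(N) iterations maximise the success probability.
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)

    probs = np.abs(state) ** 2
    print(iterations, round(probs[marked], 3))   # 3 iterations, ~0.961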
Quantum annealing
Quantum annealing relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process. Adiabatic optimization may be helpful for solving computational biology problems.
Machine learning
Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks.
For example, the HHL Algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.
Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms.
Engineering
Classical computers currently outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain.
Challenges
There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer:
Physically scalable to increase the number of qubits
Qubits that can be initialized to arbitrary values
Quantum gates that are faster than decoherence time
Universal gate set
Qubits that can be read easily.
Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co.
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.
Decoherence
One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator) in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds.
As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions.
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence any operation must be completed much more quickly than the decoherence time.
As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10⁻³, assuming the noise is depolarizing.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of binary digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ bits without error correction. With error correction, the figure would rise to about 10⁷ bits. Computation time is about L², or about 10⁷ steps; at 1 MHz, this is about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates show that at least 3 million physical qubits would be needed to factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. To date, this remains the lowest estimate of the number of physical qubits needed for a practically useful integer factorization problem of 1,024 bits or larger.
Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, and relying on braid theory to form stable logic gates.
Quantum supremacy
Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark.
In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers and even beating it.
In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds.
Claims of quantum supremacy have generated hype around quantum computing, but they are based on contrived benchmark tasks that do not directly imply useful real-world applications.
In January 2024, a study published in Physical Review Letters provided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings using a new-generation Sunway supercomputer, demonstrating a significant leap in simulation capability built on a multiple-amplitude tensor network contraction algorithm. This development underscores the evolving landscape of quantum computing, highlighting both the progress and the complexities involved in validating quantum supremacy claims.
Skepticism
Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarized current quantum computers as being "For now, [good for] absolutely nothing". The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that in the long term such computers are likely to be useful. A 2023 Communications of the ACM article found that current quantum computing algorithms are "insufficient for practical quantum advantage without significant improvements across the software/hardware stack". It argues that the most promising candidates for achieving speedup with quantum computers are "small-data problems", for example in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, "will not achieve quantum advantage with current quantum algorithms in the foreseeable future", and it identified I/O constraints that make speedup unlikely for "big data problems, unstructured linear systems, and database search based on Grover's algorithm".
This state of affairs can be traced to several current and long-term considerations.
Conventional computer hardware and algorithms are not only optimized for practical tasks, but are still improving rapidly, particularly GPU accelerators.
Current quantum computing hardware generates only a limited amount of entanglement before getting overwhelmed by noise.
Quantum algorithms provide speedup over conventional algorithms only for some tasks, and matching these tasks with practical applications has proven challenging. Some promising tasks and applications require resources far beyond those available today. In particular, processing large amounts of non-quantum data is a challenge for quantum computers.
Some promising algorithms have been "dequantized", i.e., their non-quantum analogues with similar complexity have been found.
If quantum error correction is used to scale quantum computers to practical applications, its overhead may undermine speedup offered by many quantum algorithms.
Complexity analysis of algorithms sometimes makes abstract assumptions that do not hold in applications. For example, input data may not already be available encoded in quantum states, and "oracle functions" used in Grover's algorithm often have internal structure that can be exploited for faster algorithms.
In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as seen with quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of the best possible non-quantum algorithms (which may be unknown) and show that some quantum algorithms asymptotically improve upon those bounds.
Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons.
Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows:
"So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be... about 10300... Could we ever learn to control the more than 10300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never."
Physical realizations
A practical quantum computer must use a physical system as a programmable quantum register. Researchers are exploring several technologies as candidates for reliable qubit implementations. Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well.
The first quantum logic gates were implemented with trapped ions, and prototype general purpose machines with up to 20 qubits have been realized. However, the technology behind these devices combines complex vacuum equipment, lasers, and microwave and radio-frequency equipment, making full-scale processors difficult to integrate with standard computing equipment. Moreover, the trapped ion system itself has engineering challenges to overcome.
The largest commercial systems are based on superconductor devices and have scaled to 2000 qubits. However, the error rates for larger machines have been on the order of 5%. Technologically these devices are all cryogenic and scaling to large numbers of qubits requires wafer-scale integration, a serious engineering challenge by itself.
Research efforts to create more stable qubits for quantum computing include topological quantum computer approaches. For example, Microsoft is working on a computer based on the quantum properties of two-dimensional quasiparticles called anyons.
Potential applications
From the point of view of business management, the potential applications of quantum computing fall into four major categories: cybersecurity, data analytics and artificial intelligence, optimization and simulation, and data management and searching.
Investment in quantum computing research has increased in the public and private sectors.
As one consulting firm summarized,
Theory
Computability
Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.
Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis.
Complexity
While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers.
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP and it is widely suspected that BQP ⊋ BPP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊄ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP).
| Technology | Computer architecture concepts | null |
25229 | https://en.wikipedia.org/wiki/Quagga | Quagga | The quagga (Equus quagga quagga) is an extinct subspecies of the plains zebra that was endemic to South Africa until it was hunted to extinction in the late 19th century. It was long thought to be a distinct species, but MtDNA studies have supported it being a subspecies of plains zebra. A more recent study suggested that it was the southernmost cline or ecotype of the species.
The quagga is believed to have been around long and tall at the shoulders. It was distinguished from other zebras by its limited pattern of primarily brown and white stripes, mainly on the front part of the body. The rear was brown and without stripes, and appeared more horse-like. The distribution of stripes varied considerably between individuals. Little is known about the quagga's behaviour, but it may have gathered into herds of 30–50. Quaggas were said to be wild and lively, yet were also considered more docile than the related Burchell's zebra. They were once found in great numbers in the Karoo of Cape Province and the southern part of the Orange Free State in South Africa.
After the European settlement of South Africa began, the quagga was extensively hunted, as it competed with domesticated animals for forage. Some were taken to zoos in Europe, but breeding programmes were unsuccessful. The last wild population lived in the Orange Free State; the quagga was extinct in the wild by 1878. The last captive specimen died in Amsterdam on 12 August 1883. Only one quagga was ever photographed alive, and only 23 skins exist today. In 1984, the quagga was the first extinct animal whose DNA was analysed. The Quagga Project has attempted to breed Burchell's zebras with similar striping patterns to the quagga.
Taxonomy
It has been historically suggested that the name quagga is derived from the Khoikhoi word for zebra (cf. Tshwa 'zebra'), thereby being an onomatopoeic word, resembling the quagga's call, variously transcribed as "kwa-ha-ha", "kwahaah", or "oug-ga". The name is still used colloquially for the plains zebra.
The quagga was originally classified as a distinct species, Equus quagga, in 1778 by Dutch naturalist Pieter Boddaert. Traditionally, the quagga and the other plains and mountain zebras were placed in the subgenus Hippotigris. Much debate has occurred over the status of the quagga in relation to the plains zebra. The British zoologist Reginald Innes Pocock in 1902 was perhaps the first to suggest that the quagga was a subspecies of the plains zebra. As the quagga was scientifically described and named before the plains zebra, the trinomial name for the quagga becomes E. quagga quagga under this scheme, and the other subspecies of the plains zebra are placed under E. quagga, as well.
Historically, quagga taxonomy was further complicated because the extinct southernmost population of Burchell's zebra (Equus quagga burchellii, formerly Equus burchellii burchellii) was thought to be a distinct subspecies (also sometimes thought a full species, E. burchellii). The extant northern population, the "Damara zebra", was later named Equus quagga antiquorum, which means that it is today also referred to as E. q. burchellii, after it was realised they were the same taxon. The extinct population was long thought very close to the quagga, since it also showed limited striping on its hind parts. As an example of this, Shortridge placed the two in the now disused subgenus Quagga in 1934. Most experts now suggest that the two subspecies represent two ends of a cline.
Different subspecies of plains zebras were recognised as members of Equus quagga by early researchers, though much confusion existed over which species were valid. Quagga subspecies were described on the basis of differences in striping patterns, but these differences were since attributed to individual variation within the same populations. Some subspecies and even species, such as E. q. danielli and Hippotigris isabellinus, were based only on illustrations (iconotypes) of aberrant quagga specimens. One craniometric study from 1980 seemed to confirm its affiliation with the horse (Equus ferus caballus), but early morphological studies have been noted as being erroneous. Studying skeletons from stuffed specimens can be problematical, as early taxidermists sometimes used donkey and horse skulls inside their mounts when the originals were unavailable.
Evolution
The quagga is poorly represented in the fossil record, and the identification of these fossils is uncertain, as they were collected at a time when the name "quagga" referred to all zebras. Fossil skulls of Equus mauritanicus from Algeria have been claimed to show affinities with the quagga and the plains zebra, but they may be too badly damaged to allow definite conclusions to be drawn from them.
The quagga was the first extinct animal to have its DNA analysed, and this 1984 study launched the field of ancient DNA analysis. It confirmed that the quagga was more closely related to zebras than to horses, with the quagga and mountain zebra (Equus zebra) sharing an ancestor 3–4 million years ago. An immunological study published the following year found the quagga to be closest to the plains zebra. A 1987 study suggested that the mtDNA of the quagga diverged at a range of roughly 2 percent per million years, similar to other mammal species, and again confirmed the close relation to the plains zebra.
Later morphological studies came to different conclusions. A 1999 analysis of cranial measurements found that the quagga was as different from the plains zebra as the latter is from the mountain zebra. A 2004 study of skins and skulls instead suggested that the quagga was not a distinct species, but a subspecies of the plains zebra. In spite of these findings, many authors subsequently kept the plains zebra and the quagga as separate species.
A genetic study published in 2005 confirmed the subspecific status of the quagga. It showed that the quagga had little genetic diversity, and that it diverged from the other plains zebra subspecies only between 120,000 and 290,000 years ago, during the Pleistocene, and possibly the penultimate glacial maximum. Its distinct coat pattern perhaps evolved rapidly because of geographical isolation and/or adaptation to a drier environment. In addition, plains zebra subspecies tend to have less striping the further south they live, and the quagga was the most southern-living of them all. Other large African ungulates diverged into separate species and subspecies during this period, as well, probably because of the same climate shift.
The simplified cladogram below is based on the 2005 analysis (some taxa shared haplotypes and could, therefore, not be differentiated):
A 2018 genetic study of plains zebra populations confirmed the quagga as a member of that species. It found no evidence for subspecific differentiation based on morphological differences between southern populations of zebras, including the quagga. Modern plains zebra populations may have originated from southern Africa, and the quagga appears to be less divergent from neighbouring populations than the northernmost living population in northeastern Uganda. Instead, the study supported a north–south genetic continuum for plains zebras, with the Ugandan population being the most distinct. Zebras from Namibia appear to be the closest genetically to the quagga.
Description
The quagga is believed to have been long and tall at the shoulders. Based on measurements of skins, mares were significantly longer and slightly taller than stallions, whereas the stallions of extant zebras are the largest. Its coat pattern was unique among equids: zebra-like in the front but more like a horse in the rear. It had brown and white stripes on the head and neck, brown upper parts and a white belly, tail and legs. The stripes were boldest on the head and neck and became gradually fainter further down the body, blending with the reddish brown of the back and flanks, until disappearing along the back. It appears to have had a high degree of polymorphism, with some having almost no stripes and others having patterns similar to the extinct southern population of Burchell's zebra, where the stripes covered most of the body except for the hind parts, legs and belly. It also had a broad dark dorsal stripe on its back. It had a standing mane with brown and white stripes.
The only quagga to have been photographed alive was a mare at the Zoological Society of London's Zoo. Five photographs of this specimen are known, taken between 1863 and 1870. On the basis of photographs and written descriptions, many observers suggest that the stripes on the quagga were light on a dark background, unlike other zebras. The German naturalist Reinhold Rau, pioneer of the Quagga Project, claimed that this is an optical illusion: that the base colour is a creamy white and that the stripes are thick and dark.
Living in the very southern end of the plains zebra's range, the quagga had a thick winter coat that moulted each year. Its skull was described as having a straight profile and a concave diastema, and as being relatively broad with a narrow occiput. Like other plains zebras, the quagga did not have a dewlap on its neck as the mountain zebra does. The 2004 morphological study found that the skeletal features of the southern Burchell's zebra population and the quagga overlapped, and that they were impossible to distinguish. Some specimens also appeared to be intermediate between the two in striping, and the extant Burchell's zebra population still exhibits limited striping. It can therefore be concluded that the two subspecies graded morphologically into each other. Today, some stuffed specimens of quaggas and southern Burchell's zebra are so similar that they are impossible to definitely identify as either, since no location data was recorded.
Behaviour and ecology
The quagga was the southernmost distributed plains zebra, mainly living south of the Orange River. It was a grazer, and its habitat range was restricted to the grasslands and arid interior scrubland of the Karoo region of South Africa, today forming parts of the provinces of Northern Cape, Eastern Cape, Western Cape, and the Free State. These areas were known for distinctive flora and fauna and high amounts of endemism. Quaggas have been reported gathering into herds of 30–50, and sometimes travelled in a linear fashion. They may have been sympatric with Burchell's zebra between the Vaal and Orange rivers. This is disputed, and there is no evidence that they interbred. It could also have shared a small portion of its range with Hartmann's mountain zebra (Equus zebra hartmannae).
Little is known about the behaviour of quaggas in the wild, and it is sometimes unclear what exact species of zebra is referred to in old reports. The only source that unequivocally describes the quagga in the Free State is that of the British military engineer and hunter William Cornwallis Harris. His 1840 account reads as follows:
The geographical range of the quagga does not appear to extend to the northward of the river Vaal. The animal was formerly extremely common within the colony; but, vanishing before the strides of civilisation, is now to be found in very limited numbers and on the borders only. Beyond, on those sultry plains which are completely taken possession of by wild beasts, and may with strict propriety be termed the domains of savage nature, it occurs in interminable herds; and, although never intermixing with its more elegant congeners, it is almost invariably to be found ranging with the white-tailed gnu and with the ostrich, for the society of which bird especially it evinces the most singular predilection. Moving slowly across the profile of the ocean-like horizon, uttering a shrill, barking neigh, of which its name forms a correct imitation, long files of quaggas continually remind the early traveller of a rival caravan on its march. Bands of many hundreds are thus frequently seen doing their migration from the dreary and desolate plains of some portion of the interior, which has formed their secluded abode, seeking for those more luxuriant pastures where, during the summer months, various herbs thrust forth their leaves and flowers to form a green carpet, spangled with hues the most brilliant and diversified.
The practical function of striping in zebras has been debated, and it is unclear why the quagga lacked stripes on its hind parts. A cryptic function for protection from predators (stripes obscure the individual zebra in a herd) and biting flies (which are less attracted to striped objects), as well as various social functions, have been proposed for zebras in general. Differences in hind quarter stripes may have aided species recognition during stampedes of mixed herds, so that members of one subspecies or species would follow its own kind. It has also been suggested that zebras developed striping patterns as thermoregulation to cool themselves down, and that the quagga lost them due to living in a cooler climate, although one problem with this is that the mountain zebra lives in similar environments and has a bold striping pattern. A 2014 study strongly supported the biting-fly hypothesis, and the quagga appears to have lived in areas with lower levels of fly activity than other zebras.
A 2020 study suggested that the sexual dimorphism in size, with quagga mares being larger than stallions, could be due to the cold and droughts that affect the Karoo plateau, conditions that were even more severe in prehistoric times, such as during ice ages (other plains zebras live in warmer areas). Isolation, cold, and aridity could thereby have affected quagga evolution, including coat colour and size dimorphism. Since plains zebra mares are pregnant or lactate for much of their lives, larger size could have been a selective advantage for quagga mares, as they would therefore have more food reserves when food was scarce. Dimorphism and coat colour could also have evolved through genetic drift due to isolation, but these influences are not mutually exclusive, and could have worked together.
Relationship with humans
Quaggas have been identified in cave art attributed to the indigenous San people of Southern Africa. As it was easy to find and kill, the quagga was hunted by early Dutch settlers and later by Afrikaners to provide meat or for their skins. The skins were traded or exploited. The quagga was probably vulnerable to extinction due to its restricted range. Local farmers used them as guards for their livestock, as they were likely to attack intruders. Quaggas were said to be lively and highly strung, especially the stallions. Quaggas were brought to European zoos, and an attempt at captive breeding was made at London Zoo, but this was halted when a lone stallion killed itself by bashing itself against a wall after losing its temper. On the other hand, captive quaggas in European zoos were said to be tamer and more docile than Burchell's zebra. One specimen was reported to have lived in captivity for 21 years and 4 months, dying in 1872.
The quagga was long regarded a suitable candidate for domestication, as it counted as the most docile of the zebras. The Dutch colonists in South Africa had considered this possibility, because their imported work horses did not perform very well in the extreme climate and regularly fell prey to the feared African horse sickness. In 1843, the English naturalist Charles Hamilton Smith wrote that the quagga was 'unquestionably best calculated for domestication, both as regards strength and docility'. Some mentions have been given of tame or domesticated quaggas in South Africa. In Europe, two stallions were used to drive a phaeton by the sheriff of London in the early 19th century.
In an attempt at domesticating the quagga, the British lord George Douglas, 16th Earl of Morton obtained a single male which he bred with a female horse of partial Arabian ancestry. This produced a female hybrid with stripes on its back and legs. Lord Morton's mare was sold and was subsequently bred with a black stallion, resulting in offspring that again had zebra stripes. An account of this was published in 1820 by the Royal Society. It is unknown what happened to the hybrid mare itself. This led to new ideas on telegony, referred to as pangenesis by the British naturalist Charles Darwin. At the close of the 19th century, the Scottish zoologist James Cossar Ewart argued against these ideas and proved, with several cross-breeding experiments, that zebra stripes could appear as an atavistic trait at any time.
There are 23 known stuffed and mounted quagga specimens throughout the world, including a juvenile, two foals, and a foetus. In addition, a mounted head and neck, a foot, seven complete skeletons, and samples of various tissues remain. A 24th mounted specimen was destroyed in Königsberg, Germany, during World War II, and various skeletons and bones have also been lost.
Extinction
The quagga had disappeared from much of its range by the 1850s. The last population in the wild, in the Orange Free State, was extirpated in the late 1870s. The last known wild quagga died in 1878. The specimen in London died in 1872 and the one in Berlin in 1875. The last captive quagga, a female in Amsterdam's Natura Artis Magistra zoo, lived there from 9 May 1867 until it died on 12 August 1883, but its origin and cause of death are unclear. Its death was not recognised as signifying the extinction of its kind at the time, and the zoo requested another specimen; hunters believed it could still be found "closer to the interior" in the Cape Colony. Since locals used the term quagga to refer to all zebras, this may have led to the confusion. The extinction of the quagga was internationally accepted by the 1900 Convention for the Preservation of Wild Animals, Birds and Fish in Africa. The last specimen was featured on a Dutch stamp in 1988. The specimen itself was mounted and is kept in the collection of Naturalis Biodiversity Center in Leiden. It has been on display for special occasions.
In 1889, the naturalist Henry Bryden wrote: "That an animal so beautiful, so capable of domestication and use, and to be found not long since in so great abundance, should have been allowed to be swept from the face of the earth, is surely a disgrace to our latter-day civilization."
Breeding back project
After the very close relationship between the quagga and extant plains zebras was discovered, Rau started the Quagga Project in 1987 in South Africa to create a quagga-like zebra population by selectively breeding for a reduced stripe pattern from plains zebra stock, with the eventual aim of introducing them to the quagga's former range. To distinguish them from the true quagga, the zebras bred by the project are referred to as "Rau quaggas". The founding population consisted of 19 individuals from Namibia and South Africa, chosen because they had reduced striping on the rear body and legs. The first foal of the project was born in 1988. Once a sufficiently quagga-like population has been created, participants in the project plan to release them in the Western Cape.
Introduction of these quagga-like zebras could be part of a comprehensive restoration programme, including such ongoing efforts as eradication of non-native trees. Quaggas, wildebeest, and ostriches, which occurred together during historical times in a mutually beneficial association, could be kept together in areas where the indigenous vegetation has to be maintained by grazing. In early 2006, the third- and fourth-generation animals produced by the project were considered to look much like the depictions and preserved specimens of the quagga. This type of selective breeding is called breeding back. The practice is controversial, since the resulting zebras will resemble the quagga only in external appearance, but will be genetically different. The technology to use recovered DNA for cloning has not yet been developed.
| Biology and health sciences | Equidae | Animals |
25233 | https://en.wikipedia.org/wiki/Quartz | Quartz | Quartz is a hard, crystalline mineral composed of silica (silicon dioxide). The atoms are linked in a continuous framework of SiO4 silicon–oxygen tetrahedra, with each oxygen being shared between two tetrahedra, giving an overall chemical formula of SiO2. Quartz is, therefore, classified structurally as a framework silicate mineral and compositionally as an oxide mineral. Quartz is the second most abundant mineral in Earth's continental crust, behind feldspar.
Quartz exists in two forms, the normal α-quartz and the high-temperature β-quartz, both of which are chiral. The transformation from α-quartz to β-quartz takes place abruptly at 573 °C. Since the transformation is accompanied by a significant change in volume, it can easily induce microfracturing of ceramics or rocks passing through this temperature threshold.
There are many different varieties of quartz, several of which are classified as gemstones. Since antiquity, varieties of quartz have been the most commonly used minerals in the making of jewelry and hardstone carvings, especially in Europe and Asia.
Quartz is the mineral defining the value of 7 on the Mohs scale of hardness, a qualitative scratch method for determining the hardness of a material to abrasion.
Etymology
The word "quartz" is derived from the German word , which had the same form in the first half of the 14th century in Middle High German and in East Central German and which came from the Polish dialect term , which corresponds to the Czech term ("hard"). Some sources, however, attribute the word's origin to the Saxon word Querkluftertz, meaning cross-vein ore.
The Ancient Greeks referred to quartz as κρύσταλλος (krystallos), derived from the Ancient Greek κρύος (kryos) meaning "icy cold", because some philosophers (including Theophrastus) understood the mineral to be a form of supercooled ice. Today, the term rock crystal is sometimes used as an alternative name for transparent coarsely crystalline quartz.
Early studies
Roman naturalist Pliny the Elder believed quartz to be water ice, permanently frozen after great lengths of time. He supported this idea by saying that quartz is found near glaciers in the Alps, but not on volcanic mountains, and that large quartz crystals were fashioned into spheres to cool the hands. This idea persisted until at least the 17th century. He also knew of the ability of quartz to split light into a spectrum.
In the 17th century, Nicolas Steno's study of quartz paved the way for modern crystallography. He found that regardless of a quartz crystal's size or shape, its long prism faces always met at a perfect 60° angle, thereby establishing the law of constancy of interfacial angles.
Crystal habit and structure
Quartz belongs to the trigonal crystal system at room temperature, and to the hexagonal crystal system above 573 °C. The ideal crystal shape is a six-sided prism terminating with six-sided pyramid-like rhombohedrons at each end. In nature, quartz crystals are often twinned (with twin right-handed and left-handed quartz crystals), distorted, or so intergrown with adjacent crystals of quartz or other minerals as to only show part of this shape, or to lack obvious crystal faces altogether and appear massive.
Well-formed crystals typically form as a druse (a layer of crystals lining a void), of which quartz geodes are particularly fine examples. The crystals are attached at one end to the enclosing rock, and only one termination pyramid is present. However, doubly terminated crystals do occur where they develop freely without attachment, for instance, within gypsum.
α-quartz crystallizes in the trigonal crystal system, space group P3121 or P3221 (space group 152 or 154 resp.) depending on the chirality. Above 573 °C, α-quartz in P3121 becomes the more symmetric hexagonal P6422 (space group 181), and α-quartz in P3221 goes to space group P6222 (no. 180).
These space groups are truly chiral (they each belong to the 11 enantiomorphous pairs). Both α-quartz and β-quartz are examples of chiral crystal structures composed of achiral building blocks (SiO4 tetrahedra in the present case). The transformation between α- and β-quartz only involves a comparatively minor rotation of the tetrahedra with respect to one another, without a change in the way they are linked. However, there is a significant change in volume during this transition, and this can result in significant microfracturing in ceramics during firing, in ornamental stone after a fire and in rocks of the Earth's crust exposed to high temperatures, thereby damaging materials containing quartz and degrading their physical and mechanical properties.
Varieties (according to microstructure)
Although many of the varietal names historically arose from the color of the mineral, current scientific naming schemes refer primarily to the microstructure of the mineral. Color is a secondary identifier for the cryptocrystalline minerals, although it is a primary identifier for the macrocrystalline varieties.
The most important microstructure difference between types of quartz is that between macrocrystalline quartz (individual crystals visible to the unaided eye) and the microcrystalline or cryptocrystalline varieties (aggregates of crystals visible only under high magnification). The cryptocrystalline varieties are either translucent or mostly opaque, while the macrocrystalline varieties tend to be more transparent. Chalcedony is a cryptocrystalline form of silica consisting of fine intergrowths of quartz and its monoclinic polymorph, moganite. Agate is a variety of chalcedony that is fibrous and distinctly banded, with either concentric or horizontal bands. While most agates are translucent, onyx is a variety of agate that is more opaque, featuring monochromatic bands that are typically black and white. Carnelian or sard is a red-orange, translucent variety of chalcedony. Jasper is an opaque chert or impure chalcedony.
Varieties (according to color)
Pure quartz, traditionally called rock crystal or clear quartz, is colorless and transparent or translucent and has often been used for hardstone carvings, such as the Lothair Crystal. Common colored varieties include citrine, rose quartz, amethyst, smoky quartz, milky quartz, and others. These color differences arise from the presence of impurities, which change the molecular orbitals and cause some electronic transitions to take place in the visible spectrum, producing the observed colors.
Amethyst
Amethyst is a form of quartz that ranges from a bright vivid violet to a dark or dull lavender shade. The world's largest deposits of amethysts can be found in Brazil, Mexico, Uruguay, Russia, France, Namibia, and Morocco. Amethyst derives its color from traces of iron in its structure.
Ametrine
Ametrine, as its name suggests, is commonly believed to be a combination of citrine and amethyst in the same crystal; however, this may not be technically correct. Like amethyst, the yellow quartz component of ametrine is colored by iron oxide inclusions. Some sources define citrine solely as quartz whose color originates from aluminum-based color centers, while others do not make this distinction; in the former case, the yellow quartz in ametrine is not considered true citrine. Regardless, most ametrine on the market is in fact partially heat- or radiation-treated amethyst.
Blue quartz
Blue quartz contains inclusions of fibrous magnesio-riebeckite or crocidolite.
Dumortierite quartz
Inclusions of the mineral dumortierite within quartz pieces often result in silky-appearing splotches with a blue hue. Shades of purple or gray sometimes also are present. "Dumortierite quartz" (sometimes called "blue quartz") will sometimes feature contrasting light and dark color zones across the material. "Blue quartz" is a minor gemstone.
Citrine
Citrine is a variety of quartz whose color ranges from yellow to yellow-orange or yellow-green. The cause of its color is not well agreed upon. Evidence suggests the color of citrine is linked to the presence of aluminum-based color centers in its crystal structure, similar to those of smoky quartz. Both smoky quartz and citrine are dichroic in polarized light and will fade when heated sufficiently or exposed to UV light. They may occur together in the same crystal as "smoky citrine". Smoky quartz can also be converted to citrine by careful heat treatment. Alternatively, it has been suggested that the color of citrine may be due to trace amounts of iron, but synthetic crystals grown in iron-rich solutions have failed to replicate the color or dichroism of natural citrine. The UV-sensitivity of natural citrine further indicates that its color is not caused solely by trace elements.
Natural citrine is rare; most commercial citrine is heat-treated amethyst or smoky quartz. Amethyst loses its natural violet color when heated to above 200–300 °C and turns a color that resembles natural citrine, but is often more brownish. Unlike natural citrine, the color of heat-treated amethyst comes from trace amounts of the iron oxide minerals hematite and goethite. Clear quartz with natural iron inclusions or limonite staining may also resemble citrine, but it is not true citrine. Like the amethyst from which it is produced, heat-treated material often exhibits color zoning, or uneven color distribution throughout the crystal. In geodes and clusters, the color is usually deepest near the tips. This does not occur in natural citrine.
It is nearly impossible to differentiate between cut citrine and yellow topaz visually, but they differ in hardness. Brazil is the leading producer of citrine, with much of its production coming from the state of Rio Grande do Sul. The name is derived from the Latin word citrina which means "yellow" and is also the origin of the word "citron". Citrine has been referred to as the "merchant's stone" or "money stone", due to a superstition that it would bring prosperity.
Citrine was first appreciated as a golden-yellow gemstone in Greece between 300 and 150 BC, during the Hellenistic Age. Yellow quartz was used prior to that to decorate jewelry and tools but it was not highly sought after.
Milky quartz
Milk quartz or milky quartz is the most common variety of crystalline quartz. The white color is caused by minute fluid inclusions of gas, liquid, or both, trapped during crystal formation, making it of little value for optical and quality gemstone applications.
Rose quartz
Rose quartz is a type of quartz that exhibits a pale pink to rose red hue. The color is usually attributed to trace amounts of titanium, iron, or manganese in the material. Some rose quartz contains microscopic rutile needles that produce asterism in transmitted light. Recent X-ray diffraction studies suggest that the color is due to thin microscopic fibers of possibly dumortierite within the quartz.
Additionally, there is a rare type of pink quartz (also frequently called crystalline rose quartz) whose color is thought to be caused by trace amounts of phosphate or aluminium. The color in these crystals is apparently photosensitive and subject to fading. The first crystals were found in a pegmatite near Rumford, Maine, US, and in Minas Gerais, Brazil. These crystals are more transparent and euhedral than ordinary rose quartz, owing to the phosphate and aluminium impurities that color crystalline rose quartz, as opposed to the iron and microscopic dumortierite fibers responsible for ordinary rose quartz.
Smoky quartz
Smoky quartz is a gray, translucent version of quartz. It ranges in clarity from almost complete transparency to a brownish-gray crystal that is almost opaque. Some can also be black. The translucency results from natural irradiation acting on minute traces of aluminum in the crystal structure.
Prase
Prase is a leek-green variety of quartz that gets its color from inclusions of the amphibole actinolite. However, the term has also variously been used for a type of quartzite, a microcrystalline variety of quartz or jasper, or any leek-green quartz.
Prasiolite
Prasiolite, also known as vermarine, is a variety of quartz that is green in color; the green is caused by iron ions. It is rare in nature and is typically found with amethyst; most "prasiolite" is not natural, having been produced artificially by heating amethyst. Almost all natural prasiolite has come from a small Brazilian mine, but it is also seen in Lower Silesia in Poland. Naturally occurring prasiolite is also found in the Thunder Bay area of Canada.
Piezoelectricity
Quartz crystals have piezoelectric properties; they develop an electric potential upon the application of mechanical stress. Quartz's piezoelectric properties were discovered by Jacques and Pierre Curie in 1880.
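As an illustration of the direct piezoelectric effect described above, the charge developed on a quartz element is, to first order, proportional to the applied force. The short sketch below assumes a longitudinal-mode coefficient of roughly 2.3 pC/N, an approximate literature value for quartz, and is intended only as an order-of-magnitude illustration rather than a full tensor treatment.

# Illustrative sketch of the direct piezoelectric effect in quartz.
# The coefficient is an approximate literature value for the longitudinal
# (d11) mode and is used here only for an order-of-magnitude estimate.
D11 = 2.3e-12  # piezoelectric coefficient of quartz, coulombs per newton (approximate)

def surface_charge(force_newtons, d=D11):
    """Charge (coulombs) developed when a force is applied along the
    relevant crystal axis, using the simple relation Q = d * F."""
    return d * force_newtons

force = 10.0  # newtons
charge = surface_charge(force)
print(f"{force} N -> {charge:.2e} C (~{charge / 1.602e-19:.1e} elementary charges)")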
Occurrence
Quartz is a defining constituent of granite and other felsic igneous rocks. It is very common in sedimentary rocks such as sandstone and shale. It is a common constituent of schist, gneiss, quartzite and other metamorphic rocks. Quartz has the lowest potential for weathering in the Goldich dissolution series, and consequently it is very common as a residual mineral in stream sediments and residual soils. Generally, a high proportion of quartz suggests a "mature" rock, since it indicates that the rock has been heavily reworked and quartz was the primary mineral to endure heavy weathering.
While the majority of quartz crystallizes from molten magma, quartz also chemically precipitates from hot hydrothermal veins as gangue, sometimes with ore minerals like gold, silver and copper. Large crystals of quartz are found in magmatic pegmatites. Well-formed crystals may reach several meters in length and weigh hundreds of kilograms.
The largest documented single crystal of quartz was found near Itapore, Goiaz, Brazil; it measured approximately and weighed over .
Mining
Quartz is extracted from open pit mines. Miners occasionally use explosives to expose deep pockets of quartz. More frequently, bulldozers and backhoes are used to remove soil and clay and expose quartz veins, which are then worked using hand tools. Care must be taken to avoid sudden temperature changes that may damage the crystals.
Related silica minerals
Tridymite and cristobalite are high-temperature polymorphs of SiO2 that occur in high-silica volcanic rocks. Coesite is a denser polymorph of SiO2 found in some meteorite impact sites and in metamorphic rocks formed at pressures greater than those typical of the Earth's crust. Stishovite is a yet denser and higher-pressure polymorph of SiO2 found in some meteorite impact sites. Moganite is a monoclinic polymorph. Lechatelierite is an amorphous silica glass SiO2 which is formed by lightning strikes in quartz sand.
Safety
As quartz is a form of silica, it is a possible cause for concern in various workplaces. Cutting, grinding, chipping, sanding, drilling, and polishing natural and manufactured stone products can release hazardous levels of very small, crystalline silica dust particles into the air that workers breathe. Crystalline silica of respirable size is a recognized human carcinogen and may lead to other diseases of the lungs such as silicosis and pulmonary fibrosis.
Synthetic and artificial treatments
Not all varieties of quartz are naturally occurring. Some clear quartz crystals can be treated using heat or gamma-irradiation to induce color where it would not otherwise have occurred naturally. Susceptibility to such treatments depends on the location from which the quartz was mined.
Prasiolite, an olive-colored material, is produced by heat treatment; natural prasiolite has also been observed in Lower Silesia in Poland. Although citrine occurs naturally, the majority is the result of heat-treating amethyst or smoky quartz. Carnelian has been heat-treated to deepen its color since prehistoric times.
Because natural quartz is often twinned, synthetic quartz is produced for use in industry. Large, flawless, single crystals are synthesized in an autoclave via the hydrothermal process.
Like other crystals, quartz may be coated with metal vapors to give it an attractive sheen.
Uses
Quartz is the most common material identified as the mystical substance maban in Australian Aboriginal mythology. It is found regularly in passage tomb cemeteries in Europe in a burial context, such as Newgrange or Carrowmore in Ireland. Quartz was also used in Prehistoric Ireland, as well as many other countries, for stone tools; both vein quartz and rock crystal were knapped as part of the lithic technology of the prehistoric peoples.
While jade has been the most prized semi-precious stone for carving in East Asia and Pre-Columbian America since the earliest times, in Europe and the Middle East the different varieties of quartz were the most commonly used for the various types of jewelry and hardstone carving, including engraved gems and cameo gems, rock crystal vases, and extravagant vessels. The tradition continued to produce highly valued objects until the mid-19th century, when it largely fell from fashion except in jewelry. The cameo technique exploits the bands of color in onyx and other varieties.
Efforts to synthesize quartz began in the mid-nineteenth century as scientists attempted to create minerals under laboratory conditions that mimicked the conditions in which the minerals formed in nature: German geologist Karl Emil von Schafhäutl (1803–1890) was the first person to synthesize quartz when in 1845 he created microscopic quartz crystals in a pressure cooker. However, the quality and size of the crystals that were produced by these early efforts were poor.
Elemental impurity incorporation strongly influences the ability to process and utilize quartz. Naturally occurring quartz crystals of extremely high purity, necessary for the crucibles and other equipment used for growing silicon wafers in the semiconductor industry, are expensive and rare. Such high-purity quartz is defined as containing less than 50 ppm of impurity elements. A major mining location for high-purity quartz is the Spruce Pine Gem Mine in Spruce Pine, North Carolina, United States. Quartz may also be found in Caldoveiro Peak, in Asturias, Spain.
By the 1930s, the electronics industry had become dependent on quartz crystals. The only source of suitable crystals was Brazil; however, World War II disrupted the supplies from Brazil, so nations attempted to synthesize quartz on a commercial scale. German mineralogist Richard Nacken (1884–1971) achieved some success during the 1930s and 1940s. After the war, many laboratories attempted to grow large quartz crystals. In the United States, the U.S. Army Signal Corps contracted with Bell Laboratories and with the Brush Development Company of Cleveland, Ohio to synthesize crystals following Nacken's lead. (Prior to World War II, Brush Development produced piezoelectric crystals for record players.) By 1948, Brush Development had grown crystals that were 1.5 inches (3.8 cm) in diameter, the largest at that time. By the 1950s, hydrothermal synthesis techniques were producing synthetic quartz crystals on an industrial scale, and today virtually all the quartz crystal used in the modern electronics industry is synthetic.
An early use of the piezoelectricity of quartz crystals was in phonograph pickups. One of the most common piezoelectric uses of quartz today is as a crystal oscillator. The quartz oscillator or resonator was first developed by Walter Guyton Cady in 1921. George Washington Pierce designed and patented quartz crystal oscillators in 1923. The quartz clock is a familiar device using the mineral. Warren Marrison created the first quartz oscillator clock based on the work of Cady and Pierce in 1927. The resonant frequency of a quartz crystal oscillator is changed by mechanically loading it, and this principle is used for very accurate measurements of very small mass changes in the quartz crystal microbalance and in thin-film thickness monitors.
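The mass-sensing principle mentioned above is often summarised by the Sauerbrey relation, which links a small added mass to the resulting shift in the crystal's resonant frequency. The sketch below is a simplified illustration that assumes a thin, rigid film and uses approximate literature values for the density and shear modulus of AT-cut quartz; real microbalance work involves further corrections.

import math

# Hedged sketch of the Sauerbrey relation used in quartz crystal microbalances:
# delta_f = -2 * f0**2 * delta_m / (A * sqrt(rho_q * mu_q)), valid for thin,
# rigid films. Material constants are approximate values for AT-cut quartz.
RHO_Q = 2648.0   # density of quartz, kg/m^3 (approximate)
MU_Q = 2.947e10  # shear modulus of AT-cut quartz, Pa (approximate)

def sauerbrey_shift(f0_hz, delta_mass_kg, area_m2):
    """Frequency shift (Hz) caused by a small rigid mass added to the crystal face."""
    return -2.0 * f0_hz**2 * delta_mass_kg / (area_m2 * math.sqrt(RHO_Q * MU_Q))

# Example: a 5 MHz crystal with 1 cm^2 active area loaded with one microgram
print(sauerbrey_shift(5e6, 1e-9, 1e-4))  # roughly -57 Hz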
Almost all the industrial demand for quartz crystal (used primarily in electronics) is met with synthetic quartz produced by the hydrothermal process. However, synthetic crystals are less prized for use as gemstones. The popularity of crystal healing has increased the demand for natural quartz crystals, which are now often mined in developing countries using primitive mining methods, sometimes involving child labor.
| Physical sciences | Mineralogy | null |
25236 | https://en.wikipedia.org/wiki/Quadrupedalism | Quadrupedalism | Quadrupedalism is a form of locomotion where animals have four legs that are used to bear weight and move around. An animal or machine that usually maintains a four-legged posture and moves using all four legs is said to be a quadruped (from Latin quattuor for "four", and pes, pedis for "foot"). Quadruped animals are found among both vertebrates and invertebrates.
Quadrupeds vs. tetrapods
Although the words "quadruped" and "tetrapod" are both derived from terms meaning "four-footed", they have distinct meanings. A tetrapod is any member of the taxonomic unit Tetrapoda (which is defined by descent from a specific four-limbed ancestor), whereas a quadruped actually uses four limbs for locomotion. Not all tetrapods are quadrupeds, and not all entities that could be described as "quadrupedal" are tetrapods; the latter description even includes certain artificial objects. Almost all quadruped organisms are tetrapods; exceptions include some raptorial arthropods adapted for four-footed locomotion, such as the Mantodea, and the brush-footed butterflies (Nymphalidae), the largest butterfly family with roughly 6,000 species, including the well-known monarch.
The distinction between quadrupeds and tetrapods is important in evolutionary biology, particularly in the context of tetrapods whose limbs have adapted to other roles (e.g., hands in the case of humans, wings in the case of birds and bats, and fins in the case of whales). All of these animals are tetrapods, but not all are quadrupeds. Even snakes, whose limbs have become vestigial or been lost entirely, are nevertheless tetrapods.
In infants and for exercise
Quadrupedalism is sometimes referred to as being "on all fours", and is observed in crawling, especially by infants.
In the 20th century quadrupedal movement was popularized as a form of physical exercise by Georges Hebert. Kenichi Ito is a Japanese man famous for speed running on four limbs in competitions.
Other human quadrupedalism
In July 2005, in rural Turkey, scientists discovered five Turkish siblings who had learned to walk naturally on their hands and feet. Unlike chimpanzees, which ambulate on their knuckles, the Ulas family walked on their palms, allowing them to preserve the dexterity of their fingers.
Quadrupedal robots
BigDog is a dynamically stable quadruped robot created in 2005 by Boston Dynamics with Foster-Miller, the NASA Jet Propulsion Laboratory, and the Harvard University Concord Field Station. Its successor was Spot.
Also by NASA JPL, in collaboration with University of California, Santa Barbara Robotics Lab, is RoboSimian, with emphasis on stability and deliberation. It has been demonstrated at the DARPA Robotics Challenge.
Pronograde posture
A related concept to quadrupedalism is pronogrady, or having a horizontal posture of the trunk. Although nearly all quadrupedal animals are pronograde, the posture is also found in some bipedal animals, including many living birds and extinct dinosaurs.
Nonhuman apes with orthograde (vertical) backs may walk quadrupedally in what is called knuckle-walking.
| Biology and health sciences | Ethology | Biology |
25239 | https://en.wikipedia.org/wiki/Quasar | Quasar | A quasar is an extremely luminous active galactic nucleus (AGN). It is sometimes known as a quasi-stellar object, abbreviated QSO. The emission from an AGN is powered by accretion onto a supermassive black hole with a mass ranging from millions to tens of billions of solar masses, surrounded by a gaseous accretion disc. Gas in the disc falling towards the black hole heats up and releases energy in the form of electromagnetic radiation. The radiant energy of quasars is enormous; the most powerful quasars have luminosities thousands of times greater than that of a galaxy such as the Milky Way. Quasars are usually categorized as a subclass of the more general category of AGN. The redshifts of quasars are of cosmological origin.
The term originated as a contraction of "quasi-stellar [star-like] radio source"—because they were first identified during the 1950s as sources of radio-wave emission of unknown physical origin—and when identified in photographic images at visible wavelengths, they resembled faint, star-like points of light. High-resolution images of quasars, particularly from the Hubble Space Telescope, have shown that quasars occur in the centers of galaxies, and that some host galaxies are strongly interacting or merging galaxies. As with other categories of AGN, the observed properties of a quasar depend on many factors, including the mass of the black hole, the rate of gas accretion, the orientation of the accretion disc relative to the observer, the presence or absence of a jet, and the degree of obscuration by gas and dust within the host galaxy.
About a million quasars have been identified with reliable spectroscopic redshifts, and a further two to three million have been identified in photometric catalogs. The nearest known quasar is about 600 million light-years from Earth, while the record for the most distant known AGN is at a redshift of 10.1, corresponding to a comoving distance of 31.6 billion light-years, or a look-back time of 13.2 billion years.
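The quoted comoving distance and look-back time follow from the measured redshift once a cosmological model is adopted. As a rough illustration, the sketch below uses astropy's built-in Planck 2018 cosmology; the exact figures depend on the parameters assumed.

# Sketch: converting a redshift into comoving distance and look-back time with
# astropy's built-in Planck 2018 cosmology. The exact numbers depend on the
# cosmological parameters adopted, so treat the output as illustrative.
from astropy.cosmology import Planck18
import astropy.units as u

z = 10.1  # redshift of the most distant AGN mentioned above

d_gly = Planck18.comoving_distance(z).to(u.lyr).value / 1e9  # billions of light-years
t_gyr = Planck18.lookback_time(z).to(u.Gyr).value            # billions of years

print(f"comoving distance ~ {d_gly:.1f} billion light-years")  # close to the 31.6 quoted
print(f"look-back time    ~ {t_gyr:.1f} billion years")        # close to the 13.2 quoted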
Quasar discovery surveys have shown that quasar activity was more common in the distant past; the peak epoch was approximately 10 billion years ago. Concentrations of multiple quasars are known as large quasar groups and may constitute some of the largest known structures in the universe if the observed groups are good tracers of mass distribution.
Naming
The term quasar was first used in an article by astrophysicist Hong-Yee Chiu in May 1964, in Physics Today, to describe certain astronomically puzzling objects.
History of observation and interpretation
Background
Between 1917 and 1922, it became clear from work by Heber Doust Curtis, Ernst Öpik and others that some objects ("nebulae") seen by astronomers were in fact distant galaxies like the Milky Way. But when radio astronomy began in the 1950s, astronomers detected, among the galaxies, a small number of anomalous objects with properties that defied explanation.
The objects emitted large amounts of radiation at many frequencies, but no source could be located optically; in some cases only a faint, point-like object resembling a distant star could be seen. The spectral lines of these objects, which identify the chemical elements of which the object is composed, were also extremely strange and defied explanation. Some of them changed their luminosity very rapidly in the optical range and even more rapidly in the X-ray range, suggesting an upper limit on their size, perhaps no larger than the Solar System. This implies an extremely high power density. Considerable discussion took place over what these objects might be. They were described as "quasi-stellar [meaning: star-like] radio sources", or "quasi-stellar objects" (QSOs), a name which reflected their unknown nature, and this became shortened to "quasar".
Early observations (1960s and earlier)
The first quasars (3C 48 and 3C 273) were discovered in the late 1950s, as radio sources in all-sky radio surveys. They were first noted as radio sources with no corresponding visible object. Using small telescopes and the Lovell Telescope as an interferometer, they were shown to have a very small angular size. By 1960, hundreds of these objects had been recorded and published in the Third Cambridge Catalogue while astronomers scanned the skies for their optical counterparts. In 1963, a definite identification of the radio source 3C 48 with an optical object was published by Allan Sandage and Thomas A. Matthews. Astronomers had detected what appeared to be a faint blue star at the location of the radio source and obtained its spectrum, which contained many unknown broad emission lines. The anomalous spectrum defied interpretation.
British-Australian astronomer John Bolton made many early observations of quasars, including a breakthrough in 1962. Another radio source, 3C 273, was predicted to undergo five occultations by the Moon. Measurements taken by Cyril Hazard and John Bolton during one of the occultations using the Parkes Radio Telescope allowed Maarten Schmidt to find a visible counterpart to the radio source and obtain an optical spectrum using the Hale Telescope on Mount Palomar. This spectrum revealed the same strange emission lines. Schmidt was able to demonstrate that these were likely to be the ordinary spectral lines of hydrogen redshifted by 15.8%, at the time a remarkably high redshift (only a handful of much fainter galaxies were known with higher redshifts). If this was due to the physical motion of the "star", then 3C 273 was receding at an enormous velocity, far beyond the speed of any known star and defying any obvious explanation. Nor would an extreme velocity help to explain 3C 273's huge radio emissions. If the redshift was cosmological (now known to be correct), the large distance implied that 3C 273 was far more luminous than any galaxy, but much more compact. Also, 3C 273 was bright enough to detect on archival photographs dating back to the 1900s; it was found to be variable on yearly timescales, implying that a substantial fraction of the light was emitted from a region less than 1 light-year in size, tiny compared to a galaxy.
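For a sense of the scale of Schmidt's result, the 15.8% redshift, if read naively as a Doppler shift, translates into a recession speed of several tens of thousands of kilometres per second. The sketch below compares the simple cz approximation with the special-relativistic formula; it illustrates the arithmetic only, not the cosmological interpretation that ultimately prevailed.

# Sketch: recession speed implied by 3C 273's redshift of 15.8%, if read as a
# Doppler shift. Compares the naive cz value with the special-relativistic formula.
C_KM_S = 299_792.458  # speed of light, km/s

z = 0.158
v_naive = z * C_KM_S                                   # v = c*z, valid only for z << 1
v_rel = C_KM_S * ((1 + z)**2 - 1) / ((1 + z)**2 + 1)   # relativistic Doppler

print(f"cz approximation : {v_naive:,.0f} km/s")  # about 47,000 km/s
print(f"relativistic     : {v_rel:,.0f} km/s")    # about 44,000 km/s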
Although it raised many questions, Schmidt's discovery quickly revolutionized quasar observation. The strange spectrum of 3C 48 was quickly identified by Schmidt, Greenstein and Oke as hydrogen and magnesium redshifted by 37%. Shortly afterwards, two more quasar spectra in 1964 and five more in 1965 were also confirmed as ordinary light that had been redshifted to an extreme degree. While the observations and redshifts themselves were not doubted, their correct interpretation was heavily debated, and Bolton's suggestion that the radiation detected from quasars consisted of ordinary spectral lines from distant, highly redshifted sources moving at extreme velocities was not widely accepted at the time.
Development of physical understanding (1960s)
An extreme redshift could imply great distance and velocity but could also be due to extreme mass or perhaps some other unknown laws of nature. Extreme velocity and distance would also imply immense power output, which lacked explanation. The small sizes were confirmed by interferometry and by observing the speed with which the quasar as a whole varied in output, and by their inability to be seen in even the most powerful visible-light telescopes as anything more than faint starlike points of light. But if they were small and far away in space, their power output would have to be immense and difficult to explain. Equally, if they were very small and much closer to this galaxy, it would be easy to explain their apparent power output, but less easy to explain their redshifts and lack of detectable movement against the background of the universe.
Schmidt noted that redshift is also associated with the expansion of the universe, as codified in Hubble's law. If the measured redshift was due to expansion, then this would support an interpretation of very distant objects with extraordinarily high luminosity and power output, far beyond any object seen to date. This extreme luminosity would also explain the large radio signal. Schmidt concluded that 3C 273 could either be an individual star around 10 km wide within (or near to) this galaxy, or a distant active galactic nucleus. He stated that a distant and extremely powerful object seemed more likely to be correct.
Schmidt's explanation for the high redshift was not widely accepted at the time. A major concern was the enormous amount of energy these objects would have to be radiating, if they were distant. In the 1960s no commonly accepted mechanism could account for this. The currently accepted explanation, that it is due to matter in an accretion disc falling into a supermassive black hole, was only suggested in 1964 by Edwin E. Salpeter and Yakov Zeldovich, and even then it was rejected by many astronomers, as at this time the existence of black holes at all was widely seen as theoretical.
Various explanations were proposed during the 1960s and 1970s, each with their own problems. It was suggested that quasars were nearby objects, and that their redshift was not due to the expansion of space but rather to light escaping a deep gravitational well. This would require a massive object, which would also explain the high luminosities. However, a star of sufficient mass to produce the measured redshift would be unstable and in excess of the Hayashi limit. Quasars also show forbidden spectral emission lines, previously only seen in hot gaseous nebulae of low density, which would be too diffuse to both generate the observed power and fit within a deep gravitational well. There were also serious concerns regarding the idea of cosmologically distant quasars. One strong argument against them was that they implied energies that were far in excess of known energy conversion processes, including nuclear fusion. There were suggestions that quasars were made of some hitherto unknown stable form of antimatter in similarly unknown types of region of space, and that this might account for their brightness. Others speculated that quasars were a white hole end of a wormhole, or a chain reaction of numerous supernovae.
Eventually, starting from about the 1970s, many lines of evidence (including the first X-ray space observatories, knowledge of black holes and modern models of cosmology) gradually demonstrated that the quasar redshifts are genuine and due to the expansion of space, that quasars are in fact as powerful and as distant as Schmidt and some other astronomers had suggested, and that their energy source is matter from an accretion disc falling onto a supermassive black hole. This included crucial evidence from optical and X-ray viewing of quasar host galaxies, finding of "intervening" absorption lines, which explained various spectral anomalies, observations from gravitational lensing, Gunn's 1971 finding that galaxies containing quasars showed the same redshift as the quasars, and Kristian's 1973 finding that the "fuzzy" surrounding of many quasars was consistent with a less luminous host galaxy.
This model also fits well with other observations suggesting that many or even most galaxies have a massive central black hole. It would also explain why quasars are more common in the early universe: as a quasar draws matter from its accretion disc, there comes a point when there is less matter nearby, and energy production falls off or ceases, as the quasar becomes a more ordinary type of galaxy.
The accretion-disc energy-production mechanism was finally modeled in the 1970s, and black holes were also directly detected (including evidence showing that supermassive black holes could be found at the centers of this and many other galaxies), which resolved the concern that quasars were too luminous to be a result of very distant objects or that a suitable mechanism could not be confirmed to exist in nature. By 1987 it was "well accepted" that this was the correct explanation for quasars, and the cosmological distance and energy output of quasars was accepted by almost all researchers.
Modern observations (1970s and onward)
Later it was found that not all quasars have strong radio emission; in fact only about 10% are "radio-loud". Hence the name "QSO" (quasi-stellar object) is used (in addition to "quasar") to refer to these objects, further categorized into the "radio-loud" and the "radio-quiet" classes. The discovery of the quasar had large implications for the field of astronomy in the 1960s, including drawing physics and astronomy closer together.
In 1979, the gravitational lens effect predicted by Albert Einstein's general theory of relativity was confirmed observationally for the first time with images of the double quasar 0957+561.
A study published in February 2021 showed that there are more quasars in one direction (towards Hydra) than in the opposite direction, seemingly indicating that the Earth is moving in that direction. But the direction of this dipole is about 28° away from the direction of the Earth's motion relative to the cosmic microwave background radiation.
In March 2021, a collaboration of scientists related to the Event Horizon Telescope presented, for the first time, an image of a black hole in polarized light, specifically the black hole at the center of Messier 87, an elliptical galaxy approximately 55 million light-years away in the constellation Virgo, revealing the forces giving rise to quasars.
Current understanding
It is now known that quasars are distant but extremely luminous objects, so any light that reaches the Earth is redshifted due to the expansion of the universe.
Quasars inhabit the centers of active galaxies and are among the most luminous, powerful, and energetic objects known in the universe, emitting up to a thousand times the energy output of the Milky Way, which contains 200–400 billion stars. This radiation is emitted almost uniformly across the electromagnetic spectrum, from X-rays to the far infrared, with a peak in the ultraviolet and optical bands; some quasars are also strong sources of radio emission and of gamma-rays. With high-resolution imaging from ground-based telescopes and the Hubble Space Telescope, the "host galaxies" surrounding the quasars have been detected in some cases. These galaxies are normally too dim to be seen against the glare of the quasar, except with special techniques. Most quasars, with the exception of 3C 273, whose average apparent magnitude is 12.9, cannot be seen with small telescopes.
Quasars are believed, and in many cases confirmed, to be powered by accretion of material into supermassive black holes in the nuclei of distant galaxies, as suggested in 1964 by Edwin Salpeter and Yakov Zeldovich. Light and other radiation cannot escape from within the event horizon of a black hole. The energy produced by a quasar is generated outside the black hole, by gravitational stresses and immense friction within the material nearest to the black hole, as it orbits and falls inward. The huge luminosity of quasars results from the accretion discs of central supermassive black holes, which can convert between 5.7% and 32% of the mass of an object into energy, compared to just 0.7% for the p–p chain nuclear fusion process that dominates the energy production in Sun-like stars. Central masses of 10^5 to 10^9 solar masses have been measured in quasars by using reverberation mapping. Several dozen nearby large galaxies, including the Milky Way galaxy, that do not have an active center and do not show any activity similar to a quasar, are confirmed to contain a similar supermassive black hole in their nuclei (galactic center). Thus it is now thought that all large galaxies have a black hole of this kind, but only a small fraction have sufficient matter in the right kind of orbit at their center to become active and power radiation in such a way as to be seen as quasars.
This also explains why quasars were more common in the early universe, as this energy production ends when the supermassive black hole consumes all of the gas and dust near it. This means that it is possible that most galaxies, including the Milky Way, have gone through an active stage, appearing as a quasar or some other class of active galaxy that depended on the black-hole mass and the accretion rate, and are now quiescent because they lack a supply of matter to feed into their central black holes to generate radiation.
The matter accreting onto the black hole is unlikely to fall directly in, but will have some angular momentum around the black hole, which will cause the matter to collect into an accretion disc. Quasars may also be ignited or re-ignited when normal galaxies merge and the black hole is infused with a fresh source of matter. In fact, it has been suggested that a quasar could form when the Andromeda Galaxy collides with the Milky Way galaxy in approximately 3–5 billion years.
In the 1980s, unified models were developed in which quasars were classified as a particular kind of active galaxy, and a consensus emerged that in many cases it is simply the viewing angle that distinguishes them from other active galaxies, such as blazars and radio galaxies.
The highest-redshift quasar known is UHZ1, with a redshift of approximately 10.1, which corresponds to a comoving distance of approximately 31.7 billion light-years from Earth (these distances are much larger than the distance light could travel in the universe's 13.8-billion-year history because the universe is expanding).
It is now understood that many quasars are triggered by the collisions of galaxies, which drives the mass of the galaxies into the supermassive black holes at their centers.
Properties
More than a million quasars have been found (as of July 2023), most from the Sloan Digital Sky Survey. All observed quasar spectra have redshifts between 0.056 and 10.1 (as of 2024), which means they range between 600 million and 30 billion light-years away from Earth. Because of the great distances to the farthest quasars and the finite velocity of light, they and their surrounding space appear as they existed in the very early universe.
The power of quasars originates from supermassive black holes that are believed to exist at the core of most galaxies. The Doppler shifts of stars near the cores of galaxies indicate that they are revolving around tremendous masses with very steep gravity gradients, suggesting black holes.
Although quasars appear faint when viewed from Earth, they are visible from extreme distances, being the most luminous objects in the known universe. The brightest quasar in the sky is 3C 273 in the constellation of Virgo. It has an average apparent magnitude of 12.8 (bright enough to be seen through a medium-size amateur telescope), but it has an absolute magnitude of −26.7. From a distance of about 33 light-years, this object would shine in the sky about as brightly as the Sun. This quasar's luminosity is, therefore, about 4 trillion (4 × 10^12) times that of the Sun, or about 100 times the total light output of giant galaxies like the Milky Way. This assumes that the quasar is radiating energy in all directions, but the active galactic nucleus is believed to be radiating preferentially in the direction of its jet. In a universe containing hundreds of billions of galaxies, most of which hosted active nuclei billions of years ago that are only seen today, it is statistically certain that thousands of energy jets should be pointed toward the Earth, some more directly than others. In many cases it is likely that the brighter the quasar, the more directly its jet is aimed at the Earth. Such quasars are called blazars.
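The factor of roughly four trillion quoted above can be recovered directly from the difference in absolute magnitude between 3C 273 and the Sun, using the standard magnitude–luminosity relation. The sketch below assumes an absolute visual magnitude of about 4.83 for the Sun, an approximate standard value.

# Sketch: luminosity ratio implied by absolute magnitudes,
# L1 / L2 = 10 ** (0.4 * (M2 - M1)). The Sun's absolute visual magnitude
# (about 4.83) is an approximate standard value.
M_SUN = 4.83
M_3C273 = -26.7  # absolute magnitude quoted in the text

ratio = 10 ** (0.4 * (M_SUN - M_3C273))
print(f"3C 273 is roughly {ratio:.1e} times as luminous as the Sun")  # ~4e12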
The hyperluminous quasar APM 08279+5255 was, when discovered in 1998, given an absolute magnitude of −32.2. High-resolution imaging with the Hubble Space Telescope and the 10 m Keck Telescope revealed that this system is gravitationally lensed. A study of the gravitational lensing of this system suggests that the light emitted has been magnified by a factor of ~10. It is still substantially more luminous than nearby quasars such as 3C 273.
Quasars were much more common in the early universe than they are today. This discovery by Maarten Schmidt in 1967 was early strong evidence against steady-state cosmology and in favor of the Big Bang cosmology. Quasars show the locations where supermassive black holes are growing rapidly (by accretion). Detailed simulations reported in 2021 showed that galaxy structures, such as spiral arms, use gravitational forces to 'put the brakes on' gas that would otherwise orbit galaxy centers forever; instead the braking mechanism enabled the gas to fall into the supermassive black holes, releasing enormous radiant energies. These black holes co-evolve with the mass of stars in their host galaxy in a way not fully understood at present. One idea is that jets, radiation and winds created by the quasars shut down the formation of new stars in the host galaxy, a process called "feedback". The jets that produce strong radio emission in some quasars at the centers of clusters of galaxies are known to have enough power to prevent the hot gas in those clusters from cooling and falling on to the central galaxy.
Quasars' luminosities are variable, with time scales that range from months to hours. This means that quasars generate and emit their energy from a very small region, since each part of the quasar would have to be in contact with other parts on such a time scale as to allow the coordination of the luminosity variations. This would mean that a quasar varying on a time scale of a few weeks cannot be larger than a few light-weeks across. The emission of large amounts of power from a small region requires a power source far more efficient than the nuclear fusion that powers stars. The conversion of gravitational potential energy to radiation by matter falling onto a black hole converts between 6% and 32% of the mass to energy, compared to 0.7% for the conversion of mass to energy in a star like the Sun. It is the only process known that can produce such high power over a very long term. (Stellar explosions such as supernovae and gamma-ray bursts, and direct matter–antimatter annihilation, can also produce very high power output, but supernovae only last for days, and the universe does not appear to have had large amounts of antimatter at the relevant times.)
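The light-travel-time argument in this paragraph can be made concrete with a one-line calculation: a source that varies coherently on a timescale dt cannot be much larger than c times dt. The sketch below uses an illustrative timescale of a few weeks.

# Sketch of the light-travel-time size argument: a source varying coherently on
# a timescale dt cannot be much larger than c * dt.
C = 299_792_458.0    # speed of light, m/s
AU = 1.495978707e11  # metres per astronomical unit

weeks = 3                # illustrative variability timescale
dt = weeks * 7 * 86_400  # seconds
max_size = C * dt        # metres

print(f"{weeks} weeks of variability -> at most ~{max_size:.1e} m (~{max_size / AU:,.0f} au)")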
Since quasars exhibit all the properties common to other active galaxies such as Seyfert galaxies, the emission from quasars can be readily compared to that of smaller active galaxies powered by smaller supermassive black holes. To create a luminosity of 10^40 watts (the typical brightness of a quasar), a supermassive black hole would have to consume the material equivalent of 10 solar masses per year. The brightest known quasars devour 1000 solar masses of material every year. The largest known is estimated to consume matter equivalent to 10 Earths per second. Quasar luminosities can vary considerably over time, depending on their surroundings. Since it is difficult to fuel quasars for many billions of years, after a quasar finishes accreting the surrounding gas and dust, it becomes an ordinary galaxy.
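The accretion rates quoted above follow from the relation L = eta * mdot * c^2, where eta is the radiative efficiency. The sketch below assumes the typical quasar luminosity of 10^40 W mentioned in the text and an illustrative efficiency of 10%, within the 6–32% range given earlier; the result is of the same order as the figure in the text.

# Sketch: accretion rate needed to sustain a given luminosity, from L = eta * mdot * c^2.
# The 10% efficiency is an assumed, illustrative value within the range quoted earlier.
C = 299_792_458.0    # speed of light, m/s
M_SUN_KG = 1.989e30  # solar mass, kg
YEAR_S = 3.156e7     # seconds per year

L = 1e40   # watts, the typical quasar luminosity given in the text
eta = 0.1  # assumed radiative efficiency

mdot_kg_s = L / (eta * C**2)
msun_per_year = mdot_kg_s * YEAR_S / M_SUN_KG
print(f"~{msun_per_year:.0f} solar masses per year")  # order 10; higher efficiencies give smaller values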
Radiation from quasars is partially "nonthermal" (i.e., not due to black-body radiation), and approximately 10% are observed to also have jets and lobes like those of radio galaxies, which also carry significant (but poorly understood) amounts of energy in the form of particles moving at relativistic speeds. Extremely high energies might be explained by several mechanisms (see Fermi acceleration and Centrifugal mechanism of acceleration). Quasars can be detected over the entire observable electromagnetic spectrum, including radio, infrared, visible light, ultraviolet, X-ray and even gamma rays. Most quasars are brightest at their rest-frame ultraviolet wavelength of 121.6 nm, the Lyman-alpha emission line of hydrogen, but due to the tremendous redshifts of these sources, that peak luminosity has been observed as far to the red as 900.0 nm, in the near infrared. A minority of quasars show strong radio emission, which is generated by jets of matter moving close to the speed of light. When the jet is viewed nearly along its axis, these appear as blazars and often have regions that seem to move away from the center faster than the speed of light (superluminal expansion). This is an optical illusion due to the properties of special relativity.
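The shift of the Lyman-alpha line from 121.6 nm into the near infrared follows directly from the relation lambda_observed = (1 + z) * lambda_rest. The short sketch below shows the redshift at which the line would appear at 900 nm.

# Sketch of cosmological wavelength stretching: lambda_observed = (1 + z) * lambda_rest.
LYMAN_ALPHA_NM = 121.6  # rest-frame wavelength given in the text

z_needed = 900.0 / LYMAN_ALPHA_NM - 1.0  # redshift placing Lyman-alpha at 900 nm
print(f"Lyman-alpha reaches 900 nm at z ~ {z_needed:.1f}")  # roughly 6.4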
Quasar redshifts are measured from the strong spectral lines that dominate their visible and ultraviolet emission spectra. These lines are brighter than the continuous spectrum. They exhibit Doppler broadening corresponding to mean speeds of several percent of the speed of light. Fast motions strongly indicate a large mass. Emission lines of hydrogen (mainly of the Lyman series and Balmer series), helium, carbon, magnesium, iron and oxygen are the brightest lines. The atoms emitting these lines range from neutral to highly ionized. This wide range of ionization shows that the gas is intensely irradiated by the quasar itself, not merely hot, and not by stars, which cannot produce such a wide range of ionization.
Like all (unobscured) active galaxies, quasars can be strong X-ray sources. Radio-loud quasars can also produce X-rays and gamma rays by inverse Compton scattering of lower-energy photons by the radio-emitting electrons in the jet.
Iron quasars show strong emission lines resulting from low-ionization iron (Fe II), such as IRAS 18508-7815.
Spectral lines, reionization, and the early universe
Quasars also provide some clues as to the end of the Big Bang's reionization. The oldest known quasars (z = 6) display a Gunn–Peterson trough and have absorption regions in front of them indicating that the intergalactic medium at that time was neutral gas. More recent quasars show no absorption region, but rather their spectra contain a spiky area known as the Lyman-alpha forest; this indicates that the intergalactic medium has undergone reionization into plasma, and that neutral gas exists only in small clouds.
The intense production of ionizing ultraviolet radiation is also significant, as it would provide a mechanism for reionization to occur as galaxies form. Despite this, current theories suggest that quasars were not the primary source of reionization; the primary causes of reionization were probably the earliest generations of stars, known as Population III stars (possibly 70%), and dwarf galaxies (very early small high-energy galaxies) (possibly 30%).
Quasars show evidence of elements heavier than helium, indicating that galaxies underwent a massive phase of star formation, creating population III stars between the time of the Big Bang and the first observed quasars. Light from these stars may have been observed in 2005 using NASA's Spitzer Space Telescope, although this observation remains to be confirmed.
Quasar subtypes
The taxonomy of quasars includes various subtypes representing subsets of the quasar population having distinct properties.
Radio-loud quasars are quasars with powerful jets that are strong sources of radio-wavelength emission. These make up about 10% of the overall quasar population.
Radio-quiet quasars are those quasars lacking powerful jets, with relatively weaker radio emission than the radio-loud population. The majority of quasars (about 90%) are radio-quiet.
Broad absorption-line (BAL) quasars are quasars whose spectra exhibit broad absorption lines that are blue-shifted relative to the quasar's rest frame, resulting from gas flowing outward from the active nucleus in the direction toward the observer. Broad absorption lines are found in about 10% of quasars, and BAL quasars are usually radio-quiet. In the rest-frame ultraviolet spectra of BAL quasars, broad absorption lines can be detected from ionized carbon, magnesium, silicon, nitrogen, and other elements.
Type 2 (or Type II) quasars are quasars in which the accretion disc and broad emission lines are highly obscured by dense gas and dust. They are higher-luminosity counterparts of Type 2 Seyfert galaxies.
Red quasars are quasars with optical colors that are redder than normal quasars, thought to be the result of moderate levels of dust extinction within the quasar host galaxy. Infrared surveys have demonstrated that red quasars make up a substantial fraction of the total quasar population.
Optically violent variable (OVV) quasars are radio-loud quasars in which the jet is directed toward the observer. Relativistic beaming of the jet emission results in strong and rapid variability of the quasar brightness. OVV quasars are also considered to be a type of blazar.
Weak emission line quasars are quasars having unusually faint emission lines in the ultraviolet/visible spectrum.
Role in celestial reference systems
Because quasars are extremely distant, bright, and small in apparent size, they are useful reference points in establishing a measurement grid on the sky.
The International Celestial Reference System (ICRS) is based on hundreds of extra-galactic radio sources, mostly quasars, distributed around the entire sky. Because they are so distant, they are apparently stationary to current technology, yet their positions can be measured with the utmost accuracy by very-long-baseline interferometry (VLBI). The positions of most are known to 0.001 arcsecond or better, which is orders of magnitude more precise than the best optical measurements.
Multiple quasars
A grouping of two or more quasars on the sky can result from a chance alignment, where the quasars are not physically associated, from actual physical proximity, or from the effects of gravity bending the light of a single quasar into two or more images by gravitational lensing.
When two quasars appear to be very close to each other as seen from Earth (separated by a few arcseconds or less), they are commonly referred to as a "double quasar". When the two are also close together in space (i.e. observed to have similar redshifts), they are termed a "quasar pair", or as a "binary quasar" if they are close enough that their host galaxies are likely to be physically interacting.
As quasars are overall rare objects in the universe, the probability of three or more separate quasars being found near the same physical location is very low, and determining whether the system is closely separated physically requires significant observational effort. The first true triple quasar was found in 2007 by observations at the W. M. Keck Observatory in Mauna Kea, Hawaii. LBQS 1429-008 (or QQQ J1432-0106) was first observed in 1989 and at the time was found to be a double quasar. When astronomers discovered the third member, they confirmed that the sources were separate and not the result of gravitational lensing. This triple quasar has a redshift of z = 2.076. The components are separated by an estimated 30–50 kiloparsecs (roughly 97,000–160,000 light-years), which is typical for interacting galaxies. In 2013, the second true triplet of quasars, QQQ J1519+0627, was found with a redshift z = 1.51, the whole system fitting within a physical separation of 25 kpc (about 80,000 light-years).
The first true quadruple quasar system was discovered in 2015 at a redshift z = 2.0412 and has an overall physical scale of about 200 kpc (roughly 650,000 light-years).
A multiple-image quasar is a quasar whose light undergoes gravitational lensing, resulting in double, triple or quadruple images of the same quasar. The first such gravitational lens to be discovered was the double-imaged quasar Q0957+561 (or Twin Quasar) in 1979.
An example of a triply lensed quasar is PG1115+08.
Several quadruple-image quasars are known, including the Einstein Cross and the Cloverleaf Quasar, with the first such discoveries happening in the mid-1980s.
Gallery
| Physical sciences | Active galactic nucleus | null |
25247 | https://en.wikipedia.org/wiki/Quill | Quill | A quill is a writing tool made from a moulted flight feather (preferably a primary wing-feather) of a large bird. Quills were used for writing with ink before the invention of the dip pen/metal-nibbed pen, the fountain pen, and, eventually, the ballpoint pen.
As with the earlier reed pen (and later dip pen), a quill has no internal ink reservoir and therefore needs to periodically be dipped into an inkwell during writing. The hand-cut goose quill is rarely used as a calligraphy tool anymore because many papers are now derived from wood pulp and would quickly wear a quill down. However it is still the tool of choice for a few scribes who have noted that quills provide an unmatched sharp stroke as well as greater flexibility than a steel pen.
Description
The shaft of a flight feather is long and hollow, making it an obvious candidate for being crafted into a pen. The process of making a quill from a feather involves curing the shaft to harden it, then fashioning its tip into a nib using a pen knife or other small cutting tool.
A quill pen is in effect a hollow tube which has one closed end, and has one open end at which part of the tube wall extends into a sharp point and has in it a thin slit leading to this point.
The hollow shaft of the feather (the calamus) acts as an ink reservoir and ink flows to the tip through the slit by capillary action.
In a carefully prepared quill, the slit does not widen through wetting with ink and drying. It will retain its shape adequately, requiring only infrequent sharpening; it can be used repeatedly until there is little left of it.
Sources
The strongest quills come from the primary flight feathers discarded by birds during their annual moult. Although some have claimed that feathers from the left wing are better suited to right-handed writers because the feather curves away from the sight line, over the back of the hand, the quill barrel is cut to six or seven inches in length so no such consideration of curvature or 'sight-line' is necessary. Additionally, writing with the left hand in the era in which the quill was popular was discouraged, and quills were never sold as left- and right-handed, only by their size and species.
Goose feathers are most commonly used; scarcer, more expensive swan feathers are used for larger lettering. Depending on the availability and strength of the feather, as well as the quality and character of line wanted by the writer, other feathers used for quill-pen making include those from the crow, eagle, owl, turkey, and hawk. Crow feathers were particularly useful as quills when fine work, such as accounting books, was required. Each bird could supply only about 10 to 12 good-quality quills.
On a true quill, the barbs are stripped off completely on the trailing edge. (The pinion for example only has significant barbs on one side of the barrel.) Later, a fashion developed for stripping partially and leaving a decorative top of a few barbs. The fancy, fully-plumed quill is mostly a Hollywood invention and has little basis in reality. Most, if not all, manuscript illustrations of scribes show a quill devoid of decorative barbs, or at least mostly stripped.
Uses
Quill pens were used to write the vast majority of medieval manuscripts. Quill pens were also used to write Magna Carta and the Declaration of Independence. U.S. President Thomas Jefferson bred geese specially at Monticello to supply his tremendous need for quills.
Quill pens are still used today mainly by professional scribes and calligraphers.
Quills are also used as the plectrum material in string instruments, particularly the harpsichord.
From the 17th to 19th centuries, the central tube of the quill was used as a priming tube (filled with gunpowder) for cannon fire.
History
The quill pen evolved from the reed pen, of Egyptian origin. Quills were the primary writing instrument in the Western world from the 6th to the 19th century. The best quills were usually made from goose, swan, and later turkey feathers. Quills went into decline after the invention of the metal pen, mass production beginning in Great Britain as early as 1822 by John Mitchell of Birmingham. In the Eastern Mediterranean and much of the Islamic world, quills were not used as writing implements; reed pens remained the instrument of choice.
Quill pens were the instrument of choice during the medieval era due to their compatibility with parchment and vellum. Before this, the reed pen had been used, but a finer letter was achieved on animal skin using a cured quill. Other than written text, they were often used to create figures, decorations, and images on manuscripts, although many illuminators and painters preferred fine brushes for their work. The variety of different strokes in formal hands was accomplished by good penmanship as the tip was square cut and rigid, exactly as it is today with modern steel pens.
It was much later, in the 1600s, with the increased popularity of writing, especially in the copperplate script promoted by the many printed manuals available from the 'Writing Masters', that quills became more pointed and flexible.
Quills are denominated from the order in which they are fixed in the wing; the first is favoured by the expert calligrapher, the second and third quills also being satisfactory, together with the pinion feather. The 5th and 6th feathers are also used. No other feather on the wing would be considered suitable by a professional scribe.
An accurate account of the Victorian techniques of curing and cutting quills, written by William Bishop and drawn from research with one of the last London quill dressers, is recorded in the Calligrapher's Handbook cited on this page.
As a symbol
From the 19th century, in radical and socialist symbolism, quills have been used to symbolize clerks and the intelligentsia. Notable examples include the Radical Civic Union, the Czech National Social Party (where the quill appears in combination with the hammer, symbol of the labour movement), and the Democratic Party of Socialists of Montenegro.
Quills appear on the seals of the United States Census Bureau and the Administrative Office of the United States Courts. They also appear in the coats of arms of several US Army Adjutant general units which focus on administrative duties.
Quills are on the coats of arms of a number of municipalities such as Bargfeld-Stegen in Germany and La Canonja in Spain.
Three books and a quill pen are the symbols of Saint Hilary of Poitiers.
Quill and pen knives
A quill knife was the original primary tool used for cutting and sharpening quills, a process known as "dressing".
Following the decline of the quill in the 1820s, after the introduction of the maintenance-free, mass-produced steel dip nib by John Mitchell, such knives were still manufactured but became known as desk knives, stationery knives or, as the name eventually stuck, "pen" knives.
There is a small but significant difference between a pen knife and a quill knife, in that the quill knife has a blade that is flat on one side and convex on the other which facilitates the round cuts required to shape a quill.
A "pen" knife by contrast has two flat sides. This distinction is not recognised by modern traders, dealers or collectors, who define a quill knife as any small knife with a fixed or hinged blade, including such items as ornamental fruit knives.
Today
While quills are rarely used as writing instruments in the modern day, they are still being produced as specialty items, mostly for hobbyists. Such quills tend to have metal nibs or are sometimes even outfitted with a ballpoint pen inside to remove the need for a separate source of ink.
According to the Supreme Court Historical Society, 20 goose-quill pens, neatly crossed, are placed at the four counsel tables each day the U.S. Supreme Court is in session; "most lawyers appear before the Court only once, and gladly take the quills home as souvenirs." This has been done since the earliest sessions of the Court.
In the Jewish tradition quill pens, called kulmus, are used by scribes to write Torah Scrolls, Mezuzot, and Tefillin.
Music
Plectra for psalteries and lutes can be cut similarly to writing pens. The rachis (the portion of the stem between the barbs), rather than the calamus, of the primary flight feathers of birds of the Corvidae was preferred for harpsichords. In modern instruments, plastic is more common, but the jacks are often still called "quills". The lesiba uses a quill attached to a string to produce sound.
| Technology | Writing tools | null |
25264 | https://en.wikipedia.org/wiki/Quantum%20chromodynamics | Quantum chromodynamics | In theoretical physics, quantum chromodynamics (QCD) is the study of the strong interaction between quarks mediated by gluons. Quarks are fundamental particles that make up composite hadrons such as the proton, neutron and pion. QCD is a type of quantum field theory called a non-abelian gauge theory, with symmetry group SU(3). The QCD analog of electric charge is a property called color. Gluons are the force carriers of the theory, just as photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics. A large body of experimental evidence for QCD has been gathered over the years.
QCD exhibits three salient properties:
Color confinement. Because the force between two color charges remains constant as they are separated, the energy grows until a quark–antiquark pair is spontaneously produced, turning the initial hadron into a pair of hadrons instead of isolating a color charge. Although analytically unproven, color confinement is well established from lattice QCD calculations and decades of experiments.
Asymptotic freedom, a steady reduction in the strength of interactions between quarks and gluons as the energy scale of those interactions increases (and the corresponding length scale decreases). The asymptotic freedom of QCD was discovered in 1973 by David Gross and Frank Wilczek, and independently by David Politzer in the same year. For this work, all three shared the 2004 Nobel Prize in Physics.
Chiral symmetry breaking, the spontaneous symmetry breaking of an important global symmetry of quarks, detailed below, with the result of generating masses for hadrons far above the masses of the quarks, and making pseudoscalar mesons exceptionally light. Yoichiro Nambu was awarded the 2008 Nobel Prize in Physics for elucidating the phenomenon in 1960, a dozen years before the advent of QCD. Lattice simulations have confirmed all his generic predictions.
Terminology
Physicist Murray Gell-Mann coined the word quark in its present sense. It originally comes from the phrase "Three quarks for Muster Mark" in Finnegans Wake by James Joyce. On June 27, 1978, Gell-Mann wrote a private letter to the editor of the Oxford English Dictionary, in which he related that he had been influenced by Joyce's words: "The allusion to three quarks seemed perfect." (Originally, only three quarks had been discovered.)
The three kinds of charge in QCD (as opposed to one in quantum electrodynamics or QED) are usually referred to as "color charge" by loose analogy to the three kinds of color (red, green and blue) perceived by humans. Other than this nomenclature, the quantum parameter "color" is completely unrelated to the everyday, familiar phenomenon of color.
The force between quarks is known as the colour force (or color force) or strong interaction, and is responsible for the nuclear force.
Since the theory of electric charge is dubbed "electrodynamics", the Greek word χρῶμα (chrōma, "color") is applied to the theory of color charge, "chromodynamics".
History
With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large and ever-growing number of particles called hadrons. It seemed that such a large number of particles could not all be fundamental. First, the particles were classified by charge and isospin by Eugene Wigner and Werner Heisenberg; then, in 1953–56, according to strangeness by Murray Gell-Mann and Kazuhiko Nishijima (see Gell-Mann–Nishijima formula). To gain greater insight, the hadrons were sorted into groups having similar properties and masses using the eightfold way, invented in 1961 by Gell-Mann and Yuval Ne'eman. Gell-Mann and George Zweig, correcting an earlier approach of Shoichi Sakata, went on to propose in 1963 that the structure of the groups could be explained by the existence of three flavors of smaller particles inside the hadrons: the quarks. Gell-Mann also briefly discussed a field theory model in which quarks interact with gluons.
Perhaps the first remark that quarks should possess an additional quantum number was made as a short footnote in the preprint of Boris Struminsky in connection with the Ω− hyperon being composed of three strange quarks with parallel spins (this situation was peculiar, because since quarks are fermions, such a combination is forbidden by the Pauli exclusion principle):
Boris Struminsky was a PhD student of Nikolay Bogolyubov. The problem considered in this preprint was suggested by Nikolay Bogolyubov, who advised Boris Struminsky in this research. In the beginning of 1965, Nikolay Bogolyubov, Boris Struminsky and Albert Tavkhelidze wrote a preprint with a more detailed discussion of the additional quark quantum degree of freedom. This work was also presented by Albert Tavkhelidze at an international conference in Trieste (Italy) in May 1965, without his collaborators' consent to do so.
A similar mysterious situation was with the Δ++ baryon; in the quark model, it is composed of three up quarks with parallel spins. In 1964–65, Greenberg and Han–Nambu independently resolved the problem by proposing that quarks possess an additional SU(3) gauge degree of freedom, later called color charge. Han and Nambu noted that quarks might interact via an octet of vector gauge bosons: the gluons.
Since free quark searches consistently failed to turn up any evidence for the new particles, and because an elementary particle back then was defined as a particle that could be separated and isolated, Gell-Mann often said that quarks were merely convenient mathematical constructs, not real particles. The meaning of this statement was usually clear in context: He meant quarks are confined, but he also was implying that the strong interactions could probably not be fully described by quantum field theory.
Richard Feynman argued that high energy experiments showed quarks are real particles: he called them partons (since they were parts of hadrons). By particles, Feynman meant objects that travel along paths, elementary particles in a field theory.
The difference between Feynman's and Gell-Mann's approaches reflected a deep split in the theoretical physics community. Feynman thought the quarks have a distribution of position or momentum, like any other particle, and he (correctly) believed that the diffusion of parton momentum explained diffractive scattering. Although Gell-Mann believed that certain quark charges could be localized, he was open to the possibility that the quarks themselves could not be localized because space and time break down. This was the more radical approach of S-matrix theory.
James Bjorken proposed that pointlike partons would imply certain relations in deep inelastic scattering of electrons and protons, which were verified in experiments at SLAC in 1969. This led physicists to abandon the S-matrix approach for the strong interactions.
In 1973 the concept of color as the source of a "strong field" was developed into the theory of QCD by physicists Harald Fritzsch and Heinrich Leutwyler, together with physicist Murray Gell-Mann. In particular, they employed the general field theory developed in 1954 by Chen Ning Yang and Robert Mills (see Yang–Mills theory), in which the carrier particles of a force can themselves radiate further carrier particles. (This is different from QED, where the photons that carry the electromagnetic force do not radiate further photons.)
The discovery of asymptotic freedom in the strong interactions by David Gross, David Politzer and Frank Wilczek allowed physicists to make precise predictions of the results of many high energy experiments using the quantum field theory technique of perturbation theory. Evidence of gluons was discovered in three-jet events at PETRA in 1979. These experiments became more and more precise, culminating in the verification of perturbative QCD at the level of a few percent at LEP, at CERN.
The other side of asymptotic freedom is confinement. Since the force between color charges does not decrease with distance, it is believed that quarks and gluons can never be liberated from hadrons. This aspect of the theory is verified within lattice QCD computations, but is not mathematically proven. One of the Millennium Prize Problems announced by the Clay Mathematics Institute requires a claimant to produce such a proof. Other aspects of non-perturbative QCD are the exploration of phases of quark matter, including the quark–gluon plasma.
Theory
Some definitions
Every field theory of particle physics is based on certain symmetries of nature whose existence is deduced from observations. These can be
local symmetries, which are the symmetries that act independently at each point in spacetime. Each such symmetry is the basis of a gauge theory and requires the introduction of its own gauge bosons.
global symmetries, which are symmetries whose operations must be simultaneously applied to all points of spacetime.
QCD is a non-abelian gauge theory (or Yang–Mills theory) of the SU(3) gauge group obtained by taking the color charge to define a local symmetry.
Since the strong interaction does not discriminate between different flavors of quark, QCD has approximate flavor symmetry, which is broken by the differing masses of the quarks.
There are additional global symmetries whose definitions require the notion of chirality, discrimination between left and right-handed. If the spin of a particle has a positive projection on its direction of motion then it is called right-handed; otherwise, it is left-handed. Chirality and handedness are not the same, but become approximately equivalent at high energies.
Chiral symmetries involve independent transformations of these two types of particle.
Vector symmetries (also called diagonal symmetries) mean the same transformation is applied on the two chiralities.
Axial symmetries are those in which one transformation is applied on left-handed particles and the inverse on the right-handed particles.
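As a concrete illustration (standard notation, not tied to any particular source cited here), for left- and right-handed quark fields ψ_L and ψ_R a flavor transformation U acts as:

\text{vector:}\quad \psi_L \to U\,\psi_L,\;\; \psi_R \to U\,\psi_R, \qquad \text{axial:}\quad \psi_L \to U\,\psi_L,\;\; \psi_R \to U^{-1}\,\psi_R .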
Additional remarks: duality
As mentioned, asymptotic freedom means that at large energy – which corresponds also to short distances – there is practically no interaction between the particles. This is in contrast – more precisely one would say dual – to what one is used to, since usually one connects the absence of interactions with large distances. However, as already noted in the original paper of Franz Wegner, a solid state theorist who introduced simple gauge invariant lattice models in 1971, the high-temperature behaviour of the original model, e.g. the strong decay of correlations at large distances, corresponds to the low-temperature behaviour of the (usually ordered!) dual model, namely the asymptotic decay of non-trivial correlations, e.g. short-range deviations from almost perfect arrangements, for short distances. Here, in contrast to Wegner, we have only the dual model, which is the one described in this article.
Symmetry groups
The color group SU(3) corresponds to the local symmetry whose gauging gives rise to QCD. The electric charge labels a representation of the local symmetry group U(1), which is gauged to give QED: this is an abelian group. If one considers a version of QCD with Nf flavors of massless quarks, then there is a global (chiral) flavor symmetry group SUL(Nf) × SUR(Nf) × UB(1) × UA(1). The chiral symmetry is spontaneously broken by the QCD vacuum to the vector (L+R) SUV(Nf) with the formation of a chiral condensate. The vector symmetry, UB(1) corresponds to the baryon number of quarks and is an exact symmetry. The axial symmetry UA(1) is exact in the classical theory, but broken in the quantum theory, an occurrence called an anomaly. Gluon field configurations called instantons are closely related to this anomaly.
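Written compactly (a standard textbook summary, included here for orientation), the global flavor symmetry and its fate are:

SU_L(N_f) \times SU_R(N_f) \times U_B(1) \times U_A(1) \;\longrightarrow\; SU_V(N_f) \times U_B(1),

where the chiral part is broken spontaneously by the condensate \langle \bar{q} q \rangle \neq 0 and the axial U_A(1) is broken by the quantum anomaly.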
There are two different types of SU(3) symmetry: there is the symmetry that acts on the different colors of quarks, and this is an exact gauge symmetry mediated by the gluons, and there is also a flavor symmetry that rotates different flavors of quarks to each other, or flavor SU(3). Flavor SU(3) is an approximate symmetry of the vacuum of QCD, and is not a fundamental symmetry at all. It is an accidental consequence of the small mass of the three lightest quarks.
In the QCD vacuum there are vacuum condensates of all the quarks whose mass is less than the QCD scale. This includes the up and down quarks, and to a lesser extent the strange quark, but not any of the others. The vacuum is symmetric under SU(2) isospin rotations of up and down, and to a lesser extent under rotations of up, down, and strange, or full flavor group SU(3), and the observed particles make isospin and SU(3) multiplets.
The approximate flavor symmetries do have associated gauge bosons, observed particles like the rho and the omega, but these particles are nothing like the gluons and they are not massless. They are emergent gauge bosons in an approximate string description of QCD.
Lagrangian
The dynamics of the quarks and gluons are defined by the quantum chromodynamics Lagrangian. The gauge invariant QCD Lagrangian is

\mathcal{L}_{\mathrm{QCD}} = \bar{\psi}_i \left( i \gamma^\mu (D_\mu)_{ij} - m\,\delta_{ij} \right) \psi_j - \frac{1}{4} G^a_{\mu\nu} G^{\mu\nu}_a

where ψ_i(x) is the quark field, a dynamical function of spacetime, in the fundamental representation of the SU(3) gauge group, indexed by i and j running from 1 to 3; D_μ is the gauge covariant derivative; the γ^μ are Gamma matrices connecting the spinor representation to the vector representation of the Lorentz group.
Herein, the gauge covariant derivative (D_\mu)_{ij} = \partial_\mu \delta_{ij} - i g\,(T_a)_{ij}\,A^a_\mu couples the quark field with a coupling strength g to the gluon fields A^a_\mu via the infinitesimal SU(3) generators T_a in the fundamental representation. An explicit representation of these generators is given by T_a = \lambda_a / 2, wherein the λ_a are the Gell-Mann matrices.
The symbol G^a_{\mu\nu} represents the gauge invariant gluon field strength tensor, analogous to the electromagnetic field strength tensor, Fμν, in quantum electrodynamics. It is given by:

G^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, f^{abc}\, A^b_\mu A^c_\nu,

where A^a_\mu(x) are the gluon fields, dynamical functions of spacetime, in the adjoint representation of the SU(3) gauge group, indexed by a, b and c running from 1 to 8; and f^{abc} are the structure constants of SU(3) (the generators of the adjoint representation). Note that the rules to raise or lower the a, b, or c indices are trivial, (+, ..., +), so that f^{abc} = f_{abc} = f^a_{bc}, whereas for the μ or ν indices one has the non-trivial relativistic rules corresponding to the metric signature (+ − − −).
The variables m and g correspond to the quark mass and coupling of the theory, respectively, which are subject to renormalization.
An important theoretical concept is the Wilson loop (named after Kenneth G. Wilson). In lattice QCD, the final term of the above Lagrangian is discretized via Wilson loops, and more generally the behavior of Wilson loops can distinguish confined and deconfined phases.
Fields
Quarks are massive spin-1/2 fermions that carry a color charge whose gauging is the content of QCD. Quarks are represented by Dirac fields in the fundamental representation 3 of the gauge group SU(3). They also carry electric charge (either −1/3 or +2/3) and participate in weak interactions as part of weak isospin doublets. They carry global quantum numbers including the baryon number, which is 1/3 for each quark, hypercharge and one of the flavor quantum numbers.
Gluons are spin-1 bosons that also carry color charges, since they lie in the adjoint representation 8 of SU(3). They have no electric charge, do not participate in the weak interactions, and have no flavor. They lie in the singlet representation 1 of all these symmetry groups.
Each type of quark has a corresponding antiquark, of which the charge is exactly opposite. They transform in the conjugate representation to quarks, denoted 3̄.
Dynamics
According to the rules of quantum field theory, and the associated Feynman diagrams, the above theory gives rise to three basic interactions: a quark may emit (or absorb) a gluon, a gluon may emit (or absorb) a gluon, and two gluons may directly interact. This contrasts with QED, in which only the first kind of interaction occurs, since photons have no charge. Diagrams involving Faddeev–Popov ghosts must be considered too (except in the unitarity gauge).
Area law and confinement
Detailed computations with the above-mentioned Lagrangian show that the effective potential between a quark and its anti-quark in a meson contains a term that increases in proportion to the distance between the quark and anti-quark (∝ r), which represents some kind of "stiffness" of the interaction between the particle and its anti-particle at large distances, similar to the entropic elasticity of a rubber band (see below). This leads to confinement of the quarks to the interior of hadrons, i.e. mesons and nucleons, with typical radii Rc, corresponding to former "Bag models" of the hadrons. The order of magnitude of the "bag radius" is 1 fm (= 10−15 m). Moreover, the above-mentioned stiffness is quantitatively related to the so-called "area law" behavior of the expectation value of the Wilson loop product PW of the ordered coupling constants around a closed loop W; i.e. ⟨PW⟩ falls off exponentially with the area enclosed by the loop. For this behavior the non-abelian behavior of the gauge group is essential.
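Schematically (a conventional summary rather than a derivation), the linearly rising potential and the area law read:

V(r) \;\simeq\; \sigma\, r \quad (r \gg R_c), \qquad \langle P_W \rangle \;\sim\; e^{-\sigma\, A(W)},

where σ is the string tension and A(W) is the minimal area enclosed by the loop W.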
Methods
Further analysis of the content of the theory is complicated. Various techniques have been developed to work with QCD. Some of them are discussed briefly below.
Perturbative QCD
This approach is based on asymptotic freedom, which allows perturbation theory to be used accurately in experiments performed at very high energies. Although limited in scope, this approach has resulted in the most precise tests of QCD to date.
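For orientation, the standard one-loop (leading-order) expression for the running coupling, quoted here as an illustration, is:

\alpha_s(Q^2) \;=\; \frac{12\pi}{\left(33 - 2 n_f\right)\,\ln\!\left(Q^2 / \Lambda_{\mathrm{QCD}}^2\right)},

where n_f is the number of quark flavors lighter than the scale Q and Λ_QCD is the QCD scale parameter; the logarithmic decrease of the coupling with growing Q² is asymptotic freedom.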
Lattice QCD
Among non-perturbative approaches to QCD, the most well established is lattice QCD. This approach uses a discrete set of spacetime points (called the lattice) to reduce the analytically intractable path integrals of the continuum theory to a very difficult numerical computation that is then carried out on supercomputers like the QCDOC, which was constructed for precisely this purpose. While it is a slow and resource-intensive approach, it has wide applicability, giving insight into parts of the theory inaccessible by other means, in particular into the explicit forces acting between quarks and antiquarks in a meson. However, the numerical sign problem makes it difficult to use lattice methods to study QCD at high density and low temperature (e.g. nuclear matter or the interior of neutron stars).
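As a minimal illustration of the discretization (the standard Wilson gauge action, written schematically), the gluon part of the action becomes a sum over elementary plaquettes of SU(3) link variables:

S_G \;=\; \beta \sum_{p} \left( 1 - \tfrac{1}{3}\, \mathrm{Re}\, \mathrm{Tr}\, U_p \right), \qquad \beta = \frac{6}{g^2},

where U_p is the ordered product of the four link matrices around the plaquette p; the continuum Yang–Mills action is recovered as the lattice spacing goes to zero.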
1/N expansion
A well-known approximation scheme, the 1/N expansion, starts from the idea that the number of colors is infinite, and makes a series of corrections to account for the fact that it is not. Until now, it has been the source of qualitative insight rather than a method for quantitative predictions. Modern variants include the AdS/CFT approach.
Effective theories
For specific problems, effective theories may be written down that give qualitatively correct results in certain limits. In the best of cases, these may then be obtained as systematic expansions in some parameters of the QCD Lagrangian. One such effective field theory is chiral perturbation theory or ChiPT, which is the QCD effective theory at low energies. More precisely, it is a low energy expansion based on the spontaneous chiral symmetry breaking of QCD, which is an exact symmetry when quark masses are equal to zero, but for the u, d and s quark, which have small mass, it is still a good approximate symmetry. Depending on the number of quarks that are treated as light, one uses either SU(2) ChiPT or SU(3) ChiPT. Other effective theories are heavy quark effective theory (which expands around heavy quark mass near infinity), and soft-collinear effective theory (which expands around large ratios of energy scales). In addition to effective theories, models like the Nambu–Jona-Lasinio model and the chiral model are often used when discussing general features.
QCD sum rules
Based on an Operator product expansion one can derive sets of relations that connect different observables with each other.
Experimental tests
The notion of quark flavors was prompted by the necessity of explaining the properties of hadrons during the development of the quark model. The notion of color was necessitated by the puzzle of the Δ++ baryon. This has been dealt with in the section on the history of QCD.
The first evidence for quarks as real constituent elements of hadrons was obtained in deep inelastic scattering experiments at SLAC. The first evidence for gluons came in three-jet events at PETRA.
Several good quantitative tests of perturbative QCD exist:
The running of the QCD coupling as deduced from many observations
Scaling violation in polarized and unpolarized deep inelastic scattering
Vector boson production at colliders (this includes the Drell–Yan process)
Direct photons produced in hadronic collisions
Jet cross sections in colliders
Event shape observables at the LEP
Heavy-quark production in colliders
Quantitative tests of non-perturbative QCD are fewer, because the predictions are harder to make. The best is probably the running of the QCD coupling as probed through lattice computations of heavy-quarkonium spectra. There is also a claim concerning the mass of the heavy meson Bc. Other non-perturbative tests are currently at the level of 5% at best. Continuing work on masses and form factors of hadrons and their weak matrix elements provides promising candidates for future quantitative tests. The whole subject of quark matter and the quark–gluon plasma is a non-perturbative test bed for QCD that still remains to be properly exploited.
One qualitative prediction of QCD is that there exist composite particles made solely of gluons called glueballs that have not yet been definitively observed experimentally. A definitive observation of a glueball with the properties predicted by QCD would strongly confirm the theory. In principle, if glueballs could be definitively ruled out, this would be a serious experimental blow to QCD. But, as of 2013, scientists are unable to confirm or deny the existence of glueballs definitively, despite the fact that particle accelerators have sufficient energy to generate them.
Cross-relations to condensed matter physics
There are unexpected cross-relations to condensed matter physics. For example, the notion of gauge invariance forms the basis of the well-known Mattis spin glasses, which are systems with the usual spin degrees of freedom s_i = ±1 for i = 1, ..., N, with the special fixed "random" couplings J_{i,k} = ε_i ε_k J_0. Here the ε_i and ε_k quantities can independently and "randomly" take the values ±1, which corresponds to a most-simple gauge transformation s_i → ε_i s_i. This means that thermodynamic expectation values of measurable quantities, e.g. of the energy H = −Σ_{i,k} J_{i,k} s_i s_k, are invariant.
However, here the coupling degrees of freedom J_{i,k}, which in the QCD correspond to the gluons, are "frozen" to fixed values (quenching). In contrast, in the QCD they "fluctuate" (annealing), and through the large number of gauge degrees of freedom the entropy plays an important role (see below).
For positive J0 the thermodynamics of the Mattis spin glass corresponds in fact simply to a "ferromagnet in disguise", just because these systems have no "frustration" at all. This term is a basic measure in spin glass theory. Quantitatively it is identical with the loop product P_W = J_{i,k} J_{k,l} ... J_{n,i} along a closed loop W. However, for a Mattis spin glass – in contrast to "genuine" spin glasses – the quantity PW never becomes negative.
The basic notion "frustration" of the spin-glass is actually similar to the Wilson loop quantity of the QCD. The only difference is again that in the QCD one is dealing with SU(3) matrices, and that one is dealing with a "fluctuating" quantity. Energetically, perfect absence of frustration should be non-favorable and atypical for a spin glass, which means that one should add the loop product to the Hamiltonian, by some kind of term representing a "punishment". In the QCD the Wilson loop is essential for the Lagrangian rightaway.
The relation between the QCD and "disordered magnetic systems" (the spin glasses belong to them) was additionally stressed in a paper by Fradkin, Huberman and Shenker, which also stresses the notion of duality.
A further analogy consists in the already mentioned similarity to polymer physics, where, analogously to Wilson loops, so-called "entangled nets" appear, which are important for the formation of the entropy-elasticity (force proportional to the length) of a rubber band. The non-abelian character of the SU(3) corresponds thereby to the non-trivial "chemical links", which glue different loop segments together, and "asymptotic freedom" means in the polymer analogy simply the fact that in the short-wave limit, i.e. for λ_w ≪ R_c (where R_c is a characteristic correlation length for the glued loops, corresponding to the above-mentioned "bag radius", while λ_w is the wavelength of an excitation) any non-trivial correlation vanishes totally, as if the system had crystallized.
There is also a correspondence between confinement in QCD – the fact that the color field is only different from zero in the interior of hadrons – and the behaviour of the usual magnetic field in the theory of type-II superconductors: there the magnetism is confined to the interior of the Abrikosov flux-line lattice, i.e., the London penetration depth λ of that theory is analogous to the confinement radius Rc of quantum chromodynamics. Mathematically, this correspondence is supported by the second term on the r.h.s. of the Lagrangian.
| Physical sciences | Quantum mechanics | null |
25265 | https://en.wikipedia.org/wiki/Queue%20%28abstract%20data%20type%29 | Queue (abstract data type) | In computer science, a queue is a collection of entities that are maintained in a sequence and can be modified by the addition of entities at one end of the sequence and the removal of entities from the other end of the sequence. By convention, the end of the sequence at which elements are added is called the back, tail, or rear of the queue, and the end at which elements are removed is called the head or front of the queue, analogously to the words used when people line up to wait for goods or services.
The operation of adding an element to the rear of the queue is known as enqueue, and the operation of removing an element from the front is known as dequeue. Other operations may also be allowed, often including a peek or front operation that returns the value of the next element to be dequeued without dequeuing it.
The operations of a queue make it a first-in-first-out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once a new element is added, all elements that were added before have to be removed before the new element can be removed. A queue is an example of a linear data structure, or more abstractly a sequential collection.
Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes.
A queue has two ends: the rear (or tail), which is the only position at which the enqueue operation may occur, and the front (or head), which is the only position at which the dequeue operation may occur. A queue may be implemented using circular buffers and linked lists, or by using both the stack pointer and the base pointer.
Queues provide services in computer science, transport, and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer.
Another usage of queues is in the implementation of breadth-first search.
Queue implementation
Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again.
Fixed-length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the queue. The simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n will turn the array into a circle. This is still the conceptually simplest way to construct a queue in a high-level language, although it does slow things down a little, because the array indices must be compared to zero and to the array size—a cost comparable to the bounds check that some languages perform on every array access. Even so, it is the method of choice for a quick and dirty implementation, or for any high-level language that does not have pointer syntax. The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement or come with libraries for dynamic lists. Such data structures may have no fixed capacity limit besides memory constraints. Queue overflow results from trying to add an element onto a full queue and queue underflow happens when trying to remove an element from an empty queue.
A bounded queue is a queue limited to a fixed number of items.
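A minimal JavaScript sketch of such a bounded, circular-array queue is shown below (class and method names are illustrative, not taken from any particular library):

// A fixed-capacity queue backed by a circular array.
class CircularQueue {
  constructor(capacity) {
    this.buffer = new Array(capacity); // fixed-size backing array
    this.capacity = capacity;
    this.head = 0; // index of the next element to dequeue
    this.size = 0; // number of elements currently stored
  }

  enqueue(element) {
    if (this.size === this.capacity) throw new Error("queue overflow");
    // The write position wraps around modulo the array size.
    this.buffer[(this.head + this.size) % this.capacity] = element;
    this.size++;
  }

  dequeue() {
    if (this.size === 0) throw new Error("queue underflow");
    const element = this.buffer[this.head];
    this.head = (this.head + 1) % this.capacity; // advance and wrap
    this.size--;
    return element;
  }
}

Both operations run in constant time, and no element is ever moved within the array.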
There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the operations—en-queuing and de-queuing—in O(1) time.
Linked list
A doubly linked list has O(1) insertion and deletion at both ends, so it is a natural choice for queues.
A regular singly linked list only has efficient insertion and deletion at one end. However, a small modification—keeping a pointer to the last node in addition to the first one—will enable it to implement an efficient queue.
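For illustration, a JavaScript sketch of a singly linked list with an extra tail pointer, used as a queue, might look like this (names are illustrative):

// A queue backed by a singly linked list with head and tail pointers.
class LinkedQueue {
  constructor() {
    this.head = null; // front of the queue (dequeue end)
    this.tail = null; // rear of the queue (enqueue end)
  }

  enqueue(element) {
    const node = { value: element, next: null };
    if (this.tail === null) {
      this.head = node; // the queue was empty
    } else {
      this.tail.next = node; // link after the current last node
    }
    this.tail = node;
  }

  dequeue() {
    if (this.head === null) return undefined; // queue underflow
    const value = this.head.value;
    this.head = this.head.next;
    if (this.head === null) this.tail = null; // queue became empty
    return value;
  }
}

Both operations touch only the head or tail node, so they run in O(1) time.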
A deque implemented using a modified dynamic array
Queues and programming languages
Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue (deque) and not implemented separately. For example, Perl and Ruby allow pushing and popping an array from both ends, so one can use push and shift functions to enqueue and dequeue a list (or, in reverse, one can use unshift and pop), although in some cases these operations are not efficient.
C++'s Standard Template Library provides a "queue" templated class which is restricted to only push/pop operations. Since J2SE5.0, Java's library contains a Queue interface that specifies queue operations; implementing classes include LinkedList and (since J2SE 1.6) ArrayDeque. PHP has an SplQueue class, and third-party libraries like beanstalk'd and Gearman also provide queue functionality.
Example
A simple queue implemented in JavaScript:
class Queue {
  constructor() {
    // Backing store; the front of the queue is at index 0.
    this.items = [];
  }

  // Add an element to the rear of the queue.
  enqueue(element) {
    this.items.push(element);
  }

  // Remove and return the element at the front of the queue.
  // (Array#shift is O(n) in the queue length, but keeps the example simple.)
  dequeue() {
    return this.items.shift();
  }
}
Purely functional implementation
Queues can also be implemented as a purely functional data structure. There are two implementations. The first one only achieves O(1) per operation on average. That is, the amortized time is O(1), but individual operations can take O(n) where n is the number of elements in the queue. The second implementation is called a real-time queue and it allows the queue to be persistent with operations in O(1) worst-case time. It is a more complex implementation and requires lazy lists with memoization.
Amortized queue
This queue's data is stored in two singly-linked lists named f and r. The list f holds the front part of the queue. The list r holds the remaining elements (a.k.a., the rear of the queue) in reverse order. It is easy to insert into the front of the queue by adding a node at the head of f. And, if r is not empty, it is easy to remove from the end of the queue by removing the node at the head of r. When r is empty, the list f is reversed and assigned to r and then the head of r is removed.
The insert ("enqueue") always takes time. The removal ("dequeue") takes when the list is not empty. When is empty, the reverse takes where is the number of elements in . But, we can say it is amortized time, because every element in had to be inserted and we can assign a constant cost for each element in the reverse to when it was inserted.
Real-time queue
The real-time queue achieves O(1) time for all operations, without amortization. This discussion will be technical, so recall that, for a list l, |l| denotes its length, that NIL represents an empty list and CONS h t represents the list whose head is h and whose tail is t.
The data structure used to implement our queues consists of three singly-linked lists (f, r, s), where f is the front of the queue and r is the rear of the queue in reverse order. The invariant of the structure is that s is the rear of f without its |r| first elements, that is |s| = |f| − |r|. The tail of the queue (CONS x f, r, s) is then almost (f, r, s) and
inserting an element x to (f, r, s) is almost (f, CONS x r, s). It is said almost, because in both of those results, |s| = |f| − |r| + 1. An auxiliary function aux must then be called for the invariant to be satisfied. Two cases must be considered, depending on whether s is the empty list, in which case |r| = |f| + 1, or not. The formal definition is aux(f, r, CONS _ s) = (f, r, s) and aux(f, r, NIL) = (f′, NIL, f′) where f′ is f followed by r reversed.
Let us call rotate(f, r, NIL) the function which returns f followed by r reversed. Let us furthermore assume that |r| = |f| + 1, since it is the case when this function is called. More precisely, we define a lazy function rotate(f, r, a) which takes as input three lists such that |r| = |f| + 1, and returns the concatenation of f, of r reversed and of a. Then aux(f, r, NIL) = (rotate(f, r, NIL), NIL, rotate(f, r, NIL)).
The inductive definition of rotate is rotate(NIL, CONS y NIL, a) = CONS y a and rotate(CONS x f, CONS y r, a) = CONS x (rotate(f, r, CONS y a)). Its running time is O(|r|), but, since lazy evaluation is used, the computation is delayed until the results are forced by the computation.
The list s in the data structure has two purposes. This list serves as a counter for |f| − |r|; indeed, |f| = |r| if and only if s is the empty list. This counter allows us to ensure that the rear is never longer than the front list. Furthermore, using s, which is a tail of f, forces the computation of a part of the (lazy) list f during each tail and insert operation. Therefore, when |f| = |r|, the list f is totally forced. If it was not the case, the internal representation of f could be some append of append of... of append, and forcing would not be a constant time operation anymore.
| Mathematics | Data structures and types | null |
25267 | https://en.wikipedia.org/wiki/Quantum%20field%20theory | Quantum field theory | In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines field theory and the principle of relativity with ideas behind quantum mechanics. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on quantum field theory.
History
Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory.
Theoretical background
Quantum field theory results from the combination of classical field theory, quantum mechanics, and special relativity. A brief overview of these theoretical precursors follows.
The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Isaac Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact". It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector in the case of gravitational field) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.
Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day.
The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted.
Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization. Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles.
In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave–particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, Louis de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.
In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred. It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations.
Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators.
Quantum electrodynamics
Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.
Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators. With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.
In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence and non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations.
In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.
The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.
It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory. QFT naturally incorporated antiparticles in its formalism.
Infinities and renormalization
Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields, suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta. It was not until 20 years later that a systematic approach to remove such infinities was developed.
A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Such achievements were neither understood nor recognized by the theoretical community at the time.
Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.
In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift. Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations.
The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Richard Feynman, Freeman Dyson, and Shin'ichirō Tomonaga. The main idea is to replace the calculated values of mass and charge, infinite though they may be, by their finite measured values. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory. As Tomonaga said in his Nobel lecture:
"Since those parts of the modified mass and charge due to field reactions [become infinite], it is impossible to calculate them by the theory. However, the mass and charge observed in experiments are not the original mass and charge but the mass and charge as modified by field reactions, and they are finite. On the other hand, the mass and charge appearing in the theory are… the values modified by field reactions. Since this is so, and particularly since the theory is unable to calculate the modified mass and charge, we may adopt the procedure of substituting experimental values for them phenomenologically... This procedure is called the renormalization of mass and charge… After long, laborious calculations, less skillful than Schwinger's, we obtained a result... which was in agreement with [the] Americans'."
By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarization. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities".
At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams. The latter can be used to visually and intuitively organize and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.
It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.
Non-renormalizability
Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.
The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities.
The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant α ≈ 1/137, which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.
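The contrast can be made concrete with a short numerical comparison (an illustrative sketch only: it simply raises the two couplings to successive powers, rather than evaluating any actual diagram; the values α ≈ 1/137 and a representative strong coupling of 1 are the only inputs):

# Rough size of successive orders in a perturbative expansion: each order is
# suppressed by roughly one more power of the coupling constant.
alpha_qed = 1 / 137.035999   # fine-structure constant (QED coupling)
alpha_strong = 1.0           # strong coupling at low energies is of order one

for n in range(1, 5):
    print(f"order {n}:  QED ~ {alpha_qed**n:.3e}   strong ~ {alpha_strong**n:.3e}")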
With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as guiding principles, but not as a basis for quantitative calculations.
Source theory
Schwinger, however, took a different route. For more than a decade he and his students had been nearly the only exponents of field theory, but in 1951 he found a way around the problem of the infinities with a new method using external sources as currents coupled to gauge fields. Motivated by these findings, Schwinger kept pursuing this approach in order to "quantumly" generalize the classical process of coupling external forces to the configuration-space parameters known as Lagrange multipliers. He summarized his source theory in 1966 and then expanded its applications to quantum electrodynamics in his three-volume set Particles, Sources, and Fields. Developments in pion physics, in which the new viewpoint was most successfully applied, convinced him of the great advantages of mathematical simplicity and conceptual clarity that its use bestowed.
In source theory there are no divergences, and no renormalization. It may be regarded as the calculational tool of field theory, but it is more general. Using source theory, Schwinger was able to calculate the anomalous magnetic moment of the electron, which he had done in 1947, but this time with no ‘distracting remarks’ about infinite quantities.
Schwinger also applied source theory to his QFT theory of gravity, and was able to reproduce all four of Einstein's classic results: gravitational red shift, deflection and slowing of light by gravity, and the perihelion precession of Mercury. The neglect of source theory by the physics community was a major disappointment for Schwinger:
"The lack of appreciation of these facts by others was depressing, but understandable." (J. Schwinger)
See "the shoes incident" between J. Schwinger and S. Weinberg.
Standard model
In 1954, Yang Chen-Ning and Robert Mills generalized the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups. In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge.
Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable.
Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.
By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored, until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion.
Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.
These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and quantum chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades. The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model.
Other developments
The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory.
Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry theories only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973, but to date have not been widely accepted as part of the Standard Model due to lack of experimental evidence.
Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory, itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity.
Condensed matter physics
Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics.
Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter.
Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems.
Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect.
Principles
For simplicity, natural units are used in the following sections, in which the reduced Planck constant $\hbar$ and the speed of light $c$ are both set to one.
Classical fields
A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity and the electric field and magnetic field in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom.
Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields.
Canonical quantization and path integrals are two common formulations of QFT. To motivate the fundamentals of QFT, an overview of classical field theory follows.
The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as $\phi(\mathbf{x}, t)$, where $\mathbf{x}$ is the position vector and $t$ is the time. Suppose the Lagrangian of the field, $L$, is
$$L = \int d^3x\,\mathcal{L} = \int d^3x\left[\frac{1}{2}\dot\phi^2 - \frac{1}{2}(\nabla\phi)^2 - \frac{1}{2}m^2\phi^2\right],$$
where $\mathcal{L}$ is the Lagrangian density, $\dot\phi$ is the time-derivative of the field, $\nabla$ is the gradient operator, and $m$ is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian:
$$\frac{\partial}{\partial t}\left[\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial t)}\right] + \sum_{i=1}^{3}\frac{\partial}{\partial x^i}\left[\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial x^i)}\right] - \frac{\partial\mathcal{L}}{\partial\phi} = 0,$$
we obtain the equations of motion for the field, which describe the way it varies in time and space:
$$\left(\frac{\partial^2}{\partial t^2} - \nabla^2 + m^2\right)\phi = 0.$$
This is known as the Klein–Gordon equation.
The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows:
$$\phi(\mathbf{x}, t) = \int\frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_p}}\left(a_p e^{-i\omega_p t + i\mathbf{p}\cdot\mathbf{x}} + a_p^* e^{i\omega_p t - i\mathbf{p}\cdot\mathbf{x}}\right),$$
where $a_p$ is a complex number (normalized by convention), $*$ denotes complex conjugation, and $\omega_p$ is the frequency of the normal mode:
$$\omega_p = \sqrt{|\mathbf{p}|^2 + m^2}.$$
Thus each normal mode corresponding to a single $\mathbf{p}$ can be seen as a classical harmonic oscillator with frequency $\omega_p$.
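That a single normal mode solves the Klein–Gordon equation can be checked numerically (a minimal sketch in one spatial dimension, with arbitrarily chosen values of the mass and momentum, using finite differences):

import numpy as np

# Check that phi(x, t) = cos(p*x - w*t) with w = sqrt(p**2 + m**2) satisfies the
# 1D Klein-Gordon equation  d^2 phi/dt^2 - d^2 phi/dx^2 + m^2 phi = 0  (natural units).
m, p = 1.3, 0.7
w = np.sqrt(p**2 + m**2)

def phi(x, t):
    return np.cos(p * x - w * t)

x, t, h = 0.4, 2.1, 1e-4
d2_dt2 = (phi(x, t + h) - 2 * phi(x, t) + phi(x, t - h)) / h**2
d2_dx2 = (phi(x + h, t) - 2 * phi(x, t) + phi(x - h, t)) / h**2
residual = d2_dt2 - d2_dx2 + m**2 * phi(x, t)
print(f"Klein-Gordon residual: {residual:.2e}")   # close to zero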
Canonical quantization
The quantization procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator.
The displacement of a classical harmonic oscillator is described by
$$x(t) = \frac{1}{\sqrt{2\omega}}\left(a\,e^{-i\omega t} + a^* e^{i\omega t}\right),$$
where $a$ is a complex number (normalized by convention), and $\omega$ is the oscillator's frequency. Note that $x$ is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label $\mathbf{x}$ of a quantum field.
For a quantum harmonic oscillator, $x(t)$ is promoted to a linear operator $\hat x(t)$:
$$\hat x(t) = \frac{1}{\sqrt{2\omega}}\left(\hat a\,e^{-i\omega t} + \hat a^\dagger e^{i\omega t}\right).$$
Complex numbers $a$ and $a^*$ are replaced by the annihilation operator $\hat a$ and the creation operator $\hat a^\dagger$, respectively, where $\dagger$ denotes Hermitian conjugation. The commutation relation between the two is
$$[\hat a, \hat a^\dagger] = 1.$$
The Hamiltonian of the simple harmonic oscillator can be written as
$$\hat H = \omega\,\hat a^\dagger\hat a + \frac{\omega}{2}.$$
The vacuum state $|0\rangle$, which is the lowest energy state, is defined by
$$\hat a|0\rangle = 0$$
and has energy $\omega/2$.
One can easily check that $[\hat H, \hat a^\dagger] = \omega\,\hat a^\dagger$, which implies that $\hat a^\dagger$ increases the energy of the simple harmonic oscillator by $\omega$. For example, the state $\hat a^\dagger|0\rangle$ is an eigenstate of energy $3\omega/2$.
Any energy eigenstate $|n\rangle$ of a single harmonic oscillator can be obtained from $|0\rangle$ by successively applying the creation operator $\hat a^\dagger$: $|n\rangle \propto (\hat a^\dagger)^n|0\rangle$, and any state of the system can be expressed as a linear combination of the states $|n\rangle$.
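These relations can be verified concretely with a truncated matrix representation of the ladder operators (a minimal sketch in a finite number basis; the basis cutoff and frequency below are arbitrary choices):

import numpy as np

# Truncated number-basis representation of the harmonic-oscillator ladder operators.
# a|n> = sqrt(n)|n-1>, so a has sqrt(1), sqrt(2), ... on its first superdiagonal.
N = 12                                          # basis cutoff: |0>, |1>, ..., |N-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator
adag = a.conj().T                               # creation operator

omega = 1.0
H = omega * (adag @ a + 0.5 * np.eye(N))        # H = omega (a† a + 1/2)

# [a, a†] = 1 holds exactly except in the last row/column (truncation artifact).
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True

# Energies of |0>, a†|0>, (a†)^2|0>, ... are omega/2, 3*omega/2, 5*omega/2, ...
print(np.round(np.diag(H)[:4], 3))                  # [0.5 1.5 2.5 3.5]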
A similar procedure can be applied to the real scalar field $\phi$, by promoting it to a quantum field operator $\hat\phi$, while the annihilation operator $\hat a_{\mathbf{p}}$, the creation operator $\hat a_{\mathbf{p}}^\dagger$ and the angular frequency $\omega_p$ are now for a particular $\mathbf{p}$:
$$\hat\phi(\mathbf{x}, t) = \int\frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_p}}\left(\hat a_{\mathbf{p}}\,e^{-i\omega_p t + i\mathbf{p}\cdot\mathbf{x}} + \hat a_{\mathbf{p}}^\dagger\,e^{i\omega_p t - i\mathbf{p}\cdot\mathbf{x}}\right).$$
Their commutation relations are:
$$[\hat a_{\mathbf{p}}, \hat a_{\mathbf{q}}^\dagger] = (2\pi)^3\,\delta^3(\mathbf{p} - \mathbf{q}),\qquad [\hat a_{\mathbf{p}}, \hat a_{\mathbf{q}}] = [\hat a_{\mathbf{p}}^\dagger, \hat a_{\mathbf{q}}^\dagger] = 0,$$
where $\delta^3$ is the Dirac delta function. The vacuum state $|0\rangle$ is defined by
$$\hat a_{\mathbf{p}}|0\rangle = 0\quad\text{for all }\mathbf{p}.$$
Any quantum state of the field can be obtained from $|0\rangle$ by successively applying creation operators $\hat a_{\mathbf{p}}^\dagger$ (or by a linear combination of such states), e.g. $\hat a_{\mathbf{p}_1}^\dagger\hat a_{\mathbf{p}_2}^\dagger\cdots\hat a_{\mathbf{p}_n}^\dagger|0\rangle$, a state containing $n$ particles with momenta $\mathbf{p}_1,\dots,\mathbf{p}_n$.
While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. The process of quantizing an arbitrary number of particles instead of a single particle is often also called second quantization.
The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantize (complex) scalar fields, Dirac fields, vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary.
The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:
$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi)(\partial^\mu\phi) - \frac{1}{2}m^2\phi^2 - \frac{\lambda}{4!}\phi^4,$$
where $\mu$ is a spacetime index, $\partial_0 = \partial/\partial t$, $\partial_1 = \partial/\partial x^1$, etc. The summation over the index $\mu$ has been omitted following the Einstein notation. If the parameter $\lambda$ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory.
Path integrals
The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state $|\phi_I\rangle$ at time $t = 0$ to some final state $|\phi_F\rangle$ at $t = T$, the total time $T$ is divided into $N$ small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let $H$ be the Hamiltonian (i.e. generator of time evolution), then
$$\langle\phi_F|e^{-iHT}|\phi_I\rangle = \int d\phi_1\int d\phi_2\cdots\int d\phi_{N-1}\,\langle\phi_F|e^{-iHT/N}|\phi_{N-1}\rangle\cdots\langle\phi_2|e^{-iHT/N}|\phi_1\rangle\langle\phi_1|e^{-iHT/N}|\phi_I\rangle.$$
Taking the limit $N\to\infty$, the above product of integrals becomes the Feynman path integral:
$$\langle\phi_F|e^{-iHT}|\phi_I\rangle = \int\mathcal{D}\phi(t)\,\exp\left[i\int_0^T dt\,L\right],$$
where $L$ is the Lagrangian involving $\phi$ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian $H$ via Legendre transformation. The initial and final conditions of the path integral are respectively
$$\phi(0) = \phi_I,\qquad \phi(T) = \phi_F.$$
In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand.
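The sum-over-paths structure can be sketched numerically. The following toy example is an illustrative sketch only: a single non-relativistic free particle on a coarse grid of intermediate positions, with arbitrarily chosen parameters, no normalization factor, and no continuum limit; it simply adds exp(iS) over every discretized path between fixed endpoints.

import numpy as np
from itertools import product

m_particle, dt = 1.0, 0.5
x_initial, x_final = 0.0, 1.0
grid = np.linspace(-2.0, 2.0, 9)       # allowed intermediate positions
N = 3                                   # number of intermediate time slices

def action(path):
    # Discretized free-particle action: sum over steps of (m/2) (dx/dt)^2 * dt
    dx = np.diff(np.array(path))
    return np.sum(0.5 * m_particle * (dx / dt) ** 2 * dt)

amplitude = 0j
for middle in product(grid, repeat=N):              # every possible intermediate path
    path = (x_initial, *middle, x_final)
    amplitude += np.exp(1j * action(path))          # each path contributes exp(i S)

print(amplitude)    # unnormalized overall amplitude from summing all paths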
Two-point correlation function
In calculations, one often encounters expressions like $\langle 0|T\{\phi(x)\phi(y)\}|0\rangle$ or $\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle$ in the free or interacting theory, respectively. Here, $x$ and $y$ are position four-vectors, $T$ is the time ordering operator that shuffles its operands so the time components $x^0$ and $y^0$ increase from right to left, and $|\Omega\rangle$ is the ground state (vacuum state) of the interacting theory, different from the free ground state $|0\rangle$. This expression represents the probability amplitude for the field to propagate from $y$ to $x$, and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short.
The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantization or path integrals to be
$$\langle 0|T\{\phi(x)\phi(y)\}|0\rangle \equiv D_F(x - y) = \lim_{\epsilon\to 0}\int\frac{d^4p}{(2\pi)^4}\,\frac{i}{p_\mu p^\mu - m^2 + i\epsilon}\,e^{-ip_\mu(x^\mu - y^\mu)}.$$
In an interacting theory, where the Lagrangian or Hamiltonian contains terms or that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantization formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function.
In canonical quantization, the two-point correlation function can be written as:
$$\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle = \lim_{T\to\infty(1-i\epsilon)}\frac{\left\langle 0\left|T\left\{\phi_I(x)\phi_I(y)\exp\left[-i\int_{-T}^{T}dt\,H_I(t)\right]\right\}\right|0\right\rangle}{\left\langle 0\left|T\left\{\exp\left[-i\int_{-T}^{T}dt\,H_I(t)\right]\right\}\right|0\right\rangle},$$
where $\epsilon$ is an infinitesimal number and $\phi_I$ is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in $\phi^4$-theory, the interacting term of the Hamiltonian is $H_I(t) = \int d^3x\,\frac{\lambda}{4!}\phi_I(\mathbf{x}, t)^4$, and the expansion of the two-point correlator in terms of $\lambda$ becomes
$$\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle = \frac{\displaystyle\sum_{n=0}^{\infty}\frac{(-i\lambda)^n}{n!\,(4!)^n}\left\langle 0\left|T\left\{\phi_I(x)\phi_I(y)\left[\int d^4z\,\phi_I(z)^4\right]^n\right\}\right|0\right\rangle}{\displaystyle\sum_{n=0}^{\infty}\frac{(-i\lambda)^n}{n!\,(4!)^n}\left\langle 0\left|T\left\{\left[\int d^4z\,\phi_I(z)^4\right]^n\right\}\right|0\right\rangle}.$$
This perturbation expansion expresses the interacting two-point function in terms of quantities that are evaluated in the free theory.
In the path integral formulation, the two-point correlation function can be written
$$\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle = \frac{\int\mathcal{D}\phi\;\phi(x)\phi(y)\exp\left[i\int d^4z\,\mathcal{L}\right]}{\int\mathcal{D}\phi\;\exp\left[i\int d^4z\,\mathcal{L}\right]},$$
where $\mathcal{L}$ is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in $\lambda$, reducing the interacting two-point function to quantities in the free theory.
Wick's theorem further reduces any $n$-point correlation function in the free theory to a sum of products of two-point correlation functions. For example,
$$\langle 0|T\{\phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4)\}|0\rangle = \langle 0|T\{\phi(x_1)\phi(x_2)\}|0\rangle\,\langle 0|T\{\phi(x_3)\phi(x_4)\}|0\rangle + \langle 0|T\{\phi(x_1)\phi(x_3)\}|0\rangle\,\langle 0|T\{\phi(x_2)\phi(x_4)\}|0\rangle + \langle 0|T\{\phi(x_1)\phi(x_4)\}|0\rangle\,\langle 0|T\{\phi(x_2)\phi(x_3)\}|0\rangle.$$
Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory. This makes the Feynman propagator one of the most important quantities in quantum field theory.
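Because the free theory is Gaussian, Wick's theorem can be checked numerically by treating the field values at four points as correlated Gaussian random variables (a minimal sketch; the 4×4 covariance matrix below is arbitrary and merely plays the role of the two-point function):

import numpy as np

# Numerical check of Wick's theorem for zero-mean Gaussian variables:
# <x1 x2 x3 x4> = <x1 x2><x3 x4> + <x1 x3><x2 x4> + <x1 x4><x2 x3>.
rng = np.random.default_rng(0)

A = rng.normal(size=(4, 4))
C = A @ A.T                               # an arbitrary positive-definite covariance
samples = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)

lhs = np.mean(samples[:, 0] * samples[:, 1] * samples[:, 2] * samples[:, 3])
rhs = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
print(f"Monte Carlo <x1 x2 x3 x4> = {lhs:.3f},  Wick pairing sum = {rhs:.3f}")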
Feynman diagram
Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the $\lambda^2$ term in the two-point correlation function in the $\phi^4$ theory is
$$\frac{(-i\lambda)^2}{2\,(4!)^2}\left\langle 0\left|T\left\{\phi(x)\phi(y)\int d^4z\,\phi(z)^4\int d^4w\,\phi(w)^4\right\}\right|0\right\rangle.$$
After applying Wick's theorem, one of the terms is (up to its combinatorial prefactor)
$$\int d^4z\int d^4w\;D_F(x-z)\,D_F(z-w)^3\,D_F(w-y).$$
This term can instead be obtained from the Feynman diagram
The diagram consists of
2 external vertices, each connected with one edge and represented by dots (here labeled $x$ and $y$),
2 internal vertices, each connected with four edges and represented by dots (here labeled $z$ and $w$),
5 edges connecting the vertices and represented by lines.
Every vertex corresponds to a single field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the so-called Feynman rules:
For every internal vertex $z_i$, write down a factor $-i\lambda\int d^4z_i$.
For every edge that connects two vertices $z_i$ and $z_j$, write down a factor $D_F(z_i - z_j)$.
Divide by the symmetry factor of the diagram.
With the symmetry factor of this diagram included, following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space.
In order to compute the $n$-point correlation function to the $k$-th order, list all valid Feynman diagrams with $n$ external points and $k$ or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise,
$$\langle\Omega|T\{\phi(x_1)\cdots\phi(x_n)\}|\Omega\rangle$$
is equal to the sum of (expressions corresponding to) all connected diagrams with $n$ external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the $\phi^4$ interaction theory discussed above, every vertex must have four legs.
In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method.
Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing $n$ loops are referred to as $n$-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction. Lines whose end points are vertices can be thought of as the propagation of virtual particles.
Renormalization
Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalization procedure is a systematic process for removing such infinities.
Parameters appearing in the Lagrangian, such as the mass $m$ and the coupling constant $\lambda$, have no physical meaning: $m$, $\lambda$, and the field strength $\phi$ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off $\Lambda$, obtain expressions for the physical quantities, and then take the limit $\Lambda\to\infty$. This is an example of regularization, a class of methods to treat divergences in QFT, with $\Lambda$ being the regulator.
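The logic of cut-off regularization can be illustrated with a toy one-dimensional integral (a sketch only; the integrand below is not an actual QED loop integral): each regulated integral grows without bound as the cut-off increases, but a suitable difference of two such integrals approaches a finite limit.

import numpy as np
from scipy.integrate import quad

def regulated_integral(m, cutoff):
    # Logarithmically divergent toy integral, cut off at momentum = cutoff.
    value, _ = quad(lambda k: 1.0 / (k + m), 0.0, cutoff)
    return value

m1, m2 = 1.0, 2.0
for cutoff in (1e2, 1e4, 1e6):
    i1 = regulated_integral(m1, cutoff)
    i2 = regulated_integral(m2, cutoff)
    print(f"Lambda = {cutoff:.0e}:  I(m1) = {i1:8.3f}   I(m1) - I(m2) = {i1 - i2:.4f}")
# The individual integrals diverge, but the difference tends to ln(m2/m1) = ln 2.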
The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalized perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of $\phi^4$ theory, the field strength is first redefined:
$$\phi = Z^{1/2}\phi_r,$$
where $\phi$ is the bare field, $\phi_r$ is the renormalized field, and $Z$ is a constant to be determined. The Lagrangian density becomes:
$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi_r)(\partial^\mu\phi_r) - \frac{1}{2}m_r^2\phi_r^2 - \frac{\lambda_r}{4!}\phi_r^4 + \frac{1}{2}\delta_Z(\partial_\mu\phi_r)(\partial^\mu\phi_r) - \frac{1}{2}\delta_m\phi_r^2 - \frac{\delta_\lambda}{4!}\phi_r^4,$$
where $m_r$ and $\lambda_r$ are the experimentally measurable, renormalized, mass and coupling constant, respectively, and
$$\delta_Z = Z - 1,\qquad \delta_m = m^2Z - m_r^2,\qquad \delta_\lambda = \lambda Z^2 - \lambda_r$$
are constants to be determined. The first three terms are the Lagrangian density written in terms of the renormalized quantities, while the latter three terms are referred to as "counterterms". Since the Lagrangian now contains more terms, the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularization scheme (such as the cut-off regularization introduced above or dimensional regularization); call the regulator $\Lambda$. Compute Feynman diagrams, in which divergent terms will depend on $\Lambda$. Then, define $\delta_Z$, $\delta_m$, and $\delta_\lambda$ such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit $\Lambda\to\infty$ is taken. In this way, meaningful finite quantities are obtained.
It is only possible to eliminate all infinities to obtain a finite result in renormalizable theories, whereas in non-renormalizable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalizable QFT, while quantum gravity is non-renormalizable.
Renormalization group
The renormalization group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales. The way in which each parameter changes with scale is described by its β function. Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.
As an example, the coupling constant in QED, namely the elementary charge $e$, has the following β function:
$$\beta(e) \equiv \frac{de}{d\ln\mu} = \frac{e^3}{12\pi^2} + \cdots,$$
where $\mu$ is the energy scale at which the measurement of $e$ is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.
The coupling constant $g_s$ in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group $SU(3)$, has the following β function:
$$\beta(g_s) \equiv \frac{dg_s}{d\ln\mu} = -\frac{g_s^3}{16\pi^2}\left(11 - \frac{2}{3}N_f\right) + \cdots,$$
where $N_f$ is the number of quark flavours. In the case where $N_f \le 16$ (the Standard Model has $N_f = 6$), the coupling constant $g_s$ decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.
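The opposite running behaviours can be integrated explicitly at one loop. The sketch below uses only the leading-order formulas quoted above; the reference values (the electron mass as the QED reference scale with only the electron loop included, and a strong coupling of about 0.118 at the Z-boson mass with five flavours for QCD) are inputs of the example, not a precise phenomenological fit.

import numpy as np

# One-loop running couplings (leading order only).
# QED:  d(alpha)/d(ln mu) = 2 alpha^2 / (3 pi)  ->  alpha grows with energy.
# QCD:  alpha_s(Q) = alpha_s(mu0) / (1 + b0 * alpha_s(mu0) * ln(Q^2/mu0^2)),
#       b0 = (33 - 2*nf) / (12 pi)              ->  alpha_s shrinks with energy.
def alpha_qed(mu, alpha0=1 / 137.036, mu0=0.000511):   # reference scale ~ electron mass in GeV
    return alpha0 / (1 - (2 * alpha0 / (3 * np.pi)) * np.log(mu / mu0))

def alpha_qcd(Q, alpha0=0.118, mu0=91.19, nf=5):       # reference ~ alpha_s at the Z mass
    b0 = (33 - 2 * nf) / (12 * np.pi)
    return alpha0 / (1 + b0 * alpha0 * np.log(Q**2 / mu0**2))

for scale in (10.0, 100.0, 1000.0):                    # energy scales in GeV
    print(f"mu = {scale:6.1f} GeV:  1/alpha_QED = {1 / alpha_qed(scale):6.1f}   "
          f"alpha_s = {alpha_qcd(scale):.4f}")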
Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however: the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and the $\mathcal{N} = 4$ supersymmetric Yang–Mills theory.
According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off $\Lambda$, i.e. that the theory is no longer valid at energies higher than $\Lambda$, and all degrees of freedom above the scale $\Lambda$ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalizable effective field theory. The difference between renormalizable and non-renormalizable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them. According to this view, non-renormalizable theories are to be seen as low-energy effective theories of a more fundamental theory. The failure to remove the cut-off $\Lambda$ from calculations in such a theory merely indicates that new physical phenomena appear at scales above $\Lambda$, where a new theory is necessary.
Other theories
The quantization and renormalization procedures outlined in the preceding sections are performed for the free theory and the $\phi^4$ theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction.
As an example, quantum electrodynamics contains a Dirac field $\psi$ representing the electron field and a vector field $A^\mu$ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is:
$$\mathcal{L} = \bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - e\bar\psi\gamma^\mu\psi A_\mu,$$
where $\gamma^\mu$ are Dirac matrices, $\bar\psi = \psi^\dagger\gamma^0$, and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass $m$ and the (bare) elementary charge $e$. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.
Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating, creating an off-shell photon, and then decaying into a new pair of electron and positron. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg.
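The algebraic property of the Dirac matrices $\gamma^\mu$ that underlies the QED Lagrangian, the Clifford algebra $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}$, can be checked directly (a minimal sketch using the standard Dirac representation built from the Pauli matrices):

import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]].
gamma0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
gammas = [gamma0] + [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+, -, -, -)
ok = all(
    np.allclose(gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu],
                2 * eta[mu, nu] * np.eye(4))
    for mu in range(4) for nu in range(4)
)
print(ok)   # True: the Clifford algebra holds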
Gauge symmetry
If the following transformation to the fields is performed at every spacetime point $x$ (a local transformation), then the QED Lagrangian remains unchanged, or invariant:
$$\psi(x) \to e^{i\alpha(x)}\psi(x),\qquad A_\mu(x) \to A_\mu(x) - \frac{1}{e}\partial_\mu\alpha(x),$$
where $\alpha(x)$ is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory. Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations $e^{i\alpha(x)}$ and $e^{i\alpha'(x)}$ is yet another symmetry transformation $e^{i[\alpha(x) + \alpha'(x)]}$. For any $\alpha(x)$, $e^{i\alpha(x)}$ is an element of the $U(1)$ group, thus QED is said to have $U(1)$ gauge symmetry. The photon field $A_\mu$ may be referred to as the $U(1)$ gauge boson.
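The distinction between this Abelian case and the non-Abelian case discussed next can be illustrated numerically (a minimal sketch: U(1) elements are phases and always commute, whereas generic SU(2) matrices, used here only as the simplest non-Abelian stand-in, do not):

import numpy as np

rng = np.random.default_rng(1)

# U(1): group elements are phases exp(i*alpha); composition is commutative.
a1, a2 = rng.uniform(0, 2 * np.pi, size=2)
u1, u2 = np.exp(1j * a1), np.exp(1j * a2)
print(np.isclose(u1 * u2, u2 * u1))                    # True: order does not matter
print(np.isclose(u1 * u2, np.exp(1j * (a1 + a2))))     # the product is again a phase

def su2(theta, n):
    """SU(2) element exp(-i theta/2 n.sigma) for a unit vector n."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    n_dot_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_dot_sigma

g1 = su2(0.7, np.array([1.0, 0.0, 0.0]))
g2 = su2(1.1, np.array([0.0, 0.0, 1.0]))
print(np.allclose(g1 @ g2, g2 @ g1))                   # False: non-Abelian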
$U(1)$ is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories). Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an $SU(3)$ gauge symmetry. It contains three Dirac fields $\psi^i,\ i = 1,2,3$ representing quark fields as well as eight vector fields $A^{a,\mu},\ a = 1,\dots,8$ representing gluon fields, which are the $SU(3)$ gauge bosons. The QCD Lagrangian density is:
$$\mathcal{L} = i\bar\psi^i\gamma^\mu(D_\mu)^{ij}\psi^j - \frac{1}{4}F^a_{\mu\nu}F^{a,\mu\nu} - m\bar\psi^i\psi^i,$$
where $D_\mu$ is the gauge covariant derivative:
$$D_\mu = \partial_\mu - igA^a_\mu t^a,$$
where $g$ is the coupling constant, $t^a$ are the eight generators of $SU(3)$ in the fundamental representation ($3\times 3$ matrices),
$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + gf^{abc}A^b_\mu A^c_\nu,$$
and $f^{abc}$ are the structure constants of $SU(3)$. Repeated indices $i$, $j$, $a$ are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation:
$$\psi^i(x) \to U^{ij}(x)\psi^j(x),\qquad A^a_\mu(x)\,t^a \to U(x)\left[A^a_\mu(x)\,t^a + \frac{i}{g}\partial_\mu\right]U^\dagger(x),$$
where $U(x)$ is an element of $SU(3)$ at every spacetime point $x$:
$$U(x) = e^{i\alpha(x)^a t^a}.$$
The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantization, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density under a certain local transformation of the fields, the measure of the path integral may change. For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group $SU(3)\times SU(2)\times U(1)$, in which all anomalies exactly cancel.
The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group.
Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law. For example, the $U(1)$ symmetry of QED implies charge conservation.
Gauge transformations do not relate distinct quantum states. Rather, they relate two equivalent mathematical descriptions of the same quantum state. As an example, the photon field $A^\mu$, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarization. The remaining two degrees of freedom are said to be "redundant": apparently different ways of writing $A^\mu$ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.
To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally. A more rigorous generalization of the Faddeev–Popov procedure is given by BRST quantization.
Spontaneous symmetry-breaking
Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.
To illustrate the mechanism, consider a linear sigma model containing $N$ real scalar fields, described by the Lagrangian density:
$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi^i)(\partial^\mu\phi^i) + \frac{1}{2}\mu^2\phi^i\phi^i - \frac{\lambda}{4}(\phi^i\phi^i)^2,$$
where $\mu$ and $\lambda$ are real parameters. The theory admits an $O(N)$ global symmetry:
$$\phi^i \to O^{ij}\phi^j,\qquad O\in O(N).$$
The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field $\phi_0$ satisfying
$$\phi_0^i\phi_0^i = \frac{\mu^2}{\lambda}.$$
Without loss of generality, let the ground state be in the $N$-th direction:
$$\phi_0^i = \left(0,\dots,0,\frac{\mu}{\sqrt\lambda}\right).$$
The original $N$ fields can be rewritten as:
$$\phi^i(x) = \left(\pi^1(x),\dots,\pi^{N-1}(x),\frac{\mu}{\sqrt\lambda} + \sigma(x)\right),$$
and the original Lagrangian density as:
$$\mathcal{L} = \frac{1}{2}(\partial_\mu\pi^k)(\partial^\mu\pi^k) + \frac{1}{2}(\partial_\mu\sigma)(\partial^\mu\sigma) - \frac{1}{2}(2\mu^2)\sigma^2 - \sqrt\lambda\,\mu\,\sigma^3 - \sqrt\lambda\,\mu\,(\pi^k\pi^k)\sigma - \frac{\lambda}{4}\left[(\pi^k\pi^k)^2 + 2(\pi^k\pi^k)\sigma^2 + \sigma^4\right],$$
where $k = 1,\dots,N-1$. The original $O(N)$ global symmetry is no longer manifest, leaving only the subgroup $O(N-1)$. The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.
Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, $O(N)$ has $N(N-1)/2$ continuous symmetries (the dimension of its Lie algebra), while $O(N-1)$ has $(N-1)(N-2)/2$. The number of broken symmetries is their difference, $N-1$, which corresponds to the $N-1$ massless fields $\pi^k$.
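The counting can be spelled out in a few lines (a trivial numerical sketch of the dimension formula for the orthogonal groups):

# Goldstone counting for the linear sigma model: the number of broken generators,
# dim O(N) - dim O(N-1), equals the number of massless pi fields, N - 1.
def dim_O(n):
    return n * (n - 1) // 2    # dimension of the Lie algebra of O(n)

for N in (2, 3, 4, 10):
    broken = dim_O(N) - dim_O(N - 1)
    print(f"N = {N:2d}:  broken generators = {broken}  (= N - 1 = {N - 1})")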
On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarized massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.
In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures. In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through the spontaneous symmetry breaking of the Higgs field, a process called the Higgs mechanism.
Supersymmetry
All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesized the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.
The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations $P^\mu$ and the Lorentz transformations $J^{\mu\nu}$. In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators $Q_\alpha$, called supercharges, which themselves transform as Weyl fermions. The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, $Q_\alpha^I,\ I = 1,\dots,\mathcal{N}$, which generate the corresponding $\mathcal{N} = 1$ supersymmetry, $\mathcal{N} = 2$ supersymmetry, and so on. Supersymmetry can also be constructed in other dimensions, most notably in (1+1) dimensions for its application in superstring theory.
The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group. Examples of such theories include the Minimal Supersymmetric Standard Model (MSSM), $\mathcal{N} = 4$ supersymmetric Yang–Mills theory, and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.
If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity.
Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalization) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its super-partner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.
Nevertheless, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments.
Other spacetimes
The $\phi^4$ theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime.
In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases. In high-energy physics, string theory is a type of (1+1)-dimensional QFT, while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.
In Minkowski space, the flat metric $\eta_{\mu\nu}$ is used to raise and lower spacetime indices in the Lagrangian, e.g.
$$A_\mu A^\mu = \eta_{\mu\nu}A^\mu A^\nu,\qquad \partial_\mu\phi\,\partial^\mu\phi = \eta^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi,$$
where $\eta^{\mu\nu}$ is the inverse of $\eta_{\mu\nu}$ satisfying $\eta^{\mu\rho}\eta_{\rho\nu} = \delta^\mu_\nu$.
For QFTs in curved spacetime on the other hand, a general metric $g_{\mu\nu}$ (such as the Schwarzschild metric describing a black hole) is used:
$$A_\mu A^\mu = g_{\mu\nu}A^\mu A^\nu,\qquad \partial_\mu\phi\,\partial^\mu\phi = g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi,$$
where $g^{\mu\nu}$ is the inverse of $g_{\mu\nu}$.
For a real scalar field, the Lagrangian density in a general spacetime background is
$$\mathcal{L} = \sqrt{|g|}\left(\frac{1}{2}g^{\mu\nu}\nabla_\mu\phi\,\nabla_\nu\phi - \frac{1}{2}m^2\phi^2\right),$$
where $g = \det(g_{\mu\nu})$, and $\nabla_\mu$ denotes the covariant derivative. The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background.
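The index gymnastics of raising and lowering with a metric is easy to mirror numerically (a minimal sketch; the four-vector components and the "curved" diagonal metric below are arbitrary illustrative values, the latter loosely modelled on a Schwarzschild-like diagonal form at a fixed radius):

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])        # flat Minkowski metric eta_{mu nu}

A_upper = np.array([2.0, 1.0, 0.5, -0.3])     # a contravariant four-vector A^mu
A_lower = np.einsum("mn,n->m", eta, A_upper)  # lower the index: A_mu = eta_{mu nu} A^nu

invariant = np.einsum("m,m->", A_lower, A_upper)   # the scalar A_mu A^mu
print(A_lower, invariant)

# The same contraction pattern works for any metric g_{mu nu}; only the matrix changes.
g = np.diag([0.5, -1 / 0.5, -1.0, -1.0])      # toy diagonal "curved" metric (illustrative)
A_lower_g = np.einsum("mn,n->m", g, A_upper)
print(A_lower_g)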
Topological quantum field theory
The correlation functions and physical predictions of a QFT depend on the spacetime metric . For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric. QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers. The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime, which relates the braiding statistics of anyons in physics to the
link invariants in mathematics. TQFTs applicable to frontier research on topological quantum matter include the Chern–Simons–Witten gauge theories in 2+1 spacetime dimensions, as well as other new exotic TQFTs in 3+1 spacetime dimensions and beyond.
Perturbative and non-perturbative methods
Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton. Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory and the Thirring model.
Mathematical rigor
In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined.
However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory provides a rigorous formulation of perturbative renormalization that combines both the effective-field theory approaches of Kadanoff, Wilson, and Polchinski, together with the Batalin-Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired from finite-dimensional integration theory, can be given a sound mathematical interpretation from their finite-dimensional analogues.
Since the 1950s, theoretical physicists and mathematicians have attempted to organize all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics, which has led to such results as CPT theorem, spin–statistics theorem, and Goldstone's theorem, and also to mathematically rigorous constructions of many interacting QFTs in two and three spacetime dimensions, e.g. two-dimensional scalar field theories with arbitrary polynomial interactions, the three-dimensional scalar field theories with a quartic interaction, etc.
Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms.
Algebraic quantum field theory is another approach to the axiomatization of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include Wightman axioms and Haag–Kastler axioms. One way to construct theories satisfying Wightman axioms is to use Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation).
Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is, in essence, the following: prove that for any compact simple gauge group $G$, a non-trivial quantum Yang–Mills theory exists on $\mathbb{R}^4$ and has a mass gap $\Delta > 0$.
| Physical sciences | Quantum mechanics | null |
25268 | https://en.wikipedia.org/wiki/Quantum%20electrodynamics | Quantum electrodynamics | In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction.
In technical terms, QED can be described as a very accurate way to calculate the probability of the position and movement of particles, even massless ones such as photons, together with the position-dependent quantity (the field) associated with those particles; it describes light and matter beyond the wave–particle duality proposed by Albert Einstein in 1905. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. It is the most precise and stringently tested theory in physics.
History
The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who during the 1920s computed the coefficient of spontaneous emission of an atom. He is credited with coining the term "quantum electrodynamics".
Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and Enrico Fermi, physicists came to believe that, in principle, it was possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting doubt on the theory's internal consistency. This suggested that special relativity and quantum mechanics were fundamentally incompatible.
Difficulties increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, later known as the Lamb shift and magnetic moment of the electron. These experiments exposed discrepancies that the theory was unable to explain.
A first indication of a possible solution was given by Bethe in 1947. He made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result with good experimental agreement. This procedure was named renormalization.
Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to produce fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Tomonaga, Schwinger, and Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area. Their contributions, and Dyson's, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed unlike the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning at certain divergences appearing in the theory through integrals, became one of the fundamental aspects of quantum field theory and is seen as a criterion for a theory's general acceptability. Even though renormalization works well in practice, Feynman was never entirely comfortable with its mathematical validity, referring to renormalization as a "shell game" and "hocus pocus".
Neither Feynman nor Dirac was happy with that way of approaching the observations made in theoretical physics, above all in quantum mechanics.
QED is the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s, developed by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on Schwinger's pioneering work, Gerald Guralnik, Dick Hagen, and Tom Kibble, Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.
Feynman's view of quantum electrodynamics
Introduction
Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated below.
The key components of Feynman's presentation of QED are three basic actions.
A photon goes from one place and time to another place and time.
An electron goes from one place and time to another place and time.
An electron emits or absorbs a photon at a certain place and time.
These actions are represented in the form of visual shorthand by the three basic elements of diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram.
As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of the total probability amplitude. If a photon moves from one place and time A to another place and time B, the associated quantity is written in Feynman's shorthand as P(A to B), and it depends on only the momentum and polarization of the photon. The similar quantity for an electron moving from C to D is written E(C to D). It depends on the momentum and polarization of the electron, in addition to a constant Feynman calls n, sometimes called the "bare" mass of the electron: it is related to, but not the same as, the measured electron mass. Finally, the quantity that tells us about the probability amplitude for an electron to emit or absorb a photon Feynman calls j, and is sometimes called the "bare" charge of the electron: it is a constant, and is related to, but not the same as, the measured electron charge e.
QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman.
The basic rules of probability amplitudes that will be used are:
(a) if an event can happen through several alternative, indistinguishable processes, the probability amplitude of the event is the sum of the probability amplitudes of the alternatives;
(b) if a process can be decomposed into independent sub-processes, the probability amplitude of the overall process is the product of the probability amplitudes of the sub-processes.
The indistinguishability criterion in (a) is very important: it means that there is no observable feature present in the given system that in any way "reveals" which alternative is taken. In such a case, one cannot observe which alternative actually takes place without changing the experimental setup in some way (e.g. by introducing a new apparatus into the system). Whenever one is able to observe which alternative takes place, one always finds that the probability of the event is the sum of the probabilities of the alternatives. Indeed, if this were not the case, the very term "alternatives" to describe these processes would be inappropriate. What (a) says is that once the physical means for observing which alternative occurred is removed, one cannot still say that the event is occurring through "exactly one of the alternatives" in the sense of adding probabilities; one must add the amplitudes instead.
Similarly, the independence criterion in (b) is very important: it only applies to processes which are not "entangled".
Basic constructions
Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability.
But there are other ways in which the result could come about. The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering.
There is an infinite number of other intermediate "virtual" processes in which more and more photons are absorbed and/or emitted. For each of these processes, a Feynman diagram could be drawn describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude.
That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a nonzero probability amplitude of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.)
Probability amplitudes
Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers.
Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by
$$P = |v + w|^2$$
or
$$P = |v\,w|^2.$$
The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers.
Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction.
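As a concrete illustration of this arrow arithmetic (not part of Feynman's exposition), the following minimal Python sketch represents two amplitudes as complex numbers with made-up values: lengths multiply and turning angles add under multiplication, and probabilities come from squaring the length of the resulting arrow.

```python
# Probability amplitudes as complex numbers ("arrows"): the modulus is the
# arrow's length and the argument is its turning angle.  Values are illustrative.
import cmath

v = cmath.rect(0.6, cmath.pi / 6)    # arrow of length 0.6 turned +30 degrees
w = cmath.rect(0.5, -cmath.pi / 3)   # arrow of length 0.5 turned -60 degrees

# Rule (a): indistinguishable alternatives -> add amplitudes, then square the length.
p_alternatives = abs(v + w) ** 2

# Rule (b): independent sub-processes -> multiply amplitudes, then square the length.
p_combined = abs(v * w) ** 2

print(p_alternatives, p_combined)
print(abs(v * w), abs(v) * abs(w))                          # lengths multiply
print(cmath.phase(v * w), cmath.phase(v) + cmath.phase(w))  # angles add
```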
That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows. There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping.
Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.
Propagators
Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and of Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows:
$$P(A \text{ to } B) \to D_F(x_B - x_A), \qquad E(C \text{ to } D) \to S_F(x_D - x_C),$$
where a shorthand symbol such as $x_A$ stands for the four real numbers that give the time and position in three dimensions of the point labeled A.
Mass renormalization
A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process", and Dirac also criticized this procedure as "in mathematics one does not get rid of infinities when it does not please you".
Conclusions
Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem."
Mathematical formulation
QED action
Mathematically, QED is an abelian gauge theory with the symmetry group U(1), defined on Minkowski space (flat spacetime). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field.
The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field in natural units gives rise to the action
$$S_{\mathrm{QED}} = \int d^4x \left[ -\frac{1}{4} F^{\mu\nu} F_{\mu\nu} + \bar{\psi}\,(i\gamma^\mu D_\mu - m)\,\psi \right]$$
where
$\gamma^\mu$ are Dirac matrices.
$\psi$ is a bispinor field of spin-1/2 particles (e.g. electron–positron field).
$\bar{\psi} \equiv \psi^\dagger \gamma^0$, called "psi-bar", is sometimes referred to as the Dirac adjoint.
$D_\mu \equiv \partial_\mu + ieA_\mu + ieB_\mu$ is the gauge covariant derivative.
e is the coupling constant, equal to the electric charge of the bispinor field.
$A_\mu$ is the covariant four-potential of the electromagnetic field generated by the electron itself. It is also known as a gauge field or a connection.
$B_\mu$ is the external field imposed by external source.
m is the mass of the electron or positron.
$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field tensor. This is also known as the curvature of the gauge field.
Expanding the covariant derivative reveals a second useful form of the Lagrangian (external field set to zero for simplicity)
$$\mathcal{L} = i \bar{\psi} \gamma^\mu \partial_\mu \psi - e\, j^\mu A_\mu - m \bar{\psi} \psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}$$
where $j^\mu$ is the conserved current arising from Noether's theorem. It is written
$$j^\mu = \bar{\psi} \gamma^\mu \psi.$$
Equations of motion
Expanding the covariant derivative in the Lagrangian gives
$$\mathcal{L} = i \bar{\psi} \gamma^\mu \partial_\mu \psi - e \bar{\psi} \gamma^\mu A_\mu \psi - m \bar{\psi} \psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}.$$
For simplicity, $B_\mu$ has been set to zero. Alternatively, we can absorb $B_\mu$ into a new gauge field $A'_\mu = A_\mu + B_\mu$ and relabel the new field as $A_\mu.$
From this Lagrangian, the equations of motion for the and fields can be obtained.
Equation of motion for ψ
These arise most straightforwardly by considering the Euler–Lagrange equation for $\bar{\psi}$. Since the Lagrangian contains no $\partial_\mu \bar{\psi}$ terms, we immediately get
$$\frac{\partial \mathcal{L}}{\partial \bar{\psi}} = (i\gamma^\mu \partial_\mu - e\gamma^\mu A_\mu - m)\,\psi = 0,$$
so the equation of motion can be written
$$(i\gamma^\mu \partial_\mu - m)\,\psi = e\gamma^\mu A_\mu \psi.$$
Equation of motion for Aμ
Using the Euler–Lagrange equation for the $A_\mu$ field,
$$\partial_\nu \left( \frac{\partial \mathcal{L}}{\partial(\partial_\nu A_\mu)} \right) - \frac{\partial \mathcal{L}}{\partial A_\mu} = 0,$$
the derivatives this time are
$$\partial_\nu \left( \frac{\partial \mathcal{L}}{\partial(\partial_\nu A_\mu)} \right) = \partial_\nu\left(-F^{\nu\mu}\right), \qquad \frac{\partial \mathcal{L}}{\partial A_\mu} = -e \bar{\psi} \gamma^\mu \psi.$$
Substituting back into the Euler–Lagrange equation leads to
$$\partial_\nu F^{\nu\mu} = e \bar{\psi} \gamma^\mu \psi,$$
which can be written in terms of the current as
$$\partial_\nu F^{\nu\mu} = e j^\mu.$$
Now, if we impose the Lorenz gauge condition
$$\partial_\mu A^\mu = 0,$$
the equations reduce to
$$\Box A^\mu = e j^\mu,$$
which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (The square represents the wave operator, $\Box = \partial_\mu \partial^\mu$.)
Interaction picture
This theory can be straightforwardly quantized by treating bosonic and fermionic sectors as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator $U$, which for a given initial state $|i\rangle$ will give a final state $\langle f|$ in such a way as to have the transition amplitude
$$M_{fi} = \langle f \mid U \mid i \rangle.$$
This technique is also known as the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above:
$$V = e \int d^3x\, \bar{\psi} \gamma^\mu \psi A_\mu,$$
and so, one has
$$U = T \exp\left[ -i \int_{t_0}^{t} dt'\, V(t') \right],$$
where T is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series is called the Dyson series.
Feynman diagrams
Despite the conceptual clarity of this Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta.
Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. In this case, rules for drawing are the following
To these rules we must add a further one for closed loops that implies an integration on momenta, $\int \frac{d^4 p}{(2\pi)^4}$, since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). The signature of the metric is $\eta_{\mu\nu} = \operatorname{diag}(+1, -1, -1, -1)$.
From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering. Feynman diagrams are in this case
and so we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix:
from which we can compute the cross section for this scattering.
Nonperturbative phenomena
The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field. This process, called the Schwinger effect, cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics.
Renormalizability
Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones
that, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason is that renormalizing the observables then requires only a finite number of constants, which leaves the predictive value of the theory untouched. This is exactly the case for quantum electrodynamics, which displays just three divergent diagrams. This procedure gives observables in very close agreement with experiment, as seen e.g. for the electron gyromagnetic ratio.
Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories.
Nonconvergence of series
An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge but is at best an asymptotic series.
From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy. The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory.
Electrodynamics in curved spacetime
This theory can be extended, at least as a classical field theory, to curved spacetime. This arises similarly to the flat spacetime case, from coupling a free electromagnetic theory to a free fermion theory and including an interaction which promotes the partial derivative in the fermion theory to a gauge-covariant derivative.
| Physical sciences | Quantum mechanics | null |
25272 | https://en.wikipedia.org/wiki/Quadratic%20reciprocity | Quadratic reciprocity | In number theory, the law of quadratic reciprocity is a theorem about modular arithmetic that gives conditions for the solvability of quadratic equations modulo prime numbers. Due to its subtlety, it has many formulations, but the most standard statement is:
Law of quadratic reciprocity. Let $p$ and $q$ be distinct odd prime numbers. Then
$$\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}},$$
where $\left(\tfrac{\cdot}{\cdot}\right)$ denotes the Legendre symbol.
This law, together with its supplements, allows the easy calculation of any Legendre symbol, making it possible to determine whether there is an integer solution for any quadratic equation of the form $x^2 \equiv a \pmod p$ for an odd prime $p$; that is, to determine the "perfect squares" modulo $p$. However, this is a non-constructive result: it gives no help at all for finding a specific solution; for this, other methods are required. For example, in the case $p \equiv 3 \pmod 4$, using Euler's criterion one can give an explicit formula for the "square roots" modulo $p$ of a quadratic residue $a$, namely,
$$\pm a^{\frac{p+1}{4}};$$
indeed,
$$\left(\pm a^{\frac{p+1}{4}}\right)^2 = a^{\frac{p+1}{2}} = a \cdot a^{\frac{p-1}{2}} \equiv a \pmod p.$$
This formula only works if it is known in advance that $a$ is a quadratic residue, which can be checked using the law of quadratic reciprocity.
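For primes p ≡ 3 (mod 4) the explicit square-root formula above is easy to apply by machine; the following short Python sketch (the helper name sqrt_mod_p and the sample numbers are purely illustrative) raises a to the power (p + 1)/4 and checks the result.

```python
# Square roots modulo a prime p with p ≡ 3 (mod 4), via r = a^((p+1)/4) mod p.
# The formula only returns a genuine square root when a is a quadratic residue,
# which is exactly what the law of quadratic reciprocity helps decide in advance.

def sqrt_mod_p(a, p):
    assert p % 4 == 3, "the explicit formula requires p ≡ 3 (mod 4)"
    r = pow(a, (p + 1) // 4, p)
    if (r * r) % p != a % p:
        raise ValueError(f"{a} is not a quadratic residue modulo {p}")
    return r

p, a = 23, 13                 # 6^2 = 36 ≡ 13 (mod 23), so 13 is a residue
r = sqrt_mod_p(a, p)
print(r, (r * r) % p)         # 6 13
```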
The quadratic reciprocity theorem was conjectured by Euler and Legendre and first proved by Gauss, who referred to it as the "fundamental theorem" in his Disquisitiones Arithmeticae and his papers, writing
The fundamental theorem must certainly be regarded as one of the most elegant of its type. (Art. 151)
Privately, Gauss referred to it as the "golden theorem". He published six proofs for it, and two more were found in his posthumous papers. There are now over 240 published proofs. The shortest known proof is included below, together with short proofs of the law's supplements (the Legendre symbols of −1 and 2).
Generalizing the reciprocity law to higher powers has been a leading problem in mathematics, and has been crucial to the development of much of the machinery of modern algebra, number theory, and algebraic geometry, culminating in Artin reciprocity, class field theory, and the Langlands program.
Motivating examples
Quadratic reciprocity arises from certain subtle factorization patterns involving perfect square numbers. In this section, we give examples which lead to the general case.
Factoring n2 − 5
Consider the polynomial $f(n) = n^2 - 5$ and its values for $n = 1, 2, 3, \dots$ The prime factorizations of these values are given as follows:
The prime factors dividing $f(n)$ are 2, 5, and every prime whose final digit is 1 or 9; no primes ending in 3 or 7 ever appear. Now, $p$ is a prime factor of some $f(n)$ whenever $p \mid n^2 - 5$ for some $n$, i.e. whenever $n^2 \equiv 5 \pmod p$ is solvable, i.e. whenever 5 is a quadratic residue modulo $p$. This happens for $p = 2, 5$ and those primes with $p \equiv 1, 4 \pmod 5$, and the latter numbers $1 = (\pm 1)^2$ and $4 = (\pm 2)^2$ are precisely the quadratic residues modulo 5. Therefore, except for $p = 2, 5$, we have that 5 is a quadratic residue modulo $p$ iff $p$ is a quadratic residue modulo 5.
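The divisibility pattern described above can be checked numerically. The Python sketch below (with an illustrative brute-force factorizer) collects the primes dividing n² − 5 and confirms that, apart from 2 and 5, they all end in 1 or 9.

```python
# Which primes divide n^2 - 5?  Apart from 2 and 5, only primes whose last
# digit is 1 or 9 (equivalently p ≡ ±1 (mod 5)) ever appear.

def prime_factors(n):
    n, factors, d = abs(n), set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

seen = set()
for n in range(3, 200):
    seen |= prime_factors(n * n - 5)

odd_primes = sorted(p for p in seen if p not in (2, 5))
print(odd_primes[:12])                               # e.g. 11, 19, 29, 31, ...
print(all(p % 10 in (1, 9) for p in odd_primes))     # True
```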
The law of quadratic reciprocity gives a similar characterization of prime divisors of for any prime q, which leads to a characterization for any integer .
Patterns among quadratic residues
Let p be an odd prime. A number modulo p is a quadratic residue whenever it is congruent to a square (mod p); otherwise it is a quadratic non-residue. ("Quadratic" can be dropped if it is clear from the context.) Here we exclude zero as a special case. Then as a consequence of the fact that the multiplicative group of a finite field of order p is cyclic of order p-1, the following statements hold:
There are an equal number of quadratic residues and non-residues; and
The product of two quadratic residues is a residue, the product of a residue and a non-residue is a non-residue, and the product of two non-residues is a residue.
For the avoidance of doubt, these statements do not hold if the modulus is not prime.
For example, among the 8 units of the multiplicative group modulo 15, only 2 of them (1 and 4) are quadratic residues, rather than half.
Moreover, although 7 and 8 are quadratic non-residues, their product 7 × 8 = 56 ≡ 11 (mod 15) is also a quadratic non-residue, in contrast to the prime case.
Quadratic residues appear as entries in the following table, indexed by the row number as modulus and column number as root:
This table is complete for odd primes less than 50. To check whether a number m is a quadratic residue mod one of these primes p, find a ≡ m (mod p) and 0 ≤ a < p. If a is in row p, then m is a residue (mod p); if a is not in row p of the table, then m is a nonresidue (mod p).
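The same check can be done by computer instead of by table; a minimal Python sketch (function names are illustrative) builds the residue set for a prime modulus and, for contrast, shows how few squares the units modulo 15 produce.

```python
from math import gcd

# Nonzero quadratic residues modulo an odd prime p: the distinct squares of 1..p-1.
def residues(p):
    return {pow(x, 2, p) for x in range(1, p)}

for p in (7, 11, 13):
    print(p, sorted(residues(p)))                 # exactly (p - 1)/2 residues each

# Table-style membership test: reduce m modulo p, then look it up.
def is_residue(m, p):
    a = m % p
    return a != 0 and a in residues(p)

print(is_residue(10, 13), is_residue(5, 13))      # True False

# Composite modulus for contrast: among the 8 units modulo 15 only 1 and 4
# are squares, and 7 * 8 = 56 ≡ 11 (mod 15) is again a nonsquare.
units = [x for x in range(1, 15) if gcd(x, 15) == 1]
print(sorted({pow(x, 2, 15) for x in units}))     # [1, 4]
```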
The quadratic reciprocity law is the statement that certain patterns found in the table are true in general.
Legendre's version
Another way to organize the data is to see which primes are quadratic residues mod which other primes, as illustrated in the following table. The entry in row p column q is R if q is a quadratic residue (mod p); if it is a nonresidue the entry is N.
If the row, or the column, or both, are ≡ 1 (mod 4) the entry is blue or green; if both row and column are ≡ 3 (mod 4), it is yellow or orange.
The blue and green entries are symmetric around the diagonal: The entry for row p, column q is R (resp N) if and only if the entry at row q, column p, is R (resp N).
The yellow and orange ones, on the other hand, are antisymmetric: The entry for row p, column q is R (resp N) if and only if the entry at row q, column p, is N (resp R).
The reciprocity law states that these patterns hold for all p and q.
Ordering the rows and columns mod 4 makes the pattern clearer.
Supplements to Quadratic Reciprocity
The supplements provide solutions to specific cases of quadratic reciprocity. They are often quoted as partial results, without having to resort to the complete theorem.
q = ±1 and the first supplement
Trivially 1 is a quadratic residue for all primes. The question becomes more interesting for −1. Examining the table, we find −1 in rows 5, 13, 17, 29, 37, and 41 but not in rows 3, 7, 11, 19, 23, 31, 43 or 47. The former set of primes are all congruent to 1 modulo 4, and the latter are congruent to 3 modulo 4.
First Supplement to Quadratic Reciprocity. The congruence $x^2 \equiv -1 \pmod p$ is solvable if and only if $p$ is congruent to 1 modulo 4.
q = ±2 and the second supplement
Examining the table, we find 2 in rows 7, 17, 23, 31, 41, and 47, but not in rows 3, 5, 11, 13, 19, 29, 37, or 43. The former primes are all ≡ ±1 (mod 8), and the latter are all ≡ ±3 (mod 8). This leads to
Second Supplement to Quadratic Reciprocity. The congruence $x^2 \equiv 2 \pmod p$ is solvable if and only if $p$ is congruent to ±1 modulo 8.
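Both supplements are easy to confirm by brute force; the following Python sketch (the solvability test simply tries every residue class) checks them for all odd primes below 100.

```python
# Verify: x^2 ≡ -1 (mod p) is solvable iff p ≡ 1 (mod 4), and
#         x^2 ≡  2 (mod p) is solvable iff p ≡ ±1 (mod 8).

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def solvable(a, p):
    return any(pow(x, 2, p) == a % p for x in range(p))

for p in filter(is_prime, range(3, 100, 2)):
    assert solvable(-1, p) == (p % 4 == 1)
    assert solvable(2, p) == (p % 8 in (1, 7))
print("both supplements hold for all odd primes below 100")
```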
−2 is in rows 3, 11, 17, 19, 41, 43, but not in rows 5, 7, 13, 23, 29, 31, 37, or 47. The former are ≡ 1 or ≡ 3 (mod 8), and the latter are ≡ 5, 7 (mod 8).
q = ±3
3 is in rows 11, 13, 23, 37, and 47, but not in rows 5, 7, 17, 19, 29, 31, 41, or 43. The former are ≡ ±1 (mod 12) and the latter are all ≡ ±5 (mod 12).
−3 is in rows 7, 13, 19, 31, 37, and 43 but not in rows 5, 11, 17, 23, 29, 41, or 47. The former are ≡ 1 (mod 3) and the latter ≡ 2 (mod 3).
Since the only residue (mod 3) is 1, we see that −3 is a quadratic residue modulo every prime which is a residue modulo 3.
q = ±5
5 is in rows 11, 19, 29, 31, and 41 but not in rows 3, 7, 13, 17, 23, 37, 43, or 47. The former are ≡ ±1 (mod 5) and the latter are ≡ ±2 (mod 5).
Since the only residues (mod 5) are ±1, we see that 5 is a quadratic residue modulo every prime which is a residue modulo 5.
−5 is in rows 3, 7, 23, 29, 41, 43, and 47 but not in rows 11, 13, 17, 19, 31, or 37. The former are ≡ 1, 3, 7, 9 (mod 20) and the latter are ≡ 11, 13, 17, 19 (mod 20).
Higher q
The observations about −3 and 5 continue to hold: −7 is a residue modulo p if and only if p is a residue modulo 7, −11 is a residue modulo p if and only if p is a residue modulo 11, 13 is a residue (mod p) if and only if p is a residue modulo 13, etc. The more complicated-looking rules for the quadratic characters of 3 and −5, which depend upon congruences modulo 12 and 20 respectively, are simply the ones for −3 and 5 working with the first supplement.
Example. For −5 to be a residue (mod p), either both 5 and −1 have to be residues (mod p) or they both have to be non-residues: i.e., p ≡ ±1 (mod 5) and p ≡ 1 (mod 4) or p ≡ ±2 (mod 5) and p ≡ 3 (mod 4). Using the Chinese remainder theorem these are equivalent to p ≡ 1, 9 (mod 20) or p ≡ 3, 7 (mod 20).
The generalization of the rules for −3 and 5 is Gauss's statement of quadratic reciprocity.
Statement of the theorem
Quadratic Reciprocity (Gauss's statement). If $q \equiv 1 \pmod 4$, then the congruence $x^2 \equiv p \pmod q$ is solvable if and only if $x^2 \equiv q \pmod p$ is solvable. If $q \equiv 3 \pmod 4$ and $p \equiv 3 \pmod 4$, then the congruence $x^2 \equiv p \pmod q$ is solvable if and only if $x^2 \equiv -q \pmod p$ is solvable.
Quadratic Reciprocity (combined statement). Define $q^* = (-1)^{\frac{q-1}{2}} q$. Then the congruence $x^2 \equiv p \pmod q$ is solvable if and only if $x^2 \equiv q^* \pmod p$ is solvable.
Quadratic Reciprocity (Legendre's statement). If p or q are congruent to 1 modulo 4, then: $x^2 \equiv q \pmod p$ is solvable if and only if $x^2 \equiv p \pmod q$ is solvable. If p and q are congruent to 3 modulo 4, then: $x^2 \equiv q \pmod p$ is solvable if and only if $x^2 \equiv p \pmod q$ is not solvable.
The last is immediately equivalent to the modern form stated in the introduction above. It is a simple exercise to prove that Legendre's and Gauss's statements are equivalent – it requires no more than the first supplement and the facts about multiplying residues and nonresidues.
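Legendre's statement is likewise easy to verify by brute force for small primes, as in this Python sketch (helper names are illustrative).

```python
# Check Legendre's form of the reciprocity law for all pairs of distinct
# odd primes below 60, using a brute-force solvability test.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def solvable(a, p):                      # is x^2 ≡ a (mod p) solvable?
    return any(pow(x, 2, p) == a % p for x in range(p))

primes = [p for p in range(3, 60, 2) if is_prime(p)]
for p in primes:
    for q in primes:
        if p == q:
            continue
        if p % 4 == 1 or q % 4 == 1:
            assert solvable(q, p) == solvable(p, q)
        else:                            # p ≡ q ≡ 3 (mod 4)
            assert solvable(q, p) != solvable(p, q)
print("quadratic reciprocity verified for all odd primes below 60")
```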
Proof
Apparently, the shortest known proof yet was published by B. Veklych in the American Mathematical Monthly.
Proofs of the supplements
The value of the Legendre symbol of $-1$ (used in the proof above) follows directly from Euler's criterion:
$$\left(\frac{-1}{p}\right) \equiv (-1)^{\frac{p-1}{2}} \pmod p$$
by Euler's criterion, but both sides of this congruence are numbers of the form $\pm 1$, so they must be equal.
Whether is a quadratic residue can be concluded if we know the number of solutions of the equation with which can be solved by standard methods. Namely, all its solutions where can be grouped into octuplets of the form , and what is left are four solutions of the form and possibly four additional solutions where and , which exist precisely if is a quadratic residue. That is, is a quadratic residue precisely if the number of solutions of this equation is divisible by . And this equation can be solved in just the same way here as over the rational numbers: substitute , where we demand that (leaving out the two solutions ), then the original equation transforms into
Here can have any value that does not make the denominator zero – for which there are possibilities (i.e. if is a residue, if not) – and also does not make zero, which excludes one more option, . Thus there are
possibilities for , and so together with the two excluded solutions there are overall solutions of the original equation. Therefore, is a residue modulo if and only if divides . This is a reformulation of the condition stated above.
History and alternative statements
The theorem was formulated in many ways before its modern form: Euler and Legendre did not have Gauss's congruence notation, nor did Gauss have the Legendre symbol.
In this article p and q always refer to distinct positive odd primes, and x and y to unspecified integers.
Fermat
Fermat proved (or claimed to have proved) a number of theorems about expressing a prime by a quadratic form:
He did not state the law of quadratic reciprocity, although the cases −1, ±2, and ±3 are easy deductions from these and others of his theorems.
He also claimed to have a proof that if the prime number p ends with 7 (in base 10) and the prime number q ends in 3, and p ≡ q ≡ 3 (mod 4), then
$$pq = x^2 + 5y^2.$$
Euler conjectured, and Lagrange proved, that
Proving these and other statements of Fermat was one of the things that led mathematicians to the reciprocity theorem.
Euler
Translated into modern notation, Euler stated that for distinct odd primes p and q:
If q ≡ 1 (mod 4) then q is a quadratic residue (mod p) if and only if there exists some integer b such that p ≡ b2 (mod q).
If q ≡ 3 (mod 4) then q is a quadratic residue (mod p) if and only if there exists some integer b which is odd and not divisible by q such that p ≡ ±b2 (mod 4q).
This is equivalent to quadratic reciprocity.
He could not prove it, but he did prove the second supplement.
Legendre and his symbol
Fermat proved that if p is a prime number and a is an integer,
$$a^p \equiv a \pmod p.$$
Thus if p does not divide a, using the non-obvious fact (see for example Ireland and Rosen below) that the residues modulo p form a field and therefore in particular the multiplicative group is cyclic, hence there can be at most two solutions to a quadratic equation:
$$a^{\frac{p-1}{2}} \equiv \pm 1 \pmod p.$$
Legendre lets a and A represent positive primes ≡ 1 (mod 4) and b and B positive primes ≡ 3 (mod 4), and sets out a table of eight theorems that together are equivalent to quadratic reciprocity:
He says that since expressions of the form
$$a^{\frac{p-1}{2}} \pmod p$$
will come up so often he will abbreviate them as:
$$\left(\frac{a}{p}\right) \equiv a^{\frac{p-1}{2}} \pmod p.$$
This is now known as the Legendre symbol, and an equivalent definition is used today: for all integers a and all odd primes p
$$\left(\frac{a}{p}\right) = \begin{cases} 0 & \text{if } a \equiv 0 \pmod p, \\ +1 & \text{if } a \not\equiv 0 \pmod p \text{ and } x^2 \equiv a \pmod p \text{ is solvable,} \\ -1 & \text{if } a \not\equiv 0 \pmod p \text{ and } x^2 \equiv a \pmod p \text{ is not solvable.} \end{cases}$$
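With this definition the symbol can be computed directly from Euler's criterion, as in the minimal Python sketch below (the function name legendre is illustrative).

```python
# Legendre symbol via Euler's criterion: (a/p) ≡ a^((p-1)/2) (mod p),
# normalised to -1, 0 or +1.  p must be an odd prime.

def legendre(a, p):
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t       # t can only be 0, 1 or p - 1

print(legendre(5, 11))    #  1: 4^2 = 16 ≡ 5 (mod 11)
print(legendre(3, 7))     # -1: 3 is a nonresidue mod 7
print(legendre(14, 7))    #  0: 7 divides 14
```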
Legendre's version of quadratic reciprocity
He notes that these can be combined:
$$\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}}.$$
A number of proofs, especially those based on Gauss's Lemma, explicitly calculate this formula.
The supplementary laws using Legendre symbols
$$\left(\frac{-1}{p}\right) = (-1)^{\frac{p-1}{2}}, \qquad \left(\frac{2}{p}\right) = (-1)^{\frac{p^2-1}{8}}.$$
From these two supplements, we can obtain a third reciprocity law for the quadratic character −2 as follows:
For −2 to be a quadratic residue, either −1 and 2 are both quadratic residues, or both are non-residues: $\left(\frac{-2}{p}\right) = \left(\frac{-1}{p}\right)\left(\frac{2}{p}\right)$.
So either $\frac{p-1}{2}$ and $\frac{p^2-1}{8}$ are both even, or they are both odd. The sum of these two expressions is
$$\frac{p-1}{2} + \frac{p^2-1}{8} = \frac{(p-1)(p+5)}{8},$$
which is an integer. Therefore,
$$\left(\frac{-2}{p}\right) = (-1)^{\frac{(p-1)(p+5)}{8}}.$$
Legendre's attempt to prove reciprocity is based on a theorem of his:
Legendre's Theorem. Let a, b and c be integers where any pair of the three are relatively prime. Moreover assume that at least one of ab, bc or ca is negative (i.e. they don't all have the same sign). If
$$u^2 \equiv -bc \pmod a, \qquad v^2 \equiv -ca \pmod b, \qquad w^2 \equiv -ab \pmod c$$
are solvable then the following equation has a nontrivial solution in integers:
$$ax^2 + by^2 + cz^2 = 0.$$
Example. Theorem I is handled by letting a ≡ 1 and b ≡ 3 (mod 4) be primes and assuming that and, contrary to the theorem, that Then has a solution, and taking congruences (mod 4) leads to a contradiction.
This technique doesn't work for Theorem VIII. Let b ≡ B ≡ 3 (mod 4), and assume
Then if there is another prime p ≡ 1 (mod 4) such that
the solvability of leads to a contradiction (mod 4). But Legendre was unable to prove there has to be such a prime p; he was later able to show that all that is required is:
Legendre's Lemma. If p is a prime that is congruent to 1 modulo 4 then there exists an odd prime q such that
but he couldn't prove that either. The section on the Hilbert symbol (below) discusses how techniques based on the existence of solutions to $ax^2 + by^2 + cz^2 = 0$ can be made to work.
Gauss
Gauss first proves the supplementary laws. He sets the basis for induction by proving the theorem for ±3 and ±5. Noting that it is easier to state for −3 and +5 than it is for +3 or −5, he states the general theorem in the form:
If p is a prime of the form 4n + 1 then p, but if p is of the form 4n + 3 then −p, is a quadratic residue (resp. nonresidue) of every prime, which, with a positive sign, is a residue (resp. nonresidue) of p. In the next sentence, he christens it the "fundamental theorem" (Gauss never used the word "reciprocity").
Introducing the notation a R b (resp. a N b) to mean a is a quadratic residue (resp. nonresidue) (mod b), and letting a, a′, etc. represent positive primes ≡ 1 (mod 4) and b, b′, etc. positive primes ≡ 3 (mod 4), he breaks it out into the same 8 cases as Legendre:
In the next Article he generalizes this to what are basically the rules for the Jacobi symbol (below). Letting A, A′, etc. represent any (prime or composite) positive numbers ≡ 1 (mod 4) and B, B′, etc. positive numbers ≡ 3 (mod 4):
All of these cases take the form "if a prime is a residue (mod a composite), then the composite is a residue or nonresidue (mod the prime), depending on the congruences (mod 4)". He proves that these follow from cases 1) - 8).
Gauss needed, and was able to prove, a lemma similar to the one Legendre needed:
Gauss's Lemma. If p is a prime congruent to 1 modulo 8 then there exists an odd prime q such that:
The proof of quadratic reciprocity uses complete induction.
Gauss's Version in Legendre Symbols.
These can be combined:
Gauss's Combined Version in Legendre Symbols. Let
$$q^* = (-1)^{\frac{q-1}{2}} q.$$
In other words:
$$|q^*| = q \quad \text{and} \quad q^* \equiv 1 \pmod 4.$$
Then:
$$\left(\frac{p}{q}\right) = \left(\frac{q^*}{p}\right).$$
A number of proofs of the theorem, especially those based on Gauss sums or the splitting of primes in algebraic number fields, derive this formula.
Other statements
The statements in this section are equivalent to quadratic reciprocity: if, for example, Euler's version is assumed, the Legendre-Gauss version can be deduced from it, and vice versa.
Euler's Formulation of Quadratic Reciprocity. If $p \equiv \pm q \pmod{4a}$ then
$$\left(\frac{a}{p}\right) = \left(\frac{a}{q}\right).$$
This can be proven using Gauss's lemma.
Quadratic Reciprocity (Gauss; Fourth Proof). Let a, b, c, ... be unequal positive odd primes, whose product is n, and let m be the number of them that are ≡ 3 (mod 4); check whether n/a is a residue of a, whether n/b is a residue of b, .... The number of nonresidues found will be even when m ≡ 0, 1 (mod 4), and it will be odd if m ≡ 2, 3 (mod 4).
Gauss's fourth proof consists of proving this theorem (by comparing two formulas for the value of Gauss sums) and then restricting it to two primes. He then gives an example: Let a = 3, b = 5, c = 7, and d = 11. Three of these, 3, 7, and 11 ≡ 3 (mod 4), so m ≡ 3 (mod 4). 5×7×11 R 3; 3×7×11 R 5; 3×5×11 R 7; and 3×5×7 N 11, so there are an odd number of nonresidues.
Eisenstein's Formulation of Quadratic Reciprocity. Assume
Then
Mordell's Formulation of Quadratic Reciprocity. Let a, b and c be integers. For every prime, p, dividing abc if the congruence
has a nontrivial solution, then so does:
Zeta function formulation
As mentioned in the article on Dedekind zeta functions, quadratic reciprocity is equivalent to the zeta function of a quadratic field being the product of the Riemann zeta function and a certain Dirichlet L-function
Jacobi symbol
The Jacobi symbol is a generalization of the Legendre symbol; the main difference is that the bottom number has to be positive and odd, but does not have to be prime. If it is prime, the two symbols agree. It obeys the same rules of manipulation as the Legendre symbol. In particular
and if both numbers are positive and odd (this is sometimes called "Jacobi's reciprocity law"):
$$\left(\frac{m}{n}\right)\left(\frac{n}{m}\right) = (-1)^{\frac{m-1}{2}\cdot\frac{n-1}{2}}.$$
However, if the Jacobi symbol is 1 but the denominator is not a prime, it does not necessarily follow that the numerator is a quadratic residue of the denominator. Gauss's cases 9) - 14) above can be expressed in terms of Jacobi symbols:
and since p is prime the left hand side is a Legendre symbol, and we know whether M is a residue modulo p or not.
The formulas listed in the preceding section are true for Jacobi symbols as long as the symbols are defined. Euler's formula may be written
Example.
2 is a residue modulo the primes 7, 23 and 31: $3^2 = 9 \equiv 2 \pmod 7$, $5^2 = 25 \equiv 2 \pmod{23}$, and $8^2 = 64 \equiv 2 \pmod{31}$.
But 2 is not a quadratic residue modulo 5, so it can't be one modulo 15. This is related to the problem Legendre had: if $\left(\frac{a}{m}\right) = -1$ then a is a non-residue modulo every prime in the arithmetic progression m + 4a, m + 8a, ..., if there are any primes in this series, but that wasn't proved until decades after Legendre.
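A Python sketch of the Jacobi symbol, computed with the reciprocity law and the two supplements (function name illustrative); it also shows the caveat just mentioned, since the symbol for 2 and 15 is 1 even though 2 is not a square modulo 15.

```python
# Jacobi symbol (a/n) for odd positive n; it coincides with the Legendre
# symbol whenever n is prime.

def jacobi(a, n):
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:              # factor out 2 using the second supplement
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0     # 0 when gcd(a, n) > 1

print(jacobi(2, 15))                               # 1, yet 2 is not a square mod 15
print(jacobi(2, 7), jacobi(2, 23), jacobi(2, 31))  # 1 1 1 (genuine residues)
```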
Eisenstein's formula requires relative primality conditions (which are true if the numbers are prime)
Let be positive odd integers such that:
Then
Hilbert symbol
The quadratic reciprocity law can be formulated in terms of the Hilbert symbol $(a, b)_v$ where a and b are any two nonzero rational numbers and v runs over all the non-trivial absolute values of the rationals (the Archimedean one and the p-adic absolute values for primes p). The Hilbert symbol $(a, b)_v$ is 1 or −1. It is defined to be 1 if and only if the equation $ax^2 + by^2 = z^2$ has a solution in the completion of the rationals at v other than $x = y = z = 0$. The Hilbert reciprocity law states that $(a, b)_v$, for fixed a and b and varying v, is 1 for all but finitely many v and the product of $(a, b)_v$ over all v is 1. (This formally resembles the residue theorem from complex analysis.)
The proof of Hilbert reciprocity reduces to checking a few special cases, and the non-trivial cases turn out to be equivalent to the main law and the two supplementary laws of quadratic reciprocity for the Legendre symbol. There is no kind of reciprocity in the Hilbert reciprocity law; its name simply indicates the historical source of the result in quadratic reciprocity. Unlike quadratic reciprocity, which requires sign conditions (namely positivity of the primes involved) and a special treatment of the prime 2, the Hilbert reciprocity law treats all absolute values of the rationals on an equal footing. Therefore, it is a more natural way of expressing quadratic reciprocity with a view towards generalization: the Hilbert reciprocity law extends with very few changes to all global fields and this extension can rightly be considered a generalization of quadratic reciprocity to all global fields.
Connection with cyclotomic fields
The early proofs of quadratic reciprocity are relatively unilluminating. The situation changed when Gauss used Gauss sums to show that quadratic fields are subfields of cyclotomic fields, and implicitly deduced quadratic reciprocity from a reciprocity theorem for cyclotomic fields. His proof was cast in modern form by later algebraic number theorists. This proof served as a template for class field theory, which can be viewed as a vast generalization of quadratic reciprocity.
Robert Langlands formulated the Langlands program, which gives a conjectural vast generalization of class field theory. He wrote:
I confess that, as a student unaware of the history of the subject and unaware of the connection with cyclotomy, I did not find the law or its so-called elementary proofs appealing. I suppose, although I would not have (and could not have) expressed myself in this way that I saw it as little more than a mathematical curiosity, fit more for amateurs than for the attention of the serious mathematician that I then hoped to become. It was only in Hermann Weyl's book on the algebraic theory of numbers that I appreciated it as anything more.
Other rings
There are also quadratic reciprocity laws in rings other than the integers.
Gaussian integers
In his second monograph on quartic reciprocity Gauss stated quadratic reciprocity for the ring of Gaussian integers, saying that it is a corollary of the biquadratic law in but did not provide a proof of either theorem. Dirichlet showed that the law in can be deduced from the law for without using quartic reciprocity.
For an odd Gaussian prime $\pi$ and a Gaussian integer $\alpha$ relatively prime to $\pi$, define the quadratic character for $\mathbf{Z}[i]$ by:
$$\left[\frac{\alpha}{\pi}\right]_2 \equiv \alpha^{\frac{\mathrm{N}\pi - 1}{2}} \pmod \pi.$$
Let $\lambda = a + bi$ and $\mu = c + di$ be distinct Gaussian primes where a and c are odd and b and d are even. Then
$$\left[\frac{\lambda}{\mu}\right]_2 = \left[\frac{\mu}{\lambda}\right]_2.$$
Eisenstein integers
Consider the following third root of unity:
$$\omega = \frac{-1 + \sqrt{-3}}{2} = e^{\frac{2\pi i}{3}}.$$
The ring of Eisenstein integers is $\mathbf{Z}[\omega].$ For an Eisenstein prime $\pi$ and an Eisenstein integer $\alpha$ with $\gcd(\alpha, \pi) = 1$, define the quadratic character for $\mathbf{Z}[\omega]$ by the formula
Let λ = a + bω and μ = c + dω be distinct Eisenstein primes where a and c are not divisible by 3 and b and d are divisible by 3. Eisenstein proved
Imaginary quadratic fields
The above laws are special cases of more general laws that hold for the ring of integers in any imaginary quadratic number field. Let k be an imaginary quadratic number field with ring of integers For a prime ideal with odd norm and define the quadratic character for as
for an arbitrary ideal factored into prime ideals define
and for define
Let i.e. is an integral basis for For with odd norm define (ordinary) integers a, b, c, d by the equations,
and a function
If m = Nμ and n = Nν are both odd, Herglotz proved
Also, if
Then
Polynomials over a finite field
Let F be a finite field with q = pn elements, where p is an odd prime number and n is positive, and let F[x] be the ring of polynomials in one variable with coefficients in F. If and f is irreducible, monic, and has positive degree, define the quadratic character for F[x] in the usual manner:
If is a product of monic irreducibles let
Dedekind proved that if are monic and have positive degrees,
Higher powers
The attempt to generalize quadratic reciprocity for powers higher than the second was one of the main goals that led 19th century mathematicians, including Carl Friedrich Gauss, Peter Gustav Lejeune Dirichlet, Carl Gustav Jakob Jacobi, Gotthold Eisenstein, Richard Dedekind, Ernst Kummer, and David Hilbert to the study of general algebraic number fields and their rings of integers; specifically Kummer invented ideals in order to state and prove higher reciprocity laws.
The ninth in the list of 23 unsolved problems which David Hilbert proposed to the Congress of Mathematicians in 1900 asked for the
"Proof of the most general reciprocity law [f]or an arbitrary number field". Building upon work by Philipp Furtwängler, Teiji Takagi, Helmut Hasse and others, Emil Artin discovered Artin reciprocity in 1923, a general theorem for which all known reciprocity laws are special cases, and proved it in 1927.
| Mathematics | Modular arithmetic | null |
25278 | https://en.wikipedia.org/wiki/Quadrilateral | Quadrilateral | In geometry, a quadrilateral is a four-sided polygon, having four edges (sides) and four corners (vertices). The word is derived from the Latin words quadri, a variant of four, and latus, meaning "side". It is also called a tetragon, derived from Greek "tetra" meaning "four" and "gon" meaning "corner" or "angle", in analogy to other polygons (e.g. pentagon). Since "gon" means "angle", it is analogously called a quadrangle, or 4-angle. A quadrilateral with vertices $A$, $B$, $C$ and $D$ is sometimes denoted as $\square ABCD$.
Quadrilaterals are either simple (not self-intersecting), or complex (self-intersecting, or crossed). Simple quadrilaterals are either convex or concave.
The interior angles of a simple (and planar) quadrilateral ABCD add up to 360 degrees, that is
$$\angle A + \angle B + \angle C + \angle D = 360^\circ.$$
This is a special case of the n-gon interior angle sum formula: S = (n − 2) × 180° (here, n=4).
All non-self-crossing quadrilaterals tile the plane, by repeated rotation around the midpoints of their edges.
Simple quadrilaterals
Any quadrilateral that is not self-intersecting is a simple quadrilateral.
Convex quadrilateral
In a convex quadrilateral all interior angles are less than 180°, and the two diagonals both lie inside the quadrilateral.
Irregular quadrilateral (British English) or trapezium (North American English): no sides are parallel. (In British English, this was once called a trapezoid.)
Trapezium (UK) or trapezoid (US): at least one pair of opposite sides are parallel. Trapezia (UK) and trapezoids (US) include parallelograms.
Isosceles trapezium (UK) or isosceles trapezoid (US): one pair of opposite sides are parallel and the base angles are equal in measure. Alternative definitions are a quadrilateral with an axis of symmetry bisecting one pair of opposite sides, or a trapezoid with diagonals of equal length.
Parallelogram: a quadrilateral with two pairs of parallel sides. Equivalent conditions are that opposite sides are of equal length; that opposite angles are equal; or that the diagonals bisect each other. Parallelograms include rhombi (including those rectangles called squares) and rhomboids (including those rectangles called oblongs). In other words, parallelograms include all rhombi and all rhomboids, and thus also include all rectangles.
Rhombus, rhomb: all four sides are of equal length (equilateral). An equivalent condition is that the diagonals perpendicularly bisect each other. Informally: "a pushed-over square" (but strictly including a square, too).
Rhomboid: a parallelogram in which adjacent sides are of unequal lengths, and some angles are oblique (equiv., having no right angles). Informally: "a pushed-over oblong". Not all references agree; some define a rhomboid as a parallelogram that is not a rhombus.
Rectangle: all four angles are right angles (equiangular). An equivalent condition is that the diagonals bisect each other, and are equal in length. Rectangles include squares and oblongs. Informally: "a box or oblong" (including a square).
Square (regular quadrilateral): all four sides are of equal length (equilateral), and all four angles are right angles. An equivalent condition is that opposite sides are parallel (a square is a parallelogram), and that the diagonals perpendicularly bisect each other and are of equal length. A quadrilateral is a square if and only if it is both a rhombus and a rectangle (i.e., four equal sides and four equal angles).
Oblong: longer than wide, or wider than long (i.e., a rectangle that is not a square).
Kite: two pairs of adjacent sides are of equal length. This implies that one diagonal divides the kite into congruent triangles, and so the angles between the two pairs of equal sides are equal in measure. It also implies that the diagonals are perpendicular. Kites include rhombi.
Tangential quadrilateral: the four sides are tangents to an inscribed circle. A convex quadrilateral is tangential if and only if opposite sides have equal sums.
Tangential trapezoid: a trapezoid where the four sides are tangents to an inscribed circle.
Cyclic quadrilateral: the four vertices lie on a circumscribed circle. A convex quadrilateral is cyclic if and only if opposite angles sum to 180°.
Right kite: a kite with two opposite right angles. It is a type of cyclic quadrilateral.
Harmonic quadrilateral: a cyclic quadrilateral such that the products of the lengths of the opposing sides are equal.
Bicentric quadrilateral: it is both tangential and cyclic.
Orthodiagonal quadrilateral: the diagonals cross at right angles.
Equidiagonal quadrilateral: the diagonals are of equal length.
Bisect-diagonal quadrilateral: one diagonal bisects the other into two equal lengths. Every dart and kite is bisect-diagonal. When both diagonals bisect each other, the quadrilateral is a parallelogram.
Ex-tangential quadrilateral: the four extensions of the sides are tangent to an excircle.
An equilic quadrilateral has two opposite equal sides that when extended, meet at 60°.
A Watt quadrilateral is a quadrilateral with a pair of opposite sides of equal length.
A quadric quadrilateral is a convex quadrilateral whose four vertices all lie on the perimeter of a square.
A diametric quadrilateral is a cyclic quadrilateral having one of its sides as a diameter of the circumcircle.
A Hjelmslev quadrilateral is a quadrilateral with two right angles at opposite vertices.
Concave quadrilaterals
In a concave quadrilateral, one interior angle is bigger than 180°, and one of the two diagonals lies outside the quadrilateral.
A dart (or arrowhead) is a concave quadrilateral with bilateral symmetry like a kite, but where one interior angle is reflex. See Kite.
Complex quadrilaterals
A self-intersecting quadrilateral is called variously a cross-quadrilateral, crossed quadrilateral, butterfly quadrilateral or bow-tie quadrilateral. In a crossed quadrilateral, the four "interior" angles on either side of the crossing (two acute and two reflex, all on the left or all on the right as the figure is traced out) add up to 720°.
Crossed trapezoid (US) or trapezium (Commonwealth): a crossed quadrilateral in which one pair of nonadjacent sides is parallel (like a trapezoid).
Antiparallelogram: a crossed quadrilateral in which each pair of nonadjacent sides have equal lengths (like a parallelogram).
Crossed rectangle: an antiparallelogram whose sides are two opposite sides and the two diagonals of a rectangle, hence having one pair of parallel opposite sides.
Crossed square: a special case of a crossed rectangle where two of the sides intersect at right angles.
Special line segments
The two diagonals of a convex quadrilateral are the line segments that connect opposite vertices.
The two bimedians of a convex quadrilateral are the line segments that connect the midpoints of opposite sides. They intersect at the "vertex centroid" of the quadrilateral (see below).
The four maltitudes of a convex quadrilateral are the perpendiculars to a side—through the midpoint of the opposite side.
Area of a convex quadrilateral
There are various general formulas for the area of a convex quadrilateral ABCD with sides .
Trigonometric formulas
The area can be expressed in trigonometric terms as
$$K = \frac{1}{2} pq \sin \theta,$$
where the lengths of the diagonals are $p$ and $q$ and the angle between them is $\theta$. In the case of an orthodiagonal quadrilateral (e.g. rhombus, square, and kite), this formula reduces to $K = \tfrac{pq}{2}$ since $\theta$ is $90^\circ$.
The area can be also expressed in terms of bimedians as
$$K = mn \sin \varphi,$$
where the lengths of the bimedians are $m$ and $n$ and the angle between them is $\varphi$.
Bretschneider's formula expresses the area in terms of the sides and two opposite angles:
$$K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd \cos^2\left(\frac{A + C}{2}\right)}$$
where the sides in sequence are $a$, $b$, $c$, $d$, where $s$ is the semiperimeter, and $A$ and $C$ are two (in fact, any two) opposite angles. This reduces to Brahmagupta's formula for the area of a cyclic quadrilateral when $A + C = 180^\circ$.
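As a numerical illustration (not from the original text), here is a minimal Python sketch of Bretschneider's formula; the function name and the sample square are made up, and because a square is cyclic the cosine term vanishes and the value agrees with Brahmagupta's formula.

```python
# Bretschneider's formula for a convex quadrilateral with sides a, b, c, d
# in sequence and opposite angles A and C given in degrees.
from math import sqrt, cos, radians

def bretschneider(a, b, c, d, A_deg, C_deg):
    s = (a + b + c + d) / 2
    return sqrt((s - a) * (s - b) * (s - c) * (s - d)
                - a * b * c * d * cos(radians((A_deg + C_deg) / 2)) ** 2)

print(bretschneider(2, 2, 2, 2, 90, 90))   # 4.0, the area of a side-2 square
```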
Another area formula in terms of the sides and angles, with angle $C$ being between sides $b$ and $c$, and $A$ being between sides $a$ and $d$, is
$$K = \frac{1}{2} ad \sin A + \frac{1}{2} bc \sin C.$$
In the case of a cyclic quadrilateral, the latter formula becomes
$$K = \frac{1}{2}(ad + bc) \sin A.$$
In a parallelogram, where both pairs of opposite sides and angles are equal, this formula reduces to
$$K = ad \sin A.$$
Alternatively, we can write the area in terms of the sides and the intersection angle $\theta$ of the diagonals, as long as $\theta$ is not $90^\circ$:
$$K = \frac{|\tan \theta|}{4} \left| a^2 + c^2 - b^2 - d^2 \right|.$$
In the case of a parallelogram, the latter formula becomes
$$K = \frac{|\tan \theta|}{2} \left| a^2 - b^2 \right|.$$
Another area formula including the sides , , , is
where is the distance between the midpoints of the diagonals, and is the angle between the bimedians.
The last trigonometric area formula including the sides , , , and the angle (between and ) is:
which can also be used for the area of a concave quadrilateral (having the concave part opposite to angle ), by just changing the first sign to .
Non-trigonometric formulas
The following two formulas express the area in terms of the sides , , and , the semiperimeter , and the diagonals , :
The first reduces to Brahmagupta's formula in the cyclic quadrilateral case, since then $pq = ac + bd$.
The area can also be expressed in terms of the bimedians , and the diagonals , :
In fact, any three of the four values $m$, $n$, $p$, and $q$ suffice for determination of the area, since in any quadrilateral the four values are related by $p^2 + q^2 = 2(m^2 + n^2).$ The corresponding expressions are:
if the lengths of two bimedians and one diagonal are given, and
if the lengths of two diagonals and one bimedian are given.
Vector formulas
The area of a quadrilateral can be calculated using vectors. Let vectors $\mathbf{AC}$ and $\mathbf{BD}$ form the diagonals from $A$ to $C$ and from $B$ to $D$. The area of the quadrilateral is then
$$K = \frac{1}{2} |\mathbf{AC} \times \mathbf{BD}|,$$
which is half the magnitude of the cross product of vectors $\mathbf{AC}$ and $\mathbf{BD}$. In two-dimensional Euclidean space, expressing vector $\mathbf{AC}$ as a free vector in Cartesian space equal to $(x_1, y_1)$ and $\mathbf{BD}$ as $(x_2, y_2)$, this can be rewritten as:
$$K = \frac{1}{2} |x_1 y_2 - x_2 y_1|.$$
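A short Python sketch of this cross-product formula, using the vertex coordinates of a unit square as an illustrative input.

```python
# Area of a convex quadrilateral ABCD from coordinates: half the absolute
# value of the 2D cross product of the diagonal vectors AC and BD.

def quad_area(A, B, C, D):
    acx, acy = C[0] - A[0], C[1] - A[1]   # diagonal vector AC
    bdx, bdy = D[0] - B[0], D[1] - B[1]   # diagonal vector BD
    return abs(acx * bdy - acy * bdx) / 2

print(quad_area((0, 0), (1, 0), (1, 1), (0, 1)))   # 1.0 for the unit square
```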
Diagonals
Properties of the diagonals in quadrilaterals
In the following table it is listed if the diagonals in some of the most basic quadrilaterals bisect each other, if their diagonals are perpendicular, and if their diagonals have equal length. The list applies to the most general cases, and excludes named subsets.
Note 1: The most general trapezoids and isosceles trapezoids do not have perpendicular diagonals, but there are infinite numbers of (non-similar) trapezoids and isosceles trapezoids that do have perpendicular diagonals and are not any other named quadrilateral.
Note 2: In a kite, one diagonal bisects the other. The most general kite has unequal diagonals, but there is an infinite number of (non-similar) kites in which the diagonals are equal in length (and the kites are not any other named quadrilateral).
Lengths of the diagonals
The lengths of the diagonals in a convex quadrilateral ABCD can be calculated using the law of cosines on each triangle formed by one diagonal and two sides of the quadrilateral. Thus
$$p = AC = \sqrt{a^2 + b^2 - 2ab\cos B}$$
and
$$q = BD = \sqrt{a^2 + d^2 - 2ad\cos A}.$$
Other, more symmetric formulas for the lengths of the diagonals, are
and
Generalizations of the parallelogram law and Ptolemy's theorem
In any convex quadrilateral ABCD, the sum of the squares of the four sides is equal to the sum of the squares of the two diagonals plus four times the square of the line segment connecting the midpoints of the diagonals. Thus
$$a^2 + b^2 + c^2 + d^2 = p^2 + q^2 + 4x^2,$$
where $x$ is the distance between the midpoints of the diagonals. This is sometimes known as Euler's quadrilateral theorem and is a generalization of the parallelogram law.
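Euler's quadrilateral theorem is easy to check numerically; the following Python sketch uses one illustrative convex quadrilateral and compares the two sides of the relation.

```python
# Check a^2 + b^2 + c^2 + d^2 = p^2 + q^2 + 4x^2 for one convex quadrilateral.
from math import dist

A, B, C, D = (0, 0), (4, 0), (5, 3), (1, 4)          # vertices in order
a, b, c, d = dist(A, B), dist(B, C), dist(C, D), dist(D, A)
p, q = dist(A, C), dist(B, D)
mid_AC = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)
mid_BD = ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2)
x = dist(mid_AC, mid_BD)

print(a**2 + b**2 + c**2 + d**2)    # left-hand side  (60.0)
print(p**2 + q**2 + 4 * x**2)       # right-hand side (60.0)
```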
The German mathematician Carl Anton Bretschneider derived in 1842 the following generalization of Ptolemy's theorem, regarding the product of the diagonals in a convex quadrilateral
$$p^2 q^2 = a^2 c^2 + b^2 d^2 - 2abcd \cos(A + C),$$
where $A$ and $C$ are opposite angles.
This relation can be considered to be a law of cosines for a quadrilateral. In a cyclic quadrilateral, where $A + C = 180^\circ$, it reduces to $pq = ac + bd$. Since $\cos(A + C) \geq -1$, it also gives a proof of Ptolemy's inequality.
Other metric relations
If and are the feet of the normals from and to the diagonal in a convex quadrilateral ABCD with sides , , , , then
In a convex quadrilateral ABCD with sides , , , , and where the diagonals intersect at ,
where , , , and .
The shape and size of a convex quadrilateral are fully determined by the lengths of its sides in sequence and of one diagonal between two specified vertices. The two diagonals and the four side lengths of a quadrilateral are related by the Cayley-Menger determinant, as follows:
Angle bisectors
The internal angle bisectors of a convex quadrilateral either form a cyclic quadrilateral (that is, the four intersection points of adjacent angle bisectors are concyclic) or they are concurrent. In the latter case the quadrilateral is a tangential quadrilateral.
In quadrilateral ABCD, if the angle bisectors of and meet on diagonal , then the angle bisectors of and meet on diagonal .
Bimedians
The bimedians of a quadrilateral are the line segments connecting the midpoints of the opposite sides. The intersection of the bimedians is the centroid of the vertices of the quadrilateral.
The midpoints of the sides of any quadrilateral (convex, concave or crossed) are the vertices of a parallelogram called the Varignon parallelogram. It has the following properties:
Each pair of opposite sides of the Varignon parallelogram are parallel to a diagonal in the original quadrilateral.
A side of the Varignon parallelogram is half as long as the diagonal in the original quadrilateral it is parallel to.
The area of the Varignon parallelogram equals half the area of the original quadrilateral. This is true in convex, concave and crossed quadrilaterals provided the area of the latter is defined to be the difference of the areas of the two triangles it is composed of.
The perimeter of the Varignon parallelogram equals the sum of the diagonals of the original quadrilateral.
The diagonals of the Varignon parallelogram are the bimedians of the original quadrilateral.
The two bimedians in a quadrilateral and the line segment joining the midpoints of the diagonals in that quadrilateral are concurrent and are all bisected by their point of intersection.
In a convex quadrilateral with sides $a$, $b$, $c$ and $d$, the length of the bimedian that connects the midpoints of the sides $a$ and $c$ is
$$m = \frac{1}{2}\sqrt{-a^2 + b^2 - c^2 + d^2 + p^2 + q^2}$$
where $p$ and $q$ are the length of the diagonals. The length of the bimedian that connects the midpoints of the sides $b$ and $d$ is
$$n = \frac{1}{2}\sqrt{a^2 - b^2 + c^2 - d^2 + p^2 + q^2}.$$
Hence
$$p^2 + q^2 = 2(m^2 + n^2).$$
This is also a corollary to the parallelogram law applied in the Varignon parallelogram.
The lengths of the bimedians can also be expressed in terms of two opposite sides and the distance $x$ between the midpoints of the diagonals. This is possible when using Euler's quadrilateral theorem in the above formulas. Whence
$$m = \frac{1}{2}\sqrt{2(b^2 + d^2) - 4x^2}$$
and
$$n = \frac{1}{2}\sqrt{2(a^2 + c^2) - 4x^2}.$$
Note that the two opposite sides in these formulas are not the two that the bimedian connects.
In a convex quadrilateral, there is the following dual connection between the bimedians and the diagonals:
The two bimedians have equal length if and only if the two diagonals are perpendicular.
The two bimedians are perpendicular if and only if the two diagonals have equal length.
Trigonometric identities
The four angles of a simple quadrilateral ABCD satisfy the following identities:
and
Also,
In the last two formulas, no angle is allowed to be a right angle, since tan 90° is not defined.
Let $a$, $b$, $c$, $d$ be the sides of a convex quadrilateral, let $s$ be the semiperimeter,
and let $A$ and $C$ be opposite angles. Then
and
.
We can use these identities to derive Bretschneider's formula.
Inequalities
Area
If a convex quadrilateral has the consecutive sides a, b, c, d and the diagonals p, q, then its area K satisfies
with equality only for a rectangle.
with equality only for a square.
with equality only if the diagonals are perpendicular and equal.
with equality only for a rectangle.
From Bretschneider's formula it directly follows that the area of a quadrilateral satisfies
$$K \leq \sqrt{(s-a)(s-b)(s-c)(s-d)}$$
with equality if and only if the quadrilateral is cyclic or degenerate such that one side is equal to the sum of the other three (it has collapsed into a line segment, so the area is zero).
Also,
with equality for a bicentric quadrilateral or a rectangle.
The area of any quadrilateral also satisfies the inequality
Denoting the perimeter as L, we have
$$K \leq \frac{L^2}{16},$$
with equality only in the case of a square.
The area of a convex quadrilateral also satisfies
$$K \leq \frac{1}{2} pq$$
for diagonal lengths p and q, with equality if and only if the diagonals are perpendicular.
Let a, b, c, d be the lengths of the sides of a convex quadrilateral ABCD with the area K and diagonals AC = p, BD = q. Then
with equality only for a square.
Let a, b, c, d be the lengths of the sides of a convex quadrilateral ABCD with the area K, then the following inequality holds:
with equality only for a square.
Diagonals and bimedians
A corollary to Euler's quadrilateral theorem is the inequality
$$a^2 + b^2 + c^2 + d^2 \geq p^2 + q^2,$$
where equality holds if and only if the quadrilateral is a parallelogram.
Euler also generalized Ptolemy's theorem, which is an equality in a cyclic quadrilateral, into an inequality for a convex quadrilateral. It states that
$$pq \leq ac + bd,$$
where there is equality if and only if the quadrilateral is cyclic. This is often called Ptolemy's inequality.
In any convex quadrilateral the bimedians m, n and the diagonals p, q are related by the inequality
pq ≤ m² + n²,
with equality holding if and only if the diagonals are equal. This follows directly from the quadrilateral identity
m² + n² = (1/2)(p² + q²).
Sides
The sides a, b, c, and d of any quadrilateral satisfy
a² + b² + c² > d²/3
and
a⁴ + b⁴ + c⁴ ≥ d⁴/27.
Maximum and minimum properties
Among all quadrilaterals with a given perimeter, the one with the largest area is the square. This is called the isoperimetric theorem for quadrilaterals. It is a direct consequence of the area inequality
K ≤ L²/16
where K is the area of a convex quadrilateral with perimeter L. Equality holds if and only if the quadrilateral is a square. The dual theorem states that of all quadrilaterals with a given area, the square has the shortest perimeter.
The quadrilateral with given side lengths that has the maximum area is the cyclic quadrilateral.
Of all convex quadrilaterals with given diagonals, the orthodiagonal quadrilateral has the largest area. This is a direct consequence of the fact that the area of a convex quadrilateral satisfies
K = (1/2) pq sin θ
where θ is the angle between the diagonals p and q. Equality holds if and only if θ = 90°.
If P is an interior point in a convex quadrilateral ABCD, then
AP + BP + CP + DP ≥ AC + BD.
From this inequality it follows that the point inside a quadrilateral that minimizes the sum of distances to the vertices is the intersection of the diagonals. Hence that point is the Fermat point of a convex quadrilateral.
Remarkable points and lines in a convex quadrilateral
The centre of a quadrilateral can be defined in several different ways. The "vertex centroid" comes from considering the quadrilateral as being empty but having equal masses at its vertices. The "side centroid" comes from considering the sides to have constant mass per unit length. The usual centre, called just centroid (centre of area) comes from considering the surface of the quadrilateral as having constant density. These three points are in general not all the same point.
The "vertex centroid" is the intersection of the two bimedians. As with any polygon, the x and y coordinates of the vertex centroid are the arithmetic means of the x and y coordinates of the vertices.
The "area centroid" of quadrilateral ABCD can be constructed in the following way. Let Ga, Gb, Gc, Gd be the centroids of triangles BCD, ACD, ABD, ABC respectively. Then the "area centroid" is the intersection of the lines GaGc and GbGd.
In a general convex quadrilateral ABCD, there are no natural analogies to the circumcenter and orthocenter of a triangle. But two such points can be constructed in the following way. Let Oa, Ob, Oc, Od be the circumcenters of triangles BCD, ACD, ABD, ABC respectively; and denote by Ha, Hb, Hc, Hd the orthocenters in the same triangles. Then the intersection of the lines OaOc and ObOd is called the quasicircumcenter, and the intersection of the lines HaHc and HbHd is called the quasiorthocenter of the convex quadrilateral. These points can be used to define an Euler line of a quadrilateral. In a convex quadrilateral, the quasiorthocenter H, the "area centroid" G, and the quasicircumcenter O are collinear in this order, and HG = 2GO.
A quasi-nine-point center E can also be defined as the intersection of the lines EaEc and EbEd, where Ea, Eb, Ec, Ed are the nine-point centers of triangles BCD, ACD, ABD, ABC respectively. Then E is the midpoint of OH.
Another remarkable line in a convex non-parallelogram quadrilateral is the Newton line, which connects the midpoints of the diagonals; the segment connecting these points is bisected by the vertex centroid. One more interesting line (in some sense dual to the Newton line) is the line connecting the point of intersection of the diagonals with the vertex centroid. This line is remarkable for the fact that it contains the (area) centroid. The vertex centroid divides the segment connecting the intersection of the diagonals and the (area) centroid in the ratio 3:1.
For any quadrilateral ABCD with points P and Q the intersections of AD and BC and AB and CD, respectively, the circles (PAB), (PCD), (QAD), and (QBC) pass through a common point M, called a Miquel point.
For a convex quadrilateral ABCD in which E is the point of intersection of the diagonals and F is the point of intersection of the extensions of sides BC and AD, let ω be a circle through E and F which meets CB internally at M and DA internally at N. Let CA meet ω again at L and let DB meet ω again at K. Then there holds: the straight lines NK and ML intersect at point P that is located on the side AB; the straight lines NL and KM intersect at point Q that is located on the side CD.
Points P and Q are called "Pascal points" formed by circle ω on sides AB and CD.
Other properties of convex quadrilaterals
If exterior squares are drawn on all sides of a quadrilateral then the segments connecting the centers of opposite squares are (a) equal in length, and (b) perpendicular. Thus these centers are the vertices of an orthodiagonal quadrilateral. This is called Van Aubel's theorem.
For any simple quadrilateral with given edge lengths, there is a cyclic quadrilateral with the same edge lengths.
The four smaller triangles formed by the diagonals and sides of a convex quadrilateral have the property that the product of the areas of two opposite triangles equals the product of the areas of the other two triangles.
The acute angle θ between the diagonals satisfies
cos θ = |a² + c² − b² − d²| / (2pq),
where p and q are the diagonals of the quadrilateral.
Taxonomy
A hierarchical taxonomy of quadrilaterals is illustrated by the figure to the right. Lower classes are special cases of higher classes they are connected to. Note that "trapezoid" here is referring to the North American definition (the British equivalent is a trapezium). Inclusive definitions are used throughout.
Skew quadrilaterals
A non-planar quadrilateral is called a skew quadrilateral. Formulas to compute its dihedral angles from the edge lengths and the angle between two adjacent edges were derived for work on the properties of molecules such as cyclobutane that contain a "puckered" ring of four atoms. Historically the term gauche quadrilateral was also used to mean a skew quadrilateral. A skew quadrilateral together with its diagonals form a (possibly non-regular) tetrahedron, and conversely every skew quadrilateral comes from a tetrahedron where a pair of opposite edges is removed.
| Mathematics | Geometry | null |
25284 | https://en.wikipedia.org/wiki/Qubit | Qubit | In quantum computing, a qubit () or quantum bit is a basic unit of quantum information—the quantum version of the classic binary bit physically realized with a two-state device. A qubit is a two-state (or two-level) quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include the spin of the electron in which the two levels can be taken as spin up and spin down; or the polarization of a single photon in which the two spin states (left-handed and the right-handed circular polarization) can also be measured as horizontal and vertical linear polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of multiple states simultaneously, a property that is fundamental to quantum mechanics and quantum computing.
Etymology
The coining of the term qubit is attributed to Benjamin Schumacher. In the acknowledgments of his 1995 paper, Schumacher states that the term qubit was created in jest during a conversation with William Wootters.
Bit versus qubit
A binary digit, characterized as 0 or 1, is used to represent information in classical computers. When averaged over both of its states (0,1), a binary digit can represent up to one bit of Shannon information, where a bit is the basic unit of information. However, in this article, the word bit is synonymous with a binary digit.
In classical computer technologies, a processed bit is implemented by one of two levels of low direct current voltage, and whilst switching from one of these two levels to the other, a so-called "forbidden zone" between two logic levels must be passed as fast as possible, as electrical voltage cannot change from one level to another instantly.
There are two possible outcomes for the measurement of a qubit—usually taken to have the value "0" and "1", like a bit. However, whereas the state of a bit can only be binary (either 0 or 1), the general state of a qubit according to quantum mechanics can arbitrarily be a coherent superposition of all computable states simultaneously. Moreover, whereas a measurement of a classical bit would not disturb its state, a measurement of a qubit would destroy its coherence and irrevocably disturb the superposition state. It is possible to fully encode one bit in one qubit. However, a qubit can hold more information, e.g., up to two bits using superdense coding.
A bit is always completely in either one of its two states, and a set of bits (e.g. a processor register or some bit array) can only hold a single of its possible states at any time. A quantum state can be in a superposition, which means that the qubit can have non-zero probability amplitude in both of its states simultaneously (popularly expressed as "it can be in both states simultaneously"). A qubit requires two complex numbers to describe its two probability amplitudes, and these two complex numbers can together be viewed as a 2-dimensional complex vector, which is called a quantum state vector, or superposition state vector. Alternatively and equivalently, the value stored in a qubit can be described as a single point in a 2-dimensional complex coordinate space. Similarly, a set of n qubits, which is also called a register, requires 2^n complex numbers to describe its superposition state vector.
Standard representation
In quantum mechanics, the general quantum state of a qubit can be represented by a linear superposition of its two orthonormal basis states (or basis vectors). These vectors are usually denoted as |0⟩ and |1⟩. They are written in the conventional Dirac—or "bra–ket"—notation; the |0⟩ and |1⟩ are pronounced "ket 0" and "ket 1", respectively. These two orthonormal basis states, {|0⟩, |1⟩}, together called the computational basis, are said to span the two-dimensional linear vector (Hilbert) space of the qubit.
Qubit basis states can also be combined to form product basis states. A set of qubits taken together is called a quantum register. For example, two qubits could be represented in a four-dimensional linear vector space spanned by the following product basis states:
|00⟩, |01⟩, |10⟩, and |11⟩.
In general, n qubits are represented by a superposition state vector in a 2^n-dimensional Hilbert space.
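As an illustration of this exponential growth, the following Python/NumPy sketch (the particular single-qubit states chosen are arbitrary) builds a small product-state register with the Kronecker product and checks that its state vector has 2^n entries:

```python
import numpy as np

# The state vector of an n-qubit register has 2**n complex amplitudes;
# composing single-qubit states uses the Kronecker (tensor) product.
ket0 = np.array([1, 0], dtype=complex)              # |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

register = np.kron(np.kron(plus, ket0), plus)  # a 3-qubit product state
print(register.shape)                          # (8,) == (2**3,)
```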
Qubit states
A pure qubit state is a coherent superposition of the basis states. This means that a single qubit (ψ) can be described by a linear combination of |0⟩ and |1⟩:
|ψ⟩ = α|0⟩ + β|1⟩
where α and β are the probability amplitudes, and are both complex numbers. When we measure this qubit in the standard basis, according to the Born rule, the probability of outcome |0⟩ with value "0" is |α|² and the probability of outcome |1⟩ with value "1" is |β|². Because the absolute squares of the amplitudes equate to probabilities, it follows that α and β must be constrained according to the second axiom of probability theory by the equation
|α|² + |β|² = 1.
The probability amplitudes, α and β, encode more than just the probabilities of the outcomes of a measurement; the relative phase between α and β is for example responsible for quantum interference, as seen in the double-slit experiment.
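A minimal Python sketch of these ideas, with arbitrarily chosen amplitudes, normalizes a qubit state, reads off the Born-rule probabilities, and extracts the relative phase:

```python
import math, cmath

alpha, beta = 1 + 1j, 0.5 - 0.2j          # unnormalized amplitudes (arbitrary)
norm = math.sqrt(abs(alpha)**2 + abs(beta)**2)
alpha, beta = alpha / norm, beta / norm   # now |alpha|^2 + |beta|^2 = 1

p0 = abs(alpha)**2        # probability of measuring "0"
p1 = abs(beta)**2         # probability of measuring "1"
relative_phase = cmath.phase(beta) - cmath.phase(alpha)

print(p0, p1, p0 + p1)    # p0 + p1 == 1 up to rounding
print(relative_phase)
```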
Bloch sphere representation
It might, at first sight, seem that there should be four degrees of freedom in |ψ⟩ = α|0⟩ + β|1⟩, as α and β are complex numbers with two degrees of freedom each. However, one degree of freedom is removed by the normalization constraint |α|² + |β|² = 1. This means, with a suitable change of coordinates, one can eliminate one of the degrees of freedom. One possible choice is that of Hopf coordinates:
α = e^(iδ) cos(θ/2), β = e^(i(δ + φ)) sin(θ/2).
Additionally, for a single qubit the global phase e^(iδ) of the state has no physically observable consequences, so we can arbitrarily choose α to be real (or β in the case that α is zero), leaving just two degrees of freedom:
|ψ⟩ = cos(θ/2) |0⟩ + e^(iφ) sin(θ/2) |1⟩,
where φ is the physically significant relative phase.
The possible quantum states for a single qubit can be visualised using a Bloch sphere (see picture). Represented on such a 2-sphere, a classical bit could only be at the "North Pole" or the "South Pole", in the locations where |0⟩ and |1⟩ are respectively. This particular choice of the polar axis is arbitrary, however. The rest of the surface of the Bloch sphere is inaccessible to a classical bit, but a pure qubit state can be represented by any point on the surface. For example, the pure qubit state (|0⟩ + |1⟩)/√2 would lie on the equator of the sphere at the positive X-axis. In the classical limit, a qubit, which can have quantum states anywhere on the Bloch sphere, reduces to the classical bit, which can be found only at either pole.
The surface of the Bloch sphere is a two-dimensional space, which represents the observable state space of the pure qubit states. This state space has two local degrees of freedom, which can be represented by the two angles θ and φ.
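The parametrization above can be made concrete with a short Python sketch (the angles are arbitrary example values); it maps Bloch-sphere angles to amplitudes and recovers them again:

```python
import cmath, math

# Map Bloch-sphere angles (theta, phi) to the amplitudes of
# |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>, then invert the map.
theta, phi = 2 * math.pi / 3, 0.8

alpha = math.cos(theta / 2)
beta = cmath.exp(1j * phi) * math.sin(theta / 2)

theta_back = 2 * math.acos(abs(alpha))
phi_back = cmath.phase(beta) - cmath.phase(alpha)   # relative phase (alpha is real here)

print(alpha, beta)
print(theta_back, phi_back)   # should reproduce theta and phi
```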
Mixed state
A pure state is fully specified by a single ket, a coherent superposition, represented by a point on the surface of the Bloch sphere as described above. Coherence is essential for a qubit to be in a superposition state. With interactions, quantum noise and decoherence, it is possible to put the qubit in a mixed state, a statistical combination or "incoherent mixture" of different pure states. Mixed states can be represented by points inside the Bloch sphere (or in the Bloch ball). A mixed qubit state has three degrees of freedom: the angles θ and φ, as well as the length of the vector that represents the mixed state.
Quantum error correction can be used to maintain the purity of qubits.
Operations on qubits
There are various kinds of physical operations that can be performed on qubits.
Quantum logic gates, building blocks for a quantum circuit in a quantum computer, operate on a set of qubits (a register); mathematically, the qubits undergo a (reversible) unitary transformation described by multiplying the quantum gate's unitary matrix with the quantum state vector. The result from this multiplication is a new quantum state vector (a minimal numerical sketch appears after this list).
Quantum measurement is an irreversible operation in which information is gained about the state of a single qubit, and coherence is lost. The result of the measurement of a single qubit with the state |ψ⟩ = α|0⟩ + β|1⟩ will be either |0⟩ with probability |α|² or |1⟩ with probability |β|². Measurement of the state of the qubit alters the magnitudes of α and β. For instance, if the result of the measurement is |1⟩, α is changed to 0 and β is changed to a phase factor of unit modulus that is no longer experimentally accessible. If measurement is performed on a qubit that is entangled, the measurement may collapse the state of the other entangled qubits.
Initialization or re-initialization to a known value, often |0⟩. This operation collapses the quantum state (exactly like with measurement). Initialization to |0⟩ may be implemented logically or physically: Logically as a measurement, followed by the application of the Pauli-X gate if the result from the measurement was |1⟩. Physically, for example if it is a superconducting phase qubit, by lowering the energy of the quantum system to its ground state.
Sending the qubit through a quantum channel to a remote system or machine (an I/O operation), potentially as part of a quantum network.
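The matrix–vector description of gate action mentioned above can be illustrated with a minimal Python/NumPy sketch (the choice of the Hadamard gate and the |0⟩ input is arbitrary):

```python
import numpy as np

# A quantum logic gate acts on a state vector by unitary matrix multiplication.
# Here the Hadamard gate is applied to |0>.
ket0 = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard gate

new_state = H @ ket0                            # (|0> + |1>)/sqrt(2)
print(new_state)
print(np.allclose(H.conj().T @ H, np.eye(2)))   # unitarity check: True
```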
Quantum entanglement
An important distinguishing feature between qubits and classical bits is that multiple qubits can exhibit quantum entanglement; the qubit itself is an exhibition of quantum entanglement. In this case, quantum entanglement is a local or nonlocal property of two or more qubits that allows a set of qubits to express higher correlation than is possible in classical systems.
The simplest system to display quantum entanglement is the system of two qubits. Consider, for example, two entangled qubits in the Bell state:
|Φ⁺⟩ = (1/√2)(|00⟩ + |11⟩).
In this state, called an equal superposition, there are equal probabilities of measuring either product state |00⟩ or |11⟩, as |1/√2|² = 1/2. In other words, there is no way to tell if the first qubit has value "0" or "1" and likewise for the second qubit.
Imagine that these two entangled qubits are separated, with one each given to Alice and Bob. Alice makes a measurement of her qubit, obtaining—with equal probabilities—either |0⟩ or |1⟩, i.e., she can now tell if her qubit has value "0" or "1". Because of the qubits' entanglement, Bob must now get exactly the same measurement as Alice. For example, if she measures a |0⟩, Bob must measure the same, as |00⟩ is the only state where Alice's qubit is a |0⟩. In short, for these two entangled qubits, whatever Alice measures, so would Bob, with perfect correlation, in any basis, however far apart they may be and even though both can not tell if their qubit has value "0" or "1"—a most surprising circumstance that cannot be explained by classical physics.
Controlled gate to construct the Bell state
Controlled gates act on 2 or more qubits, where one or more qubits act as a control for some specified operation. In particular, the controlled NOT gate (or CNOT or CX) acts on 2 qubits, and performs the NOT operation on the second qubit only when the first qubit is |1⟩, and otherwise leaves it unchanged. With respect to the unentangled product basis |00⟩, |01⟩, |10⟩, |11⟩, it maps the basis states as follows:
|00⟩ → |00⟩, |01⟩ → |01⟩, |10⟩ → |11⟩, |11⟩ → |10⟩.
A common application of the CNOT gate is to maximally entangle two qubits into the |Φ⁺⟩ Bell state. To construct |Φ⁺⟩, the inputs A (control) and B (target) to the CNOT gate are:
|A⟩ = (1/√2)(|0⟩ + |1⟩)
and
|B⟩ = |0⟩.
After applying CNOT, the output is the Bell state: |Φ⁺⟩ = (1/√2)(|00⟩ + |11⟩).
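A short Python/NumPy sketch of this construction (the basis ordering |00⟩, |01⟩, |10⟩, |11⟩ is assumed) prepares the control input with a Hadamard gate and then applies CNOT to obtain the Bell state:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

input_state = np.kron(H @ ket0, ket0)   # A = (|0>+|1>)/sqrt(2), B = |0>
bell = CNOT @ input_state

print(bell)                              # [0.707..., 0, 0, 0.707...]
print(abs(bell[0])**2, abs(bell[3])**2)  # equal probabilities for |00> and |11>
```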
Applications
The Bell state forms part of the setup of the superdense coding, quantum teleportation, and entangled quantum cryptography algorithms.
Quantum entanglement also allows multiple states (such as the Bell state mentioned above) to be acted on simultaneously, unlike classical bits that can only have one value at a time. Entanglement is a necessary ingredient of any quantum computation that cannot be done efficiently on a classical computer. Many of the successes of quantum computation and communication, such as quantum teleportation and superdense coding, make use of entanglement, suggesting that entanglement is a resource that is unique to quantum computation. A major hurdle facing quantum computing, as of 2018, in its quest to surpass classical digital computing, is noise in quantum gates that limits the size of quantum circuits that can be executed reliably.
Quantum register
A number of qubits taken together is a qubit register. Quantum computers perform calculations by manipulating qubits within a register.
Qudits and qutrits
The term qudit denotes the unit of quantum information that can be realized in suitable d-level quantum systems. A qubit register that can be measured to N states is identical to an N-level qudit. A rarely used synonym for qudit is quNit, since both d and N are frequently used to denote the dimension of a quantum system.
Qudits are similar to the integer types in classical computing, and may be mapped to (or realized by) arrays of qubits. Qudits where the d-level system is not a power of 2 cannot be mapped to arrays of qubits. It is for example possible to have 5-level qudits.
In 2017, scientists at the National Institute of Scientific Research constructed a pair of qudits with 10 different states each, giving more computational power than 6 qubits.
In 2022, researchers at the University of Innsbruck succeeded in developing a universal qudit quantum processor with trapped ions. In the same year, researchers at Tsinghua University's Center for Quantum Information implemented the dual-type qubit scheme in trapped ion quantum computers using the same ion species.
Also in 2022, researchers at the University of California, Berkeley developed a technique to dynamically control the cross-Kerr interactions between fixed-frequency qutrits, achieving high two-qutrit gate fidelities. This was followed by a demonstration of extensible control of superconducting qudits up to in 2024 based on programmable two-photon interactions.
Similar to the qubit, the qutrit is the unit of quantum information that can be realized in suitable 3-level quantum systems. This is analogous to the unit of classical information trit of ternary computers. Besides the advantage associated with the enlarged computational space, the third qutrit level can be exploited to implement efficient compilation of multi-qubit gates.
Physical implementations
Any two-level quantum-mechanical system can be used as a qubit. Multilevel systems can be used as well, if they possess two states that can be effectively decoupled from the rest (e.g., the ground state and first excited state of a nonlinear oscillator). There are various proposals. Several physical implementations that approximate two-level systems to various degrees have been successfully realized. Similarly to a classical bit where the state of a transistor in a processor, the magnetization of a surface in a hard disk and the presence of current in a cable can all be used to represent bits in the same computer, an eventual quantum computer is likely to use various combinations of qubits in its design.
All physical implementations are affected by noise. The so-called T1 lifetime and T2 dephasing time are times that characterize a physical implementation and represent its sensitivity to noise. A longer time does not necessarily mean that one or the other qubit is better suited for quantum computing, because gate times and fidelities need to be considered too.
Different applications like quantum sensing, quantum computing and quantum communication use different implementations of qubits to suit their application.
The following is an incomplete list of physical implementations of qubits, and the choices of basis are by convention only.
Qubit storage
In 2008 a team of scientists from the U.K. and U.S. reported the first relatively long (1.75 seconds) and coherent transfer of a superposition state in an electron spin "processing" qubit to a nuclear spin "memory" qubit. This event can be considered the first relatively consistent quantum data storage, a vital step towards the development of quantum computing. In 2013, a modification of similar systems (using charged rather than neutral donors) has dramatically extended this time, to 3 hours at very low temperatures and 39 minutes at room temperature. Room temperature preparation of a qubit based on electron spins instead of nuclear spin was also demonstrated by a team of scientists from Switzerland and Australia. An increased coherence of qubits is being explored by researchers who are testing the limitations of a Ge hole spin-orbit qubit structure.
| Physical sciences | Information | Basics and measurement |
25295 | https://en.wikipedia.org/wiki/Quiver | Quiver | A quiver is a container for holding arrows or bolts. It can be carried on an archer's body, the bow, or the ground, depending on the type of shooting and the archer's personal preference. Quivers were traditionally made of leather, wood, furs, and other natural materials, but are now often made of metal or plastic.
Etymology
The English word quiver has its origins in Old French, written as quivre, cuevre, or coivre.
Types
Belt quiver
The most common style of quiver is a flat or cylindrical container suspended from the belt. They are found across many cultures from North America to China. Many variations of this type exist, such as being canted forwards or backwards, and being carried on the dominant hand side, off-hand side, or the small of the back. Some variants enclose almost the entire arrow, while minimalist "pocket quivers" consist of little more than a small stiff pouch that only covers the first few inches. The Bayeux Tapestry shows that most bowmen in medieval Europe used belt quivers.
Back quiver
Back quivers are secured to the archer's back by leather straps, with the nock ends protruding above the dominant hand's shoulder. Arrows can be drawn over the shoulder rapidly by the nock. This style of quiver was used by native peoples of North America and Africa, and was also commonly depicted in bas-reliefs from ancient Assyria. They were also used in Ancient Greece and often feature on sculptural representations of Artemis, goddess of the hunt. While popular in cinema and 20th century art for depictions of medieval European characters (such as Robin Hood), this style of quiver was rarely used in medieval Europe.
Ground quiver
A ground quiver is used for both target shooting or warfare when the archer is shooting from a fixed location. They can be simply stakes in the ground with a ring at the top to hold the arrows, or more elaborate designs that hold the arrows within reach without the archer having to lean down to draw.
Bow quiver
A modern invention, the bow quiver attaches directly to the bow's limbs and holds the arrows steady with a clip of some kind. They are popular with compound bow hunters as it allows one piece of equipment to be carried in the field without encumbering the hunter's body.
Arrow bag
A style used by medieval English longbowmen and several other cultures, an arrow bag is a simple drawstring cloth sack with a leather spacer at the top to keep the arrows divided. When not in use, the drawstring could be closed, completely covering the arrows so as to protect them from rain and dirt. Some had straps or rope sewn to them for carrying, but many either were tucked into the belt or set on the ground before battle to allow easier access.
Japanese quivers
Yebira refers to a variety of quiver designs. The Yazutsu is a different type, used in Kyudo. Their main use is to transport and protect arrows.
Gallery
| Technology | Archery | null |
25297 | https://en.wikipedia.org/wiki/Quinine | Quinine | Quinine is a medication used to treat malaria and babesiosis. This includes the treatment of malaria due to Plasmodium falciparum that is resistant to chloroquine when artesunate is not available. While sometimes used for nocturnal leg cramps, quinine is not recommended for this purpose due to the risk of serious side effects. It can be taken by mouth or intravenously. Malaria resistance to quinine occurs in certain areas of the world. Quinine is also used as an ingredient in tonic water and other beverages to impart a bitter taste.
Common side effects include headache, ringing in the ears, vision issues, and sweating. More severe side effects include deafness, low blood platelets, and an irregular heartbeat. Use can make one more prone to sunburn. While it is unclear if use during pregnancy carries potential for fetal harm, treating malaria during pregnancy with quinine when appropriate is still recommended. Quinine is an alkaloid, a naturally occurring chemical compound. How it works as a medicine is not entirely clear.
Quinine was first isolated in 1820 from the bark of a cinchona tree, which is native to Peru, and its molecular formula was determined by Adolph Strecker in 1854. The class of chemical compounds to which it belongs is thus called the cinchona alkaloids. Bark extracts had been used to treat malaria since at least 1632 and it was introduced to Spain as early as 1636 by Jesuit missionaries returning from the New World. It is on the World Health Organization's List of Essential Medicines. Treatment of malaria with quinine marks the first known use of a chemical compound to treat an infectious disease.
Uses
Medical
As of 2006, quinine is no longer recommended by the World Health Organization (WHO) as a first-line treatment for malaria, because there are other substances that are equally effective with fewer side effects. They recommend that it be used only when artemisinins are not available. Quinine is also used to treat lupus and arthritis.
Quinine was frequently prescribed as an off-label treatment for leg cramps at night, but this has become less common since 2010 due to a warning from the US Food and Drug Administration (FDA) that such practice is associated with life-threatening side effects. Quinine can also act as a competitive inhibitor of monoamine oxidase (MAO), an enzyme that removes neurotransmitters from the brain. As an MAO inhibitor, it has potential to serve as a treatment for individuals with psychological disorders, similar to antidepressants that inhibit MAO.
Available forms
Quinine is a basic amine and is usually provided as a salt. Various existing preparations include the hydrochloride, dihydrochloride, sulfate, bisulfate and gluconate. In the United States, quinine sulfate is commercially available in 324 mg tablets under the brand name Qualaquin.
All quinine salts may be given orally or intravenously (IV); quinine gluconate may also be given intramuscularly (IM) or rectally (PR). The main problem with rectal administration is that the dose can be expelled before it is completely absorbed; in practice, this is corrected by giving a further half dose. No injectable preparation of quinine is licensed in the US; quinidine is used instead.
Beverages
Quinine is a flavor component of tonic water and bitter lemon drink mixers. On the soda gun behind many bars, tonic water is designated by the letter "Q" representing quinine.
Tonic water was initially marketed as a means of delivering quinine to consumers in order to offer anti-malarial protection. According to tradition, because of the bitter taste of anti-malarial quinine tonic, British colonials in India mixed it with gin to make it more palatable, thus creating the gin and tonic cocktail, which is still popular today. While it is possible to drink enough tonic water to temporarily achieve quinine levels that offer anti-malarial protection, it is not a sustainable long-term means of protection.
In France, quinine is an ingredient of an apéritif known as quinquina, or "Cap Corse", and the wine-based Dubonnet. In Spain, quinine (also known as "Peruvian bark" for its origin from the native cinchona tree) is sometimes blended into sweet Malaga wine, which is then called "Malaga Quina". In Italy, the traditional flavoured wine Barolo Chinato is infused with quinine and local herbs, and is served as a digestif. In Britain, the company A.G. Barr uses quinine as an ingredient in the carbonated and caffeinated beverage Irn-Bru. In Uruguay and Argentina, quinine is an ingredient of a PepsiCo tonic water named Paso de los Toros. In Denmark, it is used as an ingredient in the carbonated sports drink Faxe Kondi made by Royal Unibrew.
As a flavouring agent in drinks, quinine is limited to 83 ppm () in the United States, and in the European Union.
Scientific
Quinine (and quinidine) are used as the chiral moiety for the ligands used in Sharpless asymmetric dihydroxylation as well as for numerous other chiral catalyst backbones. Because of its relatively constant and well-known fluorescence quantum yield, quinine is used in photochemistry as a common fluorescence standard.
Contraindications
Because of the narrow difference between its therapeutic and toxic effects, quinine is a common cause of drug-induced disorders, including thrombocytopenia and thrombotic microangiopathy. Even from minor levels occurring in common beverages, quinine can have severe adverse effects involving multiple organ systems, among which are immune system effects and fever, hypotension, hemolytic anemia, acute kidney injury, liver toxicity, and blindness. In people with atrial fibrillation, conduction defects, or heart block, quinine can cause heart arrhythmias, and should be avoided.
Quinine can cause hemolysis in G6PD deficiency (an inherited deficiency), but this risk is small and the physician should not hesitate to use quinine in people with G6PD deficiency when there is no alternative.
While not necessarily an absolute contraindication, concomitant administration of quinine with drugs primarily metabolized by CYP2D6 may lead to higher than expected plasma concentrations of the drug, due to quinine's strong inhibition of the enzyme.
Adverse effects
Quinine can cause unpredictable serious and life-threatening blood and cardiovascular reactions including low platelet count and hemolytic–uremic syndrome/thrombotic thrombocytopenic purpura (HUS/TTP), long QT syndrome and other serious cardiac arrhythmias including torsades de pointes, blackwater fever, disseminated intravascular coagulation, leukopenia, and neutropenia. Some people who have developed TTP due to quinine have gone on to develop kidney failure. It can also cause serious hypersensitivity reactions including anaphylactic shock, urticaria, serious skin rashes, including Stevens–Johnson syndrome and toxic epidermal necrolysis, angioedema, facial edema, bronchospasm, granulomatous hepatitis, and itchiness.
The most common adverse effects involve a group of symptoms called cinchonism, which can include headache, vasodilation and sweating, nausea, tinnitus, hearing impairment, vertigo or dizziness, blurred vision, and disturbance in color perception. More severe cinchonism includes vomiting, diarrhea, abdominal pain, deafness, blindness, and disturbances in heart rhythms. Cinchonism is much less common when quinine is given by mouth, but oral quinine is not well tolerated (quinine is exceedingly bitter and many people will vomit after ingesting quinine tablets). Other drugs, such as Fansidar (sulfadoxine with pyrimethamine) or Malarone (proguanil with atovaquone), are often used when oral therapy is required. Quinine ethyl carbonate is tasteless and odourless, but is available commercially only in Japan. Blood glucose, electrolyte and cardiac monitoring are not necessary when quinine is given by mouth.
Quinine has diverse unwanted interactions with numerous prescription drugs, such as potentiating the anticoagulant effects of warfarin. It is a strong inhibitor of CYP2D6, an enzyme involved in the metabolism of many drugs.
Mechanism of action
Quinine is used for its toxicity to the malarial pathogen, Plasmodium falciparum, by interfering with its ability to dissolve and metabolize hemoglobin. As with other quinoline antimalarial drugs, the precise mechanism of action of quinine has not been fully resolved, although in vitro studies indicate it inhibits nucleic acid and protein synthesis, and inhibits glycolysis in P. falciparum. The most widely accepted hypothesis of its action is based on the well-studied and closely related quinoline drug, chloroquine. This model involves the inhibition of hemozoin biocrystallization in the heme detoxification pathway, which facilitates the aggregation of cytotoxic heme. Free cytotoxic heme accumulates in the parasites, causing their deaths. Quinine may target the malaria purine nucleoside phosphorylase enzyme.
Chemistry
The UV absorption of quinine peaks around 350 nm (in UVA). Fluorescent emission peaks at around 460 nm (bright blue/cyan hue). Quinine is highly fluorescent (quantum yield ~0.58) in 0.1 M sulfuric acid solution.
Synthesis
Cinchona trees remain the only economically practical source of quinine. However, under wartime pressure during World War II, research towards its synthetic production was undertaken. A formal chemical synthesis was accomplished in 1944 by American chemists R.B. Woodward and W.E. Doering. Since then, several more efficient quinine total syntheses have been achieved, but none of them can compete in economic terms with isolation of the alkaloid from natural sources. The first synthetic organic dye, mauveine, was discovered by William Henry Perkin in 1856 while he was attempting to synthesize quinine.
Biosynthesis
In the first step of quinine biosynthesis, the enzyme strictosidine synthase catalyzes a stereoselective Pictet–Spengler reaction between tryptamine and secologanin to yield strictosidine. Suitable modification of strictosidine leads to an aldehyde. Hydrolysis and decarboxylation would initially remove one carbon from the iridoid portion and produce corynantheal. Then the tryptamine side-chain would be cleaved adjacent to the nitrogen, and this nitrogen then bonded to the acetaldehyde function to yield cinchonaminal. Ring opening in the indole heterocyclic ring could generate new amine and keto functions. The new quinoline heterocycle would then be formed by combining this amine with the aldehyde produced in the tryptamine side-chain cleavage, giving cinchonidinone. For the last step, hydroxylation and methylation gives quinine.
Catalysis
Quinine and other Cinchona alkaloids can be used as catalysts for stereoselective reactions in organic synthesis. For example, the quinine-catalyzed Michael addition of a malononitrile to α,β-enones gives a high degree of stereochemical control.
History
Quinine was used as a muscle relaxant by the Quechua people, who are indigenous to Peru, Bolivia and Ecuador, to halt shivering. The Quechua would mix the ground bark of cinchona trees with sweetened water to offset the bark's bitter taste, thus producing something similar to tonic water.
Spanish Jesuit missionaries were the first to bring cinchona to Europe. The Spanish had observed the Quechua's use of cinchona and were aware of the medicinal properties of cinchona bark by the 1570s or earlier: Nicolás Monardes (1571) and Juan Fragoso (1572) both described a tree, which was subsequently identified as the cinchona tree, whose bark was used to produce a drink to treat diarrhea. Quinine has been used in unextracted form by Europeans since at least the early 17th century.
A popular story of how it was brought to Europe by the Countess of Chinchon was debunked by medical historian Alec Haggis around 1941. During the 17th century, malaria was endemic to the swamps and marshes surrounding the city of Rome. It had caused the deaths of several popes, many cardinals and countless common Roman citizens. Most of the Catholic priests trained in Rome had seen malaria patients and were familiar with the shivering brought on by the febrile phase of the disease.
The Jesuit Agostino Salumbrino (1564–1642), an apothecary by training who lived in Lima (now in present-day Peru), observed the Quechua using the bark of the cinchona tree to treat such shivering. While its effect in treating malaria (and malaria-induced shivering) was unrelated to its effect in controlling shivering from rigors, it was a successful medicine against malaria. At the first opportunity, Salumbrino sent a small quantity to Rome for testing as a malaria treatment. In the years that followed, cinchona bark, known as Jesuit's bark or Peruvian bark, became one of the most valuable commodities shipped from Peru to Europe. When King Charles II was cured of malaria at the end of the 17th Century with quinine, it became popular in London. It remained the antimalarial drug of choice until the 1940s, when other drugs took over.
The form of quinine most effective in treating malaria was found by Charles Marie de La Condamine in 1737. In 1820, French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou first isolated quinine from the bark of a tree in the genus Cinchona – probably Cinchona pubescens – and subsequently named the substance. The name was derived from the original Quechua (Inca) word for the cinchona tree bark, quina or quina-quina, which means "bark of bark" or "holy bark". Prior to 1820, the bark was dried, ground to a fine powder, and mixed into a liquid (commonly wine) in order to be drunk. Large-scale use of quinine as a malaria prophylaxis started around 1850. In 1853 Paul Briquet published a brief history and discussion of the literature on "quinquina".
Quinine played a significant role in the colonization of Africa by Europeans. The availability of quinine for treatment had been said to be the prime reason Africa ceased to be known as the "white man's grave". A historian said, "it was quinine's efficacy that gave colonists fresh opportunities to swarm into the Gold Coast, Nigeria and other parts of west Africa".
To maintain their monopoly on cinchona bark, Peru and surrounding countries began outlawing the export of cinchona seeds and saplings in the early 19th century. In 1865, Manuel Incra Mamani collected seeds from a plant particularly high in quinine and provided them to Charles Ledger. Ledger sent them to his brother, who sold them to the Dutch government. Mamani was arrested on a seed collecting trip in 1871, and beaten so severely, likely because of providing the seeds to foreigners, that he died soon afterwards.
By the late 19th century the Dutch grew the plants in Indonesian plantations. Soon they became the main suppliers of the tree. In 1913 they set up the Kina Bureau, a cartel of cinchona producers charged with controlling price and production. By the 1930s Dutch plantations in Java were producing 22 million pounds of cinchona bark, or 97% of the world's quinine production. U.S. attempts to prosecute the Kina Bureau proved unsuccessful.
During World War II, Allied powers were cut off from their supply of quinine when Germany conquered the Netherlands, and Japan controlled the Philippines and Indonesia. The US had obtained four million cinchona seeds from the Philippines and began operating cinchona plantations in Costa Rica. Additionally, they began harvesting wild cinchona bark during the Cinchona Missions. Such supplies came too late. Tens of thousands of US troops in Africa and the South Pacific died of malaria due to the lack of quinine. Despite controlling the supply, the Japanese did not make effective use of quinine, and thousands of Japanese troops in the southwest Pacific died as a result.
Quinine remained the antimalarial drug of choice until after World War II. Since then, other drugs that have fewer side effects, such as chloroquine, have largely replaced it.
Bromo Quinine were brand name cold tablets containing quinine, manufactured by Grove Laboratories. They were first marketed in 1889 and available until at least the 1960s.
Conducting research in central Missouri, John S. Sappington independently developed an anti-malaria pill from quinine. Sappington began importing cinchona bark from Peru in 1820. In 1832, using quinine derived from the cinchona bark, Sappington developed a pill to treat a variety of fevers, such as scarlet fever, yellow fever, and influenza in addition to malaria. These illnesses were widespread in the Missouri and Mississippi valleys. He manufactured and sold "Dr. Sappington's Anti-Fever Pills" across Missouri. Demand became so great that within three years, Sappington founded a company known as Sappington and Sons to sell his pills nationwide.
Society and culture
Natural occurrence
The bark of Remijia contains 0.5–2% quinine. It is cheaper than Cinchona bark and, because it has an intense taste, is used for making tonic water.
Regulation in the US
From 1969 to 1992, the US Food and Drug Administration (FDA) received 157 reports of health problems related to quinine use, including 23 which had resulted in death. In 1994, the FDA banned the marketing of over-the-counter quinine as a treatment for nocturnal leg cramps. Pfizer Pharmaceuticals had been selling the brand name Legatrin for this purpose. It is also sold as a softgel (by SmithKlineBeecham) as Q-vel. Doctors may still prescribe quinine, but the FDA has ordered firms to stop marketing unapproved drug products containing quinine. The FDA is also cautioning consumers about off-label use of quinine to treat leg cramps. Quinine is approved for treatment of malaria, but was also commonly prescribed to treat leg cramps and similar conditions. Because malaria is life-threatening, the risks associated with quinine use are considered acceptable when used to treat that condition.
Though Legatrin was banned by the FDA for the treatment of leg cramps, the drug manufacturer URL Mutual has branded a quinine-containing drug named Qualaquin. It is marketed as a treatment for malaria and is sold in the United States only by prescription. In 2004, the CDC reported only 1,347 confirmed cases of malaria in the United States.
Termination of pregnancy
For much of the 20th century, women's use of an overdose of quinine to deliberately terminate a pregnancy was a relatively common abortion method in various parts of the world, including China.
Cutting agent
Quinine is sometimes detected as a cutting agent in street drugs such as cocaine and heroin.
Other animals
Quinine is used as a treatment for Cryptocaryon irritans (commonly referred to as white spot, crypto or marine ich) infection of marine aquarium fish.
| Biology and health sciences | Drugs and pharmacology | null |
25312 | https://en.wikipedia.org/wiki/Quantum%20gravity | Quantum gravity | Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored, such as in the vicinity of black holes or similar compact astrophysical objects, as well as in the early stages of the universe moments after the Big Bang.
Three of the four fundamental forces of nature are described within the framework of quantum mechanics and quantum field theory: the electromagnetic interaction, the strong force, and the weak force; this leaves gravity as the only interaction that has not been fully accommodated. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which incorporates his theory of special relativity and deeply modifies the understanding of concepts like time and space. Although general relativity is highly regarded for its elegance and accuracy, it has limitations: the gravitational singularities inside black holes, the ad hoc postulation of dark matter, as well as dark energy and its relation to the cosmological constant are among the current unsolved mysteries regarding gravity, all of which signal the collapse of the general theory of relativity at different scales and highlight the need for a gravitational theory that goes into the quantum realm. At distances close to the Planck length, like those near the center of a black hole, quantum fluctuations of spacetime are expected to play an important role. Finally, the discrepancies between the predicted value for the vacuum energy and the observed values (which, depending on considerations, can be of 60 or 120 orders of magnitude) highlight the necessity for a quantum theory of gravity.
The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem of quantum gravity, the most popular being M-theory and loop quantum gravity. All of these approaches aim to describe the quantum behavior of the gravitational field, which does not necessarily include unifying all fundamental interactions into a single mathematical framework. However, many approaches to quantum gravity, such as string theory, try to develop a framework that describes all fundamental forces. Such a theory is often referred to as a theory of everything. Some of the approaches, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Other lesser-known but no less important theories include causal dynamical triangulation, noncommutative geometry, and twistor theory.
One of the difficulties of formulating a quantum gravity theory is that direct observation of quantum gravitational effects is thought to only appear at length scales near the Planck scale, around 10⁻³⁵ meters, a scale far smaller, and hence only accessible with far higher energies, than those currently available in high energy particle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories which have been proposed.
Thought experiment approaches have been suggested as a testing tool for quantum gravity theories. In the field of quantum gravity there are several open questions – e.g., it is not known how spin of elementary particles sources gravity, and thought experiments could provide a pathway to explore possible resolutions to these questions, even in the absence of lab experiments or physical observations.
In the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades. This field of study is called phenomenological quantum gravity.
Overview
Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make on how the universe works. General relativity models gravity as curvature of spacetime: in the slogan of John Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve." On the other hand, quantum field theory is typically formulated in the flat spacetime used in special relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. If one attempts to treat gravity as simply another quantum field, the resulting theory is not renormalizable. Even in the simpler case where the curvature of spacetime is fixed a priori, developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable.
It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes, and the origin of the universe.
One major obstacle is that for quantum field theory in curved spacetime with a fixed metric, bosonic/fermionic operator fields supercommute for spacelike separated points. (This is a way of imposing a principle of locality.) However, in quantum gravity, the metric is dynamical, so that whether two points are spacelike separated depends on the state. In fact, they can be in a quantum superposition of being spacelike and not spacelike separated.
Quantum mechanics and general relativity
Graviton
The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the graviton. These particles act as a force particle similar to the photon of the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires them to follow the quantum mechanical description of interacting theoretical spin-2 massless particles. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle. While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly.
Nonrenormalizability of gravity
General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory.
However, gravity is perturbatively nonrenormalizable. For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale.
On the other hand, in quantizing gravity there are, in perturbation theory, infinitely many independent parameters (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinite experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then every one of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all.
It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult, pursued in the asymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries.
Quantum gravity as an effective field theory
In an effective field theory, not all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally.
By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses. Another example is the calculation of the corrections to the Bekenstein-Hawking entropy formula.
Spacetime background dependence
A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is a complex idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in spacetime.
On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory.
String theory
String theory can be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to space-time in a dynamic way.
Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory that may exhibit a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence) which is a weak form of background dependence.
Background independent theories
Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory.
Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks.
Semi-classical quantum gravity
Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation.
Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles).
Problem of time
A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time. In contrast, general relativity treats time as a dynamical variable which relates directly with matter and moreover requires the Hamiltonian constraint to vanish. Because this variability of time has been observed macroscopically, it removes any possibility of employing a fixed notion of time, similar to the conception of time in quantum theory, at the macroscopic level.
Candidate theories
There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments become available.
String theory
The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time.
In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. As presently understood, however, string theory admits a very large number (10^500 by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge.
Loop quantum gravity
Loop quantum gravity seriously considers general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space.
The main result of loop quantum gravity is the derivation of a granular structure of space at the Planck length. This is derived from the following considerations: In the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum. Thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Thus area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space. It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory.
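As a concrete illustration (the standard loop-quantum-gravity result, quoted here schematically), the eigenvalues of the area operator for a surface punctured by spin-network links carrying half-integer labels j_i take the form

A = 8\pi \gamma\, \ell_{\mathrm{P}}^{2} \sum_{i} \sqrt{j_{i}\,(j_{i}+1)},

where \ell_{\mathrm{P}} is the Planck length and \gamma is the Immirzi parameter; the smallest nonzero eigenvalue sets an elementary quantum of area.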
The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks. Spin networks were initially introduced by Roger Penrose in abstract form, and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime.
The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields. In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps.
The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity. The analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined within the theory. In the covariant, or spinfoam formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks.
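Schematically, the Wheeler–DeWitt equation takes the form of a constraint rather than an evolution equation,

\hat{H}\,\Psi[h_{ij}] = 0,

where \Psi is a wave functional of the spatial three-metric h_{ij} and \hat{H} is the Hamiltonian constraint operator; the absence of an explicit time parameter here is one face of the problem of time discussed above.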
Other theories
There are a number of other approaches to quantum gravity. The theories differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified.
Experimental tests
As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s. However, since the 2000s, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory. Since theoretical development has been slow, the field of phenomenological quantum gravity, which studies the possibility of experimental tests, has obtained increased attention.
The most widely pursued possibilities for quantum gravity phenomenology include gravitationally mediated entanglement, violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations in the space-time foam. The latter scenario has been searched for in light from gamma-ray bursts and both astrophysical and atmospheric neutrinos, placing limits on phenomenological quantum gravity parameters.
ESA's INTEGRAL satellite measured polarization of photons of different wavelengths and was able to place a limit on the granularity of space of less than 10^-48 m, or 13 orders of magnitude below the Planck scale.
The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due to interstellar dust interference.
Quetzalcoatlus
Quetzalcoatlus is a genus of azhdarchid pterosaur that lived during the Maastrichtian age of the Late Cretaceous in North America. The type specimen, recovered in 1971 from the Javelina Formation of Texas, United States, consists of several wing fragments and was described as Quetzalcoatlus northropi in 1975 by Douglas Lawson. The first part of the name refers to the Aztec serpent god of the sky, Quetzalcōātl, while the second part honors Jack Northrop, designer of a tailless fixed-wing aircraft. The remains of a second species were found between 1972 and 1974, also by Lawson, around from the Q. northropi locality. In 2021, these remains were assigned the name Quetzalcoatlus lawsoni by Brian Andres and (posthumously) Wann Langston Jr., as part of a monograph on the genus.
Quetzalcoatlus northropi has gained fame as a candidate for the largest flying animal ever discovered, though estimating its size has been difficult due to the fragmentary nature of the only known specimen. Wingspan estimates over the years have ranged from , though this has more recently been narrowed down to around based on extrapolations from more complete members of the Azhdarchidae, the family to which Quetzalcoatlus belongs. The smaller and more complete Q. lawsoni had a wingspan of around . The proportions of Quetzalcoatlus were typical of azhdarchids, with a very long neck and beak, shortened non-wing digits that were well adapted for walking, and a very short tail.
Historical interpretations of the diet of Quetzalcoatlus have ranged from scavenging to skim-feeding like the modern skimmer bird. However, more recent research has found that it most likely hunted small prey on the ground, in a similar way to storks and ground hornbills. This has been dubbed the terrestrial stalking model and is thought to be a common feeding behavior among large azhdarchids. On the other hand, the second species, Q. lawsoni, appears to have been associated with alkaline lakes, and a diet of small aquatic invertebrates has been suggested. Similarly, while Q. northropi is speculated to have been fairly solitary, Q. lawsoni appears to have been highly gregarious (social).
For years it was uncertain how Quetzalcoatlus took off. Early models using a bipedal (two-legged) posture, such as that of Sankar Chatterjee and R.J. Templin in 2004, were heavily reliant on a relatively low body weight (about in Chatterjee and Templin's case) and struggled to explain how takeoff was achieved. Based on the work of Mark P. Witton and Michael Habib in 2010, it now seems likely that pterosaurs, especially larger taxa such as Quetzalcoatlus, launched quadrupedally (from a four-legged posture), using the powerful muscles of their forelimbs to propel themselves off the ground and into the air.
Research history and taxonomy
Discovery and naming
The genus Quetzalcoatlus is based on fossils discovered in rocks pertaining to the Late Cretaceous Javelina Formation in Big Bend National Park, Texas. Remains of dinosaurs and other prehistoric life had been found in the area since the beginning of the 20th century. The first Quetzalcoatlus fossils were discovered in 1971 by the graduate student Douglas A. Lawson while conducting field work for his Master's degree project on the paleoecology of the Javelina Formation. This field work was supervised by Wann Langston Jr., an experienced paleontologist who had been doing field work in the region since 1938 and since 1963 led expeditions through his position as curator at the Texas Science and Natural History Museum. The two had first visited the park together in March 1970, with Lawson discovering the first Tyrannosaurus rex fossil from Texas. Returning in 1971, Lawson discovered a bone while investigating an arroyo on the western edge of the park, and returned to Austin with a section of it. He and Langston then identified it as a pterosaur fossil based on its hollow internal structure with thin walls. Returning in November 1971 for further excavations, they were struck by the unprecedented size of the remains compared to known pterosaurs. The initial material consisted of a giant radius and ulna, two fused wristbones known as syncarpals, and the end of the wing finger. Altogether, the material comprised a partial left wing from an individual (specimen number TMM 41450-3) later estimated at over in wingspan. Lawson described the remains in his 1972 thesis as "Pteranodon gigas", and diagnosed it as being "nearly twice as large as any previously described species of Pteranodon". As a thesis is not recognized as a published work by the International Code of Zoological Nomenclature (ICZN), "Pteranodon gigas" is not a valid name. Further field work at the site was conducted in March 1973, when fragments were found alongside a long and delicate bone connected to an apparently larger element. This fossil was left in the ground until April 1974, when they fully excavated the larger element, a humerus. Due to the close association of discovered remains, Langston felt confident there was nothing more to be found at the site. Several later excavations of the site have indeed been unsuccessful.
Lawson announced his discovery in the journal Science in March of 1975, with a depiction of the animal's size compared to a large aircraft and a Pteranodon gracing the cover of the issue. Lawson wrote that it was "without doubt the largest flying animal presently known". He illustrated and briefly described the remains known at the time, but did not offer a name and indicated that a more extensive description was in preparation that would diagnose the species. In May, he submitted a short response to his original paper to the journal, considering how such an enormous animal could have flown. Within the paper, he briefly established the name Quetzalcoatlus northropi, but still did not provide a diagnosis or a more detailed description, which would later cause nomenclatural problems. Though not specified in the original publication, Lawson named the genus after the Aztec feathered serpent god Quetzalcōātl, while the specific name honors John Knudsen Northrop, the founder of Northrop Corporation, who drove the development of large tailless Northrop YB-49 aircraft designs resembling Quetzalcoatlus. The discovery of the giant pterosaur left a strong impression on both the scientific community and the general public, and was reported on throughout the world. It was featured in Time Magazine and appeared on the cover of Scientific American in 1981 alongside an article on pterosaurs by Langston. The species would come to be referenced in over 500 scientific publications, with Quetzalcoatlus northropi becoming the single most cited pterosaur species and Quetzalcoatlus the fourth most cited pterosaur genus after Pteranodon, Rhamphorhynchus, and Pterodactylus, much older genera with many more species than Quetzalcoatlus.
Prior to the announcement of the discovery, Langston had returned to Big Bend with a group of fossil preparators in February 1973, primarily aiming to excavate bones of the dinosaur Alamosaurus. One of the preparators, a young man named Bill Amaral who went on to be a respected field worker, had been skipping his lunches to conduct additional explorations of the area. He came across some additional fragments of pterosaur bone on a different portion of the ridge, around 40 kilometers away from the original site. Two more new sites quickly followed nearby, producing many fragments which the crew figured could be fit back together, in addition to a complete carpal and intact wing bone. Langston noted in his field notes that none of these bones suggested animals as large as Lawson's original specimen. Further remains came from Amaral's first site in April of 1974, after Lawson's site had been exhausted; a long neck vertebra and a pair of jawbones appeared. Associated structures were initially hoped to represent filamentous pycnofibres, but were later confirmed to be conifer needles. Near the end of the 1974 season, Langston stumbled over a much more complete pterosaur skeleton; it consisted of a wing, multiple vertebrae, a femur and multiple other long bones. They lacked time to fully excavate it, leaving it in the ground until the next field season. This area, where many smaller specimens began to emerge, came to be known as Pterodactyl Ridge. Two of the smaller individuals were reported in the first 1975 paper by Lawson, presumed to belong to the same species, though Langston would begin to question the idea they belonged to Q. northropi by the early 1980s. Excavations continued in 1976, and eight new specimens emerged in 1977; in 1979, despite complications due to losing the field notes from 1977, Langston discovered another new site that would produce an additional ten specimens. Most importantly, a humerus of the smaller animal was finally found, which Langston considered of great importance to understanding Quetzalcoatlus. Several further new localities followed in 1980, but 1981 proved less successful and Langston began to suspect the ridge may have been mostly depleted of pterosaur fossils. There was similarly little success in 1982, and visits during 1983 and 1985 proved to provide the last substantive discoveries of Quetzalcoatlus fossils. Langston returned in 1989, 1991, 1992, and 1996, but only found isolated bones and fragments, though eventually a handful of additional specimens were discovered by former student Thomas Lehman. A visit to Lawson's initial site during 1991 showed that all traces of excavation had by now eroded away. Langston would visit Big Bend for the last time in 1999, having concluded the pterosaur expeditions to focus on the excavation of two skulls of Deinosuchus, another famous fossil of the area.
Later research
The further description promised by Lawson never came. For the next 50 years, the material would remain under incomplete study, and few concrete anatomical details were documented within the literature. Much confusion surrounded the smaller individuals from Pterodactyl Ridge. In a 1981 article on pterosaurs, Langston expressed reservations about whether they were truly the same species as the immense Q. northropi. In the meantime, Langston focused on the animal's publicity. He worked on a life-sized gliding replica of Quetzalcoatlus northropi with aeronautical engineer Paul MacCready between 1981 and 1985, promoting it in a dedicated IMAX film. The model was created to understand the flight of the animal; prior to Lawson's discovery such a large flier was not thought possible, and the subject remained controversial at the time. Furthermore, the model was intended to allow people to experience the animal in a more dynamic manner than a mere static display or film. Around this time he also created a skeletal mount of the genus that was exhibited at the Texas Memorial Museum. The next scientific effort of note was a 1996 paper by Langston and pterosaur specialist Alexander Kellner. By this time, Langston was confident the smaller animals were a separate species. A full publication establishing such a species was still in preparation at the time, but due to the importance of the skull material for the understanding of azhdarchid anatomy, the skull anatomy was published first. In this publication, the animal was referred to as Quetzalcoatlus sp., a placeholder designation for material not assigned to any particular species. Once again, the planned further publication failed to materialize for decades, and Quetzalcoatlus sp. remained in limbo. A publication on the bioaeromechanics of the genus was also planned by Langston and James Cunningham, but this failed to materialize and the partially completed manuscript later became lost. Ultimately, a comprehensive publication on Quetzalcoatlus sp. would not appear before Langston's death in 2013. By this point he had produced many notes and individual descriptions, but had not begun writing any formal manuscript that could be published.
In 2021, a comprehensive description of the genus was finally published, the 19th entry in the Memoir series of special publications by the Society of Vertebrate Paleontology in the Journal of Vertebrate Paleontology. It consisted of five studies published together. Kevin Padian was the primary organizer of the project. A paper on the history of discoveries in Big Bend National Park was authored by Matthew J. Brown, Chris Sagebiel, and Brian Andres. It focused on curating a comprehensive list of specimens belonging to each species of Quetzalcoatlus and the locality information of each within the Big Bend. Thomas Lehman contributed a study on the palaeoenvironment that Quetzalcoatlus would have resided within, based upon work he had begun with Langston as early as 1993. Brian Andres published a study on the morphology and taxonomy of the genus, establishing the species Quetzalcoatlus lawsoni for the smaller animal that had gone for decades without a name. The specific name honoured Lawson, who discovered Quetzalcoatlus. Although Langston did not contribute directly to the written manuscript, the authors of the memoir and his family agreed that he should posthumously be considered a co-author of this paper, owing to the work's basis in the decades of research he dedicated to the subject. Also authored by Andres was a phylogenetic study of Quetzalcoatlus and its relationships within Pterosauria, with a focus on the persistence of many lineages into the Late Cretaceous contra classical interpretations of Quetzalcoatlus as the last of a dying lineage. Finally, a study on the functional morphology of the genus was authored by Padian, James Cunningham, and John Conway (who contributed scientific illustrations and cover art to the Memoir), with Langston once again considered a posthumous co-author due to his foundational work on the subject. The Memoir was prefaced by Brown and Padian, who once again emphasized their gratitude to Langston for his decades of work on the animal leading up to the publication.
Taxonomic history
Before the 2021 description, the status of the genus Quetzalcoatlus was problematic. Mark Witton and colleagues noted that the holotype of Q. northropi includes elements that are typically undiagnostic, and that the specimen was nearly identical to those of other giant azhdarchids such as the contemporary Hatzegopteryx from Romania. The genus could therefore be a nomen dubium or a synonym of Hatzegopteryx. However, these authors noted that the skull of the then-unnamed Q. lawsoni differed from Hatzegopteryx, suggesting that they are distinct taxa. An additional complication to these discussions is the likelihood that large pterosaurs such as Q. northropi could have made transcontinental flights, suggesting that locations as disparate as North America and Europe could have shared giant azhdarchid species.
Initially, it was assumed that the smaller specimens of Quetzalcoatlus were juvenile or subadult forms of the larger Q. northropi. Later, when more remains were found, the possibility emerged that they belonged to a separate species. This possible second species from Texas was provisionally referred to as a Quetzalcoatlus sp. by Alexander Kellner and Wann Langston Jr. in 1996, indicating that its status was too uncertain to give it a full new species name. The smaller specimens are more complete than the Q. northropi holotype and include four partial skulls. This species was named Q. lawsoni in 2021, in honor of the genus' original describer.
Reclassified or indeterminate fossils
In 1982, fragmentary azhdarchid remains, in the form of a wing phalanx, a partial femur, a vertebra and a tibia, were uncovered in strata from the Dinosaur Park Formation of Canada. While initially assigned to Quetzalcoatlus, the discovery of additional remains allowed its referral to a new genus and species, Cryodrakon boreas, in 2019. In 1986, jaws and cervical (neck) vertebrae from a pterosaur were uncovered in the Javelina Formation, from which the original specimens of Quetzalcoatlus originated. In his 1991 book on pterosaurs, Peter Wellnhofer assigned the remains to Quetzalcoatlus sp., leading to a reconstruction by illustrator John Sibbick of a Quetzalcoatlus with a short beak. In 2021, Brian Andres and Wann Langston Jr. dubbed the short-beaked Javelina pterosaur Wellnhopterus brevirostris, after Wellnhofer and the shortness of its beak. The same remains were named "Javelinadactylus sagebieli" in a now-retracted paper by Hebert Campos. An azhdarchid cervical (neck) vertebra (BMR P2002.2), discovered in 2002 in strata from the Maastrichtian age Hell Creek Formation (alongside a Tyrannosaurus rex specimen), was originally assigned to Quetzalcoatlus. In 2021, Andres and Langston Jr. determined that BMR P2002.2 was a putative azhdarchiform. A cervical vertebra (FSAC-OB 14) similar to that of Quetzalcoatlus is known from Morocco, and was described as aff. Quetzalcoatlus.
Description
Quetzalcoatlus northropi was among the largest azhdarchids, though it was rivalled in size by Arambourgiania and Hatzegopteryx (and possibly Cryodrakon). Azhdarchids can be divided into two primary categories: short-necked taxa with short, robust beaks (i.e. Bakonydraco, Hatzegopteryx, and Wellnhopterus), and long-necked taxa with longer, slenderer beaks (i.e. Zhejiangopterus). Of these, Quetzalcoatlus falls squarely into the latter. Based on the limb morphology of Q. lawsoni, related azhdarchids such as Zhejiangopterus, and pterosaurs at large, in addition to azhdarchid tracks from South Korea, Quetzalcoatlus was likely quadrupedal. As a pterosaur, Quetzalcoatlus would have been covered in hair-like pycnofibres, and had extensive wing-membranes, which would have been supported by a long wing-finger.
Size
Quetzalcoatlus is regarded as one of the largest pterosaurs, though its exact size has been difficult to determine. In 1975, Douglas Lawson compared the wing bones of Q. northropi to equivalent elements in Dsungaripterus and Pteranodon and suggested that it represented an individual with a wingspan of around , or, alternatively, or . Estimates put forward in subsequent years varied dramatically, ranging from , owing to differences in methodology. From the 1980s onwards, estimates were narrowed down to . More recent estimates based on greater knowledge of azhdarchid proportions place its wingspan at . This would approach the maximum size possible for azhdarchids, estimated at around at least without significant changes to Q. northropi's overall anatomy. In the 2021 Quetzalcoatlus redescription, Q. lawsoni was estimated to have a wingspan of around . In 2022, Gregory S. Paul estimated that a larger wingspan of and a body length of .
Body mass estimates for giant azhdarchids are problematic because no existing species shares a similar size or body plan, and in consequence, published results vary widely. Crawford Greenewalt gave mass estimates of between for Q. northropi, with the former figure assuming a small wingspan of , and the latter assuming a far larger wingspan of . A majority of estimates published since the 2000s have hovered around , due largely to a greater understanding of how aberrant the anatomy of azhdarchids was in comparison to other pterosaur clades. In 2021, Kevin Padian and his colleagues estimated that Q. lawsoni would have weighed , while a year later, Gregory S. Paul estimated a body mass of .
Skull
A complete skull is not known from either Quetzalcoatlus species, so reconstructions necessarily draw from the eight specimens of Q. lawsoni that preserve skull elements. The skull of Q. lawsoni, based on the length of the mandible, was about long. Like other azhdarchoids, it had a long, toothless beak that consisted largely of the premaxilla and maxilla. The nasoantorbital fenestra, an opening combining the external naris (which housed the nostril) and antorbital fenestra, was very large, with more than 40% of its height being above the orbit. The orbit is small and obovate (an inverted egg shape). At the base of the beak, formed from the premaxilla, was a crest, referred to by some authors as a sagittal crest. Though its exact form has yet to be determined, due to the poor preservation of Q. lawsoni's posterior skull, two distinct morphotypes have been suggested: one with a square sagittal crest and a tall nasoantorbital fenestra, and one with a more semicircular sagittal crest and a shorter nasoantorbital fenestra. The beak was long and slender. Its tip is not preserved in any specimen, so it is not clear how it was shaped. The beak likely had a gape of around 52 degrees. The mandibular symphyses would have widened slightly as the jaw opened, widening it to a certain degree, which has led to suggestions of some sort of gular pouch.
Postcrania
Quetzalcoatlus had nine elongated cervical (neck) vertebrae that were compressed dorsoventrally (top to bottom), and accordingly better suited for dorsoventral motion than lateral (side-to-side) motion. However, the lateral range of motion was still extensive, and the neck and head could swing left and right about 180 degrees. Like in other azhdarchoids, the cervical vertebrae overall were very low, with neural arches that were essentially within the centrum. In most azhdarchids, the neural spine of the seventh cervical vertebra was fairly long. This was not the case in Q. lawsoni, where the neural spine was shorter. Internally, the cervical vertebrae were supported by trabeculae (bony struts) that increased their buckling load by 90%. This may have been an adaptation to counteract shearing forces exerted on the neck while in flight, though it may also have enabled neck-bashing behaviors like those seen in giraffes.
In azhdarchids like Quetzalcoatlus, the torso was proportionally small, about half as long again as the humerus. In Quetzalcoatlus specifically, the vertebrae at the base of the neck and the pectoral girdle (shoulder girdle) are poorly known. The first four dorsal (back) vertebrae are fused into a notarium, as in some other pterosaurs and birds: the vertebral count of the notarium is unlike Zhejiangopterus, which may be a close relative, but like Azhdarcho. Most other dorsal vertebrae are absent, except for those integrated into the sacrum. Seven true sacral vertebrae are preserved. No caudal (tail) vertebrae are preserved. The pelvis of one Q. lawsoni specimen (TMM 41954-57) is large compared to that of other specimens, with deep posterior (rear) emargination and no preserved symphysis. This suggests sexual dimorphism similar to that suggested for other monofenestratans (i.e. Darwinopterus, Anhanguera and Nyctosaurus).
Quetzalcoatlus and other azhdarchids have forelimb and hind limb proportions more similar to modern running ungulate mammals than to members of other pterosaur clades, implying that they were uniquely suited to a terrestrial lifestyle. The wings were short and broad, and like in all pterosaurs, forelimb musculature was extensive. Flapping power came from several muscle groups on the torso, forearm and manus (hand). The humerus was short and robust, with considerable mobility. Its morphology differs somewhat between species, with Q. lawsoni's humerus having a proportionally shorter deltopectoral crest, and Q. northropi's being shaped more like a twisted hourglass. The ulna of Q. northropi was relatively shorter than that of Q. lawsoni, measuring 1.36 times the length of the humerus, as opposed to 1.52 times the length of the humerus in Q. lawsoni and other azhdarchiforms. The wing finger may have been held between the elbow and the torso whilst on land. The first digit is the smallest, and the third is the biggest, with the exception of the wing finger. The femur was significantly more gracile than the humerus, though is still among the most robust bones in Quetzalcoatlus' skeleton, judging by Q. lawsoni. Azhdarchids overall had fairly narrow feet, no longer than 30% of the length of the tibia, which may have borne fleshy pads, similar to those of tapejarids. Q. lawsoni possessed well-developed pedal (foot) unguals, which supported moderately curved claws, shorter and slightly straighter than those of tapejarids.
Classification
When describing Quetzalcoatlus in 1975, Douglas Lawson and Crawford Greenewalt opted not to assign it to a clade more specific than Pterodactyloidea, though comparisons with Arambourgiania (then Titanopteryx) from Jordan had been drawn earlier that year. In 1984, Lev Alexandrovich Nessov erected the subfamily Azhdarchinae within Pteranodontidae to contain Azhdarcho, Quetzalcoatlus, and Titanopteryx. Unaware of that subfamily, in the same year, Kevin Padian erected the family Titanopterygiidae to accommodate Quetzalcoatlus and Titanopteryx, defining it based on the length and general morphology of the cervical vertebrae. Two years later, in 1986, noting commonalities not only in contained genera but in diagnostic features, he rendered Titanopterygiidae a junior synonym of Azhdarchinae, elevating the latter to family level and forming the family Azhdarchidae. In 2003, the clade Azhdarchoidea was defined by David Unwin. Azhdarchids were determined to form a clade, Neoazhdarchia, with Tapejaridae. Montanazhdarcho from North America and Zhejiangopterus from China were incorporated into Azhdarchidae. In the supplementary material for their 2014 paper describing Kryptodrakon progenitor, Andres, James Clark and Xing Xu named a new subfamily, Quetzalcoatlinae, of which Quetzalcoatlus is the type genus.
The relationship between Quetzalcoatlus and other giant azhdarchids, like Arambourgiania and Hatzegopteryx, is not certain. In 2021, Brian Andres recovered them as sister taxa, with Arambourgiania being the sister taxon of Quetzalcoatlus and Hatzegopteryx being slightly more basal. However, Rodrigo V. Pêgas et al., in 2022, instead recovered Quetzalcoatlus as part of one of two quetzalcoatline branches, alongside Cryodrakon; the other giant azhdarchid genera were recovered on the other branch. A similar dichotomy was recovered by Leonardo Ortiz David et al. that same year, with the inclusion of Thanatosdrakon as Quetzalcoatlus' sister genus.
The first of the below phylogenetic analyses shows the results of Andres (2021). The second shows the results of Ortiz David et al. (2022).
Topology 1: Andres (2021).
Topology 2: Ortiz David and colleagues (2022).
Paleobiology
Feeding and ecological niche
In 2008, Mark Witton and Darren Naish pointed out that although azhdarchids have historically been considered to have been scavengers, probers of sediment, swimmers, waders, aerial predators, or stork-like generalists, most researchers until that point had considered them to have been skim-feeders living in coastal settings, which fed by trawling their lower jaws through water while flying and catching prey from the surface (like skimmers and some terns). In general, pterosaurs have historically been considered marine piscivores (fish-eaters), and despite their unusual anatomy, azhdarchids have been assumed to have occupied the same ecological niche. Witton and Naish noted that evidence for this mode of feeding lacked support from azhdarchid anatomy and functional morphology; they lacked cranial features such as sideways compressed lower jaws and the shock-absorbing adaptations required, and their jaws instead appear to have been almost triangular in cross-section, unlike those of skim-feeders and probers.
Witton and Naish instead stated that azhdarchids probably inhabited inland environments, based on the taphonomic contexts their fossils have been found in (more than half the fossils surveyed were from, for example, fluvial or alluvial deposits, and most of the marine occurrences also had fossils of terrestrial lifeforms), and their morphology made them ill-suited for lifestyles other than wading and foraging terrestrially, though their feet were relatively small, slender, and padded, not well suited for wading either. These researchers instead argued that azhdarchids were similar to storks or ground hornbills, generalists they termed "terrestrial stalkers" that foraged in different kinds of environments for small animals and carrion, supported by their apparent proficiency on the ground and relatively inflexible necks. Witton and Naish suggested that their more generalist lifestyle could explain the group's resilience compared to other pterosaur lineages, which were not thought to have survived until the late Maastrichtian like the azhdarchids did (pterosaurs went extinct along with the non-bird dinosaurs during the Cretaceous-Paleogene extinction event 66 million years ago).
Witton elaborated in a 2013 book that the proportions of azhdarchids would have been consistent with them striding through vegetated areas with their long limbs, and their downturned skull and jaws reaching the ground. Their long, stiffened necks would be an advantage as they would help in lowering and raising the head, give it a vantage point when searching for prey, and enable them to grab small animals and fruit. In a 2021 study, Labita and Martill noted that azhdarchids might have been less terrestrial than suggested by Witton and Naish, since azhdarchid fossils were known from marine strata, such as Phosphatodraco from Morocco and Arambourgiania from the phosphates of Jordan. They noted that no azhdarchids had been found in truly terrestrial strata, and proposed they could instead have been associated with aquatic environments, such as rivers, lakes, marine and off-shore settings.
Q. northropi is found in plains deposits, and due to the paucity and location of its remains, was speculated by Thomas Lehman to have been a solitary hunter that favoured riparian environments. Q. lawsoni, however, is found in great numbers in facies that likely represent alkaline lakes. It may have lived like modern gregarious wading birds, feeding on small invertebrates such as annelids, crustaceans and insects that inhabit such environments. The two species, if contemporaneous, were likely separated by such behavioural and ecological differences. The internal anatomy of Q. northropi's cervical vertebrae suggests that it, and other giant azhdarchids, may have been able to pick up prey animals weighing without issue, though prey size would have been limited by the size of Q. northropi's skull and gullet rather than body mass.
Pterosaurs are generally thought to have gone gradually extinct by decreasing in diversity towards the end of the Cretaceous, but Longrich and colleagues suggested this impression could be a result of the poor fossil record for pterosaurs (the Signor-Lipps effect). Pterosaurs during this time had increased niche-partitioning compared to earlier faunas from the Santonian and Campanian ages, and they were able to outcompete birds in niches based on large size, and birds therefore remained small, not exceeding wingspans during the Late Cretaceous (most pterosaurs during this time had larger wingspans, and thereby avoided the small-size niche). To these researchers, this indicated that the extinction of pterosaurs was abrupt instead of gradual, caused by the catastrophic Chicxulub impact. Their extinction freed up more niches that were then filled by birds, which led to their evolutionary radiation in the Early Cenozoic.
Locomotion
Witton summarized ideas about azhdarchid flight abilities in 2013, and noted they had generally been considered adapted for soaring, although some have found it possible their musculature allowed flapping flight like in swans and geese. Their short and potentially broad wings may have been suited for flying in terrestrial environments, as this is similar to some large, terrestrially soaring birds. Albatross-like soaring has also been suggested, but Witton thought this unlikely due to the supposed terrestrial bias of their fossils and adaptations for foraging on the ground. Studies of azhdarchid flight abilities indicate they would have been able to fly for long distances and probably fast (especially if they had an adequate amount of fat and muscle as nourishment), so that geographical barriers would not present obstacles.
Azhdarchids are also the only group of pterosaurs to which trackways have been assigned, such as Haenamichnus from Korea, which matches this group in shape, age, and size. One long trackway of this kind shows that azhdarchids walked with their limbs held directly underneath their body, and along with the morphology of their feet indicates they were more proficient on the ground than other pterosaurs. According to Witton, their proportions indicate they were not good swimmers, and though they could probably launch from water, they were not as good at this as some other pterosaur groups.
Terrestrial locomotion in azhdarchids like Quetzalcoatlus likely involved a pacing gait, wherein the limbs on one side of the body would move at the same time, followed by those of the opposite side. For example, the forelimb on one side of the body would lift off the ground and move forward first, to avoid colliding with the hind foot, and the hind limb would follow suit. The forefoot would be planted in the ground just before the hind foot. Once the stride completed, the same process would repeat on the opposite side of the body.
Flight
The nature of flight in Quetzalcoatlus and other giant azhdarchids was poorly understood until serious biomechanical studies were conducted in the 21st century. A 1984 experiment by Paul MacCready used practical aerodynamics to test the flight of Quetzalcoatlus. MacCready constructed a model flying machine, or ornithopter, with a simple computer functioning as an autopilot. The model successfully flew with a combination of soaring and wing flapping. The model was based on a then-current weight estimate of around , far lower than more modern estimates of over . The method of flight in these pterosaurs depends largely on their weight, which has been controversial, and widely differing masses have been favored by different scientists. Some researchers have suggested that these animals employed slow, soaring flight, while others have concluded that their flight was fast and dynamic. In 2010, Donald Henderson argued that the mass of Q. northropi had been underestimated, even by the highest estimates, and that it was too massive to have achieved powered flight. He estimated it in his 2010 paper as , and argued that it may have been flightless.
Other flight capability estimates have disagreed with Henderson's research, suggesting instead an animal superbly adapted to long-range, extended flight. In 2010, Mike Habib, a professor of biomechanics at Chatham University, and Mark Witton, a British paleontologist, undertook further investigation into the claims of flightlessness in large pterosaurs. After factoring wingspan, body weight, and aerodynamics, computer modeling led the two researchers to conclude that Q. northropi was capable of flight up to for 7 to 10 days at altitudes of . Habib further suggested a maximum flight range of for Q. northropi. Henderson's work was also further criticized by Witton and Habib in another study, which pointed out that, although Henderson used excellent mass estimations, they were based on outdated pterosaur models, which caused Henderson's mass estimations to be more than double what Habib used in his estimations and that anatomical study of Q. northropi and other big pterosaur forelimbs showed a higher degree of robustness than would be expected if they were purely quadrupedal. This study proposed that large pterosaurs most likely utilized a short burst of powered flight to then transition to thermal soaring. However, a study from 2022 suggests that they would only have flown occasionally and for short distances, like the Kori bustard (the world's heaviest bird that actively flies) and that they were not able to soar at all. Studies of Q. northropi and Q. lawsoni published in 2021 by Kevin Padian et al. instead suggested that Quetzalcoatlus was a very powerful flier.
Launching
Early interpretations of Quetzalcoatlus launching relied on bipedal models. In 2004, Sankar Chatterjee and R.J. Templin used a model and utilised a running launch cycle powered by the hind limbs, in which Q. northropi was only barely able to take off. In 2008, Michael Habib suggested that the only feasible takeoff method for a Quetzalcoatlus was one that was mainly powered by the forelimbs. In 2010, Mark Witton and Habib noted that the femur of Quetzalcoatlus was only a third as strong as what would be expected from a bird of equal size, whereas the humerus is considerably stronger, and affirmed that an azhdarchid the size of Quetzalcoatlus would have great difficulty taking off bipedally. Thus, they considered a quadrupedal launching method, with the forelimbs applying most of the necessary force, a likelier method of takeoff. In 2021, Kevin Padian et al. attempted to resurrect the bipedal launch model, using a comparatively light weight estimate of . They suggested that Quetzalcoatlus' hind limbs were more powerful than previously suggested, and that they were strong enough to launch its body as high as off the ground without the aid of the forelimbs. A large breastbone would support the necessary muscles to create a flight stroke, allowing Quetzalcoatlus to gain enough clearance to begin the downstrokes needed for takeoff. Padian et al. also suggested that the legs and feet were likely tucked under the body during flight, as in modern birds.
Paleoenvironment
Quetzalcoatlus is known from the Lancian portion of the Javelina Formation, in a fauna dominated by Alamosaurus. It co-existed with another azhdarchid known as Wellnhopterus, as well as an additional pterosaur taxon, suggesting a relatively high diversity of Late Cretaceous pterosaur genera. The depositional environment represents a floodplain which was probably semi-arid, analogous in terms of climate and flora to the coastal plains of southern Mexico, consisting of an evergreen or semideciduous tropical forest. These forests consisted largely of angiosperm trees such as Javelinoxylon, conifers related to the modern Araucaria, and woody vines, with a closed canopy in excess of in height. The remains of both Quetzalcoatlus species are found in association with freshwater environments. Q. lawsoni, in particular, is strongly associated with abandoned channel and lake facies, which are rare in the Javelina Formation. These facies preserve a diverse fauna of gastropods and bivalves, though the vertebrate fauna known from other aquatic environments belonging to the Javelina, such as crocodiles, fishes and turtles, is absent. This suggests that the environment was inhospitable compared to normal stream channels, and high carbonate precipitation suggests that the water may have been highly alkaline. Eggshell fragments that may be attributable to Quetzalcoatlus suggest that they may have nested around alkaline lakes.
Cultural significance
In 1975, artist Giovanni Caselli depicted Quetzalcoatlus as a small-headed scavenger with an extremely long neck in the book The Evolution and Ecology of the Dinosaurs by British paleontologist Beverly Halstead. Over the next twenty-five years, prior to further discoveries, this image inspired similar depictions in various books, colloquially known as "paleomemes", as noted by Darren Naish.
In 1985, the US Defense Advanced Research Projects Agency (DARPA) and AeroVironment used Q. northropi as the basis for an experimental ornithopter unmanned aerial vehicle (UAV). They produced a half-scale model weighing , with a wingspan of . Coincidentally, Douglas A. Lawson, who discovered Q. northropi in Texas in 1971, named it after John "Jack" Northrop, a developer of tailless flying wing aircraft in the 1940s. The replica of Q. northropi incorporates a "flight control system/autopilot which processes pilot commands and sensor inputs, implements several feedback loops, and delivers command signals to its various servo-actuators". It is on exhibit at the National Air and Space Museum.
In 2010, several life-sized models of Q. northropi were put on display on London's South Bank as the centerpiece exhibit for the Royal Society's 350th-anniversary exhibition. The models, which included both flying and standing individuals with wingspans of over , were intended to help build public interest in science. The models were created by scientists from the University of Portsmouth.
Quantum entanglement
Quantum entanglement is the phenomenon of a group of particles being generated, interacting, or sharing spatial proximity in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical physics and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.
Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles can, in some cases, be found to be perfectly correlated. For example, if a pair of entangled particles is generated such that their total spin is known to be zero, and one particle is found to have clockwise spin on a first axis, then the spin of the other particle, measured on the same axis, is found to be anticlockwise. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a particle's properties results in an apparent and irreversible wave function collapse of that particle and changes the original quantum state. With entangled particles, such measurements affect the entangled system as a whole.
Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, and several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior impossible, as it violated the local realism view of causality (Einstein referring to it as "spooky action at a distance") and argued that the accepted formulation of quantum mechanics must therefore be incomplete.
Later, however, the counterintuitive predictions of quantum mechanics were verified in tests where polarization or spin of entangled particles were measured at separate locations, statistically violating Bell's inequality. This established that the correlations produced from quantum entanglement cannot be explained in terms of local hidden variables, i.e., properties contained within the individual particles themselves.
However, despite the fact that entanglement can produce statistical correlations between events in widely separated places, it cannot be used for faster-than-light communication.
Quantum entanglement has been demonstrated experimentally with photons, electrons, top quarks, molecules and even small diamonds. The use of quantum entanglement in communication and computation is an active area of research and development.
History
Albert Einstein and Niels Bohr engaged in a long-running collegial dispute about the meaning of quantum mechanics, now known as the Bohr–Einstein debates. During these debates, Einstein introduced a thought experiment about a box that emits a photon. He noted that the experimenter's choice of what measurement to make upon the box will change what can be predicted about the photon, even if the photon is very far away. This argument, which Einstein had formulated by 1931, was an early recognition of the phenomenon that would later be called entanglement. That same year, Hermann Weyl observed in his textbook on group theory and quantum mechanics that quantum systems made of multiple interacting pieces exhibit a kind of Gestalt, in which "the whole is greater than the sum of its parts". In 1932, Erwin Schrödinger wrote down the defining equations of quantum entanglement but set them aside, unpublished. In 1935, Grete Hermann studied the mathematics of an electron interacting with a photon and noted the phenomenon that would come to be called entanglement. Later that same year, Einstein, Boris Podolsky and Nathan Rosen published a paper on what is now known as the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment that attempted to show that "the quantum-mechanical description of physical reality given by wave functions is not complete". Their thought experiment had two systems interact, then separate, and they showed that afterwards quantum mechanics cannot describe the two systems individually.
Shortly after this paper appeared, Erwin Schrödinger wrote a letter to Einstein in German in which he used the word Verschränkung (translated by himself as entanglement) to describe situations like that of the EPR scenario. Schrödinger followed up with a full paper defining and discussing the notion of entanglement, saying "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought."
Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity. Einstein later referred to the effects of entanglement as "spukhafte Fernwirkung" or "spooky action at a distance", meaning the acquisition of a value of a property at one location resulting from a measurement at a distant location.
In 1946, John Archibald Wheeler suggested studying the polarization of pairs of gamma-ray photons produced by electron–positron annihilation. Chien-Shiung Wu and I. Shaknov carried out this experiment in 1949, thereby demonstrating that the entangled particle pairs considered by EPR could be created in the laboratory.
Despite Schrödinger's claim of its importance, little work on entanglement was published for decades after his paper was published. In 1964 John S. Bell demonstrated an upper limit, seen in Bell's inequality, regarding the strength of correlations that can be produced in any theory obeying local realism, and showed that quantum theory predicts violations of this limit for certain entangled systems. His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Stuart Freedman and John Clauser in 1972 and Alain Aspect's experiments in 1982.
While Bell actively discouraged students from pursuing work like his as too esoteric, after a talk at Oxford a student named Artur Ekert suggested that the violation of a Bell inequality could be used as a resource for communication. Ekert followed up by publishing a quantum key distribution protocol called E91 based on it.
In 1992, the entanglement concept was leveraged to propose quantum teleportation, an effect that was realized experimentally in 1997.
Beginning in the mid-1990s, Anton Zeilinger used the generation of entanglement via parametric down-conversion to develop entanglement swapping and demonstrate quantum cryptography with entangled photons.
In 2022, the Nobel Prize in Physics was awarded to Aspect, Clauser, and Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science".
Concept
Meaning of entanglement
Just as energy is a resource that facilitates mechanical operations, entanglement is a resource that facilitates performing tasks that involve communication and computation. The mathematical definition of entanglement can be paraphrased as saying that maximal knowledge about the whole of a system does not imply maximal knowledge about the individual parts of that system. If the quantum state that describes a pair of particles is entangled, then the results of measurements upon one half of the pair can be strongly correlated with the results of measurements upon the other. However, entanglement is not the same as "correlation" as understood in classical probability theory and in daily life. Instead, entanglement can be thought of as potential correlation that can be used to generate actual correlation in an appropriate experiment. The correlations generated from an entangled quantum state cannot in general be replicated by classical probability.
An example of entanglement is a subatomic particle that decays into an entangled pair of other particles. The decay events obey the various conservation laws, and as a result, the measurement outcomes of one daughter particle must be highly correlated with the measurement outcomes of the other daughter particle (so that the total momenta, angular momenta, energy, and so forth remains roughly the same before and after this process). For instance, a spin-zero particle could decay into a pair of spin-1/2 particles. Since the total spin before and after this decay must be zero (by the conservation of angular momentum), whenever the first particle is measured to be spin up on some axis, the other, when measured on the same axis, is always found to be spin down. This is called the spin anti-correlated case; and if the prior probabilities for measuring each spin are equal, the pair is said to be in the singlet state. Perfect anti-correlations like this could be explained by "hidden variables" within the particles. For example, we could hypothesize that the particles are made in pairs such that one carries a value of "up" while the other carries a value of "down". Then, knowing the result of the spin measurement upon one particle, we could predict that the other will have the opposite value. Bell illustrated this with a story about a colleague, Bertlmann, who always wore socks with mismatching colors. "Which colour he will have on a given foot on a given day is quite unpredictable," Bell wrote, but upon observing "that the first sock is pink you can be already sure that the second sock will not be pink." Revealing the remarkable features of quantum entanglement requires considering multiple distinct experiments, such as spin measurements along different axes, and comparing the correlations obtained in these different configurations.
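The spin anti-correlated pair described above is conventionally written as the singlet state

|\psi\rangle = \frac{1}{\sqrt{2}}\left( |{\uparrow}\rangle_{1} |{\downarrow}\rangle_{2} - |{\downarrow}\rangle_{1} |{\uparrow}\rangle_{2} \right),

for which spin measurements along the same axis on the two particles always yield opposite results, whichever axis is chosen.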
Quantum systems can become entangled through various types of interactions. For some ways in which entanglement may be achieved for experimental purposes, see the section below on methods. Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made. In more detail, this process involves the particles becoming entangled with the environment, as a consequence of which, the quantum state describing the particles themselves is no longer entangled.
Mathematically, an entangled system can be defined to be one whose quantum state cannot be factored as a product of states of its local constituents; that is to say, they are not individual particles but are an inseparable whole. When entanglement is present, one constituent cannot be fully described without considering the other(s). The state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum cannot be written as a single product term.
Paradox
The singlet state described above is the basis for one version of the EPR paradox. In this variant, introduced by David Bohm, a source emits particles and sends them in opposite directions. The state describing each pair is entangled. In the standard textbook presentation of quantum mechanics, performing a spin measurement on one of the particles causes the wave function for the whole pair to collapse into a state in which each particle has a definite spin (either up or down) along the axis of measurement. The outcome is random, with each possibility having a probability of 50%. However, if both spins are measured along the same axis, they are found to be anti-correlated. This means that the random outcome of the measurement made on one particle seems to have been transmitted to the other, so that it can make the "right choice" when it too is measured.
The distance and timing of the measurements can be chosen so as to make the interval between the two measurements spacelike, hence, any causal effect connecting the events would have to travel faster than light. According to the principles of special relativity, it is not possible for any information to travel between two such measuring events. It is not even possible to say which of the measurements came first. For two spacelike separated events $x_1$ and $x_2$, there are inertial frames in which $x_1$ is first and others in which $x_2$ is first. Therefore, the correlation between the two measurements cannot be explained as one measurement determining the other: different observers would disagree about the role of cause and effect.
Failure of local hidden-variable theories
A possible resolution to the paradox is to assume that quantum theory is incomplete, and the result of measurements depends on predetermined "hidden variables". The state of the particles being measured contains some hidden variables, whose values effectively determine, right from the moment of separation, what the outcomes of the spin measurements are going to be. This would mean that each particle carries all the required information with it, and nothing needs to be transmitted from one particle to the other at the time of measurement. Einstein and others (see the previous section) originally believed this was the only way out of the paradox, and the accepted quantum mechanical description (with a random measurement outcome) must be incomplete.
Local hidden variable theories fail, however, when measurements of the spin of entangled particles along different axes are considered. If a large number of pairs of such measurements are made (on a large number of pairs of entangled particles), then statistically, if the local realist or hidden variables view were correct, the results would always satisfy Bell's inequality. A number of experiments have shown in practice that Bell's inequality is not satisfied. Moreover, when measurements of the entangled particles are made in moving relativistic reference frames, in which each measurement (in its own relativistic time frame) occurs before the other, the measurement results remain correlated.
The fundamental issue about measuring spin along different axes is that these measurements cannot have definite values at the same time―they are incompatible in the sense that these measurements' maximum simultaneous precision is constrained by the uncertainty principle. This is contrary to what is found in classical physics, where any number of properties can be measured simultaneously with arbitrary accuracy. It has been proven mathematically that compatible measurements cannot show Bell-inequality-violating correlations, and thus entanglement is a fundamentally non-classical phenomenon.
Nonlocality and entanglement
As discussed above, entanglement is necessary to produce a violation of a Bell inequality. However, the mere presence of entanglement alone is insufficient, as Bell himself noted in his 1964 paper. This is demonstrated, for example, by Werner states, which are a family of states describing pairs of particles. For appropriate choices of the single parameter that identifies a given Werner state, the Werner states exhibit entanglement. Yet pairs of particles described by Werner states always admit a local hidden variable model. In other words, these states cannot power the violation of a Bell inequality, despite possessing entanglement. This can be generalized from pairs of particles to larger collections as well.
The violation of Bell inequalities is often called quantum nonlocality. This term is not without controversy. It is sometimes argued that using the term nonlocality carries the unwarranted implication that the violation of Bell inequalities must be explained by physical, faster-than-light signals. In other words, the failure of local hidden-variable models to reproduce quantum mechanics is not necessarily a sign of true nonlocality in quantum mechanics itself. Despite these reservations, the term nonlocality has become a widespread convention.
The term nonlocality is also sometimes applied to other concepts besides the nonexistence of a local hidden-variable model, such as whether states can be distinguished by local measurements. Moreover, quantum field theory is often said to be local because observables defined within spacetime regions that are spacelike separated must commute. These other uses of local and nonlocal are not discussed further here.
Mathematical details
The following subsections use the formalism and theoretical framework developed in the articles bra–ket notation and mathematical formulation of quantum mechanics.
Pure states
Consider two arbitrary quantum systems $A$ and $B$, with respective Hilbert spaces $H_A$ and $H_B$. The Hilbert space of the composite system is the tensor product $H_A \otimes H_B$.
If the first system is in state $|\psi\rangle_A$ and the second in state $|\phi\rangle_B$, the state of the composite system is $|\psi\rangle_A \otimes |\phi\rangle_B$.
States of the composite system that can be represented in this form are called separable states, or product states. However, not all states of the composite system are separable. Fix a basis $\{|i\rangle_A\}$ for $H_A$ and a basis $\{|j\rangle_B\}$ for $H_B$. The most general state in $H_A \otimes H_B$ is of the form
$$|\psi\rangle_{AB} = \sum_{i,j} c_{ij}\, |i\rangle_A \otimes |j\rangle_B.$$
This state is separable if there exist vectors $[c^A_i]$, $[c^B_j]$ so that $c_{ij} = c^A_i c^B_j$, yielding $|\psi\rangle_A = \sum_i c^A_i |i\rangle_A$ and $|\phi\rangle_B = \sum_j c^B_j |j\rangle_B$. It is inseparable if, for any choice of vectors $[c^A_i]$ and $[c^B_j]$, at least one pair of coordinates satisfies $c_{ij} \neq c^A_i c^B_j$. If a state is inseparable, it is called an 'entangled state'.
For example, given two basis vectors $\{|0\rangle_A, |1\rangle_A\}$ of $H_A$ and two basis vectors $\{|0\rangle_B, |1\rangle_B\}$ of $H_B$, the following is an entangled state:
$$\tfrac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B\right).$$
If the composite system is in this state, it is impossible to attribute to either system $A$ or system $B$ a definite pure state. Another way to say this is that while the von Neumann entropy of the whole state is zero (as it is for any pure state), the entropy of the subsystems is greater than zero. In this sense, the systems are "entangled". The above example is one of four Bell states, which are (maximally) entangled pure states (pure states of the $H_A \otimes H_B$ space, but which cannot be separated into pure states of each $H_A$ and $H_B$).
Now suppose Alice is an observer for system $A$, and Bob is an observer for system $B$. If in the entangled state given above Alice makes a measurement in the $\{|0\rangle, |1\rangle\}$ eigenbasis of $A$, there are two possible outcomes, occurring with equal probability: Alice can obtain the outcome 0, or she can obtain the outcome 1. If she obtains the outcome 0, then she can predict with certainty that Bob's result will be 1. Likewise, if she obtains the outcome 1, then she can predict with certainty that Bob's result will be 0. In other words, the results of measurements on the two qubits will be perfectly anti-correlated. This remains true even if the systems $A$ and $B$ are spatially separated. This is the foundation of the EPR paradox.
The outcome of Alice's measurement is random. Alice cannot decide which state to collapse the composite system into, and therefore cannot transmit information to Bob by acting on her system. Causality is thus preserved, in this particular scheme. For the general argument, see no-communication theorem.
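To make the separability condition above concrete, here is a minimal numerical sketch (not part of the original article; it assumes Python with NumPy, and the function name is illustrative). It reshapes the coefficients $c_{ij}$ of a two-qubit pure state into a matrix and inspects its singular values (the Schmidt coefficients): a product state has exactly one non-zero singular value, while an entangled state has more than one.

```python
# Illustrative sketch, not from the source article.
import numpy as np

def schmidt_coefficients(state, dim_a=2, dim_b=2):
    """Singular values of the coefficient matrix c_ij of a bipartite pure state."""
    c = np.asarray(state, dtype=complex).reshape(dim_a, dim_b)
    return np.linalg.svd(c, compute_uv=False)

# The entangled state (|0>|1> - |1>|0>)/sqrt(2) from the text, in the |00>,|01>,|10>,|11> basis
entangled = np.array([0, 1, -1, 0]) / np.sqrt(2)
# A product state |0> (x) (|0> + |1>)/sqrt(2)
product = np.kron([1, 0], [1, 1]) / np.sqrt(2)

print(schmidt_coefficients(entangled))   # ~[0.707, 0.707]: two Schmidt terms, entangled
print(schmidt_coefficients(product))     # ~[1.0, 0.0]: a single term, separable
```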
Ensembles
As mentioned above, a state of a quantum system is given by a unit vector in a Hilbert space. More generally, if one has less information about the system, then one calls it an 'ensemble' and describes it by a density matrix, which is a positive-semidefinite matrix (or a trace class operator when the state space is infinite-dimensional) with trace 1. By the spectral theorem, such a matrix takes the general form:
$$\rho = \sum_i w_i\, |\alpha_i\rangle \langle\alpha_i|,$$
where the $w_i$ are positive-valued probabilities (they sum up to 1), the vectors $|\alpha_i\rangle$ are unit vectors, and in the infinite-dimensional case, we would take the closure of such states in the trace norm. We can interpret $\rho$ as representing an ensemble where $w_i$ is the proportion of the ensemble whose states are $|\alpha_i\rangle$. When a mixed state has rank 1, it therefore describes a 'pure ensemble'. When there is less than total information about the state of a quantum system, we need density matrices to represent the state.
Experimentally, a mixed ensemble might be realized as follows. Consider a "black box" apparatus that spits electrons towards an observer. The electrons' Hilbert spaces are identical. The apparatus might produce electrons that are all in the same state; in this case, the electrons received by the observer are then a pure ensemble. However, the apparatus could produce electrons in different states. For example, it could produce two populations of electrons: one with spins aligned in the positive direction, and the other with spins aligned in the negative direction. Generally, this is a mixed ensemble, as there can be any number of populations, each corresponding to a different state.
Following the definition above, for a bipartite composite system, mixed states are just density matrices on $H_A \otimes H_B$. That is, a mixed state has the general form
$$\rho = \sum_i w_i\, |\psi_i\rangle \langle\psi_i|,$$
where the $w_i$ are positively valued probabilities summing to 1, and the vectors $|\psi_i\rangle$ are unit vectors in $H_A \otimes H_B$. This is self-adjoint and positive and has trace 1.
Extending the definition of separability from the pure case, we say that a mixed state is separable if it can be written as
$$\rho = \sum_i w_i\, \rho_i^A \otimes \rho_i^B,$$
where the $w_i$ are positively valued probabilities and the $\rho_i^A$s and $\rho_i^B$s are themselves mixed states (density operators) on the subsystems $A$ and $B$ respectively. In other words, a state is separable if it is a probability distribution over uncorrelated states, or product states. By writing the density matrices as sums of pure ensembles and expanding, we may assume without loss of generality that the $\rho_i^A$ and $\rho_i^B$ are themselves pure ensembles. A state is then said to be entangled if it is not separable.
In general, finding out whether or not a mixed state is entangled is considered difficult. The general bipartite case has been shown to be NP-hard. For the $2 \times 2$ and $2 \times 3$ cases, a necessary and sufficient criterion for separability is given by the famous Positive Partial Transpose (PPT) condition.
Reduced density matrices
The idea of a reduced density matrix was introduced by Paul Dirac in 1930. Consider as above systems $A$ and $B$, each with a Hilbert space $H_A$, $H_B$. Let the state of the composite system be
$$|\Psi\rangle \in H_A \otimes H_B.$$
As indicated above, in general there is no way to associate a pure state to the component system $A$. However, it still is possible to associate a density matrix. Let
$$\rho_T = |\Psi\rangle \langle\Psi|,$$
which is the projection operator onto this state. The state of $A$ is the partial trace of $\rho_T$ over the basis of system $B$:
$$\rho_A \equiv \sum_{j}^{N_B} \left(I_A \otimes \langle j|_B\right) \left(|\Psi\rangle \langle\Psi|\right) \left(I_A \otimes |j\rangle_B\right) = \operatorname{Tr}_B\, \rho_T.$$
The sum occurs over $j = 1, \dots, N_B$, with $I_A$ the identity operator in $H_A$. $\rho_A$ is sometimes called the reduced density matrix of $\rho$ on subsystem $A$. Colloquially, we "trace out" or "trace over" system $B$ to obtain the reduced density matrix on $A$.
For example, the reduced density matrix of $A$ for the entangled state
$$\tfrac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B\right)$$
discussed above is
$$\rho_A = \tfrac{1}{2}\left(|0\rangle_A \langle 0|_A + |1\rangle_A \langle 1|_A\right).$$
This demonstrates that the reduced density matrix for an entangled pure ensemble is a mixed ensemble. In contrast, the density matrix of $A$ for the pure product state $|\psi\rangle_A \otimes |\phi\rangle_B$ discussed above is
$$\rho_A = |\psi\rangle_A \langle\psi|_A,$$
the projection operator onto $|\psi\rangle_A$.
In general, a bipartite pure state ρ is entangled if and only if its reduced states are mixed rather than pure.
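As a rough numerical illustration of the partial trace (a sketch assuming Python with NumPy; the helper name is hypothetical, not from the source), the reduced density matrix can be obtained by reshaping the composite density matrix into a four-index tensor and tracing over the indices of the subsystem being discarded. For the entangled example above this yields the maximally mixed state, whereas a product state stays pure:

```python
# Illustrative sketch, not from the source article.
import numpy as np

def partial_trace_b(rho, dim_a=2, dim_b=2):
    """Trace out subsystem B from a density matrix on H_A (x) H_B."""
    rho = np.asarray(rho).reshape(dim_a, dim_b, dim_a, dim_b)
    return np.trace(rho, axis1=1, axis2=3)

# Entangled state (|0>|1> - |1>|0>)/sqrt(2): the reduced state is maximally mixed
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
print(partial_trace_b(np.outer(psi, psi.conj())))        # 0.5 * identity, a mixed ensemble

# Product state |0> (x) |0>: the reduced state stays pure
phi = np.kron([1.0, 0.0], [1.0, 0.0])
print(partial_trace_b(np.outer(phi, phi)))               # projector |0><0|
```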
Entanglement as a resource
In quantum information theory, entangled states are considered a 'resource', i.e., something costly to produce and that allows implementing valuable transformations. The setting in which this perspective is most evident is that of "distant labs", i.e., two quantum systems labelled "A" and "B" on each of which arbitrary quantum operations can be performed, but which do not interact with each other quantum mechanically. The only interaction allowed is the exchange of classical information, which combined with the most general local quantum operations gives rise to the class of operations called LOCC (local operations and classical communication). These operations do not allow the production of entangled states between systems A and B. But if A and B are provided with a supply of entangled states, then these, together with LOCC operations can enable a larger class of transformations.
If Alice and Bob share an entangled state, Alice can tell Bob over a telephone call how to reproduce a quantum state she has in her lab. Alice performs a joint measurement on this state together with her half of the entangled state and tells Bob the results. Using Alice's results, Bob operates on his half of the entangled state to make it equal to the original state. Since Alice's measurement necessarily erases the quantum state of the system in her lab, that state is not copied, but transferred: it is said to be "teleported" to Bob's laboratory through this protocol.
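The following is a schematic simulation of the teleportation protocol just described, assuming Python with NumPy (the state label chi and the other names are purely illustrative, not taken from the source). Alice's Bell-basis measurement on her two qubits is modelled by projectors, the outcome is communicated classically as an index, and Bob applies the corresponding Pauli correction to recover the original state:

```python
# Illustrative sketch of the teleportation protocol, not from the source article.
import numpy as np

zero, one = np.array([1, 0], complex), np.array([0, 1], complex)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell basis on Alice's two qubits (the unknown state and her half of the shared pair)
phi_p = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
phi_m = (np.kron(zero, zero) - np.kron(one, one)) / np.sqrt(2)
psi_p = (np.kron(zero, one) + np.kron(one, zero)) / np.sqrt(2)
psi_m = (np.kron(zero, one) - np.kron(one, zero)) / np.sqrt(2)
bell_basis = [phi_p, phi_m, psi_p, psi_m]
corrections = [I2, Z, X, Z @ X]     # Bob's fix-up for each of Alice's four outcomes

rng = np.random.default_rng(0)
chi = rng.normal(size=2) + 1j * rng.normal(size=2)
chi /= np.linalg.norm(chi)          # an arbitrary unknown state Alice wants to send

# Qubit 0 holds chi; qubits 1 (Alice) and 2 (Bob) share the Bell pair phi_p
state = np.kron(chi, phi_p)

# Alice measures qubits 0 and 1 in the Bell basis
probs, branches = [], []
for b in bell_basis:
    proj = np.kron(np.outer(b, b.conj()), I2)
    branch = proj @ state
    probs.append(float(np.linalg.norm(branch) ** 2))
    branches.append(branch)
outcome = rng.choice(4, p=probs)
post = branches[outcome] / np.linalg.norm(branches[outcome])

# Bob's qubit after Alice phones him the outcome and he applies the correction
rho_bob = np.trace(np.outer(post, post.conj()).reshape(4, 2, 4, 2), axis1=0, axis2=2)
rho_bob = corrections[outcome] @ rho_bob @ corrections[outcome].conj().T
print(np.allclose(rho_bob, np.outer(chi, chi.conj())))   # True: Bob now holds chi
```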
Entanglement swapping is a variant of teleportation that allows two parties that have never interacted to share an entangled state. The swapping protocol begins with two EPR sources. One source emits an entangled pair of particles A and B, while the other emits a second entangled pair of particles C and D. Particles B and C are subjected to a measurement in the basis of Bell states. The state of the remaining particles, A and D, collapses to a Bell state, leaving them entangled despite never having interacted with each other.
An interaction between a qubit of A and a qubit of B can be realized by first teleporting A's qubit to B, then letting it interact with B's qubit (which is now a LOCC operation, since both qubits are in B's lab) and then teleporting the qubit back to A. Two maximally entangled states of two qubits are used up in this process. Thus entangled states are a resource that enables the realization of quantum interactions (or of quantum channels) in a setting where only LOCC are available, but they are consumed in the process. There are other applications where entanglement can be seen as a resource, e.g., private communication or distinguishing quantum states.
Multipartite entanglement
Quantum states describing systems made of more than two pieces can also be entangled. An example for a three-qubit system is the Greenberger–Horne–Zeilinger (GHZ) state,
$$|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\left(|000\rangle + |111\rangle\right).$$
Another three-qubit example is the W state:
$$|\mathrm{W}\rangle = \tfrac{1}{\sqrt{3}}\left(|001\rangle + |010\rangle + |100\rangle\right).$$
Tracing out any one of the three qubits turns the GHZ state into a separable state, whereas the result of tracing over any of the three qubits in the W state is still entangled. This illustrates how multipartite entanglement is a more complicated topic than bipartite entanglement: systems composed of three or more parts can exhibit multiple qualitatively different types of entanglement. A single particle cannot be maximally entangled with more than one other particle at a time, a property called monogamy.
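A small numerical check of this difference between the GHZ and W states can be sketched as follows (assuming Python with NumPy; the function names are illustrative). Tracing out one qubit and applying the Peres–Horodecki test, which is exact for two-qubit states, shows a non-negative partial transpose for the GHZ reduction and a negative eigenvalue for the W reduction:

```python
# Illustrative sketch, not from the source article.
import numpy as np

def trace_out_last_qubit(rho, n_qubits):
    d = 2 ** (n_qubits - 1)
    return np.trace(rho.reshape(d, 2, d, 2), axis1=1, axis2=3)

def ppt_min_eigenvalue(rho2q):
    """Minimum eigenvalue of the partial transpose of a two-qubit state.
    Negative  <=>  entangled (Peres-Horodecki, exact for 2x2 systems)."""
    pt = rho2q.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)     # (|000> + |111>)/sqrt(2)
w = np.zeros(8); w[[1, 2, 4]] = 1 / np.sqrt(3)          # (|001> + |010> + |100>)/sqrt(3)

for name, psi in [("GHZ", ghz), ("W", w)]:
    rho = trace_out_last_qubit(np.outer(psi, psi), 3)
    print(name, ppt_min_eigenvalue(rho))
# GHZ -> ~0.0 (reduced state is separable), W -> negative (reduced state stays entangled)
```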
Classification of entanglement
Not all quantum states are equally valuable as a resource. One method to quantify this value is to use an entanglement measure that assigns a numerical value to each quantum state. However, it is often interesting to settle for a coarser way to compare quantum states. This gives rise to different classification schemes. Most entanglement classes are defined based on whether states can be converted to other states using LOCC or a subclass of these operations. The smaller the set of allowed operations, the finer the classification. Important examples are:
If two states can be transformed into each other by a local unitary operation, they are said to be in the same LU class. This is the finest of the usually considered classes. Two states in the same LU class have the same value for entanglement measures and the same value as a resource in the distant-labs setting. There is an infinite number of different LU classes (even in the simplest case of two qubits in a pure state).
If two states can be transformed into each other by local operations including measurements with probability larger than 0, they are said to be in the same 'SLOCC class' ("stochastic LOCC"). Qualitatively, two states $\rho_1$ and $\rho_2$ in the same SLOCC class are equally powerful, since one can transform each into the other, but since the transformations $\rho_1 \to \rho_2$ and $\rho_2 \to \rho_1$ may succeed with different probability, they are no longer equally valuable. E.g., for two pure qubits there are only two SLOCC classes: the entangled states (which contains both the (maximally entangled) Bell states and weakly entangled states like $|00\rangle + 0.01\,|11\rangle$) and the separable ones (i.e., product states like $|00\rangle$).
Instead of considering transformations of single copies of a state (like $\rho_1 \to \rho_2$) one can define classes based on the possibility of multi-copy transformations. E.g., there are examples when $\rho_1 \to \rho_2$ is impossible by LOCC, but $\rho_1 \otimes \rho_1 \to \rho_2$ is possible. A very important (and very coarse) classification is based on the property of whether it is possible to transform an arbitrarily large number of copies of a state into at least one pure entangled state. States that have this property are called distillable. These states are the most useful quantum states since, given enough of them, they can be transformed (with local operations) into any entangled state and hence allow for all possible uses. It came initially as a surprise that not all entangled states are distillable; those that are not are called 'bound entangled'.
A different entanglement classification is based on what the quantum correlations present in a state allow A and B to do: one distinguishes three subsets of entangled states: (1) the non-local states, which produce correlations that cannot be explained by a local hidden variable model and thus violate a Bell inequality, (2) the steerable states that contain sufficient correlations for A to modify ("steer") by local measurements the conditional reduced state of B in such a way that A can prove to B that the state they possess is indeed entangled, and finally (3) those entangled states that are neither non-local nor steerable. All three sets are non-empty.
Entropy
In this section, the entropy of a mixed state is discussed as well as how it can be viewed as a measure of quantum entanglement.
Definition
In classical information theory $H$, the Shannon entropy, is associated to a probability distribution, $p_1, \dots, p_n$, in the following way:
$$H(p_1, \dots, p_n) = -\sum_i p_i \log_2 p_i.$$
Since a mixed state $\rho$ is a probability distribution over an ensemble, this leads naturally to the definition of the von Neumann entropy:
$$S(\rho) = -\operatorname{Tr}\left(\rho \log_2 \rho\right),$$
which can be expressed in terms of the eigenvalues $\lambda_i$ of $\rho$:
$$S(\rho) = -\sum_i \lambda_i \log_2 \lambda_i.$$
Since an event of probability 0 should not contribute to the entropy, and given that
$$\lim_{p \to 0} p \log p = 0,$$
the convention $0 \log 0 = 0$ is adopted. When a pair of particles is described by the spin singlet state discussed above, the von Neumann entropy of either particle is $\log 2$ (equal to 1 when the logarithm is taken in base 2), which can be shown to be the maximum entropy for $2 \times 2$ mixed states.
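As a minimal worked example of these definitions (a sketch assuming Python with NumPy; the function name is illustrative, not from the source), the von Neumann entropy can be evaluated from the eigenvalues of a density matrix. The maximally mixed reduced state of the singlet gives one bit of entropy, while a pure state gives zero:

```python
# Illustrative sketch, not from the source article.
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                     # convention: 0 log 0 = 0
    return max(0.0, float(-np.sum(evals * np.log2(evals))))

print(von_neumann_entropy(np.eye(2) / 2))            # 1.0 (reduced state of the singlet)
print(von_neumann_entropy(np.diag([1.0, 0.0])))      # 0.0 (a pure state)
```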
As a measure of entanglement
Entropy provides one tool that can be used to quantify entanglement, although other entanglement measures exist. If the overall system is pure, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure.
It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution $\{1/n, \dots, 1/n\}$. Therefore, a bipartite pure state $\rho \in H_A \otimes H_B$ is said to be a maximally entangled state if the reduced state of each subsystem of $\rho$ is the diagonal matrix
$$\frac{1}{n}\begin{pmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{pmatrix} = \frac{1}{n} I_n.$$
For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure.
Rényi entropy also can be used as a measure of entanglement.
Entanglement measures
Entanglement measures quantify the amount of entanglement in a quantum state, often viewed as a bipartite state. As aforementioned, entanglement entropy is the standard measure of entanglement for pure states (but is no longer a measure of entanglement for mixed states). For mixed states, there are several entanglement measures in the literature and no single one is standard.
Entanglement cost
Distillable entanglement
Entanglement of formation
Concurrence
Relative entropy of entanglement
Squashed entanglement
Logarithmic negativity
Most (but not all) of these entanglement measures reduce for pure states to entanglement entropy, and are difficult (NP-hard) to compute for mixed states as the dimension of the entangled system grows.
Quantum field theory
The Reeh–Schlieder theorem of quantum field theory is sometimes interpreted as saying that entanglement is omnipresent in the quantum vacuum.
Applications
Entanglement has many applications in quantum information theory. With the aid of entanglement, otherwise impossible tasks may be achieved.
Among the best-known applications of entanglement are superdense coding and quantum teleportation.
Most researchers believe that entanglement is necessary to realize quantum computing (although this is disputed by some).
Entanglement is used in some protocols of quantum cryptography, but proving the security of quantum key distribution (QKD) under standard assumptions does not require entanglement. However, the device-independent security of QKD is shown by exploiting entanglement between the communication partners.
In August 2014, Brazilian researcher Gabriela Barreto Lemos, from the University of Vienna, and team were able to "take pictures" of objects using photons that had not interacted with the subjects, but were entangled with photons that did interact with such objects. The idea has been adapted to make infrared images using only standard cameras that are insensitive to infrared.
Entangled states
There are several canonical entangled states that appear often in theory and experiments.
For two qubits, the Bell states are
$$|\Phi^\pm\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |0\rangle_B \pm |1\rangle_A \otimes |1\rangle_B\right)$$
$$|\Psi^\pm\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |1\rangle_B \pm |1\rangle_A \otimes |0\rangle_B\right).$$
These four pure states are all maximally entangled and form an orthonormal basis of the Hilbert space of the two qubits. They provide examples of how quantum mechanics can violate Bell-type inequalities.
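A short numerical sketch (assuming Python with NumPy; not from the article) can verify both claims: the four Bell states constructed below have an identity Gram matrix, so they form an orthonormal basis, and each one has the maximally mixed reduced state $I/2$ on either qubit:

```python
# Illustrative sketch, not from the source article.
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bell = {
    "Phi+": (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2),
    "Phi-": (np.kron(zero, zero) - np.kron(one, one)) / np.sqrt(2),
    "Psi+": (np.kron(zero, one) + np.kron(one, zero)) / np.sqrt(2),
    "Psi-": (np.kron(zero, one) - np.kron(one, zero)) / np.sqrt(2),
}

# Orthonormal basis: the Gram matrix of the four states is the identity
gram = np.array([[u @ v for v in bell.values()] for u in bell.values()])
print(np.allclose(gram, np.eye(4)))                  # True

# Maximal entanglement: each one-qubit reduced state equals I/2
for name, psi in bell.items():
    rho_a = np.trace(np.outer(psi, psi).reshape(2, 2, 2, 2), axis1=1, axis2=3)
    print(name, np.allclose(rho_a, np.eye(2) / 2))   # True for all four states
```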
For $M > 2$ qubits, the GHZ state is
$$|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle^{\otimes M} + |1\rangle^{\otimes M}\right),$$
which reduces to the Bell state $|\Phi^+\rangle$ for $M = 2$. The traditional GHZ state was defined for $M = 3$. GHZ states are occasionally extended to qudits, i.e., systems of d rather than 2 dimensions.
Also for qubits, there are spin squeezed states, a class of squeezed coherent states satisfying certain restrictions on the uncertainty of spin measurements, which are necessarily entangled. Spin squeezed states are good candidates for enhancing precision measurements using quantum entanglement.
For two bosonic modes, a NOON state is
$$|\psi_{\text{NOON}}\rangle = \tfrac{1}{\sqrt{2}}\left(|N\rangle_a |0\rangle_b + |0\rangle_a |N\rangle_b\right).$$
This is like the Bell state $|\Psi^+\rangle$ except the basis states $|0\rangle$ and $|1\rangle$ have been replaced with "the N photons are in one mode" and "the N photons are in the other mode".
Finally, there also exist twin Fock states for bosonic modes, which can be created by feeding a Fock state into two arms leading to a beam splitter. They are the sum of multiple NOON states, and can be used to achieve the Heisenberg limit.
For the appropriately chosen measures of entanglement, Bell, GHZ, and NOON states are maximally entangled while spin squeezed and twin Fock states are only partially entangled.
Methods of creating entanglement
Entanglement is usually created by direct interactions between subatomic particles. These interactions can take numerous forms. One of the most commonly used methods is spontaneous parametric down-conversion to generate a pair of photons entangled in polarization. Other methods include the use of a fibre coupler to confine and mix photons, photons emitted from decay cascade of the bi-exciton in a quantum dot, or the use of the Hong–Ou–Mandel effect. Quantum entanglement of a particle and its antiparticle, such as an electron and a positron, can be created by partial overlap of the corresponding quantum wave functions in Hardy's interferometer. In the earliest tests of Bell's theorem, the entangled particles were generated using atomic cascades.
It is also possible to create entanglement between quantum systems that never directly interacted, through the use of entanglement swapping. Two independently prepared, identical particles may also be entangled if their wave functions merely spatially overlap, at least partially.
Testing a system for entanglement
A density matrix ρ is called separable if it can be written as a convex sum of product states, namely
$$\rho = \sum_j p_j\, \rho_j^A \otimes \rho_j^B$$
with probabilities $0 \le p_j \le 1$ summing to 1. By definition, a state is entangled if it is not separable.
For 2-qubit and qubit-qutrit systems (2 × 2 and 2 × 3 respectively) the simple Peres–Horodecki criterion provides both a necessary and a sufficient criterion for separability, and thus—inadvertently—for detecting entanglement. However, for the general case, the criterion is merely a necessary one for separability, as the problem becomes NP-hard when generalized. Other separability criteria include (but are not limited to) the range criterion, reduction criterion, and those based on uncertainty relations. See Ref. for a review of separability criteria in discrete-variable systems and Ref. for a review on techniques and challenges in experimental entanglement certification in discrete-variable systems.
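To illustrate the Peres–Horodecki criterion in the two-qubit case where it is exact, here is a hedged sketch (assuming Python with NumPy; the function name and the choice of the Werner-state family are illustrative). The test flags the Werner state $\rho = p\,|\Psi^-\rangle\langle\Psi^-| + (1-p)\,I/4$ as entangled exactly when $p > 1/3$:

```python
# Illustrative sketch, not from the source article.
import numpy as np

def is_entangled_2x2(rho):
    """Peres-Horodecki test, exact for 2x2 systems: entangled iff the
    partial transpose has a negative eigenvalue."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min() < -1e-12

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
proj = np.outer(singlet, singlet)

for p in (0.2, 0.4, 0.8):
    werner = p * proj + (1 - p) * np.eye(4) / 4
    print(p, is_entangled_2x2(werner))   # False, True, True (threshold at p = 1/3)
```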
A numerical approach to the problem is suggested by Jon Magne Leinaas, Jan Myrheim and Eirik Ovrum in their paper "Geometrical aspects of entanglement". Leinaas et al. offer a numerical approach, iteratively refining an estimated separable state towards the target state to be tested, and checking if the target state can indeed be reached.
In continuous variable systems, the Peres–Horodecki criterion also applies. Specifically, Simon formulated a particular version of the Peres–Horodecki criterion in terms of the second-order moments of canonical operators and showed that it is necessary and sufficient for $1 \oplus 1$-mode Gaussian states (see Ref. for a seemingly different but essentially equivalent approach). It was later found that Simon's condition is also necessary and sufficient for $1 \oplus n$-mode Gaussian states, but no longer sufficient for $2 \oplus 2$-mode Gaussian states. Simon's condition can be generalized by taking into account the higher order moments of canonical operators or by using entropic measures.
In quantum gravity
There is a fundamental conflict, referred to as the problem of time, between the way the concept of time is used in quantum mechanics, and the role it plays in general relativity. In standard quantum theories time acts as an independent background through which states evolve, while general relativity treats time as a dynamical variable which relates directly with matter. Part of the effort to reconcile these approaches to time results in the Wheeler–DeWitt equation, which predicts the state of the universe is timeless or static, contrary to ordinary experience.
Work started by Don Page and William Wootters suggests that the universe appears to evolve for observers on the inside because of energy entanglement between an evolving system and a clock system, both within the universe. In this way the overall system can remain timeless while parts experience time via entanglement. The issue remains an open question closely related to attempts at theories of quantum gravity.
In general relativity, gravity arises from the curvature of spacetime and that curvature derives from the distribution of matter. However, matter is governed by quantum mechanics. Integration of these two theories faces many problems. In an (unrealistic) model space called the anti-de Sitter space, the AdS/CFT correspondence allows a quantum gravitational system to be related to a quantum field theory without gravity. Using this correspondence, Mark Van Raamsdonk suggested that spacetime arises as an emergent phenomenon of the quantum degrees of freedom that are entangled and live in the boundary of the spacetime.
Experiments demonstrating and using entanglement
Bell tests
A Bell test, also known as Bell inequality test or Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics against the hypothesis of local hidden variables. These tests empirically evaluate the implications of Bell's theorem. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave. Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". In earlier tests, it could not be ruled out that the result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location. However, so-called "loophole-free" Bell tests have since been performed where the locations were sufficiently separated that communications at the speed of light would have taken longer—in one case, 10,000 times longer—than the interval between the measurements.
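As a back-of-the-envelope companion to the Bell-test discussion (a sketch assuming Python with NumPy; the measurement angles are the standard optimal CHSH settings, not values taken from any particular experiment), the quantum prediction for the CHSH combination on a singlet pair evaluates to $2\sqrt{2} \approx 2.83$, above the local-hidden-variable bound of 2:

```python
# Illustrative sketch, not from the source article.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def correlation(a, b):
    """E(a, b) = <singlet| A(a) (x) B(b) |singlet>."""
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

# Standard optimal settings for the CHSH combination E(a0,b0) - E(a0,b1) + E(a1,b0) + E(a1,b1)
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlation(a0, b0) - correlation(a0, b1) + correlation(a1, b0) + correlation(a1, b1)
print(abs(S))   # ~2.828 = 2*sqrt(2), above the local-hidden-variable bound of 2
```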
In 2017, Yin et al. reported setting a new quantum entanglement distance record of 1,203 km, demonstrating the survival of a two-photon pair and a violation of a Bell (CHSH) inequality under strict Einstein locality conditions, from the Micius satellite to bases in Lijiang, Yunnan and Delingha, Qinghai, increasing the efficiency of transmission over prior fiberoptic experiments by an order of magnitude.
Entanglement of top quarks
In 2023, the LHC, using techniques from quantum tomography, measured entanglement at the highest energy so far, a rare intersection between quantum information and high-energy physics based on theoretical work first proposed in 2021. The measurement was carried out by the ATLAS experiment on the spin of top-quark pair production, and the effect was observed with a significance of more than 5σ. The top quark is the heaviest known particle and therefore has a very short lifetime; it is the only quark that decays before undergoing hadronization and spin decorrelation, so the spin information is transferred without much loss to the leptonic decay products that are caught by the detector. The spin polarization and correlation of the particles was measured and tested for entanglement with concurrence as well as the Peres–Horodecki criterion, and the effect has subsequently been confirmed in the CMS detector.
Entanglement of macroscopic objects
In 2020, researchers reported the quantum entanglement between the motion of a millimetre-sized mechanical oscillator and a disparate distant spin system of a cloud of atoms. Later work complemented this work by quantum-entangling two mechanical oscillators.
Entanglement of elements of living systems
In October 2018, physicists reported producing quantum entanglement using living organisms, particularly between photosynthetic molecules within living bacteria and quantized light.
Living organisms (green sulphur bacteria) have been studied as mediators to create quantum entanglement between otherwise non-interacting light modes, showing high entanglement between light and bacterial modes, and to some extent, even entanglement within the bacteria.
Entanglement of quarks and gluons in protons
Physicists at Brookhaven National Laboratory demonstrated quantum entanglement within protons, showing quarks and gluons are interdependent rather than isolated particles. Using high-energy electron-proton collisions, they revealed maximal entanglement, reshaping our understanding of proton structure.
| Physical sciences | Quantum mechanics | null |
25409 | https://en.wikipedia.org/wiki/Reptile | Reptile | Reptiles, as commonly defined, are a group of tetrapods with an ectothermic ('cold-blooded') metabolism and amniotic development. Living traditional reptiles comprise four orders: Testudines (turtles), Crocodilia (crocodilians), Squamata (lizards and snakes), and Rhynchocephalia (the tuatara). As of May 2023, about 12,000 living species of reptiles are listed in the Reptile Database. The study of the traditional reptile orders, customarily in combination with the study of modern amphibians, is called herpetology.
Reptiles have been subject to several conflicting taxonomic definitions. In Linnaean taxonomy, reptiles are gathered together under the class Reptilia ( ), which corresponds to common usage. Modern cladistic taxonomy regards that group as paraphyletic, since genetic and paleontological evidence has determined that birds (class Aves), as members of Dinosauria, are more closely related to living crocodilians than to other reptiles, and are thus nested among reptiles from an evolutionary perspective. Many cladistic systems therefore redefine Reptilia as a clade (monophyletic group) including birds, though the precise definition of this clade varies between authors. Others prioritize the clade Sauropsida, which typically refers to all amniotes more closely related to modern reptiles than to mammals.
The earliest known proto-reptiles originated from the Carboniferous period, having evolved from advanced reptiliomorph tetrapods which became increasingly adapted to life on dry land. The earliest known eureptile ("true reptile") was Hylonomus, a small and superficially lizard-like animal which lived in Nova Scotia during the Bashkirian age of the Late Carboniferous. Genetic and fossil data argue that the two largest lineages of reptiles, Archosauromorpha (crocodilians, birds, and kin) and Lepidosauromorpha (lizards, and kin), diverged during the Permian period. In addition to the living reptiles, there are many diverse groups that are now extinct, in some cases due to mass extinction events. In particular, the Cretaceous–Paleogene extinction event wiped out the pterosaurs, plesiosaurs, and all non-avian dinosaurs alongside many species of crocodyliforms and squamates (e.g., mosasaurs). Modern non-bird reptiles inhabit all the continents except Antarctica.
Reptiles are tetrapod vertebrates, creatures that either have four limbs or, like snakes, are descended from four-limbed ancestors. Unlike amphibians, reptiles do not have an aquatic larval stage. Most reptiles are oviparous, although several species of squamates are viviparous, as were some extinct aquatic clades – the fetus develops within the mother, using a (non-mammalian) placenta rather than contained in an eggshell. As amniotes, reptile eggs are surrounded by membranes for protection and transport, which adapt them to reproduction on dry land. Many of the viviparous species feed their fetuses through various forms of placenta analogous to those of mammals, with some providing initial care for their hatchlings. Extant reptiles range in size from a tiny gecko, Sphaerodactylus ariasae, which can grow up to about 17 mm (0.7 in), to the saltwater crocodile, Crocodylus porosus, which can reach over 6 m (19.7 ft) in length and weigh over 1,000 kg (2,200 lb).
Classification
Research history
In the 13th century, the category of reptile was recognized in Europe as consisting of a miscellany of egg-laying creatures, including "snakes, various fantastic monsters, lizards, assorted amphibians, and worms", as recorded by Vincent of Beauvais in his Mirror of Nature.
In the 18th century, the reptiles were, from the outset of classification, grouped with the amphibians. Linnaeus, working from species-poor Sweden, where the common adder and grass snake are often found hunting in water, included all reptiles and amphibians in the class Amphibia in his Systema Naturæ.
The terms reptile and amphibian were largely interchangeable, reptile (from Latin repere, 'to creep') being preferred by the French. J.N. Laurenti was the first to formally use the term Reptilia for an expanded selection of reptiles and amphibians basically similar to that of Linnaeus. Today, the two groups are still commonly treated under the single heading herpetology.
It was not until the beginning of the 19th century that it became clear that reptiles and amphibians are, in fact, quite different animals, and P.A. Latreille erected the class Batracia (1825) for the latter, dividing the tetrapods into the four familiar classes of reptiles, amphibians, birds, and mammals. The British anatomist T.H. Huxley made Latreille's definition popular and, together with Richard Owen, expanded Reptilia to include the various fossil "antediluvian monsters", including dinosaurs and the mammal-like (synapsid) Dicynodon he helped describe. This was not the only possible classification scheme: In the Hunterian lectures delivered at the Royal College of Surgeons in 1863, Huxley grouped the vertebrates into mammals, sauroids, and ichthyoids (the latter containing the fishes and amphibians). He subsequently proposed the names of Sauropsida and Ichthyopsida for the latter two groups. In 1866, Haeckel demonstrated that vertebrates could be divided based on their reproductive strategies, and that reptiles, birds, and mammals were united by the amniotic egg.
The terms Sauropsida ("lizard faces") and Theropsida ("beast faces") were used again in 1916 by E.S. Goodrich to distinguish between lizards, birds, and their relatives on the one hand (Sauropsida) and mammals and their extinct relatives (Theropsida) on the other. Goodrich supported this division by the nature of the hearts and blood vessels in each group, and other features, such as the structure of the forebrain. According to Goodrich, both lineages evolved from an earlier stem group, Protosauria ("first lizards") in which he included some animals today considered reptile-like amphibians, as well as early reptiles.
In 1956, D.M.S. Watson observed that the first two groups diverged very early in reptilian history, so he divided Goodrich's Protosauria between them. He also reinterpreted Sauropsida and Theropsida to exclude birds and mammals, respectively. Thus his Sauropsida included Procolophonia, Eosuchia, Millerosauria, Chelonia (turtles), Squamata (lizards and snakes), Rhynchocephalia, Crocodilia, "thecodonts" (paraphyletic basal Archosauria), non-avian dinosaurs, pterosaurs, ichthyosaurs, and sauropterygians.
In the late 19th century, a number of definitions of Reptilia were offered. The biological traits listed by Lydekker in 1896, for example, include a single occipital condyle, a jaw joint formed by the quadrate and articular bones, and certain characteristics of the vertebrae. The animals singled out by these formulations, the amniotes other than the mammals and the birds, are still those considered reptiles today.
The synapsid/sauropsid division supplemented another approach, one that split the reptiles into four subclasses based on the number and position of temporal fenestrae, openings in the sides of the skull behind the eyes. This classification was initiated by Henry Fairfield Osborn and elaborated and made popular by Romer's classic Vertebrate Paleontology. Those four subclasses were:
Anapsida – no fenestrae – cotylosaurs and chelonia (turtles and relatives)
Synapsida – one low fenestra – pelycosaurs and therapsids (the 'mammal-like reptiles')
Euryapsida – one high fenestra (above the postorbital and squamosal) – protorosaurs (small, early lizard-like reptiles) and the marine sauropterygians and ichthyosaurs, the latter called Parapsida in Osborn's work.
Diapsida – two fenestrae – most reptiles, including lizards, snakes, crocodilians, dinosaurs and pterosaurs.
The composition of Euryapsida was uncertain. Ichthyosaurs were, at times, considered to have arisen independently of the other euryapsids, and given the older name Parapsida. Parapsida was later discarded as a group for the most part (ichthyosaurs being classified as incertae sedis or with Euryapsida). However, four (or three if Euryapsida is merged into Diapsida) subclasses remained more or less universal for non-specialist work throughout the 20th century. It has largely been abandoned by recent researchers: In particular, the anapsid condition has been found to occur so variably among unrelated groups that it is not now considered a useful distinction.
Phylogenetics and modern definition
By the early 21st century, vertebrate paleontologists were beginning to adopt phylogenetic taxonomy, in which all groups are defined in such a way as to be monophyletic; that is, groups which include all descendants of a particular ancestor. The reptiles as historically defined are paraphyletic, since they exclude both birds and mammals. These respectively evolved from dinosaurs and from early therapsids, both of which were traditionally called "reptiles". Birds are more closely related to crocodilians than the latter are to the rest of extant reptiles. Colin Tudge wrote:
Mammals are a clade, and therefore the cladists are happy to acknowledge the traditional taxon Mammalia; and birds, too, are a clade, universally ascribed to the formal taxon Aves. Mammalia and Aves are, in fact, subclades within the grand clade of the Amniota. But the traditional class Reptilia is not a clade. It is just a section of the clade Amniota: The section that is left after the Mammalia and Aves have been hived off. It cannot be defined by synapomorphies, as is the proper way. Instead, it is defined by a combination of the features it has and the features it lacks: reptiles are the amniotes that lack fur or feathers. At best, the cladists suggest, we could say that the traditional Reptilia are 'non-avian, non-mammalian amniotes'.
Despite the early proposals for replacing the paraphyletic Reptilia with a monophyletic Sauropsida, which includes birds, that term was never adopted widely or, when it was, was not applied consistently.
When Sauropsida was used, it often had the same content or even the same definition as Reptilia. In 1988, Jacques Gauthier proposed a cladistic definition of Reptilia as a monophyletic node-based crown group containing turtles, lizards and snakes, crocodilians, and birds, their common ancestor and all its descendants. While Gauthier's definition was close to the modern consensus, nonetheless, it became considered inadequate because the actual relationship of turtles to other reptiles was not yet well understood at this time. Major revisions since have included the reassignment of synapsids as non-reptiles, and classification of turtles as diapsids. Gauthier 1994 and Laurin and Reisz 1995's definition of Sauropsida defined the scope of the group as distinct and broader than that of Reptilia, encompassing Mesosauridae as well as Reptilia sensu stricto.
A variety of other definitions were proposed by other scientists in the years following Gauthier's paper. The first such new definition, which attempted to adhere to the standards of the PhyloCode, was published by Modesto and Anderson in 2004. Modesto and Anderson reviewed the many previous definitions and proposed a modified definition, which they intended to retain most traditional content of the group while keeping it stable and monophyletic. They defined Reptilia as all amniotes closer to Lacerta agilis and Crocodylus niloticus than to Homo sapiens. This stem-based definition is equivalent to the more common definition of Sauropsida, which Modesto and Anderson synonymized with Reptilia, since the latter is better known and more frequently used. Unlike most previous definitions of Reptilia, however, Modesto and Anderson's definition includes birds, as they are within the clade that includes both lizards and crocodiles.
Taxonomy
General classification of extinct and living reptiles, focusing on major groups.
Reptilia/Sauropsida
Parareptilia
Eureptilia
Captorhinidae
Diapsida
Araeoscelidia
Neodiapsida
Drepanosauromorpha (placement uncertain)
Younginiformes (paraphyletic)
Ichthyosauromorpha (placement uncertain)
Thalattosauria (placement uncertain)
Sauria
Lepidosauromorpha
Lepidosauriformes
Rhynchocephalia (tuatara)
Squamata (lizards and snakes)
Choristodera (placement uncertain)
Sauropterygia (placement uncertain)
Pantestudines (turtles and kin, placement uncertain)
Archosauromorpha
Protorosauria (paraphyletic)
Rhynchosauria
Allokotosauria
Archosauriformes
Phytosauria
Archosauria
Pseudosuchia
Crocodilia (crocodilians)
Avemetatarsalia/Ornithodira
Pterosauria
Dinosauria
Ornithischia
Saurischia (including birds (Aves))
Phylogeny
The cladogram presented here illustrates the "family tree" of reptiles, and follows a simplified version of the relationships found by M.S. Lee, in 2013. All genetic studies have supported the hypothesis that turtles are diapsids; some have placed turtles within Archosauromorpha, though a few have recovered turtles as Lepidosauromorpha instead. The cladogram below used a combination of genetic (molecular) and fossil (morphological) data to obtain its results.
The position of turtles
The placement of turtles has historically been highly variable. Classically, turtles were considered to be related to the primitive anapsid reptiles. Molecular work has usually placed turtles within the diapsids. As of 2013, three turtle genomes have been sequenced. The results place turtles as a sister clade to the archosaurs, the group that includes crocodiles, non-avian dinosaurs, and birds. However, in their comparative analysis of the timing of organogenesis, Werneburg and Sánchez-Villagra (2009) found support for the hypothesis that turtles belong to a separate clade within Sauropsida, outside the saurian clade altogether.
Evolutionary history
Origin of the reptiles
The origin of the reptiles lies about 310–320 million years ago, in the steaming swamps of the late Carboniferous period, when the first reptiles evolved from advanced reptiliomorphs.
The oldest known animal that may have been an amniote is Casineria (though it may have been a temnospondyl). A series of footprints from the fossil strata of Nova Scotia, dated to the Late Carboniferous, show typical reptilian toes and imprints of scales. These tracks are attributed to Hylonomus, the oldest unquestionable reptile known.
It was a small, lizard-like animal with numerous sharp teeth, indicating an insectivorous diet. Other examples include Westlothiana (for the moment considered a reptiliomorph rather than a true amniote) and Paleothyris, both of similar build and presumably similar habit.
However, microsaurs have been at times considered true reptiles, so an earlier origin is possible.
Rise of the reptiles
The earliest amniotes, including stem-reptiles (those amniotes closer to modern reptiles than to mammals), were largely overshadowed by larger stem-tetrapods, such as Cochleosaurus, and remained a small, inconspicuous part of the fauna until the Carboniferous Rainforest Collapse. This sudden collapse affected several large groups. Primitive tetrapods were particularly devastated, while stem-reptiles fared better, being ecologically adapted to the drier conditions that followed. Primitive tetrapods, like modern amphibians, need to return to water to lay eggs; in contrast, amniotes, like modern reptiles – whose eggs possess a shell that allows them to be laid on land – were better adapted to the new conditions. Amniotes acquired new niches at a faster rate than before the collapse and at a much faster rate than primitive tetrapods. They acquired new feeding strategies including herbivory and carnivory, previously only having been insectivores and piscivores. From this point forward, reptiles dominated communities and had a greater diversity than primitive tetrapods, setting the stage for the Mesozoic (known as the Age of Reptiles). One of the best known early stem-reptiles is Mesosaurus, a genus from the Early Permian that had returned to water, feeding on fish.
A 2021 examination of reptile diversity in the Carboniferous and the Permian suggests a much higher degree of diversity than previously thought, comparable or even exceeding that of synapsids. Thus, the "First Age of Reptiles" was proposed.
Anapsids, synapsids, diapsids, and sauropsids
It was traditionally assumed that the first reptiles retained an anapsid skull inherited from their ancestors. This type of skull has a skull roof with only holes for the nostrils, eyes and a pineal eye. The discoveries of synapsid-like openings (see below) in the skull roof of the skulls of several members of Parareptilia (the clade containing most of the amniotes traditionally referred to as "anapsids"), including lanthanosuchoids, millerettids, bolosaurids, some nycteroleterids, some procolophonoids and at least some mesosaurs made it more ambiguous and it is currently uncertain whether the ancestral amniote had an anapsid-like or synapsid-like skull. These animals are traditionally referred to as "anapsids", and form a paraphyletic basic stock from which other groups evolved. Very shortly after the first amniotes appeared, a lineage called Synapsida split off; this group was characterized by a temporal opening in the skull behind each eye giving room for the jaw muscle to move. These are the "mammal-like amniotes", or stem-mammals, that later gave rise to the true mammals. Soon after, another group evolved a similar trait, this time with a double opening behind each eye, earning them the name Diapsida ("two arches"). The function of the holes in these groups was to lighten the skull and give room for the jaw muscles to move, allowing for a more powerful bite.
Turtles have been traditionally believed to be surviving parareptiles, on the basis of their anapsid skull structure, which was assumed to be a primitive trait. The rationale for this classification has been disputed, with some arguing that turtles are diapsids that evolved anapsid skulls, improving their armor. Later morphological phylogenetic studies with this in mind placed turtles firmly within Diapsida. All molecular studies have strongly upheld the placement of turtles within diapsids, most commonly as a sister group to extant archosaurs.
Permian reptiles
With the close of the Carboniferous, the amniotes became the dominant tetrapod fauna. While primitive, terrestrial reptiliomorphs still existed, the synapsid amniotes evolved the first truly terrestrial megafauna (giant animals) in the form of pelycosaurs, such as Edaphosaurus and the carnivorous Dimetrodon. In the mid-Permian period, the climate became drier, resulting in a change of fauna: The pelycosaurs were replaced by the therapsids.
The parareptiles, whose massive skull roofs had no postorbital holes, continued and flourished throughout the Permian. The pareiasaurian parareptiles reached giant proportions in the late Permian, eventually disappearing at the close of the period (the turtles being possible survivors).
Early in the period, the modern reptiles, or crown-group reptiles, evolved and split into two main lineages: the Archosauromorpha (forebears of turtles, crocodiles, and dinosaurs) and the Lepidosauromorpha (predecessors of modern lizards and tuataras). Both groups remained lizard-like and relatively small and inconspicuous during the Permian.
Mesozoic reptiles
The close of the Permian saw the greatest mass extinction known (see the Permian–Triassic extinction event), an event prolonged by the combination of two or more distinct extinction pulses. Most of the earlier parareptile and synapsid megafauna disappeared, being replaced by the true reptiles, particularly archosauromorphs. These were characterized by elongated hind legs and an erect pose, the early forms looking somewhat like long-legged crocodiles. The archosaurs became the dominant group during the Triassic period, though it took 30 million years before their diversity was as great as the animals that lived in the Permian. Archosaurs developed into the well-known dinosaurs and pterosaurs, as well as the ancestors of crocodiles. Since reptiles, first rauisuchians and then dinosaurs, dominated the Mesozoic era, the interval is popularly known as the "Age of Reptiles". The dinosaurs also developed smaller forms, including the feather-bearing smaller theropods. In the Cretaceous period, these gave rise to the first true birds.
The sister group to Archosauromorpha is Lepidosauromorpha, containing lizards and tuataras, as well as their fossil relatives. Lepidosauromorpha contained at least one major group of the Mesozoic sea reptiles: the mosasaurs, which lived during the Cretaceous period. The phylogenetic placement of other main groups of fossil sea reptiles – the ichthyopterygians (including ichthyosaurs) and the sauropterygians, which evolved in the early Triassic – is more controversial. Different authors linked these groups either to lepidosauromorphs or to archosauromorphs, and ichthyopterygians were also argued to be diapsids that did not belong to the least inclusive clade containing lepidosauromorphs and archosauromorphs.
Cenozoic reptiles
The close of the Cretaceous period saw the demise of the Mesozoic era reptilian megafauna (see the Cretaceous–Paleogene extinction event, also known as K-T extinction event). Of the large marine reptiles, only sea turtles were left; and of the non-marine large reptiles, only the semi-aquatic crocodiles and broadly similar choristoderes survived the extinction, with last members of the latter, the lizard-like Lazarussuchus, becoming extinct in the Miocene. Of the great host of dinosaurs dominating the Mesozoic, only the small beaked birds survived. This dramatic extinction pattern at the end of the Mesozoic led into the Cenozoic. Mammals and birds filled the empty niches left behind by the reptilian megafauna and, while reptile diversification slowed, bird and mammal diversification took an exponential turn. However, reptiles were still important components of the megafauna, particularly in the form of large and giant tortoises.
After the extinction of most archosaur and marine reptile lines by the end of the Cretaceous, reptile diversification continued throughout the Cenozoic. Squamates took a massive hit during the K–Pg event, only recovering ten million years after it, but they underwent a great radiation event once they recovered, and today squamates make up the majority of living reptiles (> 95%). Approximately 10,000 extant species of traditional reptiles are known, with birds adding about 10,000 more, almost twice the number of mammals, represented by about 5,700 living species (excluding domesticated species).
Morphology and physiology
Circulation
All lepidosaurs and turtles have a three-chambered heart consisting of two atria, one variably partitioned ventricle, and two aortas that lead to the systemic circulation. The degree of mixing of oxygenated and deoxygenated blood in the three-chambered heart varies depending on the species and physiological state. Under different conditions, deoxygenated blood can be shunted back to the body or oxygenated blood can be shunted back to the lungs. This variation in blood flow has been hypothesized to allow more effective thermoregulation and longer diving times for aquatic species, but has not been shown to be a fitness advantage.
For example, iguana hearts, like the majority of squamate hearts, are composed of three chambers with two aortae and one ventricle, and the cardiac muscle is involuntary. The main structures of the heart are the sinus venosus, the pacemaker, the left atrium, the right atrium, the atrioventricular valve, the cavum venosum, the cavum arteriosum, the cavum pulmonale, the muscular ridge, the ventricular ridge, the pulmonary veins, and the paired aortic arches.
Some squamate species (e.g., pythons and monitor lizards) have three-chambered hearts that become functionally four-chambered hearts during contraction. This is made possible by a muscular ridge that subdivides the ventricle during ventricular diastole and completely divides it during ventricular systole. Because of this ridge, some of these squamates are capable of producing ventricular pressure differentials that are equivalent to those seen in mammalian and avian hearts.
Crocodilians have an anatomically four-chambered heart, similar to birds, but also have two systemic aortas and are therefore capable of bypassing their pulmonary circulation. In turtles, the ventricle is not perfectly divided, so a mix of aerated and nonaerated blood can occur.
Metabolism
Modern non-avian reptiles exhibit some form of cold-bloodedness (i.e. some mix of poikilothermy, ectothermy, and bradymetabolism), so they have limited physiological means of keeping their body temperature constant and often rely on external sources of heat. Because their core temperature is less stable than that of birds and mammals, reptilian biochemistry requires enzymes capable of maintaining efficiency over a greater range of temperatures than is the case for warm-blooded animals. The optimum body temperature range varies with species, but is typically below that of warm-blooded animals; for many lizards it falls well below mammalian body temperature, while extreme heat-adapted species, like the American desert iguana Dipsosaurus dorsalis, can have optimal physiological temperatures in the mammalian range. While the optimum temperature is often encountered when the animal is active, the low basal metabolism makes body temperature drop rapidly when the animal is inactive.
As in all animals, reptilian muscle action produces heat. In large reptiles, like leatherback turtles, the low surface-to-volume ratio allows this metabolically produced heat to keep the animals warmer than their environment even though they do not have a warm-blooded metabolism. This form of homeothermy is called gigantothermy; it has been suggested as having been common in large dinosaurs and other extinct large-bodied reptiles.
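The geometry behind gigantothermy can be illustrated with a back-of-the-envelope calculation; the sketch below uses idealized spherical bodies and purely illustrative sizes, which are assumptions rather than values from this article.

```python
# Illustrative sketch: surface area of a sphere grows as r^2 while volume grows
# as r^3, so the surface-to-volume ratio falls as 1/r. A large-bodied reptile
# therefore loses proportionally less metabolically produced heat through its
# surface, which is the essence of gigantothermy. Body radii are assumptions.
from math import pi

def surface_to_volume(radius_m: float) -> float:
    area = 4 * pi * radius_m ** 2
    volume = (4 / 3) * pi * radius_m ** 3
    return area / volume  # simplifies to 3 / radius_m

for radius in (0.05, 0.5):  # roughly a small lizard vs a leatherback-sized body
    print(f"radius {radius:4.2f} m -> surface/volume = {surface_to_volume(radius):5.1f} per metre")
```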
The benefit of a low resting metabolism is that it requires far less fuel to sustain bodily functions. By using temperature variations in their surroundings, or by remaining cold when they do not need to move, reptiles can save considerable amounts of energy compared to endothermic animals of the same size. A crocodile needs from a tenth to a fifth of the food necessary for a lion of the same weight and can live half a year without eating. Lower food requirements and adaptive metabolisms allow reptiles to dominate the animal life in regions where net calorie availability is too low to sustain large-bodied mammals and birds.
It is generally assumed that reptiles are unable to produce the sustained high energy output necessary for long-distance chases or flying. Higher energetic capacity might have been responsible for the evolution of warm-bloodedness in birds and mammals. However, investigations of correlations between active capacity and thermophysiology show a weak relationship. Most extant reptiles are carnivores with a sit-and-wait feeding strategy; whether reptiles are cold-blooded due to their ecology is not clear. Energetic studies on some reptiles have shown active capacities equal to or greater than those of similarly sized warm-blooded animals.
Respiratory system
All reptiles breathe using lungs. Aquatic turtles have developed more permeable skin, and some species have modified their cloaca to increase the area for gas exchange. Even with these adaptations, breathing is never fully accomplished without lungs. Lung ventilation is accomplished differently in each main reptile group. In squamates, the lungs are ventilated almost exclusively by the axial musculature. This is also the same musculature that is used during locomotion. Because of this constraint, most squamates are forced to hold their breath during intense runs. Some, however, have found a way around it. Varanids, and a few other lizard species, employ buccal pumping as a complement to their normal "axial breathing". This allows the animals to completely fill their lungs during intense locomotion, and thus remain aerobically active for a long time. Tegu lizards are known to possess a proto-diaphragm, which separates the pulmonary cavity from the visceral cavity. While not actually capable of movement, it does allow for greater lung inflation, by taking the weight of the viscera off the lungs.
Crocodilians actually have a muscular diaphragm that is analogous to the mammalian diaphragm. The difference is that the muscles for the crocodilian diaphragm pull the pubis (part of the pelvis, which is movable in crocodilians) back, which brings the liver down, thus freeing space for the lungs to expand. This type of diaphragmatic setup has been referred to as the "hepatic piston". The airways form a number of double tubular chambers within each lung. On inhalation and exhalation air moves through the airways in the same direction, thus creating a unidirectional airflow through the lungs. A similar system is found in birds, monitor lizards and iguanas.
Most reptiles lack a secondary palate, meaning that they must hold their breath while swallowing. Crocodilians have evolved a bony secondary palate that allows them to continue breathing while remaining submerged (and protect their brains against damage by struggling prey). Skinks (family Scincidae) also have evolved a bony secondary palate, to varying degrees. Snakes took a different approach and extended their trachea instead. Their tracheal extension sticks out like a fleshy straw, and allows these animals to swallow large prey without suffering from asphyxiation.
Turtles and tortoises
How turtles breathe has been the subject of much study. To date, only a few species have been studied thoroughly enough to get an idea of how those turtles breathe. The varied results indicate that turtles have found a variety of solutions to this problem.
The difficulty is that most turtle shells are rigid and do not allow for the type of expansion and contraction that other amniotes use to ventilate their lungs. Some turtles, such as the Indian flapshell (Lissemys punctata), have a sheet of muscle that envelops the lungs. When it contracts, the turtle can exhale. When at rest, the turtle can retract the limbs into the body cavity and force air out of the lungs. When the turtle protracts its limbs, the pressure inside the lungs is reduced, and the turtle can suck air in. Turtle lungs are attached to the inside of the top of the shell (carapace), with the bottom of the lungs attached (via connective tissue) to the rest of the viscera. By using a series of special muscles (roughly equivalent to a diaphragm), turtles are capable of pushing their viscera up and down, resulting in effective respiration, since many of these muscles have attachment points in conjunction with their forelimbs (indeed, many of the muscles expand into the limb pockets during contraction).
Breathing during locomotion has been studied in three species, and they show different patterns. Adult female green sea turtles do not breathe as they crutch along their nesting beaches. They hold their breath during terrestrial locomotion and breathe in bouts as they rest. North American box turtles breathe continuously during locomotion, and the ventilation cycle is not coordinated with the limb movements. This is because they use their abdominal muscles to breathe during locomotion. The last species to have been studied is the red-eared slider, which also breathes during locomotion, but takes smaller breaths during locomotion than during small pauses between locomotor bouts, indicating that there may be mechanical interference between the limb movements and the breathing apparatus. Box turtles have also been observed to breathe while completely sealed up inside their shells.
Sound production
Compared with frogs, birds, and mammals, reptiles are less vocal. Sound production is usually limited to hissing, which is produced merely by forcing air through a partly closed glottis and is not considered to be a true vocalization. The ability to vocalize exists in crocodilians and some lizards and turtles, and typically involves vibrating fold-like structures in the larynx or glottis. Some geckos and turtles possess true vocal cords, which have elastin-rich connective tissue.
Hearing in snakes
Hearing in humans relies on three parts of the ear: the outer ear, which directs sound waves into the ear canal; the middle ear, which transmits incoming sound waves to the inner ear; and the inner ear, which supports hearing and balance. Unlike humans and other mammals, snakes do not possess an outer ear, middle ear, or tympanum, but have an inner ear structure with a cochlea directly connected to the jawbone. They are able to feel the vibrations generated by sound waves in their jaw as they move on the ground. This is done by the use of mechanoreceptors, sensory nerves that run along the body of the snake and direct the vibrations along the spinal nerves to the brain. Snakes have a sensitive auditory perception and can tell which direction a sound is coming from, so that they can sense the presence of prey or predators, but it is still unclear how sensitive snakes are to sound waves traveling through the air.
Skin
Reptilian skin is covered in a horny epidermis, making it watertight and enabling reptiles to live on dry land, in contrast to amphibians. Compared to mammalian skin, that of reptiles is rather thin and lacks the thick dermal layer that produces leather in mammals.
Exposed parts of reptiles are protected by scales or scutes, sometimes with a bony base (osteoderms), forming armor. In lepidosaurs, such as lizards and snakes, the whole skin is covered in overlapping epidermal scales. Such scales were once thought to be typical of the class Reptilia as a whole, but are now known to occur only in lepidosaurs. The scales found in turtles and crocodiles are of dermal, rather than epidermal, origin and are properly termed scutes. In turtles, the body is hidden inside a hard shell composed of fused scutes.
Lacking a thick dermis, reptilian leather is not as strong as mammalian leather. Reptile skin, particularly crocodile skin, is used decoratively in leather goods such as shoes, belts and handbags.
Shedding
Reptiles shed their skin through a process called ecdysis, which occurs continuously throughout their lifetime. In particular, younger reptiles tend to shed once every five to six weeks, while adults shed three to four times a year. Younger reptiles shed more because of their rapid growth rate; once full size is reached, the frequency of shedding drastically decreases. The process of ecdysis involves forming a new layer of skin under the old one. Proteolytic enzymes and lymphatic fluid are secreted between the old and new layers of skin, lifting the old skin from the new one and allowing shedding to occur. Snakes shed from the head to the tail, while lizards shed in a "patchy pattern". Dysecdysis, a common skin disease in snakes and lizards, occurs when ecdysis, or shedding, fails. Shedding can fail for numerous reasons, including inadequate humidity and temperature, nutritional deficiencies, dehydration and traumatic injuries. Nutritional deficiencies decrease the production of proteolytic enzymes, while dehydration reduces the lymphatic fluid needed to separate the skin layers. Traumatic injuries, on the other hand, form scars that will not allow new scales to form and that disrupt the process of ecdysis.
Excretion
Excretion is performed mainly by two small kidneys. In diapsids, uric acid is the main nitrogenous waste product; turtles, like mammals, excrete mainly urea. Unlike the kidneys of mammals and birds, reptile kidneys are unable to produce liquid urine more concentrated than their body fluid. This is because they lack a specialized structure called a loop of Henle, which is present in the nephrons of birds and mammals. Because of this, many reptiles use the colon to aid in the reabsorption of water. Some are also able to take up water stored in the bladder. Excess salts are also excreted by nasal and lingual salt glands in some reptiles.
In all reptiles, the urinogenital ducts and the rectum both empty into an organ called a cloaca. In some reptiles, but not all, a midventral wall in the cloaca opens into a urinary bladder. A bladder is present in all turtles and tortoises as well as most lizards, but is lacking in monitor lizards and legless lizards, and is absent in snakes, alligators, and crocodiles.
Many turtles and lizards have proportionally very large bladders. Charles Darwin noted that the Galapagos tortoise had a bladder which could store up to 20% of its body weight. Such adaptations are the result of environments such as remote islands and deserts where water is very scarce. Other desert-dwelling reptiles have large bladders that can store a long-term reservoir of water for up to several months and aid in osmoregulation.
Turtles have two or more accessory urinary bladders, located lateral to the neck of the urinary bladder and dorsal to the pubis, occupying a significant portion of their body cavity. Their bladder is also usually bilobed with a left and right section. The right section is located under the liver, which prevents large stones from remaining in that side while the left section is more likely to have calculi.
Digestion
Most reptiles are insectivorous or carnivorous and have simple and comparatively short digestive tracts due to meat being fairly simple to break down and digest. Digestion is slower than in mammals, reflecting their lower resting metabolism and their inability to divide and masticate their food. Their poikilotherm metabolism has very low energy requirements, allowing large reptiles like crocodiles and large constrictors to live from a single large meal for months, digesting it slowly.
While modern reptiles are predominantly carnivorous, during the early history of reptiles several groups produced some herbivorous megafauna: in the Paleozoic, the pareiasaurs; and in the Mesozoic several lines of dinosaurs. Today, turtles are the only predominantly herbivorous reptile group, but several lines of agamas and iguanas have evolved to live wholly or partly on plants.
Herbivorous reptiles face the same problems of mastication as herbivorous mammals but, lacking the complex teeth of mammals, many species swallow rocks and pebbles (so-called gastroliths) to aid in digestion: the rocks are washed around in the stomach, helping to grind up plant matter. Fossil gastroliths have been found associated with both ornithopods and sauropods, though whether they actually functioned as a gastric mill in the latter is disputed. Saltwater crocodiles also use gastroliths as ballast, stabilizing them in the water or helping them to dive. A dual function as both stabilizing ballast and digestion aid has been suggested for gastroliths found in plesiosaurs.
Nerves
The reptilian nervous system contains the same basic parts as the amphibian brain, but the reptile cerebrum and cerebellum are slightly larger. Most typical sense organs are well developed, with certain exceptions, most notably the snake's lack of external ears (middle and inner ears are present). There are twelve pairs of cranial nerves. Due to their short cochlea, reptiles use electrical tuning to expand their range of audible frequencies.
Vision
Most reptiles are diurnal animals. Their vision is typically adapted to daylight conditions, with color vision and more advanced visual depth perception than in amphibians and most mammals.
Reptiles usually have excellent vision, allowing them to detect shapes and motions at long distances. They often have poor vision in low-light conditions. Birds, crocodiles and turtles have three types of photoreceptor: rods, single cones and double cones, which gives them sharp color vision and enables them to see ultraviolet wavelengths. The lepidosaurs appear to have lost the duplex retina and only have a single class of receptor that is cone-like or rod-like depending on whether the species is diurnal or nocturnal. In many burrowing species, such as blind snakes, vision is reduced.
Many lepidosaurs have a photosensory organ on the top of their heads called the parietal eye, also known as the third eye, pineal eye or pineal gland. This "eye" does not work the same way as a normal eye does, as it has only a rudimentary retina and lens and thus cannot form images. It is, however, sensitive to changes in light and dark and can detect movement.
Some snakes have extra sets of visual organs (in the loosest sense of the word) in the form of pits sensitive to infrared radiation (heat). Such heat-sensitive pits are particularly well developed in the pit vipers, but are also found in boas and pythons. These pits allow the snakes to sense the body heat of birds and mammals, enabling pit vipers to hunt rodents in the dark.
Most reptiles, as well as birds, possess a nictitating membrane, a translucent third eyelid which is drawn over the eye from the inner corner. In crocodilians, it protects the eyeball surface while allowing a degree of vision underwater. However, many squamates, particularly geckos and snakes, lack eyelids, which are replaced by a transparent scale. This is called the brille, spectacle, or eyecap. The brille is usually not visible, except when the snake molts, and it protects the eyes from dust and dirt.
Reproduction
Reptiles generally reproduce sexually, though some are capable of asexual reproduction. All reproductive activity occurs through the cloaca, the single exit/entrance at the base of the tail where waste is also eliminated. Most reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis, while squamates, including snakes and lizards, possess a pair of hemipenes, only one of which is typically used in each session. Tuatara, however, lack copulatory organs, and so the male and female simply press their cloacas together as the male discharges sperm.
Most reptiles lay amniotic eggs covered with leathery or calcareous shells. An amnion, chorion, and allantois are present during embryonic life. The eggshell protects the crocodile embryo and keeps it from drying out, but it is flexible enough to allow gas exchange. The chorion aids in gas exchange between the inside and outside of the egg, allowing carbon dioxide to exit the egg and oxygen to enter it. The albumin further protects the embryo and serves as a reservoir for water and protein. The allantois is a sac that collects the metabolic waste produced by the embryo. The amniotic sac contains amniotic fluid, which protects and cushions the embryo. The amnion aids in osmoregulation and serves as a saltwater reservoir. The yolk sac surrounding the yolk contains protein- and fat-rich nutrients that are absorbed by the embryo via vessels that allow the embryo to grow and metabolize. The air space provides the embryo with oxygen while it is hatching, ensuring that the embryo does not suffocate. There are no larval stages of development. Viviparity and ovoviviparity have evolved in squamates and many extinct clades of reptiles. Among squamates, many species, including all boas and most vipers, use this mode of reproduction. The degree of viviparity varies: some species simply retain the eggs until just before hatching, others provide maternal nourishment to supplement the yolk, and yet others lack any yolk and provide all nutrients via a structure similar to the mammalian placenta. The earliest documented case of viviparity in reptiles is the Early Permian mesosaurs, although some individuals or taxa in that clade may also have been oviparous because a putative isolated egg has also been found. Several groups of Mesozoic marine reptiles also exhibited viviparity, such as mosasaurs, ichthyosaurs, and Sauropterygia, a group that includes pachypleurosaurs and Plesiosauria.
Asexual reproduction has been identified in squamates in six families of lizards and one snake. In some species of squamates, a population of females is able to produce a unisexual diploid clone of the mother. This form of asexual reproduction, called parthenogenesis, occurs in several species of gecko, and is particularly widespread in the teiids (especially Aspidoscelis) and lacertids (Lacerta). In captivity, Komodo dragons (Varanidae) have reproduced by parthenogenesis.
Parthenogenetic species are suspected to occur among chameleons, agamids, xantusiids, and typhlopids.
Some reptiles exhibit temperature-dependent sex determination (TDSD), in which the incubation temperature determines whether a particular egg hatches as male or female. TDSD is most common in turtles and crocodiles, but also occurs in lizards and tuatara. To date, there has been no confirmation of whether TDSD occurs in snakes.
Longevity
Giant tortoises are among the longest-lived vertebrate animals (over 100 years by some estimates) and have been used as a model for studying longevity. DNA analysis of the genomes of Lonesome George, the iconic last member of Chelonoidis abingdonii, and the Aldabra giant tortoise Aldabrachelys gigantea led to the detection of lineage-specific variants affecting DNA repair genes that might contribute to our understanding of increased lifespan.
Cognition
Reptiles are generally considered less intelligent than mammals and birds. The size of their brain relative to their body is much smaller than that of mammals, with an encephalization quotient about one tenth of that of mammals, though larger reptiles can show more complex brain development. Larger lizards, like the monitors, are known to exhibit complex behavior, including cooperation and cognitive abilities allowing them to optimize their foraging and territoriality over time. Crocodiles have relatively larger brains and show a fairly complex social structure. The Komodo dragon is even known to engage in play, as are turtles, which are also considered to be social creatures, and which sometimes switch between monogamy and promiscuity in their sexual behavior. One study found that wood turtles were better than white rats at learning to navigate mazes. Another study found that giant tortoises are capable of learning through operant conditioning and visual discrimination, and of retaining learned behaviors in long-term memory. Sea turtles have been regarded as having simple brains, but their flippers are used for a variety of foraging tasks (holding, bracing, corralling) in common with marine mammals.
There is evidence that reptiles are sentient and able to feel emotions including anxiety and pleasure.
Defense mechanisms
Many small reptiles, such as snakes and lizards, that live on the ground or in the water are vulnerable to being preyed on by all kinds of carnivorous animals. Thus, avoidance is the most common form of defense in reptiles. At the first sign of danger, most snakes and lizards crawl away into the undergrowth, and turtles and crocodiles will plunge into water and sink out of sight.
Camouflage and warning
Reptiles tend to avoid confrontation through camouflage. Two major groups of reptile predators are birds and other reptiles, both of which have well-developed color vision. Thus the skins of many reptiles have cryptic coloration of plain or mottled gray, green, and brown to allow them to blend into the background of their natural environment. Aided by the reptiles' capacity for remaining motionless for long periods, the camouflage of many snakes is so effective that people or domestic animals are most typically bitten because they accidentally step on them.
When camouflage fails to protect them, blue-tongued skinks will try to ward off attackers by displaying their blue tongues, and the frill-necked lizard will display its brightly colored frill. These same displays are used in territorial disputes and during courtship. If danger arises so suddenly that flight is useless, crocodiles, turtles, some lizards, and some snakes hiss loudly when confronted by an enemy. Rattlesnakes rapidly vibrate the tip of the tail, which is composed of a series of nested, hollow beads to ward off approaching danger.
In contrast to the normal drab coloration of most reptiles, the lizards of the genus Heloderma (the Gila monster and the beaded lizard) and many of the coral snakes have high-contrast warning coloration, warning potential predators they are venomous. A number of non-venomous North American snake species have colorful markings similar to those of the coral snake, an oft cited example of Batesian mimicry.
Alternative defense in snakes
Camouflage does not always fool a predator. When caught out, snake species adopt different defensive tactics and use a complicated set of behaviors when attacked. Some species, like cobras or hognose snakes, first elevate their head and spread out the skin of their neck in an effort to look large and threatening. Failure of this strategy may lead to other measures practiced particularly by cobras, vipers, and closely related species, which use venom to attack. The venom is modified saliva, delivered through fangs from a venom gland. Some non-venomous snakes, such as American hognose snakes or European grass snake, play dead when in danger; some, including the grass snake, exude a foul-smelling liquid to deter attackers.
Defense in crocodilians
When a crocodilian is concerned about its safety, it will gape to expose the teeth and tongue. If this does not work, the crocodilian gets a little more agitated and typically begins to make hissing sounds. After this, the crocodilian will start to change its posture dramatically to make itself look more intimidating. The body is inflated to increase apparent size. If absolutely necessary, it may decide to attack an enemy.
Some species try to bite immediately. Some will use their heads as sledgehammers and literally smash an opponent, some will rush or swim toward the threat from a distance, even chasing the opponent onto land or galloping after it. The main weapon in all crocodiles is the bite, which can generate very high bite force. Many species also possess canine-like teeth. These are used primarily for seizing prey, but are also used in fighting and display.
Shedding and regenerating tails
Geckos, skinks, and some other lizards that are captured by the tail will shed part of the tail structure through a process called autotomy and thus be able to flee. The detached tail will continue to thrash, creating a deceptive sense of continued struggle and distracting the predator's attention from the fleeing prey animal. The detached tails of leopard geckos can wiggle for up to 20 minutes. The tail grows back in most species, but some, like crested geckos, lose their tails for the rest of their lives. In many species the tails are of a separate and dramatically more intense color than the rest of the body so as to encourage potential predators to strike for the tail first. In the shingleback skink and some species of geckos, the tail is short and broad and resembles the head, so that the predators may attack it rather than the more vulnerable front part.
Reptiles that are capable of shedding their tails can partially regenerate them over a period of weeks. The new section will however contain cartilage rather than bone, and will never grow to the same length as the original tail. It is often also distinctly discolored compared to the rest of the body and may lack some of the external sculpting features seen in the original tail.
Relations with humans
In cultures and religions
Dinosaurs have been widely depicted in culture since the English palaeontologist Richard Owen coined the name dinosaur in 1842. As early as 1854, the Crystal Palace Dinosaurs were on display to the public in south London. One dinosaur appeared in literature even earlier, as Charles Dickens placed a Megalosaurus in the first chapter of his novel Bleak House in 1852.
The dinosaurs featured in books, films, television programs, artwork, and other media have been used for both education and entertainment. The depictions range from the realistic, as in the television documentaries of the 1990s and first decade of the 21st century, to the fantastic, as in the monster movies of the 1950s and 1960s.
The snake or serpent has played a powerful symbolic role in different cultures. In Egyptian history, the Nile cobra adorned the crown of the pharaoh. It was worshipped as one of the gods and was also used for sinister purposes: murder of an adversary and ritual suicide (Cleopatra). In Greek mythology, snakes are associated with deadly antagonists, as a chthonic symbol, roughly translated as earthbound. The nine-headed Lernaean Hydra that Hercules defeated and the three Gorgon sisters are children of Gaia, the earth. Medusa was one of the three Gorgon sisters who Perseus defeated. Medusa is described as a hideous mortal, with snakes instead of hair and the power to turn men to stone with her gaze. After killing her, Perseus gave her head to Athena who fixed it to her shield called the Aegis. The Titans are depicted in art with their legs replaced by bodies of snakes for the same reason: They are children of Gaia, so they are bound to the earth. In Hinduism, snakes are worshipped as gods, with many women pouring milk on snake pits. The cobra is seen on the neck of Shiva, while Vishnu is depicted often as sleeping on a seven-headed snake or within the coils of a serpent. There are temples in India solely for cobras sometimes called Nagraj (King of Snakes), and it is believed that snakes are symbols of fertility. In the annual Hindu festival of Nag Panchami, snakes are venerated and prayed to. In religious terms, the snake and jaguar are arguably the most important animals in ancient Mesoamerica. "In states of ecstasy, lords dance a serpent dance; great descending snakes adorn and support buildings from Chichen Itza to Tenochtitlan, and the Nahuatl word coatl meaning serpent or twin, forms part of primary deities such as Mixcoatl, Quetzalcoatl, and Coatlicue." In Christianity and Judaism, a serpent appears in Genesis to tempt Adam and Eve with the forbidden fruit from the Tree of Knowledge of Good and Evil.
The turtle has a prominent position as a symbol of steadfastness and tranquility in religion, mythology, and folklore from around the world. A tortoise's longevity is suggested by its long lifespan and its shell, which was thought to protect it from any foe. In the cosmological myths of several cultures a World Turtle carries the world upon its back or supports the heavens.
Medicine
Deaths from snakebites are uncommon in many parts of the world, but are still counted in tens of thousands per year in India. Snakebite can be treated with antivenom made from the venom of the snake. To produce antivenom, a mixture of the venoms of different species of snake is injected into the body of a horse in ever-increasing dosages until the horse is immunized. Blood is then extracted; the serum is separated, purified and freeze-dried. The cytotoxic effect of snake venom is being researched as a potential treatment for cancers.
Lizards such as the Gila monster produce toxins with medical applications. Gila toxin reduces plasma glucose; the substance is now synthesised for use in the anti-diabetes drug exenatide (Byetta). Another toxin from Gila monster saliva has been studied for use as an anti-Alzheimer's drug.
Geckos have also been used as medicine, especially in China. Turtles have been used in Chinese traditional medicine for thousands of years, with every part of the turtle believed to have medical benefits. There is a lack of scientific evidence that would correlate claimed medical benefits to turtle consumption. Growing demand for turtle meat has placed pressure on vulnerable wild populations of turtles.
Commercial farming
Crocodiles are protected in many parts of the world, and are farmed commercially. Their hides are tanned and used to make leather goods such as shoes and handbags; crocodile meat is also considered a delicacy. The most commonly farmed species are the saltwater and Nile crocodiles. Farming has resulted in an increase in the saltwater crocodile population in Australia, as eggs are usually harvested from the wild, so landowners have an incentive to conserve their habitat. Crocodile leather is made into wallets, briefcases, purses, handbags, belts, hats, and shoes. Crocodile oil has been used for various purposes.
Snakes are also farmed, primarily in East and Southeast Asia, and their production has become more intensive in the last decade. Snake farming has been troubling for conservation in the past as it can lead to overexploitation of wild snakes and their natural prey to supply the farms. However, farming snakes can limit the hunting of wild snakes, while reducing the slaughter of higher-order vertebrates like cows. The energy efficiency of snakes is higher than expected for carnivores, due to their ectothermy and low metabolism. Waste protein from the poultry and pig industries is used as feed in snake farms. Snake farms produce meat, snake skin, and antivenom.
Turtle farming is another known but controversial practice. Turtles have been farmed for a variety of reasons, ranging from food to traditional medicine, the pet trade, and scientific conservation. Demand for turtle meat and medicinal products is one of the main threats to turtle conservation in Asia. Though commercial breeding would seem to insulate wild populations, it can stoke the demand for them and increase wild captures. Even the potentially appealing concept of raising turtles at a farm to release into the wild is questioned by some veterinarians who have had some experience with farm operations. They caution that this may introduce into wild populations infectious diseases that occur on the farm but have not (yet) been present in the wild.
Reptiles in captivity
A herpetarium is a zoological exhibition space for reptiles and amphibians.
In the Western world, some snakes (especially relatively docile species such as the ball python and corn snake) are sometimes kept as pets. Numerous species of lizard are kept as pets, including bearded dragons, iguanas, anoles, and geckos (such as the popular leopard gecko and the crested gecko).
Turtles and tortoises are increasingly popular pets, but keeping them can be challenging due to their particular requirements, such as temperature control, the need for UV light sources, and a varied diet. The long lifespans of turtles and especially tortoises mean they can potentially outlive their owners. Good hygiene and significant maintenance are necessary when keeping reptiles, due to the risks of Salmonella and other pathogens. Regular hand-washing after handling is an important measure to prevent infection.
| Biology and health sciences | Biology | null |
25418 | https://en.wikipedia.org/wiki/Proof%20by%20contradiction | Proof by contradiction | In logic, proof by contradiction is a form of proof that establishes the truth or the validity of a proposition by showing that assuming the proposition to be false leads to a contradiction.
Although it is quite freely used in mathematical proofs, not every school of mathematical thought accepts this kind of nonconstructive proof as universally valid.
More broadly, proof by contradiction is any form of argument that establishes a statement by arriving at a contradiction, even when the initial assumption is not the negation of the statement to be proved. In this general sense, proof by contradiction is also known as indirect proof, proof by assuming the opposite, and reductio ad impossibile.
A mathematical proof employing proof by contradiction usually proceeds as follows (a short formal sketch of these steps appears after the list):
The proposition to be proved is P.
We assume P to be false, i.e., we assume ¬P.
It is then shown that ¬P implies falsehood. This is typically accomplished by deriving two mutually contradictory assertions, Q and ¬Q, and appealing to the law of noncontradiction.
Since assuming P to be false leads to a contradiction, it is concluded that P is in fact true.
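The sketch below shows these four steps in a proof assistant. It is a minimal illustration, assuming Lean 4 with Mathlib's by_contra tactic; the hypotheses h1 and h2 are hypothetical stand-ins for whatever argument yields the contradictory assertions Q and ¬Q.

```lean
import Mathlib.Tactic

-- A minimal sketch of the four steps, assuming Lean 4 and Mathlib's `by_contra`.
example (P Q : Prop) (h1 : ¬P → Q) (h2 : ¬P → ¬Q) : P := by
  -- Step 1: the proposition to be proved, P, is the goal.
  by_contra hnp             -- Step 2: assume ¬P.
  exact (h2 hnp) (h1 hnp)   -- Step 3: derive Q and ¬Q, hence falsehood; Step 4: P follows.
```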
An important special case is the existence proof by contradiction: in order to demonstrate that an object with a given property exists, we derive a contradiction from the assumption that all objects satisfy the negation of the property.
Formalization
The principle may be formally expressed as the propositional formula ¬¬P ⇒ P, equivalently (¬P ⇒ ⊥) ⇒ P, which reads: "If assuming P to be false implies falsehood, then P is true."
In natural deduction the principle takes the form of the rule of inference

  ¬¬P
  ────
   P

which reads: "If ¬¬P is proved, then P may be concluded."
In sequent calculus the principle is expressed by the sequent

  Γ, ¬¬P ⊢ P, Δ

which reads: "Hypotheses Γ and ¬¬P entail the conclusion P or Δ."
Justification
In classical logic the principle may be justified by the examination of the truth table of the proposition ¬¬P ⇒ P, which demonstrates it to be a tautology:

  P | ¬P | ¬¬P | ¬¬P ⇒ P
  T | F  | T   | T
  F | T  | F   | T
Another way to justify the principle is to derive it from the law of the excluded middle, as follows. We assume ¬¬P and seek to prove P. By the law of excluded middle P either holds or it does not:
if P holds, then of course P holds.
if ¬P holds, then we derive falsehood by applying the law of noncontradiction to ¬P and ¬¬P, after which the principle of explosion allows us to conclude P.
In either case, we established P. It turns out that, conversely, proof by contradiction can be used to derive the law of excluded middle.
In classical sequent calculus LK, proof by contradiction is derivable from the inference rules for negation.
Relationship with other proof techniques
Refutation by contradiction
Proof by contradiction is similar to refutation by contradiction, also known as proof of negation, which states that ¬P is proved as follows:
The proposition to be proved is ¬P.
Assume P.
Derive falsehood.
Conclude ¬P.
In contrast, proof by contradiction proceeds as follows:
The proposition to be proved is P.
Assume ¬P.
Derive falsehood.
Conclude P.
Formally these are not the same, as refutation by contradiction applies only when the proposition to be proved is negated, whereas proof by contradiction may be applied to any proposition whatsoever. In classical logic, where P and ¬¬P may be freely interchanged, the distinction is largely obscured. Thus in mathematical practice, both principles are referred to as "proof by contradiction".
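The contrast can also be made explicit in a proof assistant. In the hedged Lean 4 sketch below (assuming Mathlib for the by_contra tactic), the first example is a refutation by contradiction and needs nothing beyond the definition of negation, while the second is a proof by contradiction proper and relies on classical reasoning.

```lean
import Mathlib.Tactic

-- Refutation by contradiction (proof of negation): the goal is ¬P.
example (P : Prop) (h : P → False) : ¬P := by
  intro hp      -- assume P
  exact h hp    -- derive falsehood and conclude ¬P (intuitionistically valid)

-- Proof by contradiction: the goal is P itself; `by_contra` supplies the
-- classical step that turns the derived falsehood into a proof of P.
example (P : Prop) (h : ¬P → False) : P := by
  by_contra hnp
  exact h hnp
```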
Law of the excluded middle
Proof by contradiction is equivalent to the law of the excluded middle, first formulated by Aristotle, which states that either an assertion or its negation is true, P ∨ ¬P.
Law of non-contradiction
The law of noncontradiction was first stated as a metaphysical principle by Aristotle. It posits that a proposition and its negation cannot both be true, or equivalently, that a proposition cannot be both true and false. Formally the law of non-contradiction is written as ¬(P ∧ ¬P) and read as "it is not the case that a proposition is both true and false". The law of non-contradiction neither follows from nor is implied by the principle of proof by contradiction.
The laws of excluded middle and non-contradiction together mean that exactly one of P and ¬P is true.
Proof by contradiction in intuitionistic logic
In intuitionistic logic proof by contradiction is not generally valid, although some particular instances can be derived. In contrast, proof of negation and principle of noncontradiction are both intuitionistically valid.
The Brouwer–Heyting–Kolmogorov interpretation of proof by contradiction gives the following intuitionistic validity condition: if there is no method for establishing that a proposition is false, then there is a method for establishing that the proposition is true.
If we take "method" to mean algorithm, then the condition is not acceptable, as it would allow us to solve the Halting problem. To see how, consider the statement H(M) stating "Turing machine M halts or does not halt". Its negation ¬H(M) states that "M neither halts nor does not halt", which is false by the law of noncontradiction (which is intuitionistically valid). If proof by contradiction were intuitionistically valid, we would obtain an algorithm for deciding whether an arbitrary Turing machine M halts, thereby violating the (intuitionistically valid) proof of non-solvability of the Halting problem.
A proposition P which satisfies ¬¬P ⇒ P is known as a ¬¬-stable proposition. Thus in intuitionistic logic proof by contradiction is not universally valid, but can only be applied to the ¬¬-stable propositions. An instance of such a proposition is a decidable one, i.e., one satisfying P ∨ ¬P. Indeed, the above proof that the law of excluded middle implies proof by contradiction can be repurposed to show that a decidable proposition is ¬¬-stable. A typical example of a decidable proposition is a statement that can be checked by direct computation, such as "n is prime" or "a divides b".
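As a small self-contained illustration (a sketch in Lean 4, not drawn from the article), the argument that a decidable proposition is ¬¬-stable can be written as a case analysis on the decision procedure, with no classical axioms involved.

```lean
-- A decidable proposition is ¬¬-stable: case analysis on the decision either
-- yields P directly, or yields ¬P, which contradicts the hypothesis ¬¬P.
example (P : Prop) [d : Decidable P] (hnn : ¬¬P) : P :=
  match d with
  | .isTrue hp   => hp
  | .isFalse hnp => absurd hnp hnn
```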
Examples of proofs by contradiction
Euclid's Elements
An early occurrence of proof by contradiction can be found in Euclid's Elements, Book 1, Proposition 6:
If in a triangle two angles equal one another, then the sides opposite the equal angles also equal one another.
The proof proceeds by assuming that the opposite sides are not equal, and derives a contradiction.
Hilbert's Nullstellensatz
An influential proof by contradiction was given by David Hilbert. His Nullstellensatz states:
If p1, ..., pk are polynomials in n indeterminates with complex coefficients, which have no common complex zeros, then there are polynomials g1, ..., gk such that g1p1 + ... + gkpk = 1.
Hilbert proved the statement by assuming that there are no such polynomials and derived a contradiction.
Infinitude of primes
Euclid's theorem states that there are infinitely many primes. In Euclid's Elements the theorem is stated in Book IX, Proposition 20:
Prime numbers are more than any assigned multitude of prime numbers.
Depending on how we formally write the above statement, the usual proof takes either the form of a proof by contradiction or a refutation by contradiction. We present the former here; see below for how the proof is done as a refutation by contradiction.
If we formally express Euclid's theorem as saying that for every natural number there is a prime bigger than it, then we employ proof by contradiction, as follows.
Given any number n, we seek to prove that there is a prime larger than n. Suppose to the contrary that no such prime exists (an application of proof by contradiction). Then all primes are smaller than or equal to n, and we may form the list p1, ..., pk of them all. Let N = p1 · p2 · ... · pk + 1 be one more than the product of all primes. Because N is larger than all prime numbers it is not prime, hence it must be divisible by one of them, say pi. Now both N and the product p1 · p2 · ... · pk are divisible by pi, hence so is their difference 1, but this cannot be because 1 is not divisible by any prime. Hence we have a contradiction and so there is a prime number bigger than n.
Examples of refutations by contradiction
The following examples are commonly referred to as proofs by contradiction, but formally employ refutation by contradiction (and therefore are intuitionistically valid).
Infinitude of primes
Let us take a second look at Euclid's theorem – Book IX, Proposition 20:
Prime numbers are more than any assigned multitude of prime numbers.
We may read the statement as saying that for every finite list of primes, there is another prime not on that list,
which is arguably closer to and in the same spirit as Euclid's original formulation. In this case Euclid's proof applies refutation by contradiction at one step, as follows.
Given any finite list of prime numbers p1, p2, ..., pn, it will be shown that at least one additional prime number not in this list exists. Let P = p1 · p2 · ... · pn be the product of all the listed primes and let p be a prime factor of P + 1, possibly P + 1 itself. We claim that p is not in the given list of primes. Suppose to the contrary that it were (an application of refutation by contradiction). Then p would divide both P and P + 1, therefore also their difference, which is 1. This gives a contradiction, since no prime number divides 1.
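The construction in this refutation step is effective and can be checked computationally. The sketch below is an illustration only, with function names chosen here rather than taken from the article; it finds, for any finite list of primes, a prime factor of their product plus one that cannot be on the list.

```python
# Illustrative sketch of Euclid's construction (names are our own, not the article's).
def smallest_prime_factor(n: int) -> int:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_not_in_list(primes: list[int]) -> int:
    """Return a prime that is not contained in the given list of primes."""
    product = 1
    for p in primes:
        product *= p
    q = smallest_prime_factor(product + 1)   # q divides P + 1
    assert q not in primes                   # otherwise q would divide both P and P + 1, hence 1
    return q

print(prime_not_in_list([2, 3, 5]))      # 31 (= 2*3*5 + 1, itself prime)
print(prime_not_in_list([2, 3, 5, 7]))   # 211
```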
Irrationality of the square root of 2
The classic proof that the square root of 2 is irrational is a refutation by contradiction.
Indeed, we set out to prove the negation ¬∃ a, b ∈ ℕ. a/b = √2 by assuming that there exist natural numbers a and b whose ratio is the square root of two, and derive a contradiction.
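For readers using a proof assistant, the statement is available in formalized form. The one-line Lean sketch below assumes the Mathlib library and its irrational_sqrt_two lemma; note that Irrational is itself defined as a negated statement (the number is not in the range of the rationals), so its proof has exactly the refutation-by-contradiction shape described above.

```lean
import Mathlib

-- A sketch assuming Mathlib: `Irrational (Real.sqrt 2)` unfolds to a negation,
-- and the library lemma `irrational_sqrt_two` proves it by deriving a
-- contradiction from an assumed rational representation of √2.
example : Irrational (Real.sqrt 2) := irrational_sqrt_two
```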
Proof by infinite descent
Proof by infinite descent is a method of proof whereby a smallest object with a desired property is shown not to exist, as follows:
Assume that there is a smallest object with the desired property.
Demonstrate that an even smaller object with the desired property exists, thereby deriving a contradiction.
Such a proof is again a refutation by contradiction. A typical example is the proof of the proposition "there is no smallest positive rational number": assume there is a smallest positive rational number q and derive a contradiction by observing that q/2 is even smaller than q and still positive.
Russell's paradox
Russell's paradox, stated set-theoretically as "there is no set whose elements are precisely those sets that do not contain themselves", is a negated statement whose usual proof is a refutation by contradiction.
Notation
Proofs by contradiction sometimes end with the word "Contradiction!". Isaac Barrow and Baermann used the notation Q.E.A., for "quod est absurdum" ("which is absurd"), along the lines of Q.E.D., but this notation is rarely used today. A graphical symbol sometimes used for contradictions is a downwards zigzag arrow "lightning" symbol (U+21AF: ↯), for example in Davey and Priestley. Others sometimes used include a pair of opposing arrows, struck-out arrows, a stylized form of hash (such as U+2A33: ⨳), or the "reference mark" (U+203B: ※).
Hardy's view
G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game."
Automated theorem proving
In automated theorem proving the method of resolution is based on proof by contradiction. That is, in order to show that a given statement is entailed by given hypotheses, the automated prover assumes the hypotheses and the negation of the statement, and attempts to derive a contradiction.
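A toy propositional version of this procedure can be written in a few lines. The sketch below is illustrative only (the clause encoding and function names are assumptions, not a real prover's API): clauses are sets of literals, the negated goal is added to the hypotheses, and the search succeeds when the empty clause is derived.

```python
# Naive propositional resolution refutation (illustrative sketch, not a real prover).
from itertools import combinations

def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1: frozenset, c2: frozenset):
    """All resolvents of two clauses (remove a complementary pair, merge the rest)."""
    return [(c1 - {lit}) | (c2 - {negate(lit)}) for lit in c1 if negate(lit) in c2]

def refutable(clauses) -> bool:
    """True if the clause set is unsatisfiable, i.e. the empty clause is derivable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # empty clause: contradiction reached
                    return True
                new.add(frozenset(r))
        if new <= clauses:           # nothing new can be derived: no contradiction
            return False
        clauses |= new

# Hypotheses: p -> q (clause {~p, q}) and p; negated goal: ~q.
# The empty clause is derivable, so q is entailed by the hypotheses.
print(refutable([{"~p", "q"}, {"p"}, {"~q"}]))   # True
```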
| Mathematics | Mathematical logic | null |
25453 | https://en.wikipedia.org/wiki/Rheology | Rheology | Rheology is the study of the flow of matter, primarily in a fluid (liquid or gas) state but also as "soft solids" or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. Rheology is the branch of physics that deals with the deformation and flow of materials, both solids and liquids.
The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920 from a suggestion by a colleague, Markus Reiner. The term was inspired by the aphorism of Heraclitus (often mistakenly attributed to Simplicius), panta rhei ('everything flows'), and was first used to describe the flow of liquids and the deformation of solids. It applies to substances that have a complex microstructure, such as muds, sludges, suspensions, and polymers and other glass formers (e.g., silicates), as well as many foods and additives, bodily fluids (e.g., blood) and other biological materials, and other materials that belong to the class of soft matter such as food.
Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate (the relative flow velocity) are called non-Newtonian fluids.
Rheology generally accounts for the behavior of non-Newtonian fluids by characterizing the minimum number of functions that are needed to relate stresses with rate of change of strain or strain rates. For example, ketchup can have its viscosity reduced by shaking (or other forms of mechanical agitation, where the relative movement of different layers in the material actually causes the reduction in viscosity), but water cannot. Ketchup is a shear-thinning material, like yogurt and emulsion paint (US terminology latex paint or acrylic paint), exhibiting thixotropy, where an increase in relative flow velocity will cause a reduction in viscosity, for example, by stirring. Some other non-Newtonian materials show the opposite behavior, rheopecty (viscosity increasing with relative deformation), and are called shear-thickening or dilatant materials. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called Non-Newtonian fluid mechanics.
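One standard way to put shear-thinning and shear-thickening on a quantitative footing, not spelled out in the text above, is the power-law (Ostwald–de Waele) model, in which apparent viscosity varies with shear rate. The sketch below uses purely illustrative parameter values.

```python
# Power-law (Ostwald-de Waele) model of apparent viscosity:
#   eta(shear_rate) = K * shear_rate**(n - 1)
# n < 1: shear-thinning (ketchup-like), n = 1: Newtonian, n > 1: shear-thickening.
def apparent_viscosity(shear_rate: float, K: float, n: float) -> float:
    """Apparent viscosity in Pa*s (K: consistency index, n: flow behaviour index)."""
    return K * shear_rate ** (n - 1)

for shear_rate in (0.1, 1.0, 10.0, 100.0):  # 1/s
    thinning = apparent_viscosity(shear_rate, K=5.0, n=0.4)    # illustrative shear-thinning fluid
    newtonian = apparent_viscosity(shear_rate, K=1.0, n=1.0)   # viscosity independent of shear rate
    print(f"{shear_rate:6.1f} 1/s   shear-thinning: {thinning:8.3f} Pa*s   Newtonian: {newtonian:4.2f} Pa*s")
```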
The experimental characterisation of a material's rheological behaviour is known as rheometry, although the term rheology is frequently used synonymously with rheometry, particularly by experimentalists. Theoretical aspects of rheology are the relation of the flow/deformation behaviour of material and its internal structure (e.g., the orientation and elongation of polymer molecules) and the flow/deformation behaviour of materials that cannot be described by classical fluid mechanics or elasticity.
Scope
In practice, rheology is principally concerned with extending continuum mechanics to characterize the flow of materials that exhibit a combination of elastic, viscous and plastic behavior by properly combining elasticity and (Newtonian) fluid mechanics. It is also concerned with predicting mechanical behavior (on the continuum mechanical scale) based on the micro- or nanostructure of the material, e.g. the molecular size and architecture of polymers in solution or the particle size distribution in a solid suspension.
Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress (e.g. shear, torsional, etc.), and materials can respond differently under different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses, internal strain gradients, and flow velocities.
Rheology unites the seemingly unrelated fields of plasticity and non-Newtonian fluid dynamics by recognizing that materials undergoing these types of deformation are unable to support a stress (particularly a shear stress, since it is easier to analyze shear deformation) in static equilibrium. In this sense, a solid undergoing plastic deformation is a fluid, although no viscosity coefficient is associated with this flow. Granular rheology refers to the continuum mechanical description of granular materials.
One of the major tasks of rheology is to establish by measurement the relationships between strains (or rates of strain) and stresses, although a number of theoretical developments (such as assuring frame invariants) are also required before using the empirical data. These experimental techniques are known as rheometry and are concerned with the determination of well-defined rheological material functions. Such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics.
The characterization of flow or deformation originating from a simple shear stress field is called shear rheometry (or shear rheology). The study of extensional flows is called extensional rheology. Shear flows are much easier to study and thus much more experimental data are available for shear flows than for extensional flows.
Viscoelasticity
Fluid and solid character are relevant at long times. We consider the application of a constant stress (a so-called creep experiment):
if the material, after some deformation, eventually resists further deformation, it is considered a solid
if, by contrast, the material flows indefinitely, it is considered a fluid
By contrast, elastic and viscous (or intermediate, viscoelastic) behaviour is relevant at short times (transient behaviour). We again consider the application of a constant stress:
if the material deformation strain increases linearly with increasing applied stress, then the material is linear elastic within the range in which it shows recoverable strains. Elasticity is essentially a time-independent process, as the strains appear the moment the stress is applied, without any time delay.
if the material deformation strain rate increases linearly with increasing applied stress, then the material is viscous in the Newtonian sense. These materials are characterized by a time delay between the applied constant stress and the maximum strain.
if the material behaves as a combination of viscous and elastic components, then the material is viscoelastic. Theoretically such materials can show both instantaneous deformation, as in an elastic material, and delayed time-dependent deformation, as in a fluid.
Plasticity is the behavior observed after the material is subjected to a yield stress. A material that behaves as a solid under low applied stresses may start to flow above a certain level of stress, called the yield stress of the material. The term plastic solid is often used when this plasticity threshold is rather high, while yield stress fluid is used when the threshold stress is rather low. However, there is no fundamental difference between the two concepts.
Dimensionless numbers
Deborah number
On one end of the spectrum we have an inviscid or a simple Newtonian fluid and on the other end, a rigid solid; thus the behavior of all materials falls somewhere in between these two ends. The difference in material behavior is characterized by the level and nature of elasticity present in the material when it deforms, which takes the material behavior to the non-Newtonian regime. The non-dimensional Deborah number is designed to account for the degree of non-Newtonian behavior in a flow. The Deborah number is defined as the ratio of the characteristic time of relaxation (which depends purely on the material and other conditions like the temperature) to the characteristic time of the experiment or observation. Small Deborah numbers represent Newtonian flow, while non-Newtonian behavior (with both viscous and elastic effects present) occurs for intermediate Deborah numbers, and high Deborah numbers indicate an elastic/rigid solid. Since the Deborah number is a ratio, either the numerator or the denominator can alter its value: a very small Deborah number can be obtained for a fluid with an extremely small relaxation time or for a very large experimental time, for example.
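Because the Deborah number is just this ratio of time scales, it is trivial to compute once the two times are chosen; the values in the sketch below are rough illustrative assumptions, not data from the article.

```python
# Deborah number: relaxation time of the material / time scale of observation.
def deborah_number(relaxation_time_s: float, observation_time_s: float) -> float:
    return relaxation_time_s / observation_time_s

# Illustrative, assumed time scales:
print(deborah_number(1e-12, 1.0))  # a simple liquid observed for a second: De << 1, fluid-like
print(deborah_number(1e4, 1.0))    # a very slowly relaxing material observed for a second: De >> 1, solid-like
```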
Reynolds number
In fluid mechanics, the Reynolds number is a measure of the ratio of inertial forces to viscous forces, and consequently it quantifies the relative importance of these two types of effect for given flow conditions. Under low Reynolds numbers viscous effects dominate and the flow is laminar, whereas at high Reynolds numbers inertia predominates and the flow may be turbulent. However, since rheology is concerned with fluids which do not have a fixed viscosity, but one which can vary with flow and time, calculation of the Reynolds number can be complicated.
It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. When two geometrically similar flow patterns, in perhaps different fluids with possibly different flow rates, have the same values for the relevant dimensionless numbers, they are said to be dynamically similar.
Typically it is given as follows (a short numerical illustration appears after the list of symbols):

  Re = ρ us L / μ = us L / ν
where:
us – mean flow velocity, [m s−1]
L – characteristic length, [m]
μ – (absolute) dynamic fluid viscosity, [N s m−2] or [Pa s]
ν – kinematic fluid viscosity: , [m2 s−1]
ρ – fluid density, [kg m−3].
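Using the quantities defined above, the Reynolds number is straightforward to compute; the fluid properties and flow dimensions in the sketch below are illustrative assumptions rather than values from the article.

```python
# Reynolds number: Re = rho * u_s * L / mu  =  u_s * L / nu
def reynolds_number(u_s: float, L: float, rho: float = None, mu: float = None, nu: float = None) -> float:
    """Compute Re from dynamic viscosity (rho and mu) or kinematic viscosity (nu)."""
    if nu is not None:
        return u_s * L / nu
    return rho * u_s * L / mu

# Water-like fluid (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) flowing at 0.5 m/s in a 0.02 m pipe:
print(reynolds_number(u_s=0.5, L=0.02, rho=1000.0, mu=1e-3))   # 10000.0, typically turbulent
```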
Measurement
Rheometers are instruments used to characterize the rheological properties of materials, typically fluids that are melts or solutions. These instruments impose a specific stress field or deformation on the fluid, and monitor the resultant deformation or stress. Instruments can be run in steady flow or oscillatory flow, in both shear and extension.
Applications
Rheology has applications in materials science, engineering, geophysics, physiology, human biology and pharmaceutics. Materials science is utilized in the production of many industrially important substances, such as cement, paint, and chocolate, which have complex flow characteristics. In addition, plasticity theory has been similarly important for the design of metal forming processes. The science of rheology and the characterization of viscoelastic properties in the production and use of polymeric materials has been critical for the production of many products for use in both the industrial and military sectors.
Study of flow properties of liquids is important for pharmacists working in the manufacture of several dosage forms, such as simple liquids, ointments, creams, pastes etc. The flow behavior of liquids under applied stress is of great relevance in the field of pharmacy. Flow properties are used as important quality control tools to maintain the superiority of the product and reduce batch to batch variations.
Materials science
Polymers
Examples may be given to illustrate the potential applications of these principles to practical problems in the processing and use of rubbers, plastics, and fibers. Polymers constitute the basic materials of the rubber and plastic industries and are of vital importance to the textile, petroleum, automobile, paper, and pharmaceutical industries. Their viscoelastic properties determine the mechanical performance of the final products of these industries, and also the success of processing methods at intermediate stages of production.
In viscoelastic materials, such as most polymers and plastics, the presence of liquid-like behaviour depends on the properties of the material and so varies with the rate of applied load, i.e., how quickly a force is applied. The silicone toy 'Silly Putty' behaves quite differently depending on the time rate of applying a force. Pull on it slowly and it exhibits continuous flow, similar to that evidenced in a highly viscous liquid. Alternatively, when hit hard and directly, it shatters like a silicate glass.
In addition, conventional rubber undergoes a glass transition (often called a rubber-glass transition). For example, the Space Shuttle Challenger disaster was caused by rubber O-rings that were being used well below their glass transition temperature on an unusually cold Florida morning, and thus could not flex adequately to form proper seals between sections of the two solid-fuel rocket boosters.
Biopolymers
Sol-gel
With the viscosity of a sol adjusted into a proper range, both optical-quality glass fiber and refractory ceramic fiber can be drawn; these are used for fiber-optic sensors and thermal insulation, respectively. The mechanisms of hydrolysis and condensation, and the rheological factors that bias the structure toward linear or branched structures, are the most critical issues of sol-gel science and technology.
Geophysics
The scientific discipline of geophysics includes the study of the flow of molten lava and the study of debris flows (fluid mudslides). This disciplinary branch also deals with solid Earth materials which only exhibit flow over extended time-scales. Those that display viscous behaviour are known as rheids. For example, granite can flow plastically with a negligible yield stress at room temperature (i.e. a viscous flow). Long-term creep experiments (~10 years) indicate that the viscosity of granite and glass under ambient conditions is on the order of 10^20 poises.
Physiology
Physiology includes the study of many bodily fluids that have complex structure and composition, and thus exhibit a wide range of viscoelastic flow characteristics. In particular there is a specialist study of blood flow called hemorheology. This is the study of flow properties of blood and its elements (plasma and formed elements, including red blood cells, white blood cells and platelets). Blood viscosity is determined by plasma viscosity, hematocrit (volume fraction of red blood cells, which constitute 99.9% of the cellular elements) and the mechanical behaviour of red blood cells. Therefore, red blood cell mechanics is the major determinant of the flow properties of blood. (The ocular vitreous humor is also subject to rheological study, particularly in investigations of age-related vitreous liquefaction, or synaeresis.)
The leading characteristic of hemorheology is shear thinning in steady shear flow. Other non-Newtonian rheological characteristics that blood can demonstrate include pseudoplasticity, viscoelasticity, and thixotropy.
Red blood cell aggregation
There are two current major hypotheses that attempt to explain reversible red blood cell aggregation and the shear-thinning behavior of blood, although the mechanism is still being debated. The bridging or "cross-bridging" hypothesis suggests that macromolecules adsorb onto red blood cell surfaces and physically crosslink adjacent cells into rouleaux structures. The depletion layer hypothesis suggests the opposite mechanism: the surfaces of the red blood cells are drawn together by an osmotic pressure gradient created by overlapping depletion layers. Red blood cell aggregation has a direct effect on blood viscosity and circulation, and the tendency toward rouleaux formation can be related to hematocrit and fibrinogen concentration in whole-blood rheology. The foundations of hemorheology can also inform the modeling of other biofluids. Techniques such as optical trapping and microfluidics are used to measure cell interactions in vitro.
Disease and diagnostics
Changes to blood viscosity have been shown to be linked with diseases such as hyperviscosity syndrome, hypertension, sickle cell anemia, and diabetes. Hemorheological measurements and genomic testing technologies act as preventative measures and diagnostic tools.
Hemorheology has also been correlated with the effects of aging, especially impaired blood fluidity, and studies have shown that physical activity may improve blood fluidity.
Zoology
Many animals make use of rheological phenomena, for example sandfish that exploit the granular rheology of dry sand to "swim" in it or land gastropods that use snail slime for adhesive locomotion. Certain animals produce specialized endogenous complex fluids, such as the sticky slime produced by velvet worms to immobilize prey or the fast-gelling underwater slime secreted by hagfish to deter predators.
Food rheology
Food rheology is important in the manufacture and processing of food products, such as cheese and gelato. An adequate rheology is important for the indulgence of many common foods, particularly in the case of sauces, dressings, yogurt, or fondue.
Thickening agents, or thickeners, are substances which, when added to an aqueous mixture, increase its viscosity without substantially modifying its other properties, such as taste. They provide body, increase stability, and improve suspension of added ingredients. Thickening agents are often used as food additives and in cosmetics and personal hygiene products. Some thickening agents are gelling agents, forming a gel. The agents are materials used to thicken and stabilize liquid solutions, emulsions, and suspensions. They dissolve in the liquid phase as a colloid mixture that forms a weakly cohesive internal structure. Food thickeners frequently are based on either polysaccharides (starches, vegetable gums, and pectin), or proteins.
Concrete rheology
The workability of concrete and mortar is related to the rheological properties of the fresh cement paste. The mechanical properties of hardened concrete increase if less water is used in the concrete mix design; however, reducing the water-to-cement ratio may decrease the ease of mixing and application. To avoid these undesired effects, superplasticizers are typically added to decrease the apparent yield stress and the viscosity of the fresh paste. Their addition greatly improves concrete and mortar properties.
Filled polymer rheology
The incorporation of various types of fillers into polymers is a common means of reducing cost and of imparting certain desirable mechanical, thermal, electrical and magnetic properties to the resulting material. The advantages that filled polymer systems offer come with an increased complexity in their rheological behavior.
Usually when the use of fillers is considered, a compromise has to be made between the improved mechanical properties in the solid state on one side and, on the other, the increased difficulty of melt processing, the problem of achieving uniform dispersion of the filler in the polymer matrix, and the economics of the process due to the added compounding step. The rheological properties of filled polymers are determined not only by the type and amount of filler, but also by the shape, size and size distribution of its particles. The viscosity of filled systems generally increases with increasing filler fraction. This can be partially ameliorated through broad particle size distributions (the Farris effect). An additional factor is the stress transfer at the filler-polymer interface. The interfacial adhesion can be substantially enhanced via a coupling agent that adheres well to both the polymer and the filler particles. The type and amount of surface treatment on the filler are thus additional parameters affecting the rheological and material properties of filled polymeric systems.
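The text does not commit to a particular viscosity model, but one commonly used illustration of how suspension viscosity climbs with filler fraction is the Krieger–Dougherty relation; the sketch below, with assumed maximum-packing and intrinsic-viscosity parameters, is only meant to show the qualitative trend:

    def krieger_dougherty_relative_viscosity(phi, phi_max=0.64, intrinsic_viscosity=2.5):
        # Relative viscosity of a suspension: (1 - phi/phi_max) ** (-[eta] * phi_max)
        return (1.0 - phi / phi_max) ** (-intrinsic_viscosity * phi_max)

    # Viscosity rises slowly at low filler fractions and steeply near dense packing.
    for phi in (0.1, 0.3, 0.5, 0.6):
        print(phi, round(krieger_dougherty_relative_viscosity(phi), 1))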
It is important to take into consideration wall slip when performing the rheological characterization of highly filled materials, as there can be a large difference between the actual strain and the measured strain.
Rheologist
A rheologist is an interdisciplinary scientist or engineer who studies the flow of complex liquids or the deformation of soft solids. Rheology is not a primary degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the physical sciences (e.g. chemistry, physics, geology, biology), engineering (e.g. mechanical, chemical, civil, materials science and engineering, or plastics engineering), medicine, or certain technologies, notably materials or food. Typically, a small amount of rheology may be studied when obtaining a degree, but a person working in rheology will extend this knowledge during postgraduate research or by attending short courses and by joining a professional association.
| Physical sciences | Fluid mechanics | Physics |
25456 | https://en.wikipedia.org/wiki/Rifle | Rifle | A rifle is a long-barreled firearm designed for accurate shooting and higher stopping power, with a barrel that has a helical or spiralling pattern of grooves (rifling) cut into the bore wall. In keeping with their focus on accuracy, rifles are typically designed to be held with both hands and braced firmly against the shooter's shoulder via a buttstock for stability during shooting. Rifles have been used in warfare, law enforcement, hunting and target shooting sports.
The term was originally rifled gun, with the verb rifle referring to the early modern machining process of creating grooves with cutting tools. By the 20th century, the weapon had become so common that the modern noun rifle is now often used for any long-shaped handheld ranged weapon designed for well-aimed discharge activated by a trigger.
Like all typical firearms, a rifle's projectile (bullet) is propelled by the contained deflagration of a combustible propellant compound (originally black powder and now nitrocellulose and other smokeless powders), although other propulsive means are used, such as compressed air in air rifles, which are popular for vermin control, small game hunting, competitive target shooting and casual sport shooting (plinking).
The distinct feature that separates a rifle from the earlier smoothbore long guns (e.g., arquebuses, muskets) is the rifling within its barrel. The raised areas of a barrel's rifling are called lands; they make contact with and exert torque on the projectile as it moves down the bore, imparting a spin. When the projectile leaves the barrel, this spin persists and lends gyroscopic stability to the projectile due to conservation of angular momentum, increasing accuracy and hence effective range.
Terminology
Historically, rifles only fired a single projectile with each squeeze of the trigger. Modern rifles are commonly classified as single-shot, bolt-action, semi-automatic, or automatic. Single-shot, bolt-action, and semi-automatic rifles are limited by their designs to fire a single shot for each trigger pull. Only automatic rifles are capable of firing more than one round per trigger squeeze; however, some automatic rifles are limited to fixed bursts of two, three, or more rounds per squeeze.
Modern automatic rifles overlap to some extent in design and function with machine guns. In fact, many light machine guns are adaptations of existing automatic rifle designs, such as the RPK and M27 Infantry Automatic Rifle. A military's light machine guns are typically chambered for the same caliber ammunition as its service rifles. Generally, the difference between an automatic rifle and a machine gun comes down to weight, cooling system, and ammunition feed system. Rifles, with their relatively lighter components (which overheat quickly) and smaller capacity magazines, are incapable of sustained automatic fire in the way that machine guns are; they trade this capability in favor of increased mobility. Modern military rifles are fed by magazines, while machine guns are generally belt-fed. Many machine guns allow the operator to quickly exchange barrels in order to prevent overheating, whereas rifles generally do not. Most machine guns fire from an open bolt in order to reduce the danger of "cook-off", while almost all rifles fire from a closed bolt for accuracy. Machine guns are often crewed by more than one soldier; the rifle is an individual weapon.
The term "rifle" is sometimes used to describe larger rifled crew-served weapons firing explosive shells, for example, recoilless rifles and naval rifles.
In many works of fiction "rifle" refers to any weapon that has a stock and is shouldered before firing, even if the weapon is not rifled or does not fire solid projectiles (e.g. "laser rifle").
Historical overview
The origins of rifling are difficult to trace, but some of the earliest European experiments seem to have been carried out during the 15th century. Archers had long realized that a twist added to the tail feathers of their arrows gave them greater accuracy. Early muskets produced large quantities of smoke and soot, which had to be cleaned from the action and bore of the musket frequently, either through the action of repeated bore scrubbing, or a deliberate attempt to create "soot grooves" that would allow for more shots to be fired from the firearm.
While many people contributed to the development of the concept of rifling and rifles, Friedrich Engels claimed it as a German invention in his extensive writings about the history of the rifle, and the evolution and use of the technology.
Some of the earliest examples of European grooved gun barrels were reportedly manufactured during 1440, and further developed by Gaspard Kollner of Vienna, although other scholars allege they were a joint effort between Kollner and Augustus Kotter of Nuremberg. Military commanders preferred smoothbore weapons for infantry use because rifles were much more prone to problems due to powder fouling the barrel and because they took longer to reload and fire than muskets.
Rifles were created as an improvement in the accuracy of smoothbore muskets. In the early 18th century, Benjamin Robins, an English mathematician, realized that an elongated bullet would retain the momentum and kinetic energy of a musket ball, but would slice through the air with greater ease. The black powder used in early muzzle-loading rifles quickly fouled the barrel, making loading slower and more difficult. The greater range of the rifle was considered to be of little practical use since the smoke from black powder quickly obscured the battlefield and made it almost impossible to aim the weapon from a distance. Since musketeers could not afford to take the time to stop and clean their barrels in the middle of a battle, rifles were limited to use by sharpshooters and non-military uses like hunting.
Muskets were smoothbore, large caliber weapons using spherical ammunition fired at relatively low velocity. Due to the high cost and great difficulty of precision manufacturing, and the need to load readily from the muzzle, the musket ball was a loose fit in the barrel. Consequently, on firing, the ball bounced off the sides of the barrel, and its final direction on leaving the muzzle was unpredictable.
The performance of early muskets defined the style of warfare at the time. Due to the lack of accuracy, soldiers were deployed in long lines (thus line infantry) to fire at the opposing forces. Precise aim was thus not necessary to hit an opponent. Muskets were used for comparatively rapid, imprecisely aimed volley fire, and the average soldier could be easily trained to use them.
In the Province of Pennsylvania in colonial America, one of the most successful early rifles, the long rifle, was developed over the course of the 18th century. Compared to the more common Brown Bess, these Pennsylvania and Kentucky rifles had a tighter bore with no space between bullet and barrel, and still used balls instead of conical bullets. The balls the long rifle used were smaller, allowing the production of more rounds for a given amount of lead. These rifles also had longer barrels, rifled with a helical groove, allowing greater accuracy. They first started appearing sometime before 1740, one early example being made by Jacob Dickert, a German immigrant. By 1750 there were a number of such manufacturers in the area. The longer barrel was a departure by local gunsmiths from their German roots, allowing bullets to achieve a higher speed (as the burning gunpowder was contained longer) before emerging from the barrel.
During the 18th century, colonial settlers, particularly those immigrating from Germany and Switzerland, adapted and improved upon their European rifles. The improved long rifles were used for precise shooting, aiming and firing at individual targets, instead of the musket's use for imprecise fire. During the American Revolution, colonial troops favoured these more accurate rifles, while their use was resisted by the British and Hessian troops.
By the time of the American Revolutionary War, these rifles were commonly used by frontiersmen, and Congress authorized the establishment of ten companies of riflemen. One of the most critical units was Morgan's Riflemen, led by Daniel Morgan. This sharpshooting unit eventually proved itself integral to the Battle of Saratoga, and in the southern states where General Morgan commanded as well. Taking advantage of the rifle's improved accuracy, Morgan's sharpshooters picked off cannoneers and officers, reducing the impact of enemy artillery. This kind of advantage was considered pivotal in many battles, such as the battles of Cowpens, Saratoga, and King's Mountain.
Later, during the Napoleonic Wars, the British 95th Regiment (Green Jackets) and 60th Regiment (Royal American), as well as sharpshooters and riflemen during the War of 1812, used the rifle to great effect during skirmishing. Because of its slower loading time compared with a musket, the rifle was not adopted by the whole army. Since rifles were used by sharpshooters who did not routinely fire over other men's shoulders, long length was not required to avoid the forward line. A shorter length made a handier weapon in which tight-fitting balls did not have to be rammed so far down the barrel.
The invention of the Minié ball in the 1840s solved the slow loading problem, and in the 1850s and 1860s rifles quickly replaced muskets on the battlefield. Many rifles, often referred to as rifled muskets, were very similar to the muskets they replaced, but the military also experimented with other designs. Breech-loading weapons proved to have a much faster rate of fire than muzzleloaders, causing military forces to abandon muzzle loaders in favor of breech-loading designs in the late 1860s. In the later part of the 19th century, rifles were generally single-shot, breech-loading guns, designed for aimed, discretionary fire by individual soldiers. Then, as now, rifles had a stock, either fixed or folding, to be braced against the shoulder when firing.
The adoption of cartridges and breech-loading in the 19th century was concurrent with the general adoption of rifles. In the early part of the 20th century, soldiers were trained to shoot accurately over long ranges with high-powered cartridges. World War I Lee–Enfield rifles (among others) were equipped with long-range 'volley sights' for massed firing at ranges of up to . Individual shots were unlikely to hit, but a platoon firing repeatedly could produce a 'beaten ground' effect similar to light artillery or machine guns.
Currently, rifles are the most common firearm in general use for hunting (with the exception of bird hunting, where shotguns are favored). Rifles derived from military designs have long been popular with civilian shooters.
19th century
During the Napoleonic Wars the British army established several experimental units known as "Rifles", armed with the Baker rifle. These Rifle Regiments were deployed as skirmishers during the Peninsular War of 1807 to 1814 in Spain and Portugal, and proved more effective than skirmishers armed with muskets due to their accuracy and long range.
In Central Asia, Uzbeks, Kazakhs and Tadjiks in the course of the 19th century adopted a form of large-calibre rifle known as the karamultyk.
Muzzle-loading
Gradually, rifles appeared with cylindrical barrels cut with helical grooves, the surfaces between the grooves being "lands". The innovation was shortly followed by the mass adoption of breech-loading weapons, as it was not practical to push an overbore bullet down through a rifled barrel. The dirt and grime from prior shots were pushed down ahead of a tight bullet or ball (which may have been a looser fit in the clean barrel before the first shot), and loading was far more difficult, as the lead had to be deformed to go down in the first place, reducing the accuracy due to deformation. Several systems were tried to deal with the problem, usually by resorting to an under-bore bullet that expanded upon firing.
The original muzzle-loading rifle, with a closely fitting ball to take the rifling grooves, was loaded with difficulty, particularly when foul, and for this reason was not generally used for military purposes. With the advent of rifling, the bullet itself did not initially change but was wrapped in a greased, cloth patch to grip the rifling grooves.
The first half of the 19th century saw a distinct change in the shape and function of the bullet. In 1826 Henri-Gustave Delvigne, a French infantry officer, invented a breech with abrupt shoulders on which a spherical bullet was rammed down until it caught the rifling grooves. Delvigne's method, however, deformed the bullet and was inaccurate.
Soon after, Louis-Etienne de Thouvenin invented the Carabine à tige, which had a stem at the bottom of the barrel that would deform and expand the base of the bullet when rammed, therefore enabling accurate contact with the rifling. However, the area around the stem clogged and got dirty easily.
Minié system – the "rifled musket"
The famous Minié system, invented by French Army Captain Claude-Étienne Minié, relied on a conical bullet (known as a Minié ball) with a hollow skirt at the base of the bullet. When fired, the skirt would expand from the pressure of the exploding charge and grip the rifling as the round was fired. The better seal gave more power, as less gas escaped past the bullet. Also, for the same bore (caliber) diameter a long bullet was heavier than a round ball. The extra grip also spun the bullet more consistently, which increased the range from about 50 yards for a smoothbore musket to about 300 yards for a rifle using the Minié system. The expanding skirt of the Minié ball also solved the problem that earlier tight-fitting bullets were difficult to load as black powder residue fouled the inside of the barrel. The Minié system allowed conical bullets to be loaded into rifles just as quickly as round balls in smooth bores, which allowed rifle muskets to replace muskets on the battlefield.
Minié system rifles, notably the U.S. Springfield and the British Enfield of the early 1860s, featured prominently in the U.S. Civil War of 1861–1865, due to their enhanced power and accuracy. At the time of the Crimean War (1853–1856) the Minié rifle was considered the "best in military use".
Over the 19th century, bullet design continued to evolve, the bullets becoming gradually smaller and lighter. By 1910 the standard blunt-nosed bullet had been replaced by the pointed, 'spitzer' bullet, an innovation that increased range and penetration. Cartridge design evolved from simple paper tubes containing black powder and shot, to sealed brass cases with integral primers for ignition, and black powder was replaced by cordite, and then by other nitro-cellulose-based smokeless powder mixtures, propelling bullets to higher velocities than before.
The increased velocity meant that new problems arose, and so bullets went from using soft lead to harder lead, then to copper-jacketed, in order to better engage the spiral grooves without "stripping" them in the same way that a screw or bolt thread would be stripped if subjected to extreme forces.
Breech loading
From 1836, breech-loading rifles were introduced with the German Dreyse Needle gun, followed by the French Tabatière in 1857, by the British Calisher and Terry carbine made in Birmingham and later in 1864 by the better known British Snider–Enfield. Primitive chamber-locking mechanisms were soon replaced by bolt-action mechanisms, exemplified by the French Chassepot in 1866. Breech-loading was to have a major impact on warfare, as breech-loading rifles can be fired at a rate many times faster than muzzle-loaded rifles and - significantly - can be loaded from a prone rather than standing position. Firing prone (i.e., lying down) is more accurate than firing from a standing position, and a prone rifleman presents a much smaller target than a standing soldier. The higher accuracy and range, combined with reduced vulnerability generally benefited defense, while making the traditional battle between lines of standing and volleying infantrymen obsolete.
Revolving rifle
Revolving rifles were an attempt to increase the rate of fire of rifles by combining them with the revolving firing mechanism that had been developed earlier for revolving pistols. Colt began experimenting with revolving rifles in the early-19th century, and other manufacturers like Remington later experimented with them as well. The Colt Revolving Rifle Model 1855, an early repeating rifle, became the first to be used by the U.S. Government and saw some limited action during the American Civil War. Revolvers, both rifles and pistols, tend to spray fragments of metal from the front of the cylinder.
Repeating rifle
The Winchester repeating rifle was invented in 1866. The firer pulled on a lever to reload the rifle with a stored cartridge.
Cartridge storage
An important area of development was the way that cartridges were stored and used in firearms. The Spencer repeating rifle was a breech-loading manually-operated lever-action rifle that was adopted by the United States. Over 20,000 were used during the American Civil War. It was the first adoption of a removable magazine-fed infantry rifle. The design was completed by Christopher Spencer in 1860. It used copper rimfire cartridges stored in a removable seven-round tube magazine, enabling the rounds to be fired one after another. A rifleman could exchange an emptied magazine for another.
Modern
In the Russo-Japanese War of 1904–1905, military observers from Europe and the United States witnessed a major conflict fought with high velocity bolt-action rifles firing smokeless powder. The Battle of Mukden fought in 1905 consisted of nearly 343,000 Russian troops against over 281,000 Japanese troops. The Russian Mosin–Nagant Model 1891 in 7.62 mm was pitted against the Japanese Arisaka Type 30 bolt-action rifle in 6.5 mm; both had velocities well over .
Until the late 19th century rifles tended to be very long, some long rifles reaching approximately in length to maximize accuracy, making early rifles impractical for use by cavalry. However, following the advent of more powerful smokeless powder, a shorter barrel did not impair accuracy as much. As a result, cavalry saw limited, but noteworthy, usage in 20th-century conflicts.
The advent of the massed, rapid firepower of the machine gun, submachine gun and rifled artillery was so quick as to outstrip the development of any way to attack a trench defended by riflemen and machine gunners. The carnage of World War I was perhaps the greatest vindication and vilification of the rifle as a military weapon.
The M1 Garand was a semi-automatic rapid-fire rifle developed for modern warfare use in World War II.
During and after World War II it became accepted that most infantry engagements occurred at ranges of less than 300 m; the range and power of the large full-powered rifle cartridges were "overkill", requiring weapons heavier than otherwise necessary. This led to Germany's development of the 7.92×33mm (short) round, the MKb-42, and ultimately, the assault rifle. Today, an infantryman's rifle is optimized for ranges of 300 m or less, and soldiers are trained to deliver individual rounds or bursts of fire within these distances. Typically, the application of accurate, long-range fire is the domain of the marksman and the sniper in warfare, and of enthusiastic target shooters in peacetime. The modern marksman rifle and sniper rifle are usually capable of accuracy better than 0.3 mrad at 100 yards (1 arcminute).
3D printed rifle
The Grizzly is a 3D printed .22-caliber rifle created around August 2013. It was created using a Stratasys Dimension 1200es printer. It was created by a Canadian only known by the pseudonym "Matthew" who told The Verge that he was in his late 20s, and his main job was making tools for the construction industry.
The original Grizzly fired a single shot before breaking. Grizzly 2.0 fired fourteen bullets before getting damaged due to the strain.
In October 2020, another 3D-printed 9mm rifle known as the "FGC-9mm" was created. It is reported that it can be made in 2 weeks with $500 of tools. A second model was later made in April 2021.
Youth rifle
A youth rifle is a rifle designed or modified to fit children or other small-framed shooters. A youth rifle is often a single-shot .22 caliber rifle, or a bolt-action rifle, although some youth rifles are semi-automatic. They are usually very light, with a greatly shortened length of pull, which is necessary to accommodate children. Youth stocks are available for many popular rifles, such as the Ruger 10/22, a semi-automatic .22 LR rifle, allowing a youth rifle to be made from a standard rifle by simply changing the stock. Shooters of such rifles are typically about age 5 and older.
Technical aspects
Rifling
The usual form of rifling was helical grooves in a round bore.
Some early rifled firearms had barrels with a twisted polygonal bore. The Whitworth rifle was the first such type designed to spin the round for accuracy. Bullets for these guns were made to match the shape of the bore so the bullet would grip the rifle bore and take a spin that way. These were generally large caliber weapons, and the ammunition still did not fit tightly in the barrel. Many different shapes and degrees of spiraling were used in experimental designs. One widely produced example was the Metford rifling in the Pattern 1888 Lee–Metford service rifle. Although uncommon, polygonal rifling is still used in some weapons today, one example being the Glock line of pistols (which fire standard bullets). Many of the early designs were prone to dangerous backfiring, which could lead to the destruction of the weapon and serious injury to the person firing it.
Barrel wear
As the bullet enters the barrel, it engages the rifling, a process that gradually wears down the barrel and also causes it to heat up more rapidly. Therefore, some machine guns are equipped with quick-change barrels that can be swapped every few thousand rounds, or in earlier designs, were water-cooled. Unlike older carbon steel barrels, which were limited to around 1,000 shots before the extreme heat caused accuracy to fade, modern stainless steel barrels for target rifles are much more resistant to wear, allowing many thousands of rounds to be fired before accuracy drops. (Many shotguns and small arms have chrome-lined barrels to reduce wear and enhance corrosion resistance. This is rare on rifles designed for extreme accuracy, as the plating process is difficult and liable to reduce the effect of the rifling.) Modern ammunition has a hardened lead core with a softer outer cladding or jacket, typically of an alloy of copper and nickel – cupro-nickel. Some ammunition is coated with molybdenum disulfide to further reduce internal friction – the so-called 'moly-coated' bullet.
Rate of fire
Rifles were initially single-shot, muzzle-loading weapons. During the 18th century, breech-loading weapons were designed, which allowed the rifleman to reload while under cover, but defects in manufacturing and the difficulty in forming a reliable gas-tight seal prevented widespread adoption. During the 19th century, multi-shot repeating rifles using lever, pump or linear bolt actions became standard, further increasing the rate of fire and minimizing the fuss involved in loading a firearm. The problem of proper seal creation had been solved with the use of brass cartridge cases, which expanded in an elastic fashion at the point of firing and effectively sealed the breech while the pressure remained high, then relaxed back enough to allow for easy removal. By the end of the 19th century, the leading bolt-action design was that of Paul Mauser, whose action—wedded to a reliable design possessing a five-shot magazine—became a world standard through two world wars and beyond. The Mauser rifle was paralleled by Britain's ten-shot Lee–Enfield and America's 1903 Springfield Rifle models. The American M1903 closely copied Mauser's original design.
Range
Barrel rifling dramatically increased the range and accuracy of the musket. Indeed, throughout its development, the rifle's history has been marked by increases in range and accuracy. From the Minié rifle and beyond, the rifle has become ever more potent at long-range strikes.
In recent decades, large-caliber anti-materiel rifles, typically firing between 12.7 mm and 20 mm caliber cartridges, have been developed. The US Barrett M82A1 is probably the best-known such rifle. A second example is the AX50 by Accuracy International. These weapons are typically used to strike critical, vulnerable targets such as computerized command and control vehicles, radio trucks, radar antennae, vehicle engine blocks and the jet engines of enemy aircraft. Anti-materiel rifles can be used against human targets, but the much higher weight of rifle and ammunition, and the massive recoil and muzzle blast, usually make them less than practical for such use. The Barrett M82 is designed with a maximum effective range of , although it has a confirmed kill distance of in Afghanistan during Operation Anaconda in 2002. The record for the longest confirmed kill shot stands at , set by an unnamed soldier with Canada's elite special operations unit Joint Task Force 2 using a McMillan TAC-50 sniper rifle.
Bullet rotational speed (RPM)
Bullets leaving a rifled barrel can spin at a rotational speed of over 100,000 revolutions per minute (rpm) (or over about 1.67 kilohertz, since 1 RPM = 1/60 Hz). The rotational speed depends both on the muzzle velocity of the bullet and on the pitch of the rifling. Excessive rotational speed can exceed the bullet's design limits; the centripetal force holding the bullet together is then inadequate, and the bullet can disintegrate radially. The rotational speed of the bullet can be calculated using the formulas below.
MV/ twist rate = rotational speed
Using metric units, the formula divides the number of millimeters in a meter (1000) by the barrel twist in millimeters (the length of travel along the barrel per full rotation). This number is then multiplied by the muzzle velocity in meters per second (m/s) and the number of seconds in a minute (60).
MV (in m/s) × (1000 mm /twist) × 60 s/min = Bullet RPM
For example, using a barrel that has a twist rate of 190 mm with a muzzle velocity of 900 m/s:
900 m/s × (1000 mm /(190 mm)) × 60 s/min = 284 210 RPM
Using imperial units, the formula divides the number of inches in a foot (12) by the rate of twist of the barrel. This number is multiplied by the muzzle velocity (MV) and the number of seconds in a minute (60). For example, a bullet with a muzzle velocity of 3,000 ft/s leaving a barrel that twists once per foot (1 turn in 12") would rotate at 180,000 rpm.
MV (in fps) × (12 in. /twist rate) × 60 s/min. = Bullet RPM
For example, using a barrel that has a twist rate of 1 turn in 8" with a muzzle velocity of 3000 ft/s:
3000 fps × (12"/(8"/rotation)) × 60 s/min. = 270,000 RPM
Caliber
Rifles may be chambered in a variety of calibers (bullet or barrel diameters), from as low as 4.4 mm (.17 inch) varmint calibers to as high as 20 mm (.80 caliber) in the case of the largest anti-tank rifles. The term caliber essentially refers to the width of the bullet fired through a rifle's barrel. Armies have consistently attempted to find and procure the most lethal and accurate caliber for their firearms.
The standard calibers used by the world's militaries tend to follow worldwide trends. These trends have significantly changed during the centuries of firearm design and re-design. Muskets were normally chambered for large calibers, such as .50 or .59 (12.7 mm or 15 mm), with the theory that these large bullets caused the most damage.
During World War I and II, most rifles were chambered in .30 caliber (7.62 mm), a combination of power and speed. Examples would be the .303 British Lee–Enfield, the American M1903 .30-06, and the German 8mm Mauser K98.
An exception was the Italian Modello 91 rifle, which used the 6.5×52mm Mannlicher–Carcano cartridge.
Detailed study of infantry combat during and after World War II revealed that most small-arms engagements occurred within 100 meters, meaning that the power and range of the traditional .30-caliber weapons (designed for engagements at 500 meters and beyond) were essentially wasted. The single greatest predictor of an individual soldier's combat effectiveness was the number of rounds he fired. Weapons designers and strategists realized that service rifles firing smaller-caliber projectiles would allow troops to carry far more ammunition for the same weight. The lower recoil and more generous magazine capacities of small-caliber weapons also allow troops a much greater volume of fire, compared to historical battle rifles. Smaller, faster traveling, less stable projectiles have also demonstrated greater terminal ballistics and therein, a greater lethality than traditional .30-caliber rounds. Most modern service rifles fire a projectile of approximately 5.56 mm. Examples of firearms in this range are the American 5.56 mm M16 and the Russian 5.45×39mm AK-74.
Types of rifle
By mechanism
Air gun
Spring-piston
Break barrel
Fixed barrel
Underlever
Sidelever
Overlever
Pneumatic (internal pressure reservoir)
Pump pneumatic, either single-stroke or multi-stroke
Pre-charged pneumatic (PCP)
Compressed gas (external pressure reservoir)
CO2
High pressure air (HPA)
Firearm
Single-shot
Muzzle-loading rifle, some flintlock and mostly caplock
Breech-loading rifle
Breechblock rifle, either trapdoor, rolling, dropping, tilting or screwed
Break-action rifle
Double rifle (list)
Repeating
Manual
Revolving rifle
Lever-action rifle, e.g. Winchester rifle, Spencer rifle
Pump-action rifle, e.g. Colt Lightning Carbine
Bolt-action rifle
Turn-pull, e.g. Mauser G98, Lee–Enfield, Mosin-Nagant
Straight-pull, e.g. Ross rifle, K31, Mannlicher M1895, Blaser R93/R8
Bolt-release rifle, also known as lever-release rifle, e.g. Verney-Carron SpeedLine
Self-loading
Semi-automatic rifle
Automatic rifle
Selective-fire rifle
By usage
Military and law enforcement
Anti-materiel rifle
Anti-tank rifle
Long rifle
Personal defense weapon
Precision rifle
Designated marksman rifle
Sniper rifle (list)
Scout rifle
Service rifle
Assault rifle (list)
Battle rifle (list)
Carbine (list)
Civilian
Hunting rifle
Buffalo rifle
Elephant rifle
Express rifle
Punt gun
Varmint rifle
Match/target rifle
Benchrest rifle
Modern sporting rifle
Short-barreled rifle
Varmint rifle
| Technology | Projectile weapons | null |
25484 | https://en.wikipedia.org/wiki/Rosemary | Rosemary | Salvia rosmarinus, commonly known as rosemary, is a shrub with fragrant, evergreen, needle-like leaves and white, pink, purple, or blue flowers. It is native to the Mediterranean region, as well as Portugal and Spain. Until 2017, it was known by the scientific name Rosmarinus officinalis, now a synonym.
It is a member of the sage family Lamiaceae, which includes many other medicinal and culinary herbs. The name rosemary derives from the Latin ros marinus ('dew of the sea'). Rosemary has a fibrous root system.
Description
Rosemary is an aromatic evergreen shrub with leaves similar to Tsuga needles. It is native to the Mediterranean region, but is reasonably hardy in cool climates. Special cultivars like 'Arp' can withstand winter temperatures down to about . It can withstand droughts, surviving a severe lack of water for lengthy periods. It is considered a potentially invasive species and its seeds are often difficult to start, with a low germination rate and relatively slow growth, but the plant can live as long as 35 years.
Forms range from upright to trailing; the upright forms can reach between tall. The leaves are evergreen, long and broad, green above, and white below, with dense, short, woolly hair.
The plant flowers in spring and summer in temperate climates, but the plants can be in constant bloom in warm climates; flowers are white, pink, purple or deep blue. The branches are dotted with groups of 2 to 3 flowers down its length. Rosemary also has a tendency to flower outside its normal flowering season; it has been known to flower as late as early December, and as early as mid-February (in the Northern Hemisphere).
Taxonomy
Salvia rosmarinus is now considered one of many hundreds of species in the genus Salvia. Formerly it was placed in a much smaller genus, Rosmarinus, which contained only two to four species including R. officinalis, which is now considered a synonym of S. rosmarinus. Salvia jordanii (formerly Rosmarinus eriocalyx) is a closely related species native to Iberia and the Maghreb of Africa. Both the original and current genus names of the species were applied by the 18th-century naturalist and founding taxonomist Carl Linnaeus. Elizabeth Kent noted in her Flora Domestica (1823), "The botanical name of this plant is compounded of two Latin words, signifying Sea-dew; and indeed Rosemary thrives best by the sea."
Distribution
It is native to the Mediterranean region, as well as Portugal and northwestern Spain. It was first mentioned on cuneiform stone tablets as early as 5000 BCE. The herb was naturalized in China as early as 220 CE, during the late Han dynasty.
Rosemary came to England at an unknown date, though it is likely that the Romans brought it when they invaded Britain in 43 CE. Even so, there are no viable records containing rosemary in Britain until the 8th century CE. This mention was in a document which was later credited to Charlemagne, who promoted the general usage of herbs and ordered rosemary specifically to be grown in monastic gardens and farms.
There are no records of rosemary being properly naturalized in Britain until 1338, when cuttings were sent to Queen Philippa by her mother, Countess Joan of Hainault. It was then planted in the garden of the old palace of Westminster. Since then, rosemary can be found in most English herbal texts.
Rosemary finally arrived in the Americas with early European settlers in the beginning of the 17th century, and was soon spread to South America and distributed globally.
Cultivation
Since it is attractive and drought-tolerant, rosemary is used as an ornamental plant in gardens and for xeriscape landscaping, especially in regions of Mediterranean climate. It is considered easy to grow and pest-resistant. Rosemary can grow quite large and retain attractiveness for many years, can be pruned into formal shapes and low hedges, and has been used for topiary. It is easily grown in pots. The groundcover cultivars spread widely, with a dense and durable texture.
Before harvesting, the bush should be allowed to mature for 2–3 years so that it is large enough to withstand the loss of foliage. No more than 20% of the growth should be harvested at a time in order to preserve the plant.
Cultivars
Numerous cultivars have been selected for garden use.
'Albus' – white flowers
'Arp' – leaves light green, lemon-scented and especially cold-hardy
'Aureus' – leaves speckled yellow
'Benenden Blue' – leaves narrow, dark green
'Blue Boy' – dwarf, small leaves
'Blue Rain' – pink flowers
'Golden Rain' – leaves green, with yellow streaks
'Gold Dust' – dark green leaves, with golden streaks but stronger than 'Golden Rain'
'Haifa' – low and small, white flowers
'Irene' – low and lax, trailing, intense blue flowers
'Lockwood de Forest' – procumbent selection from 'Tuscan Blue'
'Ken Taylor' – shrubby
'Majorica Pink' – pink flowers
'Miss Jessopp's Upright' – distinctive tall fastigiate form, with wider leaves.
'Pinkie' – pink flowers
'Prostratus' – lower groundcover
'Pyramidalis' (or 'Erectus') – fastigiate form, pale blue flowers
'Remembrance' (or 'Gallipoli') – taken from the Gallipoli Peninsula
'Roseus' – pink flowers
'Salem' – pale blue flowers, cold-hardy similar to 'Arp'
'Severn Sea' – spreading, low-growing, with arching branches, flowers deep violet
'Sudbury Blue' – blue flowers
'Tuscan Blue' – traditional robust upright form
'Wilma's Gold' – yellow leaves
The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
'Benenden Blue'
'Miss Jessopp's Upright'
'Severn Sea'
'Sissinghurst Blue'
Uses
Aside from its use in the fragrance industry, rosemary is not only grown as a decorative plant in gardens but is also cultivated for practical applications, such as medicine and cooking. When the plant is fully grown, the leaves, twigs, and flowering apices are harvested for use in these areas. The leaves are used to flavor various foods, such as stuffing and roasted meats. Rosemary, along with holly and ivy, was commonly used for Christmas decorations in the 17th century.
Culinary
Rosemary leaves are used as a flavoring in foods, such as stuffing and roasted lamb, pork, chicken, and turkey. Fresh or dried leaves are used in traditional Mediterranean cuisine. They have a bitter, astringent taste and a characteristic aroma which complements many cooked foods. Herbal tea can be made from the leaves. When roasted with meats or vegetables, the leaves impart a mustard-like aroma with an additional fragrance of charred wood that goes well with barbecued foods.
In amounts typically used to flavor foods, such as one teaspoon (1 gram), rosemary provides no nutritional value. Rosemary extract has been shown to improve the shelf life and heat stability of omega-3-rich oils, which are prone to rancidity. Rosemary has also been studied for its antimicrobial properties.
Fragrance
Hungary water, which dates to the 14th century, was one of the first alcohol-based perfumes in Europe, and was primarily made from distilled rosemary.
Rosemary oil is used for purposes of fragrant bodily perfumes or to emit an aroma into a room; it is also burnt as incense, and used in shampoos and cleaning products.
Phytochemicals
Rosemary contains a number of phytochemicals, including rosmarinic acid, camphor, caffeic acid, ursolic acid, betulinic acid, carnosic acid, and carnosol. Rosemary essential oil contains 10–20% camphor.
Rosemary extract, specifically the type mainly consisting of carnosic acid and carnosol, is approved as a food antioxidant preservative in several countries. The E number is E392.
For hair growth
Some research suggests that rosemary oil may help stimulate hair growth in some cases. One study investigating the clinical efficacy of rosemary oil in the treatment of androgenetic alopecia, comparing its effects with minoxidil 2% (a current standard-of-care medication), found no significant difference in hair count between the rosemary oil and minoxidil groups at either month 3 or month 6 of treatment. The frequencies of dry hair, greasy hair, and dandruff were not significantly different from baseline at either the 3- or 6-month assessment in either group. The frequency of scalp itching at the 3- and 6-month assessments was significantly higher than at baseline in both groups, but it was more frequent in the minoxidil group at both endpoints.
In culture
Rosemary was considered sacred to ancient Egyptians, Romans, and Greeks. In Don Quixote (Part One, Chapter XVII), the fictional hero uses rosemary in his recipe for balm of fierabras. It was written about by Pliny the Elder (23–79 CE) and Pedanius Dioscorides (c. 40 CE to c. 90 CE), a Greek botanist (amongst other things). The latter talked about rosemary in his most famous writing, De Materia Medica, one of the most influential herbal books in history.
The plant has been used as a symbol for remembrance during war commemorations and funerals in Europe and Australia. Mourners would throw it into graves as a symbol of remembrance for the dead.
In Australia, sprigs of rosemary are worn on ANZAC Day and sometimes Remembrance Day to signify remembrance; the herb grows wild on the Gallipoli Peninsula, where many Australians died during World War I.
Several Shakespeare plays refer to the use of rosemary in burial or memorial rites. In Shakespeare's Hamlet, Ophelia says, "There's rosemary, that's for remembrance. Pray you, love, remember." It likewise appears in Shakespeare's Winter's Tale in Act 4 Scene 4, where Perdita talks about "Rosemary and Rue". In Act 4 Scene 5 of Romeo and Juliet, Friar Lawrence admonishes the Capulet household to "stick your rosemary on this fair corse, and as the custom is, and in her best array, bear her to church." It is also said that "In the language of flowers it means 'fidelity in love.'"
In the Spanish fairy tale The Sprig of Rosemary, the heroine touches the hero with the titular sprig of rosemary in order to restore his magically lost memory.
Rosemary is very important in Danube Swabian culture, being used for christenings, weddings, burials and festivals; for example, an apple with a sprig of rosemary in it is present at Kirchweih celebrations.
| Biology and health sciences | Herbs and spices | Plants |
25486 | https://en.wikipedia.org/wiki/Rosales | Rosales | Rosales are an order of flowering plants. Well-known members of Rosales include: roses, strawberries, blackberries and raspberries, apples and pears, plums, peaches and apricots, almonds, rowan and hawthorn, jujube, elms, banyans, figs, mulberries, breadfruit, nettles, hops, and cannabis.
Rosales contain about 7,700 species, distributed into nine families and about 260 genera. Their type family is the rose family, Rosaceae. The largest families are Rosaceae (91 genera, 4,828 species) and Urticaceae (53 genera, 2,625 species).
Taxonomy
The order Rosales is strongly supported as monophyletic in phylogenetic analyses of DNA sequences, such as those carried out by members of the Angiosperm Phylogeny Group. In their APG III system of plant classification, they defined Rosales as consisting of the nine families:
Barbeyaceae
Cannabaceae (hemp family)
Dirachmaceae
Elaeagnaceae (oleaster/Russian olive family)
Moraceae (mulberry family)
Rhamnaceae (buckthorn family)
Rosaceae (rose family)
Ulmaceae (elm family)
Urticaceae (nettle family)
In the older classification system of Dahlgren the Rosales were in the superorder Rosiflorae (also called Rosanae). In the obsolete Cronquist system, the order Rosales was highly polyphyletic. It consisted of the family Rosaceae and 23 other families that are now placed in various other orders. These families and their placement in the APG III system are:
Alseuosmiaceae (Asterales)
Anisophylleaceae (Cucurbitales)
Brunelliaceae (Oxalidales)
Bruniaceae (Bruniales)
Byblidaceae (Lamiales)
Cephalotaceae (Oxalidales)
Chrysobalanaceae (Malpighiales)
Columelliaceae (Bruniales)
Connaraceae (Oxalidales)
Crassulaceae (Saxifragales)
Crossosomataceae (Crossosomatales)
Cunoniaceae (Oxalidales)
Davidsoniaceae (Cunoniaceae, Oxalidales)
Dialypetalanthaceae (Rubiaceae, Gentianales)
Eucryphiaceae (Cunoniaceae, Oxalidales)
Greyiaceae (Melianthaceae, Geraniales)
Grossulariaceae (Saxifragales)
Hydrangeaceae (Cornales)
Neuradaceae (Malvales)
Pittosporaceae (Apiales)
Rhabdodendraceae (Caryophyllales)
Rosaceae
Saxifragaceae (Saxifragales)
Surianaceae (Fabales)
Phylogeny
The relationships of Rosales families were resolved in a molecular phylogenetic study based on two nuclear genes and ten chloroplast genes.
The order Rosales is divided into three clades that have never been assigned a taxonomic rank. The basal clade consists of the family Rosaceae; another clade consists of four families, including Rhamnaceae; and the third clade consists of the four urticalean families.
The order is a sister to a clade consisting of Fagales and Cucurbitales.
Distribution
Plants in the order Rosales grow in many different parts of the world; they can be found in the mountains, the tropics and the Arctic. Although members of the order occur nearly everywhere, the individual families are concentrated in different geographical regions. The majority of the families in the order, including Moraceae, Ulmaceae, and Urticaceae, are wind-pollinated.
Importance
Within the order Rosales is the family Rosaceae, which includes numerous species that are cultivated for their fruit, making this one of the most economically important families of plants. Fruit produced by members of this family include apples, pears, plums, peaches, cherries, almonds, strawberries, blackberries and raspberries. Many ornamental species of plant are also in the family Rosaceae, including the rose after which the family and order were named. The rose, considered a symbol of love in many cultures, is featured prominently in poetry and literature. Modern garden varieties of roses such as hybrid teas, floribundas, and grandifloras originated from complex hybrids of several separate wild species native to different regions of Eurasia.
The Moraceae also produce important fruits, such as mulberries, figs, jackfruits, and breadfruits, and the leaves of the mulberry provide food for the silkworms used in commercial silk production.
The wood from the black cherry (Prunus serotina) and sweet cherry (P. avium) is used to make high quality furniture due to its color and ability to be bent. The Cannabis plant has been highly prized for millennia for its hemp, which has numerous uses. Other varieties of Cannabis are grown as a drug.
Plants in the order Rosales were used in the traditional medicines of many cultures. Medical cannabis has been recognized for its pharmaceutical use. The latex of some species of fig trees contains the enzyme ficin, which is effective in killing roundworms that infect the intestinal tracts of animals.
| Biology and health sciences | Rosales | Plants |
25524 | https://en.wikipedia.org/wiki/Research | Research | Research is "creative and systematic work undertaken to increase the stock of knowledge". It involves the collection, organization, and analysis of evidence to increase understanding of a topic, characterized by a particular attentiveness to controlling sources of bias and error. A research project may be an expansion of past work in the field. To test the validity of instruments, procedures, or experiments, research may replicate elements of prior projects or the project as a whole.
The primary purposes of basic research (as opposed to applied research) are documentation, discovery, interpretation, and the research and development (R&D) of methods and systems for the advancement of human knowledge. Approaches to research depend on epistemologies, which vary considerably both within and between humanities and sciences. There are several forms of research: scientific, humanities, artistic, economic, social, business, marketing, practitioner research, life, technological, etc. The scientific study of research practices is known as meta-research.
A researcher is a person who conducts research, especially in order to discover new information or to reach a new understanding. To be a social researcher or social scientist, one should have extensive knowledge of the subjects within social science in which one specializes. Similarly, to be a natural science researcher, one should have knowledge of fields related to natural science (physics, chemistry, biology, astronomy, zoology and so on). Professional associations provide one pathway to mature in the research profession.
Etymology
The word research is derived from the Middle French "recherche", which means "to go about seeking", the term itself being derived from the Old French term "recerchier," a compound word from "re-" + "cerchier", or "sercher", meaning 'search'. The earliest recorded use of the term was in 1577.
Definitions
Research has been defined in a number of different ways, and while there are similarities, there does not appear to be a single, all-encompassing definition that is embraced by all who engage in it.
Research, in its simplest terms, is searching for knowledge and searching for truth. In a formal sense, it is a systematic study of a problem attacked by a deliberately chosen strategy. That strategy starts with choosing an approach and preparing a blueprint (design); it proceeds by acting on that design, which involves framing research hypotheses, choosing methods and techniques, selecting or developing data-collection tools, processing the data, and interpreting it; and it ends with presenting solution(s) to the problem.
Another definition of research is given by John W. Creswell, who states that "research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue". It consists of three steps: pose a question, collect data to answer the question, and present an answer to the question.
The Merriam-Webster Online Dictionary defines research more generally to also include studying already existing knowledge: "studious inquiry or examination; especially: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws".
Forms of research
Original research
Original research, also called primary research, is research that is not exclusively based on a summary, review, or synthesis of earlier publications on the subject of research. This material is of a primary-source character. The purpose of the original research is to produce new knowledge rather than present the existing knowledge in a new form (e.g., summarized or classified). Original research can take various forms, depending on the discipline it pertains to. In experimental work, it typically involves direct or indirect observation of the researched subject(s), e.g., in the laboratory or in the field, documents the methodology, results, and conclusions of an experiment or set of experiments, or offers a novel interpretation of previous results. In analytical work, there are typically some new (for example) mathematical results produced or a new way of approaching an existing problem. In some subjects which do not typically carry out experimentation or analysis of this kind, the originality is in the particular way existing understanding is changed or re-interpreted based on the outcome of the work of the researcher.
The degree of originality of the research is among the major criteria for articles to be published in academic journals and usually established by means of peer review. Graduate students are commonly required to perform original research as part of a dissertation.
Scientific research
Scientific research is a systematic way of gathering data and harnessing curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world. It makes practical applications possible. Scientific research may be funded by public authorities, charitable organizations, and private organizations. Scientific research can be subdivided by discipline.
Generally, research is understood to follow a certain structural process. Though the order may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied:
Observations and formation of the topic: Consists of the subject area of one's interest and following that subject area to conduct subject-related research. The subject area should not be randomly chosen since it requires reading a vast amount of literature on the topic to determine the gap in the literature the researcher intends to narrow. A keen interest in the chosen subject area is advisable. The research will have to be justified by linking its importance to already existing knowledge about the topic.
Hypothesis: A testable prediction which designates the relationship between two or more variables.
Conceptual definition: Description of a concept by relating it to other concepts.
Operational definition: Details in regards to defining the variables and how they will be measured/assessed in the study.
Gathering of data: Consists of identifying a population and selecting samples, gathering information from or about these samples by using specific research instruments. The instruments used for data collection must be valid and reliable.
Analysis of data: Involves breaking down the individual pieces of data to draw conclusions about it.
Data Interpretation: This can be represented through tables, figures, and pictures, and then described in words.
Test, revising of hypothesis
Conclusion, reiteration if necessary
A common misconception is that a hypothesis will be proven (see, rather, null hypothesis). Generally, a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, then the hypothesis is rejected (see falsifiability). However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true.
A useful hypothesis allows prediction and within the accuracy of observation of the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction. In this case, a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. Researchers can also use a null hypothesis, which states no relationship or difference between the independent or dependent variables.
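As a rough, minimal sketch of this reject/fail-to-reject logic (not part of the article; it assumes a Python environment with SciPy installed and uses entirely hypothetical measurements), a two-sample test of a null hypothesis of "no difference between groups" might look like:

```python
# Minimal illustration with hypothetical data: test the null hypothesis that
# two groups have equal means. We either reject it or fail to reject it;
# we never "prove" the alternative hypothesis.
from scipy import stats

control = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8]     # hypothetical measurements
treatment = [5.6, 5.8, 5.5, 5.9, 5.7, 5.4]   # hypothetical measurements

t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis of equal means")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")
```

A small p-value here only indicates that the observed data would be unlikely if the null hypothesis were true; as noted above, it does not prove the researcher's hypothesis.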
Research in the humanities
Research in the humanities involves different methods such as for example hermeneutics and semiotics. Humanities scholars usually do not search for the ultimate correct answer to a question, but instead, explore the issues and details that surround it. Context is always important, and context can be social, historical, political, cultural, or ethnic. An example of research in the humanities is historical research, which is embodied in historical method. Historians use primary sources and other evidence to systematically investigate a topic, and then to write histories in the form of accounts of the past. Other studies aim to merely examine the occurrence of behaviours in societies and communities, without particularly looking for reasons or motivations to explain these. These studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory.
Artistic research
Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is a debated body of thought which offers an alternative to purely scientific methods in the search for knowledge and truth.
The controversial trend of artistic teaching becoming more academics-oriented is leading to artistic research being accepted as the primary mode of enquiry in art as in the case of other disciplines. One of the characteristics of artistic research is that it must accept subjectivity as opposed to the classical scientific methods. As such, it is similar to the social sciences in using qualitative research and intersubjectivity as tools to apply measurement and critical analysis.
Artistic research has been defined by the School of Dance and Circus (Dans och Cirkushögskolan, DOCH), Stockholm in the following manner – "Artistic research is to investigate and test with the purpose of gaining knowledge within and for our artistic disciplines. It is based on artistic practices, methods, and criticality. Through presented documentation, the insights gained shall be placed in a context." Artistic research aims to enhance knowledge and understanding with presentation of the arts. A simpler understanding by Julian Klein defines artistic research as any kind of research employing the artistic mode of perception. For a survey of the central problematics of today's artistic research, see Giaco Schiesser.
According to artist Hakan Topal, in artistic research, "perhaps more so than other disciplines, intuition is utilized as a method to identify a wide range of new and unexpected productive modalities". Most writers, whether of fiction or non-fiction books, also have to do research to support their creative work. This may be factual, historical, or background research. Background research could include, for example, geographical or procedural research.
The Society for Artistic Research (SAR) publishes the triannual Journal for Artistic Research (JAR), an international, online, open access, and peer-reviewed journal for the identification, publication, and dissemination of artistic research and its methodologies, from all arts disciplines and it runs the Research Catalogue (RC), a searchable, documentary database of artistic research, to which anyone can contribute.
Patricia Leavy addresses eight arts-based research (ABR) genres: narrative inquiry, fiction-based research, poetry, music, dance, theatre, film, and visual art.
In 2016, the European League of Institutes of the Arts launched The Florence Principles' on the Doctorate in the Arts. The Florence Principles relating to the Salzburg Principles and the Salzburg Recommendations of the European University Association name seven points of attention to specify the Doctorate / PhD in the Arts compared to a scientific doctorate / PhD. The Florence Principles have been endorsed and are supported also by AEC, CILECT, CUMULUS and SAR.
Historical research
The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history. There are various history guidelines that are commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis. This includes lower criticism and sensual criticism. Though items may vary depending on the subject matter and researcher, the following concepts are part of most formal historical research:
Identification of origin date
Evidence of localization
Recognition of authorship
Analysis of data
Identification of integrity
Attribution of credibility
Documentary research
Steps in conducting research
Research is often conducted using the hourglass model structure of research. The hourglass model starts with a broad spectrum for research, focusing in on the required information through the method of the project (like the neck of the hourglass), then expands the research in the form of discussion and results. The major steps in conducting research are:
Identification of research problem
Literature review
Specifying the purpose of research
Determining specific research questions
Specification of a conceptual framework, sometimes including a set of hypotheses
Choice of a methodology (for data collection)
Data collection
Verifying data
Analyzing and interpreting the data
Reporting and evaluating research
Communicating the research findings and, possibly, recommendations
The steps generally represent the overall process; however, they should be viewed as an ever-changing iterative process rather than a fixed set of steps. Most research begins with a general statement of the problem, or rather, the purpose for engaging in the study. The literature review identifies flaws or holes in previous research which provides justification for the study. Often, a literature review is conducted in a given subject area before a research question is identified. A gap in the current literature, as identified by a researcher, then engenders a research question. The research question may be parallel to the hypothesis. The hypothesis is the supposition to be tested. The researcher(s) collects data to test the hypothesis. The researcher(s) then analyzes and interprets the data via a variety of statistical methods, engaging in what is known as empirical research. The results of the data analysis in rejecting or failing to reject the null hypothesis are then reported and evaluated. At the end, the researcher may discuss avenues for further research. However, some researchers advocate for the reverse approach: starting with articulating findings and discussion of them, moving "up" to identification of a research problem that emerges in the findings and literature review. The reverse approach is justified by the transactional nature of the research endeavor where research inquiry, research questions, research method, relevant research literature, and so on are not fully known until the findings have fully emerged and been interpreted.
Rudolph Rummel says, "... no researcher should accept any one or two tests as definitive. It is only when a range of tests are consistent over many kinds of data, researchers, and methods can one have confidence in the results."
Plato in Meno talks about an inherent difficulty, if not a paradox, of doing research that can be paraphrased in the following way, "If you know what you're searching for, why do you search for it?! [i.e., you have already found it] If you don't know what you're searching for, what are you searching for?!"
Research methods
The goal of the research process is to produce new knowledge or deepen understanding of a topic or issue. This process takes three main forms (although, as previously discussed, the boundaries between them may be obscure):
Exploratory research, which helps to identify and define a problem or question.
Constructive research, which tests theories and proposes solutions to a problem or question.
Empirical research, which tests the feasibility of a solution using empirical evidence.
There are two major types of empirical research design: qualitative research and quantitative research. Researchers choose qualitative or quantitative methods according to the nature of the research topic they want to investigate and the research questions they aim to answer:
Qualitative research
Qualitative research is much more subjective than quantitative research and uses different methods of collecting, analyzing, and interpreting data in search of meanings, definitions, characteristics, symbols, and metaphors. Qualitative research is further classified into several types. Ethnography focuses on the culture of a group of people, including shared attributes, language, practices, structure, values, norms, and material things, and evaluates human lifestyles ("ethno" meaning people and "grapho" meaning to write); it may cover ethnic groups, ethnogenesis, composition, resettlement, and social-welfare characteristics. Phenomenology is a powerful strategy for demonstrating methodology to health-professions education and is well suited for exploring challenging problems in that field. In addition, PMP researcher Mandy Sha argued that a project-management approach is necessary to control the scope, schedule, and cost related to qualitative research design, participant recruitment, data collection, reporting, and stakeholder engagement.
Quantitative research
This involves systematic empirical investigation of quantitative properties and phenomena and their relationships, by asking a narrow question and collecting numerical data to analyze it utilizing statistical methods. The quantitative research designs are experimental, correlational, and survey (or descriptive). Statistics derived from quantitative research can be used to establish the existence of associative or causal relationships between variables. Quantitative research is linked with the philosophical and theoretical stance of positivism.
The quantitative data collection methods rely on random sampling and structured data collection instruments that fit diverse experiences into predetermined response categories. These methods produce results that can be summarized, compared, and generalized to larger populations if the data are collected using proper sampling and data collection strategies. Quantitative research is concerned with testing hypotheses derived from theory or being able to estimate the size of a phenomenon of interest.
If the research question is about people, participants may be randomly assigned to different treatments (this is the only way that a quantitative study can be considered a true experiment). If this is not feasible, the researcher may collect data on participant and situational characteristics to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants.
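The sketch below illustrates these two ingredients, probability (simple random) sampling from a sampling frame and random assignment to treatment or control. It uses entirely hypothetical names and sizes and is an illustration of the general idea, not a procedure taken from the text.

```python
# Illustrative sketch with hypothetical data: simple random sampling from a
# sampling frame, followed by random assignment of the sampled participants
# to a treatment or a control condition.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

sampling_frame = [f"person_{i}" for i in range(1000)]  # hypothetical population list
sample = random.sample(sampling_frame, 40)             # probability (simple random) sample

assignment = sample[:]
random.shuffle(assignment)
treatment_group = assignment[:20]  # randomly assigned to treatment
control_group = assignment[20:]    # randomly assigned to control

print(f"{len(treatment_group)} in treatment, {len(control_group)} in control")
```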
In either qualitative or quantitative research, the researcher(s) may collect primary or secondary data. Primary data is data collected specifically for the research, such as through interviews or questionnaires. Secondary data is data that already exists, such as census data, which can be re-used for the research. It is good ethical research practice to use secondary data wherever possible.
Mixed-method research, i.e. research that includes qualitative and quantitative elements, using both primary and secondary data, is becoming more common. This method has benefits that using one method alone cannot offer. For example, a researcher may choose to conduct a qualitative study and follow it up with a quantitative study to gain additional insights.
Big data has had a major impact on research methods: many researchers now put less effort into primary data collection, and methods for analysing the huge amounts of readily available data have also been developed.
Types of research method
1. Observational research method
2. Correlational research method
Non-empirical research
Non-empirical (theoretical) research is an approach that involves the development of theory as opposed to using observation and experimentation. As such, non-empirical research seeks solutions to problems using existing knowledge as its source. This, however, does not mean that new ideas and innovations cannot be found within the pool of existing and established knowledge. Non-empirical research is not an absolute alternative to empirical research because they may be used together to strengthen a research approach. Neither one is less effective than the other since they have their particular purpose in science. Typically empirical research produces observations that need to be explained; then theoretical research tries to explain them, and in so doing generates empirically testable hypotheses; these hypotheses are then tested empirically, giving more observations that may need further explanation; and so on. See Scientific method.
A simple example of a non-empirical task is the prototyping of a new drug using a differentiated application of existing knowledge; another is the development of a business process in the form of a flow chart and texts where all the ingredients are from established knowledge. Much of cosmological research is theoretical in nature. Mathematics research does not rely on externally available data; rather, it seeks to prove theorems about mathematical objects.
Research ethics
Problems in research
Meta-research
Meta-research is the study of research through the use of research methods. Also known as "research on research", it aims to reduce waste and increase the quality of research in all fields. Meta-research concerns itself with the detection of bias, methodological flaws, and other errors and inefficiencies. Among the findings of meta-research are low rates of reproducibility across a large number of fields. This widespread difficulty in reproducing research has been termed the "replication crisis."
Methods of research
In many disciplines, Western methods of conducting research are predominant. Researchers are overwhelmingly taught Western methods of data collection and study. The increasing participation of indigenous peoples as researchers has brought increased attention to the scientific lacuna in culturally sensitive methods of data collection. Western methods of data collection may not be the most accurate or relevant for research on non-Western societies. For example, "Hua Oranga" was created as a criterion for psychological evaluation in Māori populations, and is based on dimensions of mental health important to the Māori people – "taha wairua (the spiritual dimension), taha hinengaro (the mental dimension), taha tinana (the physical dimension), and taha whanau (the family dimension)".
Bias
Research is often biased in the languages that are preferred (linguicism) and the geographic locations where research occurs.
Periphery scholars face the challenges of exclusion and linguicism in research and academic publication. As the great majority of mainstream academic journals are written in English, multilingual periphery scholars often must translate their work to be accepted to elite Western-dominated journals. Multilingual scholars' influences from their native communicative styles can be assumed to be incompetence instead of difference.
For comparative politics, Western countries are over-represented in single-country studies, with heavy emphasis on Western Europe, Canada, Australia, and New Zealand. Since 2000, Latin American countries have become more popular in single-country studies. In contrast, countries in Oceania and the Caribbean are the focus of very few studies. Patterns of geographic bias also show a relationship with linguicism: countries whose official languages are French or Arabic are far less likely to be the focus of single-country studies than countries with different official languages. Within Africa, English-speaking countries are more represented than other countries.
Generalizability
Generalization is the process of more broadly applying the valid results of one study. Studies with a narrow scope can result in a lack of generalizability, meaning that the results may not be applicable to other populations or regions. In comparative politics, this can result from using a single-country study, rather than a study design that uses data from multiple countries. Despite the issue of generalizability, single-country studies have risen in prevalence since the late 2000s.
Publication peer review
Peer review is a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Usually, the peer review process involves experts in the same field who are consulted by editors to review the scholarly works produced by a colleague of theirs from an unbiased and impartial point of view, and this is usually done free of charge. The tradition of peer reviews being done for free has, however, brought many pitfalls, which also indicate why most peer reviewers decline many invitations to review. It has been observed that publications from periphery countries rarely rise to the same elite status as those of North America and Europe, because limited availability of resources, including high-quality paper, sophisticated image-rendering software, and printing tools, makes these publications less able to satisfy the standards that currently carry formal or informal authority in the publishing industry. These limitations in turn result in the under-representation of scholars from periphery nations among the publications holding prestige status, relative to the quantity and quality of those scholars' research efforts, and this under-representation in turn results in disproportionately reduced acceptance of the results of their efforts as contributions to the body of knowledge available worldwide.
Influence of the open-access movement
The open access movement assumes that all information generally deemed useful should be free and belongs to a "public domain", that of "humanity". This idea gained prevalence as a result of Western colonial history and ignores alternative conceptions of knowledge circulation. For instance, most indigenous communities consider that access to certain information proper to the group should be determined by relationships.
There is alleged to be a double standard in the Western knowledge system. On the one hand, "digital right management" used to restrict access to personal information on social networking platforms is celebrated as a protection of privacy, while simultaneously when similar functions are used by cultural groups (i.e. indigenous communities) this is denounced as "access control" and reprehended as censorship.
Future perspectives
Even though Western dominance seems to be prominent in research, some scholars, such as Simon Marginson, argue for "the need [for] a plural university world". Marginson argues that the East Asian Confucian model could take over the Western model.
This could be due to changes in funding for research both in the East and the West. Focused on emphasizing educational achievement, East Asian cultures, mainly in China and South Korea, have encouraged the increase of funding for research expansion. In contrast, in the Western academic world, notably in the United Kingdom as well as in some state governments in the United States, funding cuts for university research have occurred, which some say may lead to the future decline of Western dominance in research.
Neo-colonial approaches
Professionalisation
In several national and private academic systems, the professionalisation of research has resulted in formal job titles.
In Russia
In present-day Russia, and some other countries of the former Soviet Union, the term researcher (научный сотрудник, nauchny sotrudnik) has been used both as a generic term for a person who carries out scientific research and as a job position within the frameworks of the Academy of Sciences, universities, and other research-oriented establishments.
The following ranks are known:
Junior Researcher (Junior Research Associate)
Researcher (Research Associate)
Senior Researcher (Senior Research Associate)
Leading Researcher (Leading Research Associate)
Chief Researcher (Chief Research Associate)
Publishing
Academic publishing is a system that is necessary for academic scholars to peer review the work and make it available for a wider audience. The system varies widely by field and is also always changing, if often slowly. Most academic work is published in journal article or book form. There is also a large body of research that exists in either a thesis or dissertation form. These forms of research can be found in databases explicitly for theses and dissertations. In publishing, STM publishing is an abbreviation for academic publications in science, technology, and medicine.
Most established academic fields have their own scientific journals and other outlets for publication, though many academic journals are somewhat interdisciplinary, and publish work from several distinct fields or subfields. The kinds of publications that are accepted as contributions of knowledge or research vary greatly between fields, from the print to the electronic format. A study suggests that researchers should not give great consideration to findings that are not replicated frequently. It has also been suggested that all published studies should be subjected to some measure for assessing the validity or reliability of its procedures to prevent the publication of unproven findings. Business models are different in the electronic environment. Since about the early 1990s, licensing of electronic resources, particularly journals, has been very common. Presently, a major trend, particularly with respect to scholarly journals, is open access. There are two main forms of open access: open access publishing, in which the articles or the whole journal is freely available from the time of publication, and self-archiving, where the author makes a copy of their own work freely available on the web.
Research statistics and funding
Most funding for scientific research comes from three major sources: corporate research and development departments; private foundations; and government research councils such as the National Institutes of Health in the US and the Medical Research Council in the UK. These are managed primarily through universities and in some cases through military contractors. Many senior researchers (such as group leaders) spend a significant amount of their time applying for grants for research funds. These grants are necessary not only for researchers to carry out their research but also as a source of merit. The Social Psychology Network provides a comprehensive list of U.S. Government and private foundation funding sources.
The total number of researchers (full-time equivalents) per million inhabitants for individual countries is shown in the following table.
Research expenditure by type of research as a share of GDP for individual countries is shown in the following table.
| Physical sciences | Basics | null |
25599 | https://en.wikipedia.org/wiki/Rubidium | Rubidium | Rubidium is a chemical element; it has symbol Rb and atomic number 37. It is a very soft, whitish-grey solid in the alkali metal group, similar to potassium and caesium. Rubidium is the first alkali metal in the group to have a density higher than water. On Earth, natural rubidium comprises two isotopes: 72% is the stable isotope 85Rb, and 28% is the slightly radioactive 87Rb, with a half-life of 48.8 billion years – more than three times as long as the estimated age of the universe.
German chemists Robert Bunsen and Gustav Kirchhoff discovered rubidium in 1861 by the newly developed technique of flame spectroscopy. The name comes from the Latin word rubidus, meaning deep red, the color of its emission spectrum. Rubidium's compounds have various chemical and electronic applications. Rubidium metal is easily vaporized and has a convenient spectral absorption range, making it a frequent target for laser manipulation of atoms. Rubidium is not a known nutrient for any living organisms. However, rubidium ions have similar properties and the same charge as potassium ions, and are actively taken up and treated by animal cells in similar ways.
Characteristics
Physical properties
Rubidium is a very soft, ductile, silvery-white metal. It has a melting point of 39.3 °C and a boiling point of 688 °C. It forms amalgams with mercury and alloys with gold, iron, caesium, sodium, and potassium, but not lithium (despite rubidium and lithium being in the same periodic group). Rubidium and potassium show a very similar purple color in the flame test, and distinguishing the two elements requires more sophisticated analysis, such as spectroscopy.
Chemical properties
Rubidium is the second most electropositive of the stable alkali metals and has a very low first ionization energy of only 403 kJ/mol. It has an electron configuration of [Kr]5s1 and is photosensitive. Due to its strong electropositive nature, rubidium reacts explosively with water to produce rubidium hydroxide and hydrogen gas. As with all the alkali metals, the reaction is usually vigorous enough to ignite the metal or the hydrogen gas produced by the reaction, potentially causing an explosion. Rubidium, being denser than potassium, sinks in water and reacts violently; caesium explodes on contact with water. However, the reaction rates of all alkali metals depend upon the surface area of metal in contact with water, with small metal droplets giving explosive rates. Rubidium has also been reported to ignite spontaneously in air.
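For reference, the overall reaction with water (standard alkali-metal chemistry, written out here for clarity rather than quoted from the article) is

$$2\,\mathrm{Rb} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{RbOH} + \mathrm{H_2}$$

with the evolved hydrogen often ignited by the heat of the reaction.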
Compounds
Rubidium chloride (RbCl) is probably the most used rubidium compound: among several other chlorides, it is used to induce living cells to take up DNA; it is also used as a biomarker, because in nature, it is found only in small quantities in living organisms and when present, replaces potassium. Other common rubidium compounds are the corrosive rubidium hydroxide (RbOH), the starting material for most rubidium-based chemical processes; rubidium carbonate (Rb2CO3), used in some optical glasses, and rubidium copper sulfate, Rb2SO4·CuSO4·6H2O. Rubidium silver iodide (RbAg4I5) has the highest room temperature conductivity of any known ionic crystal, a property exploited in thin film batteries and other applications.
Rubidium forms a number of oxides when exposed to air, including rubidium monoxide (Rb2O), Rb6O, and Rb9O2; rubidium in excess oxygen gives the superoxide RbO2. Rubidium forms salts with halogens, producing rubidium fluoride, rubidium chloride, rubidium bromide, and rubidium iodide.
Isotopes
Although rubidium has only one stable isotope, 85Rb, rubidium in the Earth's crust is composed of two isotopes: the stable 85Rb (72.2%) and the long-lived radioactive 87Rb (27.8%). Natural rubidium is therefore radioactive, with a specific activity of about 670 Bq/g, enough to significantly expose a photographic film in 110 days. Thirty additional rubidium isotopes have been synthesized with half-lives of less than 3 months; most are highly radioactive and have few uses.
Rubidium-87 has a half-life of 48.8 billion years, which is more than three times the age of the universe of about 13.8 billion years, making it a primordial nuclide. It readily substitutes for potassium in minerals, and is therefore fairly widespread. 87Rb has been used extensively in dating rocks; 87Rb beta decays to stable 87Sr. During fractional crystallization, Sr tends to concentrate in plagioclase, leaving Rb in the liquid phase. Hence, the Rb/Sr ratio in residual magma may increase over time, and the progressing differentiation results in rocks with elevated Rb/Sr ratios. The highest ratios (10 or more) occur in pegmatites. If the initial amount of Sr is known or can be extrapolated, then the age can be determined by measurement of the Rb and Sr concentrations and of the 87Sr/86Sr ratio. The dates indicate the true age of the minerals only if the rocks have not been subsequently altered (see rubidium–strontium dating).
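The age relation behind rubidium–strontium dating (standard geochronology, summarized here as a sketch rather than taken from the article) follows from the radioactive decay law: the measured strontium isotope ratio grows from its initial value as 87Rb decays,

$$\left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\mathrm{measured}} = \left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{0} + \left(\frac{^{87}\mathrm{Rb}}{^{86}\mathrm{Sr}}\right)\left(e^{\lambda t} - 1\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}} \approx 1.4\times10^{-11}\ \mathrm{yr}^{-1}$$

for the 48.8-billion-year half-life quoted above. Measuring both ratios for several minerals from the same rock gives a straight line (an isochron) whose slope is e^(λt) − 1 and whose intercept is the initial ratio, from which the age t follows.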
Rubidium-82, one of the element's non-natural isotopes, is produced by electron-capture decay of strontium-82, which has a half-life of 25.36 days. Rubidium-82 itself has a half-life of 76 seconds and decays by positron emission to stable krypton-82.
Occurrence
Rubidium is not abundant, being one of 56 elements that combined make up 0.05% of the Earth's crust; as roughly the 23rd most abundant element in the Earth's crust, it is more abundant than zinc or copper. It occurs naturally in the minerals leucite, pollucite, carnallite, and zinnwaldite, which contain as much as 1% rubidium oxide. Lepidolite contains between 0.3% and 3.5% rubidium, and is the commercial source of the element. Some potassium minerals and potassium chlorides also contain the element in commercially significant quantities.
Seawater contains an average of 125 μg/L of rubidium compared to the much higher value for potassium of 408 mg/L and the much lower value of 0.3 μg/L for caesium. Rubidium is the 18th most abundant element in seawater.
Because of its large ionic radius, rubidium is one of the "incompatible elements". During magma crystallization, rubidium is concentrated together with its heavier analogue caesium in the liquid phase and crystallizes last. Therefore, the largest deposits of rubidium and caesium are zone pegmatite ore bodies formed by this enrichment process. Because rubidium substitutes for potassium in the crystallization of magma, the enrichment is far less effective than that of caesium. Zone pegmatite ore bodies containing mineable quantities of caesium as pollucite or the lithium minerals lepidolite are also a source for rubidium as a by-product.
Two notable sources of rubidium are the rich deposits of pollucite at Bernic Lake, Manitoba, Canada, and the rubicline found as impurities in pollucite on the Italian island of Elba, with a rubidium content of 17.5%. Both of those deposits are also sources of caesium.
Production
Although rubidium is more abundant in Earth's crust than caesium, the limited applications and the lack of a mineral rich in rubidium limit the production of rubidium compounds to 2 to 4 tonnes per year. Several methods are available for separating potassium, rubidium, and caesium. Fractional crystallization of a rubidium and caesium alum yields pure rubidium alum after about 30 successive steps. Two other methods are reported, the chlorostannate process and the ferrocyanide process.
For several years in the 1950s and 1960s, a by-product of potassium production called Alkarb was a main source for rubidium. Alkarb contained 21% rubidium, with the rest being potassium and a small amount of caesium. Today the largest producers of caesium produce rubidium as a by-product from pollucite.
History
Rubidium was discovered in 1861 by Robert Bunsen and Gustav Kirchhoff, in Heidelberg, Germany, in the mineral lepidolite through flame spectroscopy. Because of the bright red lines in its emission spectrum, they chose a name derived from the Latin word , meaning "deep red".
Rubidium is a minor component in lepidolite. Kirchhoff and Bunsen processed 150 kg of a lepidolite containing only 0.24% rubidium monoxide (Rb2O). Both potassium and rubidium form insoluble salts with chloroplatinic acid, but those salts show a slight difference in solubility in hot water. Therefore, the less soluble rubidium hexachloroplatinate (Rb2PtCl6) could be obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, the process yielded 0.51 grams of rubidium chloride (RbCl) for further studies. Bunsen and Kirchhoff began their first large-scale isolation of caesium and rubidium compounds from mineral water, which yielded 7.3 grams of caesium chloride and 9.2 grams of rubidium chloride. Rubidium was the second element, shortly after caesium, to be discovered by spectroscopy, just one year after the invention of the spectroscope by Bunsen and Kirchhoff.
The two scientists used the rubidium chloride to estimate that the atomic weight of the new element was 85.36 (the currently accepted value is 85.47). They tried to generate elemental rubidium by electrolysis of molten rubidium chloride, but instead of a metal, they obtained a blue homogeneous substance, which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance". They presumed that it was a subchloride (); however, the product was probably a colloidal mixture of the metal and rubidium chloride. In a second attempt to produce metallic rubidium, Bunsen was able to reduce rubidium by heating charred rubidium tartrate. Although the distilled rubidium was pyrophoric, they were able to determine the density and the melting point. The quality of this research in the 1860s can be appraised by the fact that their determined density differs by less than 0.1 g/cm3 and the melting point by less than 1 °C from the presently accepted values.
The slight radioactivity of rubidium was discovered in 1908, but that was before the theory of isotopes was established in 1910, and the low level of activity (half-life greater than 1010 years) made interpretation complicated. The now proven decay of 87Rb to stable 87Sr through beta decay was still under discussion in the late 1940s.
Rubidium had minimal industrial value before the 1920s. Since then, the most important use of rubidium is research and development, primarily in chemical and electronic applications. In 1995, rubidium-87 was used to produce a Bose–Einstein condensate, for which the discoverers, Eric Allin Cornell, Carl Edwin Wieman and Wolfgang Ketterle, won the 2001 Nobel Prize in Physics.
Applications
Rubidium compounds are sometimes used in fireworks to give them a purple color. Rubidium has also been considered for use in a thermoelectric generator using the magnetohydrodynamic principle, whereby hot rubidium ions are passed through a magnetic field. These conduct electricity and act like an armature of a generator, thereby generating an electric current. Rubidium, particularly vaporized 87Rb, is one of the most commonly used atomic species employed for laser cooling and Bose–Einstein condensation. Its desirable features for this application include the ready availability of inexpensive diode laser light at the relevant wavelength and the moderate temperatures required to obtain substantial vapor pressures. For cold-atom applications requiring tunable interactions, 85Rb is preferred for its rich Feshbach spectrum.
Rubidium has been used for polarizing 3He, producing volumes of magnetized 3He gas, with the nuclear spins aligned rather than random. Rubidium vapor is optically pumped by a laser, and the polarized Rb polarizes 3He through the hyperfine interaction. Such spin-polarized 3He cells are useful for neutron polarization measurements and for producing polarized neutron beams for other purposes.
The resonant element in atomic clocks utilizes the hyperfine structure of rubidium's energy levels, and rubidium is useful for high-precision timing. It is used as the main component of secondary frequency references (rubidium oscillators) in cell site transmitters and other electronic transmitting, networking, and test equipment. These rubidium standards are often used with GNSS to produce a "primary frequency standard" that has greater accuracy and is less expensive than caesium standards. Such rubidium standards are often mass-produced for the telecommunications industry.
Other potential or current uses of rubidium include a working fluid in vapor turbines, as a getter in vacuum tubes, and as a photocell component. Rubidium is also used as an ingredient in special types of glass, in the production of superoxide by burning in oxygen, in the study of potassium ion channels in biology, and as the vapor in atomic magnetometers. In particular, 87Rb is used with other alkali metals in the development of spin-exchange relaxation-free (SERF) magnetometers.
Rubidium-82 is used for positron emission tomography. Rubidium is very similar to potassium, and tissue with high potassium content will also accumulate the radioactive rubidium. One of the main uses is myocardial perfusion imaging. As a result of changes in the blood–brain barrier in brain tumors, rubidium collects more in brain tumors than normal brain tissue, allowing the use of radioisotope rubidium-82 in nuclear medicine to locate and image brain tumors. Rubidium-82 has a very short half-life of 76 seconds, and the production from decay of strontium-82 must be done close to the patient.
Rubidium has been tested for its influence on manic depression and depression. Dialysis patients suffering from depression show a depletion of rubidium, and therefore supplementation may help during depression. In some tests rubidium was administered as rubidium chloride at doses of up to 720 mg per day for 60 days.
Precautions and biological effects
Rubidium reacts violently with water and can cause fires. To ensure safety and purity, this metal is usually kept under dry mineral oil or sealed in glass ampoules in an inert atmosphere. Rubidium forms peroxides on exposure even to a small amount of air diffused into the oil, and storage is subject to similar precautions as the storage of metallic potassium.
Rubidium, like sodium and potassium, almost always has +1 oxidation state when dissolved in water, even in biological contexts. The human body tends to treat Rb+ ions as if they were potassium ions, and therefore concentrates rubidium in the body's intracellular fluid (i.e., inside cells). The ions are not particularly toxic; a 70 kg person contains on average 0.36 g of rubidium, and an increase in this value by 50 to 100 times did not show negative effects in test persons. The biological half-life of rubidium in humans measures 31–46 days. Although a partial substitution of potassium by rubidium is possible, when more than 50% of the potassium in the muscle tissue of rats was replaced with rubidium, the rats died.
| Physical sciences | Chemical elements_2 | null |
25600 | https://en.wikipedia.org/wiki/Ruthenium | Ruthenium | Ruthenium is a chemical element; it has symbol Ru and atomic number 44. It is a rare transition metal belonging to the platinum group of the periodic table. Like the other metals of the platinum group, ruthenium is unreactive to most chemicals. Karl Ernst Claus, a Russian scientist of Baltic-German ancestry, discovered the element in 1844 at Kazan State University and named it in honor of Russia, using the Latin name Ruthenia. Ruthenium is usually found as a minor component of platinum ores; the annual production has risen from about 19 tonnes in 2009 to some 35.5 tonnes in 2017. Most ruthenium produced is used in wear-resistant electrical contacts and thick-film resistors. A minor application for ruthenium is in platinum alloys and as a chemical catalyst. A new application of ruthenium is as the capping layer for extreme ultraviolet photomasks. Ruthenium is generally found in ores with the other platinum group metals in the Ural Mountains and in North and South America. Small but commercially important quantities are also found in pentlandite extracted from Sudbury, Ontario, and in pyroxenite deposits in South Africa.
Characteristics
Physical properties
Ruthenium, a polyvalent hard white metal, is a member of the platinum group and is in group 8 of the periodic table.
Whereas all other group 8 elements have two electrons in the outermost shell, in ruthenium the outermost shell has only one electron (the final electron is in a lower shell). This anomaly is also observed in the neighboring metals niobium (41), molybdenum (42), and rhodium (45).
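For clarity (the configuration itself is standard and is not spelled out in the text above), ruthenium's ground-state electron configuration is

$$[\mathrm{Kr}]\,4d^{7}\,5s^{1}$$

whereas a simple Aufbau filling would suggest [Kr] 4d6 5s2; the single 5s electron is the anomaly described in the paragraph above.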
Chemical properties
Ruthenium has four crystal modifications and does not tarnish at ambient conditions; it oxidizes upon heating in air (at roughly 800 °C). Ruthenium dissolves in fused alkalis to give ruthenates (RuO42−). It is not attacked by acids (even aqua regia) but is attacked by sodium hypochlorite at room temperature, and by halogens at high temperatures. Ruthenium is most readily attacked by oxidizing agents. Small amounts of ruthenium can increase the hardness of platinum and palladium. The corrosion resistance of titanium is increased markedly by the addition of a small amount of ruthenium. The metal can be plated by electroplating and by thermal decomposition. A ruthenium–molybdenum alloy is known to be superconductive at temperatures below 10.6 K. Ruthenium is the only 4d transition metal that can assume the group oxidation state +8, and even then it is less stable there than the heavier congener osmium: this is the first group from the left of the table where the second and third-row transition metals display notable differences in chemical behavior. Like iron but unlike osmium, ruthenium can form aqueous cations in its lower oxidation states +2 and +3.
Ruthenium is the first in a downward trend in the melting and boiling points and atomization enthalpy in the 4d transition metals after the maximum seen at molybdenum, because the 4d subshell is more than half full and the electrons are contributing less to metallic bonding. (Technetium, the previous element, has an exceptionally low value that is off the trend due to its half-filled [Kr]4d55s2 configuration, though it is not as far off the trend in the 4d series as manganese in the 3d transition series.) Unlike the lighter congener iron, ruthenium is paramagnetic at room temperature, as iron also is above its Curie point.
The reduction potentials in acidic aqueous solution for some common ruthenium species have been tabulated.
Isotopes
Naturally occurring ruthenium is composed of seven stable isotopes. Additionally, 34 radioactive isotopes have been discovered. Of these radioisotopes, the most stable are 106Ru with a half-life of 373.59 days, 103Ru with a half-life of 39.26 days and 97Ru with a half-life of 2.9 days.
Fifteen other radioisotopes have been characterized, with mass numbers ranging from 90 (90Ru) to 115 (115Ru, 114.928 Da). Most of these have half-lives that are less than five minutes; the exceptions are 95Ru (half-life 1.643 hours) and 105Ru (half-life 4.44 hours).
The primary decay mode before the most abundant isotope, 102Ru, is electron capture while the primary mode after is beta emission. The primary decay product before 102Ru is technetium and the primary decay product after is rhodium.
106Ru is a fission product of uranium or plutonium nuclei. Elevated concentrations of 106Ru detected in the atmosphere in 2017 were associated with an alleged undeclared nuclear accident in Russia.
Occurrence
Ruthenium is found in about 100 parts per trillion in the Earth's crust, making it the 78th most abundant element. It is generally found in ores with the other platinum group metals in the Ural Mountains and in North and South America. Small but commercially important quantities are also found in pentlandite extracted from Sudbury, Ontario, Canada, and in pyroxenite deposits in South Africa. The native form of ruthenium is a very rare mineral (Ir replaces part of Ru in its structure). Ruthenium has a relatively high fission product yield in nuclear fission; and given that its most long-lived radioisotope has a half life of "only" around a year, there are often proposals to recover ruthenium in a new kind of nuclear reprocessing from spent fuel. An unusual ruthenium deposit can also be found at the natural nuclear fission reactor that was active in Oklo, Gabon, some two billion years ago. Indeed, the isotope ratio of ruthenium found there was one of several ways used to confirm that a nuclear fission chain reaction had indeed occurred at that site in the geological past. Uranium is no longer mined at Oklo, and there have never been serious attempts to recover any of the platinum group metals present there.
Production
Roughly 30 tonnes of ruthenium are mined each year, and world reserves are estimated at 5,000 tonnes. The composition of the mined platinum group metal (PGM) mixtures varies widely, depending on the geochemical formation. For example, the PGMs mined in South Africa contain on average 11% ruthenium while the PGMs mined in the former USSR contain only 2% (1992). Ruthenium, osmium, and iridium are considered the minor platinum group metals.
Ruthenium, like the other platinum group metals, is obtained commercially as a by-product from the processing of nickel, copper, and platinum metal ores. During electrorefining of copper and nickel, noble metals such as silver, gold, and the platinum group metals precipitate as anode mud, the feedstock for the extraction. The metals are converted to ionized solutes by any of several methods, depending on the composition of the feedstock. One representative method is fusion with sodium peroxide followed by dissolution in aqua regia, and solution in a mixture of chlorine with hydrochloric acid. Osmium (Os), ruthenium (Ru), rhodium (Rh), and iridium (Ir) are insoluble in aqua regia and readily precipitate, leaving the other metals in solution. Rhodium is separated from the residue by treatment with molten sodium bisulfate. The insoluble residue, containing Ru, Os, and Ir, is treated with sodium oxide, in which Ir is insoluble, producing dissolved Ru and Os salts. After oxidation to the volatile oxides, ruthenium tetroxide is separated from osmium tetroxide by precipitation of (NH4)3RuCl6 with ammonium chloride, or by distillation or extraction of the volatile osmium tetroxide with organic solvents. The ammonium ruthenium chloride is then reduced with hydrogen, yielding the metal as a powder or sponge metal that can be treated with powder metallurgy techniques or argon-arc welding.
Ruthenium is contained in spent nuclear fuel, both as a direct fission product and as a product of neutron absorption by the long-lived fission product 99Tc. After allowing the unstable isotopes of ruthenium to decay, chemical extraction could yield ruthenium for use in all applications of ruthenium.
Ruthenium can also be produced by deliberate nuclear transmutation from 99Tc. Given its relatively long half-life, high fission product yield, and high chemical mobility in the environment, 99Tc is among the non-actinides most often proposed for commercial-scale nuclear transmutation. 99Tc has a relatively large neutron cross section, and because technetium has no stable isotopes, there would not be a problem of neutron activation of stable isotopes. Significant amounts of 99Tc are produced in nuclear fission. It is also produced as a byproduct of the use of 99mTc in nuclear medicine, because this isomer decays to 99Tc. Exposing a 99Tc target to strong enough neutron radiation will eventually yield appreciable quantities of ruthenium, which can be chemically separated while consuming the 99Tc.
Chemical compounds
The oxidation states of ruthenium range from 0 to +8, and −2. The properties of ruthenium and osmium compounds are often similar. The +2, +3, and +4 states are the most common. The most prevalent precursor is ruthenium trichloride, a red solid that is poorly defined chemically but versatile synthetically.
Oxides and chalcogenides
Ruthenium can be oxidized to ruthenium(IV) oxide (RuO2, oxidation state +4), which can, in turn, be oxidized by sodium metaperiodate to the volatile yellow tetrahedral ruthenium tetroxide, RuO4, an aggressive, strong oxidizing agent with structure and properties analogous to osmium tetroxide. RuO4 is mostly used as an intermediate in the purification of ruthenium from ores and radiowastes.
Dipotassium ruthenate (K2RuO4, +6) and potassium perruthenate (KRuO4, +7) are also known. Unlike osmium tetroxide, ruthenium tetroxide is less stable, is strong enough as an oxidising agent to oxidise dilute hydrochloric acid and organic solvents like ethanol at room temperature, and is easily reduced to ruthenate () in aqueous alkaline solutions; it decomposes to form the dioxide above 100 °C. Unlike iron but like osmium, ruthenium does not form oxides in its lower +2 and +3 oxidation states. Ruthenium forms dichalcogenides, which are diamagnetic semiconductors crystallizing in the pyrite structure. Ruthenium sulfide (RuS2) occurs naturally as the mineral laurite.
Like iron, ruthenium does not readily form oxoanions and prefers to achieve high coordination numbers with hydroxide ions instead. Ruthenium tetroxide is reduced by cold dilute potassium hydroxide to form black potassium perruthenate, KRuO4, with ruthenium in the +7 oxidation state. Potassium perruthenate can also be produced by oxidising potassium ruthenate, K2RuO4, with chlorine gas. The perruthenate ion is unstable and is reduced by water to form the orange ruthenate. Potassium ruthenate may be synthesized by reacting ruthenium metal with molten potassium hydroxide and potassium nitrate.
Some mixed oxides are also known, such as MIIRuIVO3, Na3RuVO4, NaRuO, and MLnRuO.
Halides and oxyhalides
The highest known ruthenium halide is the hexafluoride, a dark brown solid that melts at 54 °C. It hydrolyzes violently upon contact with water and easily disproportionates to form a mixture of lower ruthenium fluorides, releasing fluorine gas. Ruthenium pentafluoride is a tetrameric dark green solid that is also readily hydrolyzed, melting at 86.5 °C. The yellow ruthenium tetrafluoride is probably also polymeric and can be formed by reducing the pentafluoride with iodine. Among the binary compounds of ruthenium, these high oxidation states are known only in the oxides and fluorides.
Ruthenium trichloride is a well-known compound, existing in a black α-form and a dark brown β-form: the trihydrate is red. Of the known trihalides, trifluoride is dark brown and decomposes above 650 °C, tribromide is dark-brown and decomposes above 400 °C, and triiodide is black. Of the dihalides, difluoride is not known, dichloride is brown, dibromide is black, and diiodide is blue. The only known oxyhalide is the pale green ruthenium(VI) oxyfluoride, RuOF4.
Coordination and organometallic complexes
Ruthenium forms a variety of coordination complexes. Examples are the many pentaammine derivatives [Ru(NH3)5L]n+ that often exist for both Ru(II) and Ru(III). Derivatives of bipyridine and terpyridine are numerous, best known being the luminescent tris(bipyridine)ruthenium(II) chloride.
Ruthenium forms a wide range of compounds with carbon–ruthenium bonds. Grubbs' catalyst is used for alkene metathesis. Ruthenocene is analogous to ferrocene structurally, but exhibits distinctive redox properties. The colorless liquid ruthenium pentacarbonyl converts in the absence of CO pressure to the dark red solid triruthenium dodecacarbonyl. Ruthenium trichloride reacts with carbon monoxide to give many derivatives including RuHCl(CO)(PPh3)3 and Ru(CO)2(PPh3)3 (Roper's complex). Heating solutions of ruthenium trichloride in alcohols with triphenylphosphine gives tris(triphenylphosphine)ruthenium dichloride (RuCl2(PPh3)3), which converts to the hydride complex chlorohydridotris(triphenylphosphine)ruthenium(II) (RuHCl(PPh3)3).
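The carbonyl condensation described above can be summarised by a simple balanced equation (a standard transformation, written out here for clarity):
3 Ru(CO)5 → Ru3(CO)12 + 3 CO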
History
Though naturally occurring platinum alloys containing all six platinum-group metals were used for a long time by pre-Columbian Americans and known as a material to European chemists from the mid-16th century, not until the mid-18th century was platinum identified as a pure element. That natural platinum contained palladium, rhodium, osmium and iridium was discovered in the first decade of the 19th century. Platinum in alluvial sands of Russian rivers gave access to raw material for use in plates and medals and for the minting of ruble coins, starting in 1828. Residues from platinum production for coinage were available in the Russian Empire, and therefore most of the research on them was done in Eastern Europe.
It is possible that the Polish chemist Jędrzej Śniadecki isolated element 44 (which he called "vestium" after the asteroid Vesta discovered shortly before) from South American platinum ores in 1807. He published an announcement of his discovery in 1808. His work was never confirmed, however, and he later withdrew his claim of discovery.
Jöns Berzelius and Gottfried Osann nearly discovered ruthenium in 1827. They examined residues that were left after dissolving crude platinum from the Ural Mountains in aqua regia. Berzelius did not find any unusual metals, but Osann thought he found three new metals, which he called pluranium, ruthenium, and polinium. This discrepancy led to a long-standing controversy between Berzelius and Osann about the composition of the residues. As Osann was not able to repeat his isolation of ruthenium, he eventually relinquished his claims. The name "ruthenium" was chosen by Osann because the analysed samples stemmed from the Ural Mountains in Russia.
In 1844, Karl Ernst Claus, a Russian scientist of Baltic German descent, showed that the compounds prepared by Gottfried Osann contained small amounts of ruthenium, which Claus had discovered the same year. Claus isolated ruthenium from the platinum residues of rouble production while he was working in Kazan University, Kazan, the same way its heavier congener osmium had been discovered four decades earlier. Claus showed that ruthenium oxide contained a new metal and obtained 6 grams of ruthenium from the part of crude platinum that is insoluble in aqua regia. Choosing the name for the new element, Claus stated: "I named the new body, in honour of my Motherland, ruthenium. I had every right to call it by this name because Mr. Osann relinquished his ruthenium and the word does not yet exist in chemistry." The name itself derives from the Latin word Ruthenia.
In doing so, Claus started a trend that continues to this day – naming an element after a country.
Applications
Approximately 30.9 tonnes of ruthenium were consumed in 2016, 13.8 of them in electrical applications, 7.7 in catalysis, and 4.6 in electrochemistry.
Because it hardens platinum and palladium alloys, ruthenium is used in electrical contacts, where a thin film is sufficient to achieve the desired durability. With properties similar to those of rhodium but at lower cost, ruthenium finds its major use in electrical contacts. The ruthenium plate is applied to the electrical contact and electrode base metal by electroplating or sputtering.
Ruthenium dioxide, together with lead and bismuth ruthenates, is used in thick-film chip resistors. These two electronic applications account for 50% of the ruthenium consumption.
Ruthenium is seldom alloyed with metals outside the platinum group; where it is, small quantities improve some properties. The added corrosion resistance in titanium alloys led to the development of a special alloy with 0.1% ruthenium. Ruthenium is also used in some advanced high-temperature single-crystal superalloys, with applications that include the turbines in jet engines. Several nickel-based superalloy compositions have been described, such as EPM-102 (with 3% Ru), TMS-162 (with 6% Ru), TMS-138, and TMS-174, the latter two containing 6% rhenium. Fountain pen nibs are frequently tipped with ruthenium alloy. From 1944 onward, the Parker 51 fountain pen was fitted with the "RU" nib, a 14K gold nib tipped with 96.2% ruthenium and 3.8% iridium.
Ruthenium is a component of mixed-metal oxide (MMO) anodes used for cathodic protection of underground and submerged structures, and for electrolytic cells for such processes as generating chlorine from salt water. The fluorescence of some ruthenium complexes is quenched by oxygen, a property exploited in optode sensors for oxygen. Ruthenium red, [(NH3)5Ru-O-Ru(NH3)4-O-Ru(NH3)5]6+, is a biological stain used to stain polyanionic molecules such as pectin and nucleic acids for light microscopy and electron microscopy. The beta-decaying isotope ruthenium-106 is used in radiotherapy of eye tumors, mainly malignant melanomas of the uvea. Ruthenium-centered complexes are being researched for possible anticancer properties. Compared with platinum complexes, those of ruthenium show greater resistance to hydrolysis and more selective action on tumors.
Ruthenium tetroxide exposes latent fingerprints by reacting on contact with fatty oils or fats with sebaceous contaminants and producing brown/black ruthenium dioxide pigment.
Electronics
Electronics is the largest use of ruthenium. Ru metal is particularly nonvolatile, which is advantageous in microelectronic devices. Ru and its main oxide RuO2 have comparable electrical resistivities. Copper can be directly electroplated onto ruthenium; particular applications include barrier layers, transistor gates, and interconnects. Ru films can be deposited by chemical vapor deposition using volatile complexes such as ruthenium tetroxide and the organoruthenium compound (cyclohexadiene)Ru(CO)3.
Catalysis
Many ruthenium-containing compounds exhibit useful catalytic properties. Solutions containing ruthenium trichloride are highly active for olefin metathesis. Such catalysts are used commercially, for example in the production of polynorbornene. Well-defined ruthenium carbene and alkylidene complexes show similar reactivity but are only used on a small scale. The Grubbs catalysts, for example, have been employed in the preparation of drugs and advanced materials.
Some ruthenium complexes are highly active catalysts for transfer hydrogenations (sometimes referred to as "borrowing hydrogen" reactions). Chiral ruthenium complexes, introduced by Ryōji Noyori, are employed for the enantioselective hydrogenation of ketones, aldehydes, and imines. A typical catalyst is (cymene)Ru(S,S-TsDPEN). A Nobel Prize in Chemistry was awarded in 2001 to Ryōji Noyori for contributions to the field of asymmetric hydrogenation.
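A generic transfer-hydrogenation scheme with 2-propanol as the hydrogen donor (the usual textbook formulation; the substrates shown are illustrative and not taken from this article):
R2C=O + (CH3)2CHOH → R2CH–OH + (CH3)2C=O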
Ruthenium-promoted cobalt catalysts are used in Fischer–Tropsch synthesis.
Emerging applications
Ruthenium-based compounds are components of dye-sensitized solar cells, which have been proposed as a low-cost solar cell system.
Health effects
Little is known about the health effects of ruthenium, and it is relatively rare for people to encounter ruthenium compounds. Metallic ruthenium is chemically inert. Some compounds, such as ruthenium tetroxide (RuO4), are highly toxic and volatile.
| Physical sciences | Chemical elements_2 | null |
25601 | https://en.wikipedia.org/wiki/Rhodium | Rhodium | Rhodium is a chemical element; it has symbol Rh and atomic number 45. It is a very rare, silvery-white, hard, corrosion-resistant transition metal. It is a noble metal and a member of the platinum group. It has only one naturally occurring isotope, which is 103Rh. Naturally occurring rhodium is usually found as a free metal or as an alloy with similar metals and rarely as a chemical compound in minerals such as bowieite and rhodplumsite. It is one of the rarest and most valuable precious metals. Rhodium is a group 9 element (cobalt group).
Rhodium is found in platinum or nickel ores with the other members of the platinum group metals. It was discovered in 1803 by William Hyde Wollaston in one such ore, and named for the rose color of one of its chlorine compounds.
The element's major use (consuming about 80% of world rhodium production) is as one of the catalysts in the three-way catalytic converters in automobiles. Because rhodium metal is inert against corrosion and most aggressive chemicals, and because of its rarity, rhodium is usually alloyed with platinum or palladium and applied in high-temperature and corrosion-resistive coatings. White gold is often plated with a thin rhodium layer to improve its appearance, while sterling silver is often rhodium-plated to resist tarnishing.
Rhodium detectors are used in nuclear reactors to measure the neutron flux level. Other uses of rhodium include asymmetric hydrogenation used to form drug precursors and the processes for the production of acetic acid.
History
Rhodium (from Greek rhodon, meaning 'rose') was discovered in 1803 by William Hyde Wollaston, soon after he discovered palladium. He used crude platinum ore presumably obtained from South America. His procedure dissolved the ore in aqua regia and neutralized the acid with sodium hydroxide (NaOH). He then precipitated the platinum as ammonium chloroplatinate by adding ammonium chloride (NH4Cl). Most other metals like copper, lead, palladium, and rhodium were precipitated with zinc. Diluted nitric acid dissolved all but palladium and rhodium. Of these, palladium dissolved in aqua regia but rhodium did not, and the rhodium was precipitated by the addition of sodium chloride as sodium hexachlororhodate. After being washed with ethanol, the rose-red precipitate was reacted with zinc, which displaced the rhodium in the ionic compound and thereby released the rhodium as free metal.
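The final zinc displacement step can be represented by a balanced equation, written here for the anhydrous sodium hexachlororhodate as an illustration (the precipitate actually handled was a hydrate):
2 Na3RhCl6 + 3 Zn → 2 Rh + 3 ZnCl2 + 6 NaCl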
For decades, the rare element had only minor applications; for example, by the turn of the century, rhodium-containing thermocouples were used to measure temperatures up to 1800 °C. They have exceptionally good stability in the temperature range of 1300 to 1800 °C.
The first major application was electroplating for decorative uses and as corrosion-resistant coating. The introduction of the three-way catalytic converter by Volvo in 1976 increased the demand for rhodium. The previous catalytic converters used platinum or palladium, while the three-way catalytic converter used rhodium to reduce the amount of NOx in the exhaust.
Characteristics
Rhodium is a hard, silvery, durable metal that has a high reflectance. Rhodium metal does not normally form an oxide, even when heated. Oxygen is absorbed from the atmosphere only at the melting point of rhodium, but is released on solidification. Rhodium has both a higher melting point and lower density than platinum. It is not attacked by most acids: it is completely insoluble in nitric acid and dissolves slightly in aqua regia.
Rhodium belongs to group 9 of the periodic table, but exhibits an atypical ground state valence electron configuration for that group, with only one electron in its outermost s orbital; similar departures from the expected filling order occur in the neighboring elements niobium (41), ruthenium (44), and palladium (46).
Chemical properties
The common oxidation states of rhodium are +3 and +1. Oxidation states 0, +2, and +4 are also well known. A few complexes at still higher oxidation states are known.
The rhodium oxides include Rh2O3, RhO2, and several less well characterized phases. None are of technological significance.
All the Rh(III) halides are known, but the hydrated trichloride is most frequently encountered. It is also available in an anhydrous form, which is somewhat refractory. Other rhodium(III) chlorides include sodium hexachlororhodate, Na3RhCl6, and pentaamminechlororhodium dichloride, [RhCl(NH3)5]Cl2. They are used in the recycling and purification of this very expensive metal. Heating a methanolic solution of hydrated rhodium trichloride with sodium acetate gives the blue-green rhodium(II) acetate, Rh2(O2CCH3)4, which features a Rh–Rh bond. This complex and the related rhodium(II) trifluoroacetate have attracted attention as catalysts for cyclopropanation reactions. Hydrated rhodium trichloride is reduced by carbon monoxide, ethylene, and trifluorophosphine to give the corresponding rhodium(I) complexes. When treated with triphenylphosphine, hydrated rhodium trichloride converts to the maroon-colored RhCl(PPh3)3, which is known as Wilkinson's catalyst. Reduction of rhodium carbonyl chloride gives hexarhodium hexadecacarbonyl, Rh6(CO)16, and tetrarhodium dodecacarbonyl, Rh4(CO)12, the two most common Rh(0) complexes.
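One commonly quoted stoichiometry for the formation of Wilkinson's catalyst uses excess triphenylphosphine as the reductant (the equation is not given in this article, and other reductants such as the ethanol solvent can also take part):
RhCl3·3H2O + 4 PPh3 → RhCl(PPh3)3 + OPPh3 + 2 HCl + 2 H2O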
As for other metals, rhodium forms binary fluorides in its high oxidation states. These include rhodium pentafluoride, a tetramer with the true formula [RhF5]4, and rhodium hexafluoride, RhF6.
Isotopes
Naturally occurring rhodium is composed of only one isotope, 103Rh. The most stable radioisotopes are 101Rh with a half-life of 3.3 years, 102Rh with a half-life of 207 days, 102mRh with a half-life of 2.9 years, and 99Rh with a half-life of 16.1 days. Twenty other radioisotopes have been characterized with atomic weights ranging from 92.926 u (93Rh) to 116.925 u (117Rh). Most of these have half-lives shorter than an hour, except 100Rh (20.8 hours) and 105Rh (35.36 hours). Rhodium has numerous meta states, the most stable being 102mRh (0.141 MeV) with a half-life of about 2.9 years and 101mRh (0.157 MeV) with a half-life of 4.34 days (see isotopes of rhodium).
In isotopes weighing less than 103 (the stable isotope), the primary decay mode is electron capture and the primary decay product is ruthenium. In isotopes greater than 103, the primary decay mode is beta emission and the primary product is palladium.
Occurrence
Rhodium is one of the rarest elements in the Earth's crust, comprising an estimated 0.0002 parts per million (2 × 10⁻¹⁰). Its rarity affects its price and its use in commercial applications. The concentration of rhodium in nickel meteorites is typically 1 part per billion. Rhodium has been measured in some potatoes with concentrations between 0.8 and 30 ppt.
Mining and price
Rhodium ores are a mixture with other metals such as palladium, silver, platinum, and gold. Few rhodium minerals are known. The separation of rhodium from the other metals poses significant challenges. Principal sources are located in South Africa, river sands of the Ural Mountains in Russia, and in North America, especially the copper-nickel sulfide mining area of the Sudbury, Ontario, region. Although the rhodium abundance at Sudbury is very small, the large amount of processed nickel ore makes rhodium recovery cost-effective.
The main exporter of rhodium is South Africa (approximately 80% in 2010) followed by Russia. The annual world production is 30 tonnes. The price of rhodium is highly variable.
Used nuclear fuels
Rhodium is a fission product of uranium-235: each kilogram of fission product contains a significant amount of the lighter platinum group metals. Used nuclear fuel is therefore a potential source of rhodium, but the extraction is complex and expensive, and the presence of rhodium radioisotopes requires a period of cooling storage for multiple half-lives of the longest-lived isotope (101Rh with a half-life of 3.3 years, and 102mRh with a half-life of 2.9 years), or about 10 years. These factors make the source unattractive and no large-scale extraction has been attempted.
Applications
The primary use of this element is in automobiles as a catalytic converter, changing harmful unburned hydrocarbons, carbon monoxide, and nitrogen oxide exhaust emissions into less noxious gases. Of 30,000 kg of rhodium consumed worldwide in 2012, 81% (24,300 kg) went into this application, and 8,060 kg was recovered from old converters. About 964 kg of rhodium was used in the glass industry, mostly for production of fiberglass and flat-panel glass, and 2,520 kg was used in the chemical industry.
In 2008, net demand (with the recycling accounted for) of rhodium for automotive converters made up 84% of the world usage, with the number fluctuating around 80% in 2015−2021.
Carbonylation
Rhodium catalysts are used in some industrial processes, notably those involving carbon monoxide. In the Monsanto process, rhodium iodides catalyze the carbonylation of methanol to produce acetic acid. This technology has been significantly displaced by the iridium-based Cativa process, which effects the same conversion but more efficiently. Rhodium-based complexes are the dominant catalysts for hydroformylation, which converts alkenes to aldehydes according to the following equation (shown here schematically for a terminal alkene giving the linear aldehyde):
RCH=CH2 + CO + H2 → RCH2CH2CHO
Rh-based hydroformylation underpins the industrial production of products as diverse as detergents, fragrances, and some drugs. Originally hydroformylation relied on much cheaper cobalt carbonyl-based catalysts, but that technology has largely been eclipsed by rhodium-based catalysts despite the cost differential.
Rhodium is also known to catalyze many reactions involving hydrogen gas and hydrosilanes. These include hydrogenations and hydrosilylations of alkenes. Rhodium metal, but not rhodium complexes, catalyzes the hydrogenation of benzene to cyclohexane.
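The benzene hydrogenation mentioned here corresponds to the overall equation (a standard textbook transformation, written out for clarity):
C6H6 + 3 H2 → C6H12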
Ornamental uses
Rhodium finds use in jewelry and for decorations. It is electroplated on white gold and platinum to give it a reflective white surface at time of sale, after which the thin layer wears away with use. This is known as rhodium flashing in the jewelry business. It may also be used in coating sterling silver to protect against tarnish (silver sulfide, Ag2S, produced from atmospheric hydrogen sulfide, H2S). Solid (pure) rhodium jewelry is very rare, more because of the difficulty of fabrication (high melting point and poor malleability) than because of the high price. The high cost ensures that rhodium is applied only as an electroplate. Rhodium has also been used for honors or to signify elite status, when more commonly used metals such as silver, gold or platinum were deemed insufficient. In 1979 the Guinness Book of World Records gave Paul McCartney a rhodium-plated disc for being history's all-time best-selling songwriter and recording artist.
Other uses
Rhodium is used as an alloying agent for hardening and improving the corrosion resistance of platinum and palladium. These alloys are used in furnace windings, bushings for glass fiber production, thermocouple elements, electrodes for aircraft spark plugs, and laboratory crucibles. Other uses include:
Electrical contacts, where it is valued for small electrical resistance, small and stable contact resistance, and great corrosion resistance.
Rhodium plated by either electroplating or evaporation is extremely hard and useful for optical instruments.
Filters in mammography systems for the characteristic X-rays it produces.
Rhodium neutron detectors are used in nuclear reactors to measure neutron flux levels. This method requires a digital filter to determine the current neutron flux level, generating three separate signals: immediate, a few seconds delayed, and a minute delayed, each with its own signal level; all three are combined in the rhodium detector signal. The three Palo Verde nuclear reactors each have 305 rhodium neutron detectors, 61 detectors on each of five vertical levels, providing an accurate three-dimensional "picture" of reactivity and allowing fine tuning to consume the nuclear fuel most economically.
In automobile manufacturing, rhodium is also used in the construction of headlight reflectors.
Precautions
Being a noble metal, pure rhodium is inert and harmless in elemental form. However, chemical complexes of rhodium can be reactive. For rhodium chloride, the median lethal dose (LD50) for rats is 198 mg of RhCl3 per kilogram of body weight. Like the other noble metals, rhodium has not been found to serve any biological function.
People can be exposed to rhodium in the workplace by inhalation. The Occupational Safety and Health Administration (OSHA) has specified the legal limit (Permissible exposure limit) for rhodium exposure in the workplace at 0.1 mg/m3 over an 8-hour workday, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL), at the same level. At levels of 100 mg/m3, rhodium is immediately dangerous to life or health. For soluble compounds, the PEL and REL are both 0.001 mg/m3.
| Physical sciences | Chemical elements_2 | null |
25602 | https://en.wikipedia.org/wiki/Radium | Radium | Radium is a chemical element; it has symbol Ra and atomic number 88. It is the sixth element in group 2 of the periodic table, also known as the alkaline earth metals. Pure radium is silvery-white, but it readily reacts with nitrogen (rather than oxygen) upon exposure to air, forming a black surface layer of radium nitride (Ra3N2). All isotopes of radium are radioactive, the most stable isotope being radium-226 with a half-life of 1,600 years. When radium decays, it emits ionizing radiation as a by-product, which can excite fluorescent chemicals and cause radioluminescence. For this property, it was widely used in self-luminous paints following its discovery. Of the radioactive elements that occur in quantity, radium is considered particularly toxic, and it is carcinogenic due to the radioactivity of both it and its immediate decay product radon as well as its tendency to accumulate in the bones.
Radium, in the form of radium chloride, was discovered by Marie and Pierre Curie in 1898 from ore mined at Jáchymov. They extracted the radium compound from uraninite and published the discovery at the French Academy of Sciences five days later. Radium was isolated in its metallic state by Marie Curie and André-Louis Debierne through the electrolysis of radium chloride in 1910, and soon afterwards the metal started being produced on larger scales in Austria, the United States, and Belgium. However, the amount of radium produced globally has always been small in comparison to other elements, and by the 2010s, annual production of radium, mainly via extraction from spent nuclear fuel, was less than 100 grams.
In nature, radium is found in uranium ores in quantities as small as a seventh of a gram per ton of uraninite, and in thorium ores in trace amounts. Radium is not necessary for living organisms, and its radioactivity and chemical reactivity make adverse health effects likely when it is incorporated into biochemical processes because of its chemical mimicry of calcium. As of 2018, other than in nuclear medicine, radium has no commercial applications. Formerly, from the 1910s to the 1970s, it was used as a radioactive source for radioluminescent devices and also in radioactive quackery for its supposed curative power. In nearly all of its applications, radium has been replaced with less dangerous radioisotopes, with one of its few remaining non-medical uses being the production of actinium in nuclear reactors.
Bulk properties
Radium is the heaviest known alkaline earth metal and is the only radioactive member of its group. Its physical and chemical properties most closely resemble its lighter congener, barium.
Pure radium is a volatile, lustrous silvery-white metal, even though its lighter congeners calcium, strontium, and barium have a slight yellow tint. Radium's lustrous surface rapidly becomes black upon exposure to air, likely due to the formation of radium nitride (Ra3N2). Its melting point has been reported as either 700 °C or 960 °C and its boiling point as about 1737 °C; however, these values are not well established. Both are slightly lower than those of barium, in keeping with periodic trends down the group 2 elements.
Like barium and the alkali metals, radium crystallizes in the body-centered cubic structure at standard temperature and pressure: the radium–radium bond distance is 514.8 picometers.
Radium has a density of 5.5 g/cm³, higher than that of barium, and the two elements have similar crystal structures (body-centered cubic at standard temperature and pressure).
Isotopes
Radium has 33 known isotopes with mass numbers from 202 to 234, all of which are radioactive. Four of these – 223Ra (half-life 11.4 days), 224Ra (3.64 days), 226Ra (1600 years), and 228Ra (5.75 years) – occur naturally in the decay chains of primordial thorium-232, uranium-235, and uranium-238 (223Ra from uranium-235, 226Ra from uranium-238, and the other two from thorium-232). These isotopes nevertheless still have half-lives too short to be primordial radionuclides, and only exist in nature from these decay chains.
Together with the mostly artificial 225Ra (half-life 15 days), which occurs in nature only as a decay product of minute traces of neptunium-237,
these are the five most stable isotopes of radium. All other 27 known radium isotopes have half-lives under two hours, and the majority have half-lives under a minute. Of these, 221Ra (half-life 28 s) also occurs as a 237Np daughter, and two further isotopes would be produced by the still-unobserved double beta decay of natural radon isotopes. At least 12 nuclear isomers have been reported, the most stable of which is radium-205m with a half-life between 130 and 230 milliseconds; this is still shorter than the half-lives of twenty-four ground-state radium isotopes.
226Ra is the most stable isotope of radium and is the last isotope in the decay chain of uranium-238 with a half-life of over a millennium; it makes up almost all of natural radium. Its immediate decay product is the dense radioactive noble gas radon (specifically the isotope 222Rn), which is responsible for much of the danger of environmental radium. It is 2.7 million times more radioactive than the same molar amount of natural uranium (mostly uranium-238), due to its proportionally shorter half-life.
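The factor follows from the ratio of half-lives, since for equal numbers of atoms the activity scales as 1/t½ (a back-of-envelope check using rounded reference values rather than figures from this article):
t½(238U) / t½(226Ra) ≈ 4.5 × 10⁹ yr / 1.6 × 10³ yr ≈ 2.8 × 10⁶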
A sample of radium metal maintains itself at a higher temperature than its surroundings because of the radiation it emits. Natural radium (which is mostly 226Ra) emits mostly alpha particles, but other steps in its decay chain (the uranium or radium series) emit alpha or beta particles, and almost all particle emissions are accompanied by gamma rays.
Experimental nuclear physics studies have shown that nuclei of several radium isotopes, including radium-224, have reflection-asymmetric ("pear-like") shapes. In particular, this experimental information on radium-224 has been obtained at ISOLDE using a technique called Coulomb excitation.
Chemistry
Radium only exhibits the oxidation state of +2 in solution. It forms the colorless Ra²⁺ cation in aqueous solution, which is highly basic and does not form complexes readily. Most radium compounds are therefore simple ionic compounds, though participation from the 6s and 6p electrons (in addition to the valence 7s electrons) is expected due to relativistic effects and would enhance the covalent character of radium compounds such as RaF2 and RaAt2. For this reason, the standard electrode potential for the half-reaction Ra²⁺ (aq) + 2e⁻ → Ra (s) is −2.916 V, even slightly lower than the value −2.92 V for barium, whereas the values had previously smoothly increased down the group (Ca: −2.84 V; Sr: −2.89 V; Ba: −2.92 V). The values for barium and radium are almost exactly the same as those of the heavier alkali metals potassium, rubidium, and caesium.
Compounds
Solid radium compounds are white as radium ions provide no specific coloring, but they gradually turn yellow and then dark over time due to self-radiolysis from radium's alpha decay. Insoluble radium compounds coprecipitate with all barium, most strontium, and most lead compounds.
Radium oxide (RaO) is poorly characterized, as the reaction of radium with air results in the formation of radium nitride. Radium hydroxide (Ra(OH)2) is formed via the reaction of radium metal with water, and is the most readily soluble among the alkaline earth hydroxides and a stronger base than its barium congener, barium hydroxide. It is also more soluble than actinium hydroxide and thorium hydroxide: these three adjacent hydroxides may be separated by precipitating them with ammonia.
Radium chloride (RaCl2) is a colorless, luminescent compound. It becomes yellow after some time due to self-damage by the alpha radiation given off by radium when it decays. Small amounts of barium impurities give the compound a rose color. It is soluble in water, though less so than barium chloride, and its solubility decreases with increasing concentration of hydrochloric acid. Crystallization from aqueous solution gives the dihydrate RaCl2·2H2O, isomorphous with its barium analog.
Radium bromide (RaBr2) is also a colorless, luminous compound. In water, it is more soluble than radium chloride. Like radium chloride, crystallization from aqueous solution gives the dihydrate RaBr2·2H2O, isomorphous with its barium analog. The ionizing radiation emitted by radium bromide excites nitrogen molecules in the air, making it glow. The alpha particles emitted by radium quickly gain two electrons to become neutral helium, which builds up inside and weakens radium bromide crystals. This effect sometimes causes the crystals to break or even explode.
Radium nitrate (Ra(NO3)2) is a white compound that can be made by dissolving radium carbonate in nitric acid. As the concentration of nitric acid increases, the solubility of radium nitrate decreases, an important property for the chemical purification of radium.
Radium forms much the same insoluble salts as its lighter congener barium: it forms the insoluble sulfate (RaSO4, the most insoluble known sulfate), chromate (RaCrO4), carbonate (RaCO3), iodate (Ra(IO3)2), tetrafluoroberyllate (RaBeF4), and nitrate (Ra(NO3)2). With the exception of the carbonate, all of these are less soluble in water than the corresponding barium salts, but they are all isostructural to their barium counterparts. Additionally, radium phosphate, oxalate, and sulfite are probably also insoluble, as they coprecipitate with the corresponding insoluble barium salts. The great insolubility of radium sulfate (at 20 °C, only 2.1 mg will dissolve in 1 kg of water) means that it is one of the less biologically dangerous radium compounds. The large ionic radius of Ra²⁺ (148 pm) results in weak ability to form coordination complexes and poor extraction of radium from aqueous solutions when not at high pH.
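A rough solubility-product estimate from the quoted figure, taking the molar mass of RaSO4 as about 322 g/mol (an illustrative calculation, not taken from the article):
2.1 mg per kg of water ≈ 6.5 × 10⁻⁶ mol/L, so Ksp ≈ (6.5 × 10⁻⁶)² ≈ 4 × 10⁻¹¹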
Occurrence
All isotopes of radium have half-lives much shorter than the age of the Earth, so that any primordial radium would have decayed long ago. Radium nevertheless still occurs in the environment, as the isotopes 223Ra, 224Ra, 226Ra, and 228Ra are part of the decay chains of natural thorium and uranium isotopes; since thorium and uranium have very long half-lives, these daughters are continually being regenerated by their decay. Of these four isotopes, the longest-lived is 226Ra (half-life 1600 years), a decay product of natural uranium. Because of its relative longevity, 226Ra is the most common isotope of the element, making up about one part per trillion of the Earth's crust; essentially all natural radium is 226Ra. Thus, radium is found in tiny quantities in the uranium ore uraninite and various other uranium minerals, and in even tinier quantities in thorium minerals. One ton of pitchblende typically yields about one seventh of a gram of radium. One kilogram of the Earth's crust contains about 900 picograms of radium, and one liter of sea water contains about 89 femtograms of radium.
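The figure of about a seventh of a gram per ton is consistent with secular equilibrium between 226Ra and 238U, in which the atom ratio equals the ratio of half-lives (a rough check with rounded reference values, not figures from this article):
N(Ra) / N(U) = t½(226Ra) / t½(238U) ≈ 1600 yr / 4.5 × 10⁹ yr ≈ 3.6 × 10⁻⁷
so a tonne of uranium holds roughly 0.3 g of radium, and a tonne of typical pitchblende correspondingly less.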
History
Radium was discovered by Marie Skłodowska-Curie and her husband Pierre Curie on 21 December 1898 in a uraninite (pitchblende) sample from Jáchymov. While studying the mineral earlier, the Curies removed uranium from it and found that the remaining material was still radioactive. In July 1898, while studying pitchblende, they isolated an element similar to bismuth which turned out to be polonium. They then isolated a radioactive mixture consisting of two components: compounds of barium, which gave a brilliant green flame color, and unknown radioactive compounds which gave carmine spectral lines that had never been documented before. The Curies found the radioactive compounds to be very similar to the barium compounds, except they were less soluble. This discovery made it possible for the Curies to isolate the radioactive compounds and discover a new element in them. The Curies announced their discovery to the French Academy of Sciences on 26 December 1898. The naming of radium dates to about 1899, from the French word radium, formed in Modern Latin from radius (ray): this was in recognition of radium's emission of energy in the form of rays. The gaseous emissions of radium, radon, were recognized and studied extensively by Friedrich Ernst Dorn in the early 1900s, though at the time they were characterized as "radium emanations".
In September 1910, Marie Curie and André-Louis Debierne announced that they had isolated radium as a pure metal through the electrolysis of pure radium chloride (RaCl2) solution using a mercury cathode, producing radium–mercury amalgam. This amalgam was then heated in an atmosphere of hydrogen gas to remove the mercury, leaving pure radium metal.
Later that same year, E. Ebler isolated radium metal by thermal decomposition of its azide, Ra(N3)2. Radium metal was first industrially produced at the beginning of the 20th century by Biraco, a subsidiary company of Union Minière du Haut Katanga (UMHK) in its Olen plant in Belgium. The metal became an important export of Belgium from 1922 up until World War II.
The general historical unit for radioactivity, the curie, is based on the radioactivity of 226Ra. It was originally defined as the radioactivity of one gram of radium-226, but the definition was later fixed as exactly 3.7 × 10¹⁰ disintegrations per second.
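The fixed value agrees with the activity of one gram of 226Ra calculated from its half-life (a worked check using approximate constants, not quoted from the article):
λ = ln 2 / t½ ≈ 0.693 / (1600 yr × 3.16 × 10⁷ s/yr) ≈ 1.37 × 10⁻¹¹ s⁻¹
N = (1 g / 226 g·mol⁻¹) × 6.022 × 10²³ mol⁻¹ ≈ 2.7 × 10²¹ atoms
A = λN ≈ 3.7 × 10¹⁰ decays per second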
Historical applications
Luminescent paint
Radium was formerly used in self-luminous paints for watches, aircraft switches, clocks, and instrument dials and panels. A typical self-luminous watch that uses radium paint contains around 1 microgram of radium. In the mid-1920s, a lawsuit was filed against the United States Radium Corporation by five dying "Radium Girls" – dial painters who had painted radium-based luminous paint on the components of watches and clocks. The dial painters were instructed to lick their brushes to give them a fine point, thereby ingesting radium. Their exposure to radium caused serious health effects which included sores, anemia, and bone cancer.
During the litigation, it was determined that the company's scientists and management had taken considerable precautions to protect themselves from the effects of radiation, yet had not taken comparable steps to protect their employees. Additionally, for several years the companies had attempted to cover up the effects and avoid liability by insisting that the Radium Girls were instead suffering from syphilis.
As a result of the lawsuit, and an extensive study by the U.S. Public Health Service, the adverse effects of radioactivity became widely known, and radium-dial painters were instructed in proper safety precautions and provided with protective gear. Radium continued to be used in dials, especially in manufacturing during World War II, but from 1925 onward there were no further injuries to dial painters.
From the 1960s the use of radium paint was discontinued. In many cases luminous dials were implemented with non-radioactive fluorescent materials excited by light; such devices glow in the dark after exposure to light, but the glow fades. Where long-lasting self-luminosity in darkness was required, safer radioactive promethium-147 (half-life 2.6 years) or tritium (half-life 12 years) paint was used; both continue to be used as of 2018. These had the added advantage of not degrading the phosphor over time, unlike radium. Tritium as it is used in these applications is considered safer than radium, as it emits very low-energy beta radiation (even lower-energy than the beta radiation emitted by promethium) which cannot penetrate the skin, unlike the gamma radiation emitted by radium isotopes.
Clocks, watches, and instruments dating from the first half of the 20th century, often in military applications, may have been painted with radioactive luminous paint. They are usually no longer luminous; this is not due to radioactive decay of the radium (which has a half-life of 1600 years) but to the fluorescence of the zinc sulfide fluorescent medium being worn out by the radiation from the radium.
Originally appearing as white, most radium paint from before the 1960s has tarnished to yellow over time. The radiation dose from an intact device is usually only a hazard when many devices are grouped together or if the device is disassembled or tampered with.
Quackery
Radium was once an additive in products such as cosmetics, soap, razor blades, and even beverages due to its supposed curative powers. Many contemporary products were falsely advertised as being radioactive. Such products soon fell out of vogue and were prohibited by authorities in many countries after it was discovered they could have serious adverse health effects. (See, for instance, Radithor or Revigator types of "radium water" or "Standard Radium Solution for Drinking".) Spas featuring radium-rich water are still occasionally touted as beneficial, such as those in Misasa, Tottori, Japan, though the sources of radioactivity in these spas vary and may be attributed to radon and other radioisotopes.
Medical and research uses
Radium (usually in the form of radium chloride or radium bromide) was used in medicine to produce radon gas, which in turn was used as a cancer treatment. Several of these radon sources were used in Canada in the 1920s and 1930s. However, many treatments that were used in the early 1900s are not used anymore because of the harmful effects radium bromide exposure caused. Some examples of these effects are anaemia, cancer, and genetic mutations. As of 2011, safer gamma emitters such as cobalt-60 (60Co), which is less costly and available in larger quantities, were usually used to replace the historical use of radium in this application, but factors including increasing costs of cobalt and risks of keeping radioactive sources on site have led to an increase in the use of linear particle accelerators for the same applications.
In the U.S., from 1940 through the 1960s, radium was used in nasopharyngeal radium irradiation, a treatment that was administered to children to treat hearing loss and chronic otitis. The procedure was also administered to airmen and submarine crew to treat barotrauma.
Early in the 1900s, biologists used radium to induce mutations and study genetics. As early as 1904, Daniel MacDougal used radium in an attempt to determine whether it could provoke sudden large mutations and cause major evolutionary shifts. Thomas Hunt Morgan used radium to induce changes resulting in white-eyed fruit flies. Nobel-winning biologist Hermann Muller briefly studied the effects of radium on fruit fly mutations before turning to more affordable x-ray experiments.
Production
Uranium had no large-scale application in the late 19th century and therefore no large uranium mines existed. In the beginning, the silver mines in Jáchymov, Austria-Hungary (now the Czech Republic) were the only large sources of uranium ore, and the uranium ore itself was only a byproduct of the silver mining activities.
In the first extraction of radium, Curie used the residues after extraction of uranium from pitchblende. The uranium had been extracted by dissolution in sulfuric acid, leaving radium sulfate, which is similar to barium sulfate but even less soluble, in the residues. The residues also contained rather substantial amounts of barium sulfate which thus acted as a carrier for the radium sulfate. The first steps of the radium extraction process involved boiling with sodium hydroxide, followed by hydrochloric acid treatment to minimize impurities of other compounds. The remaining residue was then treated with sodium carbonate to convert the barium sulfate into barium carbonate (carrying the radium), thus making it soluble in hydrochloric acid. After dissolution, the barium and radium were reprecipitated as sulfates; this was then repeated to further purify the mixed sulfate. Some impurities that form insoluble sulfides were removed by treating the chloride solution with hydrogen sulfide, followed by filtering. When the mixed sulfates were pure enough, they were once more converted to mixed chlorides; barium and radium thereafter were separated by fractional crystallisation while monitoring the progress using a spectroscope (radium gives characteristic red lines in contrast to the green barium lines), and the electroscope.
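The key conversion in this workup, turning the insoluble mixed sulfates into acid-soluble carbonates, can be sketched as follows (radium behaves analogously to barium; the equation is written here for illustration):
RaSO4 + Na2CO3 → RaCO3 + Na2SO4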
After the isolation of radium by Marie and Pierre Curie from uranium ore from Jáchymov, several scientists started to isolate radium in small quantities. Later, small companies purchased mine tailings from Jáchymov mines and started isolating radium. In 1904, the Austrian government nationalised the mines and stopped exporting raw ore. Until 1912, when radium production increased, radium availability was low.
The formation of an Austrian monopoly and the strong urge of other countries to have access to radium led to a worldwide search for uranium ores. The United States took over as leading producer in the early 1910s, producing 70 g total from 1913 to 1920 in Pittsburgh alone.
The Curies' process was still used for industrial radium extraction in 1940, but mixed bromides were then used for the fractionation. If the barium content of the uranium ore is not high enough, additional barium can be added to carry the radium. These processes were applied to high grade uranium ores but may not have worked well with low grade ores. Small amounts of radium were still extracted from uranium ore by this method of mixed precipitation and ion exchange as late as the 1990s, but as of 2011, it is extracted only from spent nuclear fuel. Pure radium metal is isolated by reducing radium oxide with aluminium metal in a vacuum at 1,200 °C.
In 1954, the total worldwide supply of purified radium amounted to about 2.3 kg (5 pounds). Zaire and Canada were briefly the largest producers of radium in the late 1970s. As of 1997 the chief radium-producing countries were Belgium, Canada, the Czech Republic, Slovakia, the United Kingdom, and Russia. The annual production of radium compounds was only about 100 g in total as of 1984; annual production of radium had reduced to less than 100 g by 2018.
Modern applications
Radium is seeing increasing use in the field of atomic, molecular, and optical physics. Symmetry-breaking forces scale steeply with atomic number (roughly as Z³), which makes radium, the heaviest alkaline earth element, well suited for constraining new physics beyond the Standard Model. Some radium isotopes, such as radium-225, have octupole-deformed parity doublets that enhance sensitivity to charge-parity-violating new physics by two to three orders of magnitude compared to 199Hg.
Radium is also a promising candidate for trapped-ion optical clocks. The radium ion has two subhertz-linewidth transitions from the ground state that could serve as the clock transition in an optical clock. A trapped-ion atomic clock based on Ra⁺ has been demonstrated on one of these transitions, and it has been considered for the creation of a transportable optical clock, as all transitions necessary for clock operation can be addressed with direct diode lasers at common wavelengths.
Some of the few practical uses of radium are derived from its radioactive properties. More recently discovered radioisotopes, such as cobalt-60 and caesium-137, are replacing radium in even these limited uses because several of these isotopes are more powerful emitters, safer to handle, and available in more concentrated form.
The isotope 223Ra was approved by the United States Food and Drug Administration in 2013 for use in medicine as a cancer treatment of bone metastasis, in the form of a solution containing radium-223 chloride. The main indication of treatment is the therapy of bony metastases from castration-resistant prostate cancer.
Radium-225 has also been used in experiments concerning therapeutic irradiation, as it is the only reasonably long-lived radium isotope which does not have radon as one of its daughters.
Radium was still used in 2007 as a radiation source in some industrial radiography devices to check for flawed metallic parts, similarly to X-ray imaging. When mixed with beryllium, radium acts as a neutron source. Up until at least 2004, radium–beryllium neutron sources were still sometimes used, but other materials such as polonium and americium have become more common for use in neutron sources. RaBeF4-based (α, n) neutron sources have been deprecated despite the high number of neutrons they emit (1.84 × 10⁶ neutrons per second) in favour of Am–Be sources. The isotope 226Ra is now mainly used to form 227Ac by neutron irradiation in a nuclear reactor.
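The actinium production route is neutron capture on 226Ra followed by a short beta decay; the intermediate half-life (roughly 42 minutes for 227Ra) is quoted from general reference values rather than from this article:
226Ra + n → 227Ra
227Ra → 227Ac + e− + ν̄e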
Hazards
Radium is highly radioactive, as is its immediate decay product, radon gas. When ingested, 80% of the radium leaves the body through the feces, while the other 20% goes into the bloodstream, mostly accumulating in the bones. This is because the body treats radium as calcium and deposits it in the bones, where radioactivity degrades marrow and can mutate bone cells. Exposure to radium, internal or external, can cause cancer and other disorders, because radium and radon emit alpha and gamma rays upon their decay, which kill and mutate cells. Radium is generally considered the most toxic of the radioactive elements.
Some of the biological effects of radium include the first case of "radium-dermatitis", reported in 1900, two years after the element's discovery. The French physicist Antoine Henri Becquerel carried a small ampoule of radium in his waistcoat pocket for six hours and reported that his skin became ulcerated. Pierre Curie attached a tube filled with radium to his arm for ten hours, which resulted in the appearance of a skin lesion, suggesting the use of radium to attack cancerous tissue as it had attacked healthy tissue.
Handling of radium has been blamed for Marie Curie's death due to aplastic anemia, though analysis of her levels of radium exposure done after her death found them within accepted safe levels and attributed her illness and death to her use of radiography. A significant amount of radium's danger comes from its daughter radon, which as a gas can enter the body far more readily than can its parent radium.
Regulation
The first published recommendations for protection against radium and radiation in general were made by the British X-ray and Radium Protection Committee and were adopted internationally in 1928 at the first meeting of the International Commission on Radiological Protection (ICRP), following preliminary guidance written by the Röntgen Society. This meeting led to further developments of radiation protection programs coordinated across all countries represented by the commission.
Exposure to radium is still regulated internationally by the ICRP, alongside the World Health Organization. The International Atomic Energy Agency (IAEA) publishes safety standards and provides recommendations for the handling of and exposure to radium in its works on naturally occurring radioactive materials and the broader International Basic Safety Standards, which are not enforced by the IAEA but are available for adoption by members of the organization. In addition, in efforts to reduce the quantity of old radiotherapy devices that contain radium, the IAEA has worked since 2022 to manage and recycle disused Ra sources.
In several countries, further regulations exist and are applied beyond those recommended by the IAEA and ICRP. For example, in the United States, the Environmental Protection Agency-defined Maximum Contaminant Level for radium is 5 pCi/L for drinking water; at the time of the Manhattan Project in the 1940s, the "tolerance level" for workers was set at 0.1 micrograms of ingested radium. The Occupational Safety and Health Administration does not specifically set exposure limits for radium, and instead limits ionizing radiation exposure in units of roentgen equivalent man based on the exposed area of the body. Radium sources themselves, rather than worker exposures, are regulated more closely by the Nuclear Regulatory Commission, which requires licensing for anyone possessing radium with activity of more than 0.01 μCi. The particular governing bodies that regulate radioactive materials and nuclear energy are documented by the Nuclear Energy Agency for member countries; for instance, in the Republic of Korea, the nation's radiation safety standards are managed by the Korea Radioisotope Institute, established in 1985, and the Korea Institute of Nuclear Safety, established in 1990. The IAEA leads efforts in establishing governing bodies in locations that do not have government regulations on radioactive materials.
| Physical sciences | Chemical elements_2 | null |
25603 | https://en.wikipedia.org/wiki/Rhenium | Rhenium | Rhenium is a chemical element; it has symbol Re and atomic number 75. It is a silvery-gray, heavy, third-row transition metal in group 7 of the periodic table. With an estimated average concentration of 1 part per billion (ppb), rhenium is one of the rarest elements in the Earth's crust. It has one of the highest melting and boiling points of any element. It resembles manganese and technetium chemically and is mainly obtained as a by-product of the extraction and refinement of molybdenum and copper ores. It shows in its compounds a wide variety of oxidation states ranging from −1 to +7.
Rhenium was originally discovered in 1908 by Masataka Ogawa, but he mistakenly assigned it as element 43 rather than element 75 and named it nipponium. It was rediscovered in 1925 by Walter Noddack, Ida Tacke and Otto Berg, who gave it its present name. It was named after the river Rhine in Europe, from which the earliest samples had been obtained and worked commercially.
Nickel-based superalloys of rhenium are used in combustion chambers, turbine blades, and exhaust nozzles of jet engines. These alloys contain up to 6% rhenium, making jet engine construction the largest single use for the element. The second-most important use is as a catalyst: it is an excellent catalyst for hydrogenation and isomerization, and is used for example in catalytic reforming of naphtha for use in gasoline (rheniforming process). Because of the low availability relative to demand, rhenium is expensive, with price reaching an all-time high in 2008–09 of US$10,600 per kilogram (US$4,800 per pound). As of 2018, its price had dropped to US$2,844 per kilogram (US$1,290 per pound) due to increased recycling and a drop in demand for rhenium catalysts.
History
In 1908, Japanese chemist Masataka Ogawa announced that he had discovered the 43rd element and named it nipponium (Np) after Japan (Nippon in Japanese). In fact, he had found element 75 (rhenium) instead of element 43: both elements are in the same group of the periodic table. Ogawa's work was often incorrectly cited, because some of his key results were published only in Japanese; it is likely that his insistence on searching for element 43 prevented him from considering that he might have found element 75 instead. Just before Ogawa's death in 1930, Kenjiro Kimura analysed Ogawa's sample by X-ray spectroscopy at the Imperial University of Tokyo, and said to a friend that "it was beautiful rhenium indeed". He did not reveal this publicly, because under the Japanese university culture before World War II it was frowned upon to point out the mistakes of one's seniors, but the evidence became known to some Japanese news media regardless. As time passed with no repetitions of the experiments or new work on nipponium, Ogawa's claim faded away. The symbol Np was later used for the element neptunium, and the name "nihonium", also named after Japan, along with symbol Nh, was later used for element 113. Element 113 was also discovered by a team of Japanese scientists and was named in respectful homage to Ogawa's work. Today, Ogawa's claim is widely accepted as having been the discovery of element 75 in hindsight.
Rhenium (from Latin Rhenus, meaning "Rhine") received its current name when it was rediscovered by Walter Noddack, Ida Noddack, and Otto Berg in Germany. In 1925 they reported that they had detected the element in platinum ore and in the mineral columbite. They also found rhenium in gadolinite and molybdenite. In 1928 they were able to extract 1 g of the element by processing 660 kg of molybdenite. It was estimated in 1968 that 75% of the rhenium metal in the United States was used for research and the development of refractory metal alloys. It took several years from that point before the superalloys became widely used.
The original mischaracterization by Ogawa in 1908 and final work in 1925 makes rhenium perhaps the last stable element to be understood. Hafnium was discovered in 1923 and all other new elements discovered since then, such as francium, are radioactive.
Characteristics
Rhenium is a silvery-white metal with one of the highest melting points of all elements, exceeded by only tungsten. (At standard pressure carbon sublimes rather than melts, though its sublimation point is comparable to the melting points of tungsten and rhenium.) It also has one of the highest boiling points of all elements, and the highest among stable elements. It is also one of the densest, exceeded only by platinum, iridium and osmium. Rhenium has a hexagonal close-packed crystal structure.
Its usual commercial form is a powder, but this element can be consolidated by pressing and sintering in a vacuum or hydrogen atmosphere. This procedure yields a compact solid having a density above 90% of the density of the metal. When annealed this metal is very ductile and can be bent, coiled, or rolled. Rhenium-molybdenum alloys are superconductive at 10 K; tungsten-rhenium alloys are also superconductive around 4–8 K, depending on the alloy. Rhenium metal itself superconducts below about 1.7 K.
In bulk form and at room temperature and atmospheric pressure, the element resists alkalis, sulfuric acid, hydrochloric acid, nitric acid, and aqua regia. It will, however, react with nitric acid upon heating.
Isotopes
Rhenium has one stable isotope, rhenium-185, which nevertheless occurs in minority abundance, a situation found only in two other elements (indium and tellurium). Naturally occurring rhenium is only 37.4% 185Re, and 62.6% 187Re, which is unstable but has a very long half-life (~10¹⁰ years). A kilogram of natural rhenium emits 1.07 MBq of radiation due to the presence of this isotope. This lifetime can be greatly affected by the charge state of the rhenium atom. The beta decay of 187Re is used for rhenium–osmium dating of ores. The available energy for this beta decay (2.6 keV) is the second lowest known among all radionuclides, only behind the decay from 115In to excited 115Sn* (0.147 keV). The isotope rhenium-186m is notable as being one of the longest lived metastable isotopes with a half-life of around 200,000 years. There are 33 other unstable isotopes that have been recognized, ranging from 160Re to 194Re, the longest-lived of which is 183Re with a half-life of 70 days.
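The quoted bulk activity is consistent with the isotopic abundance and the commonly cited half-life of roughly 4 × 10¹⁰ years for 187Re (a rough check; the half-life figure is a general reference value, not from this article):
N(187Re per kg of natural Re) ≈ (1000 g / 186 g·mol⁻¹) × 0.626 × 6.022 × 10²³ ≈ 2.0 × 10²⁴ atoms
A = (ln 2 / t½) × N ≈ (0.693 / 1.3 × 10¹⁸ s) × 2.0 × 10²⁴ ≈ 1.1 × 10⁶ Bq, in line with the 1.07 MBq stated above.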
Compounds
Rhenium compounds are known for all the oxidation states between −3 and +7 except −2. The oxidation states +7, +4, and +3 are the most common. Rhenium is most available commercially as salts of perrhenate, including sodium and ammonium perrhenates. These are white, water-soluble compounds. The tetrathioperrhenate anion [ReS4]− is also known.
Halides and oxyhalides
The most common rhenium chlorides are ReCl6, ReCl5, ReCl4, and ReCl3. The structures of these compounds often feature extensive Re-Re bonding, which is characteristic of this metal in oxidation states lower than VII. Salts of [Re2Cl8]2− feature a quadruple metal-metal bond. Although the highest rhenium chloride features Re(VI), fluorine gives the d0 Re(VII) derivative rhenium heptafluoride. Bromides and iodides of rhenium are also well known, including rhenium pentabromide and rhenium tetraiodide.
Like tungsten and molybdenum, with which it shares chemical similarities, rhenium forms a variety of oxyhalides. The oxychlorides are most common, and include ReOCl4 and ReOCl3.
Oxides and sulfides
The most common oxide is the volatile yellow Re2O7. The red rhenium trioxide ReO3 adopts a perovskite-like structure. Other oxides include Re2O5, ReO2, and Re2O3. The sulfides are ReS2 and Re2S7. Perrhenate salts can be converted to tetrathioperrhenate by the action of ammonium hydrosulfide.
Other compounds
Rhenium diboride (ReB2) is a hard compound having a hardness similar to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride.
Organorhenium compounds
Dirhenium decacarbonyl is the most common entry to organorhenium chemistry. Its reduction with sodium amalgam gives Na[Re(CO)5] with rhenium in the formal oxidation state −1. Dirhenium decacarbonyl can be oxidised with bromine to bromopentacarbonylrhenium(I):
Re2(CO)10 + Br2 → 2 Re(CO)5Br
Reduction of this pentacarbonyl with zinc and acetic acid gives pentacarbonylhydridorhenium:
Re(CO)5Br + Zn + HOAc → Re(CO)5H + ZnBr(OAc)
Methylrhenium trioxide ("MTO"), CH3ReO3, is a volatile, colourless solid that has been used as a catalyst in some laboratory experiments. It can be prepared by many routes; a typical method is the reaction of Re2O7 and tetramethyltin:
Re2O7 + (CH3)4Sn → CH3ReO3 + (CH3)3SnOReO3
Analogous alkyl and aryl derivatives are known. MTO catalyses oxidations with hydrogen peroxide. Terminal alkynes yield the corresponding acid or ester, internal alkynes yield diketones, and alkenes give epoxides. MTO also catalyses the conversion of aldehydes and diazoalkanes into alkenes.
Nonahydridorhenate
A distinctive derivative of rhenium is nonahydridorhenate, originally thought to be the rhenide anion, Re−, but actually containing the [ReH9]2− anion, in which the oxidation state of rhenium is +7.
Occurrence
Rhenium is one of the rarest elements in Earth's crust with an average concentration of 1 ppb; other sources quote the number of 0.5 ppb making it the 77th most abundant element in Earth's crust. Rhenium is probably not found free in nature (its possible natural occurrence is uncertain), but occurs in amounts up to 0.2% in the mineral molybdenite (which is primarily molybdenum disulfide), the major commercial source, although single molybdenite samples with up to 1.88% have been found. Chile has the world's largest rhenium reserves, part of the copper ore deposits, and was the leading producer as of 2005. It was only recently (in 1994) that the first rhenium mineral was found and described, a rhenium sulfide mineral (ReS2) condensing from a fumarole on Kudriavy volcano, Iturup island, in the Kuril Islands. Kudriavy discharges up to 20–60 kg rhenium per year mostly in the form of rhenium disulfide. Named rheniite, this rare mineral commands high prices among collectors.
Production
Approximately 80% of rhenium is extracted from porphyry molybdenum deposits. Some ores contain 0.001% to 0.2% rhenium. Roasting the ore volatilizes rhenium oxides. Rhenium(VII) oxide and perrhenic acid readily dissolve in water; they are leached from flue dusts and gases and extracted by precipitating with potassium or ammonium chloride as the perrhenate salts, and purified by recrystallization. Total world production is between 40 and 50 tons/year; the main producers are in Chile, the United States, Peru, and Poland. Recycling of used Pt-Re catalyst and special alloys allows the recovery of another 10 tons per year. Prices for the metal rose rapidly in early 2008, from $1000–$2000 per kg in 2003–2006 to over $10,000 in February 2008. The metal form is prepared by reducing ammonium perrhenate with hydrogen at high temperatures:
2 NH4ReO4 + 7 H2 → 2 Re + 8 H2O + 2 NH3
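As a rough illustration of this stoichiometry (one mole of rhenium per mole of ammonium perrhenate), the sketch below estimates the mass of NH4ReO4 consumed per kilogram of rhenium metal; the molar masses are approximate values assumed for the example.

```python
# Stoichiometry sketch (illustrative): mass of NH4ReO4 reduced per kg of Re,
# from 2 NH4ReO4 + 7 H2 -> 2 Re + 8 H2O + 2 NH3 (1 mol Re per mol NH4ReO4).
M_RE = 186.21                                          # g/mol, approximate
M_NH4REO4 = 14.01 + 4 * 1.008 + 186.21 + 4 * 16.00     # g/mol, approximate

moles_re = 1000 / M_RE                  # mol of Re in 1 kg of metal
mass_perrhenate_g = moles_re * M_NH4REO4

print(f"{mass_perrhenate_g:.0f} g of NH4ReO4 per kg of Re")   # roughly 1.44 kg
```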
There are technologies for the associated extraction of rhenium from productive solutions of underground leaching of uranium ores.
Applications
Rhenium is added to high-temperature superalloys that are used to make jet engine parts, using 70% of the worldwide rhenium production. Another major application is in platinum–rhenium catalysts, which are primarily used in making lead-free, high-octane gasoline.
Alloys
The nickel-based superalloys have improved creep strength with the addition of rhenium. The alloys normally contain 3% or 6% of rhenium. Second-generation alloys contain 3%; these alloys were used in the engines for the F-15 and F-16, whereas the newer single-crystal third-generation alloys contain 6% of rhenium; they are used in the F-22 and F-35 engines. Rhenium is also used in the superalloys, such as CMSX-4 (2nd gen) and CMSX-10 (3rd gen) that are used in industrial gas turbine engines like the GE 7FA. Rhenium can cause superalloys to become microstructurally unstable, forming undesirable topologically close packed (TCP) phases. In 4th- and 5th-generation superalloys, ruthenium is used to avoid this effect. Among others the new superalloys are EPM-102 (with 3% Ru) and TMS-162 (with 6% Ru), as well as TMS-138 and TMS-174.
For 2006, the consumption is given as 28% for General Electric, 28% Rolls-Royce plc and 12% Pratt & Whitney, all for superalloys, whereas the use for catalysts only accounts for 14% and the remaining applications use 18%. In 2006, 77% of rhenium consumption in the United States was in alloys. The rising demand for military jet engines and the constant supply made it necessary to develop superalloys with a lower rhenium content. For example, the newer CFM International CFM56 high-pressure turbine (HPT) blades will use Rene N515 with a rhenium content of 1.5% instead of Rene N5 with 3%.
Rhenium improves the properties of tungsten. Tungsten-rhenium alloys are more ductile at low temperature, allowing them to be more easily machined. The high-temperature stability is also improved. The effect increases with the rhenium concentration, and therefore tungsten alloys are produced with up to 27% of Re, which is the solubility limit. Tungsten-rhenium wire was originally created in efforts to develop a wire that was more ductile after recrystallization. This allows the wire to meet specific performance objectives, including superior vibration resistance, improved ductility, and higher resistivity. One application for the tungsten-rhenium alloys is X-ray sources. The high melting point of both elements, together with their high atomic mass, makes them stable against the prolonged electron impact. Rhenium tungsten alloys are also applied as thermocouples to measure temperatures up to 2200 °C.
The high temperature stability, low vapor pressure, good wear resistance and ability to withstand arc corrosion of rhenium are useful in self-cleaning electrical contacts. In particular, the discharge that occurs during electrical switching oxidizes the contacts. However, rhenium oxide Re2O7 is volatile (sublimes at ~360 °C) and therefore is removed during the discharge.
Rhenium has a high melting point and a low vapor pressure similar to tantalum and tungsten. Therefore, rhenium filaments exhibit a higher stability if the filament is operated not in vacuum, but in oxygen-containing atmosphere. Those filaments are widely used in mass spectrometers, ion gauges and photoflash lamps in photography.
Catalysts
Rhenium in the form of rhenium-platinum alloy is used as catalyst for catalytic reforming, which is a chemical process to convert petroleum refinery naphthas with low octane ratings into high-octane liquid products. Worldwide, 30% of catalysts used for this process contain rhenium. The olefin metathesis is the other reaction for which rhenium is used as catalyst. Normally Re2O7 on alumina is used for this process. Rhenium catalysts are very resistant to chemical poisoning from nitrogen, sulfur and phosphorus, and so are used in certain kinds of hydrogenation reactions.
Other uses
The isotopes 186Re and 188Re are radioactive and are used for treatment of liver cancer. They both have similar penetration depth in tissue (5 mm for 186Re and 11 mm for 188Re), but 186Re has the advantage of a longer half-life (90 hours vs. 17 hours).
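The practical difference between the two half-lives can be illustrated with a simple exponential-decay comparison; the sketch below uses only the 90-hour and 17-hour values quoted above and is not taken from any clinical protocol.

```python
# Sketch: fraction of the initial activity remaining for the two medical isotopes,
# using the half-lives quoted above (90 h for 186Re, 17 h for 188Re).
import math

def fraction_remaining(hours, half_life_h):
    return math.exp(-math.log(2) * hours / half_life_h)

for t in (24, 48, 96):   # hours after administration
    print(f"{t} h: 186Re {fraction_remaining(t, 90):.2f}, "
          f"188Re {fraction_remaining(t, 17):.3f}")
```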
188Re is also being used experimentally in a novel treatment of pancreatic cancer where it is delivered by means of the bacterium Listeria monocytogenes. The 188Re isotope is also used for the rhenium-SCT (skin cancer therapy). The treatment uses the isotope's properties as a beta emitter for brachytherapy in the treatment of basal cell carcinoma and squamous cell carcinoma of the skin.
Related by periodic trends, rhenium has a similar chemistry to that of technetium; work done to label rhenium onto target compounds can often be translated to technetium. This is useful for radiopharmacy, where it is difficult to work with technetium – especially the technetium-99m isotope used in medicine – due to its expense and short half-life.
Rhenium is used in manufacturing high precision equipment like gyroscopes. Its high density, mechanical stability and corrosion resistance characteristics ensure the equipment's durability and precise performance in demanding conditions. Rhenium cathodes are also used for their stability and precision in spectral analysis.
Rhenium is used in aerospace, nuclear, and electronic industries, and it shows potential for application in medical instrumentation. In the rocket industry, it is used in engine components for booster rockets. Additionally, rhenium was employed in the SP-100 program due to its low-temperature ductility.
Rhenium's stiffness and high melting point make it a common gasket material for high-pressure experiments in diamond anvil cells.
Precautions
Very little is known about the toxicity of rhenium and its compounds because they are used in very small amounts. Soluble salts, such as the rhenium halides or perrhenates, could be hazardous due to elements other than rhenium or due to rhenium itself. Only a few compounds of rhenium have been tested for their acute toxicity; two examples are potassium perrhenate and rhenium trichloride, which were injected as a solution into rats. The perrhenate had an LD50 value of 2800 mg/kg after seven days (this is very low toxicity, similar to that of table salt) and the rhenium trichloride showed LD50 of 280 mg/kg.
| Physical sciences | Chemical elements_2 | null |
25604 | https://en.wikipedia.org/wiki/Radon | Radon | Radon is a chemical element; it has symbol Rn and atomic number 86. It is a radioactive noble gas and is colorless and odorless. Of the three naturally occurring radon isotopes, only 222Rn has a sufficiently long half-life (3.825 days) for it to be released from the soil and rock where it is generated. Radon isotopes are the immediate decay products of radium isotopes. The instability of 222Rn, its most stable isotope, makes radon one of the rarest elements. Radon will be present on Earth for several billion more years despite its short half-life, because it is constantly being produced as a step in the decay chains of 238U and 232Th, both of which are abundant radioactive nuclides with half-lives of at least several billion years. The decay of radon produces many other short-lived nuclides, known as "radon daughters", ending at stable isotopes of lead. 222Rn occurs in significant quantities as a step in the normal radioactive decay chain of 238U, also known as the uranium series, which slowly decays into a variety of radioactive nuclides and eventually decays into stable 206Pb. 220Rn occurs in minute quantities as an intermediate step in the decay chain of 232Th, also known as the thorium series, which eventually decays into stable 208Pb.
Radon was discovered in 1899 by Ernest Rutherford and Robert B. Owens at McGill University in Montreal, and was the fifth radioactive element to be discovered. First known as "emanation", the radioactive gas was identified during experiments with radium, thorium oxide, and actinium by Friedrich Ernst Dorn, Rutherford and Owens, and André-Louis Debierne, respectively, and each element's emanation was considered to be a separate substance: radon, thoron, and actinon. Sir William Ramsay and Robert Whytlaw-Gray considered that the radioactive emanations may contain a new element of the noble gas family, and isolated "radium emanation" in 1909 to determine its properties. In 1911, the element Ramsay and Whytlaw-Gray isolated was accepted by the International Commission for Atomic Weights, and in 1923, the International Committee for Chemical Elements and the International Union of Pure and Applied Chemistry (IUPAC) chose radon as the accepted name for the element's most stable isotope, Rn; thoron and actinon were also recognized by IUPAC as distinct isotopes of the element.
Under standard conditions, radon is gaseous and can be easily inhaled, posing a health hazard. However, the primary danger comes not from radon itself, but from its decay products, known as radon daughters. These decay products, often existing as single atoms or ions, can attach themselves to airborne dust particles. Although radon is a noble gas and does not adhere to lung tissue (meaning it is often exhaled before decaying), the radon daughters attached to dust are more likely to stick to the lungs. This increases the risk of harm, as the radon daughters can cause damage to lung tissue. Radon and its daughters are, taken together, often the single largest contributor to an individual's background radiation dose, but due to local differences in geology, the level of exposure to radon gas differs by location. A common source of environmental radon is uranium-containing minerals in the ground; it therefore accumulates in subterranean areas such as basements. Radon can also occur in ground water, such as spring waters and hot springs. Radon trapped in permafrost may be released by climate-change-induced thawing of permafrosts, and radon may also be released into groundwater and the atmosphere following seismic events leading to earthquakes, which has led to its investigation in the field of earthquake prediction. It is possible to test for radon in buildings, and to use techniques such as sub-slab depressurization for mitigation.
Epidemiological studies have shown a clear association between breathing high concentrations of radon and incidence of lung cancer. Radon is a contaminant that affects indoor air quality worldwide. According to the United States Environmental Protection Agency (EPA), radon is the second most frequent cause of lung cancer, after cigarette smoking, causing 21,000 lung cancer deaths per year in the United States. About 2,900 of these deaths occur among people who have never smoked. While radon is the second most frequent cause of lung cancer, it is the number one cause among non-smokers, according to EPA policy-oriented estimates. Significant uncertainties exist for the health effects of low-dose exposures.
Characteristics
Physical properties
Radon is a colorless, odorless, and tasteless gas and therefore is not detectable by human senses alone. At standard temperature and pressure, it forms a monatomic gas with a density of 9.73 kg/m3, about 8 times the density of the Earth's atmosphere at sea level, 1.217 kg/m3. It is one of the densest gases at room temperature (a few are denser, e.g. CF3(CF2)2CF3 and WF6) and is the densest of the noble gases. Although colorless at standard temperature and pressure, when cooled below its freezing point it emits a brilliant radioluminescence that turns from yellow to orange-red as the temperature lowers. Upon condensation, it glows because of the intense radiation it produces. It is sparingly soluble in water, but more soluble than lighter noble gases. It is appreciably more soluble in organic liquids than in water. Its solubility in a given solvent follows a relation of the form ln x = A + B/T, where x is the molar fraction of radon, T is the absolute temperature, and A and B are solvent constants.
Chemical properties
Radon is a member of the zero-valence elements that are called noble gases, and is chemically not very reactive. The 3.8-day half-life of Rn makes it useful in physical sciences as a natural tracer. Because radon is a gas at standard conditions, unlike its decay-chain parents, it can readily be extracted from them for research.
It is inert to most common chemical reactions, such as combustion, because the outer valence shell contains eight electrons. This produces a stable, minimum energy configuration in which the outer electrons are tightly bound. Its first ionization energy—the minimum energy required to extract one electron from it—is 1037 kJ/mol. In accordance with periodic trends, radon has a lower electronegativity than the element one period before it, xenon, and is therefore more reactive. Early studies concluded that the stability of radon hydrate should be of the same order as that of the hydrates of chlorine (Cl2) or sulfur dioxide (SO2), and significantly higher than the stability of the hydrate of hydrogen sulfide (H2S).
Because of its cost and radioactivity, experimental chemical research is seldom performed with radon, and as a result there are very few reported compounds of radon, all either fluorides or oxides. Radon can be oxidized by powerful oxidizing agents such as fluorine, thus forming radon difluoride (RnF2). It decomposes back to its elements on heating, and is reduced by water to radon gas and hydrogen fluoride; it may also be reduced back to its elements by hydrogen gas. It has a low volatility and was thought to be ionic. Because of the short half-life of radon and the radioactivity of its compounds, it has not been possible to study the compound in any detail. Theoretical studies on this molecule predict that it should have a Rn–F bond distance of 2.08 ångströms (Å), and that the compound is thermodynamically more stable and less volatile than its lighter counterpart xenon difluoride (XeF2). The octahedral molecule RnF6 was predicted to have an even lower enthalpy of formation than the difluoride. The [RnF]+ ion is believed to form by the following reaction:
Rn (g) + 2 [O2][SbF6] (s) → [RnF][Sb2F11] (s) + 2 O2 (g)
For this reason, fluorinating agents such as antimony pentafluoride and chlorine trifluoride have been considered for radon gas removal in uranium mines due to the formation of radon–fluorine compounds. Radon compounds can be formed by the decay of radium in radium halides, a reaction that has been used to reduce the amount of radon that escapes from targets during irradiation. Additionally, several salts of the [RnF]+ cation with fluoroanions are known. Radon is also oxidised by dioxygen difluoride to RnF2 at low temperatures.
Radon oxides are among the few other reported compounds of radon; only the trioxide (RnO3) has been confirmed. The higher fluorides RnF4 and RnF6 have been claimed and are calculated to be stable, but their identification is unclear. They may have been observed in experiments where unknown radon-containing products distilled together with xenon hexafluoride: these may have been RnF4, RnF6, or both. Trace-scale heating of radon with xenon, fluorine, bromine pentafluoride, and either sodium fluoride or nickel fluoride was claimed to produce a higher fluoride as well, which hydrolysed to form RnO3. While it has been suggested that these claims were really due to radon precipitating out as the solid complex [RnF]2[NiF6], the fact that radon coprecipitates from aqueous solution in these experiments has been taken as confirmation that RnO3 was formed, which has been supported by further studies of the hydrolysed solution. That [RnO3F]− did not form in other experiments may have been due to the high concentration of fluoride used. Electromigration studies also suggest the presence of cationic [HRnO3]+ and anionic [HRnO4]− forms of radon in weakly acidic aqueous solution (pH > 5), the procedure having previously been validated by examination of the homologous xenon trioxide.
The decay technique has also been used. Avrorin et al. reported in 1982 that 212Fr compounds cocrystallised with their caesium analogues appeared to retain chemically bound radon after electron capture; analogies with xenon suggested the formation of RnO3, but this could not be confirmed.
It is likely that the difficulty in identifying higher fluorides of radon stems from radon being kinetically hindered from being oxidised beyond the divalent state because of the strong ionicity of radon difluoride (RnF2) and the high positive charge on radon in RnF+; spatial separation of RnF2 molecules may be necessary to clearly identify higher fluorides of radon, of which RnF4 is expected to be more stable than RnF6 due to spin–orbit splitting of the 6p shell of radon (RnIV would have a closed-shell configuration of the 6s and 6p1/2 orbitals). Therefore, while RnF4 should have a similar stability to xenon tetrafluoride (XeF4), RnF6 would likely be much less stable than xenon hexafluoride (XeF6): radon hexafluoride would also probably be a regular octahedral molecule, unlike the distorted octahedral structure of XeF6, because of the inert pair effect. Because radon is quite electropositive for a noble gas, it is possible that radon fluorides actually take on highly fluorine-bridged structures and are not volatile. Extrapolation down the noble gas group would suggest also the possible existence of RnO, RnO2, and RnOF4, as well as the first chemically stable noble gas chlorides RnCl2 and RnCl4, but none of these have yet been found.
Radon carbonyl (RnCO) has been predicted to be stable and to have a linear molecular geometry. Molecules such as RnXe were found to be significantly stabilized by spin-orbit coupling. Radon caged inside a fullerene has been proposed as a drug for tumors. Despite the existence of Xe(VIII), no Rn(VIII) compounds have been claimed to exist; RnF8 should be highly unstable chemically (XeF8 is thermodynamically unstable). It is predicted that the most stable Rn(VIII) compound would be barium perradonate (Ba2RnO6), analogous to barium perxenate. The instability of Rn(VIII) is due to the relativistic stabilization of the 6s shell, also known as the inert pair effect.
Radon reacts with the liquid halogen fluorides ClF, ClF3, ClF5, BrF3, BrF5, and IF7 to form RnF2. In halogen fluoride solution, radon is nonvolatile and exists as the RnF+ and Rn2+ cations; addition of fluoride anions results in the formation of anionic fluoride complexes, paralleling the chemistry of beryllium(II) and aluminium(III). The standard electrode potential of the Rn2+/Rn couple has been estimated as +2.0 V, although there is no evidence for the formation of stable radon ions or compounds in aqueous solution.
Isotopes
Radon has no stable isotopes. Thirty-nine radioactive isotopes have been characterized, with mass numbers ranging from 193 to 231. Six of them, from 217 to 222 inclusive, occur naturally. The most stable isotope is 222Rn (half-life 3.82 days), which is a decay product of 226Ra, the latter being itself a decay product of 238U. A trace amount of the (highly unstable) isotope 218Rn (half-life about 35 milliseconds) is also among the daughters of 222Rn. The isotope 218Rn would also be produced by the double beta decay of natural 218Po; while energetically possible, this process has however never been seen.
Three other radon isotopes have a half-life of over an hour: 211Rn (about 15 hours), 210Rn (2.4 hours) and 224Rn (about 1.8 hours). However, none of these three occur naturally. 220Rn, also called thoron, is a natural decay product of the most stable thorium isotope (232Th). It has a half-life of 55.6 seconds and also emits alpha radiation. Similarly, 219Rn is derived from the most stable isotope of actinium (227Ac)—named "actinon"—and is an alpha emitter with a half-life of 3.96 seconds.
Daughters
222Rn belongs to the radium and uranium-238 decay chain, and has a half-life of 3.8235 days. Its first four products (excluding marginal decay schemes) are very short-lived, meaning that the corresponding disintegrations are indicative of the initial radon distribution. Its decay goes through the following sequence:
222Rn, 3.82 days, alpha decaying to...
218Po, 3.10 minutes, alpha decaying to...
214Pb, 26.8 minutes, beta decaying to...
214Bi, 19.9 minutes, beta decaying to...
214Po, 0.1643 ms, alpha decaying to...
210Pb, which has a much longer half-life of 22.3 years, beta decaying to...
210Bi, 5.013 days, beta decaying to...
210Po, 138.376 days, alpha decaying to...
206Pb, stable.
The radon equilibrium factor is the ratio between the activity of all short-period radon progenies (which are responsible for most of radon's biological effects), and the activity that would be at equilibrium with the radon parent.
If a closed volume is constantly supplied with radon, the concentration of short-lived isotopes will increase until an equilibrium is reached where the overall decay rate of the decay products equals that of the radon itself. The equilibrium factor is 1 when both activities are equal, meaning that the decay products have stayed close to the radon parent long enough for the equilibrium to be reached, within a couple of hours. Under these conditions, each additional pCi/L of radon will increase exposure by 0.01 working level (WL, a measure of radioactivity commonly used in mining). These conditions are not always met; in many homes, the equilibrium factor is typically 40%; that is, there will be 0.004 WL of daughters for each pCi/L of radon in the air. 210Pb takes much longer to come into equilibrium with radon, dependent on environmental factors, but if the environment permits accumulation of dust over extended periods of time, 210Pb and its decay products may contribute to overall radiation levels as well. Several studies on the radioactive equilibrium of elements in the environment find it more useful to use the ratio of other 222Rn decay products with 210Pb, such as 210Po, in measuring overall radiation levels.
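How quickly the short-lived daughters approach equilibrium with a constant radon supply can be illustrated numerically. The sketch below is not from the source: it integrates the decay sequence listed above with a simple forward-Euler scheme, holds the radon activity fixed at 1 Bq, and omits 214Po because its sub-millisecond half-life means its activity tracks that of 214Bi almost instantly.

```python
# Illustrative sketch: growth of the short-lived 222Rn daughters toward secular
# equilibrium in a closed volume with a constant radon activity.
import math

HALF_LIVES_S = {                      # half-lives from the decay sequence above
    "Rn-222": 3.8235 * 24 * 3600,
    "Po-218": 3.10 * 60,
    "Pb-214": 26.8 * 60,
    "Bi-214": 19.9 * 60,
}
LAMBDA = {name: math.log(2) / t for name, t in HALF_LIVES_S.items()}

def daughter_activity_ratios(hours, radon_activity_bq=1.0, dt=1.0):
    """Activity of each short-lived daughter divided by the (constant) radon activity."""
    chain = ["Po-218", "Pb-214", "Bi-214"]
    atoms = {name: 0.0 for name in chain}
    radon_atoms = radon_activity_bq / LAMBDA["Rn-222"]   # atoms giving the chosen activity
    for _ in range(int(hours * 3600 / dt)):
        feed = LAMBDA["Rn-222"] * radon_atoms            # radon held constant (resupplied)
        for name in chain:
            decays = LAMBDA[name] * atoms[name]
            atoms[name] += (feed - decays) * dt          # explicit Euler step
            feed = decays                                # this nuclide feeds the next one
    return {name: LAMBDA[name] * atoms[name] / radon_activity_bq for name in chain}

for h in (0.5, 1, 2, 3):
    print(f"{h} h:", {k: round(v, 3) for k, v in daughter_activity_ratios(h).items()})
# The ratios approach 1 within a few hours, matching the statement above.
```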
Because of their electrostatic charge, radon progenies adhere to surfaces or dust particles, whereas gaseous radon does not. Attachment removes them from the air, usually causing the equilibrium factor in the atmosphere to be less than 1. The equilibrium factor is also lowered by air circulation or air filtration devices, and is increased by airborne dust particles, including cigarette smoke. The equilibrium factor found in epidemiological studies is 0.4.
History and etymology
Radon was discovered in 1899 by Ernest Rutherford and Robert B. Owens at McGill University in Montreal. It was the fifth radioactive element to be discovered, after uranium, thorium, radium, and polonium. In 1899, Pierre and Marie Curie observed that the gas emitted by radium remained radioactive for a month. Later that year, Rutherford and Owens noticed variations when trying to measure radiation from thorium oxide. Rutherford noticed that the compounds of thorium continuously emit a radioactive gas that remains radioactive for several minutes, and called this gas "emanation" (from , to flow out, and , expiration), and later "thorium emanation" ("Th Em"). In 1900, Friedrich Ernst Dorn reported some experiments in which he noticed that radium compounds emanate a radioactive gas he named "radium emanation" ("Ra Em"). In 1901, Rutherford and Harriet Brooks demonstrated that the emanations are radioactive, but credited the Curies for the discovery of the element. In 1903, similar emanations were observed from actinium by André-Louis Debierne, and were called "actinium emanation" ("Ac Em").
Several shortened names were soon suggested for the three emanations: exradio, exthorio, and exactinio in 1904; radon (Ro), thoron (To), and akton or acton (Ao) in 1918; radeon, thoreon, and actineon in 1919, and eventually radon, thoron, and actinon in 1920. (The name radon is not related to that of the Austrian mathematician Johann Radon.) The likeness of the spectra of these three gases with those of argon, krypton, and xenon, and their observed chemical inertia led Sir William Ramsay to suggest in 1904 that the "emanations" might contain a new element of the noble-gas family.
In 1909, Ramsay and Robert Whytlaw-Gray isolated radon and determined its melting temperature and approximate density. In 1910, they determined that it was the heaviest known gas. They wrote that "the expression 'radium emanation' is very awkward" and suggested the new name niton (Nt) (from the Latin nitens, shining) to emphasize the radioluminescence property, and in 1912 it was accepted by the International Commission for Atomic Weights. In 1923, the International Committee for Chemical Elements and International Union of Pure and Applied Chemistry (IUPAC) chose the name of the most stable isotope, radon, as the name of the element. The isotopes thoron and actinon were later renamed 220Rn and 219Rn. This has caused some confusion in the literature regarding the element's discovery, as while Dorn had discovered radon the isotope, he was not the first to discover radon the element.
As late as the 1960s, the element was also referred to simply as emanation. The first synthesized compound of radon, radon fluoride, was obtained in 1962. Even today, the word radon may refer to either the element or its isotope 222Rn, with thoron remaining in use as a short name for 220Rn to stem this ambiguity. The name actinon for 219Rn is rarely encountered today, probably due to the short half-life of that isotope.
The danger of high exposure to radon in mines, where exposures can reach 1,000,000 Bq/m3, has long been known. In 1530, Paracelsus described a wasting disease of miners, the mala metallorum, and Georg Agricola recommended ventilation in mines to avoid this mountain sickness (Bergsucht). In 1879, this condition was identified as lung cancer by Harting and Hesse in their investigation of miners from Schneeberg, Germany. The first major studies with radon and health occurred in the context of uranium mining in the Joachimsthal region of Bohemia. In the US, studies and mitigation only followed decades of health effects on uranium miners of the Southwestern US employed during the early Cold War; standards were not implemented until 1971.
In the early 20th century in the US, gold contaminated with the radon daughter 210Pb entered the jewelry industry. This was from gold brachytherapy seeds that had held 222Rn, which were melted down after the radon had decayed.
The presence of radon in indoor air was documented as early as 1950. Beginning in the 1970s, research was initiated to address sources of indoor radon, determinants of concentration, health effects, and mitigation approaches. In the US, the problem of indoor radon received widespread publicity and intensified investigation after a widely publicized incident in 1984. During routine monitoring at a Pennsylvania nuclear power plant, a worker was found to be contaminated with radioactivity. A high concentration of radon in his home was subsequently identified as responsible.
Occurrence
Concentration units
Discussions of radon concentrations in the environment refer to 222Rn, the decay product of uranium and radium. While the average rate of production of 220Rn (from the thorium decay series) is about the same as that of 222Rn, the amount of 220Rn in the environment is much less than that of 222Rn because of the short half-life of 220Rn (55 seconds, versus 3.8 days for 222Rn).
Radon concentration in the atmosphere is usually measured in becquerel per cubic meter (Bq/m3), the SI derived unit. Another unit of measurement common in the US is picocuries per liter (pCi/L); 1 pCi/L = 37 Bq/m3. Typical domestic exposures average about 48 Bq/m3 indoors, though this varies widely, and 15 Bq/m3 outdoors.
In the mining industry, the exposure is traditionally measured in working level (WL), and the cumulative exposure in working level month (WLM); 1 WL equals any combination of short-lived 222Rn daughters (218Po, 214Pb, 214Bi, and 214Po) in 1 liter of air that releases 1.3 × 10^5 MeV of potential alpha energy; 1 WL is equivalent to 2.08 × 10^−5 joules per cubic meter of air (J/m3). The SI unit of cumulative exposure is expressed in joule-hours per cubic meter (J·h/m3). One WLM is equivalent to 3.6 × 10^−3 J·h/m3. An exposure to 1 WL for 1 working-month (170 hours) equals 1 WLM cumulative exposure. The International Commission on Radiological Protection recommends an annual limit of 4.8 WLM for miners. Assuming 2000 hours of work per year, this corresponds to a concentration of 1500 Bq/m3.
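The relationships among these units can be made concrete with a short conversion sketch. It is not an official formula; it simply encodes the figures quoted above (1 pCi/L = 37 Bq/m3, 0.01 WL per pCi/L of radon at an equilibrium factor of 1, and a 170-hour working month).

```python
# Conversion sketch based on the figures quoted above (not an official formula).
def pci_per_l_to_bq_per_m3(pci_per_l):
    return pci_per_l * 37.0

def working_level(radon_pci_per_l, equilibrium_factor=1.0):
    return 0.01 * radon_pci_per_l * equilibrium_factor

def working_level_months(wl, hours):
    return wl * hours / 170.0          # 170-hour working month

radon = 4.0                                        # pCi/L, example indoor concentration
print(pci_per_l_to_bq_per_m3(radon), "Bq/m3")      # 148.0 Bq/m3
print(working_level(radon, 0.4), "WL")             # 0.016 WL at a typical indoor equilibrium factor
print(working_level_months(working_level(radon, 0.4), 2000), "WLM")   # ~0.19 WLM over 2000 h
```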
222Rn decays to 210Pb and other radioisotopes. The levels of 210Pb can be measured. The rate of deposition of this radioisotope is weather-dependent.
Radon concentrations found in natural environments are much too low to be detected by chemical means. A 1,000 Bq/m3 (relatively high) concentration corresponds to 0.17 picogram per cubic meter (pg/m3). The average concentration of radon in the atmosphere is about 6 × 10^−18 as a mole fraction, or about 150 atoms in each milliliter of air. The radon activity of the entire Earth's atmosphere originates from only a few tens of grams of radon, consistently replaced by decay of larger amounts of radium, thorium, and uranium.
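The mass figure quoted for a 1,000 Bq/m3 concentration can be reproduced from the half-life of 222Rn. The sketch below is illustrative only; the half-life and molar mass it uses are standard values, not taken from the paragraph above.

```python
# Sketch: mass of 222Rn whose activity is 1,000 Bq, via N = A / lambda and m = N * M / N_A.
import math

AVOGADRO = 6.022e23
HALF_LIFE_S = 3.8235 * 24 * 3600     # 222Rn half-life in seconds
MOLAR_MASS_G = 222.0                 # g/mol, approximate

activity_bq = 1000.0                 # activity in one cubic metre of air
atoms = activity_bq / (math.log(2) / HALF_LIFE_S)
mass_pg = atoms * MOLAR_MASS_G / AVOGADRO * 1e12

print(f"{mass_pg:.2f} pg")           # about 0.18 pg, close to the quoted 0.17 pg/m3
```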
Natural
Radon is produced by the radioactive decay of radium-226, which is found in uranium ores, phosphate rock, shales, igneous and metamorphic rocks such as granite, gneiss, and schist, and to a lesser degree, in common rocks such as limestone. Every square mile of surface soil, to a depth of 6 inches (2.6 km2 to a depth of 15 cm), contains about 1 gram of radium, which releases radon in small amounts to the atmosphere. It is estimated that 2.4 billion curies (90 EBq) of radon are released from soil annually worldwide.
Radon concentration can differ widely from place to place. In the open air, it ranges from 1 to 100 Bq/m3, even less (0.1 Bq/m3) above the ocean. In caves or ventilated mines, or poorly ventilated houses, its concentration climbs to 20–2,000 Bq/m3.
Radon concentration can be much higher in mining contexts. Ventilation regulations require that the radon concentration in uranium mines be kept under the "working level", with 95th percentile levels ranging up to nearly 3 WL (546 pCi of radon per liter of air; 20.2 kBq/m3, measured from 1976 to 1985).
The concentration in the air at the (unventilated) Gastein Healing Gallery averages 43 kBq/m3 (1.2 nCi/L) with a maximum value of 160 kBq/m3 (4.3 nCi/L).
Radon mostly appears with the radium/uranium series (decay chain) (222Rn), and marginally with the thorium series (220Rn). The element emanates naturally from the ground, and some building materials, all over the world, wherever traces of uranium or thorium are found, and particularly in regions with soils containing granite or shale, which have a higher concentration of uranium. Not all granitic regions are prone to high emissions of radon. Being a rare gas, it usually migrates freely through faults and fragmented soils, and may accumulate in caves or water. Owing to its very short half-life (four days for 222Rn), radon concentration decreases very quickly when the distance from the production area increases. Radon concentration varies greatly with season and atmospheric conditions. For instance, it has been shown to accumulate in the air if there is a meteorological inversion and little wind.
High concentrations of radon can be found in some spring waters and hot springs. The towns of Boulder, Montana; Misasa; Bad Kreuznach, Germany; and the country of Japan have radium-rich springs that emit radon. To be classified as a radon mineral water, radon concentration must be above 2 nCi/L (74 kBq/m3). The activity of radon mineral water reaches 2 MBq/m3 in Merano and 4 MBq/m3 in Lurisia (Italy).
Natural radon concentrations in the Earth's atmosphere are so low that radon-rich water in contact with the atmosphere will continually lose radon by volatilization. Hence, ground water has a higher concentration of 222Rn than surface water, because radon is continuously produced by radioactive decay of 226Ra present in rocks. Likewise, the saturated zone of a soil frequently has a higher radon content than the unsaturated zone because of diffusional losses to the atmosphere.
In 1971, Apollo 15 passed above the Aristarchus plateau on the Moon, and detected a significant rise in alpha particles thought to be caused by the decay of 222Rn. The presence of 222Rn has been inferred later from data obtained from the Lunar Prospector alpha particle spectrometer.
Radon is found in some petroleum. Because radon has a similar pressure and temperature curve to propane, and oil refineries separate petrochemicals based on their boiling points, the piping carrying freshly separated propane in oil refineries can become contaminated because of decaying radon and its products.
Residues from the petroleum and natural gas industry often contain radium and its daughters. The sulfate scale from an oil well can be radium rich, while the water, oil, and gas from a well often contains radon. Radon decays to form solid radioisotopes that form coatings on the inside of pipework.
Accumulation in buildings
Measurement of radon levels in the first decades of its discovery was mainly done to determine the presence of radium and uranium in geological surveys. In 1956, most likely the first indoor survey of radon decay products was performed in Sweden, with the intent of estimating the public exposure to radon and its decay products. From 1975 up until 1984, small studies in Sweden, Austria, the United States and Norway aimed to measure radon indoors and in metropolitan areas.
High concentrations of radon in homes were discovered by chance in 1984 after the stringent radiation testing conducted at the new Limerick Generating Station nuclear power plant in Montgomery County, Pennsylvania, United States revealed that Stanley Watras, a construction engineer at the plant, was contaminated by radioactive substances even though the reactor had never been fueled and Watras had been decontaminated each evening. It was determined that radon levels in his home's basement were in excess of 100,000 Bq/m3 (2.7 nCi/L); he was told that living in the home was the equivalent of smoking 135 packs of cigarettes a day, and he and his family had increased their risk of developing lung cancer by 13 or 14 percent. The incident dramatized the fact that radon levels in particular dwellings can occasionally be orders of magnitude higher than typical. Since the incident in Pennsylvania, millions of short-term radon measurements have been taken in homes in the United States. Outside the United States, radon measurements are typically performed over the long term.
In the United States, typical domestic exposures are approximately 100 Bq/m3 (2.7 pCi/L) indoors. Some level of radon will be found in all buildings. Radon mostly enters a building directly from the soil through the lowest level in the building that is in contact with the ground. High levels of radon in the water supply can also increase indoor radon air levels. Typical entry points of radon into buildings are cracks in solid foundations and walls, construction joints, gaps in suspended floors and around service pipes, cavities inside walls, and the water supply. Radon concentrations in the same place may vary by a factor of two over the course of an hour, and the concentration in one room of a building may be significantly different from the concentration in an adjoining room.
The distribution of radon concentrations will generally differ from room to room, and the readings are averaged according to regulatory protocols. Indoor radon concentration is usually assumed to follow a log-normal distribution on a given territory. Thus, the geometric mean is generally used for estimating the "average" radon concentration in an area. The mean concentration ranges from less than 10 Bq/m3 to over 100 Bq/m3 in some European countries.
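The difference between the geometric and arithmetic mean matters for skewed, roughly log-normal data like indoor radon readings. The sketch below uses made-up readings (not survey data) to show how a single very high measurement dominates the arithmetic mean but barely moves the geometric mean.

```python
# Illustrative sketch with hypothetical readings (not survey data): the geometric mean
# is less sensitive to a few very high values than the arithmetic mean.
import math

readings_bq_m3 = [22, 35, 18, 60, 41, 27, 950, 33]   # hypothetical indoor readings

arithmetic_mean = sum(readings_bq_m3) / len(readings_bq_m3)
geometric_mean = math.exp(sum(math.log(x) for x in readings_bq_m3) / len(readings_bq_m3))

print(f"arithmetic mean: {arithmetic_mean:.0f} Bq/m3")   # pulled up by the single 950 reading
print(f"geometric mean:  {geometric_mean:.0f} Bq/m3")    # closer to a 'typical' dwelling
```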
Some of the highest radon hazard in the US is found in Iowa and in the Appalachian Mountain areas in southeastern Pennsylvania. Iowa has the highest average radon concentrations in the US due to significant glaciation that ground the granitic rocks from the Canadian Shield and deposited it as soils making up the rich Iowa farmland. Many cities within the state, such as Iowa City, have passed requirements for radon-resistant construction in new homes. The second highest readings in Ireland were found in office buildings in the Irish town of Mallow, County Cork, prompting local fears regarding lung cancer.
Since radon is a colorless, odorless gas, the only way to know how much is present in the air or water is to perform tests. In the US, radon test kits are available to the public at retail stores, such as hardware stores, for home use, and testing is available through licensed professionals, who are often home inspectors. Efforts to reduce indoor radon levels are called radon mitigation. In the US, the EPA recommends all houses be tested for radon. In the UK, under the Housing Health & Safety Rating System, property owners have an obligation to evaluate potential risks and hazards to health and safety in a residential property. Alpha-radiation monitoring over the long term is a method of testing for radon that is more common in countries outside the United States.
Industrial production
Radon is obtained as a by-product of uraniferous ore processing after transferring into 1% solutions of hydrochloric or hydrobromic acids. The gas mixture extracted from the solutions contains He and Rn along with hydrogen, oxygen, and hydrocarbons. The mixture is purified by passing it over heated copper to remove the hydrogen and the oxygen, and then KOH and P2O5 are used to remove the acids and moisture by sorption. Radon is condensed by liquid nitrogen and purified from residue gases by sublimation.
Radon commercialization is regulated, but it is available in small quantities for the calibration of 222Rn measurement systems. In 2008 it was priced per milliliter of radium solution (which only contains about 15 picograms of actual radon at any given moment). Radon is produced commercially by a solution of radium-226 (half-life of 1,600 years). Radium-226 decays by alpha-particle emission, producing radon that collects over samples of radium-226 at a rate of about 1 mm3/day per gram of radium; equilibrium is quickly achieved and radon is produced in a steady flow, with an activity equal to that of the radium (50 Bq). Gaseous 222Rn (half-life of about four days) escapes from the capsule through diffusion.
Applications
Medical
Hormesis
An early-20th-century form of quackery was the treatment of maladies in a radiotorium. It was a small, sealed room for patients to be exposed to radon for its "medicinal effects". The carcinogenic nature of radon due to its ionizing radiation became apparent later. Radon's molecule-damaging radioactivity has been used to kill cancerous cells, but it does not increase the health of healthy cells. The ionizing radiation causes the formation of free radicals, which results in cell damage, causing increased rates of illness, including cancer.
Exposure to radon has been suggested to mitigate autoimmune diseases such as arthritis in a process known as radiation hormesis. As a result, in the late 20th century and early 21st century, "health mines" established in Basin, Montana, attracted people seeking relief from health problems such as arthritis through limited exposure to radioactive mine water and radon. The practice is discouraged because of the well-documented ill effects of high doses of radiation on the body.
Radioactive water baths have been applied since 1906 in Jáchymov, Czech Republic, but even before radon discovery they were used in Bad Gastein, Austria. Radium-rich springs are also used in traditional Japanese onsen in Misasa, Tottori Prefecture. Drinking therapy is applied in Bad Brambach, Germany, and during the early 20th century, water from springs with radon in them was bottled and sold (this water had little to no radon in it by the time it got to consumers due to radon's short half-life). Inhalation therapy is carried out in Gasteiner-Heilstollen, Austria; Świeradów-Zdrój, Czerniawa-Zdrój, Kowary, Lądek-Zdrój, Poland; Harghita Băi, Romania; and Boulder, Montana. In the US and Europe, there are several "radon spas", where people sit for minutes or hours in a high-radon atmosphere, such as at Bad Schmiedeberg, Germany.
Nuclear medicine
Radon has been produced commercially for use in radiation therapy, but for the most part has been replaced by radionuclides made in particle accelerators and nuclear reactors. Radon has been used in implantable seeds, made of gold or glass, primarily used to treat cancers, known as brachytherapy. The gold seeds were produced by filling a long tube with radon pumped from a radium source, the tube being then divided into short sections by crimping and cutting. The gold layer keeps the radon within, and filters out the alpha and beta radiations, while allowing the gamma rays to escape (which kill the diseased tissue). The activities might range from 0.05 to 5 millicuries per seed (2 to 200 MBq). The gamma rays are produced by radon and the first short-lived elements of its decay chain (218Po, 214Pb, 214Bi, 214Po).
After 11 half-lives (42 days), radon radioactivity is at 1/2,048 of its original level. At this stage, the predominant residual activity of the seed originates from the radon decay product 210Pb, whose half-life (22.3 years) is 2,000 times that of radon and its descendants 210Bi and 210Po.
211Rn can be used to generate 211At, which has uses in targeted alpha therapy.
Scientific
Radon emanation from the soil varies with soil type and with surface uranium content, so outdoor radon concentrations can be used to track air masses to a limited degree. Because of radon's rapid loss to air and comparatively rapid decay, radon is used in hydrologic research that studies the interaction between groundwater and streams. Any significant concentration of radon in a river may be an indicator that there are local inputs of groundwater.
Radon soil concentration has been used to map buried close-subsurface geological faults because concentrations are generally higher over the faults. Similarly, it has found some limited use in prospecting for geothermal gradients.
Some researchers have investigated changes in groundwater radon concentrations for earthquake prediction. Increases in radon were noted before the 1966 Tashkent and 1994 Mindoro earthquakes. Radon has a half-life of approximately 3.8 days, which means that it can be found only shortly after it has been produced in the radioactive decay chain. For this reason, it has been hypothesized that increases in radon concentration are due to the generation of new cracks underground, which would allow increased groundwater circulation, flushing out radon. The generation of new cracks might not unreasonably be assumed to precede major earthquakes. In the 1970s and 1980s, scientific measurements of radon emissions near faults found that earthquakes often occurred with no radon signal, and radon was often detected with no earthquake to follow. It was then dismissed by many as an unreliable indicator. As of 2009, it was under investigation as a possible earthquake precursor by NASA; further research into the subject has suggested that abnormalities in atmospheric radon concentrations can be an indicator of seismic movement.
Radon is a known pollutant emitted from geothermal power stations because it is present in the material pumped from deep underground. It disperses rapidly, and no radiological hazard has been demonstrated in various investigations. In addition, typical systems re-inject the material deep underground rather than releasing it at the surface, so its environmental impact is minimal. In 1989, a survey of the collective dose received due to radon in geothermal fluids was measured at 2 man-sieverts per gigawatt-year of electricity produced, in comparison to the 2.5 man-sieverts per gigawatt-year produced from 14C emissions in nuclear power plants.
In the 1940s and 1950s, radon produced from a radium source was used for industrial radiography. Other radiography sources such as 60Co and 192Ir became available after World War II and quickly replaced radium and thus radon for this purpose, being of lower cost and hazard.
Health risks
In mines
222Rn decay products have been classified by the International Agency for Research on Cancer as being carcinogenic to humans, and because radon is a gas that can be inhaled, lung cancer is a particular concern for people exposed to elevated levels of radon for sustained periods. During the 1940s and 1950s, when safety standards requiring expensive ventilation in mines were not widely implemented, radon exposure was linked to lung cancer among non-smoking miners of uranium and other hard rock materials in what is now the Czech Republic, and later among miners from the Southwestern US and South Australia. Despite these hazards being known in the early 1950s, this occupational hazard remained poorly managed in many mines until the 1970s. During this period, several entrepreneurs opened former uranium mines in the US to the general public and advertised alleged health benefits from breathing radon gas underground. Health benefits claimed included relief from pain, sinus problems, asthma, and arthritis, but the government banned such advertisements in 1975, and subsequent works have debated the truth of such claimed health effects, citing the documented ill effects of radiation on the body.
Since that time, ventilation and other measures have been used to reduce radon levels in most affected mines that continue to operate. In recent years, the average annual exposure of uranium miners has fallen to levels similar to the concentrations inhaled in some homes. This has reduced the risk of occupationally-induced cancer from radon, although health issues may persist for those who are currently employed in affected mines and for those who have been employed in them in the past. As the relative risk for miners has decreased, so has the ability to detect excess risks among that population.
Residues from processing of uranium ore can also be a source of radon. Radon resulting from the high radium content in uncovered dumps and tailing ponds can be easily released into the atmosphere and affect people living in the vicinity. The release of radon may be mitigated by covering tailings with soil or clay, though other decay products may leach into groundwater supplies.
Non-uranium mines may pose higher risks of radon exposure, as workers are not continuously monitored for radiation, and regulations specific to uranium mines do not apply. A review of radon level measurements across non-uranium mines found the highest concentrations of radon in non-metal mines, such as phosphorus and salt mines. However, older or abandoned uranium mines without ventilation may still have extremely high radon levels.
In addition to lung cancer, researchers have theorized a possible increased risk of leukemia due to radon exposure. Empirical support from studies of the general population is inconsistent; a study of uranium miners found a correlation between radon exposure and chronic lymphocytic leukemia, and current research supports a link between indoor radon exposure and poor health outcomes (i.e., an increased risk of lung cancer or childhood leukemia). Legal actions taken by those involved in nuclear industries, including miners, millers, transporters, nuclear site workers, and their respective unions have resulted in compensation for those affected by radon and radiation exposure under programs such as the compensation scheme for radiation-linked diseases (in the United Kingdom) and the Radiation Exposure Compensation Act (in the United States).
Domestic-level exposure
Radon has been considered the second leading cause of lung cancer in the United States and leading environmental cause of cancer mortality by the EPA, with the first one being smoking. Others have reached similar conclusions for the United Kingdom and France. Radon exposure in buildings may arise from subsurface rock formations and certain building materials (e.g., some granites). The greatest risk of radon exposure arises in buildings that are airtight, insufficiently ventilated, and have foundation leaks that allow air from the soil into basements and dwelling rooms. In some regions, such as Niška Banja, Serbia and Ullensvang, Norway, outdoor radon concentrations may be exceptionally high, though compared to indoors, where people spend more time and air is not dispersed and exchanged as often, outdoor exposure to radon is not considered a significant health risk.
Radon exposure (mostly radon daughters) has been linked to lung cancer in case-control studies performed in the US, Europe and China. There are approximately 21,000 deaths per year in the US (0.0063% of a population of 333 million) due to radon-induced lung cancers. In Europe, 2% of all cancers have been attributed to radon; in Slovenia in particular, a country with a high concentration of radon, about 120 people (0.0057% of a population of 2.11 million) die yearly because of radon. One of the most comprehensive radon studies performed in the US by epidemiologist R. William Field and colleagues found a 50% increased lung cancer risk even at the protracted exposures at the EPA's action level of 4 pCi/L. North American and European pooled analyses further support these findings. However, the conclusion that exposure to low levels of radon leads to elevated risk of lung cancer has been disputed, and analyses of the literature point towards elevated risk only when radon accumulates indoors and at levels above 100 Bq/m3.
Thoron (220Rn) is less studied than 222Rn in regard to domestic exposure due to its shorter half-life. However, it has been measured at comparatively high concentrations in buildings with earthen architecture, such as traditional half-timbered houses and modern houses with clay wall finishes, and in regions with thorium- and monazite-rich soil and sand. Thoron is a minor contributor to the overall radiation dose received due to indoor radon exposure, and can interfere with 222Rn measurements when not taken into account.
Action and reference level
WHO presented in 2009 a recommended reference level (the national reference level), 100 Bq/m3, for radon in dwellings. The recommendation also says that where this is not possible, 300 Bq/m3 should be selected as the highest level. A national reference level should not be a limit, but should represent the maximum acceptable annual average radon concentration in a dwelling.
The actionable concentration of radon in a home varies depending on the organization doing the recommendation, for example, the EPA encourages that action be taken at concentrations as low as 74 Bq/m3 (2 pCi/L), and the European Union recommends action be taken when concentrations reach 400 Bq/m3 (11 pCi/L) for old houses and 200 Bq/m3 (5 pCi/L) for new ones. On 8 July 2010, the UK's Health Protection Agency issued new advice setting a "Target Level" of 100 Bq/m3 whilst retaining an "Action Level" of 200 Bq/m3. Similar levels (as in the UK) are published by Norwegian Radiation and Nuclear Safety Authority (DSA) with the maximum limit for schools, kindergartens, and new dwellings set at 200 Bq/m3, where 100 Bq/m3 is set as the action level.
Inhalation and smoking
Results from epidemiological studies indicate that the risk of lung cancer increases with exposure to residential radon. A well-known source of error in such studies is smoking, the main risk factor for lung cancer. In the US, cigarette smoking is estimated to cause 80% to 90% of all lung cancers.
According to the EPA, the risk of lung cancer for smokers is significant due to synergistic effects of radon and smoking. For this population about 62 people in a total of 1,000 will die of lung cancer compared to 7 people in a total of 1,000 for people who have never smoked. It cannot be excluded that the risk for non-smokers is explained primarily by an effect of radon.
Radon, like other known or suspected external risk factors for lung cancer, is a threat for smokers and former smokers. This was demonstrated by the European pooling study. A commentary to the pooling study stated: "it is not appropriate to talk simply of a risk from radon in homes. The risk is from smoking, compounded by a synergistic effect of radon for smokers. Without smoking, the effect seems to be so small as to be insignificant."
According to the European pooling study, there is a difference in risk for the histological subtypes of lung cancer and radon exposure. Small-cell lung carcinoma, which has a high correlation with smoking, has a higher risk after radon exposure. For other histological subtypes such as adenocarcinoma, the type that primarily affects non-smokers, the risk from radon appears to be lower.
A study of radiation from post-mastectomy radiotherapy shows that the simple models previously used to assess the combined and separate risks from radiation and smoking need to be developed. This is also supported by new discussion about the calculation method, the linear no-threshold model, which routinely has been used.
A study from 2001, which included 436 non-smokers with lung cancer and a control group of 1649 non-smokers without lung cancer, showed that exposure to radon increased the risk of lung cancer in non-smokers. The group that had been exposed to tobacco smoke in the home appeared to have a much higher risk, while those who were not exposed to passive smoking did not show any increased risk with increasing radon exposure.
Absorption and ingestion from water
The effects of radon if ingested are unknown, although studies have found that its biological half-life ranges from 30 to 70 minutes, with 90% removal at 100 minutes. In 1999, the US National Research Council investigated the issue of radon in drinking water. The risk associated with ingestion was considered almost negligible; water from underground sources may contain significant amounts of radon depending on the surrounding rock and soil conditions, whereas surface sources generally do not. Radon is also released from water when the temperature is increased, the pressure is decreased, or the water is aerated. In domestic settings, the greatest release of radon from water, and hence exposure, occurs during showering. Water with a radon concentration of 10,000 pCi/L can increase the indoor airborne radon concentration by 1 pCi/L under normal conditions. However, the concentration of radon released from contaminated groundwater to the air has been measured at 5 orders of magnitude less than the original concentration in water.
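The 10,000-to-1 water-to-air transfer figure quoted above gives a simple rule of thumb; the sketch below merely encodes that ratio and is not a regulatory formula.

```python
# Rule-of-thumb sketch (not a regulatory formula): roughly 10,000 pCi/L of radon in
# the water supply adds about 1 pCi/L to indoor air under normal conditions.
def indoor_air_contribution_pci_l(water_radon_pci_l, transfer_ratio=1e-4):
    return water_radon_pci_l * transfer_ratio

print(indoor_air_contribution_pci_l(5000))   # ~0.5 pCi/L from fairly radon-rich well water
```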
Radon at the ocean surface exchanges with the atmosphere, transferring 222Rn across the air-sea interface. Although the areas tested were very shallow, additional measurements in a wide variety of coastal regimes should help define the nature of the 222Rn observed.
Testing and mitigation
There are relatively simple tests for radon gas. In some countries these tests are done methodically in areas of known systematic hazards. Radon detection devices are commercially available. Digital radon detectors provide ongoing measurements, giving daily, weekly, short-term and long-term average readouts via a digital display. Short-term radon test devices used for initial screening purposes are inexpensive, and in some cases free. There are important protocols for taking short-term radon tests, and it is imperative that they be strictly followed. The kit includes a collector that the user hangs in the lowest habitable floor of the house for two to seven days. The user then sends the collector to a laboratory for analysis. Long-term kits, taking collections for up to one year or more, are also available. An open-land test kit can test radon emissions from the land before construction begins. Radon concentrations can vary daily, and accurate radon exposure estimates require long-term average radon measurements in the spaces where an individual spends a significant amount of time.
Radon levels fluctuate naturally, due to factors like transient weather conditions, so an initial test might not be an accurate assessment of a home's average radon level. Radon levels are at a maximum during the coolest part of the day when pressure differentials are greatest. Therefore, a high result (over 4 pCi/L) justifies repeating the test before undertaking more expensive abatement projects. Measurements between 4 and 10 pCi/L warrant a long-term radon test. Measurements over 10 pCi/L warrant only another short-term test so that abatement measures are not unduly delayed. The EPA has advised purchasers of real estate to delay or decline a purchase if the seller has not successfully abated radon to 4 pCi/L or less.
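The screening logic just described can be summarised as a small decision rule. The sketch below is illustrative only and is not an EPA-published procedure; in particular, how a reading of exactly 4 pCi/L is handled is an assumption.
    def follow_up_recommendation(short_term_pci_per_l):
        """Suggested follow-up after an initial short-term radon screening, per the guidance above."""
        if short_term_pci_per_l <= 4.0:
            return "below the 4 pCi/L guideline; no immediate follow-up required"
        if short_term_pci_per_l <= 10.0:
            return "repeat with a long-term test before considering abatement"
        return "repeat with another short-term test so abatement is not unduly delayed"

    print(follow_up_recommendation(6.2))  # long-term retest suggested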
Because the half-life of radon is only 3.8 days, removing or isolating the source will greatly reduce the hazard within a few weeks. Another method of reducing radon levels is to modify the building's ventilation. Generally, the indoor radon concentrations increase as ventilation rates decrease. In a well-ventilated place, the radon concentration tends to align with outdoor values (typically 10 Bq/m3, ranging from 1 to 100 Bq/m3).
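As a rough illustration of why a few weeks suffice once the source is removed, the remaining fraction of a fixed radon inventory follows simple exponential decay with the 3.8-day half-life. The sketch below assumes no further radon enters the space; the function name is illustrative.
    RADON_HALF_LIFE_DAYS = 3.8  # approximate half-life of radon-222

    def fraction_remaining(days):
        """Fraction of an initial radon inventory left after `days`, assuming no new influx."""
        return 0.5 ** (days / RADON_HALF_LIFE_DAYS)

    # After three weeks only about 2% of the original radon remains.
    print(fraction_remaining(21))  # ~0.022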
The four principal ways of reducing the amount of radon accumulating in a house are:
Sub-slab depressurization (soil suction) by increasing under-floor ventilation;
Improving the ventilation of the house and avoiding the transport of radon from the basement into living rooms;
Installing a radon sump system in the basement;
Installing a positive pressurization or positive supply ventilation system.
According to the EPA, the method to reduce radon "...primarily used is a vent pipe system and fan, which pulls radon from beneath the house and vents it to the outside", which is also called sub-slab depressurization, active soil depressurization, or soil suction. Generally indoor radon can be mitigated by sub-slab depressurization and exhausting such radon-laden air to the outdoors, away from windows and other building openings. "[The] EPA generally recommends methods which prevent the entry of radon. Soil suction, for example, prevents radon from entering your home by drawing the radon from below the home and venting it through a pipe, or pipes, to the air above the home where it is quickly diluted" and the "EPA does not recommend the use of sealing alone to reduce radon because, by itself, sealing has not been shown to lower radon levels significantly or consistently".
Positive-pressure ventilation systems can be combined with a heat exchanger to recover energy in the process of exchanging air with the outside, and simply exhausting basement air to the outside is not necessarily a viable solution as this can actually draw radon gas into a dwelling. Homes built on a crawl space may benefit from a radon collector installed under a "radon barrier" (a sheet of plastic that covers the crawl space). For crawl spaces, the EPA states that "[a]n effective method to reduce radon levels in crawl space homes involves covering the earth floor with a high-density plastic sheet. A vent pipe and fan are used to draw the radon from under the sheet and vent it to the outdoors. This form of soil suction is called submembrane suction, and when properly applied is the most effective way to reduce radon levels in crawl space homes."
| Physical sciences | Chemical elements_2 | null |
25657 | https://en.wikipedia.org/wiki/Roman%20numerals | Roman numerals | Roman numerals are a numeral system that originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers are written with combinations of letters from the Latin alphabet, each with a fixed integer value. The modern style uses only these seven: I (1), V (5), X (10), L (50), C (100), D (500) and M (1,000).
The use of Roman numerals continued long after the decline of the Roman Empire. From the 14th century on, Roman numerals began to be replaced by Arabic numerals; however, this process was gradual, and the use of Roman numerals persisted in various places, including on clock faces. For instance, on the clock of Big Ben (designed in 1852), the hours from 1 to 12 are written as: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII.
The notations IV and IX can be read as "one less than five" (4) and "one less than ten" (9), although there is a tradition favouring the representation of "4" as "IIII" on Roman numeral clocks.
Other common uses include year numbers on monuments and buildings and copyright dates on the title screens of films and television programmes. MCM, signifying "a thousand, and a hundred less than another thousand", means 1900, so 1912 is written MCMXII. For the years of the current (21st) century, MM indicates 2000; this year is ().
Description
Roman numerals use different symbols for each power of ten, and there is no zero symbol, in contrast with the place value notation of Arabic numerals (in which place-keeping zeros enable the same digit to represent different powers of ten).
This allows some flexibility in notation, and there has never been an official or universally accepted standard for Roman numerals. Usage varied greatly in ancient Rome and became thoroughly chaotic in medieval times. The more recent restoration of a largely "classical" notation has gained popularity among some, while variant forms are used by some modern writers as seeking more "flexibility". Roman numerals may be considered legally binding expressions of a number, as in U.S. Copyright law (where an "incorrect" or ambiguous numeral may invalidate a copyright claim or affect the termination date of the copyright period).
Standard form
The following table displays how Roman numerals are usually written:
The numerals for 4 (IV) and 9 (IX) are written using subtractive notation, where the smaller symbol (I) is subtracted from the larger one (V, or X), thus avoiding the clumsier IIII and VIIII. Subtractive notation is also used for 40 (XL), 90 (XC), 400 (CD) and 900 (CM). These are the only subtractive forms in standard use.
A number containing two or more decimal digits is built by appending the Roman numeral equivalent for each, from highest to lowest, as in the following examples:
  39 = XXX + IX = XXXIX.
 246 = CC + XL + VI = CCXLVI.
 789 = DCC + LXXX + IX = DCCLXXXIX.
2,421 = MM + CD + XX + I = MMCDXXI.
Any missing place (represented by a zero in the place-value equivalent) is omitted, as in Latin (and English) speech:
 160 = C + LX = CLX
 207 = CC + VII = CCVII
1,009 = M + IX = MIX
1,066 = M + LX + VI = MLXVI
The largest number that can be represented in this manner is 3,999 (MMMCMXCIX), but this is sufficient for the values for which Roman numerals are commonly used today, such as year numbers:
1776 = M + DCC + LXX + VI = MDCCLXXVI (the date written on the book held by the Statue of Liberty).
1918 = M + CM + X + VIII = MCMXVIII (the first year of the Spanish flu pandemic)
1944 = M + CM + XL + IV = MCMXLIV (erroneous copyright notice of the 1954 movie The Last Time I Saw Paris)
= (this year)
Prior to the introduction of Arabic numerals in the West, ancient and medieval users of Roman numerals used various means to write larger numbers; see Large numbers below.
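The standard form described above can be generated mechanically: work through the value-symbol pairs from largest to smallest, including the six subtractive pairs, and append each symbol as many times as it fits. The following Python sketch is one minimal way to do this for the usual range of 1 to 3,999; it is an illustration, not a canonical algorithm.
    PAIRS = [
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
    ]

    def to_roman(n):
        """Convert an integer from 1 to 3,999 into a standard-form Roman numeral."""
        if not 1 <= n <= 3999:
            raise ValueError("standard form covers 1 to 3,999 only")
        out = []
        for value, symbol in PAIRS:
            count, n = divmod(n, value)
            out.append(symbol * count)
        return "".join(out)

    # Examples matching the text: 39 -> XXXIX, 2,421 -> MMCDXXI, 1,066 -> MLXVI
    print(to_roman(39), to_roman(2421), to_roman(1066))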
Other forms
Forms exist that vary in one way or another from the general standard represented above.
Other additive forms
While subtractive notation for 4, 40 and 400 (IV, XL and CD) has been the usual form since Roman times, additive notation to represent these numbers (IIII, XXXX and CCCC) continued to be used, including in compound numbers like 24 (XXIIII), 74 (LXXIIII), and 490 (CCCCLXXXX). The additive forms for 9, 90, and 900 (VIIII, LXXXX, and DCCCC) have also been used, although less often.
The two conventions could be mixed in the same document or inscription, even in the same numeral. For example, on the numbered gates to the Colosseum, IIII is systematically used instead of IV, but subtractive notation is used for XL; consequently, gate 44 is labelled XLIIII.
Especially on tombstones and other funerary inscriptions, 5 and 50 have been occasionally written IIIII and XXXXX instead of V and L, and there are instances such as IIIIII and XXXXXX rather than VI or LX.
Modern clock faces that use Roman numerals still very often use IIII for four o'clock but IX for nine o'clock, a practice that goes back to very early clocks such as the Wells Cathedral clock of the late 14th century. However, this is far from universal: for example, the clock on the Palace of Westminster tower (commonly known as Big Ben) uses a subtractive IV for 4 o'clock.
Several monumental inscriptions created in the early 20th century use variant forms for "1900" (usually written MCM). These vary from MDCCCCX for 1910, as seen on Admiralty Arch, London, to the more unusual, if not unique, MDCDIII for 1903 on the north entrance to the Saint Louis Art Museum.
Other subtractive forms
There are numerous historical examples of IIX being used for 8; for example, XIIX was used by officers of the XVIII Roman Legion to write their number. The notation appears prominently on the cenotaph of their senior centurion Marcus Caelius ( – 9 AD). On the publicly displayed official Roman calendars known as Fasti, XIIX is used for the 18 days to the next Kalends, and XXIIX for the 28 days in February. The latter can be seen on the sole extant pre-Julian calendar, the Fasti Antiates Maiores.
There are historical examples of other subtractive forms: IIIXX for 17, IIXX for 18, IIIC for 97, IIC for 98, and IC for 99. A possible explanation is that the word for 18 in Latin is duodeviginti, literally "two from twenty", while 98 is duodecentum (two from hundred) and 99 is undecentum (one from hundred). However, the explanation does not seem to apply to IIIXX and IIIC, since the Latin words for 17 and 97 were septendecim (seven ten) and nonaginta septem (ninety seven), respectively.
The ROMAN() function in Microsoft Excel supports multiple subtraction modes depending on the "Form" setting. For example, the number "499" (usually CDXCIX) can be rendered as LDVLIV, XDIX, VDIV or ID. The relevant Microsoft help page offers no explanation for this function other than to describe its output as "more concise".
Non-standard variants
There are also historical examples of other additive and multiplicative forms, and forms which seem to reflect spoken phrases. Some of these variants may have been regarded as errors even by contemporaries.
IIXX was how people associated with the XXII Roman Legion used to write their number. The practice may have been due to a common way of saying "twenty-second" in Latin, namely duo et vicesima (literally "two and twentieth") rather than the "regular" vicesima secunda (twenty second). Apparently, at least one ancient stonecutter mistakenly thought that the IIXX of "22nd Legion" stood for 18, and "corrected" it to XVIII.
There are some examples of year numbers after 1000 written as two Roman numerals 1–99, e.g. 1613 as XVIXIII, corresponding to the common reading "sixteen thirteen" of such year numbers in English, or 1519 as XVCXIX as in French quinze-cent-dix-neuf (fifteen-hundred and nineteen), and similar readings in other languages.
In some French texts from the 15th century and later, one finds constructions like IIIIXXXIX for 99, reflecting the French reading of that number as quatre-vingt-dix-neuf (four-score and nineteen). Similarly, in some English documents one finds, for example, 77 written as "iiixxxvii" (which could be read "three-score and seventeen").
A medieval accounting text from 1301 renders numbers like 13,573 as "XIII. M. V. C. III. XX. XIII", that is, "13×1000 + 5×100 + 3×20 + 13".
Other numerals that do not fit the usual patterns – such as VXL for 45, instead of the usual XLV — may be due to scribal errors, or the writer's lack of familiarity with the system, rather than being genuine variant usage.
Non-numeric combinations
As Roman numerals are composed of ordinary alphabetic characters, there may sometimes be confusion with other uses of the same letters. For example, "XXX" and "XL" have other connotations in addition to their values as Roman numerals, while "IXL" more often than not is a gramogram of "I excel", and is in any case not an unambiguous Roman numeral.
Zero
As a non-positional numeral system, Roman numerals have no "place-keeping" zeros. Furthermore, the system as used by the Romans lacked a numeral for the number zero itself (that is, what remains after 1 is subtracted from 1). The word nulla (the Latin word meaning "none") was used to represent 0, although the earliest attested instances are medieval. For instance Dionysius Exiguus used nulla alongside Roman numerals in a manuscript from 525 AD. About 725, Bede or one of his colleagues used the letter N, the initial of nulla or of nihil (the Latin word for "nothing"), for 0, in a table of epacts, all written in Roman numerals.
The use of N to indicate "none" long survived in the historic apothecaries' system of measurement: it was used well into the 20th century to designate quantities in pharmaceutical prescriptions.
In later times, the Arabic numeral "0" has been used as a zero to open enumerations with Roman numbers. Examples include the 24-hour Shepherd Gate Clock from 1852 and tarot packs such as the 15th-century Sola Busca and the 20th century Rider–Waite packs.
Fractions
The base "Roman fraction" is , indicating . The use of (as in to indicate 7) is attested in some ancient inscriptions and also in the now rare apothecaries' system (usually in the form ): but while Roman numerals for whole numbers are essentially decimal, does not correspond to , as one might expect, but .
The Romans used a duodecimal rather than a decimal system for fractions, as the divisibility of twelve makes it easier to handle the common fractions of 1/3 and 1/4 than does a system based on ten. Notation for fractions other than 1/2 is mainly found on surviving Roman coins, many of which had values that were duodecimal fractions of the unit as. Fractions less than 1/2 are indicated by a dot (·) for each uncia "twelfth", the source of the English words inch and ounce; dots are repeated for fractions up to five twelfths. Six twelfths (one half) is S for semis "half". Uncia dots were added to S for fractions from seven to eleven twelfths, just as tallies were added to V for whole numbers from six to nine. The arrangement of the dots was variable and not necessarily linear. Five dots arranged like (⁙) (as on the face of a die) are known as a quincunx, from the name of the Roman fraction/coin. The Latin words sextans and quadrans are the source of the English words sextant and quadrant.
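The dot-and-S scheme just described can be expressed as a small rule: up to five twelfths are written with uncia dots alone, six twelfths is S, and seven to eleven twelfths are S followed by the extra dots. The Python sketch below is a simplified illustration; it writes the dots in a line, whereas actual inscriptions arranged them variably.
    def roman_twelfths(n):
        """Notation for a fraction of n twelfths (0 < n < 12), with dots written linearly."""
        if not 0 < n < 12:
            raise ValueError("only proper duodecimal fractions are handled here")
        if n < 6:
            return "\u00b7" * n           # one uncia dot per twelfth
        return "S" + "\u00b7" * (n - 6)   # S (semis, one half) plus any extra dots

    # 5/12 -> five dots, 6/12 -> "S", 11/12 -> "S" followed by five dots
    print(roman_twelfths(5), roman_twelfths(6), roman_twelfths(11))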
Each fraction from 1/12 to 12/12 had a name in Roman times; these corresponded to the names of the related coins:
Other Roman fractional notations included the following:
Large numbers
The Romans developed two main ways of writing large numbers, the and the , further extended in various ways in later times.
Apostrophus
Using the apostrophus method, 500 is written as IↃ, while 1,000 is written as CIↃ. This system of encasing numbers to denote thousands (imagine the Cs and Ↄs as parentheses) had its origins in Etruscan numeral usage.
Each additional set of C and Ↄ surrounding CIↃ raises the value by a factor of ten: CCIↃↃ represents 10,000 and CCCIↃↃↃ represents 100,000. Similarly, each additional Ↄ to the right of IↃ raises the value by a factor of ten: IↃↃ represents 5,000 and IↃↃↃ represents 50,000. Numerals larger than CCCIↃↃↃ do not occur.
IↃ = 500    CIↃ = 1,000
IↃↃ = 5,000    CCIↃↃ = 10,000
IↃↃↃ = 50,000    CCCIↃↃↃ = 100,000
Sometimes CIↃ (1,000) is reduced to ↀ; IↃↃ (5,000) to ↁ; CCIↃↃ (10,000) to ↂ; IↃↃↃ (50,000) to ↇ; and CCCIↃↃↃ (100,000) to ↈ. It is likely that IↃ (500) reduced to D and CIↃ (1,000) influenced the later M.
John Wallis is often credited with introducing the symbol for infinity (∞), and one conjecture is that he based it on ↀ, since 1,000 was hyperbolically used to represent very large numbers.
Vinculum
Using the vinculum, conventional Roman numerals are multiplied by 1,000 by adding a "bar" or "overline", thus:
I̅V̅ = 4,000
X̅X̅V̅ = 25,000
The vinculum came into use in the late Republic, and it was a common alternative to the apostrophic ↀ during the Imperial era around the Roman world (M for '1000' was not in use until the Medieval period). It continued in use in the Middle Ages, though it became known more commonly as titulus, and it appears in modern editions of classical and medieval Latin texts.
In an extension of the vinculum, a three-sided box (now sometimes printed as two vertical lines and a vinculum) is used to multiply by 100,000, thus:
p. = 1,332,000 paces (1,332 Roman miles).
notation is distinct from the custom of adding an overline to a numeral simply to indicate that it is a number. Both usages can be seen on Roman inscriptions of the same period and general location, such as on the Antonine Wall.
Origin
The system is closely associated with the ancient city-state of Rome and the Empire that it created. However, due to the scarcity of surviving examples, the origins of the system are obscure and there are several competing theories, all largely conjectural.
Etruscan numerals
Rome was founded sometime between 850 and 750 BC. At the time, the region was inhabited by diverse populations of which the Etruscans were the most advanced. The ancient Romans themselves admitted that the basis of much of their civilization was Etruscan. Rome itself was located next to the southern edge of the Etruscan domain, which covered a large part of north-central Italy.
The Roman numerals, in particular, are directly derived from the Etruscan number symbols: 𐌠, 𐌡, 𐌢, 𐌣, and 𐌟 for 1, 5, 10, 50, and 100 (they had more symbols for larger numbers, but it is unknown which symbol represents which number). As in the basic Roman system, the Etruscans wrote the symbols that added to the desired number, from higher to lower value. Thus, the number 87, for example, would be written 50 + 10 + 10 + 10 + 5 + 1 + 1 = 𐌣𐌢𐌢𐌢𐌡𐌠𐌠 (this would appear as 𐌠𐌠𐌡𐌢𐌢𐌢𐌣 since Etruscan was written from right to left.)
The symbols and resembled letters of the Etruscan alphabet, but , , and did not. The Etruscans used the subtractive notation, too, but not like the Romans. They wrote 17, 18, and 19 as 𐌠𐌠𐌠𐌢𐌢, 𐌠𐌠𐌢𐌢, and 𐌠𐌢𐌢, mirroring the way they spoke those numbers ("three from twenty", etc.); and similarly for 27, 28, 29, 37, 38, etc. However, they did not write 𐌠𐌡 for 4 (nor 𐌢𐌣 for 40), and wrote 𐌡𐌠𐌠, 𐌡𐌠𐌠𐌠 and 𐌡𐌠𐌠𐌠𐌠 for 7, 8, and 9, respectively.
Early Roman numerals
The early Roman numerals for 1, 10, and 100 were the Etruscan ones: , , and . The symbols for 5 and 50 changed from and to and at some point. The latter had flattened to (an inverted T) by the time of Augustus, and soon afterwards became identified with the graphically similar letter .
The symbol for 100 was written variously as or , and was then abbreviated to or , with (which matched the Latin letter C) finally winning out. It might have helped that C was the initial letter of , Latin for "hundred".
The numbers 500 and 1000 were denoted by or overlaid with a box or circle. Thus, 500 was like a superimposed on a or , making it look like . It became or by the time of Augustus, under the graphic influence of the letter . It was later identified as the letter ; an alternative symbol for "thousand" was a , and half of a thousand or "five hundred" is the right half of the symbol, , and this may have been converted into .
The notation for 1000 was a circled or boxed : Ⓧ, , , and by Augustan times was partially identified with the Greek letter phi. Over time, the symbol changed to and . The latter symbol further evolved into , then , and eventually changed to under the influence of the Latin word mille "thousand".
According to Paul Kayser, the basic numerical symbols were , , and (or ) and the intermediate ones were derived by taking half of those (half an is , half a is and half a is ). Then 𐌟 and ↆ developed as mentioned above.
Classical Roman numerals
The Colosseum was constructed in Rome in CE 72–80, and while the original perimeter wall has largely disappeared, the numbered entrances from XXIII (23) to LIIII (54) survive, demonstrating that in Imperial times Roman numerals had already assumed their classical form, as largely standardised in current use. The most obvious anomaly (a common one that persisted for centuries) is the inconsistent use of subtractive notation: while XL is used for 40, IV is avoided in favour of IIII; in fact, gate 44 is labelled XLIIII.
Use in the Middle Ages and Renaissance
Lower case, or minuscule, letters were developed in the Middle Ages, well after the demise of the Western Roman Empire, and since that time lower-case versions of Roman numbers have also been commonly used: i, ii, iii, iv, and so on.
Since the Middle Ages, a "j" has sometimes been substituted for the final "i" of a "lower-case" Roman numeral, such as "iij" for 3 or "vij" for 7. This "j" can be considered a swash variant of "i". Into the early 20th century, a final "j" was still sometimes used in medical prescriptions to prevent tampering with or misinterpretation of a number after it was written.
Numerals in documents and inscriptions from the Middle Ages sometimes include additional symbols, which today are called "medieval Roman numerals". Some simply substitute another letter for the standard one (such as "" for "", or "" for ""), while others serve as abbreviations for compound numerals ("" for "", or "" for ""). Although they are still listed today in some dictionaries, they are long out of use.
A superscript "o" (sometimes written directly above the symbol) was sometimes used as an ordinal indicator.
Chronograms, messages with dates encoded into them, were popular during the Renaissance era. The chronogram would be a phrase containing the letters , , , , , , and . By putting these letters together, the reader would obtain a number, usually indicating a particular year.
Modern use
By the 11th century, Arabic numerals had been introduced into Europe from al-Andalus, by way of Arab traders and arithmetic treatises. Roman numerals, however, proved very persistent, remaining in common use in the West well into the 14th and 15th centuries, even in accounting and other business records (where the actual calculations would have been made using an abacus). Replacement by their more convenient "Arabic" equivalents was quite gradual, and Roman numerals are still used today in certain contexts. A few examples of their current use are:
Names of monarchs and popes, e.g. Elizabeth II of the United Kingdom, Pope Benedict XVI. These are referred to as regnal numbers and are usually read as ordinals; e.g. is pronounced "the second". This tradition began in Europe sporadically in the Middle Ages, gaining widespread use in England during the reign of Henry VIII. Previously, the monarch was not known by numeral but by an epithet such as Edward the Confessor. Some monarchs (e.g. Charles IV of Spain, Louis XIV of France and William IV of Great Britain) seem to have preferred the use of instead of on their coinage (see illustration).
Generational suffixes, particularly in the U.S., for people sharing the same name across generations, such as William Howard Taft IV. These are also usually read as ordinals.
In the French Republican Calendar, initiated during the French Revolution, years were numbered by Roman numerals – from the year I (1792) when this calendar was introduced to the year XIV (1805) when it was abandoned.
The year of production of films, television shows and other works of art within the work itself. Outside reference to the work will use regular Arabic numerals.
Hour marks on timepieces. In this context, 4 is often written IIII.
The year of construction on building façades and cornerstones.
Page numbering of prefaces and introductions of books, and sometimes of appendices and annexes, too.
Book volume and chapter numbers, as well as the several acts within a play (e.g. Act , Scene 2).
Sequels to some films, video games, and other works (as in Rocky II, Grand Theft Auto V, Myst III: Exile).
Outlines that use numbers to show hierarchical relationships.
Occurrences of a recurring grand event, for instance:
The Summer and Winter Olympic Games (e.g. the XXI Olympic Winter Games; the Games of the XXX Olympiad).
The Super Bowl, the annual championship game of the National Football League (e.g. Super Bowl XLII; Super Bowl 50 was a one-time exception).
WrestleMania, the annual professional wrestling event for the WWE (e.g. WrestleMania XXX). This usage has also been inconsistent.
Specific disciplines
In astronautics, United States rocket model variants are sometimes designated by Roman numerals, e.g. Titan I, Titan II, Titan III, Saturn I, Saturn V.
In astronomy, the natural satellites or "moons" of the planets are designated by capital Roman numerals appended to the planet's name. For example, Titan's designation is Saturn VI.
In chemistry, Roman numerals are sometimes used to denote the groups of the periodic table, but this has officially been deprecated in favour of Arabic numerals. They are also used in the IUPAC nomenclature of inorganic chemistry, for the oxidation number of cations which can take on several different positive charges. They are also used for naming phases of polymorphic crystals, such as ice.
In education, school grades (in the sense of year-groups rather than test scores) are sometimes referred to by a Roman numeral; for example, "grade IX" is sometimes seen for "grade 9".
In entomology, the broods of the thirteen- and seventeen-year periodical cicadas are identified by Roman numerals.
In graphic design stylised Roman numerals may represent numeric values.
In law, Roman numerals are commonly used to help organize legal codes as part of an alphanumeric outline.
In numbering UK Acts of Parliament within a given year (a given session until 1963), local acts have lowercase Roman numerals, whereas public acts have plain Arabic numerals and personal acts have italic Arabic numerals.
In mathematics (including trigonometry, statistics, and calculus), when a graph includes negative numbers, its quadrants are named using , , , and . These quadrant names signify positive numbers on both axes, negative numbers on the X axis, negative numbers on both axes, and negative numbers on the Y axis, respectively. The use of Roman numerals to designate quadrants avoids confusion, since Arabic numerals are used for the actual data represented in the graph.
In military unit designation, Roman numerals are often used to distinguish between units at different levels. This reduces possible confusion, especially when viewing operational or strategic level maps. In particular, army corps are often numbered using Roman numerals (for example, the American XVIII Airborne Corps or the Nazi III Panzerkorps) with Arabic numerals being used for divisions and armies.
In music, Roman numerals are used in several contexts:
Movements are often numbered using Roman numerals.
In Roman numeral analysis, harmonic function is identified using Roman numerals.
Individual strings of stringed instruments, such as the violin, are often denoted by Roman numerals, with higher numbers denoting lower strings.
In pharmacy, Roman numerals were used with the now largely obsolete apothecaries' system of measurement: including SS to denote "one half" and N to denote "zero".
In photography, Roman numerals (with zero) are used to denote varying levels of brightness when using the Zone System.
In seismology, Roman numerals are used to designate degrees of the Mercalli intensity scale of earthquakes.
In sport the team containing the "top" players and representing a nation or province, a club or a school at the highest level in (say) rugby union is often called the "1st XV", while a lower-ranking cricket or American football team might be the "3rd XI".
In tarot, Roman numerals (with zero) are often used to denote the cards of the Major Arcana.
In Ireland, Roman numerals were used until the late 1980s to indicate the month on postage Franking. In documents, Roman numerals are sometimes still used to indicate the month to avoid confusion over day/month/year or month/day/year formats.
In theology and biblical scholarship, the Septuagint is often referred to as LXX, as this translation of the Old Testament into Greek is named for the legendary number of its translators (septuaginta being Latin for "seventy").
Modern use in European languages other than English
Some uses that are rare or never seen in English-speaking countries may be relatively common in parts of continental Europe and in other regions (e.g. Latin America) that use a European language other than English. For instance:
Capital or small capital Roman numerals are widely used in Romance languages to denote centuries, e.g. the French XVIIIe siècle and the Spanish siglo XVIII (not ) for "18th century". Some Slavic and Turkic languages (especially in and adjacent to Russia) similarly favour Roman numerals (e.g. Russian XVIII век, Azeri XVIII əsr or Polish XVIII w.). On the other hand, in Turkish and some Central European Slavic languages, like most Germanic languages, one writes "18." (with a period) before the local word for "century" (e.g. Turkish 18. yüzyıl, Czech 18. století).
Mixed Roman and Arabic numerals are sometimes used in numeric representations of dates (especially in formal letters and official documents, but also on tombstones). The month is written in Roman numerals, while the day is in Arabic numerals: "4.VI.1789" and "VI.4.1789" both refer unambiguously to 4 June 1789.
Roman numerals are sometimes used to represent the days of the week in hours-of-operation signs displayed in windows or on doors of businesses, and also sometimes in railway and bus timetables. Monday, taken as the first day of the week, is represented by I. Sunday is represented by VII. The hours of operation signs are tables composed of two columns where the left column is the day of the week in Roman numerals and the right column is a range of hours of operation from starting time to closing time. In the example case, the business opens from 10 AM to 7 PM on weekdays, 10 AM to 5 PM on Saturdays and is closed on Sundays. Note that the listing uses 24-hour time.
Roman numerals may also be used for floor numbering. For instance, apartments in central Amsterdam are indicated as 138-III, with both an Arabic numeral (number of the block or house) and a Roman numeral (floor number). The apartment on the ground floor is indicated as 138-huis.
In Italy, where roads outside built-up areas have kilometre signs, major roads and motorways also mark 100-metre subdivisions, using Roman numerals from I to IX for the smaller intervals. The sign "IX | 17" thus marks 17.9 km.
Certain romance-speaking countries use Roman numerals to designate assemblies of their national legislatures. For instance, the composition of the Italian Parliament from 2018 to 2022 (elected in the 2018 Italian general election) is called the XVIII Legislature of the Italian Republic (or more commonly the "XVIII Legislature").
A notable exception to the use of Roman numerals in Europe is in Greece, where Greek numerals (based on the Greek alphabet) are generally used in contexts where Roman numerals would be used elsewhere.
Unicode
The "Number Forms" block of the Unicode computer character set standard has a number of Roman numeral symbols in the range of code points from U+2160 to U+2188. This range includes both upper- and lowercase numerals, as well as pre-combined characters for numbers up to 12 (Ⅻ or ). One justification for the existence of pre-combined numbers is to facilitate the setting of multiple-letter numbers (such as VIII) on a single horizontal line in Asian vertical text. The Unicode standard, however, includes special Roman numeral code points for compatibility only, stating that "[f]or most purposes, it is preferable to compose the Roman numerals from sequences of the appropriate Latin letters". The block also includes some symbols for large numbers, an old variant of "" (50) similar to the Etruscan character, the Claudian letter "reversed C", etc.
| Mathematics | Basics | null |
25665 | https://en.wikipedia.org/wiki/Rosaceae | Rosaceae | Rosaceae (), the rose family, is a family of flowering plants that includes 4,828 known species in 91 genera.
The name is derived from the type genus Rosa. The family includes herbs, shrubs, and trees. Most species are deciduous, but some are evergreen. They have a worldwide range but are most diverse in the Northern Hemisphere.
Many economically important products come from the Rosaceae, including various edible fruits, such as apples, pears, quinces, apricots, plums, cherries, peaches, raspberries, blackberries, loquats, strawberries, rose hips, hawthorns, and almonds. The family also includes popular ornamental trees and shrubs, such as roses, meadowsweets, rowans, firethorns, and photinias.
Among the most species-rich genera in the family are Alchemilla (270), Sorbus (260), Crataegus (260), Cotoneaster (260), Rubus (250), and Prunus (200), which contains the plums, cherries, peaches, apricots, and almonds. However, all of these numbers should be seen as estimates—much taxonomic work remains.
Description
Rosaceae can be woody trees, shrubs, climbers or herbaceous plants. The herbs are mostly perennials, but some annuals also exist, such as Aphanes arvensis.
Leaves
The leaves are generally arranged spirally, but have an opposite arrangement in some species. They can be simple or pinnately compound (either odd- or even-pinnate). Compound leaves appear in around 30 genera. The leaf margin is most often serrate. Paired stipules are generally present, and are a primitive feature within the family, independently lost in many groups of Amygdaloideae (previously called Spiraeoideae). The stipules are sometimes adnate (attached surface to surface) to the petiole. Glands or extrafloral nectaries may be present on leaf margins or petioles. Spines may be present on the midrib of leaflets and the rachis of compound leaves.
Flowers
Flowers of plants in the rose family are generally described as "showy". They are radially symmetrical, and almost always hermaphroditic. Rosaceae generally have five sepals, five petals, and many spirally arranged stamens. The bases of the sepals, petals, and stamens are fused together to form a characteristic cup-like structure called a hypanthium. The flowers can be arranged in spikes or heads; solitary flowers are rare. Rosaceae petals come in a wide variety of colours, but blue is almost completely absent.
Fruits and seeds
The fruits occur in many varieties and were once considered the main characters for the definition of subfamilies amongst Rosaceae, giving rise to a fundamentally artificial subdivision. They can be follicles, capsules, nuts, achenes, drupes (Prunus), and accessory fruits, like the pome of an apple, the hip of a rose, or the receptacle-derived aggregate accessory fruit of a strawberry. Many fruits of the family are edible, but their seeds often contain amygdalin, which can release cyanide during digestion if the seed is damaged.
Taxonomy
Taxonomic history
The family was traditionally divided into six subfamilies: Rosoideae, Spiraeoideae, Maloideae (Pomoideae), Amygdaloideae (Prunoideae), Neuradoideae, and Chrysobalanoideae, and most of these were treated as families by various authors. More recently (1971), Chrysobalanoideae was placed in Malpighiales in molecular analyses and Neuradoideae has been assigned to Malvales. Schulze-Menz, in Engler's Syllabus edited by Melchior (1964) recognized Rosoideae, Dryadoideae, Lyonothamnoideae, Spireoideae, Amygdaloideae, and Maloideae. They were primarily diagnosed by the structure of the fruits. More recent work has identified that not all of these groups were monophyletic. Hutchinson (1964) and Kalkman (2004) recognized only tribes (17 and 21, respectively). Takhtajan (1997) delimited 21 tribes in 10 subfamilies: Filipenduloideae, Rosoideae, Ruboideae, Potentilloideae, Coleogynoideae, Kerroideae, Amygdaloideae (Prunoideae), Spireoideae, Maloideae (Pyroideae), Dichotomanthoideae. A more modern model comprises three subfamilies, one of which (Rosoideae) has largely remained the same.
While the boundaries of the Rosaceae are not disputed, there is no general agreement as to how many genera it contains. Areas of divergent opinion include the treatment of Potentilla s.l. and Sorbus s.l.. Compounding the problem is that apomixis is common in several genera. This results in an uncertainty in the number of species contained in each of these genera, due to the difficulty of dividing apomictic complexes into species. For example, Cotoneaster contains between 70 and 300 species, Rosa around 100 (including the taxonomically complex dog roses), Sorbus 100 to 200 species, Crataegus between 200 and 1,000, Alchemilla around 300 species, Potentilla roughly 500, and Rubus hundreds, or possibly even thousands of species.
Genera
Identified clades include:
Subfamily Rosoideae: Traditionally composed of those genera bearing aggregate fruits that are made up of small achenes or drupelets, and often the fleshy part of the fruit (e.g. strawberry) is the receptacle or the stalk bearing the carpels. The circumscription is now narrowed (excluding, for example, the Dryadoideae), but it still remains a diverse group containing five or six tribes and 20 or more genera, including rose, Rubus (blackberry, raspberry), Fragaria (strawberry), Potentilla, and Geum.
Subfamily Amygdaloideae: Within this group remains an identified clade with a pome fruit, traditionally known as subfamily Maloideae (or Pyroideae) which included genera such as apple, Cotoneaster, and Crataegus (hawthorn). To separate it at the subfamily level would leave the remaining genera as a paraphyletic group, so it has been expanded to include the former Spiraeoideae and Amygdaloideae. The subfamily has sometimes been referred to by the name "Spiraeoideae", but this is not permitted by the International Code of Nomenclature for algae, fungi, and plants.
Subfamily Dryadoideae: Fruits are achenes with hairy styles, and includes five genera (Dryas, Cercocarpus, Chamaebatia, Cowania, and Purshia), most species of which form root nodules which host nitrogen-fixing bacteria from the genus Frankia.
Phylogeny
The phylogenetic relationships between the three subfamilies within Rosaceae are unresolved. There are three competing hypotheses:
Amygdaloideae basal
Amygdaloideae has been identified as the earliest branching subfamily by Chin et al. (2014), Li et al. (2015), Li et al. (2016), and Sun et al. (2016). Most recently Zhang et al. (2017) recovered these relationships using whole plastid genomes:
The sister relationship between Dryadoideae and Rosoideae is supported by the following shared morphological characters not found in Amygdaloideae: presence of stipules, separation of the hypanthium from the ovary, and the fruits are usually achenes.
Dryadoideae basal
Dryadoideae has been identified as the earliest branching subfamily by Evans et al. (2002) and Potter (2003). Most recently Xiang et al. (2017) recovered these relationships using nuclear transcriptomes:
Rosoideae basal
Rosoideae has been identified as the earliest branching subfamily by Morgan et al. (1994), Evans (1999), Potter et al. (2002), Potter et al. (2007), Töpel et al. (2012), and Chen et al. (2016). The following is taken from Potter et al. (2007):
The sister relationship between Amygdaloideae and Dryadoideae is supported by the following shared biochemical characters not found in Rosoideae: production of cyanogenic glycosides and production of sorbitol.
Distribution and habitat
The Rosaceae have a cosmopolitan distribution, being found nearly everywhere except for Antarctica. They are primarily concentrated in the Northern Hemisphere in regions that are not desert or tropical rainforest.
Uses
The rose family is considered one of the six most economically important crop plant families, and includes apples, pears, quinces, medlars, loquats, almonds, peaches, apricots, plums, cherries, strawberries, blackberries, raspberries, sloes, and roses.
Many genera are also highly valued ornamental plants. These include trees and shrubs (Cotoneaster, Chaenomeles, Crataegus, Dasiphora, Exochorda, Kerria, Photinia, Physocarpus, Prunus, Pyracantha, Rhodotypos, Rosa, Sorbus, Spiraea), herbaceous perennials (Alchemilla, Aruncus, Filipendula, Geum, Potentilla, Sanguisorba), alpine plants (Dryas, Geum, Potentilla) and climbers (Rosa).
However, several genera have also become introduced noxious weeds in some parts of the world, where they are costly to control. These invasive plants can have negative impacts on the diversity of local ecosystems once established. Such naturalised pests include Acaena, Cotoneaster, Crataegus, and Pyracantha.
In Bulgaria and parts of western Asia, the production of rose oil from fresh flowers such as Rosa damascena, Rosa gallica, and other species is an important economic industry.
Gallery
The family Rosaceae covers a wide range of trees, bushes and plants.
| Biology and health sciences | Rosales | null |
25676 | https://en.wikipedia.org/wiki/Radar | Radar | Radar is a system that uses radio waves to determine the distance (ranging), direction (azimuth and elevation angles), and radial velocity of objects relative to the site. It is a radiodetermination method used to detect and track aircraft, ships, spacecraft, guided missiles, motor vehicles, map weather formations, and terrain.
A radar system consists of a transmitter producing electromagnetic waves in the radio or microwaves domain, a transmitting antenna, a receiving antenna (often the same antenna is used for transmitting and receiving) and a receiver and processor to determine properties of the objects. Radio waves (pulsed or continuous) from the transmitter reflect off the objects and return to the receiver, giving information about the objects' locations and speeds.
Radar was developed secretly for military use by several countries in the period before and during World War II. A key development was the cavity magnetron in the United Kingdom, which allowed the creation of relatively small systems with sub-meter resolution. The term RADAR was coined in 1940 by the United States Navy as an acronym for "radio detection and ranging". The term radar has since entered English and other languages as an anacronym, a common noun, losing all capitalization.
The modern uses of radar are highly diverse, including air and terrestrial traffic control, radar astronomy, air-defense systems, anti-missile systems, marine radars to locate landmarks and other ships, aircraft anti-collision systems, ocean surveillance systems, outer space surveillance and rendezvous systems, meteorological precipitation monitoring, radar remote sensing, altimetry and flight control systems, guided missile target locating systems, self-driving cars, and ground-penetrating radar for geological observations. Modern high tech radar systems use digital signal processing and machine learning and are capable of extracting useful information from very high noise levels.
Other systems which are similar to radar make use of other parts of the electromagnetic spectrum. One example is lidar, which uses predominantly infrared light from lasers rather than radio waves. With the emergence of driverless vehicles, radar is expected to assist the automated platform to monitor its environment, thus preventing unwanted incidents.
History
First experiments
As early as 1886, German physicist Heinrich Hertz showed that radio waves could be reflected from solid objects. In 1895, Alexander Popov, a physics instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus using a coherer tube for detecting distant lightning strikes. The next year, he added a spark-gap transmitter. In 1897, while testing this equipment for communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel. In his report, Popov wrote that this phenomenon might be used for detecting objects, but he did nothing more with this observation.
The German inventor Christian Hülsmeyer was the first to use radio waves to detect "the presence of distant metallic objects". In 1904, he demonstrated the feasibility of detecting a ship in dense fog, but not its distance from the transmitter. He obtained a patent for his detection device in April 1904 and later a patent for a related amendment for estimating the distance to the ship. He also obtained a British patent on 23 September 1904 for a full radar system, that he called a telemobiloscope. It operated on a 50 cm wavelength and the pulsed radar signal was created via a spark-gap. His system already used the classic antenna setup of horn antenna with parabolic reflector and was presented to German military officials in practical tests in Cologne and Rotterdam harbour but was rejected.
In 1915, Robert Watson-Watt used radio technology to provide advance warning of thunderstorms to airmen and during the 1920s went on to lead the U.K. research establishment to make many advances using radio techniques, including the probing of the ionosphere and the detection of lightning at long distances. Through his lightning experiments, Watson-Watt became an expert on the use of radio direction finding before turning his inquiry to shortwave transmission. Requiring a suitable receiver for such studies, he told the "new boy" Arnold Frederic Wilkins to conduct an extensive review of available shortwave units. Wilkins would select a General Post Office model after noting its manual's description of a "fading" effect (the common term for interference at the time) when aircraft flew overhead.
By placing a transmitter and receiver on opposite sides of the Potomac River in 1922, U.S. Navy researchers A. Hoyt Taylor and Leo C. Young discovered that ships passing through the beam path caused the received signal to fade in and out. Taylor submitted a report, suggesting that this phenomenon might be used to detect the presence of ships in low visibility, but the Navy did not immediately continue the work. Eight years later, Lawrence A. Hyland at the Naval Research Laboratory (NRL) observed similar fading effects from passing aircraft; this revelation led to a patent application as well as a proposal for further intensive research on radio-echo signals from moving targets to take place at NRL, where Taylor and Young were based at the time.
Similarly, in the UK, L. S. Alder took out a secret provisional patent for Naval radar in 1928. W.A.S. Butement and P. E. Pollard developed a breadboard test unit, operating at 50 cm (600 MHz) and using pulsed modulation which gave successful laboratory results. In January 1931, a writeup on the apparatus was entered in the Inventions Book maintained by the Royal Engineers. This is the first official record in Great Britain of the technology that was used in coastal defence and was incorporated into Chain Home as Chain Home (low).
Before World War II
Before the Second World War, researchers in the United Kingdom, France, Germany, Italy, Japan, the Netherlands, the Soviet Union, and the United States, independently and in great secrecy, developed technologies that led to the modern version of radar. Australia, Canada, New Zealand, and South Africa followed prewar Great Britain's radar development, while Hungary and Sweden developed their own radar technology during the war.
In France in 1934, following systematic studies on the split-anode magnetron, the research branch of the Compagnie générale de la télégraphie sans fil (CSF) headed by Maurice Ponte with Henri Gutton, Sylvain Berline and M. Hugon, began developing an obstacle-locating radio apparatus, aspects of which were installed on the ocean liner Normandie in 1935.
During the same period, Soviet military engineer P.K. Oshchepkov, in collaboration with the Leningrad Electrotechnical Institute, produced an experimental apparatus, RAPID, capable of detecting an aircraft within 3 km of a receiver. The Soviets produced their first mass production radars RUS-1 and RUS-2 Redut in 1939 but further development was slowed following the arrest of Oshchepkov and his subsequent gulag sentence. In total, only 607 Redut stations were produced during the war. The first Russian airborne radar, Gneiss-2, entered into service in June 1943 on Pe-2 dive bombers. More than 230 Gneiss-2 stations were produced by the end of 1944. The French and Soviet systems, however, featured continuous-wave operation that did not provide the full performance ultimately synonymous with modern radar systems.
Full radar evolved as a pulsed system, and the first such elementary apparatus was demonstrated in December 1934 by the American Robert M. Page, working at the Naval Research Laboratory. The following year, the United States Army successfully tested a primitive surface-to-surface radar to aim coastal battery searchlights at night. This design was followed by a pulsed system demonstrated in May 1935 by Rudolf Kühnhold and the firm in Germany and then another in June 1935 by an Air Ministry team led by Robert Watson-Watt in Great Britain.
In 1935, Watson-Watt was asked to judge recent reports of a German radio-based death ray and turned the request over to Wilkins. Wilkins returned a set of calculations demonstrating the system was basically impossible. When Watson-Watt then asked what such a system might do, Wilkins recalled the earlier report about aircraft causing radio interference. This revelation led to the Daventry Experiment of 26 February 1935, using a powerful BBC shortwave transmitter as the source and their GPO receiver setup in a field while a bomber flew around the site. When the plane was clearly detected, Hugh Dowding, the Air Member for Supply and Research, was very impressed with their system's potential and funds were immediately provided for further operational development. Watson-Watt's team patented the device in patent GB593017.
Development of radar greatly expanded on 1 September 1936, when Watson-Watt became superintendent of a new establishment under the British Air Ministry, Bawdsey Research Station located in Bawdsey Manor, near Felixstowe, Suffolk. Work there resulted in the design and installation of aircraft detection and tracking stations called "Chain Home" along the East and South coasts of England in time for the outbreak of World War II in 1939. This system provided the vital advance information that helped the Royal Air Force win the Battle of Britain; without it, significant numbers of fighter aircraft, which Great Britain did not have available, would always have needed to be in the air to respond quickly. The radar formed part of the "Dowding system" for collecting reports of enemy aircraft and coordinating the response.
Given all required funding and development support, the team produced working radar systems in 1935 and began deployment. By 1936, the first five Chain Home (CH) systems were operational and by 1940 stretched across the entire UK including Northern Ireland. Even by standards of the era, CH was crude; instead of broadcasting and receiving from an aimed antenna, CH broadcast a signal floodlighting the entire area in front of it, and then used one of Watson-Watt's own radio direction finders to determine the direction of the returned echoes. This fact meant CH transmitters had to be much more powerful and have better antennas than competing systems but allowed its rapid introduction using existing technologies.
During World War II
A key development was the cavity magnetron in the UK, which allowed the creation of relatively small systems with sub-meter resolution. Britain shared the technology with the U.S. during the 1940 Tizard Mission.
In April 1940, Popular Science showed an example of a radar unit using the Watson-Watt patent in an article on air defence. Also, in late 1941 Popular Mechanics had an article in which a U.S. scientist speculated about the British early warning system on the English east coast and came close to what it was and how it worked. Watson-Watt was sent to the U.S. in 1941 to advise on air defense after Japan's attack on Pearl Harbor. Alfred Lee Loomis organized the secret MIT Radiation Laboratory at Massachusetts Institute of Technology, Cambridge, Massachusetts which developed microwave radar technology in the years 1941–45. Later, in 1943, Page greatly improved radar with the monopulse technique that was used for many years in most radar applications.
The war precipitated research to find better resolution, more portability, and more features for radar, including small, lightweight sets to equip night fighters (aircraft interception radar) and maritime patrol aircraft (air-to-surface-vessel radar), and complementary navigation systems like Oboe used by the RAF's Pathfinder.
Applications
The information provided by radar includes the bearing and range (and therefore position) of the object from the radar scanner. It is thus used in many different fields where the need for such positioning is crucial. The first use of radar was for military purposes: to locate air, ground and sea targets. This evolved in the civilian field into applications for aircraft, ships, and automobiles.
In aviation, aircraft can be equipped with radar devices that warn of aircraft or other obstacles in or approaching their path, display weather information, and give accurate altitude readings. The first commercial device fitted to aircraft was a 1938 Bell Lab unit on some United Air Lines aircraft. Aircraft can land in fog at airports equipped with radar-assisted ground-controlled approach systems in which the plane's position is observed on precision approach radar screens by operators who thereby give radio landing instructions to the pilot, maintaining the aircraft on a defined approach path to the runway. Military fighter aircraft are usually fitted with air-to-air targeting radars, to detect and target enemy aircraft. In addition, larger specialized military aircraft carry powerful airborne radars to observe air traffic over a wide region and direct fighter aircraft towards targets.
Marine radars are used to measure the bearing and distance of ships to prevent collision with other ships, to navigate, and to fix their position at sea when within range of shore or other fixed references such as islands, buoys, and lightships. In port or in harbour, vessel traffic service radar systems are used to monitor and regulate ship movements in busy waters.
Meteorologists use radar to monitor precipitation and wind. It has become the primary tool for short-term weather forecasting and watching for severe weather such as thunderstorms, tornadoes, winter storms, precipitation types, etc. Geologists use specialized ground-penetrating radars to map the composition of Earth's crust. Police forces use radar guns to monitor vehicle speeds on the roads. Automotive radars are used for adaptive cruise control and emergency braking on vehicles by ignoring stationary roadside objects that could cause incorrect brake application and instead measuring moving objects to prevent collision with other vehicles. As part of Intelligent Transport Systems, fixed-position stopped vehicle detection (SVD) radars are mounted on the roadside to detect stranded vehicles, obstructions and debris by inverting the automotive radar approach and ignoring moving objects. Smaller radar systems are used to detect human movement. Examples are breathing pattern detection for sleep monitoring and hand and finger gesture detection for computer interaction. Automatic door opening, light activation and intruder sensing are also common.
Principles
Radar signal
A radar system has a transmitter that emits radio waves known as radar signals in predetermined directions. When these signals contact an object they are usually reflected or scattered in many directions, although some of them will be absorbed and penetrate into the target. Radar signals are reflected especially well by materials of considerable electrical conductivity—such as most metals, seawater, and wet ground. This makes the use of radar altimeters possible in certain cases. The radar signals that are reflected back towards the radar receiver are the desirable ones that make radar detection work. If the object is moving either toward or away from the transmitter, there will be a slight change in the frequency of the radio waves due to the Doppler effect.
Radar receivers are usually, but not always, in the same location as the transmitter. The reflected radar signals captured by the receiving antenna are usually very weak. They can be strengthened by electronic amplifiers. More sophisticated methods of signal processing are also used in order to recover useful radar signals.
The weak absorption of radio waves by the medium through which they pass is what enables radar sets to detect objects at relatively long ranges—ranges at which other electromagnetic wavelengths, such as visible light, infrared light, and ultraviolet light, are too strongly attenuated. Weather phenomena, such as fog, clouds, rain, falling snow, and sleet, that block visible light are usually transparent to radio waves. Certain radio frequencies that are absorbed or scattered by water vapour, raindrops, or atmospheric gases (especially oxygen) are avoided when designing radars, except when their detection is intended.
Illumination
Radar relies on its own transmissions rather than light from the Sun or the Moon, or electromagnetic waves emitted by the target objects themselves, such as infrared radiation (heat). This process of directing artificial radio waves towards objects is called illumination, even though radio waves are invisible to the human eye and to optical cameras alike.
Reflection
If electromagnetic waves travelling through one material meet another material having a different dielectric constant or diamagnetic constant from the first, the waves will reflect or scatter from the boundary between the materials. This means that a solid object in air or in a vacuum, or a significant change in atomic density between the object and what is surrounding it, will usually scatter radar (radio) waves from its surface. This is particularly true for electrically conductive materials such as metal and carbon fibre, making radar well-suited to the detection of aircraft and ships. Radar absorbing material, containing resistive and sometimes magnetic substances, is used on military vehicles to reduce radar reflection. This is the radio equivalent of painting something a dark colour so that it cannot be seen by the eye at night.
Radar waves scatter in a variety of ways depending on the size (wavelength) of the radio wave and the shape of the target. If the wavelength is much shorter than the target's size, the wave will bounce off in a way similar to the way light is reflected by a mirror. If the wavelength is much longer than the size of the target, the target may not be visible because of poor reflection. Low-frequency radar technology is dependent on resonances for detection, but not identification, of targets. This is described by Rayleigh scattering, an effect that creates Earth's blue sky and red sunsets. When the two length scales are comparable, there may be resonances. Early radars used very long wavelengths that were larger than the targets and thus received a vague signal, whereas many modern systems use shorter wavelengths (a few centimetres or less) that can image objects as small as a loaf of bread.
Short radio waves reflect from curves and corners in a way similar to glint from a rounded piece of glass. The most reflective targets for short wavelengths have 90° angles between the reflective surfaces. A corner reflector consists of three flat surfaces meeting like the inside corner of a cube. The structure will reflect waves entering its opening directly back to the source. They are commonly used as radar reflectors to make otherwise difficult-to-detect objects easier to detect. Corner reflectors on boats, for example, make them more detectable to avoid collision or during a rescue. For similar reasons, objects intended to avoid detection will not have inside corners or surfaces and edges perpendicular to likely detection directions, which leads to "odd" looking stealth aircraft. These precautions do not totally eliminate reflection because of diffraction, especially at longer wavelengths. Half wavelength long wires or strips of conducting material, such as chaff, are very reflective but do not direct the scattered energy back toward the source. The extent to which an object reflects or scatters radio waves is called its radar cross-section.
Radar range equation
The power Pr returning to the receiving antenna is given by the equation:

Pr = (Pt Gt Ar σ F^4) / ((4π)^2 Rt^2 Rr^2)

where
Pt = transmitter power
Gt = gain of the transmitting antenna
Ar = effective aperture (area) of the receiving antenna; this can also be expressed as Gr λ^2 / (4π), where
λ = transmitted wavelength
Gr = gain of receiving antenna
σ = radar cross section, or scattering coefficient, of the target
F = pattern propagation factor
Rt = distance from the transmitter to the target
Rr = distance from the target to the receiver.
In the common case where the transmitter and the receiver are at the same location, Rt = Rr and the term Rt^2 Rr^2 can be replaced by R^4, where R is the range.
This yields:

Pr = (Pt Gt Ar σ F^4) / ((4π)^2 R^4)

This shows that the received power declines as the fourth power of the range, which means that the received power from distant targets is relatively very small.
Additional filtering and pulse integration modifies the radar equation slightly for pulse-Doppler radar performance, which can be used to increase detection range and reduce transmit power.
The equation above with F = 1 is a simplification for transmission in a vacuum without interference. The propagation factor accounts for the effects of multipath and shadowing and depends on the details of the environment. In a real-world situation, pathloss effects are also considered.
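As a rough numerical illustration of the equation above, the following sketch evaluates the received power for a monostatic radar in free space (F = 1). The function name and every numeric value (transmit power, gains, wavelength, cross-section, ranges) are illustrative assumptions, not figures from the text.

import math

def received_power(pt, gt, gr, wavelength, sigma, r, f=1.0):
    """Monostatic radar range equation: returns received power in watts.

    pt: transmitter power (W); gt, gr: antenna gains (linear);
    wavelength: transmitted wavelength (m); sigma: radar cross-section (m^2);
    r: range to the target (m); f: pattern propagation factor (1 = free space).
    """
    ar = gr * wavelength**2 / (4 * math.pi)   # effective aperture of the receiving antenna
    return pt * gt * ar * sigma * f**4 / ((4 * math.pi)**2 * r**4)

# Doubling the range cuts the received power by a factor of 16 (fourth-power law).
p_near = received_power(pt=1e6, gt=1000, gr=1000, wavelength=0.1, sigma=1.0, r=50e3)
p_far = received_power(pt=1e6, gt=1000, gr=1000, wavelength=0.1, sigma=1.0, r=100e3)
print(p_near / p_far)   # ~16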
Doppler effect
Frequency shift is caused by motion that changes the number of wavelengths between the reflector and the radar. This can degrade or enhance radar performance depending upon how it affects the detection process. As an example, moving target indication can interact with Doppler to produce signal cancellation at certain radial velocities, which degrades performance.
Sea-based radar systems, semi-active radar homing, active radar homing, weather radar, military aircraft, and radar astronomy rely on the Doppler effect to enhance performance. This produces information about target velocity during the detection process. This also allows small objects to be detected in an environment containing much larger nearby slow moving objects.
Doppler shift depends upon whether the radar configuration is active or passive. Active radar transmits a signal that is reflected back to the receiver. Passive radar depends upon the object sending a signal to the receiver.
The Doppler frequency shift for active radar is as follows, where fD is the Doppler frequency, fT is the transmit frequency, vR is the radial velocity, and c is the speed of light:

fD = 2 · fT · (vR / c).

Passive radar is applicable to electronic countermeasures and radio astronomy as follows:

fD = fT · (vR / c).

Only the radial component of the velocity is relevant. When the reflector is moving at right angles to the radar beam, it has no relative velocity. Objects moving parallel to the radar beam produce the maximum Doppler frequency shift.

When the transmit frequency (fT) is pulsed, using a pulse repetition frequency of fR, the resulting frequency spectrum will contain harmonic frequencies above and below fT with a separation of fR. As a result, the Doppler measurement is only non-ambiguous if the Doppler frequency shift is less than half of fR, called the Nyquist frequency, since the returned frequency otherwise cannot be distinguished from the shifting of a harmonic frequency above or below, thus requiring:

|fD| < fR / 2

Or, when substituting fD with 2 · fT · (vR / c):

|vR| < c · fR / (4 · fT)

As an example, a Doppler weather radar with a pulse rate of 2 kHz and a transmit frequency of 1 GHz can reliably measure radial speeds only up to about 150 m/s, and thus cannot reliably determine the radial velocity of aircraft moving much faster than this.
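The following sketch works the ambiguity limit above through in code. The only figures reused are the 2 kHz pulse rate and 1 GHz carrier from the example; the function names are ad-hoc.

c = 299_792_458.0   # speed of light, m/s

def doppler_shift(f_t, v_r):
    """Active-radar Doppler shift (Hz) for radial velocity v_r (m/s) and carrier f_t (Hz)."""
    return 2 * f_t * v_r / c

def max_unambiguous_velocity(f_t, prf):
    """Largest radial velocity whose Doppler shift stays below the Nyquist limit prf/2."""
    return c * prf / (4 * f_t)

print(max_unambiguous_velocity(f_t=1e9, prf=2e3))   # ~150 m/s
print(doppler_shift(f_t=1e9, v_r=150))              # ~1000 Hz, i.e. half the 2 kHz pulse rate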
Polarization
In all electromagnetic radiation, the electric field is perpendicular to the direction of propagation, and the electric field direction is the polarization of the wave. For a transmitted radar signal, the polarization can be controlled to yield different effects. Radars use horizontal, vertical, linear, and circular polarization to detect different types of reflections. For example, circular polarization is used to minimize the interference caused by rain. Linear polarization returns usually indicate metal surfaces. Random polarization returns usually indicate a fractal surface, such as rocks or soil, and are used by navigation radars.
Limiting factors
Beam path and range
A radar beam follows a linear path in a vacuum but follows a somewhat curved path in the atmosphere because of variation in the refractive index of air; this bending, combined with the curvature of the Earth, defines the radar horizon. Even when the beam is emitted parallel to the ground, it rises above the ground as the Earth's surface curves away below the horizon. Furthermore, the signal is attenuated by the medium the beam crosses, and the beam disperses.
The maximum range of conventional radar can be limited by a number of factors:
Line of sight, which depends on the height above the ground. Without a direct line of sight, the path of the beam is blocked.
The maximum non-ambiguous range, which is determined by the pulse repetition frequency. The maximum non-ambiguous range is the distance the pulse can travel to and return from before the next pulse is emitted.
Radar sensitivity and the power of the return signal as computed in the radar equation. This component includes factors such as the environmental conditions and the size (or radar cross section) of the target.
Noise
Signal noise is an internal source of random variations in the signal, which is generated by all electronic components.
Reflected signals decline rapidly as distance increases, so noise introduces a radar range limitation. The noise floor and signal-to-noise ratio are two different measures of performance that affect range performance. Reflectors that are too far away produce too little signal to exceed the noise floor and cannot be detected. Detection requires a signal that exceeds the noise floor by at least the signal-to-noise ratio.
Noise typically appears as random variations superimposed on the desired echo signal received in the radar receiver. The lower the power of the desired signal, the more difficult it is to discern it from the noise. The noise figure is a measure of the noise produced by a receiver compared to an ideal receiver, and this needs to be minimized.
Shot noise is produced by electrons in transit across a discontinuity, which occurs in all detectors. Shot noise is the dominant source in most receivers. There will also be flicker noise caused by electron transit through amplification devices, which is reduced using heterodyne amplification. Another reason for heterodyne processing is that, for a fixed fractional bandwidth, the instantaneous bandwidth increases linearly with frequency. This allows improved range resolution. The one notable exception to heterodyne (downconversion) radar systems is ultra-wideband radar. Here a single cycle, or transient wave, is used, similar to UWB communications; see List of UWB channels.
Noise is also generated by external sources, most importantly the natural thermal radiation of the background surrounding the target of interest. In modern radar systems, the internal noise is typically about equal to or lower than the external noise. An exception is if the radar is aimed upwards at clear sky, where the scene is so "cold" that it generates very little thermal noise. The thermal noise is given by kB T B, where T is temperature, B is bandwidth (post matched filter) and kB is the Boltzmann constant. There is an appealing intuitive interpretation of this relationship in a radar. Matched filtering allows the entire energy received from a target to be compressed into a single bin (be it a range, Doppler, elevation, or azimuth bin). On the surface it appears that then within a fixed interval of time, perfect, error free, detection could be obtained. This is done by compressing all energy into an infinitesimal time slice. What limits this approach in the real world is that, while time is arbitrarily divisible, current is not. The quantum of electrical energy is an electron, and so the best that can be done is to match filter all energy into a single electron. Since the electron is moving at a certain temperature (Planck spectrum) this noise source cannot be further eroded. Ultimately, radar, like all macro-scale entities, is profoundly impacted by quantum theory.
Noise is random and target signals are not. Signal processing can take advantage of this phenomenon to reduce the noise floor using two strategies. The kind of signal integration used with moving target indication improves the noise floor with each successive integration stage. The signal can also be split among multiple filters for pulse-Doppler signal processing, which reduces the noise floor by the number of filters. These improvements depend upon coherence.
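A minimal sketch of the thermal-noise floor kB·T·B discussed above; the 290 K temperature, 1 MHz bandwidth, required signal-to-noise ratio, and example signal level are assumed values chosen only for illustration.

K_B = 1.380649e-23   # Boltzmann constant, J/K

def thermal_noise_power(temperature_k, bandwidth_hz):
    """Thermal noise power k_B * T * B in watts."""
    return K_B * temperature_k * bandwidth_hz

def is_detectable(signal_w, temperature_k, bandwidth_hz, required_snr=10.0):
    """A return counts as detectable only if it exceeds the noise floor by the required SNR (linear)."""
    return signal_w >= required_snr * thermal_noise_power(temperature_k, bandwidth_hz)

noise = thermal_noise_power(temperature_k=290, bandwidth_hz=1e6)   # ~4e-15 W
print(noise, is_detectable(1e-13, 290, 1e6))                       # True: 1e-13 W clears the floor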
Interference
Radar systems must overcome unwanted signals in order to focus on the targets of interest. These unwanted signals may originate from internal and external sources, both passive and active. The ability of the radar system to overcome these unwanted signals defines its signal-to-noise ratio (SNR). SNR is defined as the ratio of the signal power to the noise power within the desired signal; it compares the level of a desired target signal to the level of background noise (atmospheric noise and noise generated within the receiver). The higher a system's SNR the better it is at discriminating actual targets from noise signals.
Clutter
Clutter refers to radio frequency (RF) echoes returned from targets which are uninteresting to radar operators. Such targets include man-made objects such as buildings and, intentionally, radar countermeasures such as chaff. They also include natural objects such as ground, sea, precipitation (when it is not the intended meteorological target), hail spike, dust storms, animals (especially birds), turbulence in the atmospheric circulation, and meteor trails. Radar clutter can also be caused by other atmospheric phenomena, such as disturbances in the ionosphere caused by geomagnetic storms or other space weather events. This phenomenon is especially apparent near the geomagnetic poles, where the action of the solar wind on the Earth's magnetosphere produces convection patterns in the ionospheric plasma. Radar clutter can degrade the ability of over-the-horizon radar to detect targets.
Some clutter may also be caused by a long radar waveguide between the radar transceiver and the antenna. In a typical plan position indicator (PPI) radar with a rotating antenna, this will usually be seen as a "sun" or "sunburst" in the center of the display as the receiver responds to echoes from dust particles and misguided RF in the waveguide. Adjusting the timing between when the transmitter sends a pulse and when the receiver stage is enabled will generally reduce the sunburst without affecting the accuracy of the range since most sunburst is caused by a diffused transmit pulse reflected before it leaves the antenna. Clutter is considered a passive interference source since it only appears in response to radar signals sent by the radar.
Clutter is detected and neutralized in several ways. Clutter tends to appear static between radar scans; on subsequent scan echoes, desirable targets will appear to move, and all stationary echoes can be eliminated. Sea clutter can be reduced by using horizontal polarization, while rain is reduced with circular polarization (meteorological radars wish for the opposite effect, and therefore use linear polarization to detect precipitation). Other methods attempt to increase the signal-to-clutter ratio.
Clutter moves with the wind or is stationary. Two common strategies to improve measures of performance in a clutter environment are:
Moving target indication, which integrates successive pulses
Doppler processing, which uses filters to separate clutter from desirable signals
The most effective clutter reduction technique is pulse-Doppler radar. Doppler separates clutter from aircraft and spacecraft using a frequency spectrum, so individual signals can be separated from multiple reflectors located in the same volume using velocity differences. This requires a coherent transmitter. Another technique uses a moving target indicator that subtracts the received signal from two successive pulses using phase to reduce signals from slow-moving objects. This can be adapted for systems that lack a coherent transmitter, such as time-domain pulse-amplitude radar.
Constant false alarm rate, a form of automatic gain control (AGC), is a method that relies on clutter returns far outnumbering echoes from targets of interest. The receiver's gain is automatically adjusted to maintain a constant level of overall visible clutter. While this does not help detect targets masked by stronger surrounding clutter, it does help to distinguish strong target sources. In the past, radar AGC was electronically controlled and affected the gain of the entire radar receiver. As radars evolved, AGC became computer-software-controlled and affected the gain with greater granularity in specific detection cells.
Clutter may also originate from multipath echoes from valid targets caused by ground reflection, atmospheric ducting or ionospheric reflection/refraction (e.g., anomalous propagation). This clutter type is especially bothersome since it appears to move and behave like other normal (point) targets of interest. In a typical scenario, an aircraft echo is reflected from the ground below, appearing to the receiver as an identical target below the correct one. The radar may try to unify the targets, reporting the target at an incorrect height, or eliminating it on the basis of jitter or a physical impossibility. Terrain bounce jamming exploits this response by amplifying the radar signal and directing it downward. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height. Monopulse can be improved by altering the elevation algorithm used at low elevation. In newer air traffic control radar equipment, algorithms are used to identify the false targets by comparing the current pulse returns to those adjacent, as well as calculating return improbabilities.
Jamming
Radar jamming refers to radio frequency signals originating from sources outside the radar, transmitting in the radar's frequency and thereby masking targets of interest. Jamming may be intentional, as with an electronic warfare tactic, or unintentional, as with friendly forces operating equipment that transmits using the same frequency range. Jamming is considered an active interference source, since it is initiated by elements outside the radar and in general unrelated to the radar signals.
Jamming is problematic to radar since the jamming signal only needs to travel one way (from the jammer to the radar receiver) whereas the radar echoes travel two ways (radar-target-radar) and are therefore significantly reduced in power by the time they return to the radar receiver, in accordance with the inverse-square law. Jammers can therefore be much less powerful than the radars they jam and still effectively mask targets along the line of sight from the jammer to the radar (mainlobe jamming). Jammers also affect radars along other lines of sight through the radar receiver's sidelobes (sidelobe jamming).
Mainlobe jamming can generally only be reduced by narrowing the mainlobe solid angle and cannot fully be eliminated when directly facing a jammer which uses the same frequency and polarization as the radar. Sidelobe jamming can be overcome by reducing receiving sidelobes in the radar antenna design and by using an omnidirectional antenna to detect and disregard non-mainlobe signals. Other anti-jamming techniques are frequency hopping and polarization.
Signal processing
Distance measurement
Transit time
One way to obtain a distance measurement (ranging) is based on the time-of-flight: transmit a short pulse of radio signal (electromagnetic radiation) and measure the time it takes for the reflection to return. The distance is one-half the round trip time multiplied by the speed of the signal. The factor of one-half comes from the fact that the signal has to travel to the object and back again. Since radio waves travel at the speed of light, accurate distance measurement requires high-speed electronics.
In most cases, the receiver does not detect the return while the signal is being transmitted. Through the use of a duplexer, the radar switches between transmitting and receiving at a predetermined rate.
A similar effect imposes a maximum range as well. In order to maximize range, longer times between pulses should be used, referred to as a pulse repetition time, or its reciprocal, pulse repetition frequency.
These two effects tend to be at odds with each other, and it is not easy to combine both good short range and good long range in a single radar. This is because the short pulses needed for a good minimum-range broadcast have less total energy, making the returns much smaller and the target harder to detect. This could be offset by using more pulses, but this would shorten the maximum range. So each radar uses a particular type of signal. Long-range radars tend to use long pulses with long delays between them, and short-range radars use smaller pulses with less time between them. As electronics have improved, many radars can now change their pulse repetition frequency, thereby changing their range. The newest radars fire two pulses during one cell, one for short range and a separate signal for longer ranges.
Distance may also be measured as a function of time. The radar mile is the time it takes for a radar pulse to travel one nautical mile, reflect off a target, and return to the radar antenna. Since a nautical mile is defined as 1,852 m, dividing this distance by the speed of light (299,792,458 m/s) and then multiplying the result by 2 yields a duration of 12.36 μs.
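A small sketch of the transit-time calculation, assuming nothing beyond the speed of light and the 1,852 m nautical mile quoted above; the function names and the example delay are illustrative.

C = 299_792_458.0          # speed of light, m/s
NAUTICAL_MILE = 1852.0     # metres

def range_from_delay(round_trip_s):
    """Target range: half the round-trip time multiplied by the speed of the signal."""
    return C * round_trip_s / 2

def radar_mile_s():
    """Round-trip time for one nautical mile out and back."""
    return 2 * NAUTICAL_MILE / C

print(range_from_delay(123.6e-6))   # ~18,527 m, roughly ten nautical miles
print(radar_mile_s() * 1e6)         # ~12.36 microseconds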
Frequency modulation
Another form of distance-measuring radar is based on frequency modulation. In these systems, the frequency of the transmitted signal is changed over time. Since the signal takes a finite time to travel to and from the target, the received signal is at a different frequency from what the transmitter is broadcasting at the time the reflected signal arrives back at the radar. By comparing the frequency of the two signals, the difference can be easily measured. This is easily accomplished with very high accuracy even with 1940s electronics. A further advantage is that the radar can operate effectively at relatively low frequencies. This was important in the early development of this type, when high-frequency signal generation was difficult or expensive.
This technique can be used in continuous wave radar and is often found in aircraft radar altimeters. In these systems a "carrier" radar signal is frequency modulated in a predictable way, typically varying up and down with a sine wave or sawtooth pattern at audio frequencies. The signal is then sent out from one antenna and received on another, typically located on the bottom of the aircraft, and the signal can be continuously compared using a simple beat frequency modulator that produces an audio frequency tone from the returned signal and a portion of the transmitted signal.
The modulation index riding on the receive signal is proportional to the time delay between the radar and the reflector. The frequency shift becomes greater with greater time delay. The frequency shift is directly proportional to the distance travelled. That distance can be displayed on an instrument, and it may also be available via the transponder. This signal processing is similar to that used in speed detecting Doppler radar. Example systems using this approach are AZUSA, MISTRAM, and UDOP.
Terrestrial radar uses low-power FM signals that cover a larger frequency range. The multiple reflections are analyzed mathematically for pattern changes, with multiple passes creating a computerized synthetic image. Doppler processing is used, which allows slow-moving objects to be detected while largely eliminating "noise" from the surfaces of bodies of water.
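The following sketch illustrates one common linear (sawtooth) FM-CW scheme consistent with the description above; the sweep bandwidth, sweep time, and beat frequency are assumed example values, and real altimeters differ in detail.

C = 299_792_458.0   # speed of light, m/s

def fmcw_range(beat_hz, sweep_bandwidth_hz, sweep_time_s):
    """Range from the beat frequency of a linear (sawtooth) FM-CW radar.

    The transmit frequency ramps by sweep_bandwidth_hz every sweep_time_s, so the delayed
    echo differs from the current transmit frequency by beat = (2 * range / c) * (bandwidth / sweep_time).
    """
    return C * beat_hz * sweep_time_s / (2 * sweep_bandwidth_hz)

# A radio-altimeter-style example: 100 MHz sweep over 1 ms with a 10 kHz beat tone.
print(fmcw_range(beat_hz=10e3, sweep_bandwidth_hz=100e6, sweep_time_s=1e-3))   # ~15 m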
Pulse compression
The two techniques outlined above both have their disadvantages. The pulse timing technique has an inherent tradeoff in that the accuracy of the distance measurement is inversely related to the length of the pulse, while the energy, and thus detection range, is directly related. Increasing power for longer range while maintaining accuracy demands extremely high peak power, with 1960s early warning radars often operating in the tens of megawatts. The continuous wave methods spread this energy out in time and thus require much lower peak power compared to pulse techniques, but require some method of allowing the sent and received signals to operate at the same time, often demanding two separate antennas.
The introduction of new electronics in the 1960s allowed the two techniques to be combined. It starts with a longer pulse that is also frequency modulated. Spreading the broadcast energy out in time means lower peak energies can be used, with modern examples typically on the order of tens of kilowatts. On reception, the signal is sent into a system that delays different frequencies by different times. The resulting output is a much shorter pulse that is suitable for accurate distance measurement, while also compressing the received energy into a much higher energy peak and thus improving the signal-to-noise ratio. The technique is largely universal on modern large radars.
Speed measurement
Speed is the change in distance to an object with respect to time. Thus the existing system for measuring distance, combined with a memory capacity to see where the target last was, is enough to measure speed. At one time the memory consisted of a user making grease pencil marks on the radar screen and then calculating the speed using a slide rule. Modern radar systems perform the equivalent operation faster and more accurately using computers.
If the transmitter's output is coherent (phase synchronized), there is another effect that can be used to make almost instant speed measurements (no memory is required), known as the Doppler effect. Most modern radar systems use this principle in Doppler radar and pulse-Doppler radar systems (weather radar, military radar). The Doppler effect is only able to determine the relative speed of the target along the line of sight from the radar to the target. Any component of target velocity perpendicular to the line of sight cannot be determined by using the Doppler effect alone, but it can be determined by tracking the target's azimuth over time.
It is possible to make a Doppler radar without any pulsing, known as a continuous-wave radar (CW radar), by sending out a very pure signal of a known frequency. CW radar is ideal for determining the radial component of a target's velocity. CW radar is typically used by traffic enforcement to measure vehicle speed quickly and accurately where the range is not important.
When using a pulsed radar, the variation between the phase of successive returns gives the distance the target has moved between pulses, and thus its speed can be calculated.
Other mathematical developments in radar signal processing include time-frequency analysis (Weyl Heisenberg or wavelet), as well as the chirplet transform which makes use of the change of frequency of returns from moving targets ("chirp").
Pulse-Doppler signal processing
Pulse-Doppler signal processing includes frequency filtering in the detection process. The space between each transmit pulse is divided into range cells or range gates. Each cell is filtered independently much like the process used by a spectrum analyzer to produce the display showing different frequencies. Each different distance produces a different spectrum. These spectra are used to perform the detection process. This is required to achieve acceptable performance in hostile environments involving weather, terrain, and electronic countermeasures.
The primary purpose is to measure both the amplitude and frequency of the aggregate reflected signal from multiple distances. This is used with weather radar to measure radial wind velocity and precipitation rate in each different volume of air. This is linked with computing systems to produce a real-time electronic weather map. Aircraft safety depends upon continuous access to accurate weather radar information that is used to prevent injuries and accidents. Weather radar uses a low PRF. Coherency requirements are not as strict as those for military systems because individual signals ordinarily do not need to be separated. Less sophisticated filtering is required, and range ambiguity processing is not normally needed with weather radar in comparison with military radar intended to track air vehicles.
The alternate purpose is the "look-down/shoot-down" capability required to improve military air combat survivability. Pulse-Doppler is also used for ground-based surveillance radar required to defend personnel and vehicles. Pulse-Doppler signal processing increases the maximum detection distance while using less radiation close to aircraft pilots, shipboard personnel, infantry, and artillery. Reflections from terrain, water, and weather produce signals much larger than those from aircraft and missiles, which allows fast-moving vehicles to hide using nap-of-the-earth flying techniques and stealth technology, avoiding detection until an attacking vehicle is too close to destroy. Pulse-Doppler signal processing incorporates more sophisticated electronic filtering that safely eliminates this kind of weakness. This requires the use of medium pulse-repetition frequency with phase-coherent hardware that has a large dynamic range. Military applications require medium PRF, which prevents range from being determined directly, so range ambiguity resolution processing is required to identify the true range of all reflected signals. Radial movement is usually linked with Doppler frequency to produce a lock signal that cannot be produced by radar jamming signals. Pulse-Doppler signal processing also produces audible signals that can be used for threat identification.
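A minimal sketch of the range-gate-and-filter idea described in this section, using NumPy: echoes are collected pulse by pulse for each range cell, and a spectrum is taken independently in every cell. The pulse count, gate count, and injected target are arbitrary assumptions used only to show the mechanics.

import numpy as np

rng = np.random.default_rng(0)
n_pulses, n_gates = 64, 100

# Simulated slow-time data: one complex sample per (pulse, range gate), mostly noise.
echoes = rng.normal(size=(n_pulses, n_gates)) + 1j * rng.normal(size=(n_pulses, n_gates))

# Inject a moving target in range gate 42: a constant Doppler phase progression pulse to pulse.
doppler_bin = 10
echoes[:, 42] += 5 * np.exp(2j * np.pi * doppler_bin * np.arange(n_pulses) / n_pulses)

# Filter each range gate independently: an FFT across pulses gives a Doppler spectrum per cell.
spectra = np.abs(np.fft.fft(echoes, axis=0))

dop, gate = np.unravel_index(np.argmax(spectra), spectra.shape)
print(gate, dop)   # expected: range gate 42, Doppler bin 10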
Reduction of interference effects
Signal processing is employed in radar systems to reduce the radar interference effects. Signal processing techniques include moving target indication, Pulse-Doppler signal processing, moving target detection processors, correlation with secondary surveillance radar targets, space-time adaptive processing, and track-before-detect. Constant false alarm rate and digital terrain model processing are also used in clutter environments.
Plot and track extraction
A track algorithm is a radar performance enhancement strategy. Tracking algorithms provide the ability to predict the future position of multiple moving objects based on the history of the individual positions being reported by sensor systems.
Historical information is accumulated and used to predict future position for use with air traffic control, threat estimation, combat system doctrine, gun aiming, and missile guidance. Position data is accumulated by radar sensors over the span of a few minutes.
There are four common track algorithms:
Nearest neighbour algorithm
Probabilistic Data Association
Multiple Hypothesis Tracking
Interactive Multiple Model (IMM)
Radar video returns from aircraft can be subjected to a plot extraction process whereby spurious and interfering signals are discarded. A sequence of target returns can be monitored through a device known as a plot extractor.
The non-relevant real time returns can be removed from the displayed information and a single plot displayed. In some radar systems, or alternatively in the command and control system to which the radar is connected, a radar tracker is used to associate the sequence of plots belonging to individual targets and estimate the targets' headings and speeds.
Engineering
A radar's components are:
A transmitter that generates the radio signal with an oscillator such as a klystron or a magnetron and controls its duration by a modulator.
A waveguide that links the transmitter and the antenna.
A duplexer that serves as a switch between the antenna and the transmitter or the receiver for the signal when the antenna is used in both situations.
A receiver. Knowing the shape of the desired received signal (a pulse), an optimal receiver can be designed using a matched filter.
A display processor to produce signals for human readable output devices.
An electronic section that controls all those devices and the antenna to perform the radar scan ordered by software.
A link to end user devices and displays.
Antenna design
Radio signals broadcast from a single antenna will spread out in all directions, and likewise a single antenna will receive signals equally from all directions. This leaves the radar with the problem of deciding where the target object is located.
Early systems tended to use omnidirectional broadcast antennas, with directional receiver antennas which were pointed in various directions. For instance, the first system to be deployed, Chain Home, used two straight antennas at right angles for reception, each on a different display. The maximum return would be detected with an antenna at right angles to the target, and a minimum with the antenna pointed directly at it (end on). The operator could determine the direction to a target by rotating the antenna so one display showed a maximum while the other showed a minimum.
One serious limitation with this type of solution is that the broadcast is sent out in all directions, so the amount of energy in the direction being examined is a small part of that transmitted. To get a reasonable amount of power on the "target", the transmitting aerial should also be directional.
Parabolic reflector
More modern systems use a steerable parabolic "dish" to create a tight broadcast beam, typically using the same dish as the receiver. Such systems often combine two radar frequencies in the same antenna in order to allow automatic steering, or radar lock.
Parabolic reflectors can be either symmetric parabolas or spoiled parabolas:
Symmetric parabolic antennas produce a narrow "pencil" beam in both the X and Y dimensions and consequently have a higher gain. The NEXRAD Pulse-Doppler weather radar uses a symmetric antenna to perform detailed volumetric scans of the atmosphere.
Spoiled parabolic antennas produce a narrow beam in one dimension and a relatively wide beam in the other. This feature is useful if target detection over a wide range of angles is more important than target location in three dimensions. Most 2D surveillance radars use a spoiled parabolic antenna with a narrow azimuthal beamwidth and wide vertical beamwidth. This beam configuration allows the radar operator to detect an aircraft at a specific azimuth but at an indeterminate height. Conversely, so-called "nodder" height finding radars use a dish with a narrow vertical beamwidth and wide azimuthal beamwidth to detect an aircraft at a specific height but with low azimuthal precision.
Types of scan
Primary Scan: A scanning technique where the main antenna aerial is moved to produce a scanning beam, examples include circular scan, sector scan, etc.
Secondary Scan: A scanning technique where the antenna feed is moved to produce a scanning beam, examples include conical scan, unidirectional sector scan, lobe switching, etc.
Palmer Scan: A scanning technique that produces a scanning beam by moving the main antenna and its feed. A Palmer Scan is a combination of a Primary Scan and a Secondary Scan.
Conical scanning: The radar beam is rotated in a small circle around the "boresight" axis, which is pointed at the target.
Slotted waveguide
Applied similarly to the parabolic reflector, the slotted waveguide is moved mechanically to scan and is particularly suitable for non-tracking surface scan systems, where the vertical pattern may remain constant. Owing to its lower cost and less wind exposure, shipboard, airport surface, and harbour surveillance radars now use this approach in preference to a parabolic antenna.
Phased array
Another method of steering is used in a phased array radar.
Phased array antennas are composed of evenly spaced similar antenna elements, such as aerials or rows of slotted waveguide. Each antenna element or group of antenna elements incorporates a discrete phase shift that produces a phase gradient across the array. For example, array elements producing a 5 degree phase shift for each wavelength across the array face will produce a beam pointed 5 degrees away from the centerline perpendicular to the array face. Signals travelling along that beam will be reinforced. Signals offset from that beam will be cancelled. The amount of reinforcement is antenna gain. The amount of cancellation is side-lobe suppression.
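A small sketch of the phase-gradient steering described above for a uniform linear array; the element count, spacing, wavelength, and steering angle are assumed example values.

import math

def element_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    """Phase shift (radians) applied to each element of a uniform linear array.

    A linear phase gradient of 2*pi*d*sin(theta)/lambda per element points the beam
    theta degrees away from broadside (the direction perpendicular to the array face).
    """
    step = 2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength_m
    return [n * step for n in range(n_elements)]

# Steer an 8-element, half-wavelength-spaced array 5 degrees off broadside.
phases = element_phases(n_elements=8, spacing_m=0.05, wavelength_m=0.10, steer_deg=5)
print([round(p, 3) for p in phases])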
Phased array radars have been in use since the earliest years of radar in World War II (Mammut radar), but electronic device limitations led to poor performance. Phased array radars were originally used for missile defence (see for example Safeguard Program). They are the heart of the ship-borne Aegis Combat System and the Patriot Missile System. The massive redundancy associated with having a large number of array elements increases reliability at the expense of gradual performance degradation that occurs as individual phase elements fail. To a lesser extent, phased array radars have been used in weather surveillance. As of 2017, NOAA plans to implement a national network of multi-function phased array radars throughout the United States within 10 years, for meteorological studies and flight monitoring.
Phased array antennas can be built to conform to specific shapes, like missiles, infantry support vehicles, ships, and aircraft.
As the price of electronics has fallen, phased array radars have become more common. Almost all modern military radar systems are based on phased arrays, where the small additional cost is offset by the improved reliability of a system with no moving parts. Traditional moving-antenna designs are still widely used in roles where cost is a significant factor such as air traffic surveillance and similar systems.
Phased array radars are valued for use in aircraft since they can track multiple targets. The first aircraft to use a phased array radar was the B-1B Lancer. The first fighter aircraft to use phased array radar was the Mikoyan MiG-31. The MiG-31M's SBI-16 Zaslon passive electronically scanned array radar was considered to be the world's most powerful fighter radar, until the AN/APG-77 active electronically scanned array was introduced on the Lockheed Martin F-22 Raptor.
Phased-array interferometry or aperture synthesis techniques, using an array of separate dishes that are phased into a single effective aperture, are not typical for radar applications, although they are widely used in radio astronomy. Because of the thinned array curse, such multiple aperture arrays, when used in transmitters, result in narrow beams at the expense of reducing the total power transmitted to the target. In principle, such techniques could increase spatial resolution, but the lower power means that this is generally not effective.
Aperture synthesis by post-processing motion data from a single moving source, on the other hand, is widely used in space and airborne radar systems.
Frequency bands
Antennas generally have to be sized similar to the wavelength of the operational frequency, normally within an order of magnitude. This provides a strong incentive to use shorter wavelengths as this will result in smaller antennas. Shorter wavelengths also result in higher resolution due to diffraction, meaning the shaped reflector seen on most radars can also be made smaller for any desired beamwidth.
Opposing the move to smaller wavelengths are a number of practical issues. For one, the electronics needed to produce high-power very short wavelengths were generally more complex and expensive than the electronics needed for longer wavelengths, or did not exist at all. Another issue is that the radar equation's effective aperture figure means that any given antenna (or reflector) size will be more efficient at longer wavelengths. Additionally, shorter wavelengths may interact with molecules or raindrops in the air, scattering the signal. Very long wavelengths also have additional diffraction effects that make them suitable for over-the-horizon radars. For this reason, a wide variety of wavelengths are used in different roles.
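As a rough illustration of the antenna-size and wavelength trade-off, the sketch below applies the common rule of thumb that beamwidth is of order wavelength divided by aperture; the aperture and wavelengths are assumed values, and real antennas include an efficiency factor this sketch omits.

import math

def beamwidth_deg(wavelength_m, aperture_m):
    """Approximate diffraction-limited beamwidth (degrees) of an aperture antenna,
    using the rule of thumb beamwidth ~ wavelength / aperture (in radians)."""
    return math.degrees(wavelength_m / aperture_m)

# The same 1-metre dish gives a much narrower beam at 3 cm (X band) than at 23 cm (L band).
print(beamwidth_deg(0.03, 1.0))   # ~1.7 degrees
print(beamwidth_deg(0.23, 1.0))   # ~13 degrees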
The traditional band names originated as code-names during World War II and are still in military and aviation use throughout the world. They have been adopted in the United States by the Institute of Electrical and Electronics Engineers and internationally by the International Telecommunication Union. Most countries have additional regulations to control which parts of each band are available for civilian or military use.
Other users of the radio spectrum, such as the broadcasting and electronic countermeasures industries, have replaced the traditional military designations with their own systems.
Modulators
Modulators act to provide the waveform of the RF-pulse. There are two different radar modulator designs:
High voltage switch for non-coherent keyed power-oscillators. These modulators consist of a high voltage pulse generator formed from a high voltage supply, a pulse forming network, and a high voltage switch such as a thyratron. They generate short pulses of power to feed, e.g., the magnetron, a special type of vacuum tube that converts DC (usually pulsed) into microwaves. This technology is known as pulsed power. In this way, the transmitted pulse of RF radiation is kept to a defined and usually very short duration.
Hybrid mixers, fed by a waveform generator and an exciter for a complex but coherent waveform. This waveform can be generated by low power/low-voltage input signals. In this case the radar transmitter must be a power-amplifier, e.g., a klystron or a solid state transmitter. In this way, the transmitted pulse is intrapulse-modulated and the radar receiver must use pulse compression techniques.
Coolant
Coherent microwave amplifiers operating above 1,000 watts microwave output, like travelling wave tubes and klystrons, require liquid coolant. The electron beam must contain 5 to 10 times more power than the microwave output, which can produce enough heat to generate plasma. This plasma flows from the collector toward the cathode. The same magnetic focusing that guides the electron beam forces the plasma into the path of the electron beam but flowing in the opposite direction. This introduces FM modulation which degrades Doppler performance. To prevent this, liquid coolant with minimum pressure and flow rate is required, and deionized water is normally used in most high power surface radar systems that use Doppler processing.
Coolanol (silicate ester) was used in several military radars in the 1970s. However, it is hygroscopic, leading to hydrolysis and formation of highly flammable alcohol. The loss of a U.S. Navy aircraft in 1978 was attributed to a silicate ester fire. Coolanol is also expensive and toxic. The U.S. Navy has instituted a program named Pollution Prevention (P2) to eliminate or reduce the volume and toxicity of waste, air emissions, and effluent discharges. Because of this, Coolanol is used less often today.
Regulations
Radar (also: RADAR) is defined by article 1.100 of the International Telecommunication Union's (ITU) ITU Radio Regulations (RR) as:
Configurations
Radars come in a variety of configurations in the emitter, the receiver, the antenna, the wavelength, the scan strategies, etc.
Bistatic radar
Continuous-wave radar
Doppler radar
FM-CW radar
Monopulse radar
Passive radar
Planar array radar
Pulse-Doppler radar
Synthetic-aperture radar
Synthetically thinned aperture radar
Over-the-horizon radar with chirp transmitter
Random variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. The term 'random variable' in its mathematical definition refers to neither randomness nor variability but instead is a mathematical function in which
the domain is the set of possible outcomes in a sample space (e.g. the set {heads, tails}, the possible upper sides of a flipped coin resulting from tossing the coin); and
the range is a measurable space (e.g. corresponding to the domain above, the range might be the set {−1, 1} if, say, heads mapped to −1 and tails mapped to 1). Typically, the range of a random variable is a subset of the real numbers.
Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup.
In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability measure space (called the sample space) to a measurable space. This allows consideration of the pushforward measure, which is called the distribution of the random variable; the distribution is thus a probability measure on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be independent.
It is common to consider the special cases of discrete random variables and absolutely continuous random variables, corresponding to whether a random variable is valued in a countable subset or in an interval of real numbers. There are other important possibilities, especially in the theory of stochastic processes, wherein it is natural to consider random sequences or random functions. Sometimes a random variable is taken to be automatically valued in the real numbers, with more general random quantities instead being called random elements.
According to George Mackey, Pafnuty Chebyshev was the first person "to think systematically in terms of random variables".
Definition
A random variable X is a measurable function X : Ω → E from a sample space Ω (a set of possible outcomes) to a measurable space E. The technical axiomatic definition requires the sample space Ω to be the sample space of a probability triple (Ω, ℱ, P) (see the measure-theoretic definition). A random variable is often denoted by capital Roman letters such as X, Y, Z, T.
The probability that X takes on a value in a measurable set S ⊆ E is written as
P(X ∈ S) = P({ω ∈ Ω | X(ω) ∈ S}).
Standard case
In many cases, X is real-valued, i.e. E = ℝ. In some contexts, the term random element (see extensions) is used to denote a random variable not of this form.
When the image (or range) of X is finitely or infinitely countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. it can be described by a probability mass function that assigns a probability to each value in the image of X. If the image is uncountably infinite (usually an interval) then X is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous.
Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value.
Extensions
The term "random variable" in statistics is traditionally limited to the real-valued case (). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.
However, the definition above is valid for any measurable space E of values. Thus one can consider random elements of other sets E, such as random Boolean values, categorical values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, and functions. One may then specifically refer to a random variable of type E, or an E-valued random variable.
This more general concept of a random element is particularly useful in disciplines such as graph theory, machine learning, natural language processing, and other fields in discrete mathematics and computer science, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of E using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space Ω, which allows the different random variables to covary). For example:
A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are (1 0 0 0 ⋯), (0 1 0 0 ⋯), (0 0 1 0 ⋯), and so on, and the position of the 1 indicates the word.
A random sentence of given length may be represented as a vector of random words.
A random graph on N given vertices may be represented as an N × N matrix of random variables, whose values specify the adjacency matrix of the random graph.
A random function F may be represented as a collection of random variables F(x), giving the function's values at the various points x in the function's domain. The F(x) are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a random vector is a random function of some index set such as 1, 2, …, n, and a random field is a random function on any set (typically time, space, or a discrete set).
Distribution functions
If a random variable X defined on the probability space (Ω, ℱ, P) is given, we can ask questions like "How likely is it that the value of X is equal to 2?". This is the same as the probability of the event {ω : X(ω) = 2}, which is often written as P(X = 2) or p_X(2) for short.
Recording all these probabilities of outputs of a random variable X yields the probability distribution of X. The probability distribution "forgets" about the particular probability space used to define X and only records the probabilities of various output values of X. Such a probability distribution, if X is real-valued, can always be captured by its cumulative distribution function
F_X(x) = P(X ≤ x)
and sometimes also using a probability density function, f_X. In measure-theoretic terms, we use the random variable X to "push forward" the measure P on Ω to a measure p_X on ℝ. The measure p_X is called the "(probability) distribution of X" or the "law of X".
The density f_X = dp_X / dμ is the Radon–Nikodym derivative of p_X with respect to some reference measure μ on ℝ (often, this reference measure is the Lebesgue measure in the case of continuous random variables, or the counting measure in the case of discrete random variables).
The underlying probability space Ω is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space Ω altogether and just puts a measure on ℝ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development.
Examples
Discrete random variable
Consider an experiment where a person is chosen at random. An example of a random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to their height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm.
Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum PMF(0) + PMF(2) + PMF(4) + ⋯.
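As an illustration of summing a PMF over the even integers, the sketch below assumes, purely for demonstration, that the number of children follows a Poisson distribution with mean 2; nothing in the text specifies this distribution, and any PMF on the non-negative integers could be substituted.

import math

def prob_even_children(mean, terms=50):
    """Probability of an even count, summing the PMF over 0, 2, 4, ...

    The count is assumed (for illustration only) to be Poisson(mean)."""
    def pmf(k):
        return math.exp(-mean) * mean**k / math.factorial(k)
    return sum(pmf(2 * k) for k in range(terms))

print(prob_even_children(2.0))            # ~0.5092
print((1 + math.exp(-2 * 2.0)) / 2)       # closed form (1 + e^(-2*mean)) / 2, same value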
In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed.
If {a_n}, {b_n} are countable sets of real numbers, with each b_n > 0 and Σ_n b_n = 1, then F(x) = Σ_n b_n δ_{a_n}(x) is a discrete distribution function. Here δ_t(x) = 0 for x < t and δ_t(x) = 1 for x ≥ t. Taking for instance an enumeration of all rational numbers as {a_n}, one gets a discrete function that is not necessarily a step function (piecewise constant).
Coin toss
The possible outcomes for one coin toss can be described by the sample space Ω = {heads, tails}. We can introduce a real-valued random variable Y that models a $1 payoff for a successful bet on heads as follows:
Y(ω) = 1 if ω = heads, and Y(ω) = 0 if ω = tails.
If the coin is a fair coin, Y has a probability mass function f_Y given by:
f_Y(y) = 1/2 if y = 1, and f_Y(y) = 1/2 if y = 0.
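A minimal sketch of this coin-toss random variable and its probability mass function; the function names are ad-hoc, and the simulation at the end is only an empirical sanity check.

import random

def Y(outcome):
    """Random variable modelling a $1 payoff for a successful bet on heads."""
    return 1 if outcome == "heads" else 0

def pmf_Y(y):
    """Probability mass function of Y for a fair coin."""
    return 0.5 if y in (0, 1) else 0.0

# Empirical check: the long-run frequency of Y == 1 approaches pmf_Y(1) = 0.5.
samples = [Y(random.choice(["heads", "tails"])) for _ in range(100_000)]
print(sum(samples) / len(samples), pmf_Y(1))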
Dice roll
A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers n1 and n2 from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable X given by the function that maps the pair to the sum:
X((n1, n2)) = n1 + n2
and (if the dice are fair) has a probability mass function fX given by:
fX(S) = (6 − |7 − S|) / 36, for S ∈ {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
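The same construction can be checked by enumeration. The sketch below builds the PMF of the two-dice sum by counting outcome pairs, assuming fair dice as stated above.

from collections import Counter
from fractions import Fraction
from itertools import product

# X maps each outcome (n1, n2) to the sum n1 + n2.
outcomes = list(product(range(1, 7), repeat=2))
counts = Counter(n1 + n2 for n1, n2 in outcomes)

# For fair dice every outcome has probability 1/36, so f_X(s) = count(s) / 36.
pmf_X = {s: Fraction(c, 36) for s, c in sorted(counts.items())}
print(pmf_X[7], pmf_X[2])   # 1/6 and 1/36, matching the formula above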
Continuous random variable
Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere. There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value c (formally, for all c ∈ ℝ: Pr(X = c) = 0) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures;
such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part.
An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, X = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any range of values. For example, the probability of choosing a number in [0, 180] is 180/360 = 1/2. Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set.
More formally, given any interval $I = [a, b]$, a random variable $X_I \sim \mathrm{U}(I) = \mathrm{U}[a, b]$ is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of $X_I$ falling in any subinterval $[c, d] \subseteq [a, b]$ is proportional to the length of the subinterval, that is, if $a \le c \le d \le b$, one has
$$\Pr\left(X_I \in [c, d]\right) = \frac{d - c}{b - a},$$
where the last equality results from the unitarity axiom of probability. The probability density function of a CURV $X \sim \mathrm{U}[a, b]$ is given by the indicator function of its interval of support normalized by the interval's length:
$$f_X(x) = \begin{cases} \dfrac{1}{b - a}, & a \le x \le b, \\ 0, & \text{otherwise}. \end{cases}$$
Of particular interest is the uniform distribution on the unit interval $[0, 1]$. Samples of any desired probability distribution can be generated by calculating the quantile function of that distribution on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.
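As a minimal sketch of this idea (often called inverse transform sampling), the fragment below generates exponential samples by applying the exponential quantile function to uniform numbers on the unit interval; the rate parameter is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=1_000_000)   # uniform on the unit interval

lam = 2.0                                   # assumed rate of the target exponential distribution
x = -np.log(1.0 - u) / lam                  # quantile function of Exp(lam) applied to u

print(x.mean())                             # should be close to 1/lam = 0.5
```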
Mixed type
A mixed random variable is a random variable whose cumulative distribution function is neither discrete nor everywhere-continuous. It can be realized as a mixture of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.
An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = −1; otherwise X = the value of the spinner as in the preceding example. There is a probability of $\tfrac{1}{2}$ that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example.
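This mixed coin-and-spinner variable can be simulated directly; the sketch below (with an arbitrary sample size) should recover an atom of probability 1/2 at −1 and probability 1/4 for the interval [0, 180]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
heads = rng.random(n) < 0.5                  # fair coin flip
spin = rng.uniform(0.0, 360.0, size=n)       # spinner value, used only on heads
X = np.where(heads, spin, -1.0)

print(float(np.mean(X == -1)))                 # ~0.5: the discrete atom at -1
print(float(np.mean((X >= 0) & (X <= 180))))   # ~0.25: half of the continuous example's 1/2
```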
Most generally, every probability distribution on the real line is a mixture of a discrete part, a singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers).
Measure-theoretic definition
The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals.
The measure-theoretic definition is as follows.
Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(E, \mathcal{E})$ a measurable space. Then an $(E, \mathcal{E})$-valued random variable is a measurable function $X\colon \Omega \to E$, which means that, for every subset $B \in \mathcal{E}$, its preimage is $\mathcal{F}$-measurable; $X^{-1}(B) \in \mathcal{F}$, where $X^{-1}(B) = \{\omega : X(\omega) \in B\}$. This definition enables us to measure any subset $B \in \mathcal{E}$ in the target space by looking at its preimage, which by assumption is measurable.
In more intuitive terms, a member of $\Omega$ is a possible outcome, a member of $\mathcal{F}$ is a measurable subset of possible outcomes, the function $P$ gives the probability of each such measurable subset, $E$ represents the set of values that the random variable can take (such as the set of real numbers), and a member of $\mathcal{E}$ is a "well-behaved" (measurable) subset of $E$ (those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability.
When $E$ is a topological space, then the most common choice for the σ-algebra $\mathcal{E}$ is the Borel σ-algebra $\mathcal{B}(E)$, which is the σ-algebra generated by the collection of all open sets in $E$. In such case the $(E, \mathcal{E})$-valued random variable is called an $E$-valued random variable. Moreover, when the space $E$ is the real line $\mathbb{R}$, then such a real-valued random variable is called simply a random variable.
Real-valued random variables
In this case the observation space is the set of real numbers. Recall, $(\Omega, \mathcal{F}, P)$ is the probability space. For a real observation space, the function $X\colon \Omega \to \mathbb{R}$ is a real-valued random variable if
$$\{\omega : X(\omega) \le r\} \in \mathcal{F} \qquad \text{for all } r \in \mathbb{R}.$$
This definition is a special case of the above because the set $\{(-\infty, r] : r \in \mathbb{R}\}$ generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that $\{\omega : X(\omega) \le r\} = X^{-1}((-\infty, r])$.
Moments
The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted $\operatorname{E}[X]$, and also called the first moment. In general, $\operatorname{E}[f(X)]$ is not equal to $f(\operatorname{E}[X])$. Once the "average value" is known, one could then ask how far from this average value the values of $X$ typically are, a question that is answered by the variance and standard deviation of a random variable. $\operatorname{E}[X]$ can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of $X$.
Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables $X$, find a collection $\{f_i\}$ of functions such that the expectation values $\operatorname{E}[f_i(X)]$ fully characterise the distribution of the random variable $X$.
Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function $f(X) = X$ of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable X that can take on the nominal values "red", "blue" or "green", the real-valued function $[X = \text{green}]$ can be constructed; this uses the Iverson bracket, and has the value 1 if $X$ has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined.
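For instance, under hypothetical category probabilities (the values 0.5, 0.3 and 0.2 below are illustrative assumptions), the expected value of this Iverson-bracket function equals the probability of the category "green":

```python
import numpy as np

rng = np.random.default_rng(0)
colors = ["red", "blue", "green"]
p = [0.5, 0.3, 0.2]                          # assumed category probabilities, for illustration only
X = rng.choice(colors, size=1_000_000, p=p)

f = (X == "green").astype(float)             # Iverson bracket [X = "green"]
print(f.mean())                              # E[f(X)] should be close to 0.2
```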
Functions of random variables
A new random variable Y can be defined by applying a real Borel measurable function $g\colon \mathbb{R} \to \mathbb{R}$ to the outcomes of a real-valued random variable $X$. That is, $Y = g(X)$. The cumulative distribution function of $Y$ is then
$$F_Y(y) = \operatorname{P}(g(X) \le y).$$
If function $g$ is invertible (i.e., $h = g^{-1}$ exists, where $h$ is $g$'s inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain
$$F_Y(y) = \operatorname{P}(g(X) \le y) = \begin{cases} \operatorname{P}(X \le h(y)) = F_X(h(y)), & \text{if } h = g^{-1} \text{ is increasing}, \\ \operatorname{P}(X \ge h(y)) = 1 - F_X(h(y)), & \text{if } h = g^{-1} \text{ is decreasing}. \end{cases}$$
With the same hypotheses of invertibility of $g$, assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to $y$, in order to obtain
$$f_Y(y) = f_X(h(y)) \left| \frac{\mathrm{d}h(y)}{\mathrm{d}y} \right|.$$
If there is no invertibility of $g$ but each $y$ admits at most a countable number of roots (i.e., a finite, or countably infinite, number of $x_i$ such that $y = g(x_i)$), then the previous relation between the probability density functions can be generalized with
$$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{\mathrm{d}g_i^{-1}(y)}{\mathrm{d}y} \right|,$$
where $x_i = g_i^{-1}(y)$, according to the inverse function theorem. The formulas for densities do not demand $g$ to be increasing.
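As a numerical sanity check of the monotone change-of-variables formula (a sketch with an assumed transformation, not drawn from the article), take X standard normal and Y = exp(X), so that h(y) = ln y and |h'(y)| = 1/y:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = np.exp(x)                                # monotone increasing g(x) = exp(x)

def f_X(t):                                  # standard normal density
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

def f_Y(t):                                  # change of variables: f_X(ln t) * (1/t)
    return f_X(np.log(t)) / t

# Compare the formula with a histogram estimate in a narrow bin around y0
y0, w = 1.5, 0.01
empirical = np.mean((y > y0 - w / 2) & (y < y0 + w / 2)) / w
print(float(empirical), float(f_Y(y0)))      # both should be close to about 0.245
```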
In the measure-theoretic, axiomatic approach to probability, if a random variable $X$ on $\Omega$ and a Borel measurable function $g\colon \mathbb{R} \to \mathbb{R}$ are given, then $Y = g(X)$ is also a random variable on $\Omega$, since the composition of measurable functions is also measurable. (However, this is not necessarily true if $g$ is Lebesgue measurable.) The same procedure that allowed one to go from a probability space $(\Omega, P)$ to $(\mathbb{R}, dF_X)$ can be used to obtain the distribution of $Y$.
Example 1
Let $X$ be a real-valued, continuous random variable and let $Y = X^2$.
If $y < 0$, then $\operatorname{P}(X^2 \le y) = 0$, so
$$F_Y(y) = 0 \qquad \text{if } y < 0.$$
If $y \ge 0$, then
$$\operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}),$$
so
$$F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y}) \qquad \text{if } y \ge 0.$$
Example 2
Suppose $X$ is a random variable with a cumulative distribution
$$F_X(x) = P(X \le x) = \frac{1}{\left(1 + e^{-x}\right)^{\theta}},$$
where $\theta > 0$ is a fixed parameter. Consider the random variable $Y = \log\left(1 + e^{-X}\right).$ Then,
$$F_Y(y) = P(Y \le y) = P\left(\log\left(1 + e^{-X}\right) \le y\right) = P\left(X \ge -\log\left(e^{y} - 1\right)\right).$$
The last expression can be calculated in terms of the cumulative distribution of $X,$ so
$$F_Y(y) = 1 - F_X\left(-\log\left(e^{y} - 1\right)\right) = 1 - \frac{1}{\left(e^{y}\right)^{\theta}} = 1 - e^{-y\theta},$$
which is the cumulative distribution function (CDF) of an exponential distribution.
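A quick simulation (a sketch with an assumed value of θ) reproduces this: sampling X by inverting its CDF and applying Y = log(1 + e^{-X}) yields values whose mean matches 1/θ, as expected for an exponential distribution with rate θ.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                                   # assumed value of the fixed parameter
u = rng.uniform(0.0, 1.0, size=1_000_000)

x = -np.log(u ** (-1.0 / theta) - 1.0)        # invert F_X(x) = (1 + e^{-x})^{-theta}
y = np.log1p(np.exp(-x))                      # Y = log(1 + e^{-X})

print(float(y.mean()), 1.0 / theta)           # an exponential with rate theta has mean 1/theta
```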
Example 3
Suppose $X$ is a random variable with a standard normal distribution, whose density is
$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}.$$
Consider the random variable $Y = X^2.$ We can find the density using the above formula for a change of variables:
$$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{\mathrm{d}g_i^{-1}(y)}{\mathrm{d}y} \right|.$$
In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.,
$$f_Y(y) = 2 f_X(g^{-1}(y)) \left| \frac{\mathrm{d}g^{-1}(y)}{\mathrm{d}y} \right|.$$
The inverse transformation is
$$x = g^{-1}(y) = \sqrt{y}$$
and its derivative is
$$\frac{\mathrm{d}g^{-1}(y)}{\mathrm{d}y} = \frac{1}{2\sqrt{y}}.$$
Then,
$$f_Y(y) = 2 \, \frac{1}{\sqrt{2\pi}} e^{-y/2} \, \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}} e^{-y/2}, \qquad y \ge 0.$$
This is a chi-squared distribution with one degree of freedom.
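A short simulation sketch confirms the result: squaring standard normal samples gives values whose mean and variance match those of a chi-squared distribution with one degree of freedom (mean 1, variance 2).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x ** 2                                    # Y = X^2

print(float(y.mean()), float(y.var()))        # should be close to 1 and 2
```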
Example 4
Suppose $X$ is a random variable with a normal distribution, whose density is
$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-(x - \mu)^2 / (2\sigma^2)}.$$
Consider the random variable $Y = X^2.$ We can find the density using the above formula for a change of variables:
$$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{\mathrm{d}g_i^{-1}(y)}{\mathrm{d}y} \right|.$$
In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). Differently from the previous example, in this case however, there is no symmetry and we have to compute the two distinct terms:
$$f_Y(y) = f_X(g_1^{-1}(y)) \left| \frac{\mathrm{d}g_1^{-1}(y)}{\mathrm{d}y} \right| + f_X(g_2^{-1}(y)) \left| \frac{\mathrm{d}g_2^{-1}(y)}{\mathrm{d}y} \right|.$$
The inverse transformation is
$$x = g_{1,2}^{-1}(y) = \pm\sqrt{y}$$
and its derivative is
$$\frac{\mathrm{d}g_{1,2}^{-1}(y)}{\mathrm{d}y} = \pm\frac{1}{2\sqrt{y}}.$$
Then,
$$f_Y(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \, \frac{1}{2\sqrt{y}} \left( e^{-(\sqrt{y} - \mu)^2 / (2\sigma^2)} + e^{-(-\sqrt{y} - \mu)^2 / (2\sigma^2)} \right), \qquad y \ge 0.$$
This is a noncentral chi-squared distribution with one degree of freedom.
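For a numerical check (a sketch assuming σ = 1 and an arbitrary mean μ), the squared samples should have mean 1 + λ and variance 2(1 + 2λ), where λ = μ² is the noncentrality parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.5                                      # assumed mean; sigma = 1 for simplicity
x = rng.normal(mu, 1.0, size=1_000_000)
y = x ** 2

lam = mu ** 2                                 # noncentrality parameter
print(float(y.mean()), 1 + lam)               # both should be close to 3.25
print(float(y.var()), 2 * (1 + 2 * lam))      # both should be close to 11
```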
Some properties
The probability distribution of the sum of two independent random variables is the convolution of each of their distributions.
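For discrete distributions this convolution can be computed directly; the sketch below convolves the mass function of a single fair die with itself and recovers the two-dice distribution discussed earlier.

```python
import numpy as np

die = np.full(6, 1 / 6)            # pmf of one fair die on the values 1..6

total = np.convolve(die, die)      # pmf of the sum of two independent dice, supported on 2..12
print(total)                       # 1/36, 2/36, ..., 6/36, ..., 2/36, 1/36
```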
Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
Equivalence of random variables
There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution.
In increasing order of strength, the precise definition of these notions of equivalence is given below.
Equality in distribution
If the sample space is a subset of the real line, random variables X and Y are equal in distribution (denoted $X \stackrel{d}{=} Y$) if they have the same distribution functions:
$$\operatorname{P}(X \le x) = \operatorname{P}(Y \le x) \qquad \text{for all } x.$$
To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform.
Almost sure equality
Two random variables X and Y are equal almost surely (denoted $X \stackrel{\text{a.s.}}{=} Y$) if, and only if, the probability that they are different is zero:
$$\operatorname{P}(X \ne Y) = 0.$$
For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:
$$d_\infty(X, Y) = \operatorname{ess\,sup}_\omega \, |X(\omega) - Y(\omega)|,$$
where "ess sup" represents the essential supremum in the sense of measure theory.
Equality
Finally, the two random variables X and Y are equal if they are equal as functions on their measurable space:
$$X(\omega) = Y(\omega) \qquad \text{for all } \omega.$$
This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.
Convergence
A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem.
There are various senses in which a sequence of random variables can converge to a random variable . These are explained in the article on convergence of random variables.
| Mathematics | Statistics and probability | null |
25715 | https://en.wikipedia.org/wiki/Rail%20transport | Rail transport | Rail transport (also known as train transport) is a means of transport using wheeled vehicles running on tracks, which usually consist of two parallel steel rails. Rail transport is one of the two primary means of land transport, next to road transport. It is used for about 8% of passenger and freight transport globally, thanks to its energy efficiency and potentially high speed. The track also spreads the weight of the train, which means larger amounts can be carried than with trucks on roads. Rolling stock on rails generally encounters lower frictional resistance than rubber-tyred road vehicles, allowing rail cars to be coupled into longer trains. Power is usually provided by diesel or electric locomotives. While railway transport is capital-intensive and less flexible than road transport, it can carry heavy loads of passengers and cargo with greater energy efficiency and safety.
Precursors of railways driven by human or animal power have existed since antiquity, but modern rail transport began with the invention of the steam locomotive in the United Kingdom at the beginning of the 19th century. The first passenger railway, the Stockton and Darlington Railway, opened in 1825. The quick spread of railways throughout Europe and North America, following the 1830 opening of the first intercity connection in England, was a key component of the Industrial Revolution. The adoption of rail transport lowered shipping costs compared to water transport, leading to "national markets" in which prices varied less from city to city.
In the 1880s, railway electrification began with tramways and rapid transit systems. Starting in the 1940s, steam locomotives were replaced by diesel locomotives. The first high-speed railway system was introduced in Japan in 1964, and high-speed rail lines now connect many cities in Europe, East Asia, and the eastern United States. Following some decline due to competition from cars and airplanes, rail transport has had a revival in recent decades due to road congestion and rising fuel prices, as well as governments investing in rail as a means of reducing CO2 emissions.
History
Smooth, durable road surfaces have been made for wheeled vehicles since prehistoric times. In some cases, they were narrow and in pairs to support only the wheels. That is, they were wagonways or tracks. Some had grooves or flanges or other mechanical means to keep the wheels on track.
For example, evidence indicates that a 6 to 8.5 km long Diolkos paved trackway transported boats across the Isthmus of Corinth in Greece from around 600 BC. The Diolkos was in use for over 650 years, until at least the 1st century AD. Paved trackways were also later built in Roman Egypt.
Pre-steam modern systems
Wooden rails introduced
In 1515, Cardinal Matthäus Lang wrote a description of the Reisszug, a funicular railway at the Hohensalzburg Fortress in Austria. The line originally used wooden rails and a hemp haulage rope and was operated by human or animal power, through a treadwheel. The line is still operational, although in updated form and is possibly the oldest operational railway.
Wagonways (or tramways) using wooden rails, hauled by horses, started appearing in the 1550s to facilitate the transport of ore tubs to and from mines and soon became popular in Europe. Such an operation was illustrated in Germany in 1556 by Georgius Agricola in his work De re metallica. This line used "Hund" carts with unflanged wheels running on wooden planks and a vertical pin on the truck fitting into the gap between the planks to keep it going the right way. The miners called the wagons Hunde ("dogs") from the noise they made on the tracks.
There are many references to their use in central Europe in the 16th century. Such a transport system was later used by German miners at Caldbeck, Cumbria, England, perhaps from the 1560s. A wagonway was built at Prescot, near Liverpool, sometime around 1600, possibly as early as 1594. Owned by Philip Layton, the line carried coal from a pit near Prescot Hall to a terminus about away. A funicular railway was also made at Broseley in Shropshire some time before 1604. This carried coal for James Clifford from his mines down to the River Severn to be loaded onto barges and carried to riverside towns. The Wollaton Wagonway, completed in 1604 by Huntingdon Beaumont, has sometimes erroneously been cited as the earliest British railway. It ran from Strelley to Wollaton near Nottingham.
The Middleton Railway in Leeds, which was built in 1758, later became the world's oldest operational railway (other than funiculars), albeit now in an upgraded form. In 1764, the first railway in the Americas was built in Lewiston, New York.
Metal rails introduced
In the late 1760s, the Coalbrookdale Company began to fix plates of cast iron to the upper surface of the wooden rails. This allowed a variation of gauge to be used. At first only balloon loops could be used for turning, but later, movable points were taken into use that allowed for switching.
A system was introduced in which unflanged wheels ran on L-shaped metal plates, which came to be known as plateways. John Curr, a Sheffield colliery manager, invented this flanged rail in 1787, though the exact date of this is disputed. The plate rail was taken up by Benjamin Outram for wagonways serving his canals, manufacturing them at his Butterley ironworks. In 1803, William Jessop opened the Surrey Iron Railway, a double track plateway, erroneously sometimes cited as world's first public railway, in south London.
William Jessop had earlier used a form of all-iron edge rail and flanged wheels successfully for an extension to the Charnwood Forest Canal at Nanpantan, Loughborough, Leicestershire in 1789. In 1790, Jessop and his partner Outram began to manufacture edge rails. Jessop became a partner in the Butterley Company in 1790. The first public edgeway (thus also first public railway) built was Lake Lock Rail Road in 1796. Although the primary purpose of the line was to carry coal, it also carried passengers.
These two systems of constructing iron railways, the "L" plate-rail and the smooth edge-rail, continued to exist side by side until well into the early 19th century. The flanged wheel and edge-rail eventually proved its superiority and became the standard for railways.
Cast iron used in rails proved unsatisfactory because it was brittle and broke under heavy loads. The wrought iron invented by John Birkinshaw in 1820 replaced cast iron. Wrought iron, usually simply referred to as "iron", was a ductile material that could undergo considerable deformation before breaking, making it more suitable for iron rails. But iron was expensive to produce until Henry Cort patented the puddling process in 1784. In 1783 Cort also patented the rolling process, which was 15 times faster at consolidating and shaping iron than hammering. These processes greatly lowered the cost of producing iron and rails. The next important development in iron production was hot blast developed by James Beaumont Neilson (patented 1828), which considerably reduced the amount of coke (fuel) or charcoal needed to produce pig iron. Wrought iron was a soft material that contained slag or dross. The softness and dross tended to make iron rails distort and delaminate and they lasted less than 10 years. Sometimes they lasted as little as one year under high traffic. All these developments in the production of iron eventually led to the replacement of composite wood/iron rails with superior all-iron rails.
The introduction of the Bessemer process, enabling steel to be made inexpensively, led to the era of great expansion of railways that began in the late 1860s. Steel rails lasted several times longer than iron. Steel rails made heavier locomotives possible, allowing for longer trains and improving the productivity of railroads. The Bessemer process introduced nitrogen into the steel, which caused the steel to become brittle with age. The open hearth furnace began to replace the Bessemer process near the end of the 19th century, improving the quality of steel and further reducing costs. Thus steel completely replaced the use of iron in rails, becoming standard for all railways.
The first passenger horsecar or tram, Swansea and Mumbles Railway, was opened between Swansea and Mumbles in Wales in 1807. Horses remained the preferable mode for tram transport even after the arrival of steam engines until the end of the 19th century, because they were cleaner compared to steam-driven trams which caused smoke in city streets.
Steam power introduced
In 1784, James Watt, a Scottish inventor and mechanical engineer, patented a design for a steam locomotive. Watt had improved the steam engine of Thomas Newcomen, hitherto used to pump water out of mines, and developed a reciprocating engine in 1769 capable of powering a wheel. This was a large stationary engine, powering cotton mills and a variety of machinery; the state of boiler technology necessitated the use of low-pressure steam acting upon a vacuum in the cylinder, which required a separate condenser and an air pump. Nevertheless, as the construction of boilers improved, Watt investigated the use of high-pressure steam acting directly upon a piston, raising the possibility of a smaller engine that might be used to power a vehicle. Following his patent, Watt's employee William Murdoch produced a working model of a self-propelled steam carriage in that year.
The first full-scale working railway steam locomotive was built in the United Kingdom in 1804 by Richard Trevithick, a British engineer born in Cornwall. This used high-pressure steam to drive the engine by one power stroke. The transmission system employed a large flywheel to even out the action of the piston rod. On 21 February 1804, the world's first steam-powered railway journey took place when Trevithick's unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks, near Merthyr Tydfil in South Wales. Trevithick later demonstrated a locomotive operating upon a piece of circular rail track in Bloomsbury, London, the Catch Me Who Can, but never got beyond the experimental stage with railway locomotives, not least because his engines were too heavy for the cast-iron plateway track then in use.
The first commercially successful steam locomotive was Matthew Murray's rack locomotive Salamanca built for the Middleton Railway in Leeds in 1812. This twin-cylinder locomotive was light enough to not break the edge-rails track and solved the problem of adhesion by a cog-wheel using teeth cast on the side of one of the rails. Thus it was also the first rack railway.
This was followed in 1813 by the locomotive Puffing Billy built by Christopher Blackett and William Hedley for the Wylam Colliery Railway, the first successful locomotive running by adhesion only. This was accomplished by the distribution of weight between a number of wheels. Puffing Billy is now on display in the Science Museum in London, and is the oldest locomotive in existence.
In 1814, George Stephenson, inspired by the early locomotives of Trevithick, Murray and Hedley, persuaded the manager of the Killingworth colliery where he worked to allow him to build a steam-powered machine. Stephenson played a pivotal role in the development and widespread adoption of the steam locomotive. His designs considerably improved on the work of the earlier pioneers. He built the locomotive Blücher, also a successful flanged-wheel adhesion locomotive. In 1825 he built the locomotive Locomotion for the Stockton and Darlington Railway in the northeast of England, which became the first public steam railway in the world in 1825, although it used both horse power and steam power on different runs. In 1829, he built the locomotive Rocket, which entered and won the Rainhill Trials. This success led to Stephenson establishing his company as the pre-eminent builder of steam locomotives for railways in Great Britain and Ireland, the United States, and much of Europe. The first public railway to use only steam locomotives, all of the time, was the Liverpool and Manchester Railway, which opened in 1830.
Steam power continued to be the dominant power system in railways around the world for more than a century.
Electric power introduced
The first known electric locomotive was built in 1837 by chemist Robert Davidson of Aberdeen in Scotland, and it was powered by galvanic cells (batteries). Thus it was also the earliest battery-electric locomotive. Davidson later built a larger locomotive named Galvani, exhibited at the Royal Scottish Society of Arts Exhibition in 1841. The seven-ton vehicle had two direct-drive reluctance motors, with fixed electromagnets acting on iron bars attached to a wooden cylinder on each axle, and simple commutators. It hauled a load of six tons at four miles per hour (6 kilometers per hour) for a distance of . It was tested on the Edinburgh and Glasgow Railway in September of the following year, but the limited power from batteries prevented its general use. It was destroyed by railway workers, who saw it as a threat to their job security. By the middle of the nineteenth century most European countries had military uses for railways.
Werner von Siemens demonstrated an electric railway in 1879 in Berlin. The world's first electric tram line, Gross-Lichterfelde Tramway, opened in Lichterfelde near Berlin, Germany, in 1881. It was built by Siemens. The tram ran on 180 volts DC, which was supplied by running rails. In 1891 the track was equipped with an overhead wire and the line was extended to Berlin-Lichterfelde West station. The Volk's Electric Railway opened in 1883 in Brighton, England. The railway is still operational, thus making it the oldest operational electric railway in the world. Also in 1883, Mödling and Hinterbrühl Tram opened near Vienna in Austria. It was the first tram line in the world in regular service powered from an overhead line. Five years later, in 1888, electric trolleys were pioneered in the United States on the Richmond Union Passenger Railway, using equipment designed by Frank J. Sprague.
The first use of electrification on a main line was on a four-mile section of the Baltimore Belt Line of the Baltimore and Ohio Railroad (B&O) in 1895, connecting the main portion of the B&O to the new line to New York through a series of tunnels around the edges of Baltimore's downtown. Electricity quickly became the power supply of choice for subways, abetted by Sprague's invention of multiple-unit train control in 1897. By the early 1900s most street railways were electrified.
The London Underground, the world's oldest underground railway, opened in 1863, and it began operating electric services using a fourth rail system in 1890 on the City and South London Railway, now part of the London Underground Northern line. This was the first major railway to use electric traction. The world's first deep-level electric railway, it runs from the City of London, under the River Thames, to Stockwell in south London.
The first practical AC electric locomotive was designed by Charles Brown, then working for Oerlikon, Zürich. In 1891, Brown had demonstrated long-distance power transmission, using three-phase AC, between a hydro-electric plant at Lauffen am Neckar and Frankfurt am Main West, a distance of . Using experience he had gained while working for Jean Heilmann on steam–electric locomotive designs, Brown observed that three-phase motors had a higher power-to-weight ratio than DC motors and, because of the absence of a commutator, were simpler to manufacture and maintain. However, they were much larger than the DC motors of the time and could not be mounted in underfloor bogies: they could only be carried within locomotive bodies.
In 1894, Hungarian engineer Kálmán Kandó developed a new type of three-phase asynchronous electric drive motors and generators for electric locomotives. Kandó's early 1894 designs were first applied in a short three-phase AC tramway in Évian-les-Bains (France), which was constructed between 1896 and 1898.
In 1896, Oerlikon installed the first commercial example of the system on the Lugano Tramway. Each 30-tonne locomotive had two motors run by three-phase 750 V 40 Hz fed from double overhead lines. Three-phase motors run at a constant speed and provide regenerative braking, and are well suited to steeply graded routes, and the first main-line three-phase locomotives were supplied by Brown (by then in partnership with Walter Boveri) in 1899 on the 40 km Burgdorf–Thun line, Switzerland.
Italian railways were the first in the world to introduce electric traction for the entire length of a main line rather than a short section. The 106 km Valtellina line was opened on 4 September 1902, designed by Kandó and a team from the Ganz works. The electrical system was three-phase at 3 kV 15 Hz. In 1918, Kandó invented and developed the rotary phase converter, enabling electric locomotives to use three-phase motors whilst supplied via a single overhead wire, carrying the simple industrial frequency (50 Hz) single phase AC of the high-voltage national networks.
An important contribution to the wider adoption of AC traction came from SNCF of France after World War II. The company conducted trials at AC 50 Hz and established it as a standard. Following SNCF's successful trials, 50 Hz, now also called industrial frequency, was adopted as the standard for main lines across the world.
Diesel power introduced
Earliest recorded examples of an internal combustion engine for railway use included a prototype designed by William Dent Priestman. Sir William Thomson examined it in 1888 and described it as a "Priestman oil engine mounted upon a truck which is worked on a temporary line of rails to show the adaptation of a petroleum engine for locomotive purposes." In 1894, a two axle machine built by Priestman Brothers was used on the Hull Docks.
In 1906, Rudolf Diesel, Adolf Klose and the steam and diesel engine manufacturer Gebrüder Sulzer founded Diesel-Sulzer-Klose GmbH to manufacture diesel-powered locomotives. Sulzer had been manufacturing diesel engines since 1898. The Prussian State Railways ordered a diesel locomotive from the company in 1909. The world's first diesel-powered locomotive was operated in the summer of 1912 on the Winterthur–Romanshorn railway in Switzerland, but was not a commercial success. The locomotive weight was 95 tonnes and the power was 883 kW with a maximum speed of . Small numbers of prototype diesel locomotives were produced in a number of countries through the mid-1920s. The Soviet Union operated three experimental units of different designs from late 1925, though only one of them (the E el-2) proved technically viable.
A significant breakthrough occurred in 1914, when Hermann Lemp, a General Electric electrical engineer, developed and patented a reliable direct current electrical control system (subsequent improvements were also patented by Lemp). Lemp's design used a single lever to control both engine and generator in a coordinated fashion, and was the prototype for all diesel–electric locomotive control systems. In 1914, the world's first functional diesel–electric railcars were produced for the Königlich-Sächsische Staatseisenbahnen (Royal Saxon State Railways) by Waggonfabrik Rastatt with electric equipment from Brown, Boveri & Cie and diesel engines from Swiss Sulzer AG. They were classified as DET 1 and DET 2. The first regularly used diesel–electric locomotives were switcher (shunter) locomotives. General Electric produced several small switching locomotives in the 1930s (the famous "44-tonner" switcher was introduced in 1940), and Westinghouse Electric and Baldwin collaborated to build switching locomotives starting in 1929.
In 1929, the Canadian National Railways became the first North American railway to use diesels in mainline service with two units, 9000 and 9001, from Westinghouse.
High-speed rail
Although steam and diesel services reaching speeds up to were started before the 1960s in Europe, they were not very successful.
The first electrified high-speed rail Tōkaidō Shinkansen was introduced in 1964 between Tokyo and Osaka in Japan. Since then high-speed rail transport, functioning at speeds up to and above , has been built in Japan, Spain, France, Germany, Italy, the People's Republic of China, Taiwan (Republic of China), the United Kingdom, South Korea, Scandinavia, Belgium and the Netherlands. The construction of many of these lines has resulted in the dramatic decline of short-haul flights and automotive traffic between connected cities, such as the London–Paris–Brussels corridor, Madrid–Barcelona, Milan–Rome–Naples, as well as many other major lines.
High-speed trains normally operate on standard gauge tracks of continuously welded rail on grade-separated right-of-way that incorporates a large turning radius in its design. While high-speed rail is most often designed for passenger travel, some high-speed systems also offer freight service.
Preservation
Since 1980, rail transport has changed dramatically, but a number of heritage railways continue to operate as part of living history to preserve and maintain old railway lines for services of tourist trains.
Trains
A train is a connected series of rail vehicles that move along the track. Propulsion for the train is provided by a separate locomotive or from individual motors in self-propelled multiple units. Most trains carry a revenue load, although non-revenue cars exist for the railway's own use, such as for maintenance-of-way purposes. The engine driver (engineer in North America) controls the locomotive or other power cars, although people movers and some rapid transits are under automatic control.
Haulage
Traditionally, trains are pulled using a locomotive. This involves one or more powered vehicles being located at the front of the train, providing sufficient tractive force to haul the weight of the full train. This arrangement remains dominant for freight trains and is often used for passenger trains. A push–pull train has the end passenger car equipped with a driver's cab so that the engine driver can remotely control the locomotive. This allows one of the locomotive-hauled train's drawbacks to be removed, since the locomotive need not be moved to the front of the train each time the train changes direction. A railroad car is a vehicle used for the haulage of either passengers or freight.
A multiple unit has powered wheels throughout the whole train. These are used for rapid transit and tram systems, as well as many both short- and long-haul passenger trains. A railcar is a single, self-powered car, and may be electrically propelled or powered by a diesel engine. Multiple units have a driver's cab at each end of the unit, and were developed following the ability to build electric motors and other engines small enough to fit under the coach. There are only a few freight multiple units, most of which are high-speed post trains.
Motive power
Steam locomotives are locomotives with a steam engine that provides adhesion. Coal, petroleum, or wood is burned in a firebox, boiling water in the boiler to create pressurized steam. The steam travels through the smokebox before leaving via the chimney or smoke stack. In the process, it powers a piston that transmits power directly through a connecting rod (US: main rod) and a crankpin (US: wristpin) on the driving wheel (US: main driver) or to a crank on a driving axle. Steam locomotives have been phased out in most parts of the world for economic and safety reasons, although many are preserved in working order by heritage railways.
Electric locomotives draw power from a stationary source via an overhead wire or third rail. Some also or instead use a battery. In locomotives that are powered by high-voltage alternating current, a transformer in the locomotive converts the high-voltage low-current power to low-voltage high current used in the traction motors that power the wheels. Modern locomotives may use three-phase AC induction motors or direct current motors. Under certain conditions, electric locomotives are the most powerful traction. They are also the cheapest to run and provide less noise and no local air pollution. However, they require high capital investments both for the overhead lines and the supporting infrastructure, as well as the generating station that is needed to produce electricity. Accordingly, electric traction is used on urban systems, lines with high traffic and for high-speed rail.
Diesel locomotives use a diesel engine as the prime mover. The energy transmission may be either diesel–electric, diesel-mechanical or diesel–hydraulic but diesel–electric is dominant. Electro-diesel locomotives are built to run as diesel–electric on unelectrified sections and as electric locomotives on electrified sections.
Alternative methods of motive power include magnetic levitation, horse-drawn, cable, gravity, pneumatics and gas turbine.
Passenger trains
A passenger train stops at stations where passengers may embark and disembark. The oversight of the train is the duty of a guard/train manager/conductor. Passenger trains are part of public transport and often make up the stem of the service, with buses feeding to stations. Passenger trains provide long-distance intercity travel, daily commuter trips, or local urban transit services, operating with a diversity of vehicles, operating speeds, right-of-way requirements, and service frequency. Service frequencies are often expressed as a number of trains per hour (tph). Passenger trains can usually be divided into two types of operation: intercity railway and intracity transit. Whereas intercity railways involve higher speeds, longer routes, and lower frequency (usually scheduled), intracity transit involves lower speeds, shorter routes, and higher frequency (especially during peak hours).
Intercity trains are long-haul trains that operate with few stops between cities. Trains typically have amenities such as a dining car. Some lines also provide over-night services with sleeping cars. Some long-haul trains have been given a specific name. Regional trains are medium distance trains that connect cities with outlying, surrounding areas, or provide a regional service, making more stops and having lower speeds. Commuter trains serve suburbs of urban areas, providing a daily commuting service. Airport rail links provide quick access from city centres to airports.
High-speed rail services are special inter-city trains that operate at much higher speeds than conventional railways, the limit being regarded at . High-speed trains are used mostly for long-haul service and most systems are in Western Europe and East Asia. Magnetic levitation trains such as the Shanghai maglev train use under-riding magnets which attract themselves upward towards the underside of a guideway, and this line has achieved somewhat higher peak speeds in day-to-day operation than conventional high-speed railways, although only over short distances. Due to their heightened speeds, route alignments for high-speed rail tend to have broader curves than conventional railways, but may have steeper grades that are more easily climbed by trains with large kinetic energy.
High kinetic energy translates to higher horsepower-to-ton ratios (e.g. ); this allows trains to accelerate and maintain higher speeds and negotiate steep grades, as momentum builds up on climbs and is recovered on downgrades (reducing cut and fill and tunnelling requirements). Since lateral forces act on curves, curvatures are designed with the highest possible radius. All these features are dramatically different from freight operations, thus justifying exclusive high-speed rail lines where it is economically feasible.
Higher-speed rail services are intercity rail services that have top speeds higher than conventional intercity trains but the speeds are not as high as those in the high-speed rail services. These services are provided after improvements to the conventional rail infrastructure to support trains that can operate safely at higher speeds.
Rapid transit is an intracity system built in large cities and has the highest capacity of any passenger transport system. It is usually grade-separated and commonly built underground or elevated. At street level, smaller trams can be used. Light rails are upgraded trams that have step-free access, their own right-of-way and sometimes sections underground. Monorail systems are elevated, medium-capacity systems. A people mover is a driverless, grade-separated train that serves only a few stations, as a shuttle. Due to the lack of uniformity of rapid transit systems, route alignment varies, with diverse rights-of-way (private land, side of road, street median) and geometric characteristics (sharp or broad curves, steep or gentle grades). For instance, the Chicago 'L' trains are designed with extremely short cars to negotiate the sharp curves in the Loop. New Jersey's PATH has similar-sized cars to accommodate curves in the trans-Hudson tunnels. San Francisco's BART operates large cars on its routes.
Freight trains
Freight trains carry cargo using freight cars specialized for the type of goods. Freight trains are very efficient, with economy of scale and high energy efficiency. However, their use can be reduced by lack of flexibility, if there is need of transshipment at both ends of the trip due to lack of tracks to the points of pick-up and delivery. Authorities often encourage the use of cargo rail transport due to its efficiency and to reduce road traffic.
Container trains have become widely used in many places for general freight, particularly in North America, where double stacking reduces costs. Containers can easily be transshipped between other modes, such as ships and trucks, and at breaks of gauge. Containers have succeeded the boxcar (wagon-load), where the cargo had to be loaded and unloaded into the train manually. The intermodal containerization of cargo has revolutionized the supply chain logistics industry, reducing shipping costs significantly. In Europe, the sliding wall wagon has largely superseded the ordinary covered wagons. Other types of cars include refrigerator cars, stock cars for livestock and autoracks for road vehicles. When rail is combined with road transport, a roadrailer will allow trailers to be driven onto the train, allowing for easy transition between road and rail.
Bulk handling represents a key advantage for rail transport. Low or even zero transshipment costs combined with energy efficiency and low inventory costs allow trains to handle bulk much cheaper than by road. Typical bulk cargo includes coal, ore, grains and liquids. Bulk is transported in open-topped cars, hopper cars and tank cars.
Infrastructure
Right-of-way
Railway tracks are laid upon land owned or leased by the railway company. Owing to the desirability of maintaining modest grades, in hilly or mountainous terrain rails will often be laid in circuitous routes. Route length and grade requirements can be reduced by the use of alternating cuttings, bridges and tunnels – all of which can greatly increase the capital expenditures required to develop a right-of-way, while significantly reducing operating costs and allowing higher speeds on longer radius curves. In densely urbanised areas, railways are sometimes laid in tunnels to minimise the effects on existing properties.
Track
Track consists of two parallel steel rails, anchored perpendicular to members called sleepers (ties) of timber, concrete, steel, or plastic to maintain a consistent distance apart, or rail gauge. Other variations are also possible, such as "slab track", in which the rails are fastened to a concrete foundation resting on a prepared subsurface.
Rail gauges are usually categorized as standard gauge (used on approximately 70% of the world's existing railway lines), broad gauge, and narrow gauge. In addition to the rail gauge, the tracks will be laid to conform with a loading gauge which defines the maximum height and width for railway vehicles and their loads to ensure safe passage through bridges, tunnels and other structures.
The track guides the conical, flanged wheels, keeping the cars on the track without active steering and therefore allowing trains to be much longer than road vehicles. The rails and ties are usually placed on a foundation made of compressed earth on top of which is placed a bed of ballast to distribute the load from the ties and to prevent the track from buckling as the ground settles over time under the weight of the vehicles passing above.
The ballast also serves as a means of drainage. Some more modern track in special areas is attached directly without ballast. Track may be prefabricated or assembled in place. By welding rails together to form lengths of continuous welded rail, additional wear and tear on rolling stock caused by the small surface gap at the joints between rails can be counteracted; this also makes for a quieter ride.
On curves, the outer rail may be at a higher level than the inner rail. This is called superelevation or cant. This reduces the forces tending to displace the track and makes for a more comfortable ride for standing livestock and standing or seated passengers. A given amount of superelevation is most effective over a limited range of speeds.
Points and switches – also known as turnouts – are the means of directing a train onto a diverging section of track. Laid similar to normal track, a point typically consists of a frog (common crossing), check rails and two switch rails. The switch rails may be moved left or right, under the control of the signalling system, to determine which path the train will follow.
Spikes in wooden ties can loosen over time, but split and rotten ties may be individually replaced with new wooden ties or concrete substitutes. Concrete ties can also develop cracks or splits, and can also be replaced individually. Should the rails settle due to soil subsidence, they can be lifted by specialized machinery and additional ballast tamped under the ties to level the rails.
Periodically, ballast must be removed and replaced with clean ballast to ensure adequate drainage. Culverts and other passages for water must be kept clear lest water is impounded by the trackbed, causing landslips. Where trackbeds are placed along rivers, additional protection is usually placed to prevent streambank erosion during times of high water. Bridges require inspection and maintenance, since they are subject to large surges of stress in a short period of time when a heavy train crosses.
Gauge incompatibility
The use of different track gauges in different regions of the world, and sometimes within the same country, can impede the movement of passengers and freight. Often elaborate transfer mechanisms are installed where two lines of different gauge meet to facilitate movement across the break of gauge. Countries with multiple gauges in use, such as India and Australia, have invested heavily to unify their rail networks. China is developing a modernized Eurasian Land Bridge to move goods by rail to Western Europe.
Train inspection systems
The inspection of railway equipment is essential for the safe movement of trains. Many types of defect detectors are in use on the world's railroads. These devices use technologies that vary from a simplistic paddle and switch to infrared and laser scanning, and even ultrasonic audio analysis. Over the 70 years they have been in service, such detectors have prevented many rail accidents.
Signalling
Railway signalling is a system used to control railway traffic safely to prevent trains from colliding. Being guided by fixed rails which generate low friction, trains are uniquely susceptible to collision since they frequently operate at speeds that do not enable them to stop quickly or within the driver's sighting distance; road vehicles, which encounter a higher level of friction between their rubber tyres and the road surface, have much shorter braking distances. Most forms of train control involve movement authority being passed from those responsible for each section of a rail network to the train crew. Not all methods require the use of signals, and some systems are specific to single track railways.
The signalling process is traditionally carried out in a signal box, a small building that houses the lever frame required for the signalman to operate switches and signal equipment. These are placed at various intervals along the route of a railway, controlling specified sections of track. More recent technological developments have made such operational doctrine superfluous, with the centralization of signalling operations to regional control rooms. This has been facilitated by the increased use of computers, allowing vast sections of track to be monitored from a single location. The common method of block signalling divides the track into zones guarded by combinations of block signals, operating rules, and automatic-control devices so that only one train may be in a block at any time.
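As a purely illustrative sketch, and not a description of any real interlocking or signalling product, the block principle amounts to granting movement authority for a block only while that block is unoccupied:

```python
# Hypothetical fixed-block occupancy logic, for illustration only.
occupied = {"B1": None, "B2": None, "B3": None}   # block -> occupying train (or None)

def request_entry(train, block):
    """Grant movement authority only if the block is free."""
    if occupied[block] is None:
        occupied[block] = train
        return True
    return False          # signal held at danger: block already occupied

def clear_block(block):
    """Called once the train has fully left the block."""
    occupied[block] = None

assert request_entry("T1", "B1")
assert not request_entry("T2", "B1")   # second train must wait outside the block
clear_block("B1")
assert request_entry("T2", "B1")
```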
Electrification
The electrification system provides electrical energy to the trains, so they can operate without a prime mover on board. This allows lower operating costs, but requires large capital investments along the lines. Mainline and tram systems normally have overhead wires, which hang from poles along the line. Grade-separated rapid transit sometimes use a ground third rail.
Power may be fed as direct (DC) or alternating current (AC). The most common DC voltages are 600 and 750 V for tram and rapid transit systems, and 1,500 and 3,000 V for mainlines. The two dominant AC systems are 15 kV and 25 kV.
Stations
A railway station serves as an area where passengers can board and alight from trains. A goods station is a yard which is exclusively used for loading and unloading cargo. Large passenger stations have at least one building providing conveniences for passengers, such as purchasing tickets and food. Smaller stations typically only consist of a platform. Early stations were sometimes built with both passenger and goods facilities.
Platforms are used to allow easy access to the trains, and are connected to each other via underpasses, footbridges and level crossings. Some large stations are built as culs-de-sac, with trains only operating out from one direction. Smaller stations normally serve local residential areas, and may have connection to feeder bus services. Large stations, in particular central stations, serve as the main public transport hub for the city, and have transfer available between rail services, and to rapid transit, tram or bus services.
Operations
Ownership
Since the 1980s, there has been an increasing trend to split up railway companies, with companies owning the rolling stock separated from those owning the infrastructure. This is particularly true in Europe, where this arrangement is required by the European Union. This has allowed open access by any train operator to any portion of the European railway network. In the UK, the railway track is state owned, with a public controlled body (Network Rail) running, maintaining and developing the track, while Train Operating Companies have run the trains since privatization in the 1990s.
In the U.S., virtually all rail networks and infrastructure outside the Northeast corridor are privately owned by freight lines. Passenger lines, primarily Amtrak, operate as tenants on the freight lines. Consequently, operations must be closely synchronized and coordinated between freight and passenger railroads, with passenger trains often being dispatched by the host freight railroad. Due to this shared system, both are regulated by the Federal Railroad Administration (FRA) and may follow the AREMA recommended practices for track work and AAR standards for vehicles.
Financing
The main source of income for railway companies is from ticket revenue (for passenger transport) and shipment fees for cargo. Discounts and monthly passes are sometimes available for frequent travellers (e.g. season ticket and rail pass). Freight revenue may be sold per container slot or for a whole train. Sometimes, the shipper owns the cars and only rents the haulage. For passenger transport, advertisement income can be significant.
Governments may choose to give subsidies to rail operation, since rail transport has fewer externalities than other dominant modes of transport. If the railway company is state-owned, the state may simply provide direct subsidies in exchange for increased production. If operations have been privatised, several options are available. Some countries have a system where the infrastructure is owned by a government agency or company – with open access to the tracks for any company that meets safety requirements. In such cases, the state may choose to provide the tracks free of charge, or for a fee that does not cover all costs. This is seen as analogous to the government providing free access to roads. For passenger operations, a direct subsidy may be paid to a public-owned operator, or public service obligation tender may be held, and a time-limited contract awarded to the lowest bidder. Total EU rail subsidies amounted to €73 billion in 2005.
Via Rail Canada and US passenger rail service Amtrak are private railroad companies chartered by their respective national governments. As private passenger services declined because of competition from cars and airlines, the railroads that had previously operated these services became shareholders of Amtrak, either by paying a cash entrance fee or by relinquishing their locomotives and rolling stock. The government subsidises Amtrak by supplying start-up capital and making up for losses at the end of the fiscal year.
Safety
Some trains travel faster than road vehicles. They are heavy and unable to deviate from the track, and have longer stopping distances. Possible accidents include derailment (jumping the track) and collisions with another train or a road vehicle, or with pedestrians at level crossings, which account for the majority of all rail accidents and casualties. To minimize the risk, the most important safety measures are strict operating rules, e.g. railway signalling, and gates or grade separation at crossings. Train whistles, bells, or horns warn of the presence of a train, while trackside signals maintain the distances between trains. Another method used to increase safety is the addition of platform screen doors to separate the platform from train tracks. These prevent unauthorised incursion on to the train tracks which can result in accidents that cause serious harm or death, as well as providing other benefits such as preventing litter build up on the tracks which can pose a fire risk.
On many high-speed inter-city networks, such as Japan's Shinkansen, the trains run on dedicated railway lines without any level crossings. This is an important element in the safety of the system as it effectively eliminates the potential for collision with automobiles, other vehicles, or pedestrians, and greatly reduces the probability of collision with other trains. Another benefit is that services on the inter-city network remain punctual.
Maintenance
As with any infrastructure asset, railways must keep up with periodic inspection and maintenance to minimise the effect of infrastructure failures that can disrupt freight revenue operations and passenger services. Because passengers are considered the most crucial cargo, and because passenger trains usually operate at higher speeds, steeper grades, and higher capacity/frequency, passenger lines are especially important. Inspection practices include track geometry cars or walking inspection. Curve maintenance, especially for transit services, includes gauging, fastener tightening, and rail replacement.
Rail corrugation is a common issue with transit systems due to the high number of light-axle wheel passages, which result in grinding of the wheel/rail interface. Since maintenance may overlap with operations, maintenance windows (nighttime hours, off-peak hours, altering train schedules or routes) must be closely followed. In addition, passenger safety during maintenance work (inter-track fencing, proper storage of materials, track work notices, hazards of nearby equipment) must be regarded at all times. At times, maintenance access problems can emerge due to tunnels, elevated structures, and congested cityscapes. Here, specialised equipment or smaller versions of conventional maintenance gear are used.
Unlike highways or road networks where capacity is disaggregated into unlinked trips over individual route segments, railway capacity is fundamentally considered a network system. As a result, many components are causes and effects of system disruptions. Maintenance must acknowledge the vast array of a route's performance (type of train service, origination/destination, seasonal impacts), a line's capacity (length, terrain, number of tracks, types of train control), trains throughput (max speeds, acceleration/ deceleration rates), and service features with shared passenger-freight tracks (sidings, terminal capacities, switching routes, and design type).
Social, economic, and energy aspects
Energy
Transport by rail is an energy-efficient but capital-intensive means of mechanized land transport. The tracks provide smooth and hard surfaces on which the wheels of the train can roll with a relatively low level of friction.
A typical modern wagon can hold up to of freight on two four-wheel bogies. The track distributes the weight of the train evenly, allowing significantly greater loads per axle and wheel than in road transport, leading to greater energy efficiency. Trains have a smaller frontal area in relation to the load they are carrying, which reduces air resistance and thus energy usage.
In addition, the presence of track guiding the wheels allows very long trains to be pulled by one or a few engines and driven by a single operator, even around curves, which allows for economies of scale in both manpower and energy use; by contrast, in road transport, more than two articulations cause fishtailing and make the vehicle unsafe.
Energy efficiency
Considering only the energy spent to move the means of transport, and using the example of the urban area of Lisbon, electric trains appear on average about 20 times more efficient than automobiles for the transportation of passengers, when energy spent per passenger-distance is compared at similar occupation ratios. Assuming an automobile with a fuel consumption of around , an average European occupancy of around 1.2 passengers per automobile (an occupation ratio of around 24%), and that one litre of fuel amounts to about , this equates to an average of per passenger-km. This compares to a modern train with an average occupancy of 20% and a consumption of about , equating to per passenger-km, 20 times less than the automobile.
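As a rough sketch of the arithmetic behind such comparisons, the following uses illustrative placeholder figures (none of the numbers below are taken from this article or the Lisbon study; they are assumptions chosen only to show the calculation):

    # Energy per passenger-km for a car and a train, using made-up illustrative figures.
    car_fuel_l_per_100km = 8.0      # assumed urban fuel consumption of a car
    kwh_per_litre = 8.9             # assumed energy content of a litre of petrol
    car_occupancy = 1.2             # average passengers per car

    car_kwh_per_pkm = car_fuel_l_per_100km / 100 * kwh_per_litre / car_occupancy

    train_kwh_per_km = 3.0          # assumed consumption of an electric trainset
    passengers_on_board = 100       # e.g. 20% occupancy of a 500-place train

    train_kwh_per_pkm = train_kwh_per_km / passengers_on_board

    print(f"car:   {car_kwh_per_pkm:.3f} kWh per passenger-km")
    print(f"train: {train_kwh_per_pkm:.3f} kWh per passenger-km")
    print(f"ratio: {car_kwh_per_pkm / train_kwh_per_pkm:.0f}x in favour of the train")

The resulting ratio depends entirely on the assumed figures; the point of the sketch is only that per-passenger energy use falls as occupancy rises and per-vehicle consumption falls.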
Usage
Due to these benefits, rail transport is a major form of passenger and freight transport in many countries. It is ubiquitous in Europe, with an integrated network covering virtually the whole continent. In India, China, South Korea and Japan, many millions use trains as regular transport. In North America, freight rail transport is widespread and heavily used, but intercity passenger rail transport is relatively scarce outside the Northeast Corridor, due to the increased preference for other modes, particularly automobiles and airplanes. However, improvements such as making rail easily accessible within neighborhoods can help shift commuters away from private vehicles and airplanes.
South Africa, northern Africa and Argentina have extensive rail networks, but some railways elsewhere in Africa and South America are isolated lines. Australia has a generally sparse network befitting its population density but has some areas with significant networks, especially in the southeast. In addition to the previously existing east–west transcontinental line in Australia, a line from north to south has been constructed. The highest railway in the world is the line to Lhasa, in Tibet, partly running over permafrost territory. Western Europe has the highest railway density in the world and many individual trains there operate through several countries despite technical and organizational differences in each national network.
Social and economic impact
Modernization
Historically, railways have been considered central to modernity and ideas of progress. The process of modernization in the 19th century involved a transition from a spatially oriented world to a time-oriented world. Timekeeping became of heightened importance, resulting in clock towers for railway stations, clocks in public places, and pocket watches for railway workers and travellers. Trains followed exact schedules and never left early, whereas in the premodern era, passenger ships left whenever the captain had enough passengers. In the premodern era, local time was set at noon, when the sun was at its highest; this changed with the introduction of standard time zones. Printed timetables were a convenience for travellers, but more elaborate timetables, called train orders, were essential for train crews, maintenance workers, station personnel, and repair crews. The structure of railway timetables was later adapted for other uses, such as schedules for buses, ferries, and airplanes, for radio and television programmes, for school schedules, and for factory time clocks.
The invention of the electrical telegraph in the early 19th century also was crucial for the development and operation of railroad networks. If bad weather disrupted the system, telegraphers relayed immediate corrections and updates throughout the system. Additionally, most railroads were single-track, with sidings and signals to allow lower priority trains to be sidetracked and have scheduled meets.
Nation-building
Scholars have linked railroads to successful nation-building efforts by states.
Model of corporate management
According to historian Henry Adams, a railroad network needed:
the energies of a generation, for it required all the new machinery to be created – capital, banks, mines, furnaces, shops, power-houses, technical knowledge, mechanical population, together with a steady remodelling of social and political habits, ideas, and institutions to fit the new scale and suit the new conditions. The generation between 1865 and 1895 was already mortgaged to the railways, and no one knew it better than the generation itself.
The impact can be examined through five aspects: shipping, finance, management, careers, and popular reaction.
Shipping freight and passengers
Railroads form an efficient network for shipping freight and passengers across a large national market; their development thus was beneficial to many aspects of a nation's economy, including manufacturing, retail and wholesale, agriculture, and finance. By the 1940s, the United States had an integrated national market comparable in size to that of Europe, but free of internal barriers or tariffs, and supported by a common language, financial system, and legal system.
Financial system
Financing of railroads provided the basis for a dramatic expansion of the private (non-governmental) financial system. Construction of railroads was far more expensive than factories: in 1860, the combined total of railroad stocks and bonds was $1.8 billion; in 1897, it reached $10.6 billion (compared to a total national debt of $1.2 billion).
Funding came from financiers in the Northeastern United States and from Europe, especially Britain. About 10 percent of the funding came from the government, particularly in the form of land grants that were realized upon completion of a certain amount of trackage. The emerging American financial system was based on railroad bonds, and by 1860, New York was the dominant financial market. The British invested heavily in railroads around the world, but nowhere more than in the United States; the total bond value reached about $3 billion by 1914. However, in 1914–1917, the British liquidated their American assets to pay for war supplies.
Modern management
Railroad management designed complex systems that could handle far more complicated simultaneous relationships than those common in other industries at the time. Civil engineers became the senior management of railroads. The leading American innovators were the Western Railroad of Massachusetts and the Baltimore and Ohio Railroad in the 1840s, the Erie Railroad in the 1850s, and the Pennsylvania Railroad in the 1860s.
Career paths
The development of railroads led to the emergence of private-sector careers for both blue-collar and white-collar workers. Railroading became a lifetime career for young men; women were almost never hired. A typical career path would see a young man hired at age 18 as a shop labourer, promoted to skilled mechanic at age 24, brakeman at 25, freight conductor at 27, and passenger conductor at age 57. White-collar career paths likewise were delineated: educated young men started in clerical or statistical work and moved up to station agents or bureaucrats at the divisional or central headquarters, acquiring additional knowledge, experience, and human capital at each level. Being very hard to replace, they were virtually guaranteed permanent jobs and provided with insurance and medical care.
Hiring, firing, and wage rates were set not by foremen, but by central administrators, to minimize favouritism and personality conflicts. Everything was done by the book, whereby an increasingly complex set of rules dictated to everyone exactly what should be done in every circumstance, and exactly what their rank and pay would be. By the 1880s, career railroaders began retiring, and pension systems were invented for them.
Transportation
Railways contribute to social vibrancy and economic competitiveness by transporting multitudes of customers and workers to city centres and inner suburbs. Hong Kong has recognized rail as "the backbone of the public transit system" and as such developed their franchised bus system and road infrastructure in comprehensive alignment with their rail services. China's large cities such as Beijing, Shanghai, and Guangzhou recognize rail transit lines as the framework and bus lines as the main body to their metropolitan transportation systems. The Japanese Shinkansen was built to meet the growing traffic demand in the "heart of Japan's industry and economy" situated on the Tokyo-Kobe line.
Military role
Rail transport can be important for military activity. During the 1860s, railways provided a means for the rapid movement of troops and supplies during the American Civil War, as well as in the Austro-Prussian and Franco-Prussian Wars. Throughout the 20th century, rail was a key element of war plans for rapid military mobilization, allowing for the quick and efficient transport of large numbers of reservists to their mustering-points, and infantry soldiers to the front lines. So-called strategic railways were or are constructed for a primarily military purpose. The Western Front in France during World War I required many trainloads of munitions a day. Conversely, owing to their strategic value, rail yards and bridges in Germany and occupied France were major targets of Allied air raids during World War II. Rail transport and infrastructure continue to play an important role in present-day conflicts such as the Russian invasion of Ukraine, where sabotage of railways in Belarus and in Russia has also influenced the course of the war.
Positive impacts
Railways channel growth towards dense city agglomerations and along their arteries. This contrasts with highway expansion, characteristic of U.S. transportation policy after World War II, which instead encourages development of suburbs at the periphery of metropolitan areas, contributing to increased vehicle miles travelled, carbon emissions, development of greenfield spaces, and depletion of natural reserves. Rail-oriented arrangements, by contrast, raise the value of city spaces, strengthen local tax bases and housing values, and promote mixed-use development.
Negative impacts
There has also been some opposition to the development of railway networks. For instance, the arrival of railways and steam locomotives in Austria during the 1840s angered locals because of the noise, smell, and pollution caused by the trains, and because of the damage that the engines' soot and fiery embers caused to homes and the surrounding land; and since most travel did not cover long distances, few people used the new line.
Pollution
A 2018 study found that the opening of the Beijing Metro caused a reduction in "most of the air pollutants concentrations (PM2.5, PM10, SO2, NO2, and CO) but had little effect on ozone pollution."
Modern rail as economic development indicator
European development economists have argued that the existence of modern rail infrastructure is a significant indicator of a country's economic advancement: this perspective is illustrated notably through the Basic Rail Transportation Infrastructure Index (known as BRTI Index).
Subsidies
In 2010, annual rail spending in China was ¥840 billion. From 2014 to 2017, China had an annual target of ¥800 billion, and it planned to spend ¥3.5 trillion over 2016–2020.
The Indian Railways are subsidized by around ₹260 billion, of which around 60% goes to commuter rail and short-haul trips.
According to the 2017 European Railway Performance Index for intensity of use, quality of service and safety performance, the top tier European national rail systems consists of Switzerland, Denmark, Finland, Germany, Austria, Sweden, and France. Performance levels reveal a positive correlation between public cost and a given railway system's performance, and also reveal differences in the value that countries receive in return for their public cost. Denmark, Finland, France, Germany, the Netherlands, Sweden, and Switzerland capture relatively high value for their money, while Luxembourg, Belgium, Latvia, Slovakia, Portugal, Romania, and Bulgaria underperform relative to the average ratio of performance to cost among European countries.
Russia
In 2016, Russian Railways received 94.9 billion roubles (around US$1.4 billion) from the government.
North America
United States
In 2015, funding from the U.S. federal government for Amtrak was around US$1.4 billion. By 2018, appropriated funding had increased to approximately US$1.9 billion.
| Technology | Trains | null |
25717 | https://en.wikipedia.org/wiki/Regular%20expression | Regular expression | A regular expression (shortened as regex or regexp), sometimes referred to as rational expression, is a sequence of characters that specifies a match pattern in text. Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. Regular expression techniques are developed in theoretical computer science and formal language theory.
The concept of regular expressions began in the 1950s, when the American mathematician Stephen Cole Kleene formalized the concept of a regular language. They came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax.
Regular expressions are used in search engines, in search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK, and in lexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine", and many of these are available for reuse.
History
Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular events. These arose in theoretical computer science, in the subfields of automata theory (models of computation) and the description and classification of formal languages, motivated by Kleene's attempt to describe early artificial neural networks. (Kleene introduced it as an alternative to McCulloch & Pitts's "prehensible", but admitted "We would welcome any suggestions as to a more descriptive term.") Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs.
Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor and lexical analysis in a compiler. Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. For speed, Thompson implemented regular expression matching by just-in-time compilation (JIT) to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation. He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: g/re/p meaning "Global search for Regular Expression and Print matching lines"). Around the same time when Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that is used for lexical analysis in compiler design.
Many variations of these original forms of regular expressions were used in Unix programs at Bell Labs in the 1970s, including lex, sed, AWK, and expr, and in other programs such as vi, and Emacs (which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992.
In the 1980s, more complicated regexes arose in Perl, which originally derived from a regex library written by Henry Spencer (1986), who later wrote an implementation for Tcl called Advanced Regular Expressions. The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. Perl later expanded on Spencer's original library to add many new features. Part of the effort in the design of Raku (formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars. The result is a mini-language called Raku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allow BNF-style definition of a recursive descent parser via sub-rules.
The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (precursored by ANSI "GCA 101-1983") consolidated. The kernel of the structure specification language standards consists of regexes. Its use is evident in the DTD element group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in the glob syntax for filenames, and in the SQL LIKE operator.
Starting in 1997, Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools including PHP and Apache HTTP Server.
Today, regexes are widely supported in programming languages, text processing programs (particularly lexers), advanced text editors, and some other programs. Regex support is part of the standard library of many programming languages, including Java and Python, and is built into the syntax of others, including Perl and ECMAScript. In the late 2010s, several companies started to offer hardware implementations (FPGA, GPU) of PCRE-compatible regex engines that are faster than CPU implementations.
Patterns
The phrase regular expressions, or regexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex b., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (match all lower case letters from 'a' to 'z') is less general and b is a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard.
A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize". Wildcard characters also achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simple language-base.
The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?.
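For illustration, a minimal sketch using Python's standard re module (the sample strings are invented for the example):

    import re

    # Match either spelling of the word.
    print(re.findall(r"seriali[sz]e", "serialise or serialize"))
    # ['serialise', 'serialize']

    # Strip excess whitespace at the beginning or end of a line.
    print(re.sub(r"^[ \t]+|[ \t]+$", "", "   some text   "))
    # 'some text'

    # Recognise a numeral, with optional sign, fraction and exponent.
    numeral = re.compile(r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?")
    print(bool(numeral.fullmatch("-3.14e10")))
    # True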
A regex processor translates a regular expression in the above syntax into an internal representation that can be executed and matched against a string representing the text being searched in. One possible approach is the Thompson's construction algorithm to construct a nondeterministic finite automaton (NFA), which is then made deterministic and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression.
The picture shows the NFA scheme N(s*) obtained from the regular expression s*, where s denotes a simpler regular expression in turn, which has already been recursively translated to the NFA N(s).
Basic concepts
A regular expression, often called a pattern, specifies a set of strings required for a particular purpose. A simple way to specify a finite set of strings is to list its elements or members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel; we say that this pattern matches each of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example, (Hän|Han|Haen)del also specifies the same set of three strings in this example.
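A minimal sketch of this pattern in Python's re module:

    import re

    pattern = re.compile(r"H(ä|ae?)ndel")
    for name in ["Handel", "Händel", "Haendel"]:
        print(name, bool(pattern.fullmatch(name)))
    # all three print True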
Most formalisms provide the following operations to construct regular expressions.
Boolean "or"
A vertical bar separates alternatives. For example, gray|grey can match "gray" or "grey".
Grouping
Parentheses are used to define the scope and precedence of the operators (among other uses). For example, gray|grey and gr(a|e)y are equivalent patterns which both describe the set of "gray" or "grey".
Quantification
A quantifier after an element (such as a token, character, or group) specifies how many times the preceding element is allowed to repeat. The most common quantifiers are the question mark ?, the asterisk * (derived from the Kleene star), and the plus sign + (Kleene plus).
?   The question mark indicates zero or one occurrences of the preceding element. For example, colou?r matches both "color" and "colour".
*   The asterisk indicates zero or more occurrences of the preceding element. For example, ab*c matches "ac", "abc", "abbc", "abbbc", and so on.
+   The plus sign indicates one or more occurrences of the preceding element. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
{n}   The preceding item is matched exactly n times.
{min,}   The preceding item is matched min or more times.
{,max}   The preceding item is matched up to max times.
{min,max}   The preceding item is matched at least min times, but not more than max times.
Wildcard
The wildcard . matches any character. For example,
a.b matches any string that contains an "a", and then any character and then "b".
a.*b matches any string that contains an "a", and then the character "b" at some later point.
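A minimal sketch of these quantifiers and the wildcard in Python's re module (the sample strings are invented):

    import re

    print(bool(re.fullmatch(r"colou?r", "color")))     # True  (? = zero or one)
    print(bool(re.fullmatch(r"ab*c", "ac")))           # True  (* = zero or more)
    print(bool(re.fullmatch(r"ab+c", "ac")))           # False (+ = one or more)
    print(bool(re.fullmatch(r"a.b", "axb")))           # True  (. = any single character)
    print(bool(re.fullmatch(r"a.*b", "a 123 b")))      # True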
These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷.
The precise syntax for regular expressions varies among tools and with context; more detail is given below.
Formal language theory
Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars.
Formal definition
Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. Given a finite alphabet Σ, the following constants are defined
as regular expressions:
(empty set) ∅ denoting the set ∅.
(empty string) ε denoting the set containing only the "empty" string, which has no characters at all.
(literal character) a in Σ denoting the set containing only the character a.
Given regular expressions R and S, the following operations over them are defined
to produce regular expressions:
(concatenation) (RS) denotes the set of strings that can be obtained by concatenating a string accepted by R and a string accepted by S (in that order). For example, let R denote {"ab", "c"} and S denote {"d", "ef"}. Then, (RS) denotes {"abd", "abef", "cd", "cef"}.
(alternation) (R|S) denotes the set union of sets described by R and S. For example, if R describes {"ab", "c"} and S describes {"ab", "d", "ef"}, expression (R|S) describes {"ab", "c", "d", "ef"}.
(Kleene star) (R*) denotes the smallest superset of the set described by R that contains ε and is closed under string concatenation. This is the set of all strings that can be made by concatenating any finite number (including zero) of strings from the set described by R. For example, if R denotes {"0", "1"}, (R*) denotes the set of all finite binary strings (including the empty string). If R denotes {"ab", "c"}, (R*) denotes {ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", "abcab", ...}.
To avoid parentheses, it is assumed that the Kleene star has the highest priority followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar.
Examples:
a|b* denotes {ε, "a", "b", "bb", "bbb", ...}
(a|b)* denotes the set of all strings with no symbols other than "a" and "b", including the empty string: {ε, "a", "b", "aa", "ab", "ba", "bb", "aaa", ...}
ab*(c|ε) denotes the set of strings starting with "a", then zero or more "b"s and finally optionally a "c": {"a", "ac", "ab", "abc", "abb", "abbc", ...}
(0|(1(01*0)*1))* denotes the set of binary numbers that are multiples of 3: { ε, "0", "00", "11", "000", "011", "110", "0000", "0011", "0110", "1001", "1100", "1111", "00000", ...}
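The formal constructions above can be implemented directly. The following minimal sketch decides membership using Brzozowski derivatives; it is an illustration of the formal semantics rather than an efficient engine, and the class names and encoding are choices made for this example:

    # Regular expressions as small Python objects, matched via Brzozowski derivatives:
    # a string w matches r iff taking the derivative of r by each successive character
    # of w finally yields an expression that accepts the empty string.
    class Empty:            # ∅ – matches nothing
        def nullable(self): return False
        def deriv(self, c): return Empty()

    class Eps:              # ε – matches only the empty string
        def nullable(self): return True
        def deriv(self, c): return Empty()

    class Lit:              # a single literal character
        def __init__(self, ch): self.ch = ch
        def nullable(self): return False
        def deriv(self, c): return Eps() if c == self.ch else Empty()

    class Alt:              # R|S
        def __init__(self, r, s): self.r, self.s = r, s
        def nullable(self): return self.r.nullable() or self.s.nullable()
        def deriv(self, c): return Alt(self.r.deriv(c), self.s.deriv(c))

    class Cat:              # RS
        def __init__(self, r, s): self.r, self.s = r, s
        def nullable(self): return self.r.nullable() and self.s.nullable()
        def deriv(self, c):
            d = Cat(self.r.deriv(c), self.s)
            return Alt(d, self.s.deriv(c)) if self.r.nullable() else d

    class Star:             # R*
        def __init__(self, r): self.r = r
        def nullable(self): return True
        def deriv(self, c): return Cat(self.r.deriv(c), Star(self.r))

    def matches(regex, text):
        for ch in text:
            regex = regex.deriv(ch)
        return regex.nullable()

    # ab*(c|ε): an "a", then zero or more "b"s, then optionally a "c"
    r = Cat(Lit("a"), Cat(Star(Lit("b")), Alt(Lit("c"), Eps())))
    print(matches(r, "abbc"), matches(r, "a"), matches(r, "bc"))  # True True False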
Expressive power and compactness
The formal definition of regular expressions is minimal on purpose, and avoids defining ? and +—these can be expressed as follows: a+=aa*, and a?=(a|ε). Sometimes the complement operator is added, to give a generalized regular expression; here Rc matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause a double exponential blow-up of its length.
Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages
Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On the one hand, a regular expression describing L4 is given by
(a|b)*a(a|b)(a|b)(a|b).
Generalizing this pattern to Lk gives the expression
(a|b)*a(a|b)...(a|b), with k-1 copies of (a|b) following the a.
On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.
In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, converting it to a regular expression results in a file of about 2.14 megabytes.
Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm.
Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this.
Deciding equivalence of regular expressions
As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results.
It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent).
Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained along an example: In order to check whether (X+Y)* and (X* Y*)* denote the same regular language, for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)* and (a* b*)* denote the same language over the alphabet Σ={a,b}. More generally, an equation E=F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds.
Every regular expression can be written solely in terms of the Kleene star and set unions over finite words. This is a surprisingly difficult problem. As simple as the regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of an axiomatization in the past led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions as a Kleene algebra, using equational and Horn clause axioms.
Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages.
Syntax
A regex pattern matches a target string. The pattern is composed of a sequence of atoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using ( ) as metacharacters. Metacharacters help form: atoms; quantifiers telling how many atoms (and whether it is a greedy quantifier or not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities.
Depending on the regex processor there are about fourteen metacharacters, characters that may or may not have their literal character meaning, depending on context, or whether they are "escaped", i.e. preceded by an escape sequence, in this case, the backslash \. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" or leaning toothpick syndrome, they have a metacharacter escape to a literal mode; starting out, however, they instead have the four bracketing metacharacters ( ) and { } be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are {}[]()^$.|*+? and \. The usual characters that become metacharacters when escaped are dswDSW and N.
Delimiters
When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regex re is entered as "re". However, they are often written with slashes as delimiters, as in /re/ for the regex re. This originates in ed, where / is the editor command for searching, and an expression /re/ can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famously g/re/p as in grep ("global regex print"), which is included in most Unix-based operating systems, such as Linux distributions. A similar convention is used in sed, where search and replace is given by s/re/replacement/ and patterns can be joined with a comma to specify a range of lines as in /re1/,/re2/. This notation is particularly well known due to its use in Perl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the command s,/,X, will replace a / with an X, using commas as delimiters.
IEEE POSIX Standard
The IEEE POSIX standard has three sets of compliance: BRE (Basic Regular Expressions), ERE (Extended Regular Expressions), and SRE (Simple Regular Expressions). SRE is deprecated, in favor of BRE, as both provide backward compatibility. The subsection below covering the character classes applies to both BRE and ERE.
BRE and ERE work together. ERE adds ?, +, and |, and it removes the need to escape the metacharacters ( ) and { }, which are required in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU grep has the following options: "grep -E" for ERE, and "grep -G" for BRE (the default), and "grep -P" for Perl regexes.
Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, ( ) and { } are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns.
POSIX basic and extended
In the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters ( ) and { } be designated \(\) and \{\}, whereas Extended Regular Syntax (ERE) does not.
Examples:
.at matches any three-character string ending with "at", including "hat", "cat", "bat", "4at", "#at" and " at" (starting with a space).
[hc]at matches "hat" and "cat".
[^b]at matches all strings matched by .at except "bat".
[^hc]at matches all strings matched by .at other than "hat" and "cat".
^[hc]at matches "hat" and "cat", but only at the beginning of the string or line.
[hc]at$ matches "hat" and "cat", but only at the end of the string or line.
\[.\] matches any single character surrounded by "[" and "]" since the brackets are escaped, for example: "[a]", "[b]", "[7]", "[@]", "[]]", and "[ ]" (bracket space bracket).
s.* matches s followed by zero or more characters, for example: "s", "saw", "seed", "s3w96.7", and "s6#h%(>>>m n mQ".
According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with one that is simple to explain, but the new "simple" rules are actually more complex to implement: they were incompatible with pre-existing tooling and made it essentially impossible to define a "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax).
Metacharacters in POSIX extended
The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences and the following metacharacters are added:
Examples:
[hc]?at matches "at", "hat", and "cat".
[hc]*at matches "at", "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on.
[hc]+at matches "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on, but not "at".
cat|dog matches "cat" or "dog".
POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E.
Character classes
The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example, [A-Z] could stand for any uppercase letter in the English alphabet, and \d could mean any digit. Character classes apply to both POSIX levels.
When specifying a range of characters, such as [a-Z] (i.e. lowercase a to uppercase Z), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could be abc...zABC...Z, or aAbBcC...zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed. Those definitions are in the following table:
POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".
An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes word and word-head classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like \h\w* or [[:alpha:]_][[:alnum:]_]* in POSIX notation.
Note that what the POSIX regex standards call character classes are commonly referred to as POSIX character classes in other regex flavors which support them. With most other regex flavors, the term character class is used to describe what POSIX calls bracket expressions.
Perl and PCRE
Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar to Perl's—for example, Java, JavaScript, Julia, Python, Ruby, Qt, Microsoft's .NET Framework, and XML Schema. Some languages and tools such as Boost and PHP support multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python.
Lazy matching
In Python and some other implementations (e.g. Java), the three common quantifiers (*, + and ?) are greedy by default because they match as many characters as possible. The regex ".+" (including the double-quotes) applied to the string
"Ganymede," he continued, "is the largest moon in the Solar System."
matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part, "Ganymede,". The aforementioned quantifiers may, however, be made lazy or minimal or reluctant, matching as few characters as possible, by appending a question mark: ".+?" matches only "Ganymede,".
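A minimal sketch of the greedy/lazy distinction in Python's re module:

    import re

    line = '"Ganymede," he continued, "is the largest moon in the Solar System."'

    print(re.search(r'".+"', line).group())    # greedy: matches the whole line
    print(re.search(r'".+?"', line).group())   # lazy:   matches only '"Ganymede,"'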
Possessive matching
In Java and Python 3.11+, quantifiers may be made possessive by appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed: While the regex ".*" applied to the string
"Ganymede," he continued, "is the largest moon in the Solar System."
matches the entire line, the regex ".*+" does not, because .*+ consumes the entire input, including the final ". Thus, possessive quantifiers are most useful with negated character classes, e.g. "[^"]*+", which matches "Ganymede," when applied to the same string.
Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is (?>group). For example, while ^(wi|w)i$ matches both wi and wii, ^(?>wi|w)i$ matches only wii, because the engine is forbidden from backtracking and so cannot try setting the group to "w" after matching "wi".
Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime.
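A minimal sketch of possessive quantifiers and atomic groups; support varies by engine (in CPython, the standard re module accepts these constructs only from Python 3.11 onwards):

    import re   # possessive quantifiers and atomic groups need Python 3.11+ in CPython

    line = '"Ganymede," he continued, "is the largest moon in the Solar System."'

    print(re.search(r'".*"', line).group())        # greedy: backtracks to the final " and matches the whole line
    print(re.search(r'".*+"', line))               # None: .*+ never gives back the final "
    print(re.search(r'"[^"]*+"', line).group())    # '"Ganymede,"'

    print(re.fullmatch(r"(wi|w)i", "wi"))          # matches: the group can backtrack to "w"
    print(re.fullmatch(r"(?>wi|w)i", "wi"))        # None: the atomic group keeps "wi"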
IETF I-Regexp
IETF RFC 9485 describes "I-Regexp: An Interoperable Regular Expression Format". It specifies a limited subset of regular-expression idioms designed to be interoperable, i.e. produce the same effect, in a large number of regular-expression libraries. I-Regexp is also limited to matching, i.e. providing a true or false match between a regular expression and a given piece of text. Thus, it lacks advanced features such as capture groups, lookahead, and backreferences.
Patterns for non-regular languages
Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.+)\1.
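A minimal sketch of the square-matching pattern in Python's re module:

    import re

    square = re.compile(r"(.+)\1")
    print(square.fullmatch("papa") is not None)       # True  ("pa" repeated)
    print(square.fullmatch("WikiWiki") is not None)   # True  ("Wiki" repeated)
    print(square.fullmatch("paper") is not None)      # False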
The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context sensitive. The general problem of matching any number of backreferences is NP-complete, and the execution time for known algorithms grows exponentially with the number of backreference groups used.
However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex, regexp, or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku:
Assertions
Other features not found in descriptions of regular languages include assertions. These include the ubiquitous ^ and $, used since at least 1970, as well as some more sophisticated extensions like lookaround that appeared in 1994. Lookarounds define the surroundings of a match and do not spill into the match itself, a feature only relevant for the use case of string searching. Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well.
The lookahead assertions (?=...) and (?!...) have been attested since at least 1994, starting with Perl 5. The lookbehind assertions (?<=...) and (?<!...) are attested since 1997 in a commit by Ilya Zakharevich to Perl 5.005.
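A minimal sketch of lookahead and lookbehind in Python's re module (the sample strings are invented):

    import re

    # Lookahead: a word followed by a comma, without including the comma in the match.
    print(re.findall(r"\w+(?=,)", "red, green and blue, please"))   # ['red', 'blue']

    # Lookbehind: digits preceded by a dollar sign, without including the sign.
    print(re.findall(r"(?<=\$)\d+", "cost: $25, tip: 5"))           # ['25']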
Implementations and running times
There are at least three different algorithms that decide whether and how a given regex matches a string.
The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has a time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded.
An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky. Modern implementations include the re1-re2-sregex family based on Cox's code.
The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions that contain both alternation and unbounded quantification, forcing the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS).
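A minimal sketch of such catastrophic backtracking with CPython's backtracking re engine; the pattern and input are illustrative, and the call may take on the order of seconds, roughly doubling with each additional "a":

    import re, time

    text = "a" * 24 + "b"            # the trailing "b" prevents any match
    start = time.perf_counter()
    print(re.fullmatch(r"(a+)+", text))   # None, found only after trying ~2**23 groupings of the a's
    print(f"elapsed: {time.perf_counter() - start:.1f} s")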
Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy.
Sublinear runtime algorithms have been achieved using Boyer-Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan. GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for a first-pass prefiltering, and then uses an implicit DFA. Wu agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism.
A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference has time and space costs that are polynomial in the length n of the haystack, with an exponent determined by the number k of backreferences in the regexp. A very recent theoretical work based on memory automata gives a tighter bound based on "active" variable nodes used, and a polynomial possibility for some backreferenced regexps.
Unicode
In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode.
Supported encoding. Some regex libraries expect to work on some particular encoding instead of on abstract Unicode characters. Many of these require the UTF-8 encoding, while others might expect UTF-16, or UTF-32. In contrast, Perl and Java are agnostic on encodings, instead operating on decoded characters internally.
Supported Unicode range. Many regex engines support only the Basic Multilingual Plane, that is, the characters which can be encoded with only 16 bits. Currently, only a few regex engines (e.g., Perl's and Java's) can handle the full 21-bit Unicode range.
Extending ASCII-oriented constructs to Unicode. For example, in ASCII-based implementations, character ranges of the form [x-y] are valid wherever x and y have code points in the range [0x00,0x7F] and codepoint(x) ≤ codepoint(y). The natural extension of such character ranges to Unicode would simply change the requirement that the endpoints lie in [0x00,0x7F] to the requirement that they lie in [0x0000,0x10FFFF]. However, in practice this is often not the case. Some implementations, such as that of gawk, do not allow character ranges to cross Unicode blocks. A range like [0x61,0x7F] is valid since both endpoints fall within the Basic Latin block, as is [0x0530,0x0560] since both endpoints fall within the Armenian block, but a range like [0x0061,0x0532] is invalid since it includes multiple Unicode blocks. Other engines, such as that of the Vim editor, allow block-crossing but the character values must not be more than 256 apart.
Case insensitivity. Some case-insensitivity flags affect only the ASCII characters. Other flags affect all characters. Some engines have two different flags, one for ASCII, the other for Unicode. Exactly which characters belong to the POSIX classes also varies.
Cousins of case insensitivity. As ASCII has case distinction, case insensitivity became a logical feature in text searching. Unicode introduced alphabetic scripts without case like Devanagari. For these, case sensitivity is not applicable. For scripts like Chinese, another distinction seems logical: between traditional and simplified. In Arabic scripts, insensitivity to initial, medial, final, and isolated position may be desired. In Japanese, insensitivity between hiragana and katakana is sometimes useful.
Normalization. Unicode has combining characters. Like old typewriters, plain base characters (white spaces, punctuation characters, symbols, digits, or letters) can be followed by one or more non-spacing symbols (usually diacritics, like accent marks modifying letters) to form a single printable character; but Unicode also provides a limited set of precomposed characters, i.e. characters that already include one or more combining characters. A sequence of a base character + combining characters should be matched with the identical single precomposed character (only some of these combining sequences can be precomposed into a single Unicode character, but infinitely many other combining sequences are possible in Unicode, and needed for various languages, using one or more combining characters after an initial base character; these combining sequences may include a base character or combining characters partially precomposed, but not necessarily in canonical order and not necessarily using the canonical precompositions). The process of standardizing sequences of a base character + combining characters by decomposing these canonically equivalent sequences, before reordering them into canonical order (and optionally recomposing some combining characters into the leading base character) is called normalization.
New control codes. Unicode introduced, among other codes, byte order marks and text direction markers. These codes might have to be dealt with in a special way.
Introduction of character classes for Unicode blocks, scripts, and numerous other character properties. Block properties are much less useful than script properties, because a block can have code points from several different scripts, and a script can have code points from several different blocks. In Perl and the java.util.regex library, properties of the form \p{InX} or \p{Block=X} match characters in block X and \P{InX} or \P{Block=X} matches code points not in that block. Similarly, \p{Armenian}, \p{IsArmenian}, or \p{Script=Armenian} matches any character in the Armenian script. In general, \p{X} matches any character with either the binary property X or the general category X. For example, \p{Lu}, \p{Uppercase_Letter}, or \p{GC=Lu} matches any uppercase letter. Binary properties that are not general categories include \p{White_Space}, \p{Alphabetic}, \p{Math}, and \p{Dash}. Examples of non-binary properties are \p{Bidi_Class=Right_to_Left}, \p{Word_Break=A_Letter}, and \p{Numeric_Value=10}.
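Python's standard re module does not implement \p{...} property classes; the widely used third-party regex module does. A minimal sketch, assuming that module is installed:

    import regex   # third-party module (pip install regex); the standard re module lacks \p{...}

    print(regex.findall(r"\p{Armenian}+", "London Երևան Tokyo"))   # ['Երևան']
    print(regex.findall(r"\p{Lu}", "Stephen Cole Kleene"))          # ['S', 'C', 'K']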
Language support
Most general-purpose programming languages support regex capabilities, either natively or via libraries.
Uses
Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks.
Some high-end desktop publishing software has the ability to use regexes to automatically apply text styling, saving the person doing the layout from laboriously doing this by hand for anything that can be matched by a regex. For example, by defining a character style that makes text into small caps and then using the regex [A-Z]{4,} to apply that style, any word of four or more consecutive capital letters will be automatically rendered as small caps instead.
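Outside a layout program, the same pattern can be exercised directly; a minimal sketch in Python's re module with an invented sample sentence:

    import re

    text = "NASA and ESA signed a memorandum with UNESCO."
    print(re.findall(r"[A-Z]{4,}", text))   # ['NASA', 'UNESCO'] - runs of four or more capitals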
While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012.
Examples
The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions.
Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation.
This section provides a basic description of some of the properties of regexes by way of illustration.
The following conventions are used in the examples.
metacharacter(s) ;; the metacharacters column specifies the regex syntax being demonstrated
=~ m// ;; indicates a regex match operation in Perl
=~ s/// ;; indicates a regex substitution operation in Perl
These regexes are all Perl-like syntax. Standard POSIX regular expressions are different.
Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, \( \) vs. (), or lack of \d instead of POSIX [:digit:]).
The syntax and conventions used in these examples coincide with that of other programming environments as well.
Induction
Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).
| Technology | Software development: General | null |
25748 | https://en.wikipedia.org/wiki/Router%20%28computing%29 | Router (computing) | A router is a computer and networking device that forwards data packets between computer networks, including internetworks such as the global Internet.
A router is connected to two or more data lines from different IP networks. When a data packet comes in on a line, the router reads the network address information in the packet header to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Data packets are forwarded from one router to another through an internetwork until they reach their destination nodes.
The most familiar type of IP routers are home and small office routers that forward IP packets between the home computers and the Internet. More sophisticated routers, such as enterprise routers, connect large business or ISP networks to powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
Routers can be built from standard computer parts but are mostly specialized purpose-built computers. Early routers used software-based forwarding, running on a CPU. More sophisticated devices use application-specific integrated circuits (ASICs) to increase performance or add advanced filtering and firewall functionality.
Operation
When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol. Each router builds up a routing table, a list of routes between computer systems on the interconnected networks.
The software that runs the router is composed of two functional processing units that operate simultaneously, called planes:
Control plane: A router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table. The control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane.
Forwarding plane: This unit forwards the data packets between incoming and outgoing interface connections. It reads the header of each packet as it comes in, matches the destination to entries in the FIB supplied by the control plane, and directs the packet to the outgoing network specified in the FIB.
Applications
A router may have interfaces for multiple types of physical layer connections, such as copper cables, fiber optic, or wireless transmission. It can also support multiple network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a unique network prefix.
Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' (ISPs') networks; they are also responsible for directing data between different networks. The largest routers (such as the Cisco CRS-1 or Juniper PTX) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks.
All sizes of routers may be found inside enterprises. The most powerful routers are usually found in ISPs, academic and research facilities. Large businesses may also need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use. Some routers can connect to Data service units for T1 connections via serial ports.
Access, core and distribution
The hierarchical internetworking model divides enterprise networks into three layers: core, distribution, and access.
Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware like Tomato, OpenWrt, or DD-WRT.
Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network (WAN), so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks.
In enterprises, a core router may provide a collapsed backbone interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth but lack some of the features of edge routers.
Security
External networks must be carefully considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Routers also commonly perform network address translation, which restricts connections initiated from external hosts; this is not recognized as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because errors and potentially exploitable vulnerabilities are more likely to be discovered and addressed in an open-source environment.
Routing different networks
Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network (LAN) of a single organization is called an interior router. A router that is operated in the Internet backbone is described as an exterior router, while a router that connects a LAN with the Internet or a wide area network (WAN) is called a border router, or gateway router.
Internet connectivity and internal use
Routers intended for ISP and major enterprise connectivity usually exchange routing information using the Border Gateway Protocol (BGP). BGP routers can be classified into the following types according to their functions:
Edge router or inter-AS border router: Placed at the edge of an ISP network, where the router peers with upstream IP transit providers, with bilateral peers through an IXP, or through private peering (including settlement-free peering) over a Private Network Interconnect (PNI), making extensive use of the Exterior Border Gateway Protocol (eBGP).
Provider router (P): A provider router, also called a transit router, sits in an MPLS network and is responsible for establishing label-switched paths between the PE routers.
Provider edge router (PE): An MPLS-specific router in the network's access layer that interconnects with customer edge routers to provide layer 2 or layer 3 VPN services.
Customer edge router (CE): Located at the edge of the subscriber's network, it interconnects with the PE router for L2VPN services, or for direct layer 3 IP hand-off in the case of Dedicated Internet Access. If IP transit services are provided through an MPLS core, the CE peers with the PE using eBGP with the public ASNs of each respective network; in the case of L3VPN services the CE can exchange routes with the PE using eBGP. It is commonly used in both service provider and enterprise or data center organizations.
Core router: Resides within an Autonomous System as a backbone to carry traffic between edge routers.
Within an ISP: In the ISP's autonomous system, a router uses internal BGP to communicate with other ISP edge routers, other intranet core routers, or the ISP's intranet provider border routers.
Internet backbone: The Internet no longer has a clearly identifiable backbone, unlike its predecessor networks. See default-free zone (DFZ). The major ISPs' system routers make up what could be considered to be the current Internet backbone core. ISPs operate all four types of the BGP routers described here. An ISP core router is used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multiprotocol Label Switching protocols.
Port forwarding: In some networks that rely on legacy IPv4 and NAT, routers (often labeled as NAT boxes) are also used for port forwarding configuration between RFC 1918 address space and their publicly assigned IPv4 address.
Voice, data, fax, and video processing routers: Commonly referred to as access servers or gateways, these devices are used to route and process voice, data, video and fax traffic on the Internet. Since 2005, most long-distance phone calls have been processed as IP traffic (VoIP) through a voice gateway. Use of access server-type routers expanded with the advent of the Internet, first with dial-up access and later with a resurgence driven by voice phone service.
Larger networks commonly use multilayer switches, with layer-3 devices being used to simply interconnect multiple subnets within the same security zone, and higher-layer switches when filtering, translation, load balancing, or other higher-level functions are required, especially between zones.
Wi-Fi routers
Wi-Fi routers combine the functions of a router with those of a wireless access point. They are typically devices with a small form factor, operating on the standard electric power supply for residential use. Connected to the Internet as offered by an Internet service provider, they provide Internet access through a wireless network for home or office use.
History
The concepts of a switching node using software and an interface computer were first proposed by Donald Davies for the NPL network in 1966. The same idea was conceived by Wesley Clark the following year for use in the ARPANET, where these nodes were named Interface Message Processors (IMPs). The first interface computer was implemented at the National Physical Laboratory in the United Kingdom in early 1969, followed later that year by the IMPs at the University of California, Los Angeles, the Stanford Research Institute, the University of California, Santa Barbara, and the University of Utah School of Computing in the United States. All were built with the Honeywell 516. These computers had fundamentally the same functionality as a router does today.
The idea for a router (called a gateway at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). These gateway devices were different from most previous packet switching schemes in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that function entirely to the hosts. This particular idea, the end-to-end principle, was pioneered in the CYCLADES network.
The idea was explored in more detail, with the intention to produce a prototype system as part of two contemporaneous programs. One was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system. Some time after early 1974, the first Xerox routers became operational. Due to corporate intellectual property concerns, it received little attention outside Xerox for years. The other was the DARPA-initiated program, which created the TCP/IP architecture in use today. The first true IP router was developed by Ginny Travers at BBN, as part of that DARPA-initiated effort, during 1975–1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet. Mike Brecia, Ginny Travers, and Bob Hinden received the IEEE Internet Award for early IP routers in 2008.
The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981 and both were also based on PDP-11s. Stanford's router program was led by William Yeager and MIT's by Noel Chiappa. Virtually all networking now uses TCP/IP, but multiprotocol routers are still manufactured. They were important in the early stages of the growth of computer networking when protocols other than TCP/IP were in use. Modern routers that handle both IPv4 and IPv6 are multiprotocol but are simpler devices than ones processing AppleTalk, DECnet, IPX, and Xerox protocols.
From the mid-1970s and in the 1980s, general-purpose minicomputers served as routers. Modern high-speed routers are network processors or highly specialized computers with extra hardware acceleration added to speed both common routing functions, such as packet forwarding, and specialized functions such as IPsec encryption. There is substantial use of Linux and Unix software-based machines, running open source routing code, for research and other applications. The Cisco IOS operating system was independently designed. Major router operating systems, such as Junos and NX-OS, are extensively modified versions of Unix software.
Forwarding
The main purpose of a router is to connect multiple networks and forward packets destined either for directly attached networks or more remote networks. A router is considered a layer-3 device because its primary forwarding decision is based on the information in the layer-3 IP packet, specifically the destination IP address. When a router receives a packet, it searches its routing table to find the best match between the destination IP address of the packet and one of the addresses in the routing table. Once a match is found, the packet is encapsulated in the layer-2 data link frame for the outgoing interface indicated in the table entry. A router typically does not look into the packet payload, but only at the layer-3 addresses to make a forwarding decision, plus optionally other information in the header for hints on, for example, quality of service (QoS). For pure IP forwarding, a router is designed to minimize the state information associated with individual packets. Once a packet is forwarded, the router does not retain any historical information about the packet.
The routing table itself can contain information derived from a variety of sources, such as a default or static routes that are configured manually, or dynamic entries from routing protocols where the router learns routes from other routers. A default route is one that is used to route all traffic whose destination does not otherwise appear in the routing table; it is common – even necessary – in small networks, such as a home or small business where the default route simply sends all non-local traffic to the Internet service provider. The default route can be manually configured (as a static route); learned by dynamic routing protocols; or be obtained by DHCP.
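A minimal longest-prefix-match lookup can be sketched in Python with the standard ipaddress module. The table entries and next-hop names below are invented for illustration, and real routers use optimized data structures such as tries or TCAM rather than a linear scan.

```python
import ipaddress

# Illustrative routing table: (prefix, next hop). Names are hypothetical.
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"),       "isp-uplink"),    # default route
    (ipaddress.ip_network("192.168.0.0/16"),  "lan-switch"),
    (ipaddress.ip_network("192.168.10.0/24"), "office-subnet"),
]

def lookup(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    candidates = [(net, hop) for net, hop in routing_table if dest in net]
    # Prefer the most specific (longest) prefix among all matching entries.
    net, hop = max(candidates, key=lambda entry: entry[0].prefixlen)
    return hop

print(lookup("192.168.10.7"))   # office-subnet (most specific match)
print(lookup("8.8.8.8"))        # isp-uplink (falls back to the default route)
```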
A router can run more than one routing protocol at a time, particularly if it serves as an autonomous system border router between parts of a network that run different routing protocols; if it does so, then redistribution may be used (usually selectively) to share information between the different protocols running on the same router.
Besides deciding to which interface a packet is forwarded, which is handled primarily via the routing table, a router also has to manage congestion when packets arrive at a rate higher than the router can process. Three policies commonly used are tail drop, random early detection (RED), and weighted random early detection (WRED). Tail drop is the simplest and most easily implemented: the router simply drops new incoming packets once buffer space in the router is exhausted. RED probabilistically drops datagrams early when the queue exceeds a pre-configured portion of the buffer, until reaching a pre-determined maximum, when it drops all incoming packets, thus reverting to tail drop. WRED can be configured to drop packets more readily dependent on the type of traffic.
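The following Python sketch illustrates these policies in simplified form. The thresholds and drop-probability curve are invented; genuine RED operates on an averaged (smoothed) queue length rather than the instantaneous one used here, and WRED would simply apply different thresholds or probabilities per traffic class.

```python
import random

# Simplified drop decision combining tail drop and RED-style early dropping.
def should_drop(queue_len: int, capacity: int,
                min_thresh: int, max_thresh: int,
                max_drop_prob: float = 0.1) -> bool:
    if queue_len >= capacity:        # tail drop: buffer space exhausted
        return True
    if queue_len < min_thresh:       # queue short enough: never drop early
        return False
    if queue_len >= max_thresh:      # beyond the maximum threshold: drop all arrivals
        return True
    # Between the thresholds, drop with a probability that grows linearly.
    prob = max_drop_prob * (queue_len - min_thresh) / (max_thresh - min_thresh)
    return random.random() < prob

drops = sum(should_drop(40, capacity=64, min_thresh=20, max_thresh=50)
            for _ in range(10_000))
print(f"dropped {drops} of 10000 arrivals at queue length 40")
```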
Another function a router performs is traffic classification and deciding which packet should be processed first. This is managed through QoS, which is critical when Voice over IP is deployed, so as not to introduce excessive latency.
Yet another function a router performs is called policy-based routing where special rules are constructed to override the rules derived from the routing table when a packet forwarding decision is made.
Some of the functions may be performed through an application-specific integrated circuit (ASIC) to avoid the overhead of scheduling CPU time to process the packets. Others may have to be performed through the CPU as these packets need special attention that cannot be handled by an ASIC.
| Technology | Networks | null |
25750 | https://en.wikipedia.org/wiki/Routing | Routing | Routing is the process of selecting a path for traffic in a network or between or across multiple networks. Broadly, routing is performed in many types of networks, including circuit-switched networks, such as the public switched telephone network (PSTN), and computer networks, such as the Internet.
In packet switching networks, routing is the higher-level decision making that directs network packets from their source toward their destination through intermediate network nodes by specific packet forwarding mechanisms. Packet forwarding is the transit of network packets from one network interface to another. Intermediate nodes are typically network hardware devices such as routers, gateways, firewalls, or switches. General-purpose computers also forward packets and perform routing, although they have no specially optimized hardware for the task.
The routing process usually directs forwarding on the basis of routing tables. Routing tables maintain a record of the routes to various network destinations. Routing tables may be specified by an administrator, learned by observing network traffic or built with the assistance of routing protocols.
Routing, in a narrower sense of the term, often refers to IP routing and is contrasted with bridging. IP routing assumes that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within local area networks.
Delivery schemes
Routing schemes differ in how they deliver messages:
Unicast is the dominant form of message delivery on the Internet. This article focuses on unicast routing algorithms.
Topology distribution
With static routing, small networks may use manually configured routing tables. Larger networks have complex topologies that can change rapidly, making the manual construction of routing tables unfeasible. Nevertheless, most of the public switched telephone network (PSTN) uses pre-computed routing tables, with fallback routes if the most direct route becomes blocked (see routing in the PSTN).
Dynamic routing attempts to solve this problem by constructing routing tables automatically, based on information carried by routing protocols, allowing the network to act nearly autonomously in avoiding network failures and blockages. Dynamic routing dominates the Internet. Examples of dynamic-routing protocols and algorithms include Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP).
Distance vector algorithms
Distance vector algorithms use the Bellman–Ford algorithm. This approach assigns a cost number to each of the links between each node in the network. Nodes send information from point A to point B via the path that results in the lowest total cost (i.e. the sum of the costs of the links between the nodes used).
When a node first starts, it only knows of its immediate neighbors and the direct cost involved in reaching them. (This information — the list of destinations, the total cost to each, and the next hop to send data to get there — makes up the routing table, or distance table.) Each node, on a regular basis, sends to each neighbor node its own current assessment of the total cost to get to all the destinations it knows of. The neighboring nodes examine this information and compare it to what they already know; anything that represents an improvement on what they already have, they insert in their own table. Over time, all the nodes in the network discover the best next hop and total cost for all destinations.
When a network node goes down, any nodes that used it as their next hop discard the entry and convey the updated routing information to all adjacent nodes, which in turn repeat the process. Eventually, all the nodes in the network receive the updates and discover new paths to all the destinations that do not involve the down node.
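A toy distance-vector computation in Python, using Bellman–Ford relaxation over an invented three-node topology, looks roughly as follows; real protocols exchange these estimates in messages between neighbors rather than reading a shared data structure.

```python
# Toy distance-vector computation over an invented three-node topology.
INF = float("inf")
nodes = ["A", "B", "C"]
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5}  # symmetric link costs

def cost(u, v):
    return links.get((u, v), links.get((v, u), INF))

# Each node initially knows only itself and the direct cost to its neighbors.
dist = {u: {v: (0 if u == v else cost(u, v)) for v in nodes} for u in nodes}

changed = True
while changed:                        # repeat until no estimate improves
    changed = False
    for u in nodes:
        for neighbor in nodes:
            if u == neighbor or cost(u, neighbor) == INF:
                continue
            for dest in nodes:
                # Bellman-Ford relaxation: reach dest via this neighbor.
                via = cost(u, neighbor) + dist[neighbor][dest]
                if via < dist[u][dest]:
                    dist[u][dest] = via
                    changed = True

print(dist["A"])   # {'A': 0, 'B': 1, 'C': 3} (A reaches C via B at cost 3)
```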
Link-state algorithms
When applying link-state algorithms, a graphical map of the network is the fundamental data used for each node. To produce its map, each node floods the entire network with information about the other nodes it can connect to. Each node then independently assembles this information into a map. Using this map, each router independently determines the least-cost path from itself to every other node using a standard shortest paths algorithm such as Dijkstra's algorithm. The result is a tree graph rooted at the current node, such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node.
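The link-state computation can be sketched in a few lines of Python with Dijkstra's algorithm over an invented four-node graph; each router would build such a graph from the flooded advertisements and derive the next hop for every destination from the resulting shortest-path tree.

```python
import heapq

# Invented link-state graph: node -> {neighbor: link cost}.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 6},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 6, "C": 3},
}

def shortest_paths(source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

print(shortest_paths("A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```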
Optimized Link State Routing algorithm
A link-state routing algorithm optimized for mobile ad hoc networks is the optimized Link State Routing Protocol (OLSR). OLSR is proactive; it uses Hello and Topology Control (TC) messages to discover and disseminate link-state information through the mobile ad hoc network. Using Hello messages, each node discovers 2-hop neighbor information and elects a set of multipoint relays (MPRs). MPRs distinguish OLSR from other link-state routing protocols.
Path-vector protocol
Distance vector and link-state routing are both intra-domain routing protocols. They are used inside an autonomous system, but not between autonomous systems. Both of these routing protocols become intractable in large networks and cannot be used in inter-domain routing. Distance vector routing is subject to instability if there are more than a few hops in the domain. Link state routing needs significant resources to calculate routing tables. It also creates heavy traffic due to flooding.
Path-vector routing is used for inter-domain routing. It is similar to distance vector routing. Path-vector routing assumes that one node (there can be many) in each autonomous system acts on behalf of the entire autonomous system. This node is called the speaker node. The speaker node creates a routing table and advertises it to neighboring speaker nodes in neighboring autonomous systems. The idea is the same as distance vector routing except that only speaker nodes in each autonomous system can communicate with each other. The speaker node advertises the path, not the metric, of the nodes in its autonomous system or other autonomous systems.
The path-vector routing algorithm is similar to the distance vector algorithm in the sense that each border router advertises the destinations it can reach to its neighboring router. However, instead of advertising networks in terms of a destination and the distance to that destination, networks are advertised as destination addresses and path descriptions to reach those destinations. The path, expressed in terms of the domains (or confederations) traversed so far, is carried in a special path attribute that records the sequence of routing domains through which the reachability information has passed. A route is defined as a pairing between a destination and the attributes of the path to that destination, thus the name path-vector routing; the routers receive a vector that contains paths to a set of destinations.
Path selection
Path selection involves applying a routing metric to multiple routes to select (or predict) the best route. Most routing algorithms use only one network path at a time. Multipath routing and specifically equal-cost multi-path routing techniques enable the use of multiple alternative paths.
In computer networking, the metric is computed by a routing algorithm, and can cover information such as bandwidth, network delay, hop count, path cost, load, maximum transmission unit, reliability, and communication cost. The routing table stores only the best possible routes, while link-state or topological databases may store all other information as well.
In case of overlapping or equal routes, algorithms consider the following elements in priority order to decide which routes to install into the routing table:
Prefix length: A matching route table entry with a longer subnet mask is always preferred as it specifies the destination more exactly.
Metric: When comparing routes learned via the same routing protocol, a lower metric is preferred. Metrics cannot be compared between routes learned from different routing protocols.
Administrative distance: When comparing route table entries from different sources such as different routing protocols and static configuration, a lower administrative distance indicates a more reliable source and thus a preferred route.
Because a routing metric is specific to a given routing protocol, multi-protocol routers must use some external heuristic to select between routes learned from different routing protocols. Cisco routers, for example, attribute a value known as the administrative distance to each route, where smaller administrative distances indicate routes learned from a protocol assumed to be more reliable.
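A compact Python sketch of this selection order is shown below; the candidate routes and the administrative-distance values (which follow common Cisco defaults, e.g. static 1 and OSPF 110) are illustrative assumptions only.

```python
# Candidate routes as (prefix_length, administrative_distance, metric, label).
candidate_routes = [
    (24, 110, 20, "OSPF 10.1.1.0/24"),
    (24,   1,  0, "static 10.1.1.0/24"),
    (16, 110, 10, "OSPF 10.1.0.0/16"),
]

def preference(route):
    prefix_len, admin_distance, metric, _ = route
    # Longer prefix wins, then lower administrative distance, then lower metric.
    return (-prefix_len, admin_distance, metric)

best = min(candidate_routes, key=preference)
print("installed:", best[3])   # installed: static 10.1.1.0/24
```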
A local administrator can set up host-specific routes that provide more control over network usage, permit testing, and improve overall security. This is useful for debugging network connections or routing tables.
In some small systems, a single central device decides ahead of time the complete path of every packet. In some other small systems, whichever edge device injects a packet into the network decides ahead of time the complete path of that particular packet. In either case, the route-planning device needs to know a lot of information about what devices are connected to the network and how they are connected to each other. Once it has this information, it can use an algorithm such as A* search algorithm to find the best path.
In high-speed systems, there are so many packets transmitted every second that it is infeasible for a single device to calculate the complete path for each and every packet. Early high-speed systems dealt with this with circuit switching by setting up a path once for the first packet between some source and some destination; later packets between that same source and that same destination continue to follow the same path without recalculating until the circuit teardown. Later high-speed systems inject packets into the network without any one device ever calculating a complete path for packets.
In large systems, there are so many connections between devices, and those connections change so frequently, that it is infeasible for any one device to even know how all the devices are connected to each other, much less calculate a complete path through them. Such systems generally use next-hop routing.
Most systems use a deterministic dynamic routing algorithm. When a device chooses a path to a particular final destination, that device always chooses the same path to that destination until it receives information that makes it think some other path is better.
A few routing algorithms do not use a deterministic algorithm to find the best link for a packet to get from its original source to its final destination. Instead, to avoid congestion hot spots in packet systems, a few algorithms use a randomized algorithm—Valiant's paradigm—that routes a path to a randomly picked intermediate destination, and from there to its true final destination. In many early telephone switches, a randomizer was often used to select the start of a path through a multistage switching fabric.
Depending on the application for which path selection is performed, different metrics can be used. For example, for web requests one can use minimum latency paths to minimize web page load time, or for bulk data transfers one can choose the least utilized path to balance load across the network and increase throughput. A popular path selection objective is to reduce the average completion times of traffic flows and the total network bandwidth consumption. Recently, a path selection metric was proposed that computes the total number of bytes scheduled on the edges per path as its selection metric. An empirical analysis of several path selection metrics, including this new proposal, has been made available.
Multiple agents
In some networks, routing is complicated by the fact that no single entity is responsible for selecting paths; instead, multiple entities are involved in selecting paths or even parts of a single path. Complications or inefficiency can result if these entities choose paths to optimize their own objectives, which may conflict with the objectives of other participants.
A classic example involves traffic in a road system, in which each driver picks a path that minimizes their travel time. With such routing, the equilibrium routes can be longer than optimal for all drivers. In particular, Braess's paradox shows that adding a new road can lengthen travel times for all drivers.
In a single-agent model used, for example, for routing automated guided vehicles (AGVs) on a terminal, reservations are made for each vehicle to prevent simultaneous use of the same part of an infrastructure. This approach is also referred to as context-aware routing.
The Internet is partitioned into autonomous systems (ASs) such as internet service providers (ISPs), each of which controls routes involving its network. Routing occurs at multiple levels. First, AS-level paths are selected via the BGP protocol that produces a sequence of ASs through which packets flow. Each AS may have multiple paths, offered by neighboring ASs, from which to choose. These routing decisions often correlate with business relationships with these neighboring ASs, which may be unrelated to path quality or latency. Second, once an AS-level path has been selected, there are often multiple corresponding router-level paths to choose from. This is partly because two ISPs may be connected through multiple connections. In choosing the single router-level path, it is common practice for each ISP to employ hot-potato routing: sending traffic along the path that minimizes the distance through the ISP's own network—even if that path lengthens the total distance to the destination.
For example, consider two ISPs, A and B. Each has a presence in New York, connected by a fast link with latency 5 ms, and each has a presence in London connected by a 5 ms link. Suppose both ISPs have trans-Atlantic links that connect their two networks, but A's link has latency 100 ms and B's has latency 120 ms. When routing a message from a source in A's London network to a destination in B's New York network, A may choose to immediately send the message to B in London. This saves A the work of sending it along an expensive trans-Atlantic link, but causes the message to experience latency 125 ms when the other route would have been 20 ms faster.
Additionally, a similar routing challenge can be observed in cellular networks, where different packets are destined for various endpoints, and each link exhibits varying spectral efficiency. In this context, the selection of the optimal path involves considering latency and packet error rate. To address this, multiple independent entities, one for each base station, play a crucial role in path selection while striving to optimize overall network performance.
A 2003 measurement study of Internet routes found that, between pairs of neighboring ISPs, more than 30% of paths have inflated latency due to hot-potato routing, with 5% of paths being delayed by at least 12 ms. Inflation due to AS-level path selection, while substantial, was attributed primarily to BGP's lack of a mechanism to directly optimize for latency, rather than to selfish routing policies. It was also suggested that, were an appropriate mechanism in place, ISPs would be willing to cooperate to reduce latency rather than use hot-potato routing. Such a mechanism was later published by the same authors, first for the case of two ISPs and then for the global case.
Route analytics
As the Internet and IP networks have become mission critical business tools, there has been increased interest in techniques and methods to monitor the routing posture of networks. Incorrect routing or routing issues cause undesirable performance degradation, flapping or downtime. Monitoring routing in a network is achieved using route analytics tools and techniques.
Centralized routing
In networks where a logically centralized control is available over the forwarding state, for example, using software-defined networking, routing techniques can be used that aim to optimize global and network-wide performance metrics. This has been used by large internet companies that operate many data centers in different geographical locations attached using private optical links, examples of which include Microsoft's Global WAN, Facebook's Express Backbone, and Google's B4.
Global performance metrics to optimize include maximizing network utilization, minimizing traffic flow completion times, maximizing the traffic delivered prior to specific deadlines and reducing the completion times of flows. Work on the latter over private WANs discusses modeling routing as a graph optimization problem by pushing all the queuing to the end-points. The authors also propose a heuristic to solve the problem efficiently while sacrificing negligible performance.
| Technology | Networks | null |
25754 | https://en.wikipedia.org/wiki/Resistor | Resistor | A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, to divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of electrical power as heat may be used as part of motor controls, in power distribution systems, or as test loads for generators.
Fixed resistors have resistances that only change slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements (such as a volume control or a lamp dimmer), or as sensing devices for heat, light, humidity, force, or chemical activity.
Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds and forms. Resistors are also implemented within integrated circuits.
The electrical function of a resistor is specified by its resistance: common commercial resistors are manufactured over a range of more than nine orders of magnitude. The nominal value of the resistance falls within the manufacturing tolerance, indicated on the component.
Electronic symbols and notation
Two typical schematic diagram symbols are as follows:
The notation to state a resistor's value in a circuit diagram varies.
One common scheme is the RKM code following IEC 60062. Rather than using a decimal separator, this notation uses a letter loosely associated with SI prefixes corresponding with the part's resistance. For example, 8K2 as a part marking code, in a circuit diagram or in a bill of materials (BOM), indicates a resistor value of 8.2 kΩ. Additional zeros imply a tighter tolerance, for example 15M0 for three significant digits. When the value can be expressed without the need for a prefix (that is, multiplicator 1), an "R" is used instead of the decimal separator. For example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω.
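A small Python sketch of how such a code might be decoded is given below; it handles only the common R/K/M/G letters and is meant as an illustration of the scheme, not a complete IEC 60062 parser.

```python
import re

# Letter acts as both multiplier and decimal separator (R = 1, K = kilo, ...).
MULTIPLIERS = {"R": 1, "K": 1e3, "M": 1e6, "G": 1e9}

def parse_rkm(code: str) -> float:
    match = re.fullmatch(r"(\d*)([RKMG])(\d*)", code.upper())
    if not match:
        raise ValueError(f"not an RKM code: {code}")
    whole, letter, frac = match.groups()
    value = float(f"{whole or 0}.{frac or 0}")
    return value * MULTIPLIERS[letter]

print(parse_rkm("8K2"))    # 8200.0      (8.2 kOhm)
print(parse_rkm("15M0"))   # 15000000.0  (15.0 MOhm)
print(parse_rkm("1R2"))    # 1.2
print(parse_rkm("18R"))    # 18.0
```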
Theory of operation
Ohm's law
An ideal resistor (i.e. a resistance without reactance) obeys Ohm's law: V = I·R.
Ohm's law states that the voltage (V) across a resistor is proportional to the current (I) passing through it, where the constant of proportionality is the resistance (R). For example, if a 300-ohm resistor is attached across the terminals of a 12-volt battery, then a current of 12 / 300 = 0.04 amperes flows through that resistor.
The ohm (symbol: Ω) is the SI unit of electrical resistance, named after Georg Simon Ohm. An ohm is equivalent to a volt per ampere. Since resistors are specified and manufactured over a very large range of values, the derived units of milliohm (1 mΩ = 10⁻³ Ω), kilohm (1 kΩ = 10³ Ω), and megohm (1 MΩ = 10⁶ Ω) are also in common usage.
Series and parallel resistors
The total resistance of resistors connected in series is the sum of their individual resistance values.
The total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors.
For example, a 10 ohm resistor connected in parallel with a 5 ohm resistor and a 15 ohm resistor produces 1/(1/10 + 1/5 + 1/15) = 30/11 ohms of resistance, or approximately 2.727 ohms.
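A short Python sketch of both rules, reproducing the 10 Ω, 5 Ω and 15 Ω parallel example above:

```python
# Series resistance is the plain sum; parallel resistance is the reciprocal
# of the sum of reciprocals.
def series(*resistors):
    return sum(resistors)

def parallel(*resistors):
    return 1 / sum(1 / r for r in resistors)

print(series(10, 5, 15))     # 30 (ohms)
print(parallel(10, 5, 15))   # 2.727... (ohms, i.e. 30/11)
```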
A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other. Some complex networks of resistors cannot be resolved in this manner, requiring more sophisticated circuit analysis. Generally, the Y-Δ transform, or matrix methods can be used to solve such problems.
Power dissipation
At any instant, the power P (watts) consumed by a resistor of resistance R (ohms) is calculated as P = I·V = I²·R = V²/R,
where V (volts) is the voltage across the resistor and I (amps) is the current flowing through it. Using Ohm's law, the two other forms can be derived. This power is converted into heat which must be dissipated by the resistor's package before its temperature rises excessively.
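A quick numerical check in Python, reusing the 300-ohm, 12-volt example from the Ohm's law section, confirms that the three forms agree.

```python
import math

# Power dissipated by a 300-ohm resistor across a 12-volt supply.
V, R = 12.0, 300.0
I = V / R                      # Ohm's law: 0.04 A

p_vi  = V * I
p_v2r = V**2 / R
p_i2r = I**2 * R

print(p_vi, p_v2r, p_i2r)      # 0.48 0.48 0.48 (watts)
assert math.isclose(p_vi, p_v2r) and math.isclose(p_vi, p_i2r)
```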
Resistors are rated according to their maximum power dissipation. Discrete resistors in solid-state electronic systems are typically rated as 1/10, 1/8, or 1/4 watt. They usually absorb much less than a watt of electrical power and require little attention to their power rating.
Power resistors are required to dissipate substantial amounts of power and are typically used in power supplies, power conversion circuits, and power amplifiers; this designation is loosely applied to resistors with power ratings of 1 watt or greater. Power resistors are physically larger and may not use the preferred values, color codes, and external packages described below.
If the average power dissipated by a resistor is more than its power rating, damage to the resistor may occur, permanently altering its resistance; this is distinct from the reversible change in resistance due to its temperature coefficient when it warms. Excessive power dissipation may raise the temperature of the resistor to a point where it can burn the circuit board or adjacent components, or even cause a fire. There are flameproof resistors that will not produce flames with any overload of any duration.
Resistors may be specified with higher rated dissipation than is experienced in service to account for poor air circulation, high altitude, or high operating temperature.
All resistors have a maximum voltage rating; this may limit the power dissipation for higher resistance values. For instance, among fractional-watt resistors (a very common sort of leaded resistor), one is listed with a resistance of 100 MΩ and a maximum rated voltage of 750 V. However, even placing 750 V across a 100 MΩ resistor continuously would only result in a power dissipation of less than 6 mW, making the nominal power rating meaningless.
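A sketch of that trade-off in Python, using the 100 MΩ / 750 V figures from the text; the 0.25 W power rating is an assumed value for a typical small leaded resistor, not taken from the source.

```python
import math

# The usable voltage is limited by whichever bound is reached first:
# the power rating (V = sqrt(P * R)) or the voltage rating itself.
def max_usable_voltage(resistance_ohms, power_rating_w, voltage_rating_v):
    v_power_limited = math.sqrt(power_rating_w * resistance_ohms)
    return min(v_power_limited, voltage_rating_v)

print(max_usable_voltage(100e6, 0.25, 750))   # 750 V: the voltage rating limits first
print((750**2) / 100e6)                       # ~0.0056 W dissipated at 750 V
```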
Nonideal properties
Practical resistors have a series inductance and a small parallel capacitance; these specifications can be important in high-frequency applications. And while even an ideal resistor inherently has Johnson noise, some resistors have worse noise characteristics and so may be an issue for low-noise amplifiers or other sensitive electronics.
In some precision applications, the temperature coefficient of the resistance may also be of concern.
The unwanted inductance, excess noise, and temperature coefficient are mainly dependent on the technology used in manufacturing the resistor. They are not normally specified individually for a particular family of resistors manufactured using a particular technology. A family of discrete resistors may also be characterized according to its form factor, that is, the size of the device and the position of its leads (or terminals). This is relevant in the practical manufacturing of circuits that may use them.
Practical resistors are also specified as having a maximum power rating which must exceed the anticipated power dissipation of that resistor in a particular circuit: this is mainly of concern in power electronics applications.
Resistors with higher power ratings are physically larger and may require heat sinks. In a high-voltage circuit, attention must sometimes be paid to the rated maximum working voltage of the resistor. While there is no minimum working voltage for a given resistor, failure to account for a resistor's maximum rating may cause the resistor to incinerate when current is run through it.
Fixed resistors
Lead arrangements
Through-hole components typically have "leads" (pronounced /liːdz/) leaving the body "axially", that is, on a line parallel with the part's longest axis. Others have leads coming off their body "radially" instead. Other components may be SMT (surface mount technology), while high power resistors may have one of their leads designed into the heat sink.
Carbon composition
Carbon composition resistors (CCR) consist of a solid cylindrical resistive element with embedded wire leads or metal end caps to which the lead wires are attached. The body of the resistor is protected with paint or plastic. Early 20th-century carbon composition resistors had uninsulated bodies; the lead wires were wrapped around the ends of the resistance element rod and soldered. The completed resistor was painted for color-coding of its value.
The resistive element in carbon composition resistors is made from a mixture of finely powdered carbon and an insulating material, usually ceramic. A resin holds the mixture together. The resistance is determined by the ratio of the fill material (the powdered ceramic) to the carbon. Higher concentrations of carbon, which is a good conductor, result in lower resistances. Carbon composition resistors were commonly used in the 1960s and earlier, but are not popular for general use now as other types have better specifications, such as tolerance, voltage dependence, and stress. Carbon composition resistors change value when stressed with over-voltages. Moreover, if internal moisture content, such as from exposure for some length of time to a humid environment, is significant, soldering heat creates a non-reversible change in resistance value. Carbon composition resistors have poor stability with time and were consequently factory sorted to, at best, only 5% tolerance. These resistors are non-inductive, which provides benefits when used in voltage pulse reduction and surge protection applications. Carbon composition resistors have higher capability to withstand overload relative to the component's size.
Carbon composition resistors are still available, but relatively expensive. Values ranged from fractions of an ohm to 22 megohms. Due to their high price, these resistors are no longer used in most applications. However, they are used in power supplies and welding controls. They are also in demand for repair of vintage electronic equipment where authenticity is a factor.
Carbon pile
A carbon pile resistor is made of a stack of carbon disks compressed between two metal contact plates. Adjusting the clamping pressure changes the resistance between the plates. These resistors are used when an adjustable load is required, such as in testing automotive batteries or radio transmitters. A carbon pile resistor can also be used as a speed control for small motors in household appliances (sewing machines, hand-held mixers) with ratings up to a few hundred watts. A carbon pile resistor can be incorporated in automatic voltage regulators for generators, where the carbon pile controls the field current to maintain relatively constant voltage. This principle is also applied in the carbon microphone.
Carbon film
In manufacturing carbon film resistors, a carbon film is deposited on an insulating substrate, and a helix is cut in it to create a long, narrow resistive path. Varying shapes, coupled with the resistivity of amorphous carbon (ranging from 500 to 800 μΩ m), can provide a wide range of resistance values. Carbon film resistors feature lower noise compared to carbon composition resistors because of the precise distribution of the pure graphite without binding. Carbon film resistors feature a power rating range of 0.125 W to 5 W at 70 °C. Available resistances range from 1 ohm to 10 megohms. The carbon film resistor has an operating temperature range of −55 °C to 155 °C and a maximum working voltage range of 200 to 600 volts. Special carbon film resistors are used in applications requiring high pulse stability.
Printed carbon resistors
Carbon composition resistors can be printed directly onto printed circuit board (PCB) substrates as part of the PCB manufacturing process. Although this technique is more common on hybrid PCB modules, it can also be used on standard fibreglass PCBs. Tolerances are typically quite large and can be in the order of 30%. A typical application would be non-critical pull-up resistors.
Thick and thin film
Thick film resistors became popular during the 1970s, and most SMD (surface mount device) resistors today are of this type. The resistive element of thick films is 1000 times thicker than thin films, but the principal difference is how the film is applied to the cylinder (axial resistors) or the surface (SMD resistors).
Thin film resistors are made by sputtering (a method of vacuum deposition) the resistive material onto an insulating substrate. The film is then etched in a similar manner to the old (subtractive) process for making printed circuit boards; that is, the surface is coated with a photo-sensitive material, covered by a pattern film, irradiated with ultraviolet light, and then the exposed photo-sensitive coating is developed, and underlying thin film is etched away.
Thick film resistors are manufactured using screen and stencil printing processes.
Because the time during which the sputtering is performed can be controlled, the thickness of the thin film can be accurately controlled. The type of material also varies, consisting of one or more ceramic (cermet) conductors such as tantalum nitride (TaN), ruthenium oxide (RuO2), lead oxide (PbO), bismuth ruthenate (Bi2Ru2O7), nickel chromium (NiCr), or bismuth iridate (Bi2Ir2O7).
The resistance of both thin and thick film resistors after manufacture is not highly accurate; they are usually trimmed to an accurate value by abrasive or laser trimming. Thin film resistors are usually specified with tolerances of 1% and 5%, and with temperature coefficients of 5 to 50 ppm/K. They also have much lower noise levels, on the level of 10–100 times less than thick film resistors. Thick film resistors may use the same conductive ceramics, but they are mixed with sintered (powdered) glass and a carrier liquid so that the composite can be screen-printed. This composite of glass and conductive ceramic (cermet) material is then fused (baked) in an oven at about 850 °C.
When first manufactured, thick film resistors had tolerances of 5%, but standard tolerances have improved to 2% or 1% in the last few decades. Temperature coefficients of thick film resistors are typically ±200 or ±250 ppm/K; a 40-kelvin (70 °F) temperature change can change the resistance by 1%.
Thin film resistors are usually far more expensive than thick film resistors. For example, SMD thin film resistors, with 0.5% tolerances and with 25 ppm/K temperature coefficients, when bought in full size reel quantities, are about twice the cost of 1%, 250 ppm/K thick film resistors.
Metal film
A common type of axial-leaded resistor today is the metal-film resistor. Metal Electrode Leadless Face (MELF) resistors often use the same technology.
Metal film resistors are usually coated with nickel chromium (NiCr), but might be coated with any of the cermet materials listed above for thin film resistors. Unlike thin film resistors, the material may be applied using different techniques than sputtering (though this is one technique used). The resistance value is determined by cutting a helix through the coating rather than by etching, similar to the way carbon resistors are made. The result is a reasonable tolerance (0.5%, 1%, or 2%) and a temperature coefficient that is generally between 50 and 100 ppm/K. Metal film resistors possess good noise characteristics and low non-linearity due to a low voltage coefficient. They are also beneficial due to long-term stability.
Metal oxide film
Metal-oxide film resistors are made of metal oxides which results in a higher operating temperature and greater stability and reliability than metal film. They are used in applications with high endurance demands.
Wire wound
Wirewound resistors are commonly made by winding a metal wire, usually nichrome, around a ceramic, plastic, or fiberglass core. The ends of the wire are soldered or welded to two caps or rings, attached to the ends of the core. The assembly is protected with a layer of paint, molded plastic, or an enamel coating baked at high temperature. These resistors are designed to withstand unusually high temperatures of up to 450 °C. Wire leads in low power wirewound resistors are usually between 0.6 and 0.8 mm in diameter and tinned for ease of soldering. For higher power wirewound resistors, either a ceramic outer case or an aluminum outer case on top of an insulating layer is used. If the outer case is ceramic, such resistors are sometimes described as "cement" resistors, though they do not actually contain any traditional cement. The aluminum-cased types are designed to be attached to a heat sink to dissipate the heat; the rated power is dependent on being used with a suitable heat sink, e.g., a 50 W power rated resistor overheats at a fraction of the power dissipation if not used with a heat sink. Large wirewound resistors may be rated for 1,000 watts or more.
Because wirewound resistors are coils they have more undesirable inductance than other types of resistor. However, winding the wire in sections with alternately reversed direction can minimize inductance. Other techniques employ bifilar winding, or a flat thin former (to reduce cross-section area of the coil). For the most demanding circuits, resistors with Ayrton–Perry winding are used.
Applications of wirewound resistors are similar to those of composition resistors with the exception of high frequency applications. The high frequency response of wirewound resistors is substantially worse than that of a composition resistor.
Metal foil resistor
In 1960, Felix Zandman and Sidney J. Stein presented a development of resistor film of very high stability.
The primary resistance element of a foil resistor is a chromium nickel alloy foil several micrometers thick. Chromium nickel alloys are characterized by having a large electrical resistance (about 58 times that of copper), a small temperature coefficient and high resistance to oxidation. Examples are Chromel A and Nichrome V, whose typical composition is 80 Ni and 20 Cr, with a melting point of 1420 °C. When iron is added, the chromium nickel alloy becomes more ductile. Nichrome and Chromel C are examples of alloys containing iron; the typical composition of Nichrome is 60 Ni, 12 Cr, 26 Fe, 2 Mn, and that of Chromel C is 64 Ni, 11 Cr, 25 Fe. The melting temperatures of these alloys are 1350 °C and 1390 °C, respectively.
Since their introduction in the 1960s, foil resistors have had the best precision and stability of any resistor available. One of the important parameters of stability is the temperature coefficient of resistance (TCR). The TCR of foil resistors is extremely low, and has been further improved over the years. One range of ultra-precision foil resistors offers a TCR of 0.14 ppm/°C, tolerance ±0.005%, long-term stability (1 year) 25 ppm, (3 years) 50 ppm (further improved 5-fold by hermetic sealing), stability under load (2000 hours) 0.03%, thermal EMF 0.1 μV/°C, noise −42 dB, voltage coefficient 0.1 ppm/V, inductance 0.08 μH, capacitance 0.5 pF.
The thermal stability of this type of resistor also has to do with the opposing effects of the metal's electrical resistance increasing with temperature, and being reduced by thermal expansion leading to an increase in thickness of the foil, whose other dimensions are constrained by a ceramic substrate.
Ammeter shunts
An ammeter shunt is a special type of current-sensing resistor, having four terminals and a value in milliohms or even micro-ohms. Current-measuring instruments, by themselves, can usually accept only limited currents. To measure high currents, the current passes through the shunt across which the voltage drop is measured and interpreted as current. A typical shunt consists of two solid metal blocks, sometimes brass, mounted on an insulating base. Between the blocks, and soldered or brazed to them, are one or more strips of low temperature coefficient of resistance (TCR) manganin alloy. Large bolts threaded into the blocks make the current connections, while much smaller screws provide voltmeter connections. Shunts are rated by full-scale current, and often have a voltage drop of 50 mV at rated current. Such meters are adapted to the shunt's full current rating by using an appropriately marked dial face; no change needs to be made to the other parts of the meter.
Grid resistor
In heavy-duty industrial high-current applications, a grid resistor is a large convection-cooled lattice of stamped metal alloy strips connected in rows between two electrodes. Such industrial grade resistors can be as large as a refrigerator; some designs can handle over 500 amperes of current, with a range of resistances extending lower than 0.04 ohms. They are used in applications such as dynamic braking and load banking for locomotives and trams, neutral grounding for industrial AC distribution, control loads for cranes and heavy equipment, load testing of generators and harmonic filtering for electric substations.
The term grid resistor is sometimes used to describe a resistor of any type connected to the control grid of a vacuum tube. This is not a resistor technology; it is an electronic circuit topology.
Special varieties
Cermet
Phenolic
Tantalum
Water resistor
Variable resistors
Adjustable resistors
A resistor may have one or more fixed tapping points so that the resistance can be changed by moving the connecting wires to different terminals. Some wirewound power resistors have a tapping point that can slide along the resistance element, allowing a larger or smaller part of the resistance to be used.
Where continuous adjustment of the resistance value during operation of equipment is required, the sliding resistance tap can be connected to a knob accessible to an operator. Such a device is called a rheostat and has two terminals.
Potentiometers
A potentiometer (colloquially, pot) is a three-terminal resistor with a continuously adjustable tapping point controlled by rotation of a shaft or knob or by a linear slider. The name potentiometer comes from its function as an adjustable voltage divider to provide a variable potential at the terminal connected to the tapping point. Volume control in an audio device is a common application of a potentiometer. A typical low power potentiometer (see drawing) is constructed of a flat resistance element (B) of carbon composition, metal film, or conductive plastic, with a springy phosphor bronze wiper contact (C) which moves along the surface. An alternate construction is resistance wire wound on a form, with the wiper sliding axially along the coil. These have lower resolution, since as the wiper moves the resistance changes in steps equal to the resistance of a single turn.
High-resolution multiturn potentiometers are used in precision applications. These have wire-wound resistance elements typically wound on a helical mandrel, with the wiper moving on a helical track as the control is turned, making continuous contact with the wire. Some include a conductive-plastic resistance coating over the wire to improve resolution. These typically offer ten turns of their shafts to cover their full range. They are usually set with dials that include a simple turns counter and a graduated dial, and can typically achieve three-digit resolution. Electronic analog computers used them in quantity for setting coefficients, and delayed-sweep oscilloscopes of recent decades included one on their panels.
Resistance decade boxes
A resistance decade box or resistor substitution box is a unit containing resistors of many values, with one or more mechanical switches which allow any one of various discrete resistances offered by the box to be dialed in. Usually the resistance is accurate to high precision, ranging from laboratory/calibration grade accuracy of 20 parts per million, to field grade at 1%. Inexpensive boxes with lesser accuracy are also available. All types offer a convenient way of selecting and quickly changing a resistance in laboratory, experimental and development work without needing to attach resistors one by one, or even stock each value. The range of resistance provided, the maximum resolution, and the accuracy characterize the box. For example, one box offers resistances from 0 to 100 megohms, maximum resolution 0.1 ohm, accuracy 0.1%.
Special devices
There are various devices whose resistance changes with various quantities. The resistance of NTC thermistors exhibits a strong negative temperature coefficient, making them useful for measuring temperatures. Because their resistance is large when cold and drops as the passage of current heats them, they are also commonly used to limit inrush current surges when equipment is powered on. Similarly, the resistance of a humistor varies with humidity. One sort of photodetector, the photoresistor, has a resistance which varies with illumination.
The strain gauge, invented by Edward E. Simmons and Arthur C. Ruge in 1938, is a type of resistor that changes value with applied strain. A single resistor may be used, or a pair (half bridge), or four resistors connected in a Wheatstone bridge configuration. The strain resistor is bonded with adhesive to an object that is subjected to mechanical strain. With the strain gauge and a filter, amplifier, and analog/digital converter, the strain on an object can be measured.
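As a sketch of the underlying arithmetic (the gauge factor, strain, and excitation voltage below are illustrative assumptions, not values from the text), a bonded gauge changes resistance by ΔR/R = GF·ε, and a quarter-bridge Wheatstone arrangement with excitation V_ex gives an output of approximately V_out ≈ (V_ex/4)·GF·ε; a gauge factor of 2, a strain of 500 με and 5 V excitation therefore yield about (5/4) × 2 × 500×10⁻⁶ ≈ 1.25 mV.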
A related but more recent invention uses a Quantum Tunnelling Composite to sense mechanical stress. It passes a current whose magnitude can vary by a factor of 10¹² in response to changes in applied pressure.
Measurement
The value of a resistor can be measured with an ohmmeter, which may be one function of a multimeter. Usually, probes on the ends of test leads connect to the resistor. A simple ohmmeter may apply a voltage from a battery across the unknown resistor (with an internal resistor of a known value in series) producing a current which drives a meter movement. The current, in accordance with Ohm's law, is inversely proportional to the sum of the internal resistance and the resistor being tested, resulting in an analog meter scale which is very non-linear, calibrated from infinity to 0 ohms. A digital multimeter, using active electronics, may instead pass a specified current through the test resistance. The voltage generated across the test resistance in that case is linearly proportional to its resistance, which is measured and displayed. In either case the low-resistance ranges of the meter pass much more current through the test leads than do high-resistance ranges. This allows for the voltages present to be at reasonable levels (generally below 10 volts) but still measurable.
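The non-linear scale of a series-type analog ohmmeter can be illustrated with a short calculation; the battery voltage and internal resistance below are arbitrary assumptions for the sketch, not values from any particular meter.
```python
# Sketch of a series-type analog ohmmeter: deflection vs. unknown resistance.
# The 1.5 V battery and 1 kOhm internal resistance are assumed for illustration.
V_BATTERY = 1.5        # volts
R_INTERNAL = 1_000.0   # ohms; sets the mid-scale point of the meter

def deflection(r_unknown):
    """Fraction of full-scale deflection for a given unknown resistance.

    Full scale (1.0) corresponds to 0 ohms; zero deflection to an open circuit.
    """
    current = V_BATTERY / (R_INTERNAL + r_unknown)   # Ohm's law
    full_scale_current = V_BATTERY / R_INTERNAL      # current with probes shorted
    return current / full_scale_current

for r in (0, 500, 1_000, 2_000, 10_000, 100_000):
    print(f"{r:>7} ohms -> {deflection(r):.2%} of full scale")
# The readings bunch up at the high-resistance end, which is why the printed
# scale on such meters is crowded near "infinity".
```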
Measuring low-value resistors, such as fractional-ohm resistors, with acceptable accuracy requires four-terminal connections. One pair of terminals applies a known, calibrated current to the resistor, while the other pair senses the voltage drop across the resistor. Some laboratory-quality ohmmeters, milliohmmeters, and even some of the better digital multimeters provide four input terminals for this purpose, which may be used with special test leads called Kelvin clips. Each of the two clips has a pair of jaws insulated from each other. One side of each clip applies the measuring current, while the other connections are only to sense the voltage drop. The resistance is again calculated using Ohm's law as the measured voltage divided by the applied current.
Standards
Production resistors
Resistor characteristics are quantified and reported using various national standards. In the US, MIL-STD-202 contains the relevant test methods to which other standards refer.
There are various standards specifying properties of resistors for use in equipment:
IEC 60062 (IEC 62) / DIN 40825 / BS 1852 / IS 8186 / JIS C 5062 etc. (Resistor color code, RKM code, date code)
EIA RS-279 / DIN 41429 (Resistor color code)
IEC 60063 (IEC 63) / JIS C 5063 (Standard E series values)
MIL-PRF-26
MIL-PRF-39007 (Fixed power, established reliability)
MIL-PRF-55342 (Surface-mount thick and thin film)
MIL-PRF-914
MIL-R-11 (standard canceled)
MIL-R-39017 (Fixed, General Purpose, Established Reliability)
MIL-PRF-32159 (zero ohm jumpers)
UL 1412 (fusing and temperature limited resistors)
There are other United States military procurement MIL-R- standards.
Resistance standards
The primary standard for resistance, the "mercury ohm", was initially defined in 1884 as a column of mercury 106.3 cm long and 1 mm² in cross-section, at 0 °C. Difficulties in precisely measuring the physical constants to replicate this standard resulted in variations of as much as 30 ppm. From 1900 the mercury ohm was replaced with a precision machined plate of manganin. Since 1990 the international resistance standard has been based on the quantized Hall effect discovered by Klaus von Klitzing, for which he won the Nobel Prize in Physics in 1985.
Resistors of extremely high precision are manufactured for calibration and laboratory use. They may have four terminals, using one pair to carry an operating current and the other pair to measure the voltage drop; this eliminates errors caused by voltage drops across the lead resistances, because negligible current flows through the voltage-sensing leads. This is important for small-value resistors (100 Ω down to 0.0001 Ω), where the lead resistance is significant or even comparable to the standard's nominal value.
Resistor marking
Axial resistor cases are usually tan, brown, blue, or green (though other colors are occasionally found as well, such as dark red or dark gray), and display three to six colored stripes that indicate the resistance value and tolerance, and may include bands for the temperature coefficient and reliability class. In four-striped resistors, the first two stripes represent the first two digits of the resistance in ohms, the third represents a multiplier, and the fourth the tolerance (which, if absent, denotes ±20%); a small decoding sketch follows below. For five- and six-striped resistors the third band is the third digit, the fourth is the multiplier and the fifth is the tolerance; a sixth stripe represents the temperature coefficient. The power rating of the resistor is usually not marked and is deduced from its size.
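A minimal sketch of how a four-band code is read, using the standard colour-to-digit assignments of IEC 60062 (the example part at the end is chosen only for illustration):
```python
# Decode a standard four-band resistor colour code (digit, digit, multiplier, tolerance).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
MULTIPLIERS = dict(DIGITS, gold=-1, silver=-2)          # exponent of ten
TOLERANCES = {"brown": 1.0, "red": 2.0, "gold": 5.0, "silver": 10.0, None: 20.0}

def decode_four_band(band1, band2, multiplier, tolerance=None):
    """Return (resistance in ohms, tolerance in percent) for a four-band code."""
    value = (10 * DIGITS[band1] + DIGITS[band2]) * 10 ** MULTIPLIERS[multiplier]
    return value, TOLERANCES[tolerance]

# yellow-violet-red-gold -> 4.7 kOhm, +/-5 %
print(decode_four_band("yellow", "violet", "red", "gold"))   # (4700, 5.0)
```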
Surface-mount resistors are marked numerically.
Early 20th century resistors, essentially uninsulated, were dipped in paint to cover their entire body for color-coding. This base color represented the first digit. A second color of paint was applied to one end of the element to represent a second digit, and a color dot (or band) in the middle provided the third digit. The rule was "body, tip, dot", providing two significant digits for value and the decimal multiplier, in that sequence. Default tolerance was ±20%. Closer-tolerance resistors had silver (±10%) or gold-colored (±5%) paint on the other end.
Preferred values
Early resistors were made in more or less arbitrary round numbers; a series might have 100, 125, 150, 200, 300, etc. Early power wirewound resistors, such as brown vitreous-enameled types, were made with a system of preferred values like some of those mentioned here. Resistors as manufactured are subject to a certain percentage tolerance, and it makes sense to manufacture values that correlate with the tolerance, so that the actual value of a resistor overlaps slightly with its neighbors. Wider spacing leaves gaps; narrower spacing increases manufacturing and inventory costs to provide resistors that are more or less interchangeable.
A logical scheme is to produce resistors in a range of values which increase in a geometric progression, so that each value is greater than its predecessor by a fixed multiplier or percentage, chosen to match the tolerance of the range. For example, for a tolerance of ±20% it makes sense to have each resistor about 1.5 times its predecessor, covering a decade in 6 values. More precisely, the factor used is the sixth root of ten, 10^(1/6) ≈ 1.4678, giving values of 1.47, 2.15, 3.16, 4.64, 6.81, 10 for the 1–10 decade (a decade is a range increasing by a factor of 10; 0.1–1 and 10–100 are other examples); these are rounded in practice to 1.5, 2.2, 3.3, 4.7, 6.8, 10; followed by 15, 22, 33, ... and preceded by ... 0.47, 0.68, 1. This scheme has been adopted as the E6 series of the IEC 60063 preferred number values. There are also E12, E24, E48, E96 and E192 series for components of progressively finer resolution, with 12, 24, 48, 96, and 192 different values within each decade. The actual values used are in the IEC 60063 lists of preferred numbers.
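A short sketch of this geometric construction follows; note that simple rounding of the formula reproduces the finer series well but not every value of the coarser, historically adjusted series such as E6, E12 and E24, so the standardized IEC 60063 tables remain the reference.
```python
def e_series(n, decade=1.0):
    """Approximate one decade of the n-value IEC 60063 series by geometric spacing.

    Values are rounded to two significant figures for n <= 24 and three otherwise,
    which reproduces most, but not all, of the standardized values.
    """
    sig_figs = 2 if n <= 24 else 3
    values = []
    for i in range(n):
        v = 10 ** (i / n)                 # geometric progression, factor 10^(1/n)
        values.append(round(v, sig_figs - 1) * decade)
    return values

print(e_series(6))   # [1.0, 1.5, 2.2, 3.2, 4.6, 6.8] -- the standardized E6 series
                     # instead lists 3.3 and 4.7 at the third and fourth positions
```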
A resistor of 100 ohms ±20% would be expected to have a value between 80 and 120 ohms; its E6 neighbors are 68 (54–82) and 150 (120–180) ohms. This sensible spacing means that E6 is used for ±20% components, E12 for ±10%, E24 for ±5%, E48 for ±2%, E96 for ±1%, and E192 for ±0.5% or better. Resistors are manufactured in values from a few milliohms to about a gigaohm in IEC 60063 ranges appropriate for their tolerance. Manufacturers may sort resistors into tolerance classes based on measurement. Accordingly, a batch of 100-ohm resistors with a tolerance of ±10% might not cluster around 100 ohms (within 10%) as one would expect from a bell curve, but rather fall into two groups, either 5 to 10% too high or 5 to 10% too low, and not closer to 100 ohms than that, because any resistors the factory had measured as being less than 5% off would already have been marked and sold as ±5% or better parts. When designing a circuit, this may become a consideration. This process of sorting parts based on post-production measurement is known as "binning", and can be applied to other components than resistors (such as speed grades for CPUs).
SMT resistors
Surface-mounted resistors of larger sizes (metric 1608 and above) are printed with numerical values in a code related to that used on axial resistors. Standard-tolerance surface-mount technology (SMT) resistors are marked with a three-digit code, in which the first two digits are the first two significant digits of the value and the third digit is the power of ten (the number of zeroes); a small decoding sketch follows the marked examples below. For example:
334 = 33 × 10⁴ Ω = 330 kΩ
222 = 22 × 10² Ω = 2.2 kΩ
473 = 47 × 10³ Ω = 47 kΩ
105 = 10 × 10⁵ Ω = 1 MΩ
Resistances less than 100 Ω are written: 100, 220, 470. The final zero represents ten to the power zero, which is 1. For example:
100 = 10 × 10⁰ Ω = 10 Ω
220 = 22 × 10⁰ Ω = 22 Ω
Sometimes these values are marked as 10 or 22 to prevent a mistake.
Resistances less than 10 Ω have 'R' to indicate the position of the decimal point (radix point). For example:
4R7 = 4.7 Ω
R300 = 0.30 Ω
0R22 = 0.22 Ω
0R01 = 0.01 Ω
000 and 0000 sometimes appear as values on surface-mount zero-ohm links, since these have (approximately) zero resistance.
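The marking rules above can be captured in a small decoding sketch; it covers the three-digit codes, the 'R' decimal-point notation, and zero-ohm links, and the same significand-plus-exponent split also happens to read the four-digit precision codes described in the next subsection.
```python
def decode_smd_marking(code):
    """Decode a standard-tolerance SMD resistor marking into ohms.

    Handles digit codes (significant digits followed by a power of ten),
    the 'R' decimal-point notation, and zero-ohm links marked 0, 000 or 0000.
    """
    if set(code) == {"0"}:                  # zero-ohm link: "0", "000", "0000"
        return 0.0
    if "R" in code:                         # 'R' marks the decimal point: 4R7 -> 4.7
        return float(code.replace("R", "."))
    significand = int(code[:-1])            # leading digits are the significant figures
    exponent = int(code[-1])                # last digit is the power of ten
    return significand * 10 ** exponent

for marking in ("334", "222", "473", "105", "100", "220", "4R7", "R300", "0R22", "000"):
    print(f"{marking:>5} -> {decode_smd_marking(marking)} ohms")
```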
More recent surface-mount resistors are too small, physically, to permit practical markings to be applied.
Precision resistor markings
Many precision resistors, including surface mount and axial-lead types, are marked with a four-digit code. The first three digits are the significant figures and the fourth is the power of ten. For example:
1001 = 100 × 10¹ Ω = 1.00 kΩ
4992 = 499 × 10² Ω = 49.9 kΩ
1000 = 100 × 10⁰ Ω = 100 Ω
Axial-lead precision resistors often use color code bands to represent this four-digit code.
EIA-96 marking
The former EIA-96 marking system, now included in IEC 60062:2016, is a more compact marking system intended for physically small high-precision resistors. It uses a two-digit code plus a letter (a total of three alphanumeric characters) to indicate 1% resistance values to three significant digits. The two digits (from "01" to "96") are a code that indicates one of the 96 "positions" in the standard E96 series of 1% resistor values. The uppercase letter is a code that indicates a power-of-ten multiplier. For example, the marking "01C" represents 10 kΩ; "10C" represents 12.4 kΩ; "96C" represents 97.6 kΩ.
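The lookup can be sketched as follows; the multiplier letters shown are the commonly tabulated assignments (not quoted from this article) and the E96 significands are generated from the geometric formula, so the table should be checked against IEC 60062:2016 before relying on it.
```python
# Sketch of EIA-96 decoding. The two digits index the E96 series of significands;
# the letter is a power-of-ten multiplier (assumed mapping shown below).
E96 = [round(10 ** (i / 96), 2) for i in range(96)]       # 1.00, 1.02, ..., 9.76
MULTIPLIER = {"Z": 0.001, "Y": 0.01, "R": 0.01, "X": 0.1, "S": 0.1,
              "A": 1, "B": 10, "H": 10, "C": 100, "D": 1_000,
              "E": 10_000, "F": 100_000}

def decode_eia96(code):
    """Decode a marking such as '01C' into a resistance in ohms."""
    position = int(code[:2])                    # "01".."96"
    significand = E96[position - 1]             # e.g. position 10 -> 1.24
    return round(significand * 100) * MULTIPLIER[code[2]]   # 3 sig. digits x multiplier

for code in ("01C", "10C", "96C"):
    print(code, "->", decode_eia96(code), "ohms")   # 10000, 12400, 97600
```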
Industrial type designation
Steps to find the resistance or capacitance value:
The first two letters give the power dissipation capacity.
The next three digits give the resistance value:
The first two digits are the significant figures.
The third digit is the multiplier.
The final digit gives the tolerance.
If a resistor is coded:
EB1041: power dissipation capacity = 1/2 watt, resistance value = 10 × 10⁴ Ω ±10% = between 90,000 ohms and 110,000 ohms.
CB3932: power dissipation capacity = 1/4 watt, resistance value = 39 × 10³ Ω ±20% = between 31,200 ohms and 46,800 ohms.
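A partial decoder for this designation, using only the mappings given in the two examples above (EB = 1/2 W, CB = 1/4 W; tolerance digit 1 = ±10%, 2 = ±20%); real parts use a larger letter and tolerance table, so this is a sketch rather than a complete implementation.
```python
# Partial decoder for the industrial type designation described above.
POWER = {"EB": 0.5, "CB": 0.25}            # watts; other letter pairs omitted
TOLERANCE = {"1": 10.0, "2": 20.0}         # percent; other digits omitted

def decode_industrial(code):
    """Decode e.g. 'EB1041' -> (power in W, nominal ohms, tolerance in %)."""
    letters, digits = code[:2], code[2:]
    ohms = int(digits[:2]) * 10 ** int(digits[2])   # two significant digits, then multiplier
    return POWER[letters], ohms, TOLERANCE[digits[3]]

print(decode_industrial("EB1041"))   # (0.5, 100000, 10.0)
print(decode_industrial("CB3932"))   # (0.25, 39000, 20.0)
```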
Common usage patterns
There are several recurring patterns in which resistors are commonly configured.
Current limiting
Resistors are commonly used to limit the amount of current flowing through a circuit. Many circuit components (such as LEDs) require the current flowing through them to be limited, but do not themselves limit it, so resistors are often added to prevent overcurrent. Additionally, circuits often do not need all of the current that would otherwise flow through them, so resistors can be added to limit their power consumption.
Voltage divider
Circuits often need to provide reference voltages for other circuits (such as voltage comparators). A fixed voltage can be obtained by placing two resistors in series between two other fixed voltages (such as the supply voltage and ground). The terminal between the two resistors sits at a voltage between those two voltages, determined linearly by the ratio of the two resistances. For instance, if a 200 ohm resistor (connected to the 6 V supply) and a 400 ohm resistor (connected to 0 V) are placed in series between 6 V and 0 V, the terminal between them will be at 4 V.
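Written out for the example above (with the 200 Ω resistor on the 6 V side, which is the arrangement that makes the arithmetic come out to 4 V): V_out = V_high × R_lower / (R_upper + R_lower) = 6 V × 400 / (200 + 400) = 4 V.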
Pull-down and pull-up resistors
When a circuit node is not driven, its voltage is not zero but undefined (it can be influenced by previous voltages or the environment). A pull-up or pull-down resistor provides a defined voltage for a circuit when it is otherwise disconnected (such as when a button is not pushed down or a transistor is not active). A pull-up resistor connects the circuit to a high positive voltage (if the circuit requires a high default voltage) and a pull-down resistor connects the circuit to a low voltage or ground (if the circuit requires a low default voltage). The resistor value must be high enough that, when the circuit is active, the attached voltage source does not unduly influence the circuit's operation, but low enough that it "pulls" the node quickly enough when the circuit is deactivated and does not significantly alter the voltage from the intended value.
Electrical and thermal noise
In amplifying faint signals, it is often necessary to minimize electronic noise, particularly in the first stage of amplification. As a dissipative element, even an ideal resistor naturally produces a randomly fluctuating voltage, or noise, across its terminals. This Johnson–Nyquist noise is a fundamental noise source which depends only upon the temperature and resistance of the resistor, and is predicted by the fluctuation–dissipation theorem. Using a larger value of resistance produces a larger voltage noise, whereas a smaller value of resistance generates more current noise, at a given temperature.
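The size of this effect follows from the standard Johnson–Nyquist expression: over a measurement bandwidth Δf the RMS noise voltage is v_n = √(4 k_B T R Δf). As a worked example (the resistance, temperature and bandwidth are chosen here only for illustration), a 1 kΩ resistor at 300 K observed over a 10 kHz bandwidth contributes roughly √(4 × 1.38×10⁻²³ × 300 × 1000 × 10⁴) ≈ 0.41 μV RMS.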
The thermal noise of a practical resistor may also be larger than the theoretical prediction, and that increase is typically frequency-dependent. Excess noise of a practical resistor is observed only when current flows through it. This is specified in units of μV/V/decade – μV of noise per volt applied across the resistor per decade of frequency. The μV/V/decade value is frequently given in dB, so that a resistor with a noise index of 0 dB exhibits 1 μV (rms) of excess noise for each volt across the resistor in each frequency decade. Excess noise is thus an example of 1/f noise. Thick-film and carbon composition resistors generate more excess noise than other types at low frequencies. Wire-wound and thin-film resistors are often used for their better noise characteristics. Carbon composition resistors can exhibit a noise index of 0 dB while bulk metal foil resistors may have a noise index of −40 dB, usually making the excess noise of metal foil resistors insignificant. Thin-film surface-mount resistors typically have lower noise and better thermal stability than thick-film surface-mount resistors. Excess noise is also size-dependent: in general, excess noise is reduced as the physical size of a resistor is increased (or multiple resistors are used in parallel), as the independently fluctuating resistances of smaller components tend to average out.
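Written out, the noise index in decibels is commonly defined as NI = 20 log₁₀(v_excess [μV per decade] / V_DC [V]), so a noise index of 0 dB corresponds to 1 μV/V/decade and −40 dB to 0.01 μV/V/decade, consistent with the figures quoted above.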
While not an example of "noise" per se, a resistor may act as a thermocouple, producing a small DC voltage differential across it due to the thermoelectric effect if its ends are at different temperatures. This induced DC voltage can degrade the precision of instrumentation amplifiers in particular. Such voltages appear in the junctions of the resistor leads with the circuit board and with the resistor body. Common metal film resistors show such an effect at a magnitude of about 20 μV/°C. Some carbon composition resistors can exhibit thermoelectric offsets as high as 400 μV/°C, whereas specially constructed resistors can reduce this number to 0.05 μV/°C. In applications where the thermoelectric effect may become important, care has to be taken to mount the resistors horizontally to avoid temperature gradients and to mind the air flow over the board.
Failure modes
The failure rate of resistors in a properly designed circuit is low compared to other electronic components such as semiconductors and electrolytic capacitors. Damage to a resistor most often occurs from overheating, when the average power delivered to it greatly exceeds its ability to dissipate heat (specified by the resistor's power rating). This may be due to a fault external to the circuit but is frequently caused by the failure of another component (such as a transistor that shorts out) in the circuit connected to the resistor. Operating a resistor too close to its power rating can limit the resistor's lifespan or cause a significant change in its resistance. A safe design generally uses overrated resistors in power applications to avoid this danger.
Low-power thin-film resistors can be damaged by long-term high-voltage stress, even below maximum specified voltage and below maximum power rating. This is often the case for the startup resistors feeding a switched-mode power supply integrated circuit.
When overheated, carbon-film resistors may decrease or increase in resistance.
Carbon film and composition resistors can fail (open circuit) if running close to their maximum dissipation. This is also possible but less likely with metal film and wirewound resistors.
There can also be failure of resistors due to mechanical stress and adverse environmental factors including humidity. If not enclosed, wirewound resistors can corrode.
Surface mount resistors have been known to fail due to the ingress of sulfur into the internal makeup of the resistor. This sulfur chemically reacts with the silver layer to produce non-conductive silver sulfide. The resistor's impedance goes to infinity. Sulfur resistant and anti-corrosive resistors are sold into automotive, industrial, and military applications. ASTM B809 is an industry standard that tests a part's susceptibility to sulfur.
An alternative failure mode can be encountered where large-value resistors are used (hundreds of kilohms and higher). Resistors are not only specified with a maximum power dissipation, but also with a maximum voltage drop. Exceeding this voltage causes the resistor to degrade slowly, reducing in resistance. The voltage dropped across large-value resistors can be exceeded before the power dissipation reaches its limiting value. Since the maximum voltage specified for commonly encountered resistors is a few hundred volts, this is a problem only in applications where these voltages are encountered.
Variable resistors can also degrade in a different manner, typically involving poor contact between the wiper and the body of the resistance. This may be due to dirt or corrosion and is typically perceived as "crackling" as the contact resistance fluctuates; this is especially noticed as the device is adjusted. This is similar to crackling caused by poor contact in switches, and like switches, potentiometers are to some extent self-cleaning: running the wiper across the resistance may improve the contact. Potentiometers which are seldom adjusted, especially in dirty or harsh environments, are most likely to develop this problem. When self-cleaning of the contact is insufficient, improvement can usually be obtained through the use of contact cleaner (also known as "tuner cleaner") spray. The crackling noise associated with turning the shaft of a dirty potentiometer in an audio circuit (such as the volume control) is greatly accentuated when an undesired DC voltage is present, often indicating the failure of a DC blocking capacitor in the circuit.
| Technology | Components | null |
25756 | https://en.wikipedia.org/wiki/Repetitive%20strain%20injury | Repetitive strain injury | A repetitive strain injury (RSI) is an injury to part of the musculoskeletal or nervous system caused by repetitive use, vibrations, compression or long periods in a fixed position. Other common names include repetitive stress injury, repetitive stress disorders, cumulative trauma disorders (CTDs), and overuse syndrome.
Signs and symptoms
Some examples of symptoms experienced by patients with RSI are aching, pulsing pain, tingling and extremity weakness, initially presenting with intermittent discomfort and then with a higher degree of frequency.
Definition
Repetitive strain injury (RSI) and associative trauma orders are umbrella terms used to refer to several discrete conditions that can be associated with repetitive tasks, forceful exertions, vibrations, mechanical compression, sustained or awkward positions, or repetitive eccentric contractions. The exact terminology is controversial, but the terms now used by the United States Department of Labor and the National Institute of Occupational Safety and Health (NIOSH) are musculoskeletal disorders (MSDs) and work-related musculoskeletal disorders (WMDs).
Examples of conditions that may sometimes be attributed to such causes include tendinosis (or less often tendinitis), carpal tunnel syndrome, cubital tunnel syndrome, De Quervain syndrome, thoracic outlet syndrome, intersection syndrome, golfer's elbow (medial epicondylitis), tennis elbow (lateral epicondylitis), trigger finger (so-called stenosing tenosynovitis), radial tunnel syndrome, ulnar tunnel syndrome, and focal dystonia.
A general worldwide increase since the 1970s in RSIs of the arms, hands, neck, and shoulder has been attributed to the widespread use in the workplace of keyboard entry devices, such as typewriters and computers, which require long periods of repetitive motions in a fixed posture. Extreme temperatures have also been reported as a risk factor for RSI.
Risk factors
Occupational risk factors
Workers in certain fields are at risk of repetitive strains. Most occupational injuries are musculoskeletal disorders, and many of these are caused by cumulative trauma rather than a single event. Miners and poultry workers, for example, must make repeated motions which can cause tendon, muscular, and skeletal injuries. Jobs that involve repeated motion patterns or prolonged posture within a work cycle, or both, may be repetitive. Young athletes are predisposed to RSIs due to an underdeveloped musculoskeletal system.
Psychosocial factors
Psychosocial risk factors range from personality differences to workplace organization problems. Certain workers may negatively perceive their work organization due to excessive work rate, long work hours, limited job control, and low social support. Previous studies have shown elevated urinary catecholamines (stress-related chemicals) in workers with RSI. Pain related to RSI may evolve into chronic pain syndrome, particularly for workers who do not have support from co-workers and supervisors.
Non-occupational factors
Age and gender are important risk factors for RSIs. The risk of RSI increases with age. Women are more likely to be affected than men because of their smaller frame, lower muscle mass and strength, and endocrine influences. In addition, lifestyle choices such as smoking and alcohol consumption are recognized risk factors for RSI. Recent scientific findings indicate that obesity and diabetes may predispose an individual to RSIs by creating a chronic low-grade inflammatory response that prevents the body from effectively healing damaged tissues.
Diagnosis
RSIs are assessed using a number of objective clinical measures. These include effort-based tests such as grip and pinch strength, diagnostic tests such as Finkelstein's test for De Quervain's tendinitis, Phalen's contortion, Tinel's percussion for carpal tunnel syndrome, and nerve conduction velocity tests that show nerve compression in the wrist. Various imaging techniques can also be used to show nerve compression such as x-ray for the wrist, and MRI for the thoracic outlet and cervico-brachial areas. Utilization of routine imaging is useful in early detection and treatment of overuse injuries in at risk populations, which is important in preventing long term adverse effects.
Treatment
There are no quick fixes for repetitive strain injuries. Early diagnosis is critical to limiting damage. For upper limb RSIs, occupational therapists can create interventions that include teaching the correct approaches to functional task movements in order to minimize the risk of injury. The RICE (Rest, Ice, Compression, Elevation) treatment is used as the first treatment for many muscle strains, ligament sprains, or other bruises and injuries. RICE is used immediately after an injury happens and for the first 24 to 48 hours after the injury. These modalities can help reduce the swelling and pain. Commonly prescribed treatments for early-stage RSIs include analgesics, myofeedback, biofeedback, physical therapy, relaxation, and ultrasound therapy. Low-grade RSIs can sometimes resolve themselves if treatments begin shortly after the onset of symptoms. However, some RSIs may require more aggressive intervention including surgery and can persist for years.
Although there are no "quick fixes" for RSI, there are effective approaches to its treatment and prevention. One is that of ergonomics, the changing of one's environment (especially workplace equipment) to minimize repetitive strain.
A 2006 Canadian study found exercise in leisure time was strongly associated with decreased risk of developing an RSI. Doctors sometimes recommend that those with RSI engage in specific strengthening exercises, for example to improve sitting posture, reduce excessive kyphosis, and potentially thoracic outlet syndrome. Modifications of posture and arm use are often recommended.
History
Although seemingly a modern phenomenon, RSIs have long been documented in the medical literature. In 1700, the Italian physician Bernardino Ramazzini first described RSI in more than 20 categories of industrial workers in Italy, including musicians and clerks. Carpal tunnel syndrome was first identified by the British surgeon James Paget in 1854. The April 1875 issue of The Graphic describes "telegraphic paralysis."
The Swiss surgeon Fritz de Quervain first identified De Quervain's tendinitis in Swiss factory workers in 1895. The French neurologist Jules Tinel (1879–1952) developed his percussion test for compression of the median nerve in 1900. The American surgeon George Phalen improved the understanding of the aetiology of carpal tunnel syndrome with his clinical experience of several hundred patients during the 1950s and 1960s.
Society
Specific sources of discomfort have been popularly referred to by terms such as Blackberry thumb, PlayStation thumb, Rubik's wrist or "cuber's thumb", stylus finger, and raver's wrist, and Emacs pinky.
| Biology and health sciences | Types | Health |
25758 | https://en.wikipedia.org/wiki/RNA | RNA | Ribonucleic acid (RNA) is a polymeric molecule that is essential for most biological functions, either by performing the function itself (non-coding RNA) or by forming a template for the production of proteins (messenger RNA). RNA and deoxyribonucleic acid (DNA) are nucleic acids. The nucleic acids constitute one of the four major macromolecules essential for all known forms of life. RNA is assembled as a chain of nucleotides. Cellular organisms use messenger RNA (mRNA) to convey genetic information (using the nitrogenous bases of guanine, uracil, adenine, and cytosine, denoted by the letters G, U, A, and C) that directs synthesis of specific proteins. Many viruses encode their genetic information using an RNA genome.
Some RNA molecules play an active role within cells by catalyzing biological reactions, controlling gene expression, or sensing and communicating responses to cellular signals. One of these active processes is protein synthesis, a universal function in which RNA molecules direct the synthesis of proteins on ribosomes. This process uses transfer RNA (tRNA) molecules to deliver amino acids to the ribosome, where ribosomal RNA (rRNA) then links amino acids together to form coded proteins.
It has become widely accepted in science that early in the history of life on Earth, prior to the evolution of DNA and possibly of protein-based enzymes as well, an "RNA world" existed in which RNA served as both living organisms' storage method for genetic information—a role fulfilled today by DNA, except in the case of RNA viruses—and potentially performed catalytic functions in cells—a function performed today by protein enzymes, with the notable and important exception of the ribosome, which is a ribozyme.
Chemical structure of RNA
Basic chemical composition
Each nucleotide in RNA contains a ribose sugar, with carbons numbered 1' through 5'. A base is attached to the 1' position, in general, adenine (A), cytosine (C), guanine (G), or uracil (U). Adenine and guanine are purines, and cytosine and uracil are pyrimidines. A phosphate group is attached to the 3' position of one ribose and the 5' position of the next. The phosphate groups each have a negative charge, making RNA a charged molecule (polyanion). The bases form hydrogen bonds between cytosine and guanine, between adenine and uracil and between guanine and uracil. However, other interactions are possible, such as a group of adenine bases binding to each other in a bulge, or the GNRA tetraloop that has a guanine–adenine base-pair.
Differences between DNA and RNA
The chemical structure of RNA is very similar to that of DNA, but differs in three primary ways:
Unlike double-stranded DNA, RNA is usually a single-stranded molecule (ssRNA) in many of its biological roles and consists of much shorter chains of nucleotides. However, double-stranded RNA (dsRNA) can form and (moreover) a single RNA molecule can, by complementary base pairing, form intrastrand double helixes, as in tRNA.
While the sugar-phosphate "backbone" of DNA contains deoxyribose, RNA contains ribose instead. Ribose has a hydroxyl group attached to the pentose ring in the 2' position, whereas deoxyribose does not. The hydroxyl groups in the ribose backbone make RNA more chemically labile than DNA by lowering the activation energy of hydrolysis.
The complementary base to adenine in DNA is thymine, whereas in RNA, it is uracil, which is an unmethylated form of thymine.
Like DNA, most biologically active RNAs, including mRNA, tRNA, rRNA, snRNAs, and other non-coding RNAs, contain self-complementary sequences that allow parts of the RNA to fold and pair with itself to form double helices. Analysis of these RNAs has revealed that they are highly structured. Unlike DNA, their structures do not consist of long double helices, but rather collections of short helices packed together into structures akin to proteins.
In this fashion, RNAs can achieve chemical catalysis (like enzymes). For instance, determination of the structure of the ribosome—an RNA-protein complex that catalyzes the assembly of proteins—revealed that its active site is composed entirely of RNA.
An important structural component of RNA that distinguishes it from DNA is the presence of a hydroxyl group at the 2' position of the ribose sugar. The presence of this functional group causes the helix to mostly take the A-form geometry, although in single strand dinucleotide contexts, RNA can rarely also adopt the B-form most commonly observed in DNA. The A-form geometry results in a very deep and narrow major groove and a shallow and wide minor groove. A second consequence of the presence of the 2'-hydroxyl group is that in conformationally flexible regions of an RNA molecule (that is, not involved in formation of a double helix), it can chemically attack the adjacent phosphodiester bond to cleave the backbone.
Secondary and tertiary structures
The functional form of single-stranded RNA molecules, just like proteins, frequently requires a specific spatial tertiary structure. The scaffold for this structure is provided by secondary structural elements that are hydrogen bonds within the molecule. This leads to several recognizable "domains" of secondary structure like hairpin loops, bulges, and internal loops. In order to create, i.e., design, RNA for any given secondary structure, two or three bases would not be enough, but four bases are enough. This is likely why nature has "chosen" a four base alphabet: fewer than four would not allow the creation of all structures, while more than four bases are not necessary to do so. Since RNA is charged, metal ions such as Mg2+ are needed to stabilise many secondary and tertiary structures.
The naturally occurring enantiomer of RNA is D-RNA composed of D-ribonucleotides. All chirality centers are located in the D-ribose. By the use of L-ribose or rather L-ribonucleotides, L-RNA can be synthesized. L-RNA is much more stable against degradation by RNase.
Like other structured biopolymers such as proteins, one can define topology of a folded RNA molecule. This is often done based on arrangement of intra-chain contacts within a folded RNA, termed as circuit topology.
Chemical modifications
RNA is transcribed with only four bases (adenine, cytosine, guanine and uracil), but these bases and attached sugars can be modified in numerous ways as the RNAs mature. Pseudouridine (Ψ), in which the linkage between uracil and ribose is changed from a C–N bond to a C–C bond, and ribothymidine (T) are found in various places (the most notable ones being in the TΨC loop of tRNA). Another notable modified base is hypoxanthine, a deaminated adenine base whose nucleoside is called inosine (I). Inosine plays a key role in the wobble hypothesis of the genetic code.
There are more than 100 other naturally occurring modified nucleosides. The greatest structural diversity of modifications can be found in tRNA, while pseudouridine and nucleosides with 2'-O-methylribose often present in rRNA are the most common. The specific roles of many of these modifications in RNA are not fully understood. However, it is notable that, in ribosomal RNA, many of the post-transcriptional modifications occur in highly functional regions, such as the peptidyl transferase center and the subunit interface, implying that they are important for normal function.
Types of RNA
Messenger RNA (mRNA) is the type of RNA that carries information from DNA to the ribosome, the sites of protein synthesis (translation) in the cell cytoplasm. The coding sequence of the mRNA determines the amino acid sequence in the protein that is produced. However, many RNAs do not code for protein (about 97% of the transcriptional output is non-protein-coding in eukaryotes).
These so-called non-coding RNAs ("ncRNA") can be encoded by their own genes (RNA genes), but can also derive from mRNA introns. The most prominent examples of non-coding RNAs are transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. There are also non-coding RNAs involved in gene regulation, RNA processing and other roles. Certain RNAs are able to catalyse chemical reactions such as cutting and ligating other RNA molecules, and the catalysis of peptide bond formation in the ribosome; these are known as ribozymes.
According to the length of RNA chain, RNA includes small RNA and long RNA. Usually, small RNAs are shorter than 200 nt in length, and long RNAs are greater than 200 nt long. Long RNAs, also called large RNAs, mainly include long non-coding RNA (lncRNA) and mRNA. Small RNAs mainly include 5.8S ribosomal RNA (rRNA), 5S rRNA, transfer RNA (tRNA), microRNA (miRNA), small interfering RNA (siRNA), small nucleolar RNA (snoRNAs), Piwi-interacting RNA (piRNA), tRNA-derived small RNA (tsRNA) and small rDNA-derived RNA (srRNA).
There are certain exceptions as in the case of the 5S rRNA of the members of the genus Halococcus (Archaea), which have an insertion, thus increasing its size.
RNAs involved in protein synthesis
Messenger RNA (mRNA) carries information about a protein sequence to the ribosomes, the protein synthesis factories in the cell. It is coded so that every three nucleotides (a codon) corresponds to one amino acid. In eukaryotic cells, once precursor mRNA (pre-mRNA) has been transcribed from DNA, it is processed to mature mRNA. This removes its introns—non-coding sections of the pre-mRNA. The mRNA is then exported from the nucleus to the cytoplasm, where it is bound to ribosomes and translated into its corresponding protein form with the help of tRNA. In prokaryotic cells, which do not have nucleus and cytoplasm compartments, mRNA can bind to ribosomes while it is being transcribed from DNA. After a certain amount of time, the message degrades into its component nucleotides with the assistance of ribonucleases.
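As a toy illustration of the codon-to-amino-acid mapping (the table below lists only a handful of the 64 codons, and the example sequence is invented; real translation uses the full genetic code and additional machinery):
```python
# Minimal sketch of translating an mRNA coding sequence, three nucleotides at a time.
CODON_TABLE = {
    "AUG": "Met",   # also the start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala", "UGG": "Trp",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Read codons from the first AUG until a stop codon, returning amino acids."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")   # unknown codons flagged
        if residue == "Stop":
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGGCGCUUGGUAA"))   # ['Met', 'Phe', 'Gly', 'Ala', 'Trp']
```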
Transfer RNA (tRNA) is a small RNA chain of about 80 nucleotides that transfers a specific amino acid to a growing polypeptide chain at the ribosomal site of protein synthesis during translation. It has sites for amino acid attachment and an anticodon region for codon recognition that binds to a specific sequence on the messenger RNA chain through hydrogen bonding.
Ribosomal RNA (rRNA) is the catalytic component of the ribosomes. The rRNA is the component of the ribosome that hosts translation. Eukaryotic ribosomes contain four different rRNA molecules: 18S, 5.8S, 28S and 5S rRNA. Three of the rRNA molecules are synthesized in the nucleolus, and one is synthesized elsewhere. In the cytoplasm, ribosomal RNA and protein combine to form a nucleoprotein called a ribosome. The ribosome binds mRNA and carries out protein synthesis. Several ribosomes may be attached to a single mRNA at any time. Nearly all the RNA found in a typical eukaryotic cell is rRNA.
Transfer-messenger RNA (tmRNA) is found in many bacteria and plastids. It tags proteins encoded by mRNAs that lack stop codons for degradation and prevents the ribosome from stalling.
Regulatory RNA
The earliest known regulators of gene expression were proteins known as repressors and activators – regulators with specific short binding sites within enhancer regions near the genes to be regulated. Later studies have shown that RNAs also regulate genes. There are several kinds of RNA-dependent processes in eukaryotes regulating the expression of genes at various points, such as RNAi repressing genes post-transcriptionally, long non-coding RNAs shutting down blocks of chromatin epigenetically, and enhancer RNAs inducing increased gene expression. Bacteria and archaea have also been shown to use regulatory RNA systems such as bacterial small RNAs and CRISPR. Fire and Mello were awarded the 2006 Nobel Prize in Physiology or Medicine for discovering microRNAs (miRNAs), specific short RNA molecules that can base-pair with mRNAs.
MicroRNA (miRNA) and small interfering RNA (siRNA)
Post-transcriptional expression levels of many genes can be controlled by RNA interference, in which miRNAs, specific short RNA molecules, pair with mRNA regions and target them for degradation. This antisense-based process involves steps that first process the RNA so that it can base-pair with a region of its target mRNAs. Once the base pairing occurs, other proteins direct the mRNA to be destroyed by nucleases.
Long non-coding RNAs
Next to be linked to regulation were Xist and other long noncoding RNAs associated with X chromosome inactivation. Their roles, at first mysterious, were shown by Jeannie T. Lee and others to be the silencing of blocks of chromatin via recruitment of Polycomb complex so that messenger RNA could not be transcribed from them. Additional lncRNAs, currently defined as RNAs of more than 200 base pairs that do not appear to have coding potential, have been found associated with regulation of stem cell pluripotency and cell division.
Enhancer RNAs
The third major group of regulatory RNAs is called enhancer RNAs. It is not clear at present whether they are a unique category of RNAs of various lengths or constitute a distinct subset of lncRNAs. In any case, they are transcribed from enhancers, which are known regulatory sites in the DNA near genes they regulate. They up-regulate the transcription of the gene(s) under control of the enhancer from which they are transcribed.
Small RNA in prokaryotes
Small RNA
At first, regulatory RNA was thought to be a eukaryotic phenomenon, a part of the explanation for why so much more transcription in higher organisms was seen than had been predicted. But as soon as researchers began to look for possible RNA regulators in bacteria, they turned up there as well, termed small RNAs (sRNA). Currently, the ubiquitous nature of RNA-based gene regulation systems has been discussed as support for the RNA world theory. There are indications that the enterobacterial sRNAs are involved in various cellular processes and seem to play a significant role in stress responses such as membrane stress, starvation stress, phosphosugar stress and DNA damage. It has also been suggested that sRNAs evolved to play an important role in stress responses because of their kinetic properties, which allow for rapid response and stabilisation of the physiological state. Bacterial small RNAs generally act via antisense pairing with mRNA to down-regulate its translation, either by affecting stability or affecting cis-binding ability. Riboswitches have also been discovered. They are cis-acting regulatory RNA sequences acting allosterically. They change shape when they bind metabolites, so that they gain or lose the ability to bind chromatin to regulate expression of genes.
CRISPR RNA
Archaea also have systems of regulatory RNA. The CRISPR system, recently being used to edit DNA in situ, acts via regulatory RNAs in archaea and bacteria to provide protection against virus invaders.
RNA synthesis and processing
Synthesis
Synthesis of RNA typically occurs in the cell nucleus and is usually catalyzed by an enzyme—RNA polymerase—using DNA as a template, a process known as transcription. Initiation of transcription begins with the binding of the enzyme to a promoter sequence in the DNA (usually found "upstream" of a gene). The DNA double helix is unwound by the helicase activity of the enzyme. The enzyme then progresses along the template strand in the 3’ to 5’ direction, synthesizing a complementary RNA molecule with elongation occurring in the 5’ to 3’ direction. The DNA sequence also dictates where termination of RNA synthesis will occur.
Primary transcript RNAs are often modified by enzymes after transcription. For example, a poly(A) tail and a 5' cap are added to eukaryotic pre-mRNA and introns are removed by the spliceosome.
There are also a number of RNA-dependent RNA polymerases that use RNA as their template for synthesis of a new strand of RNA. For instance, a number of RNA viruses (such as poliovirus) use this type of enzyme to replicate their genetic material. Also, RNA-dependent RNA polymerase is part of the RNA interference pathway in many organisms.
RNA processing
Many RNAs are involved in modifying other RNAs.
Introns are spliced out of pre-mRNA by spliceosomes, which contain several small nuclear RNAs (snRNA), or the introns can be ribozymes that are spliced by themselves.
RNA can also be altered by having its nucleotides modified to nucleotides other than A, C, G and U.
In eukaryotes, modifications of RNA nucleotides are in general directed by small nucleolar RNAs (snoRNA; 60–300 nt), found in the nucleolus and Cajal bodies. snoRNAs associate with enzymes and guide them to a spot on an RNA by base-pairing to that RNA. These enzymes then perform the nucleotide modification. rRNAs and tRNAs are extensively modified, but snRNAs and mRNAs can also be the target of base modification. RNA can also be methylated.
RNA in genetics
RNA genomes
Like DNA, RNA can carry genetic information. RNA viruses have genomes composed of RNA that encodes a number of proteins. The viral genome is replicated by some of those proteins, while other proteins protect the genome as the virus particle moves to a new host cell. Viroids are another group of pathogens, but they consist only of RNA, do not encode any protein and are replicated by a host plant cell's polymerase.
Reverse transcription
Reverse transcribing viruses replicate their genomes by reverse transcribing DNA copies from their RNA; these DNA copies are then transcribed to new RNA. Retrotransposons also spread by copying DNA and RNA from one another, and telomerase contains an RNA that is used as template for building the ends of eukaryotic chromosomes.
Double-stranded RNA
Double-stranded RNA (dsRNA) is RNA with two complementary strands, similar to the DNA found in all cells, but with the replacement of thymine by uracil and the adding of one oxygen atom. dsRNA forms the genetic material of some viruses (double-stranded RNA viruses). Double-stranded RNA, such as viral RNA or siRNA, can trigger RNA interference in eukaryotes, as well as interferon response in vertebrates. In eukaryotes, double-stranded RNA (dsRNA) plays a role in the activation of the innate immune system against viral infections.
Circular RNA
In the late 1970s, it was shown that there is a single-stranded, covalently closed, i.e. circular, form of RNA expressed throughout the animal and plant kingdoms (see circRNA). circRNAs are thought to arise via a "back-splice" reaction where the spliceosome joins an upstream 3' acceptor to a downstream 5' donor splice site. So far the function of circRNAs is largely unknown, although for a few examples a microRNA sponging activity has been demonstrated.
Key discoveries in RNA biology
Research on RNA has led to many important biological discoveries and numerous Nobel Prizes. Nucleic acids were discovered in 1868 by Friedrich Miescher, who called the material 'nuclein' since it was found in the nucleus. It was later discovered that prokaryotic cells, which do not have a nucleus, also contain nucleic acids. The role of RNA in protein synthesis was suspected already in 1939. Severo Ochoa won the 1959 Nobel Prize in Medicine (shared with Arthur Kornberg) after he discovered an enzyme that can synthesize RNA in the laboratory. However, the enzyme discovered by Ochoa (polynucleotide phosphorylase) was later shown to be responsible for RNA degradation, not RNA synthesis. In 1956 Alex Rich and David Davies hybridized two separate strands of RNA to form the first crystal of RNA whose structure could be determined by X-ray crystallography.
The sequence of the 77 nucleotides of a yeast tRNA was found by Robert W. Holley in 1965, winning Holley the 1968 Nobel Prize in Medicine (shared with Har Gobind Khorana and Marshall Nirenberg).
In the early 1970s, retroviruses and reverse transcriptase were discovered, showing for the first time that enzymes could copy RNA into DNA (the opposite of the usual route for transmission of genetic information). For this work, David Baltimore, Renato Dulbecco and Howard Temin were awarded a Nobel Prize in 1975.
In 1976, Walter Fiers and his team determined the first complete nucleotide sequence of an RNA virus genome, that of bacteriophage MS2.
In 1977, introns and RNA splicing were discovered in both mammalian viruses and in cellular genes, resulting in a 1993 Nobel to Philip Sharp and Richard Roberts.
Catalytic RNA molecules (ribozymes) were discovered in the early 1980s, leading to a 1989 Nobel award to Thomas Cech and Sidney Altman. In 1990, it was found in Petunia that introduced genes can silence similar genes of the plant's own, now known to be a result of RNA interference.
At about the same time, 22 nt long RNAs, now called microRNAs, were found to have a role in the development of C. elegans.
Studies on RNA interference earned a Nobel Prize for Andrew Fire and Craig Mello in 2006, and another Nobel for studies on the transcription of RNA to Roger Kornberg in the same year. The discovery of gene regulatory RNAs has led to attempts to develop drugs made of RNA, such as siRNA, to silence genes. Adding to the Nobel prizes for research on RNA, in 2009 it was awarded for the elucidation of the atomic structure of the ribosome to Venki Ramakrishnan, Thomas A. Steitz, and Ada Yonath. In 2023 the Nobel Prize in Physiology or Medicine was awarded to Katalin Karikó and Drew Weissman for their discoveries concerning modified nucleosides that enabled the development of effective mRNA vaccines against COVID-19.
Relevance for prebiotic chemistry and abiogenesis
In 1968, Carl Woese hypothesized that RNA might be catalytic and suggested that the earliest forms of life (self-replicating molecules) could have relied on RNA both to carry genetic information and to catalyze biochemical reactions—an RNA world. In May 2022, scientists discovered that RNA can form spontaneously on prebiotic basalt lava glass, presumed to have been abundant on the early Earth.
In March 2015, DNA and RNA nucleobases, including uracil, cytosine and thymine, were reportedly formed in the laboratory under outer space conditions, using starter chemicals such as pyrimidine, an organic compound commonly found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), is one of the most carbon-rich compounds found in the universe and may have been formed in red giants or in interstellar dust and gas clouds. In July 2022, astronomers reported massive amounts of prebiotic molecules, including possible RNA precursors, in the galactic center of the Milky Way Galaxy.
Medical applications
RNA, initially deemed unsuitable for therapeutics due to its short half-life, has been made useful through advances in stabilization. Therapeutic applications arise as RNA folds into complex conformations and binds proteins, nucleic acids, and small molecules to form catalytic centers. RNA-based vaccines are thought to be easier to produce than traditional vaccines derived from killed or altered pathogens, because it can take months or years to grow and study a pathogen and determine which molecular parts to extract, inactivate, and use in a vaccine. Small molecules with conventional therapeutic properties can target RNA and DNA structures, thereby treating novel diseases. However, research on small molecules that target RNA is scarce, and few such drugs are approved for human illness. Ribavirin, branaplam, and ataluren are currently available medications that stabilize double-stranded RNA structures and control splicing in a variety of disorders.
Protein-coding mRNAs have emerged as new therapeutic candidates, with RNA replacement being particularly beneficial for brief but torrential protein expression. In vitro transcribed mRNAs (IVT-mRNA) have been used to deliver proteins for bone regeneration, pluripotency, and heart function in animal models. SiRNAs, short RNA molecules, play a crucial role in innate defense against viruses and chromatin structure. They can be artificially introduced to silence specific genes, making them valuable for gene function studies, therapeutic target validation, and drug development.
mRNA vaccines have emerged as an important new class of vaccines, using mRNA to manufacture proteins which provoke an immune response. Their first successful large-scale application came in the form of COVID-19 vaccines during the COVID-19 pandemic.
| Biology and health sciences | Chemistry | null |
25766 | https://en.wikipedia.org/wiki/Ribosome | Ribosome | Ribosomes () are macromolecular machines, found within all cells, that perform biological protein synthesis (messenger RNA translation). Ribosomes link amino acids together in the order specified by the codons of messenger RNA molecules to form polypeptide chains. Ribosomes consist of two major components: the small and large ribosomal subunits. Each subunit consists of one or more ribosomal RNA molecules and many ribosomal proteins (). The ribosomes and associated molecules are also known as the translational apparatus.
Overview
The sequence of DNA that encodes the sequence of the amino acids in a protein is transcribed into a messenger RNA (mRNA) chain. Ribosomes bind to the messenger RNA molecules and use the RNA's sequence of nucleotides to determine the sequence of amino acids needed to generate a protein. Amino acids are selected and carried to the ribosome by transfer RNA (tRNA) molecules, which enter the ribosome and bind to the messenger RNA chain via an anticodon stem loop. For each coding triplet (codon) in the messenger RNA, there is a unique transfer RNA that must have the exact anti-codon match, and carries the correct amino acid for incorporating into a growing polypeptide chain. Once the protein is produced, it can then fold to produce a functional three-dimensional structure.
A ribosome is made from complexes of RNAs and proteins and is therefore a ribonucleoprotein complex. In prokaryotes each ribosome is composed of small (30S) and large (50S) components, called subunits, which are bound to each other:
(30S) has mainly a decoding function and is also bound to the mRNA
(50S) has mainly a catalytic function and is also bound to the aminoacylated tRNAs.
The synthesis of proteins from their building blocks takes place in four phases: initiation, elongation, termination, and recycling. The start codon in all mRNA molecules has the sequence AUG. The stop codon is one of UAA, UAG, or UGA; since there are no tRNA molecules that recognize these codons, the ribosome recognizes that translation is complete. When a ribosome finishes reading an mRNA molecule, the two subunits separate and are usually broken up but can be reused. Ribosomes are a kind of enzyme, called ribozymes because the catalytic peptidyl transferase activity that links amino acids together is performed by the ribosomal RNA.
In eukaryotic cells, ribosomes are often associated with the intracellular membranes that make up the rough endoplasmic reticulum.
Ribosomes from bacteria, archaea, and eukaryotes (in the three-domain system) resemble each other to a remarkable degree, evidence of a common origin. They differ in their size, sequence, structure, and the ratio of protein to RNA. The differences in structure allow some antibiotics to kill bacteria by inhibiting their ribosomes while leaving human ribosomes unaffected. In all species, more than one ribosome may move along a single mRNA chain at one time (as a polysome), each "reading" a specific sequence and producing a corresponding protein molecule.
The mitochondrial ribosomes of eukaryotic cells are distinct from their other ribosomes. They functionally resemble those in bacteria, reflecting the evolutionary origin of mitochondria as endosymbiotic bacteria.
Discovery
Ribosomes were first observed in the mid-1950s by Romanian-American cell biologist George Emil Palade, using an electron microscope, as dense particles or granules. They were initially called Palade granules due to their granular structure. The term "ribosome" was proposed in 1958 by Howard M. Dintzis.
Albert Claude, Christian de Duve, and George Emil Palade were jointly awarded the Nobel Prize in Physiology or Medicine, in 1974, for the discovery of the ribosome. The Nobel Prize in Chemistry 2009 was awarded to Venkatraman Ramakrishnan, Thomas A. Steitz and Ada E. Yonath for determining the detailed structure and mechanism of the ribosome.
Structure
The ribosome is a complex cellular machine. It is largely made up of specialized RNA known as ribosomal RNA (rRNA) as well as dozens of distinct proteins (the exact number varies slightly between species). The ribosomal proteins and rRNAs are arranged into two distinct ribosomal pieces of different sizes, known generally as the large and small subunits of the ribosome. Ribosomes consist of two subunits that fit together and work as one to translate the mRNA into a polypeptide chain during protein synthesis. Because they are formed from two subunits of non-equal size, they are slightly longer on the axis than in diameter.
Prokaryotic ribosomes
Prokaryotic ribosomes are around 20 nm (200 Å) in diameter and are composed of 65% rRNA and 35% ribosomal proteins. Eukaryotic ribosomes are between 25 and 30 nm (250–300 Å) in diameter with an rRNA-to-protein ratio that is close to 1. Crystallographic work has shown that there are no ribosomal proteins close to the reaction site for polypeptide synthesis. This suggests that the protein components of ribosomes do not directly participate in peptide bond formation catalysis, but rather that these proteins act as a scaffold that may enhance the ability of rRNA to synthesize protein (see: Ribozyme).
The ribosomal subunits of prokaryotes and eukaryotes are quite similar.
The unit of measurement used to describe the ribosomal subunits and the rRNA fragments is the Svedberg unit, a measure of the rate of sedimentation in centrifugation rather than size. This accounts for why fragment names do not add up: for example, bacterial 70S ribosomes are made of 50S and 30S subunits.
Prokaryotes have 70S ribosomes, each consisting of a small (30S) and a large (50S) subunit. E. coli, for example, has a 16S RNA subunit (consisting of 1540 nucleotides) that is bound to 21 proteins. The large subunit is composed of a 5S RNA subunit (120 nucleotides), a 23S RNA subunit (2900 nucleotides) and 31 proteins.
Ribosome of E. coli (a bacterium):
ribosome   subunit   rRNAs           r-proteins
70S        50S       23S (2904 nt)   31
                     5S (120 nt)
           30S       16S (1542 nt)   21
Affinity labeling of the tRNA-binding sites on the E. coli ribosome allowed the identification of A- and P-site proteins most likely associated with the peptidyl transferase activity; the labelled proteins were L27, L14, L15, L16, and L2; at least L27 is located at the donor site, as shown by E. Collatz and A.P. Czernilofsky. Additional research has demonstrated that the S1 and S21 proteins, in association with the 3′-end of 16S ribosomal RNA, are involved in the initiation of translation.
Archaeal ribosomes
Archaeal ribosomes share the same general dimensions as bacterial ones, being a 70S ribosome made up of a 50S large subunit and a 30S small subunit, and containing three rRNA chains. However, at the sequence level, they are much closer to eukaryotic ribosomes than to bacterial ones. Every extra ribosomal protein that archaea have compared to bacteria has a eukaryotic counterpart, while no such relation applies between archaea and bacteria.
Eukaryotic ribosomes
Eukaryotes have 80S ribosomes located in their cytosol, each consisting of a small (40S) and a large (60S) subunit. Their 40S subunit has an 18S RNA (1900 nucleotides) and 33 proteins. The large subunit is composed of a 5S RNA (120 nucleotides), a 28S RNA (4700 nucleotides), a 5.8S RNA (160 nucleotides), and 49 proteins.
Eukaryotic cytosolic ribosomes (R. norvegicus):
ribosome   subunit   rRNAs           r-proteins
80S        60S       28S (4718 nt)   49
                     5.8S (160 nt)
                     5S (120 nt)
           40S       18S (1874 nt)   33
In 1977, Czernilofsky published research that used affinity labeling to identify tRNA-binding sites on rat liver ribosomes. Several proteins, including L32/33, L36, L21, L23, L28/29, and L13, were implicated as being at or near the peptidyl transferase center.
Plastoribosomes and mitoribosomes
In eukaryotes, ribosomes are present in mitochondria (sometimes called mitoribosomes) and in plastids such as chloroplasts (also called plastoribosomes). They also consist of large and small subunits bound together with proteins into one 70S particle. These ribosomes are similar to those of bacteria, and these organelles are thought to have originated as symbiotic bacteria. Of the two, chloroplast ribosomes are closer to bacterial ones than mitochondrial ones are. Many pieces of ribosomal RNA in the mitochondria are shortened, and in the case of 5S rRNA, replaced by other structures in animals and fungi. In particular, Leishmania tarentolae has a minimized set of mitochondrial rRNA. In contrast, plant mitoribosomes have both extended rRNA and additional proteins compared to bacteria, in particular many pentatricopeptide repeat proteins.
The cryptomonad and chlorarachniophyte algae may contain a nucleomorph that resembles a vestigial eukaryotic nucleus. Eukaryotic 80S ribosomes may be present in the compartment containing the nucleomorph.
Making use of the differences
The differences between the bacterial and eukaryotic ribosomes are exploited by pharmaceutical chemists to create antibiotics that can destroy a bacterial infection without harming the cells of the infected person. Due to the differences in their structures, the bacterial 70S ribosomes are vulnerable to these antibiotics while the eukaryotic 80S ribosomes are not. Even though mitochondria possess ribosomes similar to the bacterial ones, mitochondria are not affected by these antibiotics because they are surrounded by a double membrane that does not easily admit these antibiotics into the organelle. A noteworthy counterexample is the antibiotic chloramphenicol, which inhibits both bacterial 50S subunits and mitochondrial large subunits in eukaryotes. Ribosomes in chloroplasts, however, are different: antibiotic resistance in chloroplast ribosomal proteins is a trait that has to be introduced as a marker through genetic engineering.
Common properties
The various ribosomes share a core structure, which is quite similar despite the large differences in size. Much of the RNA is highly organized into various tertiary structural motifs, for example pseudoknots that exhibit coaxial stacking. The extra RNA in the larger ribosomes is in several long continuous insertions, such that they form loops out of the core structure without disrupting or changing it. All of the catalytic activity of the ribosome is carried out by the RNA; the proteins reside on the surface and seem to stabilize the structure.
High-resolution structure
The general molecular structure of the ribosome has been known since the early 1970s. In the early 2000s, structures were determined at high resolution, on the order of a few ångströms.
The first papers giving the structure of the ribosome at atomic resolution were published almost simultaneously in late 2000. The 50S (large prokaryotic) subunit was determined from the archaeon Haloarcula marismortui and the bacterium Deinococcus radiodurans, and the structure of the 30S subunit was determined from the bacterium Thermus thermophilus. These structural studies were awarded the Nobel Prize in Chemistry in 2009. In May 2001 these coordinates were used to reconstruct the entire T. thermophilus 70S particle at 5.5 Å resolution.
Two papers were published in November 2005 with structures of the Escherichia coli 70S ribosome. The structures of a vacant ribosome were determined at 3.5 Å resolution using X-ray crystallography. Then, two weeks later, a structure based on cryo-electron microscopy was published, which depicts the ribosome at 11–15 Å resolution in the act of passing a newly synthesized protein strand into the protein-conducting channel.
The first atomic structures of the ribosome complexed with tRNA and mRNA molecules were solved by using X-ray crystallography by two groups independently, at 2.8 Å and at 3.7 Å. These structures allow one to see the details of interactions of the Thermus thermophilus ribosome with mRNA and with tRNAs bound at classical ribosomal sites. Interactions of the ribosome with long mRNAs containing Shine-Dalgarno sequences were visualized soon after that at 4.5–5.5 Å resolution.
In 2011, the first complete atomic structure of the eukaryotic 80S ribosome from the yeast Saccharomyces cerevisiae was obtained by crystallography. The model reveals the architecture of eukaryote-specific elements and their interaction with the universally conserved core. At the same time, the complete model of a eukaryotic 40S ribosomal structure in Tetrahymena thermophila was published and described the structure of the 40S subunit, as well as much about the 40S subunit's interaction with eIF1 during translation initiation. Similarly, the eukaryotic 60S subunit structure was also determined from Tetrahymena thermophila in complex with eIF6.
Function
Ribosomes are minute particles consisting of RNA and associated proteins that function to synthesize proteins. Proteins are needed for many cellular functions, such as repairing damage or directing chemical processes. Ribosomes can be found floating within the cytoplasm or attached to the endoplasmic reticulum. Their main function is to convert genetic code into an amino acid sequence and to build protein polymers from amino acid monomers.
Ribosomes act as catalysts in two extremely important biological processes: peptidyl transfer and peptidyl hydrolysis. The peptidyl transferase (PT) center is responsible for forming peptide bonds during protein elongation.
In summary, ribosomes have two main functions: Decoding the message, and the formation of peptide bonds. These two functions reside in the ribosomal subunits. Each subunit is made of one or more rRNAs and many r-proteins. The small subunit (30S in bacteria and archaea, 40S in eukaryotes) has the decoding function, whereas the large subunit (50S in bacteria and archaea, 60S in eukaryotes) catalyzes the formation of peptide bonds, referred to as the peptidyl-transferase activity. The bacterial (and archaeal) small subunit contains the 16S rRNA and 21 r-proteins (Escherichia coli), whereas the eukaryotic small subunit contains the 18S rRNA and 32 r-proteins (Saccharomyces cerevisiae, although the numbers vary between species). The bacterial large subunit contains the 5S and 23S rRNAs and 34 r-proteins (E. coli), with the eukaryotic large subunit containing the 5S, 5.8S, and 25S/28S rRNAs and 46 r-proteins (S. cerevisiae; again, the exact numbers vary between species).
Translation
Ribosomes are the workplaces of protein biosynthesis, the process of translating mRNA into protein. The mRNA comprises a series of codons which are decoded by the ribosome to make the protein. Using the mRNA as a template, the ribosome traverses each codon (3 nucleotides) of the mRNA, pairing it with the appropriate amino acid provided by an aminoacyl-tRNA. Aminoacyl-tRNA contains a complementary anticodon on one end and the appropriate amino acid on the other. For fast and accurate recognition of the appropriate tRNA, the ribosome utilizes large conformational changes (conformational proofreading). The small ribosomal subunit, typically bound to an aminoacyl-tRNA containing the first amino acid methionine, binds to an AUG codon on the mRNA and recruits the large ribosomal subunit. The ribosome contains three RNA binding sites, designated A, P, and E. The A-site binds an aminoacyl-tRNA or termination release factors; the P-site binds a peptidyl-tRNA (a tRNA bound to the poly-peptide chain); and the E-site (exit) binds a free tRNA. Protein synthesis begins at a start codon AUG near the 5' end of the mRNA. mRNA binds to the P site of the ribosome first. The ribosome recognizes the start codon by using the Shine-Dalgarno sequence of the mRNA in prokaryotes and Kozak box in eukaryotes.
Although catalysis of the peptide bond involves the 2′-hydroxyl of the P-site tRNA's terminal adenosine in a proton shuttle mechanism, other steps in protein synthesis (such as translocation) are caused by changes in protein conformations. Since their catalytic core is made of RNA, ribosomes are classified as "ribozymes", and it is thought that they might be remnants of the RNA world.
In Figure 5, both ribosomal subunits (small and large) assemble at the start codon (towards the 5' end of the mRNA). The ribosome uses tRNA that matches the current codon (triplet) on the mRNA to append an amino acid to the polypeptide chain. This is done for each triplet on the mRNA, while the ribosome moves towards the 3' end of the mRNA. Usually in bacterial cells, several ribosomes are working parallel on a single mRNA, forming what is called a polyribosome or polysome.
Cotranslational folding
The ribosome is known to actively participate in protein folding. The structures obtained in this way are usually identical to those obtained during chemical refolding of the protein; however, the pathways leading to the final product may be different. In some cases, the ribosome is crucial in obtaining the functional protein form. For example, one of the possible mechanisms of folding of deeply knotted proteins relies on the ribosome pushing the chain through the attached loop.
Addition of translation-independent amino acids
The presence of the ribosome quality control protein Rqc2 is associated with mRNA-independent protein elongation. This elongation is a result of ribosomal addition (via tRNAs brought by Rqc2) of CAT tails: ribosomes extend the C-terminus of a stalled protein with random, translation-independent sequences of alanines and threonines.
Ribosome locations
Ribosomes are classified as being either "free" or "membrane-bound".
Free and membrane-bound ribosomes differ only in their spatial distribution; they are identical in structure. Whether the ribosome exists in a free or membrane-bound state depends on the presence of an ER-targeting signal sequence on the protein being synthesized, so an individual ribosome might be membrane-bound when it is making one protein, but free in the cytosol when it makes another protein.
Ribosomes are sometimes referred to as organelles, but the use of the term organelle is often restricted to describing sub-cellular components that include a phospholipid membrane, which ribosomes, being entirely particulate, do not. For this reason, ribosomes may sometimes be described as "non-membranous organelles".
Free ribosomes
Free ribosomes can move about anywhere in the cytosol, but are excluded from the cell nucleus and other organelles. Proteins that are formed from free ribosomes are released into the cytosol and used within the cell. Since the cytosol contains high concentrations of glutathione and is, therefore, a reducing environment, proteins containing disulfide bonds, which are formed from oxidized cysteine residues, cannot be produced within it.
Membrane-bound ribosomes
When a ribosome begins to synthesize proteins that are needed in some organelles, the ribosome making this protein can become "membrane-bound". In eukaryotic cells this happens in a region of the endoplasmic reticulum (ER) called the "rough ER". The newly produced polypeptide chains are inserted directly into the ER by the ribosome undertaking vectorial synthesis and are then transported to their destinations, through the secretory pathway. Bound ribosomes usually produce proteins that are used within the plasma membrane or are expelled from the cell via exocytosis.
Biogenesis
In bacterial cells, ribosomes are synthesized in the cytoplasm through the transcription of multiple ribosome gene operons. In eukaryotes, the process takes place both in the cell cytoplasm and in the nucleolus, which is a region within the cell nucleus. The assembly process involves the coordinated function of over 200 proteins in the synthesis and processing of the four rRNAs, as well as assembly of those rRNAs with the ribosomal proteins.
Origin
The ribosome may have first originated as a protoribosome, possibly containing a peptidyl transferase centre (PTC), in an RNA world, appearing as a self-replicating complex that only later evolved the ability to synthesize proteins when amino acids began to appear. Studies suggest that ancient ribosomes constructed solely of rRNA could have developed the ability to synthesize peptide bonds. In addition, evidence strongly points to ancient ribosomes as self-replicating complexes, where the rRNA in the ribosomes had informational, structural, and catalytic purposes because it could have coded for tRNAs and proteins needed for ribosomal self-replication. Hypothetical cellular organisms with self-replicating RNA but without DNA are called ribocytes (or ribocells).
As amino acids gradually appeared in the RNA world under prebiotic conditions, their interactions with catalytic RNA would increase both the range and efficiency of function of catalytic RNA molecules. Thus, the driving force for the evolution of the ribosome from an ancient self-replicating machine into its current form as a translational machine may have been the selective pressure to incorporate proteins into the ribosome's self-replicating mechanisms, so as to increase its capacity for self-replication.
Heterogeneous ribosomes
Ribosomes are compositionally heterogeneous between species and even within the same cell, as evidenced by the existence of cytoplasmic and mitochondrial ribosomes within the same eukaryotic cells. Certain researchers have suggested that heterogeneity in the composition of ribosomal proteins in mammals is important for gene regulation, i.e., the specialized ribosome hypothesis. However, this hypothesis is controversial and the topic of ongoing research.
Heterogeneity in ribosome composition was first proposed to be involved in translational control of protein synthesis by Vince Mauro and Gerald Edelman. They proposed the ribosome filter hypothesis to explain the regulatory functions of ribosomes. Evidence has suggested that specialized ribosomes specific to different cell populations may affect how genes are translated. Some ribosomal proteins exchange from the assembled complex with cytosolic copies, suggesting that the structure of the in vivo ribosome can be modified without synthesizing an entire new ribosome.
Certain ribosomal proteins are absolutely critical for cellular life while others are not. In budding yeast, 14 of 78 ribosomal proteins are non-essential for growth, while in humans this depends on the cell under study. Other forms of heterogeneity include post-translational modifications to ribosomal proteins such as acetylation, methylation, and phosphorylation. Viral internal ribosome entry sites (IRESs) may mediate translation by compositionally distinct ribosomes. For example, 40S ribosomal units without eS25 in yeast and mammalian cells are unable to recruit the CrPV IGR IRES.
Heterogeneity of ribosomal RNA modifications plays a significant role in structural maintenance and/or function, and most rRNA modifications are found in highly conserved regions. The most common rRNA modifications are pseudouridylation and 2′-O-methylation of ribose.
| Biology and health sciences | Organelles and other cell parts | null |
25767 | https://en.wikipedia.org/wiki/Real-time%20computing | Real-time computing | Real-time computing (RTC) is the computer science term for hardware and software systems subject to a "real-time constraint", for example from event to system response. Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".
The term "real-time" is also used in simulation to mean that the simulation's clock runs at the same speed as a real clock.
Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds. A system not specified as operating in real time cannot usually guarantee a response within any timeframe, although typical or expected response times may be given. Real-time processing fails if not completed within a specified deadline relative to an event; deadlines must always be met, regardless of system load.
A real-time system has been described as one which "controls an environment by receiving data, processing them, and returning the results sufficiently quickly to affect the environment at that time". The term "real-time" is used in process control and enterprise systems to mean "without significant delay".
Real-time software may use one or more of the following: synchronous programming languages, real-time operating systems (RTOSes), and real-time networks. Each of these provides an essential framework on which to build a real-time software application.
Systems used for many safety-critical applications must be real-time, such as for control of fly-by-wire aircraft, or anti-lock brakes, both of which demand immediate and accurate mechanical response.
History
The term real-time derives from its use in early simulation, where a real-world process is simulated at a rate which matched that of the real process (now called real-time simulation to avoid ambiguity). Analog computers, most often, were capable of simulating at a much faster pace than real-time, a situation that could be just as dangerous as a slow simulation if it were not also recognized and accounted for.
Minicomputers, particularly in the 1970s onwards, when built into dedicated embedded systems such as DOG (Digital on-screen graphic) scanners, increased the need for low-latency priority-driven responses to important interactions with incoming data. Operating systems such as Data General's RDOS (Real-Time Disk Operating System) and RTOS with background and foreground scheduling as well as Digital Equipment Corporation's RT-11 date from this era. Background-foreground scheduling allowed low priority tasks CPU time when no foreground task needed to execute, and gave absolute priority within the foreground to threads/tasks with the highest priority. Real-time operating systems would also be used for time-sharing multiuser duties. For example, Data General Business Basic could run in the foreground or background of RDOS and would introduce additional elements to the scheduling algorithm to make it more appropriate for people interacting via dumb terminals.
Early personal computers were sometimes used for real-time computing. The possibility of deactivating other interrupts allowed for hard-coded loops with defined timing, and the low interrupt latency allowed the implementation of a real-time operating system, giving the user interface and the disk drives lower priority than the real-time thread. Compared to these the programmable interrupt controller of the Intel CPUs (8086..80586) generates a very large latency and the Windows operating system is neither a real-time operating system nor does it allow a program to take over the CPU completely and use its own scheduler, without using native machine language and thus bypassing all interrupting Windows code. However, several coding libraries exist which offer real time capabilities in a high level language on a variety of operating systems, for example Java Real Time. Later microprocessors such as the Motorola 68000 and subsequent family members (68010, 68020, ColdFire etc.) also became popular with manufacturers of industrial control systems. This application area is one where real-time control offers genuine advantages in terms of process performance and safety.
Criteria for real-time computing
A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. Real-time systems, as well as their deadlines, are classified by the consequence of missing a deadline:
Hard: missing a deadline is a total system failure.
Firm: infrequent deadline misses are tolerable, but may degrade the system's quality of service. The usefulness of a result is zero after its deadline.
Soft: the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service.
Thus, the goal of a hard real-time system is to ensure that all deadlines are met, but for soft real-time systems the goal becomes meeting a certain subset of deadlines in order to optimize some application-specific criteria. The particular criteria optimized depend on the application, but some typical examples include maximizing the number of deadlines met, minimizing the lateness of tasks and maximizing the number of high priority tasks meeting their deadlines.
Hard real-time systems are used when it is imperative that an event be reacted to within a strict deadline. Such strong guarantees are required of systems for which not reacting in a certain interval of time would cause great loss in some manner, especially damaging the surroundings physically or threatening human lives (although the strict definition is simply that missing the deadline constitutes failure of the system). Some examples of hard real-time systems:
A car engine control system is a hard real-time system because a delayed signal may cause engine failure or damage.
Medical systems such as heart pacemakers. Even though a pacemaker's task is simple, because of the potential risk to human life, medical systems like these are typically required to undergo thorough testing and certification, which in turn requires hard real-time computing in order to offer provable guarantees that a failure is unlikely or impossible.
Industrial process controllers, such as a machine on an assembly line. If the machine is delayed, the item on the assembly line could pass beyond the reach of the machine (leaving the product untouched), or the machine or the product could be damaged by activating the robot at the wrong time. If the failure is detected, both cases would lead to the assembly line stopping, which slows production. If the failure is not detected, a product with a defect could make it through production, or could cause damage in later steps of production.
Hard real-time systems are typically found interacting at a low level with physical hardware, in embedded systems. Early video game systems such as the Atari 2600 and Cinematronics vector graphics had hard real-time requirements because of the nature of the graphics and timing hardware.
Softmodems replace a hardware modem with software running on a computer's CPU. The software must run every few milliseconds to generate the next audio data to be output. If that data is late, the receiving modem will lose synchronization, causing a long interruption as synchronization is reestablished or causing the connection to be lost entirely.
Many types of printers have hard real-time requirements, such as inkjets (the ink must be deposited at the correct time as the printhead crosses the page), laser printers (the laser must be activated at the right time as the beam scans across the rotating drum), and dot matrix and various types of line printers (the impact mechanism must be activated at the right time as the print mechanism comes into alignment with the desired output). A failure in any of these would cause either missing output or misaligned output.
In the context of multitasking systems, the scheduling policy is normally priority-driven (pre-emptive schedulers). In some situations, these can guarantee hard real-time performance (for instance, if the set of tasks and their priorities is known in advance). There are other hard real-time schedulers, such as rate-monotonic scheduling, which is not common in general-purpose systems, as it requires additional information in order to schedule a task: namely a bound or worst-case estimate for how long the task must execute. Specific algorithms for scheduling such hard real-time tasks exist, such as earliest deadline first, which, ignoring the overhead of context switching, is sufficient for system loads below 100%. New overlay scheduling systems, such as an adaptive partition scheduler, assist in managing large systems with a mixture of hard real-time and non-real-time applications.
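As a rough, illustrative sketch only (not any real RTOS API; the Task structure and pick_next helper are hypothetical names), earliest deadline first simply selects the ready task whose deadline is nearest:

  # Toy earliest-deadline-first (EDF) selection. A real scheduler would also
  # handle preemption, blocking, and worst-case execution-time accounting.
  Task = Struct.new(:name, :deadline_ms)

  def pick_next(ready_tasks)
    ready_tasks.min_by(&:deadline_ms)   # run the task whose deadline is soonest
  end

  ready = [Task.new("brake_control", 5), Task.new("telemetry", 50), Task.new("logging", 200)]
  puts pick_next(ready).name            # => brake_control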
Firm real-time systems are more nebulously defined, and some classifications do not include them, distinguishing only hard and soft real-time systems. Some examples of firm real-time systems:
The assembly line machine described earlier as hard real-time could instead be considered firm real-time. A missed deadline still causes an error which needs to be dealt with: there might be machinery to mark a part as bad or eject it from the assembly line, or the assembly line could be stopped so an operator can correct the problem. However, as long as these errors are infrequent, they may be tolerated.
Soft real-time systems are typically used to solve issues of concurrent access and the need to keep a number of connected systems up-to-date through changing situations. Some examples of soft real-time systems:
Software that maintains and updates the flight plans for commercial airliners. The flight plans must be kept reasonably current, but they can operate with a latency of a few seconds.
Live audio-video systems are also usually soft real-time. A frame of audio which is played late may cause a brief audio glitch (and may cause all subsequent audio to be delayed correspondingly, causing a perception that the audio is being played slower than normal), but this may be better than the alternatives of continuing to play silence, static, a previous audio frame, or estimated data. A frame of video that is delayed typically causes even less disruption for viewers. The system can continue to operate and also recover in the future using workload prediction and reconfiguration methodologies.
Similarly, video games are often soft real-time, particularly as they try to meet a target frame rate. As the next image cannot be computed in advance, since it depends on inputs from the player, only a short time is available to perform all the computing needed to generate a frame of video before that frame must be displayed. If the deadline is missed, the game can continue at a lower frame rate; depending on the game, this may only affect its graphics (while the gameplay continues at normal speed), or the gameplay itself may be slowed down (which was common on older third- and fourth-generation consoles).
Real-time in digital signal processing
In a real-time digital signal processing (DSP) process, the analyzed (input) and generated (output) samples can be processed (or generated) continuously in the time it takes to input and output the same set of samples, independent of the processing delay. This means that the processing delay must be bounded even if the processing continues for an unlimited time. The mean processing time per sample, including overhead, must be no greater than the sampling period, which is the reciprocal of the sampling rate. This criterion holds whether the samples are grouped together in large segments and processed as blocks or are processed individually, and whether there are long, short, or non-existent input and output buffers.
Consider an audio DSP example; if a process requires 2.01 seconds to analyze, synthesize, or process 2.00 seconds of sound, it is not real-time. However, if it takes 1.99 seconds, it is or can be made into a real-time DSP process.
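A small sketch of that criterion, with illustrative (not measured) numbers and an assumed 48 kHz audio stream:

  # Real-time DSP criterion: the mean processing time per sample must not
  # exceed the sampling period (the reciprocal of the sampling rate).
  sampling_rate_hz  = 48_000.0
  sampling_period_s = 1.0 / sampling_rate_hz                # ~20.8 microseconds

  # A process that needs 1.99 s to handle 2.00 s of sound, as in the example above:
  mean_time_per_sample_s = 1.99 / (2.00 * sampling_rate_hz)

  puts mean_time_per_sample_s <= sampling_period_s           # => true, real-time capable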
A common real-life analogy is standing in a line or queue waiting for the checkout in a grocery store. If the line asymptotically grows longer and longer without bound, the checkout process is not real-time. If the length of the line is bounded, so that customers are being "processed" and output as rapidly, on average, as they are being input, then that process is real-time. The grocer might go out of business, or must at least lose business, if they cannot make their checkout process real-time; thus, it is fundamentally important that this process is real-time.
A signal processing algorithm that cannot keep up with the flow of input data, with output falling further and further behind the input, is not real-time. If the delay of the output (relative to the input) is bounded for a process that operates over an unlimited time, then that signal processing algorithm is real-time, even if the throughput delay may be very long.
Live vs. real-time
Real-time signal processing is necessary, but not sufficient in and of itself, for live signal processing such as what is required in live event support. Live audio digital signal processing requires both real-time operation and a sufficient limit to throughput delay so as to be tolerable to performers using stage monitors or in-ear monitors and not noticeable as lip sync error by the audience also directly watching the performers. Tolerable limits to latency for live, real-time processing are a subject of investigation and debate, but are estimated to be between 6 and 20 milliseconds.
Real-time bidirectional telecommunications delays of less than 300 ms ("round trip" or twice the unidirectional delay) are considered "acceptable" to avoid undesired "talk-over" in conversation.
Real-time and high-performance
Real-time computing is sometimes misunderstood to be high-performance computing, but this is not an accurate classification. For example, a massive supercomputer executing a scientific simulation may offer impressive performance, yet it is not executing a real-time computation. Conversely, once the hardware and software for an anti-lock braking system have been designed to meet its required deadlines, no further performance gains are obligatory or even useful. Furthermore, if a network server is highly loaded with network traffic, its response time may be slower, but will (in most cases) still succeed before it times out (hits its deadline). Hence, such a network server would not be considered a real-time system: temporal failures (delays, time-outs, etc.) are typically small and compartmentalized (limited in effect), but are not catastrophic failures. In a real-time system, such as the FTSE 100 Index, a slow-down beyond limits would often be considered catastrophic in its application context. The most important requirement of a real-time system is consistent output, not high throughput.
Some kinds of software, such as many chess-playing programs, can fall into either category. For instance, a chess program designed to play in a tournament with a clock will need to decide on a move before a certain deadline or lose the game, and is therefore a real-time computation, but a chess program that is allowed to run indefinitely before moving is not. In both of these cases, however, high performance is desirable: the more work a tournament chess program can do in the allotted time, the better its moves will be, and the faster an unconstrained chess program runs, the sooner it will be able to move. This example also illustrates the essential difference between real-time computations and other computations: if the tournament chess program does not make a decision about its next move in its allotted time it loses the game—i.e., it fails as a real-time computation—while in the other scenario, meeting the deadline is assumed not to be necessary. High-performance is indicative of the amount of processing that is performed in a given amount of time, whereas real-time is the ability to get done with the processing to yield a useful output in the available time.
Near real-time
The term "near real-time" or "nearly real-time" (NRT), in telecommunications and computing, refers to the time delay introduced, by automated data processing or network transmission, between the occurrence of an event and the use of the processed data, such as for display or feedback and control purposes. For example, a near-real-time display depicts an event or situation as it existed at the current time minus the processing time, as nearly the time of the live event.
The distinction between the terms "near real time" and "real time" is somewhat nebulous and must be defined for the situation at hand. The term implies that there are no significant delays. In many cases, processing described as "real-time" would be more accurately described as "near real-time".
Near real-time also refers to delayed real-time transmission of voice and video. It allows playing video images, in approximately real-time, without having to wait for an entire large video file to download. Incompatible databases can export/import to common flat files that the other database can import/export on a scheduled basis so they can sync/share common data in "near real-time" with each other.
Design methods
Several methods exist to aid the design of real-time systems, an example of which is MASCOT, an old but very successful method that represents the concurrent structure of the system. Other examples are HOOD, Real-Time UML, AADL, the Ravenscar profile, and Real-Time Java.
| Technology | Computer science | null |
25768 | https://en.wikipedia.org/wiki/Ruby%20%28programming%20language%29 | Ruby (programming language) | Ruby is an interpreted, high-level, general-purpose programming language. It was designed with an emphasis on programming productivity and simplicity. In Ruby, everything is an object, including primitive data types. It was developed in the mid-1990s by Yukihiro "Matz" Matsumoto in Japan.
Ruby is dynamically typed and uses garbage collection and just-in-time compilation. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. According to the creator, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, BASIC, and Lisp.
History
Early concept
Matsumoto has said that Ruby was conceived in 1993. In a 1999 post to the ruby-talk mailing list, he describes some of his early ideas about the language:
Matsumoto describes the design of Ruby as being like a simple Lisp language at its core, with an object system like that of Smalltalk, blocks inspired by higher-order functions, and practical utility like that of Perl.
The name "Ruby" originated during an online chat session between Matsumoto and Keiju Ishitsuka on February 24, 1993, before any code had been written for the language. Initially two names were proposed: "Coral" and "Ruby". Matsumoto chose the latter in a later e-mail to Ishitsuka. Matsumoto later noted a factor in choosing the name "Ruby"–it was the birthstone of one of his colleagues.
Early releases
The first public release of Ruby 0.95 was announced on Japanese domestic newsgroups on December 21, 1995. Subsequently, three more versions of Ruby were released in two days. The release coincided with the launch of the Japanese-language ruby-list mailing list, which was the first mailing list for the new language.
Already present at this stage of development were many of the features familiar in later releases of Ruby, including object-oriented design, classes with inheritance, mixins, iterators, closures, exception handling and garbage collection.
After the release of Ruby 0.95 in 1995, several stable versions of Ruby were released in these years:
Ruby 1.0: December 25, 1996
Ruby 1.2: December 1998
Ruby 1.4: August 1999
Ruby 1.6: September 2000
In 1997, the first article about Ruby was published on the Web. In the same year, Matsumoto was hired by netlab.jp to work on Ruby as a full-time developer.
In 1998, the Ruby Application Archive was launched by Matsumoto, along with a simple English-language homepage for Ruby.
In 1999, the first English language mailing list ruby-talk began, which signaled a growing interest in the language outside Japan. In this same year, Matsumoto and Keiju Ishitsuka wrote the first book on Ruby, The Object-oriented Scripting Language Ruby (オブジェクト指向スクリプト言語 Ruby), which was published in Japan in October 1999. It would be followed in the early 2000s by around 20 books on Ruby published in Japanese.
By 2000, Ruby was more popular than Python in Japan. In September 2000, the first English language book Programming Ruby was printed, which was later freely released to the public, further widening the adoption of Ruby amongst English speakers. In early 2002, the English-language ruby-talk mailing list was receiving more messages than the Japanese-language ruby-list, demonstrating Ruby's increasing popularity in the non-Japanese speaking world.
Ruby 1.8 and 1.9
Ruby 1.8 was initially released in August 2003, was stable for a long time, and was retired in June 2013. Although deprecated, there is still code based on it. Ruby 1.8 is only partially compatible with Ruby 1.9.
Ruby 1.8 has been the subject of several industry standards. The language specifications for Ruby were developed by the Open Standards Promotion Center of the Information-Technology Promotion Agency (a Japanese government agency) for submission to the Japanese Industrial Standards Committee (JISC) and then to the International Organization for Standardization (ISO). It was accepted as a Japanese Industrial Standard (JIS X 3017) in 2011 and an international standard (ISO/IEC 30170) in 2012.
Around 2005, interest in the Ruby language surged in tandem with Ruby on Rails, a web framework written in Ruby. Rails is frequently credited with increasing awareness of Ruby.
Effective with Ruby 1.9.3, released October 31, 2011, Ruby switched from being dual-licensed under the Ruby License and the GPL to being dual-licensed under the Ruby License and the two-clause BSD license. Adoption of 1.9 was slowed by changes from 1.8 that required many popular third party gems to be rewritten. Ruby 1.9 introduces many significant changes over the 1.8 series. Examples include:
block local variables (variables that are local to the block in which they are declared)
an additional lambda syntax (shown in the sketch after this list)
an additional Hash literal syntax using colons for symbol keys (also shown in the sketch below)
per-string character encodings are supported
a new socket API (including IPv6 support)
require_relative import security
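A minimal sketch of three of the 1.9 additions listed above (block-local variables, the -> lambda syntax, and symbol-key Hash literals); the variable names are illustrative only:

  square = ->(x) { x * x }               # 1.9 "stabby" lambda, same as lambda { |x| x * x }
  puts square.call(4)                    # => 16

  options = { indent: 2, newline: true } # symbol-key literal, same as { :indent => 2, ... }
  puts options[:indent]                  # => 2

  y = 10
  [1, 2, 3].each { |x; y| y = x * 2 }    # the y after the semicolon is local to the block
  puts y                                 # => 10, the outer y is untouched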
Ruby 2
Ruby 2.0 was intended to be fully backward compatible with Ruby 1.9.3. As of the official 2.0.0 release on February 24, 2013, there were only five known (minor) incompatibilities. Ruby 2.0 added several new features, including:
Method keyword arguments (illustrated in the sketch after this list)
A new method, Module#prepend, to extend a class
A new literal to create an array of symbols
New API for lazy evaluation of Enumerables
A new convention of using #to_h to convert objects to Hashes
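A brief sketch of a few of these additions (keyword arguments, the %i symbol-array literal, and lazy Enumerables); the method and variable names are made up for illustration:

  def greet(name: "world", greeting: "Hello")   # keyword arguments (2.0)
    "#{greeting}, #{name}!"
  end
  puts greet(name: "Matz")                      # => Hello, Matz!

  puts %i[red green blue].inspect               # => [:red, :green, :blue]

  squares = (1..Float::INFINITY).lazy.map { |n| n * n }   # lazy evaluation (2.0)
  puts squares.first(3).inspect                 # => [1, 4, 9]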
Starting with 2.1.0, Ruby's versioning policy changed to be more similar to semantic versioning.
Ruby 2.2.0 includes speed-ups, bugfixes, and library updates and removes some deprecated APIs. Most notably, Ruby 2.2.0 introduces changes to memory handling: an incremental garbage collector, support for garbage collection of symbols, and the option to compile directly against jemalloc. It also contains experimental support for using vfork(2) with system() and spawn(), and added support for the Unicode 7.0 specification. Since version 2.2.1, Ruby MRI performance on PowerPC64 was improved. Features that were made obsolete or removed include callcc, the DL library, Digest::HMAC, lib/rational.rb, lib/complex.rb, GServer, Logger::Application, as well as various C API functions.
Ruby 2.3.0 includes many performance improvements, updates, and bugfixes including changes to Proc#call, Socket and IO use of exception keywords, Thread#name handling, default passive Net::FTP connections, and Rake being removed from stdlib. Other notable changes include:
The ability to mark all string literals as frozen by default with a consequently large performance increase in string operations.
Hash comparison to allow direct checking of key/value pairs instead of just keys.
A new safe navigation operator &. that can ease nil handling (e.g. instead of if obj && obj.foo && obj.foo.bar, we can use if obj&.foo&.bar).
The did_you_mean gem is now bundled by default and required on startup to automatically suggest similar name matches on a NameError or NoMethodError.
Hash#dig and Array#dig to easily extract deeply nested values (e.g. given profile = { social: { wikipedia: { name: 'Foo Baz' } } }, the value Foo Baz can now be retrieved by profile.dig(:social, :wikipedia, :name)).
Enumerable#grep_v(regexp), which returns the elements that do not match a given regular expression, in addition to other new features.
Ruby 2.4.0 includes performance improvements to the hash table, Array#max, Array#min, and instance variable access. Other notable changes include:
Binding#irb: Start a REPL session similar to binding.pry
Unify Fixnum and Bignum into Integer class
String supports Unicode case mappings, not just ASCII
A new method, Regexp#match?, which is a faster Boolean version of Regexp#match (see the sketch after this list)
Thread deadlock detection now shows threads with their backtrace and dependency
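A small sketch of a few of these 2.4 changes (the unified Integer class, Unicode case mapping, and Regexp#match?):

  puts (2 ** 100).class                 # => Integer (no more separate Bignum)
  puts 7.class                          # => Integer (no more separate Fixnum)

  puts "ÉCOLE".downcase                 # => école (Unicode case mapping, not just ASCII)

  # Regexp#match? returns a plain true/false without setting match globals,
  # which makes it faster when only a Boolean answer is needed.
  puts /rib/.match?("ribosome")         # => true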
A few notable changes in Ruby 2.5.0 include: rescue and ensure statements automatically using a surrounding do-end block (less need for extra begin-end blocks), method chaining with yield_self, support for branch coverage and method coverage measurement, and easier Hash transformations with Hash#slice and Hash#transform_keys. On top of that come many performance improvements, such as faster block passing (up to 3 times faster), faster Mutexes, faster ERB templates, and improvements to some concatenation methods.
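A short sketch of the Hash helpers and yield_self mentioned above; the config hash is a made-up example:

  config = { "host" => "example.org", "port" => 8080, "debug" => true }

  symbolized = config.transform_keys(&:to_sym)   # Hash#transform_keys (2.5): string keys become symbols
  puts symbolized.slice(:host, :port).inspect    # Hash#slice (2.5): only :host and :port remain

  puts 5.yield_self { |n| n * 2 }                # => 10; yield_self (2.5), aliased to #then in 2.6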
A few notable changes in Ruby 2.6.0 include an experimental just-in-time compiler (JIT), and RubyVM::AbstractSyntaxTree (experimental).
A few notable changes in Ruby 2.7.0 include pattern matching (experimental), REPL improvements, a compaction GC, and separation of positional and keyword arguments.
Ruby 3
Ruby 3.0.0 was released on Christmas Day in 2020. It is known as "Ruby 3x3", meaning that programs run three times faster in Ruby 3.0 compared to Ruby 2.0; some of these improvements had already been implemented in intermediate releases on the road from 2 to 3. To achieve 3x3, Ruby 3 comes with MJIT, and later YJIT, just-in-time compilers that make programs faster, although they are described as experimental and remain disabled by default (enabled by flags at runtime).
Another goal of Ruby 3.0 is to improve concurrency, and two additional utilities, the Fiber Scheduler and the experimental Ractor, facilitate this goal. Ractors are light-weight and thread-safe, as isolation is achieved by exchanging messages rather than sharing objects.
Ruby 3.0 introduces the RBS language to describe the types of Ruby programs for static analysis. It is separate from general Ruby programs.
There are some syntax enhancements and library changes in Ruby 3.0 as well.
Ruby 3.1 was released on December 25, 2021. It includes YJIT, a new, experimental, Just-In-Time Compiler developed by Shopify, to enhance the performance of real world business applications. A new debugger is also included. There are some syntax enhancements and other improvements in this release. Network libraries for FTP, SMTP, IMAP, and POP are moved from default gems to bundled gems.
Ruby 3.2 was released on December 25, 2022. It brings support for running inside a WebAssembly environment via a WASI interface. Regular expressions also receive some improvements, including a faster, memoized matching algorithm to protect against certain ReDoS attacks, and configurable timeouts for regular expression matching. Additional debugging and syntax features are also included in this release, such as syntax suggestion and error highlighting. The MJIT compiler has been re-implemented as a standard library module, while YJIT, a Rust-based JIT compiler, now supports more architectures on Linux.
Ruby 3.3 was released on December 25, 2023. Ruby 3.3 introduces significant enhancements and performance improvements to the language. Key features include the introduction of the Prism parser for portable and maintainable parsing, the addition of the pure-Ruby JIT compiler RJIT, and major performance boosts in the YJIT compiler. Additionally, improvements in memory usage, the introduction of an M:N thread scheduler, and updates to the standard library contribute to a more efficient and developer-friendly Ruby ecosystem.
Semantics and philosophy
Matsumoto has said that Ruby is designed for programmer productivity and fun, following the principles of good user interface design. At a Google Tech Talk in 2008 he said, "I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language." He stresses that systems design needs to emphasize human, rather than computer, needs:
Matsumoto has said his primary design goal was to make a language that he himself enjoyed using, by minimizing programmer work and possible confusion. He has said that he had not applied the principle of least astonishment (POLA) to the design of Ruby; in a May 2005 discussion on the newsgroup comp.lang.ruby, Matsumoto attempted to distance Ruby from POLA, explaining that because any design choice will be surprising to someone, he uses a personal standard in evaluating surprise. If that personal standard remains consistent, there would be few surprises for those familiar with the standard.
Matsumoto defined it this way in an interview:
Ruby is object-oriented: every value is an object, including classes and instances of types that many other languages designate as primitives (such as integers, Booleans, and "null"). Because everything in Ruby is an object, everything in Ruby has certain built-in abilities called methods. Every function is a method and methods are always called on an object. Methods defined at the top level scope become methods of the Object class. Since this class is an ancestor of every other class, such methods can be called on any object. They are also visible in all scopes, effectively serving as "global" procedures. Ruby supports inheritance with dynamic dispatch, mixins and singleton methods (belonging to, and defined for, a single instance rather than being defined on the class). Though Ruby does not support multiple inheritance, classes can import modules as mixins.
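A small sketch of those points: primitives respond to methods, a method defined at the top level becomes a method of Object, and a singleton method can be attached to a single instance (the names below are illustrative only):

  puts 3.even?                 # => false; integers are objects with methods
  puts 3.class                 # => Integer

  def shout(text)              # defined at the top level, so it lives on Object
    text.upcase + "!"
  end
  puts shout("hello")          # => HELLO!; callable from any scope, like a "global" procedure

  greeting = "hi"
  def greeting.polite          # singleton method: defined for this one string only
    "#{self}, nice to meet you"
  end
  puts greeting.polite         # => hi, nice to meet you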
Ruby has been described as a multi-paradigm programming language: it allows procedural programming (defining functions/variables outside classes makes them part of the root, 'self' Object), with object orientation (everything is an object) or functional programming (it has anonymous functions, closures, and continuations; statements all have values, and functions return the last evaluation). It has support for introspection, reflective programming, metaprogramming, and interpreter-based threads. Ruby features dynamic typing, and supports parametric polymorphism.
According to the Ruby FAQ, the syntax is similar to Perl's and the semantics are similar to Smalltalk's, but the design philosophy differs greatly from Python's.
Features
Thoroughly object-oriented with inheritance, mixins and metaclasses
Dynamic typing and duck typing
Everything is an expression (even statements) and everything is executed imperatively (even declarations)
Succinct and flexible syntax that minimizes syntactic noise and serves as a foundation for domain-specific languages
Dynamic reflection and alteration of objects to facilitate metaprogramming
Lexical closures, iterators and generators, with a block syntax
Literal notation for arrays, hashes, regular expressions and symbols
Embedding code in strings (interpolation)
Default arguments
Four levels of variable scope (global, class, instance, and local) denoted by sigils or the lack thereof (illustrated in the sketch after this list)
Garbage collection
First-class continuations
Strict Boolean coercion rules (everything is true except false and nil)
Exception handling
Operator overloading
Built-in support for rational numbers, complex numbers and arbitrary-precision arithmetic
Custom dispatch behavior (through method_missing and const_missing)
Native threads and cooperative fibers (fibers are a 1.9/YARV feature)
Support for Unicode and multiple character encodings.
Native plug-in API in C
Interactive Ruby Shell, an interactive command-line interpreter that can be used to test code quickly (REPL)
Centralized package management through RubyGems
Implemented on all major platforms
Large standard library, including modules for YAML, JSON, XML, CGI, OpenSSL, HTTP, FTP, RSS, curses, zlib and Tk
Just-in-time compilation
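As a quick illustration of the variable-scope sigils mentioned in the list above (global, class, instance, and local); the Counter class is a made-up example:

  $log_level = :info                 # $  global variable

  class Counter
    @@instances = 0                  # @@ class variable, shared across the class

    def initialize
      @count = 0                     # @  instance variable, private to this object
      @@instances += 1
    end

    def increment
      step = 1                       # no sigil: local variable
      @count += step
    end

    attr_reader :count

    def self.instances
      @@instances
    end
  end

  c = Counter.new
  c.increment
  puts c.count                       # => 1
  puts Counter.instances             # => 1
  puts $log_level                    # => info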
Syntax
The syntax of Ruby is broadly similar to that of Perl and Python. Class and method definitions are signaled by keywords, whereas code blocks can be defined by either keywords or braces. In contrast to Perl, variables are not obligatorily prefixed with a sigil. When used, the sigil changes the semantics of scope of the variable. For practical purposes there is no distinction between expressions and statements. Line breaks are significant and taken as the end of a statement; a semicolon may be equivalently used. Unlike Python, indentation is not significant.
One of the differences from Python and Perl is that Ruby keeps all of its instance variables completely private to the class and only exposes them through accessor methods (attr_writer, attr_reader, etc.). Unlike the "getter" and "setter" methods of other languages like C++ or Java, accessor methods in Ruby can be created with a single line of code via metaprogramming; however, accessor methods can also be created in the traditional fashion of C++ and Java. As invocation of these methods does not require the use of parentheses, it is trivial to change an instance variable into a full function, without modifying a single line of calling code or having to do any refactoring, achieving similar functionality to C# and VB.NET property members.
Python's property descriptors are similar, but come with a trade-off in the development process. If one begins in Python by using a publicly exposed instance variable, and later changes the implementation to use a private instance variable exposed through a property descriptor, code internal to the class may need to be adjusted to use the private variable rather than the public property. Ruby's design forces all instance variables to be private, but also provides a simple way to declare set and get methods. This is in keeping with the idea that in Ruby, one never directly accesses the internal members of a class from outside the class; rather, one passes a message to the class and receives a response.
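A minimal sketch of that point, using a hypothetical Account class: attr_accessor generates the reader and writer in one line, and the generated reader can later be replaced by a full method without touching calling code:

  class Account
    attr_accessor :balance           # generates #balance and #balance= in one line

    def initialize(balance)
      @balance = balance
    end
  end

  acct = Account.new(100)
  puts acct.balance                  # => 100
  acct.balance = 250

  class Account                      # reopen the class and replace the generated reader
    def balance
      @balance.round(2)              # now computed; callers are unchanged
    end
  end
  puts acct.balance                  # => 250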
Implementations
Matz's Ruby interpreter
The original Ruby interpreter is often referred to as Matz's Ruby Interpreter or MRI. This implementation is written in C and uses its own Ruby-specific virtual machine.
The standardized and retired Ruby 1.8 implementation was written in C, as a single-pass interpreted language.
Starting with Ruby 1.9, and continuing with Ruby 2.x and above, the official Ruby interpreter has been YARV ("Yet Another Ruby VM"), and this implementation has superseded the slower virtual machine used in previous releases of MRI.
Alternative implementations
There are a number of alternative implementations of Ruby, including JRuby, Rubinius, and mruby. Each takes a different approach, with JRuby and Rubinius providing just-in-time compilation and mruby also providing ahead-of-time compilation.
Ruby has three major alternative implementations:
JRuby, a mixed Java and Ruby implementation that runs on the Java virtual machine. JRuby currently targets Ruby 3.1.x.
TruffleRuby, a Java implementation using the Truffle language implementation framework with GraalVM
Rubinius, a C++ bytecode virtual machine that uses LLVM to compile to machine code at runtime. The bytecode compiler and most core classes are written in pure Ruby. Rubinius currently targets Ruby 2.3.1.
Other Ruby implementations include:
MagLev, a Smalltalk implementation that runs on GemTalk Systems' GemStone/S VM
mruby, an implementation designed to be embedded into C code, in a similar vein to Lua. It is currently being developed by Yukihiro Matsumoto and others
RGSS, or Ruby Game Scripting System, a proprietary implementation that is used by the RPG Maker series of software for game design and modification of the RPG Maker engine
julializer, a partial transpiler from Ruby to Julia. It can be used for a large speedup over, for example, the MRI or JRuby implementations, though it may only be useful for numerical code.
Topaz, a Ruby implementation written in Python
Opal, a web-based interpreter that compiles Ruby to JavaScript
Other now defunct Ruby implementations were:
MacRuby, a Mac OS X implementation on the Objective-C runtime. Its iOS counterpart is called RubyMotion
IronRuby an implementation on the .NET Framework
Cardinal, an implementation for the Parrot virtual machine
Ruby Enterprise Edition, often shortened to ree, an implementation optimized to handle large-scale Ruby on Rails projects
HotRuby, a JavaScript and ActionScript implementation of the Ruby programming language
The maturity of Ruby implementations tends to be measured by their ability to run the Ruby on Rails (Rails) framework, because it is complex to implement and uses many Ruby-specific features. The point when a particular implementation achieves this goal is called "the Rails singularity". The reference implementation, JRuby, and Rubinius are all able to run Rails unmodified in a production environment.
Platform support
Matsumoto originally developed Ruby on the 4.3BSD-based Sony NEWS-OS 3.x, but later migrated his work to SunOS 4.x, and finally to Linux. By 1999, Ruby was known to work across many different operating systems. Modern Ruby versions and implementations are available on all major desktop, mobile and server-based operating systems. Ruby is also supported across a number of cloud hosting platforms like Jelastic, Heroku, Google Cloud Platform and others.
Tools such as RVM and rbenv facilitate the installation and management of multiple Ruby versions, and of multiple 'gemsets', on one machine.
Repositories and libraries
RubyGems is Ruby's package manager. A Ruby package is called a "gem" and can be installed via the command line. Most gems are libraries, though a few exist that are applications, such as IDEs. There are over 100,000 Ruby gems hosted on RubyGems.org.
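A minimal sketch of that workflow (the gem name "some_library" is a hypothetical placeholder, not a real package):

  # Installed beforehand from the command line:
  #   $ gem install some_library    # fetches and installs the gem from RubyGems.org
  #   $ gem list                    # lists the gems installed on the machine
  #
  # A Ruby program then loads the installed gem with require:
  require "some_library"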
Many new and existing Ruby libraries are hosted on GitHub, a service that offers version control repository hosting for Git.
The Ruby Application Archive, which hosted applications, documentation, and libraries for Ruby programming, was maintained until 2013, when its function was transferred to RubyGems.
| Technology | Scripting languages | null |
25781 | https://en.wikipedia.org/wiki/Robot | Robot | A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically. A robot can be guided by an external control device, or the control may be embedded within. Robots may be constructed to evoke human form, but most robots are task-performing machines, designed with an emphasis on stark functionality, rather than expressive aesthetics.
Robots can be autonomous or semi-autonomous and range from humanoids such as Honda's Advanced Step in Innovative Mobility (ASIMO) and TOSY's TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nanorobots. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. Autonomous things are expected to proliferate in the future, with home robotics and the autonomous car as some of the main drivers.
The branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing, is robotics. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, or cognition. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics. These robots have also created a newer branch of robotics: soft robotics.
From the time of ancient civilization, there have been many accounts of user-configurable automated devices and even automata, resembling humans and other animals, such as animatronics, designed primarily as entertainment. As mechanical techniques developed through the Industrial age, there appeared more practical applications such as automated machines, remote-control and wireless remote-control.
The term comes from a Slavic root, robot-, with meanings associated with labor. The word "robot" was first used to denote a fictional humanoid in a 1920 Czech-language play R.U.R. (Rossumovi Univerzální Roboti – Rossum's Universal Robots) by Karel Čapek, though it was Karel's brother Josef Čapek who was the word's true inventor. Electronics evolved into the driving force of development with the advent of the first electronic autonomous robots created by William Grey Walter in Bristol, England in 1948, as well as Computer Numerical Control (CNC) machine tools in the late 1940s by John T. Parsons and Frank L. Stulen.
The first commercial, digital and programmable robot was built by George Devol in 1954 and was named the Unimate. It was sold to General Motors in 1961 where it was used to lift pieces of hot metal from die casting machines at the Inland Fisher Guide Plant in the West Trenton section of Ewing Township, New Jersey.
Robots have replaced humans in performing repetitive and dangerous tasks which humans prefer not to do, or are unable to do because of size limitations, or which take place in extreme environments such as outer space or the bottom of the sea. There are concerns about the increasing use of robots and their role in society. Robots are blamed for rising technological unemployment as they replace workers in increasing numbers of functions. The use of robots in military combat raises ethical concerns. The possibilities of robot autonomy and potential repercussions have been addressed in fiction and may be a realistic concern in the future.
Summary
There is no consensus on which machines qualify as robots, but there is general agreement among experts and the public that robots tend to possess some or all of the following abilities and functions: accept electronic programming, process data or physical perceptions electronically, operate autonomously to some degree, move around, operate physical parts of themselves or physical processes, sense and manipulate their environment, and exhibit intelligent behavior, especially behavior which mimics humans or other animals.
The word robot can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots. Related to the concept of a robot is the field of synthetic biology, which studies entities whose nature is more comparable to living things than to machines.
Simpler automated machines are called automatons, like animatronics, often made to resemble humans or animals. Humanoid robots that resemble humans aesthetically, possibly even organically, are called androids; android is sometimes shortened to droid, which refers to robots with a broader likeness. On the other hand, a human that is augmented with artificial machines is called a cyborg, which is a particular type of transhuman.
History
Early beginnings
Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the island from pirates.
In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called "The Pigeon". The Greek engineer Ctesibius (c. 270 BC) "applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures." Philo of Byzantium described a washstand automaton. Hero of Alexandria, a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water, including a "speaking" automaton.
In ancient China, the 3rd-century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an 'artificer'. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical 'handiwork' made of leather, wood, and artificial organs. There are also accounts of flying automata in the Han Fei Zi and other texts, which credit the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.
In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours. His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.
Samarangana Sutradhara, a Sanskrit treatise by Bhoja (11th century), includes a chapter about the construction of mechanical contrivances (automata), including mechanical bees and birds, fountains shaped like humans and animals, and male and female dolls that refilled oil lamps, danced, played instruments, and re-enacted scenes from Hindu mythology. The 11th century Lokapannatti tells of how the Buddha's relics were protected by mechanical robots (bhuta vahana yanta) from the kingdom of Roma visaya (Rome), until they were disarmed by King Ashoka.
13th century Muslim scientist Ismail al-Jazari created several automated devices. He built automated moving peacocks driven by hydropower. He also invented the earliest known automatic gates, which were driven by hydropower, and created automatic doors as part of one of his elaborate water clocks. One of al-Jazari's humanoid automata was a waitress that could serve water, tea or drinks. The drink was stored in a tank with a reservoir from where the drink drips into a bucket and, after seven minutes, into a cup, after which the waitress appears out of an automatic door serving the drink. Al-Jazari invented a hand washing automaton incorporating a flush mechanism now used in modern flush toilets. It features a female humanoid automaton standing by a basin filled with water. When the user pulls the lever, the water drains and the female automaton refills the basin.
Mark E. Rosheim summarizes the advances in robotics made by Muslim engineers, especially al-Jazari, as follows:Unlike the Greek designs, these Arab examples reveal an interest, not only in dramatic illusion, but in manipulating the environment for human comfort. Thus, the greatest contribution the Arabs made, besides preserving, disseminating and building on the work of the Greeks, was the concept of practical application. This was the key element that was missing in Greek robotic science.
In the 14th century, the coronation of Richard II of England featured an automaton angel.
In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci's notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo's robot, able to sit up, wave its arms and move its head and jaw. The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it. According to Encyclopædia Britannica, Leonardo da Vinci may have been influenced by the classic automata of al-Jazari.
In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet. Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.
In France, between 1738 and 1739, Jacques de Vaucanson exhibited several life-sized automatons: a flute player, a pipe player and a duck. The mechanical duck could flap its wings, crane its neck, and swallow food from the exhibitor's hand, and it gave the illusion of digesting its food by excreting matter stored in a hidden compartment. About 30 years later in Switzerland the clockmaker Pierre Jaquet-Droz made several complex mechanical figures that could write and play music. Several of these devices still exist and work.
Remote-controlled systems
Remotely operated vehicles were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided).
The Brennan torpedo, invented by Louis Brennan in 1877, was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first practical guided missile". In 1897 the British inventor Ernest Wilson was granted a patent for a torpedo remotely controlled by "Hertzian" (radio) waves and in 1898 Nikola Tesla publicly demonstrated a wireless-controlled torpedo that he hoped to sell to the US Navy.
In 1903, the Spanish engineer Leonardo Torres Quevedo demonstrated a radio control system called Telekino at the Paris Academy of Sciences, which he wanted to use to control an airship of his own design. He obtained several patents for the system in other countries. Unlike previous 'on/off' techniques, Torres established a method for controlling any mechanical or electrical device with different states of operation. The Telekino remotely controlled a tricycle in 1904, considered the first case of an unmanned ground vehicle, and an electric boat with a crew in 1906, which was controlled at a distance over 2 km.
Archibald Low became known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket.
Early robots
In 1928, one of the first humanoid robots, Eric, was exhibited at the annual exhibition of the Model Engineers Society in London, where it delivered a speech. Invented by W. H. Richards, the robot's frame consisted of an aluminium body of armour with eleven electromagnets and one motor powered by a twelve-volt power source. The robot could move its hands and head and could be controlled through remote control or voice control. Both Eric and his "brother" George toured the world.
Westinghouse Electric Corporation built Televox in 1926; it was a cardboard cutout connected to various devices which users could turn on and off. In 1939, the humanoid robot known as Elektro was debuted at the 1939 New York World's Fair. Seven feet tall (2.1 m) and weighing 265 pounds (120.2 kg), it could walk by voice command, speak about 700 words (using a 78-rpm record player), smoke cigarettes, blow up balloons, and move its head and arms. The body consisted of a steel gear, cam and motor skeleton covered by an aluminum skin. In 1928, Japan's first robot, Gakutensoku, was designed and constructed by biologist Makoto Nishimura.
The German V-1 flying bomb was equipped with systems for automatic guidance and range control, flying on a predetermined course (which could include a 90-degree turn) and entering a terminal dive after a predetermined distance. It was reported as being a 'robot' in contemporary descriptions.
Modern autonomous robots
The first electronic autonomous robots with complex behaviour were created by William Grey Walter of the Burden Neurological Institute at Bristol, England in 1948 and 1949. He wanted to prove that rich connections between a small number of brain cells could give rise to very complex behaviors – essentially that the secret of how the brain worked lay in how it was wired up. His first robots, named Elmer and Elsie, were constructed between 1948 and 1949 and were often described as tortoises due to their shape and slow rate of movement. The three-wheeled tortoise robots were capable of phototaxis, by which they could find their way to a recharging station when they ran low on battery power.
Walter stressed the importance of using purely analogue electronics to simulate brain processes at a time when his contemporaries such as Alan Turing and John von Neumann were all turning towards a view of mental processes in terms of digital computation. His work inspired subsequent generations of robotics researchers such as Rodney Brooks, Hans Moravec and Mark Tilden. Modern incarnations of Walter's turtles may be found in the form of BEAM robotics.
The first digitally operated and programmable robot was invented by George Devol in 1954 and was ultimately called the Unimate. This ultimately laid the foundations of the modern robotics industry. Devol sold the first Unimate to General Motors in 1960, and it was installed in 1961 in a plant in Trenton, New Jersey to lift hot pieces of metal from a die casting machine and stack them.
The first palletizing robot was introduced in 1963 by the Fuji Yusoki Kogyo Company. In 1973, a robot with six electromechanically driven axes was patented by KUKA robotics in Germany, and the programmable universal manipulation arm was invented by Victor Scheinman in 1976, and the design was sold to Unimation.
Commercial and industrial robots are now in widespread use performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.
Future development and trends
Various techniques have emerged to develop the science of robotics and robots. One method is evolutionary robotics, in which a number of differing robots are submitted to tests. Those which perform best are used as a model to create a subsequent "generation" of robots. Another method is developmental robotics, which tracks changes and development within a single robot in the areas of problem-solving and other functions. Another new type of robot, introduced only recently, acts as both a smartphone and a robot; it is named RoboHon.
As robots become more advanced, eventually there may be a standard computer operating system designed mainly for robots. Robot Operating System (ROS) is an open-source software set of programs being developed at Stanford University, the Massachusetts Institute of Technology, and the Technical University of Munich, Germany, among others. ROS provides ways to program a robot's navigation and limbs regardless of the specific hardware involved. It also provides high-level commands for items like image recognition and even opening doors. When ROS boots up on a robot's computer, it obtains data on attributes such as the length and movement of the robot's limbs and relays this data to higher-level algorithms. Microsoft is also developing a "Windows for robots" system with its Robotics Developer Studio, which has been available since 2007.
Japan hopes to have full-scale commercialization of service robots by 2025. Much technological research in Japan is led by Japanese government agencies, particularly the Trade Ministry.
Many future applications of robotics seem obvious to people, even though they are well beyond the capabilities of robots available at the time of the prediction. As early as 1982 people were confident that someday robots would:
1. Clean parts by removing molding flash
2. Spray paint automobiles with absolutely no human presence
3. Pack things in boxes (for example, orient and nest chocolate candies in candy boxes)
4. Make electrical cable harnesses
5. Load trucks with boxes (a packing problem)
6. Handle soft goods, such as garments and shoes
7. Shear sheep
8. Be used as prostheses
9. Cook fast food and work in other service industries
10. Work as household robots
Generally such predictions are overly optimistic in timescale.
New functionalities and prototypes
In 2008, Caterpillar Inc. developed a dump truck which can drive itself without any human operator. Many analysts believe that self-driving trucks may eventually revolutionize logistics. By 2014, Caterpillar had a self-driving dump truck which was expected to greatly change the process of mining. In 2015, these Caterpillar trucks were actively used in mining operations in Australia by the mining company Rio Tinto Coal Australia. Some analysts believe that within the next few decades, most trucks will be self-driving.
A literate or 'reading robot' named Marge has intelligence that comes from software. She can read newspapers, find and correct misspelled words, learn about banks like Barclays, and understand that some restaurants are better places to eat than others.
Baxter is a new robot introduced in 2012 which learns by guidance. A worker can teach Baxter how to perform a task by moving its hands in the desired motion and having Baxter memorize the motion. Extra dials, buttons, and controls are available on Baxter's arm for more precision and features. Any regular worker can program Baxter in a matter of minutes, unlike usual industrial robots that require extensive programming and coding to be used. This means Baxter needs no conventional programming to operate, and no software engineers are needed. This also means Baxter can be taught to perform multiple, more complicated tasks. Sawyer was added in 2015 for smaller, more precise tasks.
Prototype cooking robots have been developed and could be programmed for autonomous, dynamic and adjustable preparation of discrete meals.
Etymology
The word robot was introduced to the public by the Czech interwar writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920. The play begins in a factory that uses a chemical substitute for protoplasm to manufacture living, simplified people called robots. The play does not focus in detail on the technology behind the creation of these living creatures, but in their appearance they prefigure modern ideas of androids, creatures who can be mistaken for humans. These mass-produced workers are depicted as efficient but emotionless, incapable of original thinking and indifferent to self-preservation. At issue is whether the robots are being exploited and the consequences of human dependence upon commodified labor (especially after a number of specially-formulated robots achieve self-awareness and incite robots all around the world to rise up against the humans).
Karel Čapek himself did not coin the word. He wrote a short letter in reference to an etymology in the Oxford English Dictionary in which he named his brother, the painter and writer Josef Čapek, as its actual originator.
In an article in the Czech journal Lidové noviny in 1933, he explained that he had originally wanted to call the creatures laboři (from Latin labor, "work"). However, he did not like the word, and sought advice from his brother Josef, who suggested roboti. The word robota means literally "corvée" or "serf labor", and figuratively "drudgery" or "hard work" in Czech, and more generally "work" or "labor" in many Slavic languages (e.g.: Bulgarian, Russian, Serbian, Croatian, Slovenian, Slovak, Polish, Macedonian, Ukrainian and archaic Czech) as well as in Hungarian. Traditionally the robota (Hungarian robot) was the work period a serf (corvée) had to give for his lord, typically six months of the year. The origin of the word is the Old Church Slavonic rabota, "servitude" ("work" in contemporary Bulgarian, Macedonian and Russian), which in turn comes from the Proto-Indo-European root *orbh-. Robot is cognate with the German Arbeit.
English pronunciation of the word has evolved relatively quickly since its introduction. In the U.S. one pronunciation predominated during the late 1930s to early 1940s, several competing pronunciations were in use from the late 1950s to early 1960s, and by the 1970s its current pronunciation had become predominant.
The word robotics, used to describe this field of study, was coined by the science fiction writer Isaac Asimov. Asimov created the Three Laws of Robotics which are a recurring theme in his books. These have since been used by many others to define laws used in fiction. (The three laws are pure fiction, and no technology yet created has the ability to understand or follow them, and in fact most robots serve military purposes, which run quite contrary to the first law and often the third law. "People think about Asimov's laws, but they were set up to point out how a simple ethical system doesn't work. If you read the short stories, every single one is about a failure, and they are totally impractical," said Dr. Joanna Bryson of the University of Bath.)
Modern robots
Mobile robot
Mobile robots have the capability to move around in their environment and are not fixed to one physical location. An example of a mobile robot that is in common use today is the automated guided vehicle or automatic guided vehicle (AGV). An AGV is a mobile robot that follows markers or wires in the floor, or uses vision or lasers. AGVs are discussed later in this article.
Mobile robots are also found in industry, military and security environments. They also appear as consumer products, for entertainment or to perform certain tasks like vacuum cleaning. Mobile robots are the focus of a great deal of current research and almost every major university has one or more labs that focus on mobile robot research.
Mobile robots are usually used in tightly controlled environments such as on assembly lines because they have difficulty responding to unexpected interference. Because of this most humans rarely encounter robots. However domestic robots for cleaning and maintenance are increasingly common in and around homes in developed countries. Robots can also be found in military applications.
Industrial robots (manipulating)
Industrial robots usually consist of a jointed arm (multi-linked manipulator), attached to a fixed surface, and an end effector. One of the most common types of end effector is a gripper assembly.
The International Organization for Standardization gives a definition of a manipulating industrial robot in ISO 8373:
"an automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications."
This definition is used by the International Federation of Robotics, the European Robotics Research Network (EURON) and many national standards committees.
The industrial robots in food and drink processing plants are used for tasks such as feeding machines, packaging, and palletizing, which have replaced many manual, physical tasks. The complexity of digital skills required by workers varies depending on the level of automation and the specific tasks involved.
Service robot
Most commonly industrial robots are fixed robotic arms and manipulators used primarily for production and distribution of goods. The term "service robot" is less well-defined. The International Federation of Robotics has proposed a tentative definition, "A service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations."
Educational (interactive) robots
Robots are used as educational assistants to teachers. From the 1980s, robots such as turtles were used in schools and programmed using the Logo language.
Robot kits like Lego Mindstorms, BIOLOID, OLLO from ROBOTIS, or BotBrain Educational Robots can help children to learn about mathematics, physics, programming, and electronics. Robotics has also been introduced into the lives of elementary and high school students in the form of robot competitions with the organization FIRST (For Inspiration and Recognition of Science and Technology). The organization is the foundation for the FIRST Robotics Competition, FIRST Tech Challenge, FIRST Lego League Challenge and FIRST Lego League Explore competitions.
There have also been robots such as the teaching computer Leachim (1974). Leachim was an early example of speech synthesis using the diphone synthesis method. 2-XL (1976) was a robot-shaped game and teaching toy based on branching between audible tracks on an 8-track tape player; both were invented by Michael J. Freeman. Later, the 8-track was upgraded to tape cassettes and then to digital.
Modular robot
Modular robots are a new breed of robots that are designed to increase the use of robots by modularizing their architecture. The functionality and effectiveness of a modular robot is easier to increase compared to conventional robots. These robots are composed of a single type of identical modules, of several different module types, or of similarly shaped modules which vary in size. Their architectural structure allows hyper-redundancy, as modular robots can be designed with more than 8 degrees of freedom (DOF). Creating the programming, inverse kinematics and dynamics for modular robots is more complex than with traditional robots.

Modular robots may be composed of L-shaped modules, cubic modules, and U- and H-shaped modules. ANAT technology, an early modular robotic technology patented by Robotics Design Inc., allows the creation of modular robots from U- and H-shaped modules that connect in a chain, and are used to form heterogeneous and homogeneous modular robot systems. These "ANAT robots" can be designed with "n" DOF, as each module is a complete motorized robotic system that folds relative to the modules connected before and after it in its chain; a single module therefore provides one degree of freedom. The more modules that are connected to one another, the more degrees of freedom the robot will have. L-shaped modules can also be designed in a chain, but must become increasingly smaller as the size of the chain increases, as payloads attached to the end of the chain place a greater strain on modules that are further from the base. ANAT H-shaped modules do not suffer from this problem, as their design allows a modular robot to distribute pressure and impacts evenly among the other attached modules, so payload-carrying capacity does not decrease as the length of the arm increases.

Modular robots can be manually or self-reconfigured to form a different robot that may perform different applications. Because modular robots of the same architecture type are composed of modules that can compose different modular robots, a snake-arm robot can combine with another to form a dual- or quadra-arm robot, or can split into several mobile robots, and mobile robots can split into multiple smaller ones, or combine with others into a larger or different one. This gives a single modular robot the ability to be fully specialized in a single task, as well as the capacity to be reconfigured to perform multiple different tasks.
Modular robotic technology is currently being applied in hybrid transportation, industrial automation, duct cleaning and handling. Many research centres and universities have also studied this technology, and have developed prototypes.
Collaborative robots
A collaborative robot or cobot is a robot that can safely and effectively interact with human workers while performing simple industrial tasks. However, end-effectors and other environmental conditions may create hazards, and as such risk assessments should be done before using any industrial motion-control application.
The collaborative robots most widely used in industries today are manufactured by Universal Robots in Denmark.
Rethink Robotics (founded by Rodney Brooks, previously with iRobot) introduced Baxter in September 2012 as an industrial robot designed to safely interact with neighboring human workers and to be programmable for performing simple tasks. Baxters stop if they detect a human in the way of their robotic arms and have prominent off switches. Intended for sale to small businesses, they are promoted as the robotic analogue of the personal computer. Some 190 companies in the US have bought Baxters and they are being used commercially in the UK.
Robots in society
Roughly half of all the robots in the world are in Asia, 32% in Europe, 16% in North America, 1% in Australasia and 1% in Africa. 40% of all the robots in the world are in Japan, making Japan the country with the highest number of robots.
Autonomy and ethical questions
As robots have become more advanced and sophisticated, experts and academics have increasingly explored the questions of what ethics might govern robots' behavior, and whether robots might be able to claim any kind of social, cultural, ethical or legal rights. One scientific team has said that it was possible that a robot brain would exist by 2019. Others predict robot intelligence breakthroughs by 2050. Recent advances have made robotic behavior more sophisticated. The social impact of intelligent robots is the subject of a 2010 documentary film called Plug & Pray.
Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity". He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns.
Military robots
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. There are also concerns about technology which might allow some armed robots to be controlled mainly by other robots. The US Navy has funded a report which indicates that, as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. One researcher states that autonomous robots might be more humane, as they could make decisions more effectively. However, other experts question this.
One robot in particular, the EATR, has generated public concerns over its fuel source, as it can continually refuel itself using organic substances. Although the engine for the EATR is designed to run on biomass and vegetation specifically selected by its sensors, which it can find on battlefields or other local environments, the project has stated that chicken fat can also be used.
Manuel De Landa has noted that "smart missiles" and autonomous bombs equipped with artificial perception can be considered robots, as they make some of their decisions autonomously. He believes this represents an important and dangerous trend in which humans are handing over important decisions to machines.
Relationship to unemployment
For centuries, people have predicted that machines would make workers obsolete and increase unemployment, although the causes of unemployment are usually thought to be due to social policy.
A recent example of human replacement involves Taiwanese technology company Foxconn who, in July 2011, announced a three-year plan to replace workers with more robots. At the time, the company used ten thousand robots and planned to increase the number to a million robots over the three-year period.
Lawyers have speculated that an increased prevalence of robots in the workplace could lead to the need to improve redundancy laws.
Kevin J. Delaney said "Robots are taking human jobs. But Bill Gates believes that governments should tax companies' use of them, as a way to at least temporarily slow the spread of automation and to fund other types of employment." The robot tax would also help pay a guaranteed living wage to the displaced workers.
The World Bank's World Development Report 2019 puts forth evidence showing that while automation displaces workers, technological innovation creates more new industries and jobs on balance.
Contemporary uses
At present, there are two main types of robots, based on their use: general-purpose autonomous robots and dedicated robots.
Robots can be classified by their specificity of purpose. A robot might be designed to perform one particular task extremely well, or a range of tasks less well. All robots by their nature can be re-programmed to behave differently, but some are limited by their physical form. For example, a factory robot arm can perform jobs such as cutting, welding, gluing, or acting as a fairground ride, while a pick-and-place robot can only populate printed circuit boards.
General-purpose autonomous robots
General-purpose autonomous robots can perform a variety of functions independently. General-purpose autonomous robots typically can navigate independently in known spaces, handle their own re-charging needs, interface with electronic doors and elevators and perform other basic tasks. Like computers, general-purpose robots can link with networks, software and accessories that increase their usefulness. They may recognize people or objects, talk, provide companionship, monitor environmental quality, respond to alarms, pick up supplies and perform other useful tasks. General-purpose robots may perform a variety of functions simultaneously or they may take on different roles at different times of day. Some such robots try to mimic human beings and may even resemble people in appearance; this type of robot is called a humanoid robot. Humanoid robots are still in a very limited stage, as no humanoid robot can, as of yet, actually navigate around a room that it has never been in. Thus, humanoid robots are really quite limited, despite their intelligent behaviors in their well-known environments.
Factory robots
Car production
Over the last three decades, automobile factories have become dominated by robots. A typical factory contains hundreds of industrial robots working on fully automated production lines, with one robot for every ten human workers. On an automated production line, a vehicle chassis on a conveyor is welded, glued, painted and finally assembled at a sequence of robot stations.
Packaging
Industrial robots are also used extensively for palletizing and packaging of manufactured goods, for example for rapidly taking drink cartons from the end of a conveyor belt and placing them into boxes, or for loading and unloading machining centers.
Electronics
Mass-produced printed circuit boards (PCBs) are almost exclusively manufactured by pick-and-place robots, typically with SCARA manipulators, which remove tiny electronic components from strips or trays, and place them on to PCBs with great accuracy. Such robots can place hundreds of thousands of components per hour, far out-performing a human in speed, accuracy, and reliability.
Automated guided vehicles (AGVs)
Mobile robots, following markers or wires in the floor, or using vision or lasers, are used to transport goods around large facilities, such as warehouses, container ports, or hospitals.
Early AGV-style robots
Early AGV-style robots were limited to tasks that could be accurately defined and had to be performed the same way every time. Very little feedback or intelligence was required, and the robots needed only the most basic exteroceptors (sensors). The limitations of these AGVs are that their paths are not easily altered and they cannot alter their paths if obstacles block them. If one AGV breaks down, it may stop the entire operation.
Interim AGV technologies
Interim AGV technologies were developed to deploy triangulation from beacons or bar code grids for scanning on the floor or ceiling. In most factories, triangulation systems tend to require moderate to high maintenance, such as daily cleaning of all beacons or bar codes. Also, if a tall pallet or large vehicle blocks beacons or a bar code is marred, AGVs may become lost. Often such AGVs are designed to be used in human-free environments.
Intelligent AGVs (i-AGVs)
Intelligent AGVs such as SmartLoader, SpeciMinder, ADAM, Tug Eskorta, and MT 400 with Motivity are designed for people-friendly workspaces. They navigate by recognizing natural features. 3D scanners or other means of sensing the environment in two or three dimensions help to eliminate cumulative errors in dead-reckoning calculations of the AGV's current position. Some AGVs can create maps of their environment using scanning lasers with simultaneous localization and mapping (SLAM) and use those maps to navigate in real time with other path planning and obstacle avoidance algorithms. They are able to operate in complex environments and perform non-repetitive and non-sequential tasks such as transporting photomasks in a semiconductor lab, specimens in hospitals and goods in warehouses. For dynamic areas, such as warehouses full of pallets, AGVs require additional strategies using three-dimensional sensors such as time-of-flight or stereovision cameras.
Dirty, dangerous, dull, or inaccessible tasks
There are many jobs that humans would rather leave to robots. The job may be boring, such as domestic cleaning or sports field line marking, or dangerous, such as exploring inside a volcano. Other jobs are physically inaccessible, such as exploring another planet, cleaning the inside of a long pipe, or performing laparoscopic surgery.
Space probes
Almost every unmanned space probe ever launched was a robot. Some were launched in the 1960s with very limited abilities, but their ability to fly and land (in the case of Luna 9) is an indication of their status as a robot. This includes the Voyager probes and the Galileo probes, among others.
Telerobots
Teleoperated robots, or telerobots, are devices operated from a distance by a human operator rather than following a predetermined sequence of movements, but which have semi-autonomous behaviour. They are used when a human cannot be present on site to perform a job because it is dangerous, far away, or inaccessible. The robot may be in another room or another country, or may be on a very different scale to the operator. For instance, a laparoscopic surgery robot allows the surgeon to work inside a human patient on a relatively small scale compared to open surgery, significantly shortening recovery time. They can also be used to avoid exposing workers to hazardous and tight spaces, such as in duct cleaning. When disabling a bomb, the operator sends a small robot to disable it. Several authors have been using a device called the Longpen to sign books remotely. Teleoperated robot aircraft, like the Predator Unmanned Aerial Vehicle, are increasingly being used by the military. These pilotless drones can search terrain and fire on targets. Hundreds of robots such as iRobot's Packbot and the Foster-Miller TALON are being used in Iraq and Afghanistan by the U.S. military to defuse roadside bombs or improvised explosive devices (IEDs) in an activity known as explosive ordnance disposal (EOD).
Automated fruit harvesting machines
Robots are used to automate picking fruit on orchards at a cost lower than that of human pickers.
Domestic robots
Domestic robots are simple robots dedicated to a single task for use in the home. They are used in simple but often disliked jobs, such as vacuum cleaning, floor washing, and lawn mowing. An example of a domestic robot is a Roomba.
Military robots
Military robots include the SWORDS robot which is currently used in ground-based combat. It can use a variety of weapons and there is some discussion of giving it some degree of autonomy in battleground situations.
Unmanned combat air vehicles (UCAVs), which are an upgraded form of UAVs, can do a wide variety of missions, including combat. UCAVs such as the BAE Systems Mantis are being designed with the ability to fly themselves, to pick their own course and target, and to make most decisions on their own. The BAE Taranis is a UCAV built by Great Britain which can fly across continents without a pilot and has new means to avoid detection. Flight trials are expected to begin in 2011.
The AAAI has studied this topic in depth and its president has commissioned a study to look at this issue.
Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. Several such measures reportedly already exist, with robot-heavy countries such as Japan and South Korea having begun to pass regulations requiring robots to be equipped with safety systems, and possibly sets of 'laws' akin to Asimov's Three Laws of Robotics. An official report was issued in 2009 by the Japanese government's Robot Industry Policy Committee. Chinese officials and researchers have issued a report suggesting a set of ethical rules, and a set of new legal guidelines referred to as "Robot Legal Studies." Some concern has been expressed over a possible occurrence of robots telling apparent falsehoods.
Mining robots
Mining robots are designed to solve a number of problems currently facing the mining industry, including skills shortages, improving productivity from declining ore grades, and achieving environmental targets. Due to the hazardous nature of mining, in particular underground mining, the prevalence of autonomous, semi-autonomous, and tele-operated robots has greatly increased in recent times. A number of vehicle manufacturers provide autonomous trains, trucks and loaders that will load material, transport it on the mine site to its destination, and unload without requiring human intervention. One of the world's largest mining corporations, Rio Tinto, has recently expanded its autonomous truck fleet to the world's largest, consisting of 150 autonomous Komatsu trucks, operating in Western Australia. Similarly, BHP has announced the expansion of its autonomous drill fleet to the world's largest, 21 autonomous Atlas Copco drills.
Drilling, longwall and rockbreaking machines are now also available as autonomous robots. The Atlas Copco Rig Control System can autonomously execute a drilling plan on a drilling rig, moving the rig into position using GPS, setting up the drill rig and drilling down to specified depths. Similarly, the Transmin Rocklogic system can automatically plan a path to position a rockbreaker at a selected destination. These systems greatly enhance the safety and efficiency of mining operations.
Healthcare
Robots in healthcare have two main functions: those which assist an individual, such as a sufferer of a disease like multiple sclerosis, and those which aid in overall systems such as pharmacies and hospitals.
Home automation for the elderly and disabled
Robots used in home automation have developed over time from simple basic robotic assistants, such as the Handy 1, through to semi-autonomous robots, such as FRIEND which can assist the elderly and disabled with common tasks.
The population is aging in many countries, especially Japan, meaning that there are increasing numbers of elderly people to care for, but relatively fewer young people to care for them. Humans make the best carers, but where they are unavailable, robots are gradually being introduced.
FRIEND is a semi-autonomous robot designed to support disabled and elderly people in their daily life activities, like preparing and serving a meal. FRIEND makes it possible for patients who are paraplegic, have muscle diseases or serious paralysis (due to strokes etc.) to perform tasks without help from other people like therapists or nursing staff.
Pharmacies
Script Pro manufactures a robot designed to help pharmacies fill prescriptions that consist of oral solids or medications in pill form. The pharmacist or pharmacy technician enters the prescription information into its information system. The system, upon determining whether or not the drug is in the robot, will send the information to the robot for filling. The robot has three different sizes of vials to fill, determined by the size of the pill. The robot technician, user, or pharmacist determines the needed size of the vial based on the tablet when the robot is stocked. Once the vial is filled it is brought up to a conveyor belt that delivers it to a holder that spins the vial and attaches the patient label. Afterwards it is set on another conveyor that delivers the patient's medication vial to a slot labeled with the patient's name on an LED readout. The pharmacist or technician then checks the contents of the vial to ensure it is the correct drug for the correct patient, and then seals the vial and sends it out front to be picked up.
McKesson's Robot RX is another healthcare robotics product that helps pharmacies dispense thousands of medications daily with few or no errors. The robot can be ten feet wide and thirty feet long and can hold hundreds of different kinds of medications and thousands of doses. The pharmacy saves many resources like staff members that are otherwise unavailable in a resource-scarce industry. It uses an electromechanical head coupled with a pneumatic system to capture each dose and deliver it to either its stocked or dispensed location. The head moves along a single axis while it rotates 180 degrees to pull the medications. During this process it uses barcode technology to verify it is pulling the correct drug. It then delivers the drug to a patient-specific bin on a conveyor belt. Once the bin is filled with all of the drugs that a particular patient needs and that the robot stocks, the bin is then released and returned out on the conveyor belt to a technician waiting to load it into a cart for delivery to the floor.
Research robots
While most robots today are installed in factories or homes, performing labour or life saving jobs, many new types of robot are being developed in laboratories around the world. Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robot, alternative ways to think about or design robots, and new ways to manufacture them. It is expected that these new types of robot will be able to solve real world problems when they are finally realized.
Bionic and biomimetic robots
One approach to designing robots is to base them on animals. BionicKangaroo was designed and engineered by studying and applying the physiology and methods of locomotion of a kangaroo.
Nanorobots
Nanorobotics is the emerging technology field of creating machines or robots whose components are at or close to the microscopic scale of a nanometer (10⁻⁹ meters). Also known as "nanobots" or "nanites", they would be constructed from molecular machines. So far, researchers have mostly produced only parts of these complex systems, such as bearings, sensors, and synthetic molecular motors, but functioning robots have also been made such as the entrants to the Nanobot Robocup contest. Researchers also hope to be able to create entire robots as small as viruses or bacteria, which could perform tasks on a tiny scale. Possible applications include micro surgery (on the level of individual cells), utility fog, manufacturing, weaponry and cleaning. Some people have suggested that if there were nanobots which could reproduce, the earth would turn into "grey goo", while others argue that this hypothetical outcome is nonsense.
Reconfigurable robots
A few researchers have investigated the possibility of creating robots which can alter their physical form to suit a particular task, like the fictional T-1000. Real robots are nowhere near that sophisticated however, and mostly consist of a small number of cube shaped units, which can move relative to their neighbours. Algorithms have been designed in case any such robots become a reality.
Robotic, mobile laboratory operators
In July 2020 scientists reported the development of a mobile robot chemist and demonstrated that it can assist in experimental searches. According to the scientists, their strategy was to automate the researcher rather than the instruments, freeing up time for the human researchers to think creatively, and it could identify photocatalyst mixtures for hydrogen production from water that were six times more active than the initial formulations. The modular robot can operate laboratory instruments, work nearly around the clock, and autonomously make decisions on its next actions depending on experimental results.
Soft-bodied robots
Robots with silicone bodies and flexible actuators (air muscles, electroactive polymers, and ferrofluids) look and feel different from robots with rigid skeletons, and can have different behaviors. Soft, flexible (and sometimes even squishy) robots are often designed to mimic the biomechanics of animals and other things found in nature, which is leading to new applications in medicine, care giving, search and rescue, food handling and manufacturing, and scientific exploration.
Swarm robots
Inspired by colonies of insects such as ants and bees, researchers are modeling the behavior of swarms of thousands of tiny robots which together perform a useful task, such as finding something hidden, cleaning, or spying. Each robot is quite simple, but the emergent behavior of the swarm is more complex. The whole set of robots can be considered as one single distributed system, in the same way an ant colony can be considered a superorganism, exhibiting swarm intelligence. The largest swarms so far created include the iRobot swarm, the SRI/MobileRobots CentiBots project and the Open-source Micro-robotic Project swarm, which are being used to research collective behaviors. Swarms are also more resistant to failure. Whereas one large robot may fail and ruin a mission, a swarm can continue even if several robots fail. This could make them attractive for space exploration missions, where failure is normally extremely costly.
Haptic interface robots
Robotics also has application in the design of virtual reality interfaces. Specialized robots are in widespread use in the haptic research community. These robots, called "haptic interfaces", allow touch-enabled user interaction with real and virtual environments. Robotic forces allow simulating the mechanical properties of "virtual" objects, which users can experience through their sense of touch.
Contemporary art and sculpture
Robots are used by contemporary artists to create works that include mechanical automation. There are many branches of robotic art, one of which is robotic installation art, a type of installation art that is programmed to respond to viewer interactions, by means of computers, sensors and actuators. The future behavior of such installations can therefore be altered by input from either the artist or the participant, which differentiates these artworks from other types of kinetic art.
In 2018, Le Grand Palais in Paris organized an exhibition, "Artists & Robots", featuring artworks created by more than forty artists with the help of robots.
Robots in popular culture
Literature
Robotic characters, androids (artificial men/women) or gynoids (artificial women), and cyborgs (also "bionic men/women", or humans with significant mechanical enhancements) have become a staple of science fiction.
The first reference in Western literature to mechanical servants appears in Homer's Iliad. In Book XVIII, Hephaestus, god of fire, creates new armor for the hero Achilles, assisted by robots. According to the Rieu translation, "Golden maidservants hastened to help their master. They looked like real women and could not only speak and use their limbs but were endowed with intelligence and trained in handwork by the immortal gods." The words "robot" or "android" are not used to describe them, but they are nevertheless mechanical devices human in appearance. "The first use of the word Robot was in Karel Čapek's play R.U.R. (Rossum's Universal Robots) (written in 1920)". Writer Karel Čapek was born in Czechoslovakia (Czech Republic).
Possibly the most prolific author of the twentieth century was Isaac Asimov (1920–1992) who published over five-hundred books. Asimov is probably best remembered for his science-fiction stories and especially those about robots, where he placed robots and their interaction with society at the center of many of his works. Asimov carefully considered the problem of the ideal set of instructions robots might be given to lower the risk to humans, and arrived at his Three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These were introduced in his 1942 short story "Runaround", although foreshadowed in a few earlier stories. Later, Asimov added the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm"; the rest of the laws are modified sequentially to acknowledge this.
According to the Oxford English Dictionary, the first passage in Asimov's short story "Liar!" (1941) that mentions the First Law is the earliest recorded use of the word robotics. Asimov was not initially aware of this; he assumed the word already existed by analogy with mechanics, hydraulics, and other similar terms denoting branches of applied knowledge.
Robot competitions
Robots are used in a number of competitive events. Robot combat competitions have been popularized by television shows such as Robot Wars and BattleBots, featuring mostly remotely controlled 'robots' that compete against each other directly using various weaponry; amateur robot combat leagues are also active globally outside of the televised events. Micromouse events, in which autonomous robots compete to solve mazes or other obstacle courses, are also held internationally.
Robot competitions are also often used within educational settings to introduce the concept of robotics to children, such as the FIRST Robotics Competition in the US.
Films
Robots appear in many films. Most of the robots in cinema are fictional. Two of the most famous are R2-D2 and C-3PO from the Star Wars franchise.
Sex robots
The concept of humanoid sex robots has drawn public attention and elicited debate regarding their supposed benefits and potential effects on society. Opponents argue that the introduction of such devices would be socially harmful and demeaning to women and children, while proponents cite their potential therapeutic benefits, particularly in aiding people with dementia or depression.
Problems depicted in popular culture
Fears and concerns about robots have been repeatedly expressed in a wide range of books and films. A common theme is the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race. Frankenstein (1818), often called the first science fiction novel, has become synonymous with the theme of a robot or android advancing beyond its creator.
Other works with similar themes include The Mechanical Man, The Terminator, Runaway, RoboCop, the Replicators in Stargate, the Cylons in Battlestar Galactica, the Cybermen and Daleks in Doctor Who, The Matrix, Enthiran and I, Robot. Some fictional robots are programmed to kill and destroy; others gain superhuman intelligence and abilities by upgrading their own software and hardware. Examples of popular media where the robot becomes evil are 2001: A Space Odyssey, Red Planet and Enthiran.
The 2017 game Horizon Zero Dawn explores themes of robotics in warfare, robot ethics, and the AI control problem, as well as the positive or negative impact such technologies could have on the environment.
Another common theme is the reaction, sometimes called the "uncanny valley", of unease and even revulsion at the sight of robots that mimic humans too closely.
More recently, fictional representations of artificially intelligent robots in films such as A.I. Artificial Intelligence and Ex Machina and the 2016 TV adaptation of Westworld have engaged audience sympathy for the robots themselves.
The theme of robot emancipation or revolution was already present in R.U.R., the play that coined the term.
The Star Wars universe, for example, has several instances of droid revolts.
The Dune series, on the other hand, is premised on humans having revolted against thinking machines and developed human-biological alternatives to them.
| Technology | Basics_8 | null |
25784 | https://en.wikipedia.org/wiki/Renewable%20energy | Renewable energy | Renewable energy (also called green energy) is energy from renewable natural resources that are replenished on a human timescale. The most widely used renewable energy types are solar energy, wind power, and hydropower. Bioenergy and geothermal power are also significant in some countries. Some also consider nuclear power a renewable power source, although this is controversial. Renewable energy installations can be large or small and are suited for both urban and rural areas. Renewable energy is often deployed together with further electrification. This has several benefits: electricity can move heat and vehicles efficiently and is clean at the point of consumption. Variable renewable energy sources are those that have a fluctuating nature, such as wind power and solar power. In contrast, controllable renewable energy sources include dammed hydroelectricity, bioenergy, or geothermal power.
Renewable energy systems have rapidly become more efficient and cheaper over the past 30 years. A large majority of worldwide newly installed electricity capacity is now renewable. Renewable energy sources, such as solar and wind power, have seen significant cost reductions over the past decade, making them more competitive with traditional fossil fuels. In most countries, photovoltaic solar or onshore wind are the cheapest new-build electricity. From 2011 to 2021, renewable energy grew from 20% to 28% of global electricity supply. Power from the sun and wind accounted for most of this increase, growing from a combined 2% to 10%. Use of fossil energy shrank from 68% to 62%. In 2022, renewables accounted for 30% of global electricity generation and are projected to reach over 42% by 2028. Many countries already have renewables contributing more than 20% of their total energy supply, with some generating over half or even all their electricity from renewable sources.
The main motivation to replace fossil fuels with renewable energy sources is to slow and eventually stop climate change, which is widely agreed to be caused mostly by greenhouse gas emissions. In general, renewable energy sources cause much lower emissions than fossil fuels. The International Energy Agency estimates that to achieve net zero emissions by 2050, 90% of global electricity generation will need to be produced from renewable sources. Renewables also cause much less air pollution than fossil fuels, improving public health, and are less noisy.
The deployment of renewable energy still faces obstacles, especially fossil fuel subsidies, lobbying by incumbent power providers, and local opposition to the use of land for renewable installations. Like all mining, the extraction of minerals required for many renewable energy technologies also results in environmental damage. In addition, although most renewable energy sources are sustainable, some are not.
Overview
Definition
Renewable energy is usually understood as energy harnessed from continuously occurring natural phenomena. The International Energy Agency defines it as "energy derived from natural processes that are replenished at a faster rate than they are consumed". Solar power, wind power, hydroelectricity, geothermal energy, and biomass are widely agreed to be the main types of renewable energy. Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services.
Although almost all forms of renewable energy cause far fewer carbon emissions than fossil fuels, the term is not synonymous with low-carbon energy. Some non-renewable sources of energy, such as nuclear power, generate almost no emissions, while some renewable energy sources can be very carbon-intensive, such as the burning of biomass if it is not offset by planting new plants. Renewable energy is also distinct from sustainable energy, a more abstract concept that seeks to group energy sources based on their overall permanent impact on future generations of humans. For example, biomass is often associated with unsustainable deforestation.
Role in addressing climate change
As part of the global effort to limit climate change, most countries have committed to net zero greenhouse gas emissions. In practice, this means phasing out fossil fuels and replacing them with low-emissions energy sources. This much-needed process, described as "low-carbon substitutions" in contrast to other transition processes such as energy additions, needs to be accelerated severalfold in order to successfully mitigate climate change. At the 2023 United Nations Climate Change Conference, around three-quarters of the world's countries set a goal of tripling renewable energy capacity by 2030. The European Union aims to generate 40% of its electricity from renewables by the same year.
Other benefits
Renewable energy is more evenly distributed around the world than fossil fuels, which are concentrated in a limited number of countries. It also brings health benefits by reducing air pollution caused by the burning of fossil fuels. The potential worldwide savings in health care costs have been estimated at trillions of dollars annually.
Intermittency
The two most important forms of renewable energy, solar and wind, are intermittent energy sources: they are not available constantly, resulting in lower capacity factors. In contrast, fossil fuel power plants, nuclear power plants and hydropower are usually able to produce precisely the amount of energy an electricity grid requires at a given time. Solar energy can only be captured during the day, and ideally in cloudless conditions. Wind power generation can vary significantly not only day-to-day, but even month-to-month. This poses a challenge when transitioning away from fossil fuels: energy demand will often be higher or lower than what renewables can provide. Both scenarios can cause electricity grids to become overloaded, leading to power outages.
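To make the notion of a capacity factor concrete, the short Python sketch below computes it as actual energy delivered divided by the energy a plant would deliver running at its rated power all year. The plant sizes and outputs are made-up illustrative figures, not data for any real installation.

```python
# Capacity factor: actual energy delivered divided by the energy the plant
# would deliver if it ran at its rated power for the whole period.
# Figures below are illustrative assumptions, not measurements.

HOURS_PER_YEAR = 8760

def capacity_factor(annual_energy_mwh: float, rated_power_mw: float) -> float:
    """Return the capacity factor over one year as a fraction."""
    return annual_energy_mwh / (rated_power_mw * HOURS_PER_YEAR)

# A hypothetical 100 MW solar farm producing 175,000 MWh/year ...
solar_cf = capacity_factor(175_000, 100)   # ~0.20
# ... versus a hypothetical 100 MW gas plant producing 500,000 MWh/year.
gas_cf = capacity_factor(500_000, 100)     # ~0.57

print(f"solar capacity factor: {solar_cf:.2f}")
print(f"gas capacity factor:   {gas_cf:.2f}")
```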
In the medium-term, this variability may require keeping some gas-fired power plants or other dispatchable generation on standby until there is enough energy storage, demand response, grid improvement, or baseload power from non-intermittent sources. In the long-term, energy storage is an important way of dealing with intermittency. Using diversified renewable energy sources and smart grids can also help flatten supply and demand.
Sector coupling of the power generation sector with other sectors may increase flexibility: for example the transport sector can be coupled by charging electric vehicles and sending electricity from vehicle to grid. Similarly the industry sector can be coupled by hydrogen produced by electrolysis, and the buildings sector by thermal energy storage for space heating and cooling.
Building overcapacity for wind and solar generation can help ensure sufficient electricity production even during poor weather. In optimal weather, it may be necessary to curtail energy generation if it is not possible to use or store excess electricity.
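As a rough illustration of how overcapacity, storage, and curtailment interact, the toy dispatch loop below (Python, with made-up hourly figures and a hypothetical battery size) charges storage with any surplus, discharges it to cover shortfalls, and curtails whatever can neither be used nor stored; round-trip losses are ignored for simplicity.

```python
# Toy hourly dispatch: surplus renewable generation first charges storage,
# then is curtailed; shortfalls are met from storage while it lasts.
# Generation and demand series are made-up illustrative numbers (MWh per hour).

generation = [120, 150, 90, 40, 30, 80]
demand     = [100, 100, 100, 100, 100, 100]

STORAGE_CAPACITY = 60.0   # MWh, hypothetical battery
storage = 0.0

for hour, (gen, load) in enumerate(zip(generation, demand)):
    surplus = gen - load
    if surplus >= 0:
        # Store as much of the surplus as the battery can take, curtail the rest.
        stored = min(surplus, STORAGE_CAPACITY - storage)
        storage += stored
        curtailed = surplus - stored
        print(f"hour {hour}: stored {stored:.0f}, curtailed {curtailed:.0f} MWh")
    else:
        # Cover the shortfall from storage; anything left is unmet demand.
        discharged = min(-surplus, storage)
        storage -= discharged
        unmet = -surplus - discharged
        print(f"hour {hour}: discharged {discharged:.0f}, unmet {unmet:.0f} MWh")
```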
Electrical energy storage
Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power, solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 85% of all grid power storage. Batteries are increasingly being deployed for storage, grid ancillary services, and domestic storage. In terms of capital expenditure, green hydrogen is a more economical means of long-term renewable energy storage than pumped hydroelectricity or batteries.
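For a sense of the scale of pumped-storage hydroelectricity, the recoverable energy can be estimated from the mass of water moved, the height difference between reservoirs, and a round-trip efficiency. The Python sketch below uses a hypothetical reservoir; the volume, head, and efficiency are assumptions for illustration only.

```python
# Energy recoverable from a pumped-storage reservoir:
#   E = rho * V * g * h * eta
# where rho is water density, V the usable volume, g gravity,
# h the head (height difference) and eta the round-trip efficiency.
# All input values here are assumptions for illustration.

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def pumped_storage_energy_mwh(volume_m3: float, head_m: float,
                              round_trip_efficiency: float) -> float:
    energy_joules = RHO_WATER * volume_m3 * G * head_m * round_trip_efficiency
    return energy_joules / 3.6e9   # 1 MWh = 3.6e9 J

# Hypothetical upper reservoir: 5 million m^3, 300 m head, 75% round-trip efficiency.
print(f"{pumped_storage_energy_mwh(5e6, 300, 0.75):.0f} MWh")  # ~3,066 MWh
```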
Energy supply security
Two main renewable energy sources - solar power and wind power - are usually deployed in distributed generation architecture, which offers specific benefits and comes with specific risks. Notable risks are associated with the concentration of 90% of the photovoltaic supply chain in a single country (China). The mass-scale installation of remotely controlled photovoltaic inverters with security vulnerabilities or backdoors creates the risk of cyberattacks that could disable generation from millions of physically decentralised panels, removing hundreds of gigawatts of installed power from the grid at once. Similar attacks have targeted wind power farms through vulnerabilities in their remote control and monitoring systems. The European NIS2 directive partially responds to these challenges by extending the scope of cybersecurity regulations to the energy generation market.
Mainstream technologies
Solar energy
Solar power produced around 1,300 terawatt-hours (TWh) worldwide in 2022, representing 4.6% of the world's electricity. Almost all of this growth has happened since 2010. Solar energy can be harnessed anywhere that receives sunlight; however, the amount of solar energy that can be harnessed for electricity generation is influenced by weather conditions, geographic location and time of day.
There are two mainstream ways of harnessing solar energy: solar thermal, which converts solar energy into heat; and photovoltaics (PV), which converts it into electricity. PV is far more widespread, accounting for around two thirds of the global solar energy capacity as of 2022. It is also growing at a much faster rate, with 170 GW newly installed capacity in 2021, compared to 25 GW of solar thermal.
Passive solar refers to a range of construction strategies and technologies that aim to optimize the distribution of solar heat in a building. Examples include solar chimneys, orienting a building to the sun, using construction materials that can store heat, and designing spaces that naturally circulate air.
From 2020 to 2022, solar technology investments almost doubled from USD 162 billion to USD 308 billion, driven by the sector's increasing maturity and cost reductions, particularly in solar photovoltaic (PV), which accounted for 90% of total investments. China and the United States were the main recipients, collectively making up about half of all solar investments since 2013. Despite reductions in Japan and India due to policy changes and COVID-19, growth in China, the United States, and a significant increase from Vietnam's feed-in tariff program offset these declines. Globally, the solar sector added 714 gigawatts (GW) of solar PV and concentrated solar power (CSP) capacity between 2013 and 2021, with a notable rise in large-scale solar heating installations in 2021, especially in China, Europe, Turkey, and Mexico.
Photovoltaics
A photovoltaic system, consisting of solar cells assembled into panels, converts light into electrical direct current via the photoelectric effect. PV has several advantages that make it by far the fastest-growing renewable energy technology. It is cheap, low-maintenance and scalable; adding to an existing PV installation as demand arises is simple. Its main disadvantage is its poor performance in cloudy weather.
PV systems range from small, residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. A household's solar panels can either be used for just that household or, if connected to an electrical grid, can be aggregated with millions of others.
The first utility-scale solar power plant was built in 1982 in Hesperia, California by ARCO. The plant was not profitable and was sold eight years later. However, over the following decades, PV cells became significantly more efficient and cheaper. As a result, PV adoption has grown exponentially since 2010. Global capacity increased from 230 GW at the end of 2015 to 890 GW in 2021. PV grew fastest in China between 2016 and 2021, adding 560 GW, more than all advanced economies combined. Four of the ten biggest solar power stations are in China, including the biggest, Golmud Solar Park.
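A common back-of-envelope estimate of a PV array's annual output multiplies array area, module efficiency, the local annual irradiation, and a performance ratio covering real-world losses. The Python sketch below is only an illustration; the area, efficiency, irradiation, and performance ratio are assumed values, not a sizing method from any standard.

```python
# Rough annual PV yield: E = A * eta * H * PR
#   A   - array area in m^2
#   eta - module efficiency (fraction)
#   H   - annual irradiation on the array plane, kWh/m^2/year
#   PR  - performance ratio covering inverter, temperature and wiring losses
# All numbers below are illustrative assumptions.

def annual_pv_yield_kwh(area_m2: float, efficiency: float,
                        irradiation_kwh_m2: float, performance_ratio: float) -> float:
    return area_m2 * efficiency * irradiation_kwh_m2 * performance_ratio

# A hypothetical 20 m^2 rooftop array with 20% efficient modules,
# 1,400 kWh/m^2/year of irradiation, and a performance ratio of 0.8.
print(f"{annual_pv_yield_kwh(20, 0.20, 1400, 0.8):.0f} kWh/year")  # ~4,480 kWh
```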
Solar thermal
Unlike photovoltaic cells that convert sunlight directly into electricity, solar thermal systems convert it into heat. They use mirrors or lenses to concentrate sunlight onto a receiver, which in turn heats a water reservoir. The heated water can then be used in homes. The advantage of solar thermal is that the heated water can be stored until it is needed, eliminating the need for a separate energy storage system. Solar thermal power can also be converted to electricity by using the steam generated from the heated water to drive a turbine connected to a generator. However, because generating electricity this way is much more expensive than photovoltaic power plants, there are very few in use today.
Wind power
Humans have harnessed wind energy since at least 3500 BC. Until the 20th century, it was primarily used to power ships, windmills and water pumps. Today, the vast majority of wind power is used to generate electricity using wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine. Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms.
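The cube-law relationship mentioned above can be made concrete with the standard expression for power extracted from wind passing through a rotor, P = ½ρAv³Cp. The Python sketch below uses assumed values (rotor diameter, power coefficient, sea-level air density) purely for illustration.

```python
import math

# Power captured by a wind turbine: P = 0.5 * rho * A * v^3 * Cp
#   rho - air density (kg/m^3)
#   A   - rotor swept area (m^2)
#   v   - wind speed (m/s)
#   Cp  - power coefficient (the Betz limit is ~0.59; real turbines are ~0.4)
# Values below are illustrative assumptions.

RHO_AIR = 1.225  # kg/m^3 at sea level

def turbine_power_mw(rotor_diameter_m: float, wind_speed_ms: float,
                     cp: float = 0.4) -> float:
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    power_watts = 0.5 * RHO_AIR * swept_area * wind_speed_ms ** 3 * cp
    return power_watts / 1e6

d = 120  # hypothetical 120 m rotor
for v in (6, 8, 12):
    print(f"{v} m/s -> {turbine_power_mw(d, v):.1f} MW")
# Doubling wind speed from 6 to 12 m/s raises output roughly 8-fold (2^3),
# until the turbine reaches its rated power and the output is capped.
```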
Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore, and likely also industrial use of new types of VAWT turbines in addition to the horizontal axis units currently in use. As offshore wind speeds average ~90% greater than those on land, offshore resources can contribute substantially more energy than land-stationed turbines.
Investments in wind technologies reached USD 161 billion in 2020, with onshore wind dominating at 80% of total investments from 2013 to 2022. Offshore wind investments nearly doubled to USD 41 billion between 2019 and 2020, primarily due to policy incentives in China and expansion in Europe. Global wind capacity increased by 557 GW between 2013 and 2021, with capacity additions increasing by an average of 19% each year.
Hydropower
Since water is about 800 times denser than air, even a slow flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. Water can generate electricity with a conversion efficiency of about 90%, which is the highest rate in renewable energy. There are many forms of water energy:
Historically, hydroelectric power came from constructing large hydroelectric dams and reservoirs, which are still popular in developing countries. The largest of them are the Three Gorges Dam (2003) in China and the Itaipu Dam (1984) built by Brazil and Paraguay.
Small hydro systems are hydroelectric power installations with comparatively small generating capacity. They are often used on small rivers or as a low-impact development on larger rivers. China is the largest producer of hydroelectricity in the world and has more than 45,000 small hydro installations.
Run-of-the-river hydroelectricity plants derive energy from rivers without the creation of a large reservoir. The water is typically conveyed along the side of the river valley (using channels, pipes or tunnels) until it is high above the valley floor, whereupon it can be allowed to fall through a penstock to drive a turbine. A run-of-river plant may still produce a large amount of electricity, such as the Chief Joseph Dam on the Columbia River in the United States. However many run-of-the-river hydro power plants are micro hydro or pico hydro plants.
Much hydropower is flexible, thus complementing wind and solar. In 2021, the world renewable hydropower capacity was 1,360 GW. Only a third of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed. New hydropower projects face opposition from local communities due to their large impact, including relocation of communities and flooding of wildlife habitats and farming land. High costs and long lead times from the permission process, including environmental and risk assessments, along with a lack of environmental and social acceptance, are therefore the primary challenges for new developments. It is popular to repower old dams, increasing their efficiency and capacity as well as their responsiveness on the grid. Where circumstances permit, existing dams such as the Russell Dam (built in 1985) may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Because dispatchable power is more valuable than variable renewable energy, countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries having limited hydro.
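The high conversion efficiency and density of water mentioned at the start of this section translate into a simple expression for hydroelectric output: power is proportional to flow rate and head. The Python sketch below is a rough illustration with hypothetical inputs, not data for any actual plant.

```python
# Hydroelectric power: P = eta * rho * g * Q * h
#   eta - overall turbine/generator efficiency (can approach 0.9)
#   rho - water density (kg/m^3)
#   g   - gravitational acceleration (m/s^2)
#   Q   - flow rate (m^3/s)
#   h   - head, the height the water falls (m)
# Input values below are assumptions, not data for any real plant.

RHO_WATER = 1000.0
G = 9.81

def hydro_power_mw(flow_m3_s: float, head_m: float, efficiency: float = 0.9) -> float:
    return efficiency * RHO_WATER * G * flow_m3_s * head_m / 1e6

# Hypothetical run-of-river plant: 50 m^3/s of flow through a 40 m head.
print(f"{hydro_power_mw(50, 40):.1f} MW")  # ~17.7 MW
```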
Bioenergy
Biomass is biological material derived from living, or recently living organisms. Most commonly, it refers to plants or plant-derived materials. As an energy source, biomass can either be used directly via combustion to produce heat, or converted to a more energy-dense biofuel like ethanol. Wood is the most significant biomass energy source as of 2012 and is usually sourced from trees cleared for silvicultural reasons or fire prevention. Municipal wood waste – for instance, construction materials or sawdust – is also often burned for energy. The biggest per-capita producers of wood-based bioenergy are heavily forested countries like Finland, Sweden, Estonia, Austria, and Denmark.
Bioenergy can be environmentally destructive if old-growth forests are cleared to make way for crop production. In particular, demand for palm oil to produce biodiesel has contributed to the deforestation of tropical rainforests in Brazil and Indonesia. In addition, burning biomass still produces carbon emissions, although much less than fossil fuels (39 grams of CO2 per megajoule of energy, compared to 75 g/MJ for fossil fuels).
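The per-megajoule figures quoted above can be turned into a simple comparison. The Python sketch below applies them to a hypothetical annual heat demand; the demand figure is an assumption for illustration only.

```python
# Comparing CO2 from heat produced by biomass vs fossil fuel, using the
# per-megajoule figures quoted above (39 g/MJ vs 75 g/MJ). The heat demand
# is an illustrative assumption.

BIOMASS_G_PER_MJ = 39
FOSSIL_G_PER_MJ = 75

def annual_co2_kg(heat_demand_gj: float, grams_per_mj: float) -> float:
    return heat_demand_gj * 1000 * grams_per_mj / 1000  # GJ -> MJ, then g -> kg

demand_gj = 50  # hypothetical household heating demand per year
print(f"biomass: {annual_co2_kg(demand_gj, BIOMASS_G_PER_MJ):.0f} kg CO2/year")  # ~1,950
print(f"fossil:  {annual_co2_kg(demand_gj, FOSSIL_G_PER_MJ):.0f} kg CO2/year")   # ~3,750
```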
Some biomass sources are unsustainable at current rates of exploitation (as of 2017).
Biofuel
Biofuels are primarily used in transportation, providing 3.5% of the world's transport energy demand in 2022, up from 2.7% in 2010. Biojet is expected to be important for short-term reduction of carbon dioxide emissions from long-haul flights.
Aside from wood, the major sources of bioenergy are bioethanol and biodiesel. Bioethanol is usually produced by fermenting the sugar components of crops like sugarcane and maize, while biodiesel is mostly made from oils extracted from plants, such as soybean oil and corn oil. Most of the crops used to produce bioethanol and biodiesel are grown specifically for this purpose, although used cooking oil accounted for 14% of the oil used to produce biodiesel as of 2015. The biomass used to produce biofuels varies by region. Maize is the major feedstock in the United States, while sugarcane dominates in Brazil. In the European Union, where biodiesel is more common than bioethanol, rapeseed oil and palm oil are the main feedstocks. China, although it produces comparatively much less biofuel, uses mostly corn and wheat. In many countries, biofuels are either subsidized or mandated to be included in fuel mixtures.
There are many other sources of bioenergy that are more niche, or not yet viable at large scales. For instance, bioethanol could be produced from the cellulosic parts of crops, rather than only the seed as is common today. Sweet sorghum may be a promising alternative source of bioethanol, due to its tolerance of a wide range of climates. Cow dung can be converted into methane. There is also a great deal of research involving algal fuel, which is attractive because algae is a non-food resource, grows around 20 times faster than most food crops, and can be grown almost anywhere.
Geothermal energy
Geothermal energy is thermal energy (heat) extracted from the Earth's crust. It originates from several different sources, of which the most significant is slow radioactive decay of minerals contained in the Earth's interior, as well as some leftover heat from the formation of the Earth. Some of the heat is generated near the Earth's surface in the crust, but some also flows from deep within the Earth from the mantle and core. Geothermal energy extraction is viable mostly in countries located on tectonic plate edges, where the Earth's hot mantle is more exposed. As of 2023, the United States has by far the most geothermal capacity (2.7 GW, or less than 0.2% of the country's total energy capacity), followed by Indonesia and the Philippines. Global capacity in 2022 was 15 GW.
Geothermal energy can be used either directly to heat homes, as is common in Iceland, where almost all energy is renewable, or to generate electricity. At smaller scales, geothermal power can be generated with geothermal heat pumps, which can extract heat from the relatively low ground temperatures found at shallow depths of a few meters. Electricity generation requires large plants and much higher ground temperatures. In some countries, electricity produced from geothermal energy accounts for a large portion of the total, such as Kenya (43%) and Indonesia (5%).
Technical advances may eventually make geothermal power more widely available. For example, enhanced geothermal systems involve drilling deep into the Earth, breaking apart hot rocks and extracting the heat using water. In theory, this type of geothermal energy extraction could be done anywhere on Earth.
Emerging technologies
There are also other renewable energy technologies that are still under development, including enhanced geothermal systems, concentrated solar power, cellulosic ethanol, and marine energy. These technologies are not yet widely demonstrated or have limited commercialization. Some may have potential comparable to other renewable energy technologies, but still depend on further breakthroughs from research, development and engineering.
Enhanced geothermal systems
Enhanced geothermal systems (EGS) are a new type of geothermal power which does not require natural hot water reservoirs or steam to generate power. Most of the underground heat within drilling reach is trapped in solid rocks, not in water. EGS technologies use hydraulic fracturing to break apart these rocks and release the heat they contain, which is then harvested by pumping water into the ground. The process is sometimes known as "hot dry rock" (HDR). Unlike conventional geothermal energy extraction, EGS may be feasible anywhere in the world, depending on the cost of drilling. EGS projects have so far primarily been limited to demonstration plants, as the technology is capital-intensive due to the high cost of drilling.
Marine energy
Marine energy (also sometimes referred to as ocean energy) is the energy carried by ocean waves, tides, salinity, and ocean temperature differences. Technologies to harness the energy of moving water include wave power, marine current power, and tidal power. Reverse electrodialysis (RED) is a technology for generating electricity by mixing fresh water and salty sea water in large power cells. Most marine energy harvesting technologies are still at low technology readiness levels and not used at large scales. Tidal energy is generally considered the most mature, but has not seen wide deployment. The world's largest tidal power station is on Sihwa Lake, South Korea, which produces around 550 gigawatt-hours of electricity per year.
Earth infrared thermal radiation
Earth emits roughly 10¹⁷ W of infrared thermal radiation that flows toward cold outer space. Solar energy hits the surface and atmosphere of the Earth and produces heat. Using various theorized devices like emissive energy harvesters (EEH) or thermoradiative diodes, this energy flow could be converted into electricity. In theory, this technology could be used at night.
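The order of magnitude of that figure can be checked with a back-of-envelope calculation: multiplying Earth's surface area by an approximate mean outgoing longwave flux of about 240 W/m² gives a flow on the order of 10¹⁷ W. The Python sketch below is only a sanity check on the number, not a statement about any harvesting device.

```python
import math

# Back-of-envelope check of Earth's outgoing infrared power.
# Approximate values; this is an order-of-magnitude estimate only.

EARTH_RADIUS_M = 6.371e6          # mean Earth radius
OUTGOING_LONGWAVE_W_M2 = 240.0    # approximate mean outgoing longwave flux

surface_area = 4 * math.pi * EARTH_RADIUS_M ** 2        # ~5.1e14 m^2
total_infrared_power = surface_area * OUTGOING_LONGWAVE_W_M2

print(f"{total_infrared_power:.2e} W")  # ~1.2e17 W, i.e. on the order of 10^17 W
```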
Others
Algae fuels
Producing liquid fuels from oil-rich (fat-rich) varieties of algae is an ongoing research topic. Various microalgae grown in open or closed systems are being tried including some systems that can be set up in brownfield and desert lands.
Space-based solar power
There have been numerous proposals for space-based solar power, in which very large satellites with photovoltaic panels would be equipped with microwave transmitters to beam power back to terrestrial receivers. A 2024 study by the NASA Office of Science and Technology Policy examined the concept and concluded that with current and near-future technologies it would be economically uncompetitive.
Water vapor
Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative air humidity over 60%.
Nuclear energy
Breeder reactors could, in principle, depending on the fuel cycle employed, extract almost all of the energy contained in uranium or thorium, decreasing fuel requirements by a factor of 100 compared to widely used once-through light water reactors, which extract less than 1% of the energy in the actinide metal (uranium or thorium) mined from the earth. The high fuel-efficiency of breeder reactors could greatly reduce concerns about fuel supply, energy used in mining, and storage of radioactive waste. With seawater uranium extraction (currently too expensive to be economical), there is enough fuel for breeder reactors to satisfy the world's energy needs for 5 billion years at 1983's total energy consumption rate, thus making nuclear energy effectively a renewable energy. In addition to seawater the average crustal granite rocks contain significant quantities of uranium and thorium with which breeder reactors can supply abundant energy for the remaining lifespan of the sun on the main sequence of stellar evolution.
Artificial photosynthesis
Artificial photosynthesis uses techniques including nanotechnology to store solar electromagnetic energy in chemical bonds by splitting water to produce hydrogen and then using carbon dioxide to make methanol. Researchers in this field have strived to design molecular mimics of photosynthesis that use a wider region of the solar spectrum, employ catalytic systems made from abundant, inexpensive materials that are robust, readily repaired, non-toxic, and stable in a variety of environmental conditions, and that perform more efficiently, allowing a greater proportion of photon energy to end up in the storage compounds, i.e., carbohydrates (rather than building and sustaining living cells). However, the research faces hurdles: Sun Catalytix, an MIT spin-off, stopped scaling up its prototype fuel cell in 2012 because it offered few savings over other ways to make hydrogen from sunlight.
Market and industry trends
Most new renewables are solar, followed by wind, then hydro, then bioenergy. Investment in renewables, especially solar, tends to be more effective in creating jobs than coal, gas or oil. Worldwide, renewables employ about 12 million people as of 2020, with solar PV being the technology employing the most at almost 4 million. However, as of February 2024, the supply of workers for the solar energy sector lags greatly behind demand, as universities worldwide still train more people for fossil fuel industries than for renewable energy industries.
In 2021, China accounted for almost half of the global increase in renewable electricity.
There are 3,146 gigawatts of renewable capacity installed across 135 countries, while 156 countries have laws regulating the renewable energy sector.
Globally in 2020 there are over 10 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer. The clean energy sectors added about 4.7 million jobs globally between 2019 and 2022, totaling 35 million jobs by 2022.
Usage by sector or application
Some studies say that a global transition to 100% renewable energy across all sectors – power, heat, transport and industry – is feasible and economically viable.
One of the efforts to decarbonize transportation is the increased use of electric vehicles (EVs). Despite that and the use of biofuels, such as biojet, less than 4% of transport energy is from renewables. Occasionally hydrogen fuel cells are used for heavy transport. Meanwhile, in the future electrofuels may also play a greater role in decarbonizing hard-to-abate sectors like aviation and maritime shipping.
Solar water heating makes an important contribution to renewable heat in many countries, most notably in China, which now has 70% of the global total (180 GWth). Most of these systems are installed on multi-family apartment buildings and meet a portion of the hot water needs of an estimated 50–60 million households in China. Worldwide, total installed solar water heating systems meet a portion of the water heating needs of over 70 million households.
Heat pumps provide both heating and cooling, and also flatten the electric demand curve and are thus an increasing priority. Renewable thermal energy is also growing rapidly. About 10% of heating and cooling energy is from renewables.
Cost comparison
The International Renewable Energy Agency (IRENA) stated that ~86% (187 GW) of renewable capacity added in 2022 had lower costs than electricity generated from fossil fuels. IRENA also stated that capacity added since 2000 reduced electricity bills in 2022 by at least $520 billion, and that in non-OECD countries, the lifetime savings of 2022 capacity additions will reduce costs by up to $580 billion.
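Such cost comparisons are usually expressed as a levelised cost of electricity (LCOE): lifetime costs discounted to present value, divided by lifetime generation discounted the same way. The Python sketch below is a simplified illustration with assumed inputs (capital cost, operating cost, output, lifetime, and discount rate); it is not a reproduction of IRENA's methodology.

```python
# Simplified levelised cost of electricity (LCOE):
#   LCOE = sum_t (I_t + M_t) / (1 + r)^t   /   sum_t E_t / (1 + r)^t
# where I_t is investment, M_t operating cost, E_t energy generated in year t,
# and r the discount rate. All inputs below are illustrative assumptions.

def lcoe(capex: float, annual_opex: float, annual_energy_mwh: float,
         lifetime_years: int, discount_rate: float) -> float:
    """Return a simplified LCOE in currency units per MWh."""
    discounted_costs = capex   # all investment assumed to occur in year 0
    discounted_energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** year
        discounted_costs += annual_opex / factor
        discounted_energy += annual_energy_mwh / factor
    return discounted_costs / discounted_energy

# Hypothetical 100 MW solar farm: $80M capex, $1M/year opex,
# 175,000 MWh/year, 25-year life, 6% discount rate.
print(f"${lcoe(80e6, 1e6, 175_000, 25, 0.06):.0f}/MWh")  # ~$41/MWh
```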
Growth of renewables
A recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages from GHG emissions that result in climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.
In the decade of 2010–2019, worldwide investment in renewable energy capacity excluding large hydropower amounted to US$2.7 trillion, of which the top countries China contributed US$818 billion, the United States contributed US$392.3 billion, Japan contributed US$210.9 billion, Germany contributed US$183.4 billion, and the United Kingdom contributed US$126.5 billion. This was an increase of over three and possibly four times the equivalent amount invested in the decade of 2000–2009 (no data is available for 2000–2003).
As of 2022, an estimated 28% of the world's electricity was generated by renewables. This is up from 19% in 1990.
Future projections
A December 2022 report by the IEA forecasts that over 2022–2027, renewables will grow by almost 2,400 GW in its main forecast, equal to the entire installed power capacity of China in 2021. This is an 85% acceleration from the previous five years, and almost 30% higher than what the IEA forecast in its 2021 report, its largest ever upward revision. Renewables are set to account for over 90% of global electricity capacity expansion over the forecast period. To achieve net zero emissions by 2050, the IEA believes that 90% of global electricity generation will need to be produced from renewable sources.
In June 2022 IEA Executive Director Fatih Birol said that countries should invest more in renewables to "ease the pressure on consumers from high fossil fuel prices, make our energy systems more secure, and get the world on track to reach our climate goals."
China's five year plan to 2025 includes increasing direct heating by renewables such as geothermal and solar thermal.
REPowerEU, the EU plan to escape dependence on fossil Russian gas, is expected to call for much more green hydrogen.
After a transitional period, renewable energy production is expected to make up most of the world's energy production. In 2018, the risk management firm DNV GL forecast that the world's primary energy mix will be split equally between fossil and non-fossil sources by 2050.
Middle Eastern nations are also planning to reduce their reliance on fossil fuels. Planned green projects are expected to contribute 26% of the region's energy supply by 2050, achieving emission reductions equal to 1.1 Gt CO2/year.
Massive Renewable Energy Projects in the Middle East:
Mohammed bin Rashid Al Maktoum Solar Park in Dubai, UAE
Shuaibah Two (2) Solar Facility in Mecca Province, Saudi Arabia
NEOM Green Hydrogen Project in NEOM, Saudi Arabia
Gulf of Suez Wind Power Project in Suez, Egypt
Al-Ajban Solar Park in Abu Dhabi, UAE
Demand
In July 2014, WWF and the World Resources Institute convened a discussion among a number of major US companies who had declared their intention to increase their use of renewable energy. These discussions identified a number of "principles" which companies seeking greater access to renewable energy considered important market deliverables. These principles included choice (between suppliers and between products), cost competitiveness, longer term fixed price supplies, access to third-party financing vehicles, and collaboration.
UK statistics released in September 2020 noted that "the proportion of demand met from renewables varies from a low of 3.4 per cent (for transport, mainly from biofuels) to highs of over 20 per cent for 'other final users', which is largely the service and commercial sectors that consume relatively large quantities of electricity, and industry".
In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
Developing countries
In Kenya, the Olkaria V Geothermal Power Station is one of the largest in the world. The Grand Ethiopia Renaissance Dam project incorporates wind turbines. Once completed, Morocco's Ouarzazate Solar Power Station is projected to provide power to over a million people.
Policy
Policies to support renewable energy have been vital in their expansion. Where Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy.
The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and facilitate capacity building and technology transfer. IRENA was formed in 2009, with 75 countries signing the charter of IRENA. As of April 2019, IRENA has 160 member states. Then-United Nations Secretary-General Ban Ki-moon said that renewable energy can lift the poorest nations to new levels of prosperity, and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.
The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies. In 2017, a total of 121 countries adopted some form of renewable energy policy. National targets that year existed in 176 countries. In addition, there is also a wide range of policies at the state/provincial, and local levels. Some public utilities help plan or install residential energy upgrades.
Many national, state and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies. Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy.
Global and national policies related to renewable energy can be divided based on sectors, such as agriculture, transport, buildings, and industry.
Climate neutrality (net zero emissions) by the year 2050 is the main goal of the European Green Deal. To reach this target of climate neutrality, the European Union aims to decarbonise its energy system and achieve net-zero greenhouse gas emissions by 2050.
Finance
The International Renewable Energy Agency's (IRENA) 2023 report on renewable energy finance highlights steady investment growth since 2018: USD 348 billion in 2020 (a 5.6% increase from 2019), USD 430 billion in 2021 (24% up from 2020), and USD 499 billion in 2022 (16% higher). This trend is driven by increasing recognition of renewable energy's role in mitigating climate change and enhancing energy security, along with investor interest in alternatives to fossil fuels. Policies such as feed-in tariffs in China and Vietnam have significantly increased renewable adoption. Furthermore, from 2013 to 2022, installation costs for solar photovoltaic (PV), onshore wind, and offshore wind fell by 69%, 33%, and 45%, respectively, making renewables more cost-effective.
Between 2013 and 2022, the renewable energy sector underwent a significant realignment of investment priorities. Investment in solar and wind energy technologies markedly increased. In contrast, other renewable technologies such as hydropower (including pumped storage hydropower), biomass, biofuels, geothermal, and marine energy experienced a substantial decrease in financial investment. Notably, from 2017 to 2022, investment in these alternative renewable technologies declined by 45%, falling from USD 35 billion to USD 17 billion.
In 2023, the renewable energy sector experienced a significant surge in investments, particularly in solar and wind technologies, totaling approximately USD 200 billion—a 75% increase from the previous year. The increased investments in 2023 contributed between 1% and 4% to the GDP in key regions including the United States, China, the European Union, and India.
The energy sector receives investments of approximately USD 3 trillion each year, with USD 1.9 trillion directed towards clean energy technologies and infrastructure. To meet the targets set in the Net Zero Emissions (NZE) Scenario by 2035, this investment must increase to USD 5.3 trillion per year.
Debates
Nuclear power proposed as renewable energy
Geopolitics
The geopolitical impact of the growing use of renewable energy is a subject of ongoing debate and research. Many fossil-fuel producing countries, such as Qatar, Russia, Saudi Arabia and Norway, are currently able to exert diplomatic or geopolitical influence as a result of their oil wealth. Most of these countries are expected to be among the geopolitical "losers" of the energy transition, although some, like Norway, are also significant producers and exporters of renewable energy. Fossil fuels and the infrastructure to extract them may, in the long term, become stranded assets. It has been speculated that countries dependent on fossil fuel revenue may one day find it in their interests to quickly sell off their remaining fossil fuels.
Conversely, nations abundant in renewable resources, and the minerals required for renewables technology, are expected to gain influence. In particular, China has become the world's dominant manufacturer of the technology needed to produce or store renewable energy, especially solar panels, wind turbines, and lithium-ion batteries. Nations rich in solar and wind energy could become major energy exporters. Some may produce and export green hydrogen, although electricity is projected to be the dominant energy carrier in 2050, accounting for almost 50% of total energy consumption (up from 22% in 2015). Countries with large uninhabited areas such as Australia, China, and many African and Middle Eastern countries have a potential for huge installations of renewable energy. The production of renewable energy technologies requires rare-earth elements with new supply chains.
Countries with already weak governments that rely on fossil fuel revenue may face even higher political instability or popular unrest. Analysts consider Nigeria, Angola, Chad, Gabon, and Sudan, all countries with a history of military coups, to be at risk of instability due to dwindling oil income.
A study found that transition from fossil fuels to renewable energy systems reduces risks from mining, trade and political dependence because renewable energy systems don't need fuel – they depend on trade only for the acquisition of materials and components during construction.
In October 2021, European Commissioner for Climate Action Frans Timmermans suggested "the best answer" to the 2021 global energy crisis is "to reduce our reliance on fossil fuels." He said those blaming the European Green Deal were doing so "for perhaps ideological reasons or sometimes economic reasons in protecting their vested interests." Some critics blamed the European Union Emissions Trading System (EU ETS) and closure of nuclear plants for contributing to the energy crisis. European Commission President Ursula von der Leyen said that Europe is "too reliant" on natural gas and too dependent on natural gas imports. According to Von der Leyen, "The answer has to do with diversifying our suppliers ... and, crucially, with speeding up the transition to clean energy."
Metal and mineral extraction
The transition to renewable energy requires increased extraction of certain metals and minerals. Like all mining, this impacts the environment and can lead to environmental conflict. Wind power requires large amounts of copper and zinc, as well as smaller amounts of the rarer metal neodymium. Solar power is less resource-intensive, but still requires significant amounts of aluminum. The expansion of electrical grids requires both copper and aluminum. Batteries, which are critical to enable storage of renewable energy, use large quantities of copper, nickel, aluminum and graphite. Demand for lithium is expected to grow 42-fold from 2020 to 2040. Demand for nickel, cobalt and graphite is expected to grow by a factor of about 20–25. For each of the most relevant minerals and metals, its mining is dominated by a single country: copper in Chile, nickel in Indonesia, rare earths in China, cobalt in the Democratic Republic of the Congo (DRC), and lithium in Australia. China dominates processing of all of these.
Recycling these metals after the devices they are embedded in are spent is essential to create a circular economy and ensure renewable energy is sustainable. By 2040, recycled copper, lithium, cobalt, and nickel from spent batteries could reduce combined primary supply requirements for these minerals by around 10%.
A controversial approach is deep sea mining. Minerals can be collected from new sources like polymetallic nodules lying on the seabed. This would damage local biodiversity, but proponents point out that biomass on resource-rich seabeds is much scarcer than in the mining regions on land, which are often found in vulnerable habitats like rainforests.
Due to co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in production of low-level radioactive waste. In several African countries, the green energy transition has created a mining boom, causing deforestation, and threatening already endangered species.
Conservation areas
Installations used to produce wind, solar and hydropower are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts. More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The researchers behind this finding emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions. The key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.
In 2020 scientists published a world map of areas that contain renewable energy materials as well as estimations of their overlaps with "Key Biodiversity Areas", "Remaining Wilderness" and "Protected Areas". The authors assessed that careful strategic planning is needed.
Recycling of solar panels
Solar panels are recycled to reduce electronic waste and create a source for materials that would otherwise need to be mined, but this industry is still small, and work is ongoing to improve and scale up the process.
Society and culture
Public support
Solar power plants may compete with arable land, while on-shore wind farms often face opposition due to aesthetic concerns and noise. Such opponents are often described as NIMBYs ("not in my back yard"). Some environmentalists are concerned about fatal collisions of birds and bats with wind turbines. Although protests against new wind farms occasionally occur around the world, regional and national surveys generally find broad support for both solar and wind power.
Community-owned wind energy is sometimes proposed as a way to increase local support for wind farms. A 2011 UK Government document stated that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake." In the 2000s and early 2010s, many renewable projects in Germany, Sweden and Denmark were owned by local communities, particularly through cooperative structures. In the years since, more installations in Germany have been undertaken by large companies, but community ownership remains strong in Denmark.
History
Prior to the development of coal in the mid 19th century, nearly all energy used was renewable. The oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. The use of biomass for fire did not become commonplace until many hundreds of thousands of years later. Probably the second oldest usage of renewable energy is harnessing the wind in order to drive ships over water. This practice can be traced back some 7,000 years, to ships in the Persian Gulf and on the Nile. From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times. Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood, a traditional biomass.
In 1885, Werner Siemens commented on the significance of the discovery of the photovoltaic effect in the solid state.
Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905. Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".
The theory of peak oil was published in 1956. In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil, as well as for an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.
New government spending, regulation and policies helped the renewables industry weather the 2009 global financial crisis better than many other sectors. In 2022, renewables accounted for 30% of global electricity generation, up from 21% in 1985.
| Technology | Energy | null |
25794 | https://en.wikipedia.org/wiki/Revolver | Revolver | A revolver is a repeating handgun with at least one barrel and a revolving cylinder containing multiple chambers (each holding a single cartridge) for firing. Because most revolver models hold up to six cartridges before needing to be reloaded, revolvers are commonly called six shooters or sixguns. Due to their rotating cylinder mechanism, they may also be called wheel guns.
Before firing, cocking the revolver's hammer partially rotates the cylinder, indexing one of the cylinder chambers into alignment with the barrel, allowing the bullet to be fired through the bore. By sequentially rotating through each chamber, the revolver allows the user to fire multiple times until having to reload the gun, unlike older single-shot firearms that had to be reloaded after each shot.
The hammer cocking in nearly all revolvers is manually driven and can be cocked either by the user using the thumb to directly pull back the hammer (as in single-action), or via internal linkage relaying the force of the trigger-pull (as in double-action), or both (as in double-action/single-action).
Some rare revolver models can utilize the blowback of the preceding shot to automatically cock the hammer and index the next chamber, although these self-loading revolvers (known as automatic revolvers, despite technically being semi-automatic) never gained any widespread usage.
Though the majority of weapons using a revolver mechanism are handguns, other firearms may also have a revolver action. These include some models of rifles, shotguns, grenade launchers, and autocannons. Revolver weapons differ from Gatling-style rotary weapons in that in a revolver only the chambers rotate, while in a rotary weapon there are multiple full firearm actions with their own barrels which rotate around a common ammunition feed.
Famous revolver models include the Colt 1851 Navy Revolver, the Webley, the Colt Single Action Army, the Colt Official Police, Smith & Wesson Model 10, the Smith & Wesson Model 29 of Dirty Harry fame, the Nagant M1895, and the Colt Python.
Although largely surpassed in convenience and ammunition capacity by semi-automatic pistols, revolvers still remain popular as back-up and off-duty handguns among American law enforcement officers and security guards and are still common in the American private sector as defensive, sporting, and hunting firearms.
History
In the development of firearms, an important limiting factor was the time required to reload the weapon after it was fired. While the user was reloading, the weapon was useless, allowing an adversary to attack the user. Several approaches to the problem of increasing the rate of fire were developed, the earliest involving multi-barrelled weapons which allowed two or more shots without reloading. Later weapons featured multiple barrels revolving along a single axis.
A matchlock revolver with a single barrel and four chambers held at the Tower of London is believed to have been invented some time in the 15th century. A revolving three-barrelled matchlock pistol in Venice is dated from at least 1548. During the late 16th century in China, Zhao Shi-zhen invented the Xun Lei Chong, a five-barreled musket revolver spear. Around the same time, the earliest examples of the modern revolver were made in Germany. These weapons featured a single barrel with a revolving cylinder holding the powder and ball. They would soon be made by many European gun-makers, in numerous designs and configurations. However, these weapons were complicated, difficult to use and prohibitively expensive to make, and thus not widely distributed.
In the early 19th century, multiple-barrel handguns called "pepper-boxes" were popular. Originally they were muzzleloaders, but in 1837, the Belgian gunsmith Mariette invented a hammerless pepperbox with a ring trigger and turn-off barrels that could be unscrewed.
In 1836, American Samuel Colt patented a popular revolver which led to the widespread use of the revolver. According to Colt, he came up with the idea for the revolver while at sea, inspired by the capstan, which had a ratchet and pawl mechanism on it, a version of which was used in his guns to rotate the cylinder by cocking the hammer. This provided a reliable and repeatable way to index each round and did away with the need to manually rotate the cylinder. Revolvers proliferated largely due to Colt's ability as a salesman, but his influence spread in other ways as well. The build quality of his company's guns became famous, and its armories in America and England trained several seminal generations of toolmakers and other machinists, who had great influence in other manufacturing efforts of the next half century.
Early revolvers were caplock muzzleloaders: the user had to pour black powder into each chamber, ram down a bullet on top of it, then place a percussion cap on the nipple at the rear of each chamber, where the hammer would fall on it and ignite the powder charge. This was similar to loading a traditional single-shot muzzle-loading pistol, except that the powder and shot could be loaded directly into the front of the cylinder rather than having to be loaded down the whole length of the barrel. Importantly, this allowed the barrel itself to be rifled, since the user was not required to force the tight-fitting bullet down the barrel in order to load it (a traditional muzzle-loading pistol had a smoothbore barrel and the shot was relatively loose-fitting, which allowed easy loading, but was much less accurate). After firing a shot, the user would raise their pistol vertically while cocking the hammer back for their next shot, so the fragments of the burst percussion cap would fall clear of the weapon and not jam the mechanism. Some of the most popular cap-and-ball revolvers were the Colt Model 1851 "Navy" model, 1860 "Army" model, and Colt Pocket Percussion Revolvers, all of which saw extensive use in the American Civil War. Although American revolvers were the most common, European arms makers were making numerous revolvers by that time as well, many of which found their way into the hands of the American forces. These included the single-action Lefaucheux and LeMat revolvers, as well as the Beaumont–Adams and Tranter revolvers—early double-action weapons in spite of being muzzle-loaders.
In 1854, Eugene Lefaucheux introduced the Lefaucheux Model 1854, the first revolver to use self-contained metallic cartridges rather than loose powder, pistol ball, and percussion caps. It is a single-action, pinfire revolver holding six rounds.
On November 17, 1856, Horace Smith and Daniel B. Wesson signed an agreement for the exclusive use of the Rollin White Patent at a rate of 25 cents for every revolver. Smith & Wesson began production late in 1857, and enjoyed years of exclusive production of rear-loading cartridge revolvers in America due to their association with Rollin White, who held the patent and vigorously defended it against any perceived infringement by other manufacturers (much as Colt had done with his original patent on the revolver). Although White held the patent, other manufacturers were able to sell firearms using the design, provided they were willing to pay royalties.
White's patent expired in April 1869, and a third extension was refused. Other gun-makers were then allowed to produce their own weapons using the rear-loading method without having to pay a royalty on each gun sold. Early guns were often conversions of earlier cap-and-ball revolvers, modified to accept metallic cartridges loaded from the rear, but later models, such as the Colt Model 1872 "open top" and the Smith & Wesson Model 3, were designed from the start as cartridge revolvers.
In 1873, Colt introduced the famous Model 1873, also known as the Single Action Army, the "Colt .45" (not to be confused with Colt-made models of the M1911 semi-automatic), and "the Peacemaker", one of the most famous handguns ever made. This popular design, which was a culmination of many of the advances introduced in earlier weapons, fired 6 metallic cartridges and was offered in over 30 different calibers and various barrel lengths. It is still in production, along with numerous clones and lookalikes, and its overall appearance has remained the same since 1873. Although originally made for the United States Army, the Model 1873 was widely distributed and popular with civilians, ranchers, lawmen, and outlaws alike. Its design has influenced countless other revolvers. Colt has discontinued its production twice, but resumed production due to popular demand.
In the U.S., the single-action revolver remained more popular than the double-action revolver until the late 19th century. In Europe, however, arms makers were quick to adopt the double-action trigger. While the U.S. was producing weapons like the Model 1873, European manufacturers were building double-action models like the French MAS Modèle 1873 and the later British Enfield Mk I and II revolvers. (Britain relied on cartridge conversions of the earlier Beaumont–Adams double-action prior to this.) Colt's first attempt at a double action revolver to compete with European manufacturers was the Colt Model 1877, which earned lasting notoriety for its complex, expensive, and fragile trigger mechanism, which in addition to failing frequently, also had a heavy trigger pull.
In 1889, Colt introduced the Model 1889, the first double action revolver with a "swing-out" cylinder, as opposed to a "top-break" or "side-loading" cylinder. Swing-out cylinders quickly caught on, because they combined the best features of earlier designs. Top-break actions had the ability to eject all empty shells simultaneously and exposed all chambers for easy reloading, but having the frame hinged into two halves weakened the gun and negatively affected accuracy due to the lack of rigidity. "Side-loaders", like the earlier Colt Model 1871 and 1873, had a rigid frame, but required the user to eject and load one chamber at a time as they rotated the cylinder to line each chamber up with the side-mounted loading gate. Smith & Wesson followed seven years later with the Hand Ejector, Model 1896 in .32 S&W Long caliber, followed by the very similar, yet improved, Model 1899 (later known as the Model 10), which introduced the new .38 Special cartridge. The Model 10 went on to become the best selling handgun of the 20th century, at 6,000,000 units, and the .38 Special is still the most popular chambering for revolvers in the world. These new guns were an improvement over the Colt 1889 design since they incorporated a combined center-pin and ejector rod to lock the cylinder in position, whereas the Colt 1889 did not use a center pin and the cylinder was prone to move out of alignment.
Revolvers have remained popular in many areas, although for law enforcement and military personnel, they have largely been supplanted by magazine-fed semi-automatic pistols, such as the Beretta M9 and the SIG Sauer M17, especially in circumstances where faster reload times and higher cartridge capacity are important.
Patents
In 1815 (sometimes incorrectly dated as 1825), the French inventor Julien Leroy patented a flintlock and percussion revolving rifle with a mechanically indexed cylinder and a priming magazine.
Elisha Collier of Boston, Massachusetts, patented a flintlock revolver in Britain in 1818, and significant numbers were being produced in London by 1822. The origination of this invention is in doubt, as similar designs were patented in the same year by Artemus Wheeler in the United States, and by Cornelius Coolidge in France. Samuel Colt submitted a British patent for his revolver in 1835 and a U.S. patent (number 138) on February 25, 1836, for a Revolving gun, and made the first production model on March 5 of that year.
Another revolver patent was issued to Samuel Colt on August 29, 1839. The February 25, 1836, patent was reissued under the title Revolving gun on October 24, 1848. This was followed by further patents on September 3, 1850, and September 10, 1850, each for a Revolver. In 1855, Rollin White patented the bored-through cylinder under the title Improvement in revolving fire-arms. In 1856, Horace Smith and Daniel Wesson formed a partnership (S&W), then developed and manufactured a revolver chambered for a self-contained metallic cartridge. In 1993, a patent was issued to Roger C. Field for an economical device for minimizing the flash gap of a revolver between the barrel and the cylinder.
Design
A revolver has several firing chambers arranged in a circle in a cylindrical block; one at a time, these chambers are brought into alignment with the firing mechanism and barrel. In contrast, other repeating firearms, such as bolt-action, lever-action, pump-action, and semi-automatic, have a single firing chamber and a mechanism to load and extract cartridges into it.
A single-action revolver requires the hammer to be pulled back by hand before each shot, which also revolves the cylinder. This leaves the trigger with one "single action" to perform—releasing the hammer to fire the shot. In contrast, with a self-cocking, or double-action, revolver, one long squeeze of the trigger pulls back the hammer and revolves the cylinder, then finally fires the shot, thus requiring more force and distance to pull the trigger than in a single-action revolver. They can generally be fired faster than a single-action, but with reduced accuracy in the hands of most shooters.
Most modern revolvers are "traditional double-action", which means they may operate either in single-action or self-cocking mode. The accepted meaning of "double-action" has come to be the same as "self-cocking", so modern revolvers that cannot be pre-cocked are called "double-action-only". These are intended for concealed carry, because the hammer of a traditional design is prone to snagging on clothes when drawn. Most revolvers do not come with accessory rails, which are used for mounting lights and lasers, except for the Smith & Wesson M&P R8 (.357 Magnum), Smith & Wesson Model 325 Thunder Ranch (.45 ACP), and all versions of the Chiappa Rhino (.357 Magnum, 9×19mm, .40 S&W, or 9×21mm) except for the 2" and 3" models, respectively. However, certain revolvers, such as the Taurus Judge and Charter Arms revolvers, can be fitted with accessory rails.
Revolvers most commonly have six chambers, hence the common names of "six-gun" or "six-shooter". However, some revolvers have more or fewer than six chambers, depending on the size of the gun and the caliber of the cartridge. Each chamber has to be reloaded manually, which makes reloading a revolver a much slower procedure than reloading a semi-automatic pistol.
Compared to autoloading handguns, a revolver is often much simpler to operate and may have greater reliability. For example, should a semiautomatic pistol fail to fire, clearing the chamber requires manually cycling the action to remove the errant round, as cycling the action normally depends on the energy of a cartridge firing. With a revolver, this is not necessary as none of the energy for cycling the revolver comes from the firing of the cartridge, but is instead supplied by the user either through cocking the hammer or, in a double-action design, by just squeezing the trigger. Another significant advantage of revolvers is superior ergonomics, particularly for users with small hands. A revolver's grip does not hold a magazine, and it can be designed or customized much more than the grip of a typical semi-automatic. Partially because of these reasons, revolvers still hold significant market share as concealed carry and home-defense weapons.
A revolver can be kept loaded and ready to fire without fatiguing any springs and is not very dependent on lubrication for proper firing. Additionally, in the case of double-action-only revolvers there is no risk of accidental discharge from dropping alone, as the hammer is cocked by the trigger pull. However, the revolver's clockwork-like internal parts are relatively delicate and can become misaligned after a severe impact, and its revolving cylinder can become jammed by excessive dirt or debris.
Over the long period of development of the revolver, many calibers have been used. Some of these have proved more durable during periods of standardization and some have entered general public awareness. Among these are the .22 Long Rifle, a caliber popular for target shooting and teaching novice shooters; .38 Special and .357 Magnum, known for police use; the .44 Magnum, famous from Clint Eastwood's Dirty Harry films; and the .45 Colt, used in the Colt revolver of the Wild West. Introduced in 2003, the Smith & Wesson Model 500 is one of the most powerful revolvers, utilizing the .500 S&W Magnum cartridge.
Because the rounds in a revolver are headspaced on the rim, some revolvers are capable of chambering more than one type of ammunition. Revolvers chambered in .44 Magnum will also chamber .44 Special and .44 Russian, and revolvers in .357 Magnum will safely chamber .38 Special, .38 Long Colt, and .38 Short Colt. While revolvers in .22 WMR can physically chamber .22 Long Rifle, .22 Long, and .22 Short, it is not safe to fire them, due to differences in cartridge pressures, the fact that .22 WMR does not use a "heeled" bullet, and differences in rim diameter that can allow high-pressure gases to escape behind the cartridge and seriously injure the user. However, some .22 revolvers come with interchangeable cylinders so that .22 Long Rifle can be shot from a .22 WMR revolver. In 1996, the Medusa Model 47 was made with the ability to chamber 25 different cartridges with bullet diameters between .355" and .357".
Revolver technology is also present in other weapons used by the U.S. military. Some autocannons and grenade launchers use mechanisms similar to revolvers, and some riot shotguns use spring-loaded cylinders holding up to 12 rounds. In addition to serving as backup guns, revolvers still fill the specialized role as a shield gun; law enforcement personnel using a "bulletproof" gun shield sometimes opt for a revolver instead of a self-loading pistol, because the slide of a pistol may strike the front of the shield when fired. Revolvers do not suffer from this disadvantage. A second revolver may be secured behind the shield to provide a quick means of continuity of fire. Many police also still use revolvers as their duty weapon due to their relative mechanical simplicity and ease of use.
In 2010, major revolver manufacturers started producing polymer frame revolvers like the Ruger LCR, Smith & Wesson Bodyguard 38, and Taurus Protector Polymer. The new design incorporates polymer technology that lowers weight significantly, helps absorb recoil, and is strong enough to handle .38 Special +P and .357 Magnum loads. The polymer is only used on the lower frame and is joined to an upper frame, barrel, and cylinder that are made of metal alloy. Polymer technology is considered one of the major advancements in revolver history because the frame was previously always metal alloy and mostly a one-piece design.
Another 21st century development in revolver technology is the Chiappa Rhino, a revolver introduced by Italian manufacturer Chiappa in 2009, and first sold in the U.S. in 2010. The Rhino, built with the U.S. concealed carry market in mind, is designed so that the bullet fires from the bottom chamber of the cylinder instead of the top chamber, as is typical in revolvers. This is intended to reduce muzzle flip, allowing for faster and more accurate repeat shots. In addition, the cylinder cross-section is hexagonal instead of circular, further reducing the weapon's profile.
Loading and unloading
Front-loading cylinder
The first revolvers were front loading (also referred to as muzzleloading), and were similar to muskets in that the powder and bullet were loaded separately. These were caplocks or "cap and ball" revolvers, because the caplock method of priming was the first to be compact enough to make a practical revolver feasible. When loading, each chamber in the cylinder was rotated out of line with the barrel, and charged from the front with loose powder and an oversized bullet. Next, the chamber was aligned using the ramming lever underneath the barrel. Pulling the lever would drive a rammer into the chamber, pushing the ball securely in place. Finally, the user would place percussion caps on the nipples on the rear face of the cylinder.
After each shot, a user was advised to raise the revolver vertically while cocking back the hammer so as to allow the fragments of the spent percussion cap to fall out safely. Otherwise, the fragments could fall into the revolver's mechanism and jam it. Caplock revolvers were vulnerable to "chain fires", wherein hot gas from a shot ignited the powder in the other chambers. This could be prevented by sealing the chambers with cotton, wax, or grease. A chain fire could send shots into the supporting hand, which on a rifle is held forward of the cylinder; this is one of the main reasons why revolver rifles were uncommon. By the time metallic cartridges became common, more effective mechanisms for a repeating rifle, such as lever-action, had been developed.
Loading a cylinder in this manner was a slow and awkward process and generally could not be done in the midst of battle. Some soldiers avoided this by carrying multiple revolvers in the field. Another solution was to use a revolver with a detachable cylinder design. These revolvers allowed the shooter to quickly remove a cylinder and replace it with a full one.
Fixed cylinder designs
In many of the first generation of cartridge revolvers (especially those that were converted after manufacture), the base pin on which the cylinder revolved was removed, and the cylinder taken out from the revolver for loading. Most revolvers using this method of loading are single-action revolvers, although Iver Johnson produced double-action models with removable cylinders. The removable-cylinder design is employed in some modern "micro-revolvers" (usually chambered in .22 rimfire and small enough to fit in the palm of the hand) to simplify their design.
Later single-action revolver models with a fixed cylinder used a loading gate at the rear of the cylinder that allowed insertion of one cartridge at a time for loading, while a rod under the barrel could be pressed rearward to eject a fired case.
The loading gate on the original Colt designs (and on nearly all single-action revolvers since, such as the famous Colt Single Action Army) is on the right side, which was done to facilitate loading while on horseback; with the revolver held in the left hand with the reins of the horse, the cartridges can be ejected and loaded with the right hand.
Because the cylinders in these types of revolvers are firmly attached at the front and rear of the frame, and the frame is typically full thickness all the way around, fixed cylinder revolvers are inherently strong designs. Accordingly, many modern large caliber hunting revolvers tend to be based on the fixed cylinder design. Fixed cylinder revolvers can fire the strongest and most powerful cartridges, but at the price of being the slowest to load or unload since they cannot use speedloaders or moon clips to load multiple cartridges at once, as only one chamber is exposed at a time to the loading gate.
Top-break cylinder
In a top-break revolver, the frame is hinged at the bottom front of the cylinder. Releasing the lock and pushing the barrel down exposes the rear face of the cylinder. In most top-break revolvers, this act also operates an extractor that pushes the cartridges in the chambers back far enough that they will fall free, or can be removed easily. Fresh rounds are then inserted into the cylinder. The barrel and cylinder are then rotated back and locked in place, and the revolver is ready to fire.
Top-break revolvers are able to be loaded more rapidly than fixed frame revolvers, especially with the aid of a speedloader or moon clip. However, this design is much weaker and cannot handle high pressure rounds. While this design has become mostly obsolete, supplanted by the stronger yet equally convenient swing-out cylinder design, manufacturers still make reproductions of late 19th century designs for use in cowboy action shooting.
The first top-break revolver was patented in France and Britain at the end of December in 1858 by Devisme. The most commonly found top-break revolvers were manufactured by Smith & Wesson, Webley & Scott, Iver Johnson, Harrington & Richardson, Manhattan Fire Arms, Meriden Arms, and Forehand & Wadsworth.
Tip-up cylinder
The tip-up revolver was the first design to be used with metallic cartridges in the Smith & Wesson Model 1, on which the barrel pivoted upwards, hinged on the forward end of the topstrap. On the S&W tip-up revolvers, the barrel release catch is located on both sides of the frame in front of the trigger. Smith & Wesson discontinued it in the third series of the Smith & Wesson Model 1 1/2 but it was fairly widely used in Europe in the 19th century after a patent by Spirlet in 1870, which also included an ejector star.
Swing-out cylinder
The most modern method of loading and unloading a revolver is by means of the swing-out cylinder. The first swing-out cylinder revolver was patented in France and Britain at the end of December in 1858 by Devisme. The cylinder is mounted on a pivot that is parallel to the chambers, and the cylinder swings out and down (to the left in most cases). An extractor is fitted, operated by a rod projecting from the front of the cylinder assembly. When pressed, it will push all fired rounds free simultaneously (as in top-break models, the travel is designed to not completely extract longer, unfired rounds). The cylinder may then be loaded (individually or with the use of a speedloader), closed, and latched in place.
The pivoting part that supports the cylinder is called the crane; it is the weak point of swing-out cylinder designs. Using the method often portrayed in movies and television of flipping the cylinder open and closed with a flick of the wrist can actually cause the crane to bend over time, throwing the cylinder out of alignment with the barrel. Lack of alignment between chamber and barrel is dangerous, as it can impede the bullet's transition from chamber to barrel. This causes higher pressures in the chamber, bullet damage, and the potential for an explosion if the bullet becomes stuck.
The shock of firing can exert a great deal of stress on the crane, as in most designs the cylinder is only held closed at one point, the rear of the cylinder. Stronger designs, such as the Ruger Super Redhawk, use a lock in the crane as well as the lock at the rear of the cylinder. This latch provides a more secure bond between cylinder and frame, and allows the use of larger, more powerful cartridges. Swing-out cylinders are not as strong as fixed cylinders, and great care must be taken with the cylinder when loading, so as not to damage the crane.
Other designs
One unique design was designed by Merwin Hulbert in which the barrel and cylinder assembly were rotated 90° and pulled forward to eject shells from the cylinder.
Action
Single-action
In a single-action revolver, the hammer is manually cocked, usually with the thumb of the firing or supporting hand. This action advances the cylinder to the next round and locks the cylinder in place, with the chamber aligned with the barrel. When the trigger is pulled, it releases the hammer, which fires the round in the chamber. To fire again, the hammer must be manually cocked again. This is called "single-action" because the trigger only performs a single action, of releasing the hammer. Because only a single action is performed and trigger pull is lightened, firing a revolver in this way allows most shooters to achieve greater accuracy. Additionally, the need to cock the hammer manually acts as a safety. With some revolvers, since the hammer rests on the primer or nipple, accidental discharge from impact is more likely if all 6 chambers are loaded. The Colt Paterson, Colt Walker, Colt Dragoon, and Colt Single Action Army of the American frontier era are examples of this system.
Double-action
In double-action (DA), the stroke of the trigger pull generates two actions:
The hammer is pulled back to the cocked position, which also indexes the cylinder to the next round.
The hammer is released to strike the firing pin.
Thus, DA means that a cocking action separate from the trigger pull is unnecessary; every trigger pull will result in a complete cycle. This allows uncocked carry, while also allowing draw-and-fire using only the trigger. However, this requires a longer and harder trigger stroke, though this drawback can also be viewed as a safety feature, as the gun is safer against accidental discharges from being dropped. A single mode of operation was also seen by the British Army in the Second World War as reducing training time, since revolvers were expected to be used for rapid fire at very close range.
Most double-action revolvers may be fired in two ways:
The first way is single-action; that is, exactly the same as a single-action revolver; the hammer is cocked with the thumb, which indexes the cylinder, and when the trigger is pulled, the hammer is released and the round is fired.
The second way is double-action, or from a hammer-down position. In this case, the trigger first cocks the hammer and revolves the cylinder, then trips the hammer at the rear of the trigger stroke, firing the round in the chamber.
Certain revolvers, called double-action-only (DAO) or, more correctly but less commonly, self-cocking, lack the latch that enables the hammer to be locked to the rear, and thus can only be fired in the double-action mode. With no way to lock the hammer back, DAO designs tend to have bobbed or spurless hammers, and may even have the hammer completely covered by the revolver's frame (i.e., shrouded or hooded). These are generally intended for concealed carrying, as a hammer spur could snag when the revolver is drawn from clothing, but this design may result in reduction in accuracy in aimed fire.
DA and DAO revolvers were the standard-issue sidearm of many police departments for many decades. Only in the 1980s and 1990s did the semiautomatic pistol gain popularity after the advent of safe actions. The reasons for these choices are the modes of carry and use. Double-action is preferred in high-stress situations because it allows a mode of carry in which one only has to draw and pull the trigger—no safety catch release nor separate cocking stroke is required.
Other
In the era of cap-and-ball revolvers in the mid 19th century, two revolver models, the English Tranter and the American Savage "Figure Eight", used a method whereby the hammer was cocked by the shooter's middle finger pulling on a second trigger below the main trigger.
Iver Johnson made an unusual model from 1940 to 1947 called the Trigger Cocking Double Action. If the hammer was down, pulling the trigger would cock the hammer. If the trigger was pulled with the hammer cocked, it would then fire. This meant that to fire the revolver from a hammer down state, the trigger must be pulled twice.
This is similar to the action of the "système Triple action" patented by Eugène Lefaucheux in French patent number 55784 on September 27, 1862, for pinfire revolvers produced in the 1860s in France. The Lefaucheux Triple Action can be used in single action by cocking the hammer with the thumb, in double action with a long pull on the trigger, or by cocking the hammer with a pull on the trigger and then firing in single action.
3D printed revolver
The Zig Zag revolver is a 3D-printed .38 caliber revolver released to the public in May 2014. It was created by Yoshitomo Imura, a Japanese citizen from Kawasaki, using a 3D printer and plastic filament.
Use with suppressors
As a general rule, revolvers cannot be effective with a sound suppressor ("silencer"), as there is usually a small gap between the revolving cylinder and the barrel which a bullet must traverse, or jump, when fired. From this opening, a rather loud report is produced. A suppressor can only suppress noise coming from the muzzle.
A suppressible revolver design does exist in the Nagant M1895, a Belgian-designed revolver used by Imperial Russia and later the Soviet Union from 1895 through World War II. This revolver uses a unique cartridge whose case extends beyond the tip of the bullet, and a cylinder that moves forward to place the end of the cartridge inside the barrel when ready to fire. This bridges the gap between the cylinder and the barrel, and the case expands to seal the gap when fired. While the tiny gap between cylinder and barrel on most revolvers is insignificant to the internal ballistics, the seal of the Nagant is especially effective when used with a suppressor, and a number of suppressed Nagant revolvers have been used since its invention.
There is a modern revolver of Russian design, the OTs-38, which uses ammunition that incorporates the silencing mechanism into the cartridge case, making the gap between cylinder and barrel irrelevant as far as the suppression issue is concerned. The OTs-38 does need an unusually close and precise fit between the cylinder and barrel due to the shape of bullet in the special ammunition (Soviet SP-4), which was originally designed for use in a semi-automatic.
Additionally, the U.S. military experimented with designing a special version of the Smith & Wesson Model 29 for tunnel rats, called the Quiet Special Purpose Revolver, or QSPR, that uses special .40 caliber ammunition. It never entered official service.
Automatic revolvers
The term "automatic revolver" has two different meanings, the first being used in the late nineteenth and early twentieth centuries when "automatic" referred not to the operational mechanism of firing, but to extraction and ejection of spent casings. An "automatic revolver" in this context is one which extracts empty fired cases "automatically", such as upon breaking open the action, rather than requiring manual extraction of each case individually with a sliding rod or pin (as in the Colt Single Action Army design). This term was widely used in the advertising of the period as a way to distinguish such revolvers from the far more common rod-extraction types.
In the second sense, "automatic revolver" refers to the mechanism of firing rather than extraction. Double-action revolvers use a long trigger pull to cock the hammer, thus negating the need to manually cock the hammer between shots. The disadvantage of this is that the long, heavy pull cocking the hammer makes the double-action revolver much harder to shoot accurately than a single-action revolver (although cocking the hammer of a double-action reduces the length and weight of the trigger pull). A rare class of revolvers, called automatic for its firing design, attempts to overcome this restriction, giving the high speed of a double-action with the trigger effort of a single-action. The Webley-Fosbery Automatic Revolver is the most famous commercial example of this. It was recoil-operated, and the cylinder and barrel recoiled backwards to cock the hammer and revolve the cylinder. Cam grooves were milled on the outside of the cylinder to provide a means of advancing to the next chamber—half a turn as the cylinder moved back, and half a turn as it moved forward. .38 caliber versions held eight shots; .455 caliber versions held six. At the time, the few available automatic pistols were larger, less reliable, and more expensive. The automatic revolver was popular when it first came out, but was quickly superseded by the creation of reliable, inexpensive semi-automatic pistols.
In 1997, the Mateba company developed a type of recoil-operated automatic revolver, commercially named the Mateba Autorevolver, which uses the recoil energy to auto-rotate a normal revolver cylinder holding six or seven cartridges, depending on the model. The company has made several versions of its Autorevolver, including longer-barrelled and carbine variations, chambered in .357 Magnum, .44 Magnum and .454 Casull.
The Pancor Jackhammer is a combat shotgun based on a similar mechanism to an automatic revolver. It uses a blow-forward action to move the barrel forward (which unlocks it from the cylinder), rotate the cylinder, and cock the hammer.
Revolving long guns
Revolvers are not limited to handguns; as a longer barrelled arm is more useful in military applications than a sidearm, the idea was applied to both rifles and shotguns throughout the history of the revolver mechanism with mixed degrees of success.
Rifles
Revolving rifles were developed in an attempt to increase the rate of fire of rifles by combining them with the revolving firing mechanism that had been developed earlier for revolving pistols. Colt began experimenting with revolving rifles in the early 19th century, making them in a variety of calibers and barrel lengths. Colt revolving rifles were the first repeating rifles adopted by the U.S. military, issued to soldiers to improve their rate of fire. However, after firing six shots, the shooter had to take an excessive amount of time to reload. Also, on occasion, these Colt rifles discharged all their rounds at once, endangering the shooter. Even so, an early model was used in the Seminole Wars in 1838. During the Civil War, a LeMat Carbine was made based on the LeMat revolver.
Taurus and its Brazilian partner company, Rossi, manufacture a carbine variant of the Taurus Judge revolver known as the Taurus/Rossi Circuit Judge. It comes in the original combination chambering of .45 Long Colt and .410 bore, as well as a .44 Magnum chambering and a dual-cylinder .22 LR/.22 WMR model. The rifle has small blast shields attached to the cylinder to protect the shooter from hot gases escaping between the cylinder and barrel.
Shotguns
Colt briefly manufactured several revolving shotguns that were met with mixed success. The Colt Model 1839 Shotgun was manufactured between 1839 and 1841. Later, the Colt Model 1855 Shotgun, based on the Model 1855 revolving rifle, was manufactured between 1860 and 1863. Because of their low production numbers and age, they are among the rarest of all Colt firearms.
The Armsel Striker was a modern take on the revolving shotgun that held 10 rounds of 12-gauge ammunition in its cylinder. It was copied by Cobray as the Streetsweeper.
As noted, the original Taurus/Rossi Circuit Judge is chambered to use both .45 Long Colt cartridges and .410 bore shells. However, it utilizes a rifled rather than a smooth-bore barrel.
The MTs255 is a shotgun fed by a 5-round internal revolving cylinder. It is produced by TsKIB SOO, the Central Design and Research Bureau of Sporting and Hunting Arms. It is available in 12, 20, 28, and 32 gauge, and .410 bore.
Other weapons
The Hawk MM-1, Milkor MGL, RG-6, and RGP-40 are grenade launchers that use a revolver action. Because the cylinders are much more massive, they use a spring-wound mechanism to index the cylinder.
Revolver cannons use a motor-driven, revolver-like mechanism to fire medium caliber ammunition.
Six gun
A six gun or six-shooter is a revolver that holds six cartridges. The cylinder in a six gun is often called a "wheel", and the six gun is itself often called a "wheel gun". Although a "six gun" can refer to any six-chambered revolver, it is typically a reference to the Colt Single Action Army, or its modern look-alikes such as the Ruger Vaquero and Beretta Stampede.
Until the 1970s, older-design revolvers such as the Colt Single Action Army and Ruger Blackhawk lacked drop safeties (such as firing pin blocks, hammer blocks, or transfer bars) that prevent the firing pin from contacting the cartridge's primer unless the trigger is pulled. Safe carry therefore required either positioning the hammer over an empty chamber, reducing the available cartridges from six to five, or, on some models, positioning it between chambers on a pin or in a groove made for that purpose, which kept the full six rounds available. Either practice kept the uncocked hammer from resting directly on the primer of a cartridge. If a revolver is not carried in this manner, the hammer rests directly on a primer and unintentional firing may occur if the gun is dropped or the hammer is struck. Some holster makers provided a thick leather thong to place underneath the hammer that both allowed the carry of a gun fully loaded with all six rounds and secured the gun in the holster to help prevent its accidental loss.
Six guns are commonly used by cowboy action shooting enthusiasts in competition shooting and are designed to mimic the gunfights of the Old West, as well as for other applications such as general target shooting, hunting, and personal defense.
Notable brands and manufacturers
Robert Adams
Armscor
Astra
Charter Arms
Chiappa Firearms
Cimarron Firearms
Colt's Manufacturing Company
Fabrique Nationale de Herstal
Freedom Arms
Griswold and Gunnison
Harrington & Richardson
Iver Johnson
Janz (revolvers)
Kimber Manufacturing
Korth
Magnum Research
Manurhin
Mateba Arms
Meriden Firearms Co.
Merwin Hulbert
Nagant
North American Arms
Remington Arms
Rossi
Royal Small Arms Factory
Sturm, Ruger & Co.
Smith & Wesson
Taurus Firearms
United States Fire-Arms Manufacturing Company
A. Uberti, Srl.
William Tranter
Webley & Scott
Dan Wesson Firearms
| Technology | Firearms | null |
25809 | https://en.wikipedia.org/wiki/Riemann%20zeta%20function | Riemann zeta function | The Riemann zeta function or Euler–Riemann zeta function, denoted by the Greek letter ζ (zeta), is a mathematical function of a complex variable defined as ζ(s) = Σ_{n=1}^∞ 1/n^s for Re(s) > 1, and by its analytic continuation elsewhere.
The Riemann zeta function plays a pivotal role in analytic number theory and has applications in physics, probability theory, and applied statistics.
Leonhard Euler first introduced and studied the function over the reals in the first half of the eighteenth century. Bernhard Riemann's 1859 article "On the Number of Primes Less Than a Given Magnitude" extended the Euler definition to a complex variable, proved its meromorphic continuation and functional equation, and established a relation between its zeros and the distribution of prime numbers. This paper also contained the Riemann hypothesis, a conjecture about the distribution of complex zeros of the Riemann zeta function that many mathematicians consider the most important unsolved problem in pure mathematics.
The values of the Riemann zeta function at even positive integers were computed by Euler. The first of them, ζ(2) = π²/6, provides a solution to the Basel problem. In 1979 Roger Apéry proved the irrationality of ζ(3). The values at negative integer points, also found by Euler, are rational numbers and play an important role in the theory of modular forms. Many generalizations of the Riemann zeta function, such as Dirichlet series, Dirichlet L-functions and L-functions, are known.
Definition
The Riemann zeta function ζ(s) is a function of a complex variable s = σ + it, where σ and t are real numbers. (The notation s, σ, and t is used traditionally in the study of the zeta function, following Riemann.) When σ = Re(s) > 1, the function can be written as a converging summation or as an integral:
ζ(s) = Σ_{n=1}^∞ 1/n^s = (1/Γ(s)) ∫₀^∞ x^(s−1)/(e^x − 1) dx,
where
Γ(s) = ∫₀^∞ x^(s−1) e^(−x) dx
is the gamma function. The Riemann zeta function is defined for other complex values of s via analytic continuation of the function defined for σ > 1.
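For Re(s) > 1 the defining series can be summed directly. The short Python sketch below is an illustration only (the helper name and truncation level are my own choices, not part of any library): it truncates the sum and compares ζ(2) with π²/6. Convergence slows sharply as Re(s) approaches 1, which is one reason the continuation techniques described later matter in practice.

```python
from math import pi

def zeta_partial_sum(s, terms=100_000):
    """Approximate zeta(s) by summing the first `terms` terms of sum 1/n**s.

    Only sensible for Re(s) > 1, where the Dirichlet series converges; the
    truncation error is roughly terms**(1 - Re(s)) / (Re(s) - 1).
    """
    return sum(n ** -s for n in range(1, terms + 1))

if __name__ == "__main__":
    print(zeta_partial_sum(2))   # ~ 1.64492...
    print(pi ** 2 / 6)           # 1.644934... (the exact value of zeta(2))
```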
Leonhard Euler considered the above series in 1740 for positive integer values of s, and later Chebyshev extended the definition to Re(s) > 1.
The above series is a prototypical Dirichlet series that converges absolutely to an analytic function for s such that Re(s) > 1 and diverges for all other values of s. Riemann showed that the function defined by the series on the half-plane of convergence can be continued analytically to all complex values s ≠ 1. For s = 1, the series is the harmonic series which diverges to +∞, and
lim_{s→1} (s − 1) ζ(s) = 1.
Thus the Riemann zeta function is a meromorphic function on the whole complex plane, which is holomorphic everywhere except for a simple pole at s = 1 with residue 1.
Euler's product formula
In 1737, the connection between the zeta function and prime numbers was discovered by Euler, who proved the identity
Σ_{n=1}^∞ 1/n^s = Π_p 1/(1 − p^(−s)),
where, by definition, the left hand side is ζ(s) and the infinite product on the right hand side extends over all prime numbers p (such expressions are called Euler products).
Both sides of the Euler product formula converge for Re(s) > 1. The proof of Euler's identity uses only the formula for the geometric series and the fundamental theorem of arithmetic. Since the harmonic series, obtained when s = 1, diverges, Euler's formula (which becomes Π_p 1/(1 − p^(−1)) = ∞) implies that there are infinitely many primes. Since the logarithm of 1/(1 − p^(−s)) is approximately p^(−s), the formula can also be used to prove the stronger result that the sum of the reciprocals of the primes is infinite. On the other hand, combining that with the sieve of Eratosthenes shows that the density of the set of primes within the set of positive integers is zero.
The Euler product formula can be used to calculate the asymptotic probability that n randomly selected integers are set-wise coprime. Intuitively, the probability that any single number is divisible by a prime (or any integer) p is 1/p. Hence the probability that n numbers are all divisible by this prime is 1/p^n, and the probability that at least one of them is not is 1 − 1/p^n. Now, for distinct primes, these divisibility events are mutually independent because the candidate divisors are coprime (a number is divisible by coprime divisors a and b if and only if it is divisible by ab, an event which occurs with probability 1/(ab)). Thus the asymptotic probability that n numbers are coprime is given by a product over all primes,
Π_p (1 − 1/p^n) = 1/ζ(n).
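The coprimality interpretation is easy to test empirically. The following Monte Carlo sketch is an illustration of the statement above rather than a standard routine: it draws pairs of integers, checks their greatest common divisor, and compares the hit rate with 1/ζ(2) = 6/π². The trial count, bound, and seed are arbitrary illustrative choices.

```python
import random
from math import gcd, pi

def coprime_fraction(trials=200_000, bound=10**9, k=2, seed=0):
    """Estimate the probability that k integers drawn uniformly from
    [1, bound] share no common factor."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        values = [rng.randint(1, bound) for _ in range(k)]
        g = values[0]
        for v in values[1:]:
            g = gcd(g, v)
        if g == 1:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(coprime_fraction())   # empirical estimate, close to...
    print(6 / pi ** 2)          # 1/zeta(2) = 0.6079...
```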
Riemann's functional equation
This zeta function satisfies the functional equation
ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s),
where Γ is the gamma function. This is an equality of meromorphic functions valid on the whole complex plane. The equation relates values of the Riemann zeta function at the points s and 1 − s, in particular relating even positive integers with odd negative integers. Owing to the zeros of the sine function, the functional equation implies that ζ(s) has a simple zero at each even negative integer s = −2n, known as the trivial zeros of ζ(s). When s is an even positive integer, the product sin(πs/2) Γ(1 − s) on the right is non-zero because Γ(1 − s) has a simple pole, which cancels the simple zero of the sine factor.
The functional equation was established by Riemann in his 1859 paper "On the Number of Primes Less Than a Given Magnitude" and used to construct the analytic continuation in the first place.
Equivalencies
An equivalent relationship had been conjectured by Euler over a hundred years earlier, in 1749, for the Dirichlet eta function (the alternating zeta function):
Incidentally, this relation gives an equation for calculating ζ(s) in the region 0 < Re(s) < 1, i.e.
ζ(s) = η(s)/(1 − 2^(1−s)) = (1 − 2^(1−s))^(−1) Σ_{n=1}^∞ (−1)^(n−1)/n^s,
where the η-series is convergent (albeit non-absolutely) in the larger half-plane Re(s) > 0 (for a more detailed survey on the history of the functional equation, see e.g. Blagouchine).
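A minimal sketch of this route into the critical strip, assuming the relation ζ(s) = η(s)/(1 − 2^(1−s)) used above; the plain alternating sum converges slowly, so the last two partial sums are averaged here as a crude damping step, and a serious implementation would use a proper acceleration scheme such as Euler summation or the rapidly convergent series described later. The function name and term count are illustrative.

```python
def zeta_via_eta(s, terms=200_000):
    """Approximate zeta(s) for Re(s) > 0, s != 1, from the alternating
    (Dirichlet eta) series, via the identity zeta(s) = eta(s)/(1 - 2**(1-s)).

    Averaging the last two partial sums damps the alternating tail somewhat;
    convergence is still slow near the critical line.
    """
    partial = 0.0 + 0.0j
    prev = partial
    for n in range(1, terms + 1):
        prev = partial
        partial += (-1) ** (n - 1) * n ** (-s)
    eta = (partial + prev) / 2
    return eta / (1 - 2 ** (1 - s))

if __name__ == "__main__":
    print(zeta_via_eta(2))                  # ~ 1.6449 = pi^2/6
    print(zeta_via_eta(0.5))                # ~ -1.4603
    print(zeta_via_eta(0.5 + 14.134725j))   # small in magnitude: near the first non-trivial zero
```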
Riemann also found a symmetric version of the functional equation applying to the ξ-function:
ξ(s) = π^(−s/2) Γ(s/2) ζ(s),
which satisfies:
ξ(s) = ξ(1 − s).
(Riemann's original ξ(t) was slightly different.)
The π^(−s/2) Γ(s/2) factor was not well-understood at the time of Riemann, until John Tate's (1950) thesis, in which it was shown that this so-called "Gamma factor" is in fact the local L-factor corresponding to the Archimedean place, the other factors in the Euler product expansion being the local L-factors of the non-Archimedean places.
Zeros, the critical line, and the Riemann hypothesis
The functional equation shows that the Riemann zeta function has zeros at −2, −4, −6, .... These are called the trivial zeros. They are trivial in the sense that their existence is relatively easy to prove, for example, from sin(πs/2) being 0 in the functional equation. The non-trivial zeros have captured far more attention because their distribution not only is far less understood but, more importantly, their study yields important results concerning prime numbers and related objects in number theory. It is known that any non-trivial zero lies in the open strip {s : 0 < Re(s) < 1}, which is called the critical strip. The set {s : Re(s) = 1/2} is called the critical line. The Riemann hypothesis, considered one of the greatest unsolved problems in mathematics, asserts that all non-trivial zeros are on the critical line. In 1989, Conrey proved that more than 40% of the non-trivial zeros of the Riemann zeta function are on the critical line.
For the Riemann zeta function on the critical line, see Z-function.
Number of zeros in the critical strip
Let N(T) be the number of zeros of ζ(s) in the critical strip 0 < Re(s) < 1 whose imaginary parts are in the interval 0 < Im(s) ≤ T.
Trudgian proved that, if , then
.
The Hardy–Littlewood conjectures
In 1914, G. H. Hardy proved that ζ(1/2 + it) has infinitely many real zeros.
Hardy and J. E. Littlewood formulated two conjectures on the density and distance between the zeros of ζ(1/2 + it) on intervals of large positive real numbers. In the following, N(T) is the total number of real zeros and N₀(T) the total number of zeros of odd order of the function ζ(1/2 + it) lying in the interval (0, T].
These two conjectures opened up new directions in the investigation of the Riemann zeta function.
Zero-free region
The location of the Riemann zeta function's zeros is of great importance in number theory. The prime number theorem is equivalent to the fact that there are no zeros of the zeta function on the Re(s) = 1 line. A better result that follows from an effective form of Vinogradov's mean-value theorem is that ζ(σ + it) ≠ 0 whenever |t| is sufficiently large and σ ≥ 1 − c/((ln|t|)^(2/3) (ln ln|t|)^(1/3)) for some absolute constant c > 0.
In 2015, Mossinghoff and Trudgian proved that zeta has no zeros in the region
for .
This is the largest known zero-free region in the critical strip for .
The strongest result of this kind one can hope for is the truth of the Riemann hypothesis, which would have many profound consequences in the theory of numbers.
Other results
It is known that there are infinitely many zeros on the critical line. Littlewood showed that if the sequence (γₙ) contains the imaginary parts of all zeros in the upper half-plane in ascending order, then
lim_{n→∞} (γ_(n+1) − γₙ) = 0.
The critical line theorem asserts that a positive proportion of the nontrivial zeros lies on the critical line. (The Riemann hypothesis would imply that this proportion is 1.)
In the critical strip, the zero with smallest non-negative imaginary part is 1/2 + 14.134725...i. The fact that ζ(s̄) is the complex conjugate of ζ(s)
for all complex s implies that the zeros of the Riemann zeta function are symmetric about the real axis. Combining this symmetry with the functional equation, furthermore, one sees that the non-trivial zeros are symmetric about the critical line Re(s) = 1/2.
It is also known that no zeros lie on the line with real part 1.
Specific values
For any positive even integer 2n,
ζ(2n) = ((−1)^(n+1) B_(2n) (2π)^(2n)) / (2 (2n)!),
where B_(2n) is the 2n-th Bernoulli number.
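The even-integer formula can be checked numerically with the first few Bernoulli numbers hard-coded (B₂ = 1/6, B₄ = −1/30, B₆ = 1/42); the helper names and truncation level below are illustrative only, and the closed forms are compared with truncated sums of the defining series.

```python
from fractions import Fraction
from math import factorial, pi

# Bernoulli numbers B_2, B_4, B_6 (odd-index Bernoulli numbers beyond B_1 vanish)
BERNOULLI = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42)}

def zeta_even(two_n):
    """Closed form zeta(2n) = (-1)**(n+1) * B_{2n} * (2*pi)**(2n) / (2 * (2n)!)."""
    n = two_n // 2
    b = BERNOULLI[two_n]
    return (-1) ** (n + 1) * float(b) * (2 * pi) ** two_n / (2 * factorial(two_n))

def zeta_series(s, terms=100_000):
    """Crude check: truncated Dirichlet series, valid for Re(s) > 1."""
    return sum(k ** -s for k in range(1, terms + 1))

if __name__ == "__main__":
    for two_n in (2, 4, 6):
        print(two_n, zeta_even(two_n), zeta_series(two_n))
    # Expected closed forms: pi^2/6, pi^4/90, pi^6/945
```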
For odd positive integers, no such simple expression is known, although these values are thought to be related to the algebraic -theory of the integers; see Special values of -functions.
For nonpositive integers, one has
ζ(−n) = −B_(n+1)/(n + 1)
for n ≥ 0 (using the convention that B₁ = 1/2).
In particular, ζ vanishes at the negative even integers because B_m = 0 for all odd m other than 1. These are the so-called "trivial zeros" of the zeta function.
Via analytic continuation, one can show that
ζ(−1) = −1/12.
This gives a pretext for assigning a finite value to the divergent series 1 + 2 + 3 + 4 + ⋯, which has been used in certain contexts (Ramanujan summation) such as string theory. Analogously, the particular value
ζ(0) = −1/2
can be viewed as assigning a finite result to the divergent series 1 + 1 + 1 + 1 + ⋯.
The value
ζ(1/2) ≈ −1.46035
is employed in calculating kinetic boundary layer problems of linear kinetic equations.
Although
ζ(1) = 1 + 1/2 + 1/3 + ⋯
diverges, its Cauchy principal value
lim_{ε→0} (ζ(1 + ε) + ζ(1 − ε))/2
exists and is equal to the Euler–Mascheroni constant γ = 0.5772....
The demonstration of the particular value
ζ(2) = 1 + 1/4 + 1/9 + 1/16 + ⋯ = π²/6
is known as the Basel problem. The reciprocal of this sum, 6/π², answers the question: What is the probability that two numbers selected at random are relatively prime?
The value
ζ(3) = 1 + 1/8 + 1/27 + 1/64 + ⋯ ≈ 1.202057
is Apéry's constant.
Taking the limit s → +∞ through the real numbers, one obtains ζ(+∞) = 1. But at complex infinity on the Riemann sphere the zeta function has an essential singularity.
Various properties
For sums involving the zeta function at integer and half-integer values, see rational zeta series.
Reciprocal
The reciprocal of the zeta function may be expressed as a Dirichlet series over the Möbius function μ(n):
1/ζ(s) = Σ_{n=1}^∞ μ(n)/n^s
for every complex number s with real part greater than 1. There are a number of similar relations involving various well-known multiplicative functions; these are given in the article on the Dirichlet series.
The Riemann hypothesis is equivalent to the claim that this expression is valid when the real part of s is greater than 1/2.
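A self-contained check of the Möbius-series identity at s = 2, where the series converges absolutely; the sieve below is a standard construction written out for illustration, not a call into any particular library, and the cutoff N is an arbitrary choice.

```python
from math import pi

def mobius_sieve(limit):
    """Return mu(0..limit) using a linear sieve over smallest prime factors."""
    mu = [0] * (limit + 1)
    mu[1] = 1
    primes = []
    is_comp = [False] * (limit + 1)
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0   # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]
    return mu

if __name__ == "__main__":
    N = 200_000
    mu = mobius_sieve(N)
    lhs = sum(mu[n] / n ** 2 for n in range(1, N + 1))  # partial sum of mu(n)/n^s at s = 2
    print(lhs, 6 / pi ** 2)                             # both ~ 0.6079, i.e. 1/zeta(2)
```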
Universality
The critical strip of the Riemann zeta function has the remarkable property of universality. This zeta function universality states that any non-vanishing holomorphic function can be approximated arbitrarily well, uniformly on suitable compact subsets of the critical strip, by vertical shifts of the zeta function. Since holomorphic functions are very general, this property is quite remarkable. The first proof of universality was provided by Sergei Mikhailovitch Voronin in 1975. More recent work has included effective versions of Voronin's theorem and extensions of it to Dirichlet L-functions.
Estimates of the maximum of the modulus of the zeta function
Let the functions and be defined by the equalities
Here is a sufficiently large positive number, , , , . Estimating the values and from below shows, how large (in modulus) values can take on short intervals of the critical line or in small neighborhoods of points lying in the critical strip .
The case was studied by Kanakanahalli Ramachandra; the case , where is a sufficiently large constant, is trivial.
Anatolii Karatsuba proved, in particular, that if the values and exceed certain sufficiently small constants, then the estimates
hold, where and are certain absolute constants.
The argument of the Riemann zeta function
The function
S(t) = (1/π) arg ζ(1/2 + it)
is called the argument of the Riemann zeta function. Here arg ζ(1/2 + it) is the increment of an arbitrary continuous branch of arg ζ(s) along the broken line joining the points 2, 2 + it, and 1/2 + it.
There are some theorems on properties of the function . Among those results are the mean value theorems for and its first integral
on intervals of the real line, and also the theorem claiming that every interval for
contains at least
points where the function changes sign. Earlier similar results were obtained by Atle Selberg for the case
Representations
Dirichlet series
An extension of the area of convergence can be obtained by rearranging the original series. The series
converges for , while
converge even for . In this way, the area of convergence can be extended to for any negative integer .
The recurrence connection is clearly visible from the expression valid for enabling further expansion by integration by parts.
Mellin-type integrals
The Mellin transform of a function f(x) is defined as
∫₀^∞ f(x) x^(s−1) dx
in the region where the integral is defined. There are various expressions for the zeta function as Mellin transform-like integrals. If the real part of s is greater than one, we have
Γ(s) ζ(s) = ∫₀^∞ x^(s−1)/(e^x − 1) dx,
where Γ denotes the gamma function. By modifying the contour, Riemann showed that
for all (where denotes the Hankel contour).
We can also find expressions which relate to prime numbers and the prime number theorem. If π(x) is the prime-counting function, then
ln ζ(s) = s ∫₂^∞ π(x)/(x(x^s − 1)) dx
for values s with Re(s) > 1.
A similar Mellin transform involves the Riemann prime-counting function J(x), which counts prime powers p^n with a weight of 1/n, so that
J(x) = Σ_{n=1}^∞ (1/n) π(x^(1/n)).
Now
ln ζ(s) = s ∫₁^∞ J(x) x^(−s−1) dx.
These expressions can be used to prove the prime number theorem by means of the inverse Mellin transform. Riemann's prime-counting function is easier to work with, and π(x) can be recovered from it by Möbius inversion.
Theta functions
The Riemann zeta function can be given by a Mellin transform
in terms of Jacobi's theta function
However, this integral only converges if the real part of is greater than 1, but it can be regularized. This gives the following expression for the zeta function, which is well defined for all except 0 and 1:
Laurent series
The Riemann zeta function is meromorphic with a single pole of order one at s = 1. It can therefore be expanded as a Laurent series about s = 1; the series development is then
ζ(s) = 1/(s − 1) + Σ_{n=0}^∞ ((−1)^n/n!) γₙ (s − 1)^n.
The constants γₙ here are called the Stieltjes constants and can be defined by the limit
γₙ = lim_{m→∞} ( Σ_{k=1}^m (ln k)^n/k − (ln m)^(n+1)/(n + 1) ).
The constant term γ₀ is the Euler–Mascheroni constant.
Integral
For all , , the integral relation (cf. Abel–Plana formula)
holds true, which may be used for a numerical evaluation of the zeta function.
Rising factorial
Another series development using the rising factorial valid for the entire complex plane is
This can be used recursively to extend the Dirichlet series definition to all complex numbers.
The Riemann zeta function also appears in a form similar to the Mellin transform in an integral over the Gauss–Kuzmin–Wirsing operator acting on ; that context gives rise to a series expansion in terms of the falling factorial.
Hadamard product
On the basis of Weierstrass's factorization theorem, Hadamard gave the infinite product expansion
where the product is over the non-trivial zeros of and the letter again denotes the Euler–Mascheroni constant. A simpler infinite product expansion is
This form clearly displays the simple pole at , the trivial zeros at −2, −4, ... due to the gamma function term in the denominator, and the non-trivial zeros at . (To ensure convergence in the latter formula, the product should be taken over "matching pairs" of zeros, i.e. the factors for a pair of zeros of the form and should be combined.)
Globally convergent series
A globally convergent series for the zeta function, valid for all complex numbers s except s = 1 + 2πin/ln 2 for some integer n, was conjectured by Konrad Knopp in 1926 and proven by Helmut Hasse in 1930 (cf. Euler summation):
ζ(s) = (1/(1 − 2^(1−s))) Σ_{n=0}^∞ (1/2^(n+1)) Σ_{k=0}^n (−1)^k C(n, k) (k + 1)^(−s),
where C(n, k) denotes a binomial coefficient.
The series appeared in an appendix to Hasse's paper, and was published for the second time by Jonathan Sondow in 1994.
Hasse also proved the globally converging series
ζ(s) = (1/(s − 1)) Σ_{n=0}^∞ (1/(n + 1)) Σ_{k=0}^n (−1)^k C(n, k) (k + 1)^(1−s)
in the same publication. Research by Iaroslav Blagouchine
has found that a similar, equivalent series was published by Joseph Ser in 1926.
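As a sketch of how such a series behaves in practice, the code below implements the double sum in the form usually attributed to Hasse, ζ(s) = (1/(s − 1)) Σ_{n≥0} (1/(n + 1)) Σ_{k=0}^n (−1)^k C(n, k) (k + 1)^(1−s); the truncation depth and names are illustrative choices of mine. For nonpositive integer s the inner finite differences vanish beyond a small index, so values such as −1/2, −1/12 and 1/120 come out exactly; elsewhere the series converges, though for some arguments (e.g. s = 2, where it reduces to Σ 1/(n+1)²) only slowly.

```python
from math import comb

def zeta_hasse(s, depth=30):
    """Evaluate zeta(s), s != 1, from Hasse's globally convergent double series.

    The outer sum is truncated at `depth`.  Much larger depths are
    counterproductive in floating point: the binomial coefficients grow
    large and the alternating inner sums then suffer cancellation error.
    """
    total = 0j
    for n in range(depth):
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (1 - s) for k in range(n + 1))
        total += inner / (n + 1)
    return total / (s - 1)

if __name__ == "__main__":
    print(zeta_hasse(0).real)    # -0.5
    print(zeta_hasse(-1).real)   # -1/12 = -0.0833...
    print(zeta_hasse(-3).real)   # 1/120 = 0.00833...
    print(zeta_hasse(0.5).real)  # roughly -1.46 (slower convergence here)
```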
In 1997 K. Maślanka gave another globally convergent (except ) series for the Riemann zeta function:
where real coefficients are given by:
Here are the Bernoulli numbers and denotes the Pochhammer symbol.
Note that this representation of the zeta function is essentially an interpolation with nodes, where the nodes are points , i.e. exactly those where the zeta values are precisely known, as Euler showed. An elegant and very short proof of this representation of the zeta function, based on Carlson's theorem, was presented by Philippe Flajolet in 2006.
The asymptotic behavior of the coefficients is rather curious: for growing values, we observe regular oscillations with a nearly exponentially decreasing amplitude and slowly decreasing frequency (roughly as ). Using the saddle point method, we can show that
where stands for:
(see for details).
On the basis of this representation, in 2003 Luis Báez-Duarte provided a new criterion for the Riemann hypothesis. Namely, if we define the coefficients as
then the Riemann hypothesis is equivalent to
Rapidly convergent series
Peter Borwein developed an algorithm that applies Chebyshev polynomials to the Dirichlet eta function to produce a very rapidly convergent series suitable for high precision numerical calculations.
Series representation at positive integers via the primorial
Here is the primorial sequence and is Jordan's totient function.
Series representation by the incomplete poly-Bernoulli numbers
The function can be represented, for , by the infinite series
where , is the th branch of the Lambert -function, and is an incomplete poly-Bernoulli number.
The Mellin transform of the Engel map
The function is iterated to find the coefficients appearing in Engel expansions.
The Mellin transform of the map is related to the Riemann zeta function by the formula
Thue-Morse sequence
Certain linear combinations of Dirichlet series whose coefficients are terms of the Thue-Morse sequence give rise to identities involving the Riemann Zeta function (Tóth, 2022 ). For instance:
where is the term of the Thue-Morse sequence. In fact, for all with real part greater than , we have
In nth dimensions
The zeta function can also be represented as an n-fold integral:
and it only works for
Numerical algorithms
A classical algorithm, in use prior to about 1930, proceeds by applying the Euler–Maclaurin formula to obtain, for n and m positive integers,
ζ(s) = Σ_{j=1}^(n−1) j^(−s) + n^(1−s)/(s − 1) + n^(−s)/2 + Σ_{k=1}^m T_(k,n)(s) + E_(m,n)(s),
where, letting B_(2k) denote the indicated Bernoulli number,
T_(k,n)(s) = (B_(2k)/(2k)!) n^(1−s−2k) s(s + 1)⋯(s + 2k − 2),
and the error satisfies
|E_(m,n)(s)| < |s + 2m + 1|/(σ + 2m + 1) · |T_(m+1,n)(s)|,
with σ = Re(s).
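The recipe above translates directly into code. The sketch below is a plain reading of the Euler–Maclaurin truncation with a few hard-coded Bernoulli numbers; the default choices of n and m are illustrative, and no attempt is made at the rigorous error control just described.

```python
from math import pi

# Bernoulli numbers B_2, B_4, ..., B_12 (odd-index ones beyond B_1 vanish)
B2K = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730]

def zeta_euler_maclaurin(s, n=20, m=6):
    """Approximate zeta(s) for s != 1 via Euler-Maclaurin:

        sum_{j<n} j^-s  +  n^(1-s)/(s-1)  +  n^-s/2  +  correction terms T_k,

    with T_k = B_{2k}/(2k)! * n^(1-s-2k) * s(s+1)...(s+2k-2); the rising
    product and factorial are accumulated incrementally below.
    """
    s = complex(s)
    total = sum(j ** -s for j in range(1, n))
    total += n ** (1 - s) / (s - 1) + 0.5 * n ** -s
    rising = s            # s(s+1)...(s+2k-2), starts as s for k = 1
    factorial_2k = 2.0    # (2k)! for k = 1
    for k in range(1, m + 1):
        total += B2K[k - 1] / factorial_2k * n ** (1 - s - 2 * k) * rising
        # extend the rising product and factorial for the next k
        rising *= (s + 2 * k - 1) * (s + 2 * k)
        factorial_2k *= (2 * k + 1) * (2 * k + 2)
    return total

if __name__ == "__main__":
    print(zeta_euler_maclaurin(2))                  # ~ pi^2/6
    print(pi ** 2 / 6)
    print(zeta_euler_maclaurin(0.5 + 14.134725j))   # tiny: near the first non-trivial zero
```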
A modern numerical algorithm is the Odlyzko–Schönhage algorithm.
Applications
The zeta function occurs in applied statistics including Zipf's law, Zipf–Mandelbrot law, and Lotka's law.
Zeta function regularization is used as one possible means of regularization of divergent series and divergent integrals in quantum field theory. In one notable example, the Riemann zeta function shows up explicitly in one method of calculating the Casimir effect. The zeta function is also useful for the analysis of dynamical systems.
Musical tuning
In the theory of musical tunings, the zeta function can be used to find equal divisions of the octave (EDOs) that closely approximate the intervals of the harmonic series. For increasing values of , the value of
peaks near integers that correspond to such EDOs. Examples include popular choices such as 12, 19, and 53.
Infinite series
The zeta function evaluated at equidistant positive integers appears in infinite series representations of a number of constants.
In fact the even and odd terms give the two sums
and
Parametrized versions of the above sums are given by
and
with and where and are the polygamma function and Euler's constant, respectively, as well as
all of which are continuous at . Other sums include
where denotes the imaginary part of a complex number.
Another interesting series that relates to the natural logarithm of the lemniscate constant is the following
There are yet more formulas in the article Harmonic number.
Generalizations
There are a number of related zeta functions that can be considered to be generalizations of the Riemann zeta function. These include the Hurwitz zeta function
ζ(s, q) = Σ_{k=0}^∞ 1/(k + q)^s
(the convergent series representation was given by Helmut Hasse in 1930, cf. Hurwitz zeta function), which coincides with the Riemann zeta function when q = 1 (the lower limit of summation in the Hurwitz zeta function is 0, not 1), the Dirichlet L-functions and the Dedekind zeta function. For other related functions see the articles zeta function and L-function.
The polylogarithm is given by
Li_s(z) = Σ_{k=1}^∞ z^k/k^s,
which coincides with the Riemann zeta function when z = 1.
The Clausen function Cl_s(θ) can be chosen as the real or imaginary part of Li_s(e^(iθ)).
The Lerch transcendent is given by
Φ(z, s, q) = Σ_{k=0}^∞ z^k/(k + q)^s,
which coincides with the Riemann zeta function when z = 1 and q = 1 (the lower limit of summation in the Lerch transcendent is 0, not 1).
The multiple zeta functions are defined by
ζ(s₁, ..., s_k) = Σ_{n₁ > n₂ > ⋯ > n_k > 0} 1/(n₁^(s₁) ⋯ n_k^(s_k)).
One can analytically continue these functions to the -dimensional complex space. The special values taken by these functions at positive integer arguments are called multiple zeta values by number theorists and have been connected to many different branches in mathematics and physics.
| Mathematics | Specific functions | null |
25825 | https://en.wikipedia.org/wiki/Red | Red | Red is the color at the long wavelength end of the visible spectrum of light, next to orange and opposite violet. It has a dominant wavelength of approximately 625–740 nanometres. It is a primary color in the RGB color model and a secondary color (made from magenta and yellow) in the CMYK color model, and is the complementary color of cyan. Reds range from the brilliant yellow-tinged scarlet and vermillion to bluish-red crimson, and vary in shade from the pale red pink to the dark red burgundy.
Red pigment made from ochre was one of the first colors used in prehistoric art. The Ancient Egyptians and Mayans colored their faces red in ceremonies; Roman generals had their bodies colored red to celebrate victories. It was also an important color in China, where it was used to color early pottery and later the gates and walls of palaces. In the Renaissance, the brilliant red costumes for the nobility and wealthy were dyed with kermes and cochineal. The 19th century brought the introduction of the first synthetic red dyes, which replaced the traditional dyes. Red became a symbolic color of communism and socialism; Soviet Russia adopted a red flag following the Bolshevik Revolution in 1917. The Soviet red banner would subsequently be used throughout the entire history of the Soviet Union. China adopted its own red flag following the Chinese Communist Revolution. A red flag was also adopted by North Vietnam in 1954, and by all of Vietnam in 1975.
Since red is the color of blood, it has historically been associated with sacrifice, danger, and courage. Modern surveys in Europe and the United States show red is also the color most commonly associated with heat, activity, passion, sexuality, anger, love, and joy. In China, India, and many other Asian countries it is the color symbolizing happiness and good fortune.
Shades and variations
Varieties of the color red may differ in hue, chroma (also called saturation, intensity, or colorfulness), or lightness (or value, tone, or brightness), or in two or three of these qualities. Variations in value are also called tints and shades, a tint being a red or other hue mixed with white, a shade being mixed with black. Four examples are shown below.
In science and nature
Seeing red
The human eye sees red when it looks at light with a wavelength between approximately 625 and 740 nanometers. It is a primary color in the RGB color model. The light just past this range is called infrared, or below red, and cannot be seen by human eyes, although it can be sensed as heat. In the language of optics, red is the color evoked by light that stimulates neither the S nor the M (short- and medium-wavelength) cone cells of the retina, combined with a fading stimulation of the L (long-wavelength) cone cells.
Primates can distinguish the full range of the colors of the spectrum visible to humans, but many kinds of mammals, such as dogs and cattle, have dichromacy, which means they can see blues and yellows, but cannot distinguish red and green (both are seen as gray). Bulls, for instance, cannot see the red color of the cape of a bullfighter, but they are agitated by its movement. (See color vision).
One theory for why primates developed sensitivity to red is that it allowed ripe fruit to be distinguished from unripe fruit and inedible vegetation. This may have driven further adaptations by species taking advantage of this new ability, such as the emergence of red faces.
Red light is used to help adapt night vision in low-light or night time, as the rod cells in the human eye are not sensitive to red.
In color theory and on a computer screen
In the RYB color model, which is the basis of traditional color theory, red is one of the three primary colors, along with blue and yellow. Painters in the Renaissance mixed red and blue to make violet: Cennino Cennini, in his 15th-century manual on painting, wrote, "If you want to make a lovely violet colour, take fine lac (red lake), ultramarine blue (the same amount of the one as of the other) with a binder"; he noted that it could also be made by mixing blue indigo and red hematite.
In the CMY and CMYK color models, red is a secondary color subtractively mixed from magenta and yellow.
In the RGB color model, red, green and blue are additive primary colors. Red, green and blue light combined makes white light, and these three colors, combined in different mixtures, can produce nearly any other color. This principle is used to generate colors on devices such as computer monitors and televisions. For example, magenta on a computer screen is made by a similar formula to that used by Cennino Cennini in the Renaissance to make violet, but using additive colors and light instead of pigment: it is created by combining red and blue light at equal intensity on a black screen. Violet is made on a computer screen in a similar way, but with a greater amount of blue light and less red light.
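A toy illustration of this additive mixing in code (the specific RGB triples are illustrative, not taken from the text):
def mix(*colors):
    # Componentwise additive mixing of (R, G, B) values in the 0-255 range, clipped at full intensity
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

red = (255, 0, 0)
blue = (0, 0, 255)
print(mix(red, blue))           # (255, 0, 255): red and blue at equal intensity give magenta
print(mix((110, 0, 0), blue))   # less red than blue gives a violet-like hue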
Color of sunset
As a ray of white sunlight travels through the atmosphere to the eye, some of the colors are scattered out of the beam by air molecules and airborne particles due to Rayleigh scattering, changing the final color of the beam that is seen. Colors with a shorter wavelength, such as blue and green, scatter more strongly, and are removed from the light that finally reaches the eye. At sunrise and sunset, when the path of the sunlight through the atmosphere to the eye is longest, the blue and green components are removed almost completely, leaving the longer wavelength orange and red light. The remaining reddened sunlight can also be scattered by cloud droplets and other relatively large particles, which give the sky above the horizon its red glow.
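In the simplest Rayleigh model, scattering strength varies as the inverse fourth power of wavelength; the short sketch below (a simplification that ignores particle size and geometry) shows why blue is removed from the beam much faster than red:
def relative_scattering(wavelength_nm, reference_nm=650.0):
    # Rayleigh scattering strength relative to red light at ~650 nm, using the 1/wavelength^4 law
    return (reference_nm / wavelength_nm) ** 4

print(relative_scattering(450.0))   # blue (~450 nm): roughly 4-5 times stronger scattering than red
print(relative_scattering(550.0))   # green (~550 nm): roughly 2 times stronger than red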
Lasers
Lasers emitting in the red region of the spectrum have been available since the invention of the ruby laser in 1960. In 1962 the red helium–neon laser was invented, and these two types of lasers were widely used in many scientific applications including holography, and in education. Red helium–neon lasers were used commercially in LaserDisc players. The use of red laser diodes became widespread with the commercial success of modern DVD players, which use a 660 nm laser diode technology. Today, red and red-orange laser diodes are widely available to the public in the form of extremely inexpensive laser pointers. Portable, high-powered versions are also available for various applications. More recently, 671 nm diode-pumped solid state (DPSS) lasers have been introduced to the market for all-DPSS laser display systems, particle image velocimetry, Raman spectroscopy, and holography.
Red's wavelength has been an important factor in laser technologies; red lasers, used in early compact disc technologies, are being replaced by blue lasers, as red's longer wavelength causes the laser's recordings to take up more space on the disc than would blue-laser recordings.
Astronomy
Mars is called the Red Planet because of the reddish color imparted to its surface by the abundant iron oxide present there.
Astronomical objects that are moving away from the observer exhibit a Doppler red shift.
Jupiter's atmosphere displays the Great Red Spot, caused by an oval-shaped mega storm south of the planet's equator.
Red giants are stars that have exhausted the supply of hydrogen in their cores and switched to thermonuclear fusion of hydrogen in a shell that surrounds their cores. They have radii tens to hundreds of times larger than that of the Sun. However, their outer envelope is much lower in temperature, giving them an orange hue. Despite the lower energy density of their envelope, red giants are many times more luminous than the Sun due to their large size.
Red supergiants like Betelgeuse, Antares, Mu Cephei, VV Cephei, and VY Canis Majoris, one of the biggest stars in the Universe, are the biggest variety of red giants. They are huge in size, with radii 200 to 1700 times greater than the Sun's, but relatively cool in temperature (3000–4500 K), causing their distinct red tint. Because they are shrinking rapidly in size, they are surrounded by an envelope or skin much bigger than the star itself. The envelope of Betelgeuse is 250 times bigger than the star inside.
A red dwarf is a small and relatively cool star, which has a mass of less than half that of the Sun and a surface temperature of less than 4,000 K. Red dwarfs are by far the most common type of star in the Galaxy, but due to their low luminosity, from Earth, none are visible to the naked eye.
Interstellar reddening is caused by the extinction of radiation by dust and gas.
Pigments and dyes
Food coloring
The most common synthetic food coloring today is Allura Red AC, a red azo dye that goes by several names, including Allura Red, Food Red 17, C.I. 16035, and FD&C Red 40. It was originally manufactured from coal tar, but is now mostly made from petroleum.
In Europe, Allura Red AC is not recommended for consumption by children. It is banned in Denmark, Belgium, France and Switzerland, and was also banned in Sweden until the country joined the European Union in 1994. The European Union approves Allura Red AC as a food colorant, but EU countries' local laws banning food colorants are preserved.
In the United States, Allura Red AC is approved by the Food and Drug Administration (FDA) for use in cosmetics, drugs, and food. It is used in some tattoo inks and is used in many products, such as soft drinks, children's medications, and cotton candy. On June 30, 2010, the Center for Science in the Public Interest (CSPI) called for the FDA to ban Red 40.
Because of public concerns about possible health risks associated with synthetic dyes, many companies have switched to using natural pigments such as carmine, made from crushing the tiny female cochineal insect. This insect, originating in Mexico and Central America, was used to make the brilliant scarlet dyes of the European Renaissance.
Autumn leaves
The red of autumn leaves is produced by pigments called anthocyanins. They are not present in the leaf throughout the growing season, but are actively produced towards the end of summer. They develop in late summer in the sap of the cells of the leaf, and this development is the result of complex interactions of many influences—both inside and outside the plant. Their formation depends on the breakdown of sugars in the presence of bright light as the level of phosphate in the leaf is reduced.
During the summer growing season, phosphate is at a high level. It has a vital role in the breakdown of the sugars manufactured by chlorophyll. But in the fall, phosphate, along with the other chemicals and nutrients, moves out of the leaf into the stem of the plant. When this happens, the sugar-breakdown process changes, leading to the production of anthocyanin pigments. The brighter the light during this period, the greater the production of anthocyanins and the more brilliant the resulting color display. When the days of autumn are bright and cool, and the nights are chilly but not freezing, the brightest colorations usually develop.
Anthocyanins temporarily color the edges of some of the very young leaves as they unfold from the buds in early spring. They also give the familiar color to such common fruits as cranberries, red apples, blueberries, cherries, raspberries, and plums.
Anthocyanins are present in about 10% of tree species in temperate regions, although in certain areas—a famous example being New England—up to 70% of tree species may produce the pigment. In autumn forests they appear vivid in the maples, oaks, sourwood, sweetgums, dogwoods, tupelos, cherry trees and persimmons. These same pigments often combine with the carotenoids' colors to create the deeper orange, fiery reds, and bronzes typical of many hardwood species. (See Autumn leaf color).
Blood and other reds in nature
Oxygenated blood is red due to the presence of oxygenated hemoglobin that contains iron atoms, with the iron components reflecting red light. Red meat gets its color from the iron found in the myoglobin and hemoglobin in the muscles and residual blood.
Plants like apples, strawberries, cherries, tomatoes, peppers, and pomegranates are often colored by forms of carotenoids, red pigments that also assist photosynthesis.
Hair color
Red hair occurs naturally on approximately 1–2% of the human population. It occurs more frequently (2–6%) in people of northern or western European ancestry, and less frequently in other populations. Red hair appears in people with two copies of a recessive gene on chromosome 16 which causes a mutation in the MC1R protein.
Red hair varies from a deep burgundy through burnt orange to bright copper. It is characterized by high levels of the reddish pigment pheomelanin (which also accounts for the red color of the lips) and relatively low levels of the dark pigment eumelanin. The term "redhead" (originally redd hede) has been in use since at least 1510.
In animal and human behavior
Red is associated with dominance in a number of animal species. For example, in mandrills, red coloration of the face is greatest in alpha males, increasingly less prominent in lower ranking subordinates, and directly correlated with levels of testosterone. Red can also affect the perception of dominance by others, leading to significant differences in mortality, reproductive success and parental investment between individuals displaying red and those not. In humans, wearing red has been linked with increased performance in competitions, including professional sport and multiplayer video games. Controlled tests have demonstrated that wearing red does not increase performance or levels of testosterone during exercise, so the effect is likely to be produced by perceived rather than actual performance. Judges of tae kwon do have been shown to favor competitors wearing red protective gear over blue, and, when asked, a significant majority of people say that red abstract shapes are more "dominant", "aggressive", and "likely to win a physical competition" than blue shapes. In contrast to its positive effect in physical competition and dominance behavior, exposure to red decreases performance in cognitive tasks and elicits aversion in psychological tests where subjects are placed in an "achievement" context (e.g. taking an IQ test).
History and art
In prehistory and the ancient world
Inside cave 13B at Pinnacle Point, an archeological site found on the coast of South Africa, paleoanthropologists in 2000 found evidence that, between 170,000 and 40,000 years ago, Late Stone Age people were scraping and grinding ochre, a clay colored red by iron oxide, probably with the intention of using it to color their bodies.
Red hematite powder was also found scattered around the remains at a grave site in a Zhoukoudian cave complex near Beijing. The site has evidence of habitation as early as 700,000 years ago. The hematite might have been used to symbolize blood in an offering to the dead.
Red, black and white were the first colors used by artists in the Upper Paleolithic age, probably because natural pigments such as red ochre and iron oxide were readily available where early people lived. Madder, a plant whose root could be made into a red dye, grew widely in Europe, Africa and Asia. The cave of Altamira in Spain has a painting of a bison colored with red ochre that dates to between 15,000 and 16,500 BC.
A red dye called Kermes was made beginning in the Neolithic Period by drying and then crushing the bodies of the females of a tiny scale insect in the genus Kermes, primarily Kermes vermilio. The insects live on the sap of certain trees, especially Kermes oak trees near the Mediterranean region. Jars of kermes have been found in a Neolithic cave-burial at Adaoutse, Bouches-du-Rhône. Kermes from oak trees was later used by Romans, who imported it from Spain. A different variety of dye was made from Porphyrophora hamelii (Armenian cochineal) scale insects that lived on the roots and stems of certain herbs. It was mentioned in texts as early as the 8th century BC, and it was used by the ancient Assyrians and Persians.
In ancient Egypt, red was associated with life, health, and victory. Egyptians would color themselves with red ochre during celebrations. Egyptian women used red ochre as a cosmetic to redden cheeks and lips and also used henna to color their hair and paint their nails.
The ancient Romans wore togas with red stripes on holidays, and the bride at a wedding wore a red shawl, called a flammeum. Red was used to color statues and the skin of gladiators. Red was also the color associated with the army; Roman soldiers wore red tunics, and officers wore a cloak called a paludamentum which, depending upon the quality of the dye, could be crimson, scarlet or purple. In Roman mythology red is associated with the god of war, Mars. The vexilloid of the Roman Empire had a red background with the letters SPQR in gold. A Roman general receiving a triumph had his entire body painted red in honor of his achievement.
The Romans liked bright colors, and many Roman villas were decorated with vivid red murals. The pigment used for many of the murals was called vermilion, and it came from the mineral cinnabar, a common ore of mercury. It was one of the finest reds of ancient times – the paintings have retained their brightness for more than twenty centuries. The source of cinnabar for the Romans was a group of mines near Almadén, southwest of Madrid, in Spain. Working in the mines was extremely dangerous, since mercury is highly toxic; the miners were slaves or prisoners, and being sent to the cinnabar mines was a virtual death sentence.
The Middle Ages
After the fall of the Western Roman Empire, red was adopted as a color of majesty and authority by the Byzantine Empire, and the princes of Europe. It also played an important part in the rituals of the Roman Catholic Church, symbolizing the blood of Christ and the Christian martyrs.
In Western Europe, Emperor Charlemagne painted his palace red as a very visible symbol of his authority, and wore red shoes at his coronation. Kings, princes and, beginning in 1295, Roman Catholic cardinals began to wear red-colored habits. When Abbe Suger rebuilt Saint Denis Basilica outside Paris in the early 12th century, he added stained glass windows colored with cobalt blue glass and with red glass tinted with copper. Together they flooded the basilica with a mystical light. Soon stained glass windows were being added to cathedrals all across France, England and Germany. In medieval painting red was used to attract attention to the most important figures; both Christ and the Virgin Mary were commonly painted wearing red mantles.
In western countries red is a symbol of martyrs and sacrifice, particularly because of its association with blood. Beginning in the Middle Ages, the Pope and Cardinals of the Roman Catholic Church wore red to symbolize the blood of Christ and the Christian martyrs. The banner of the Christian soldiers in the First Crusade was a red cross on a white field, the St. George's Cross. According to Christian tradition, Saint George was a Roman soldier who was a member of the guards of the Emperor Diocletian, who refused to renounce his Christian faith and was martyred. The Saint George's Cross became the Flag of England in the 16th century, and now is part of the Union Flag of the United Kingdom, as well as the Flag of the Republic of Georgia.
Renaissance
In Renaissance painting, red was used to draw the attention of the viewer; it was often used as the color of the cloak or costume of Christ, the Virgin Mary, or another central figure.
In Venice, Titian was the master of fine reds, particularly vermilion; he used many layers of pigment mixed with a semi-transparent glaze, which let the light pass through, to create a more luminous color. In one such painting, the figures of God, the Virgin Mary and two apostles are highlighted by their vermilion red costumes.
Queen Elizabeth I of England liked to wear bright reds, before she adopted the more sober image of the "Virgin Queen".
Red costumes were not limited to the upper classes. In Renaissance Flanders, people of all social classes wore red at celebrations. One such celebration was captured in The Wedding Dance (1566) by Pieter Bruegel the Elder.
The painter Johannes Vermeer skilfully used different shades and tints of vermilion to paint the red skirt in The Girl with the Wine Glass, then glazed it with madder lake to make a more luminous color.
Reds from the New World
In Latin America, the Aztec people, the Paracas culture and other societies used cochineal, a vivid scarlet dye made from insects. From the 16th until the 19th century, cochineal became a highly profitable export from Spanish Mexico to Europe.
18th to 20th century
In the 18th century, red began to take on a new identity as the color of resistance and revolution. It was already associated with blood, and with danger; a red flag hoisted before a battle meant that no prisoners would be taken. In 1793–94, red became the color of the French Revolution. A red Phrygian cap, or "liberty cap", was part of the uniform of the sans-culottes, the most militant faction of the revolutionaries.
In the late 18th century, English dock workers carried red flags during a strike, and the red flag thereafter became closely associated with the new labour movement, and later with the Labour Party in the United Kingdom, founded in 1900.
In Paris in 1832, a red flag was carried by working-class demonstrators in the failed June Rebellion (an event immortalised in Les Misérables), and later in the 1848 French Revolution. The red flag was proposed as the new national French flag during the 1848 revolution, but was rejected at the urging of the poet and statesman Alphonse Lamartine in favour of the tricolor flag. It appeared again as the flag of the short-lived Paris Commune in 1871. It was then adopted by Karl Marx and the new European movements of socialism and communism. Soviet Russia adopted a red flag following the Bolshevik Revolution in 1917. The People's Republic of China adopted the red flag following the Chinese Communist Revolution. It was adopted by North Vietnam in 1954, and by all of Vietnam in 1975.
Symbolism
Courage and sacrifice
Surveys show that red is the color most associated with courage. In western countries red is a symbol of martyrs and sacrifice, particularly because of its association with blood. Beginning in the Middle Ages, the Pope and Cardinals of the Roman Catholic Church wore red to symbolize the blood of Christ and the Christian martyrs. The banner of the Christian soldiers in the First Crusade was a red cross on a white field, the St. George's Cross. According to Christian tradition, Saint George was a Roman soldier who was a member of the guards of the Emperor Diocletian, who refused to renounce his Christian faith and was martyred. The Saint George's Cross became the Flag of England in the 16th century, and now is part of the Union Flag of the United Kingdom, as well as the Flag of the Republic of Georgia.
Hatred, anger, aggression, passion, heat and war
While red is the color most associated with love, it is also the color most frequently associated with hatred, anger, aggression and war. People who are angry are said to "see red". Red is the color most commonly associated with passion and heat. In ancient Rome, red was the color of Mars, the god of war; the planet Mars was named for him because of its red color.
Warning and danger
Red is the traditional color of warning and danger, and is therefore often used on flags. In the Middle Ages up through the French Revolution, a red flag shown in warfare indicated the intent to take no prisoners. Similarly, a red flag hoisted by a pirate ship meant no mercy would be shown to their target. In Britain, in the early days of motoring, motor cars had to follow a man with a red flag who would warn horse-drawn vehicles, before the Locomotives on Highways Act 1896 abolished this law. In automobile races, the red flag is raised if there is danger to the drivers. In international football, a player who has made a serious violation of the rules is shown a red penalty card and ejected from the game.
Several studies have indicated that red carries the strongest reaction of all the colors, with the level of reaction decreasing gradually with the colors orange, yellow, and white, respectively. For this reason, red is generally used as the highest level of warning, such as the threat level of a terrorist attack in the United States. In fact, teachers at a primary school in the UK have been told not to mark children's work in red ink because it encourages a "negative approach".
Red is the international color of stop signs and stop lights on highways and intersections. It was standardized as the international color at the Vienna Convention on Road Signs and Signals of 1968. It was chosen partly because red is the brightest color in daytime (next to orange), though it is less visible at twilight, when green is the most visible color. Red also stands out more clearly against a cool natural backdrop of blue sky, green trees or gray buildings. But it was mostly chosen as the color for stoplights and stop signs because of its universal association with danger and warning. The 1968 Vienna Convention on Road Signs and Signals also uses red for the margins of danger warning signs, give way signs and prohibitory signs, following the previous German-type signage (established by the Verordnung über Warnungstafeln für den Kraftfahrzeugverkehr in 1927).
The color that attracts attention
Red is the color that most attracts attention. Surveys show it is the color most frequently associated with visibility, proximity, and extroverts. It is also the color most associated with dynamism and activity.
Red is used in modern fashion much as it was used in Medieval painting; to attract the eyes of the viewer to the person who is supposed to be the center of attention. People wearing red seem to be closer than those dressed in other colors, even if they are actually the same distance away. Monarchs, wives of presidential candidates and other celebrities often wear red to be visible from a distance in a crowd. It is also commonly worn by lifeguards and others whose job requires them to be easily found.
Because red attracts attention, it is frequently used in advertising, though studies show that people are less likely to read something printed in red because they know it is advertising, and because it is more difficult visually to read than black and white text.
Seduction, sexuality and sin
Red by a large margin is the color most commonly associated with seduction, sexuality, eroticism and immorality, possibly because of its close connection with passion and with danger.
Red was long seen as having a dark side, particularly in Christian theology. It was associated with sexual passion, anger, sin, and the devil. In the Old Testament of the Bible, the Book of Isaiah said: "Though your sins be as scarlet, they shall be white as snow." In the New Testament, in the Book of Revelation, the Antichrist appears as a red monster, ridden by a woman dressed in scarlet, known as the Whore of Babylon.
Satan is often depicted as colored red and/or wearing a red costume in both iconography and popular culture. By the 20th century, the devil in red had become a folk character in legends and stories. The devil in red appears more often in cartoons and movies than in religious art.
In 17th-century New England, red was associated with adultery. In the 1850 novel by Nathaniel Hawthorne, The Scarlet Letter, set in a Puritan New England community, a woman is punished for adultery with ostracism, her sin represented by a red letter 'A' sewn onto her clothes.
Red is still commonly associated with prostitution. At various points in history, prostitutes were required to wear red to announce their profession. Houses of prostitution displayed a red light. Beginning in the early 20th century, houses of prostitution were allowed only in certain specified neighborhoods, which became known as red-light districts. Large red-light districts are found today in Bangkok and Amsterdam.
In the handkerchief code, the color red signifies interest in the sexual act of fisting.
In both Christian and Hebrew tradition, red is also sometimes associated with murder or guilt, with "having blood on one's hands", or "being caught red-handed".
In religion
In Christianity, red is associated with the blood of Christ and the sacrifice of martyrs. In the Roman Catholic Church it is also associated with Pentecost and the Holy Spirit. Since 1295, it has been the color worn by Cardinals, the senior clergy of the Roman Catholic Church. Red is the liturgical color for the feasts of martyrs, representing the blood of those who suffered death for their faith. It is sometimes used as the liturgical color for Holy Week, including Palm Sunday and Good Friday, although this is a modern (20th-century) development. In Catholic practice, it is also the liturgical color used to commemorate the Holy Spirit (for this reason it is worn at Pentecost and during Confirmation masses). Because of its association with martyrdom and the Spirit, it is also the color used to commemorate saints who were martyred, such as St. George and all the Apostles (except for the Apostle St. John, who was not martyred and for whom white is used). As such, it is used to commemorate bishops, who are the successors of the Apostles (for this reason, when funeral masses are held for bishops, cardinals, or popes, red is used instead of the white that would ordinarily be used).
In Buddhism, red is one of the five colors which are said to have emanated from the Buddha when he attained enlightenment, or nirvana. It is particularly associated with the benefits of the practice of Buddhism; achievement, wisdom, virtue, fortune and dignity. It was also believed to have the power to resist evil. In China red was commonly used for the walls, pillars, and gates of temples.
In the Shinto religion of Japan, the gateways of temples, called torii, are traditionally painted vermilion red and black. The torii symbolizes the passage from the profane world to a sacred place. The bridges in the gardens of Japanese temples are also painted red (and usually only temple bridges are red, not bridges in ordinary gardens), since they are also passages to sacred places. Red was also considered a color which could expel evil and disease.
In Taoism, red is sometimes used to symbolize yang.
In Chinese folk religion, red is also sometimes used to symbolize yang in the context of the creator Pangu, who hatched out of a cosmic egg colored like a taijitu. Some art of Pangu colored yang as red. In addition, red is also an auspicious color according to Chinese beliefs.
Military uses
Red uniform
The red military uniform was adopted by the English Parliament's New Model Army in 1645, and was still worn as a dress uniform by the British Army until the outbreak of the First World War in August 1914. Ordinary soldiers wore red coats dyed with madder, while officers wore scarlet coats dyed with the more expensive cochineal. This led to British soldiers being known as red coats.
In the modern British army, scarlet is still worn by the Foot Guards, the Life Guards, and by some regimental bands or drummers for ceremonial purposes. Officers and NCOs of those regiments which previously wore red retain scarlet as the color of their "mess" or formal evening jackets. The Royal Gibraltar Regiment has a scarlet tunic in its winter dress.
Scarlet is worn for some full dress, military band or mess uniforms in the modern armies of a number of the countries that made up the former British Empire. These include the Australian, Jamaican, New Zealand, Fijian, Canadian, Kenyan, Ghanaian, Indian, Singaporean, Sri Lankan and Pakistani armies.
The musicians of the United States Marine Corps Band wear red, following an 18th-century military tradition that the uniforms of band members are the reverse of the uniforms of the other soldiers in their unit. Since the US Marine uniform is blue with red facings, the band wears the reverse.
Red Serge is the uniform of the Royal Canadian Mounted Police, created in 1873 as the North-West Mounted Police, and given its present name in 1920. The uniform was adapted from the tunic of the British Army. Cadets at the Royal Military College of Canada also wear red dress uniforms.
The Brazilian Marine Corps wears a red dress uniform.
NATO Military Symbols for Land Based Systems uses red to denote hostile forces, hence the terms "red team" and "Red Cell" to denote challengers during exercises.
In sports
The first known team sport to feature red uniforms was chariot racing during the late Roman Empire. The earliest races were between two chariots, one driver wearing red, the other white. Later, the number of teams was increased to four, including drivers in light green and sky blue. Twenty-five races were run in a day, with a total of one hundred chariots participating.
Today many sports teams throughout the world feature red on their uniforms. Along with blue, red is the most commonly used non-white color in sports. Numerous national sports teams wear red, often through association with their national flags. A few of these teams feature the color as part of their nickname such as Spain (with their association football (soccer) national team nicknamed La Furia Roja or "The Red Fury") and Belgium (whose football team bears the nickname Rode Duivels or "Red Devils").
In club association football (soccer), red is a commonly used color throughout the world. Among European notable club teams most often playing at home in red shirts include Bayern Munich, Benfica, Liverpool, Manchester United and Roma. Furthermore, many prominent teams play in partially red color schemes, involving different-colored sleeves or stripes. A number of teams' nicknames feature the color. A red penalty card is issued to a player who commits a serious infraction: the player is immediately disqualified from further play and his team must continue with one fewer player for the game's duration.
Rosso Corsa is the red international motor racing color of cars entered by teams from Italy. Since the 1920s Italian race cars of Alfa Romeo, Maserati, Lancia, and later Ferrari and Abarth have been painted with a color known as rosso corsa ("racing red"). National colors were mostly replaced in Formula One by commercial sponsor liveries in 1968, but unlike most other teams, Ferrari always kept the traditional red, although the shade of the color varies. Ducati traditionally run red factory bikes in motorcycle World Championship racing.
The color is commonly used for professional sports teams in Canada and the United States with eleven Major League Baseball teams, eleven National Hockey League teams, seven National Football League teams and eleven National Basketball Association teams prominently featuring some shade of the color. The color is also featured in the league logos of Major League Baseball, the National Football League and the National Basketball Association. In the National Football League, a red flag is thrown by the head coach to challenge a referee's decision during the game. During the 1950s when red was strongly associated with communism in the United States, the modern Cincinnati Reds team was known as the "Redlegs" and the term was used on baseball cards. After the red scare faded, the team was known as the "Reds" again.
In boxing, red is often the color used on a fighter's gloves. George Foreman wore the same red trunks he used during his loss to Muhammad Ali when he defeated Michael Moorer 20 years later to regain the title he lost. Boxers named or nicknamed "red" include Red Burman, Ernie "Red" Lopez, and his brother Danny "Little Red" Lopez.
On flags
Red is the most common color found in national flags, appearing on the flags of 77 percent of the 210 countries listed as independent in 2016, far ahead of white (58 percent), green (40 percent) and blue (37 percent). The British flag bears the colors red, white and blue; it includes the cross of Saint George, patron saint of England, and the saltire of Saint Patrick, patron saint of Ireland, both of which are red on white. The flag of the United States bears the colors of Britain, the colors of the French include red as part of the old Paris coat of arms, and other countries' flags, such as those of Australia, New Zealand, and Fiji, carry a small inset of the British flag in memory of their ties to that country. Many former colonies of Spain, such as Mexico, Colombia, Costa Rica, Cuba, Ecuador, Panama, Peru, Puerto Rico and Venezuela, also feature red, one of the colors of the Spanish flag, on their own banners. Red flags are also used to symbolize storms, bad water conditions, and many other dangers.
The red on the flag of Nepal represents the floral emblem of the country, the rhododendron.
Red, blue, and white are also the Pan-Slavic colors adopted by the Slavic solidarity movement of the late nineteenth century. Initially these were the colors of the Russian flag; as the Slavic movement grew, they were adopted by other Slavic peoples including Slovaks, Slovenes, and Serbs. The flags of the Czech Republic and Poland use red for historic heraldic reasons (see Coat of arms of Poland and Coat of arms of the Czech Republic) and not due to Pan-Slavic connotations. In 2004 Georgia adopted a new white flag, which bears one large red cross touching all four sides, with a small red cross in each quadrant.
Red, white, and black were the colors of the German Empire from 1870 to 1918, and as such they came to be associated with German nationalism. In the 1920s they were adopted as the colors of the Nazi flag. In Mein Kampf, Hitler explained that they were "revered colors expressive of our homage to the glorious past." The red part of the flag was also chosen to attract attention – Hitler wrote: "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement." The red also symbolized the social program of the Nazis, aimed at German workers. Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design.
Red, white, green and black are the colors of Pan-Arabism and are used by many Arab countries.
Red, gold, green, and black are the colors of Pan-Africanism. Several African countries thus use the color on their flags, including South Africa, Ghana, Senegal, Mali, Ethiopia, Togo, Guinea, Benin, and Zimbabwe. The Pan-African colors are borrowed from the flag of Ethiopia, one of the oldest independent African countries. Rwanda, notably, removed red from its flag after the Rwandan genocide because of red's association with blood.
The flags of Japan and Bangladesh both have a red circle in the middle of different colored backgrounds. The flag of the Philippines has a red trapezoid on the bottom signifying blood, courage, and valor (also, if the flag is inverted so that the red trapezoid is on top and the blue at the bottom, it indicates a state of war). The flag of Singapore has a red rectangle on the top. The field of the flag of Portugal is green and red. The Ottoman Empire adopted several different red flags during the six centuries of its rule, with the successor Republic of Turkey continuing the 1844 Ottoman flag.
In politics
In 18th-century Europe, red was usually associated with the monarchy and with those in power. The Pope wore red, as did the Swiss Guards of the Kings of France, the soldiers of the British Army and the Danish Army.
In the Roman Empire, freed slaves were given a red Phrygian cap as an emblem of their liberation. Because of this symbolism, the red "Liberty cap" became a symbol of the American patriots fighting for independence from England. During the French Revolution, the Jacobins also adopted the red Phrygian cap, and forced the deposed King Louis XVI to wear one after his arrest.
Socialism and communism
In the 19th century, with the Industrial Revolution and the rise of worker's movements, red became the color of socialism (especially the Marxist variant), and, with the Paris Commune of 1871, of revolution.
In the 20th century, red was the color first of the Russian Bolsheviks and then, after the success of the Russian Revolution of 1917, of communist parties around the world. However, after the fall of the Soviet Union in 1991, Russia went back to the pre-revolutionary blue, white and red flag.
Red also became the color of many social democratic parties in Europe, including the Labour Party in Britain (founded 1900); the Social Democratic Party of Germany (whose roots went back to 1863) and the French Socialist Party, which dated back under different names, to 1879. The Socialist Party of America (1901–1972) and the Communist Party USA (1919) both also chose red as their color.
Members of the Christian-Social People's Party in Liechtenstein (founded 1918) advocated an expansion of democracy and progressive social policies, and were often referred to disparagingly as "Reds" for their social liberal leanings and party colors.
The Chinese Communist Party, founded in 1920, adopted the red flag and hammer and sickle emblem of the Soviet Union, which became the national symbols when the Party took power in China in 1949. Under Party leader Mao Zedong, the Party anthem became "The East Is Red", and Mao Zedong himself was sometimes referred to as a "red sun". During the Cultural Revolution in China, Party ideology was enforced by the Red Guards, and the sayings of Mao Zedong were published as a little red book in hundreds of millions of copies. Today the Chinese Communist Party claims to be the largest political party in the world, with eighty million members.
Beginning in the 1960s and the 1970s, paramilitary extremist groups such as the Red Army Faction in Germany, the Japanese Red Army and the Shining Path Maoist movement in Peru used red as their color. But in the 1980s, some European socialist and social democratic parties, such as the Labour Party in Britain and the Socialist Party in France, moved away from the symbolism of the far left, keeping the red color but changing their symbol to a less-threatening red rose.
Red is used around the world by political parties of the left or center-left. In the United States, it is the color of the Communist Party USA, and of the Social Democrats, USA.
United States
In the United States, political commentators often refer to the "red states", which voted for Republican candidates in the last four presidential elections, and "blue states", which voted for Democrats. This convention is relatively recent: before the 2000 presidential election, media outlets assigned red and blue to both parties, sometimes alternating the allocation for each election. Fixed usage was established during the 39-day recount following the 2000 election, when the media began to discuss the contest in terms of "red states" versus "blue states". States which voted for different parties in two of the last four presidential elections are called "Swing States", and are usually colored purple, a mix of red and blue.
Social and special interest groups
Such names as Red Club (a bar), Red Carpet (a discothèque) or Red Cottbus and Club Red (event locations) suggest liveliness and excitement. The Red Hat Society is a social group founded in 1998 for women 50 and over. Use of the color red to call attention to an emergency situation is evident in the names of such organizations as the Red Cross (humanitarian aid), Red Hot Organization (AIDS support), and the Red List of Threatened Species (of IUCN). In reference to humans, the term "red" is often used in the West to describe the indigenous peoples of the Americas.
Idioms
Many idiomatic expressions exploit the various connotations of red:
Expressing emotion
"to see red" (to be angry or aggressive)
"to have red ears / a red face" (to be embarrassed)
"to paint the town red" (to have an enjoyable evening, usually with a generous amount of eating, drinking, dancing)
Giving warning
"to raise a red flag" (to signal that something is problematic)
"like a red rag to a bull" (to cause someone to be enraged)
"to be in the red" (to be losing money, from the accounting convention of writing deficits and losses in red ink)
Calling attention
"a red letter day" (a special or important event, from the medieval custom of printing the dates of saints' days and holy days in red ink.)
"to roll out the red carpet" (to formally welcome an important guest)
"to give red-carpet treatment" (to treat someone as important or special)
"to catch someone red-handed" (to catch or discover someone doing something bad or wrong)
Other idioms
"to tie up in red tape". In England red tape was used by lawyers and government officials to identify important documents. It became a term for excessive bureaucratic regulation. It was popularized in the 19th century by the writer Thomas Carlyle, who complained about "red-tapism".
"red herring". A false clue that leads investigators off the track. Refers to the practice of using a fragrant smoked fish to distract hunting or tracking dogs from the track they are meant to follow.
"red ink" (to show a business loss)
| Physical sciences | Color terms | null |
25852 | https://en.wikipedia.org/wiki/Rice%27s%20theorem | Rice's theorem | In computability theory, Rice's theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is one about the program's behavior (for instance, "does the program terminate for all inputs?"), unlike a syntactic property (for instance, "does the program contain an if-then-else statement?"). A non-trivial property is one which is neither true for every program, nor false for every program.
The theorem generalizes the undecidability of the halting problem. It has far-reaching implications on the feasibility of static analysis of programs. It implies that it is impossible, for example, to implement a tool that checks whether any given program is correct, or even executes without error (it is possible to implement a tool that always overestimates or always underestimates e.g. the correctness of a program, so in practice one has to decide what is less of a problem).
The theorem is named after Henry Gordon Rice, who proved it in his doctoral dissertation of 1951 at Syracuse University.
Introduction
Rice's theorem puts a theoretical bound on which types of static analysis can be performed automatically. One can distinguish between the syntax of a program, and its semantics. The syntax is the detail of how the program is written, or its "intension", and the semantics is how the program behaves when run, or its "extension". Rice's theorem asserts that it is impossible to decide a property of programs which depends only on the semantics and not on the syntax, unless the property is trivial (true of all programs, or false of all programs).
By Rice's theorem, it is impossible to write a program that automatically verifies the absence of bugs in other programs, taking a program and a specification as input, and checking whether the program satisfies the specification.
This does not imply an impossibility to prevent certain types of bugs. For example, Rice's theorem implies that in dynamically typed programming languages which are Turing-complete, it is impossible to verify the absence of type errors. On the other hand, statically typed programming languages feature a type system which statically prevents type errors. In essence, this should be understood as a feature of the syntax (taken in a broad sense) of those languages. In order to type check a program, its source code must be inspected; the operation does not depend merely on the hypothetical semantics of the program.
In terms of general software verification, this means that although one cannot algorithmically check whether any given program satisfies a given specification, one can require programs to be annotated with extra information that proves the program is correct, or to be written in a particular restricted form that makes the verification possible, and only accept programs which are verified in this way. In the case of type safety, the former corresponds to type annotations, and the latter corresponds to type inference. Taken beyond type safety, this idea leads to correctness proofs of programs through proof annotations such as in Hoare logic.
Another way of working around Rice's theorem is to search for methods which catch many bugs, without being complete. This is the theory of abstract interpretation.
Yet another direction for verification is model checking, which can only apply to finite-state programs, not to Turing-complete languages.
Formal statement
Let φ be an admissible numbering of partial computable functions. Let P be a subset of ℕ. Suppose that:
P is non-trivial: P is neither empty nor ℕ itself.
P is extensional: for all integers m and n, if φm = φn, then m ∈ P ⟺ n ∈ P.
Then P is undecidable.
A more concise statement can be made in terms of index sets: The only decidable index sets are ∅ and ℕ.
Examples
Given a program P which takes a natural number n and returns a natural number P(n), the following questions are undecidable:
Does P terminate on a given n? (This is the halting problem.)
Does P terminate on 0?
Does P terminate on all n (i.e., is P total)?
Does P terminate and return 0 on every input?
Does P terminate and return 0 on some input?
Does P terminate and return the same value for all inputs?
Is P equivalent to a given program Q?
Proof by Kleene's recursion theorem
Assume for contradiction that A is a non-trivial, extensional and computable set of natural numbers. There is a natural number a ∈ A and a natural number b ∉ A. Define a function f by f(n) = a when n ∉ A and f(n) = b when n ∈ A; f is computable because A is. By Kleene's recursion theorem, there exists an index n such that φ_n = φ_f(n). Then, if n ∈ A, we have φ_n = φ_f(n) = φ_b, contradicting the extensionality of A since b ∉ A, and conversely, if n ∉ A, we have φ_n = φ_f(n) = φ_a, which again contradicts extensionality since a ∈ A.
Proof by reduction from the halting problem
Proof sketch
Suppose, for concreteness, that we have an algorithm for examining a program p and determining infallibly whether p is an implementation of the squaring function, which takes an integer d and returns d². The proof works just as well if we have an algorithm for deciding any other non-trivial property of program behavior (i.e. a semantic and non-trivial property), and is given in general below.
The claim is that we can convert our algorithm for identifying squaring programs into one that identifies functions that halt. We will describe an algorithm that takes inputs a and i and determines whether program a halts when given input i.
The algorithm for deciding this is conceptually simple: it constructs (the description of) a new program t taking an argument n, which (1) first executes program a on input i (both a and i being hard-coded into the definition of t), and (2) then returns the square of n. If a(i) runs forever, then t never gets to step (2), regardless of n. Then clearly, t is a function for computing squares if and only if step (1) terminates. Since we have assumed that we can infallibly identify programs for computing squares, we can determine whether t, which depends on a and i, is such a program; thus we have obtained a program that decides whether program a halts on input i. Note that our halting-decision algorithm never executes t, but only passes its description to the squaring-identification program, which by assumption always terminates; since the construction of the description of t can also be done in a way that always terminates, the halting-decision cannot fail to halt either.
def halts(a, i):
    # Build (but never run) a program t whose behavior encodes "a halts on input i".
    def t(n):
        a(i)            # runs forever exactly when a does not halt on i
        return n * n    # otherwise t computes the squaring function
    # Only t's description is handed to the assumed squaring-recognizer; t is never executed here.
    return is_a_squaring_function(t)
This method does not depend specifically on being able to recognize functions that compute squares; as long as some program can do what we are trying to recognize, we can add a call to a to obtain our t. We could have had a method for recognizing programs for computing square roots, or programs for computing the monthly payroll, or programs that halt when given the input "Abraxas"; in each case, we would be able to solve the halting problem similarly.
Formal proof
For the formal proof, algorithms are presumed to define partial functions over strings and are themselves represented by strings. The partial function computed by the algorithm represented by a string a is denoted Fa. This proof proceeds by reductio ad absurdum: we assume that there is a non-trivial property that is decided by an algorithm, and then show that it follows that we can decide the halting problem, which is not possible, and therefore a contradiction.
Let us now assume that P(a) is an algorithm that decides some non-trivial property of Fa. Without loss of generality we may assume that P(no-halt) = "no", with no-halt being the representation of an algorithm that never halts. If this is not true, then this holds for the algorithm that computes the negation of the property P. Now, since P decides a non-trivial property, it follows that there is a string b that represents an algorithm Fb and P(b) = "yes". We can then define an algorithm H(a, i) as follows:
1. construct a string t that represents an algorithm T(j) such that
T first simulates the computation of Fa(i),
then T simulates the computation of Fb(j) and returns its result.
2. return P(t).
We can now show that H decides the halting problem:
Assume that the algorithm represented by a halts on input i. In this case Ft = Fb and, because P(b) = "yes" and the output of P(x) depends only on Fx, it follows that P(t) = "yes" and, therefore H(a, i) = "yes".
Assume that the algorithm represented by a does not halt on input i. In this case Ft = Fno-halt, i.e., the partial function that is never defined. Since P(no-halt) = "no" and the output of P(x) depends only on Fx, it follows that P(t) = "no" and, therefore H(a, i) = "no".
Since the halting problem is known to be undecidable, this is a contradiction and the assumption that there is an algorithm P(a) that decides a non-trivial property for the function represented by a must be false.
| Mathematics | Computability theory | null |
25856 | https://en.wikipedia.org/wiki/Radiation | Radiation | In physics, radiation is the emission or transmission of energy in the form of waves or particles through space or a material medium. This includes:
electromagnetic radiation consisting of photons, such as radio waves, microwaves, infrared, visible light, ultraviolet, x-rays, and gamma radiation (γ)
particle radiation consisting of particles of non-zero rest energy, such as alpha radiation (α), beta radiation (β), proton radiation and neutron radiation
acoustic radiation, such as ultrasound, sound, and seismic waves, all dependent on a physical transmission medium
gravitational radiation, in the form of gravitational waves, ripples in spacetime
Radiation is often categorized as either ionizing or non-ionizing depending on the energy of the radiated particles. Ionizing radiation carries more than 10 electron volts (eV), which is enough to ionize atoms and molecules and break chemical bonds. This is an important distinction due to the large difference in harmfulness to living organisms. A common source of ionizing radiation is radioactive materials that emit α, β, or γ radiation, consisting of helium nuclei, electrons or positrons, and photons, respectively. Other sources include X-rays from medical radiography examinations and muons, mesons, positrons, neutrons and other particles that constitute the secondary cosmic rays that are produced after primary cosmic rays interact with Earth's atmosphere.
Gamma rays, X-rays, and the higher energy range of ultraviolet light constitute the ionizing part of the electromagnetic spectrum. The word "ionize" refers to the breaking of one or more electrons away from an atom, an action that requires the relatively high energies that these electromagnetic waves supply. Further down the spectrum, the non-ionizing lower energies of the lower ultraviolet spectrum cannot ionize atoms, but can disrupt the inter-atomic bonds that form molecules, thereby breaking down molecules rather than atoms; a good example of this is sunburn caused by long-wavelength solar ultraviolet. The waves of longer wavelength than UV in visible light, infrared, and microwave frequencies cannot break bonds but can cause vibrations in the bonds which are sensed as heat. Radio wavelengths and below generally are not regarded as harmful to biological systems. These are not sharp delineations of the energies; there is some overlap in the effects of specific frequencies.
The word "radiation" arises from the phenomenon of waves radiating (i.e., traveling outward in all directions) from a source. This aspect leads to a system of measurements and physical units that apply to all types of radiation. Because such radiation expands as it passes through space, and as its energy is conserved (in vacuum), the intensity of all types of radiation from a point source follows an inverse-square law in relation to the distance from its source. Like any ideal law, the inverse-square law approximates a measured radiation intensity to the extent that the source approximates a geometric point.
Ionizing radiation
Radiation with sufficiently high energy can ionize atoms; that is to say it can knock electrons off atoms, creating ions. Ionization occurs when an electron is stripped (or "knocked out") from an electron shell of the atom, which leaves the atom with a net positive charge. Because living cells and, more importantly, the DNA in those cells can be damaged by this ionization, exposure to ionizing radiation increases the risk of cancer. Thus "ionizing radiation" is somewhat artificially separated from particle radiation and electromagnetic radiation, simply due to its great potential for biological damage. While an individual cell is made of trillions of atoms, only a small fraction of those will be ionized at low to moderate radiation powers. The probability of ionizing radiation causing cancer is dependent upon the absorbed dose of the radiation and is a function of the damaging tendency of the type of radiation (equivalent dose) and the sensitivity of the irradiated organism or tissue (effective dose).
If the source of the ionizing radiation is a radioactive material or a nuclear process such as fission or fusion, there is particle radiation to consider. Particle radiation is subatomic particles accelerated to relativistic speeds by nuclear reactions. Because of their momenta, they are quite capable of knocking out electrons and ionizing materials, but since most have an electrical charge, they do not have the penetrating power of uncharged radiation such as gamma rays. The exception is neutron particles; see below. There are several different kinds of these particles, but the majority are alpha particles, beta particles, neutrons, and protons. Roughly speaking, photons and particles with energies above about 10 electron volts (eV) are ionizing (some authorities use 33 eV, the ionization energy for water). Particle radiation from radioactive material or cosmic rays almost invariably carries enough energy to be ionizing.
Most ionizing radiation originates from radioactive materials and space (cosmic rays), and as such is naturally present in the environment, since most rocks and soil have small concentrations of radioactive materials. Since this radiation is invisible and not directly detectable by human senses, instruments such as Geiger counters are usually required to detect its presence. In some cases, it may lead to secondary emission of visible light upon its interaction with matter, as in the case of Cherenkov radiation and radio-luminescence.
Ionizing radiation has many practical uses in medicine, research, and construction, but presents a health hazard if used improperly. Exposure to radiation causes damage to living tissue; high doses result in Acute radiation syndrome (ARS), with skin burns, hair loss, internal organ failure, and death, while any dose may result in an increased chance of cancer and genetic damage; a particular form of cancer, thyroid cancer, often occurs when nuclear weapons and reactors are the radiation source because of the biological proclivities of the radioactive iodine fission product, iodine-131. However, calculating the exact risk and chance of cancer forming in cells caused by ionizing radiation is still not well understood, and currently estimates are loosely determined by population-based data from the atomic bombings of Hiroshima and Nagasaki and from follow-up of reactor accidents, such as the Chernobyl disaster. The International Commission on Radiological Protection states that "The Commission is aware of uncertainties and lack of precision of the models and parameter values", "Collective effective dose is not intended as a tool for epidemiological risk assessment, and it is inappropriate to use it in risk projections" and "in particular, the calculation of the number of cancer deaths based on collective effective doses from trivial individual doses should be avoided".
Ultraviolet radiation
Ultraviolet, of wavelengths from 10 nm to 200 nm, ionizes air molecules, causing it to be strongly absorbed by air and by ozone (O3) in particular. Ionizing UV therefore does not penetrate Earth's atmosphere to a significant degree, and is sometimes referred to as vacuum ultraviolet. Although present in space, this part of the UV spectrum is not of biological importance, because it does not reach living organisms on Earth.
There is a zone of the atmosphere in which ozone absorbs some 98% of non-ionizing but dangerous UV-C and UV-B. This ozone layer starts at about and extends upward. Some of the ultraviolet spectrum that does reach the ground is non-ionizing, but is still biologically hazardous due to the ability of single photons of this energy to cause electronic excitation in biological molecules, and thus damage them by means of unwanted reactions. An example is the formation of pyrimidine dimers in DNA, which begins at wavelengths below 365 nm (3.4 eV), which is well below ionization energy. This property gives the ultraviolet spectrum some of the dangers of ionizing radiation in biological systems without actual ionization occurring. In contrast, visible light and longer-wavelength electromagnetic radiation, such as infrared, microwaves, and radio waves, consists of photons with too little energy to cause damaging molecular excitation, and thus this radiation is far less hazardous per unit of energy.
X-rays
X-rays are electromagnetic waves with a wavelength less than about 10⁻⁹ m (greater than 3×10¹⁷ Hz and 1,240 eV). A smaller wavelength corresponds to a higher energy according to the equation E = hc/λ. (E is energy; h is the Planck constant; c is the speed of light; λ is wavelength.) When an X-ray photon collides with an atom, the atom may absorb the energy of the photon and boost an electron to a higher orbital level, or if the photon is extremely energetic, it may knock an electron from the atom altogether, causing the atom to ionize. Generally, larger atoms are more likely to absorb an X-ray photon since they have greater energy differences between orbital electrons. The soft tissue in the human body is composed of smaller atoms than the calcium atoms that make up bone, so there is a contrast in the absorption of X-rays. X-ray machines are specifically designed to take advantage of the absorption difference between bone and soft tissue, allowing physicians to examine structure in the human body.
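As a quick numerical check of E = hc/λ (a rough sketch using rounded standard constants; the 10⁻⁹ m value is the wavelength bound quoted above), the energy and frequency of such a photon can be computed directly:

```python
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

wavelength = 1e-9                      # 10^-9 m, the approximate upper bound for X-rays
energy_eV = h * c / wavelength / eV    # photon energy in electronvolts
frequency_Hz = c / wavelength          # photon frequency in hertz
print(round(energy_eV), f"{frequency_Hz:.1e}")  # ~1240 eV and ~3.0e17 Hz
```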
X-rays are also totally absorbed by the thickness of the Earth's atmosphere, which prevents the Sun's X-ray output, smaller in quantity than its UV output but nonetheless powerful, from reaching the surface.
Gamma radiation
Gamma (γ) radiation consists of photons with a wavelength less than 3×10⁻¹¹ m (greater than 10¹⁹ Hz and 41.4 keV). Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation.
Gamma rays can be stopped by a sufficiently thick or dense layer of material, where the stopping power of the material per given area depends mostly (but not entirely) on the total mass along the path of the radiation, regardless of whether the material is of high or low density. However, as is the case with X-rays, materials with a high atomic number such as lead or depleted uranium add a modest (typically 20% to 30%) amount of stopping power over an equal mass of less dense and lower atomic weight materials (such as water or concrete). The atmosphere absorbs all gamma rays approaching Earth from space. Even air is capable of absorbing gamma rays, halving the energy of such waves by passing through, on the average, .
Alpha radiation
Alpha particles are helium-4 nuclei (two protons and two neutrons). They interact with matter strongly due to their charges and combined mass, and at their usual velocities only penetrate a few centimetres of air, or a few millimetres of low density material (such as the thin mica material which is specially placed in some Geiger counter tubes to allow alpha particles in). This means that alpha particles from ordinary alpha decay do not penetrate the outer layers of dead skin cells and cause no damage to the live tissues below. Some very high energy alpha particles compose about 10% of cosmic rays, and these are capable of penetrating the body and even thin metal plates. However, they are of danger only to astronauts, since they are deflected by the Earth's magnetic field and then stopped by its atmosphere.
Alpha radiation is dangerous when alpha-emitting radioisotopes are inhaled or ingested (breathed or swallowed). This brings the radioisotope close enough to sensitive live tissue for the alpha radiation to damage cells. Per unit of energy, alpha particles are at least 20 times more effective at cell-damage than gamma rays and X-rays. See relative biological effectiveness for a discussion of this. Examples of highly poisonous alpha-emitters are all isotopes of radium, radon, and polonium, due to the amount of radioactive decay that occurs in these short half-life materials.
Beta radiation
Beta-minus (β−) radiation consists of an energetic electron. It is more penetrating than alpha radiation but less than gamma. Beta radiation from radioactive decay can be stopped with a few centimetres of plastic or a few millimetres of metal. It occurs when a neutron decays into a proton in a nucleus, releasing the beta particle and an antineutrino. Beta radiation from linac accelerators is far more energetic and penetrating than natural beta radiation. It is sometimes used therapeutically in radiotherapy to treat superficial tumors.
Beta-plus (β+) radiation is the emission of positrons, which are the antimatter form of electrons. When a positron slows to speeds similar to those of electrons in the material, the positron will annihilate an electron, releasing two gamma photons of 511 keV in the process. Those two gamma photons will be traveling in (approximately) opposite directions. The gamma radiation from positron annihilation consists of high energy photons, and is also ionizing.
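The 511 keV figure is the rest energy of the electron (and of the positron), m_e c²; the following rough sketch with rounded constants reproduces it:

```python
m_e = 9.109e-31  # electron rest mass, kg
c = 2.998e8      # speed of light, m/s
keV = 1.602e-16  # joules per kiloelectronvolt

rest_energy_keV = m_e * c ** 2 / keV
print(round(rest_energy_keV))  # ~511 keV, the energy carried by each annihilation photon
```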
Neutron radiation
Neutron radiation consists of free neutrons, which are categorized according to their speed/energy. These neutrons may be emitted during either spontaneous or induced nuclear fission. Neutrons are rare radiation particles; they are produced in large numbers only where chain reaction fission or fusion reactions are active; this happens for about 10 microseconds in a thermonuclear explosion, or continuously inside an operating nuclear reactor; production of the neutrons stops almost immediately in the reactor when it goes non-critical.
Neutrons can make other objects, or material, radioactive. This process, called neutron activation, is the primary method used to produce radioactive sources for use in medical, academic, and industrial applications. Even comparatively low speed thermal neutrons cause neutron activation (in fact, they cause it more efficiently). Neutrons do not ionize atoms in the same way that charged particles such as protons and electrons do (by the excitation of an electron), because neutrons have no charge. It is through their absorption by nuclei which then become unstable that they cause ionization. Hence, neutrons are said to be "indirectly ionizing". Even neutrons without significant kinetic energy are indirectly ionizing, and are thus a significant radiation hazard. Not all materials are capable of neutron activation; in water, for example, the most common isotopes of both types of atoms present (hydrogen and oxygen) capture neutrons and become heavier but remain stable forms of those atoms. Only the absorption of more than one neutron, a statistically rare occurrence, can activate a hydrogen atom, while oxygen requires two additional absorptions. Thus water is only very weakly capable of activation. The sodium in salt (as in sea water), on the other hand, need only absorb a single neutron to become Na-24, a very intense source of beta decay, with a half-life of 15 hours.
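As a small illustration of the Na-24 example, the following sketch of simple exponential decay (assuming the 15-hour half-life quoted above; the function and parameter names are illustrative) gives the fraction of the activated sodium remaining after a given time:

```python
def fraction_remaining(hours, half_life_hours=15.0):
    # After each half-life the remaining amount is halved.
    return 0.5 ** (hours / half_life_hours)

print(fraction_remaining(15))  # 0.5 after one half-life
print(fraction_remaining(60))  # ~0.0625 after four half-lives (2.5 days)
```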
In addition, high-energy (high-speed) neutrons have the ability to directly ionize atoms. One mechanism by which high energy neutrons ionize atoms is to strike the nucleus of an atom and knock the atom out of a molecule, leaving one or more electrons behind as the chemical bond is broken. This leads to production of chemical free radicals. In addition, very high energy neutrons can cause ionizing radiation by "neutron spallation" or knockout, wherein neutrons cause emission of high-energy protons from atomic nuclei (especially hydrogen nuclei) on impact. The last process imparts most of the neutron's energy to the proton, much like one billiard ball striking another. The charged protons and other products from such reactions are directly ionizing.
High-energy neutrons are very penetrating and can travel great distances in air (hundreds or even thousands of metres) and moderate distances (several metres) in common solids. They typically require hydrogen rich shielding, such as concrete or water, to block them within distances of less than 1 m. A common source of neutron radiation occurs inside a nuclear reactor, where a metres-thick water layer is used as effective shielding.
Cosmic radiation
There are two sources of high energy particles entering the Earth's atmosphere from outer space: the sun and deep space. The sun continuously emits particles, primarily free protons, in the solar wind, and occasionally augments the flow hugely with coronal mass ejections (CME).
The particles from deep space (inter- and extra-galactic) are much less frequent, but of much higher energies. These particles are also mostly protons, with much of the remainder consisting of helions (alpha particles). A few completely ionized nuclei of heavier elements are present. The origin of these galactic cosmic rays is not yet well understood, but they seem to be remnants of supernovae and especially gamma-ray bursts (GRB), which feature magnetic fields capable of the huge accelerations measured from these particles. They may also be generated by quasars, which are galaxy-wide jet phenomena similar to GRBs but known for their much larger size, and which seem to be a violent part of the universe's early history.
Non-ionizing radiation
The kinetic energy of particles of non-ionizing radiation is too small to produce charged ions when passing through matter. For non-ionizing electromagnetic radiation (see types below), the associated particles (photons) have only sufficient energy to change the rotational, vibrational or electronic valence configurations of molecules and atoms. The effect of non-ionizing forms of radiation on living tissue has only recently been studied. Nevertheless, different biological effects are observed for different types of non-ionizing radiation.
Even "non-ionizing" radiation is capable of causing thermal-ionization if it deposits enough heat to raise temperatures to ionization energies. These reactions occur at far higher total energies than with ionization radiation, which requires only single particles to cause ionization. A familiar example of thermal ionization is the flame-ionization of a common fire, and the browning reactions in common food items induced by infrared radiation, during broiling-type cooking.
The electromagnetic spectrum is the range of all possible electromagnetic radiation frequencies. The electromagnetic spectrum (usually just spectrum) of an object is the characteristic distribution of electromagnetic radiation emitted by, or absorbed by, that particular object.
The non-ionizing portion of electromagnetic radiation consists of electromagnetic waves that (as individual quanta or particles, see photon) are not energetic enough to detach electrons from atoms or molecules and hence cause their ionization. These include radio waves, microwaves, infrared, and (sometimes) visible light. The lower frequencies of ultraviolet light may cause chemical changes and molecular damage similar to ionization, but are technically not ionizing. The highest frequencies of ultraviolet light, as well as all X-rays and gamma rays, are ionizing.
The occurrence of ionization depends on the energy of the individual particles or waves, and not on their number. An intense flood of particles or waves will not cause ionization if these particles or waves do not carry enough energy to be ionizing, unless they raise the temperature of a body to a point high enough to ionize small fractions of atoms or molecules by the process of thermal-ionization (this, however, requires relatively extreme radiation intensities).
Ultraviolet light
As noted above, the lower part of the spectrum of ultraviolet, called soft UV, from 3 eV to about 10 eV, is non-ionizing. However, the effects of non-ionizing ultraviolet on chemistry and the damage to biological systems exposed to it (including oxidation, mutation, and cancer) are such that even this part of ultraviolet is often compared with ionizing radiation.
Visible light
Light, or visible light, is a very narrow range of electromagnetic radiation of a wavelength that is visible to the human eye (380–750 nm), which equates to a frequency range of 790 to 400 THz respectively. More broadly, physicists use the term "light" to mean electromagnetic radiation of all wavelengths, whether visible or not.
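A short conversion using c = λν (a rough sketch with a rounded speed of light) shows how the quoted 380–750 nm range maps onto roughly 790–400 THz:

```python
c = 2.998e8  # speed of light, m/s

for nm in (380, 750):
    freq_THz = c / (nm * 1e-9) / 1e12
    print(nm, "nm ->", round(freq_THz), "THz")  # ~789 THz and ~400 THz
```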
Infrared
Infrared (IR) light is electromagnetic radiation with a wavelength between 0.7 and 300 μm, which corresponds to a frequency range between 430 THz and 1 THz respectively. IR wavelengths are longer than that of visible light, but shorter than that of microwaves. Infrared may be detected at a distance from the radiating objects by "feel". Infrared sensing snakes can detect and focus infrared by use of a pinhole lens in their heads, called "pits". Bright sunlight provides an irradiance of just over 1 kW/m² at sea level. Of this energy, 53% is infrared radiation, 44% is visible light, and 3% is ultraviolet radiation.
Microwave
Microwaves are electromagnetic waves with wavelengths ranging from as short as 1 mm to as long as 1 m, which equates to a frequency range of 300 MHz to 300 GHz. This broad definition includes both UHF and EHF (millimetre waves), but various sources use other limits. In all cases, microwaves include the entire super high frequency band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3 mm).
Radio waves
Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Like all other electromagnetic waves, they travel at the speed of light. Naturally occurring radio waves are made by lightning, or by certain astronomical objects. Artificially generated radio waves are used for fixed and mobile radio communication, broadcasting, radar and other navigation systems, satellite communication, computer networks and innumerable other applications. In addition, almost any wire carrying alternating current will radiate some of the energy away as radio waves; these are mostly termed interference. Different frequencies of radio waves have different propagation characteristics in the Earth's atmosphere; long waves may bend around the curvature of the Earth and may cover a part of the Earth very consistently, while shorter waves travel around the world by multiple reflections off the ionosphere and the Earth. Much shorter wavelengths bend or reflect very little and travel along the line of sight.
Very low frequency
Very low frequency (VLF) refers to a frequency range of 3 kHz to 30 kHz, which corresponds to wavelengths of 100 km to 10 km respectively. Since there is not much bandwidth in this range of the radio spectrum, only the very simplest signals can be transmitted, such as for radio navigation. It is also known as the myriametre band or myriametre wave, as the wavelengths range from ten to one myriametre (an obsolete metric unit equal to 10 km).
Extremely low frequency
Extremely low frequency (ELF) is radiation frequencies from 3 to 30 Hz (10⁸ to 10⁷ m respectively). In atmospheric science, an alternative definition is usually given, from 3 Hz to 3 kHz. In the related magnetosphere science, the lower frequency electromagnetic oscillations (pulsations occurring below ~3 Hz) are considered to lie in the ULF range, which is thus also defined differently from the ITU Radio Bands. A massive military ELF antenna in Michigan radiates very slow messages to otherwise unreachable receivers, such as submerged submarines.
Thermal radiation (heat)
Thermal radiation is a common synonym for infrared radiation emitted by objects at temperatures often encountered on Earth. Thermal radiation refers not only to the radiation itself, but also the process by which the surface of an object radiates its thermal energy in the form of black-body radiation. Infrared or red radiation from a common household radiator or electric heater is an example of thermal radiation, as is the heat emitted by an operating incandescent light bulb. Thermal radiation is generated when energy from the movement of charged particles within atoms is converted to electromagnetic radiation.
As noted above, even low-frequency thermal radiation may cause temperature-ionization whenever it deposits sufficient thermal energy to raise temperatures to a high enough level. Common examples of this are the ionization (plasma) seen in common flames, and the molecular changes caused by the "browning" during food-cooking, which is a chemical process that begins with a large component of ionization.
Black-body radiation
Black-body radiation is an idealized spectrum of radiation emitted by a body that is at a uniform temperature. The shape of the spectrum and the total amount of energy emitted by the body is a function of the absolute temperature of that body. The radiation emitted covers the entire electromagnetic spectrum and the intensity of the radiation (power/unit-area) at a given frequency is described by Planck's law of radiation. For a given temperature of a black-body there is a particular frequency at which the radiation emitted is at its maximum intensity. That maximum radiation frequency moves toward higher frequencies as the temperature of the body increases. The frequency at which the black-body radiation is at maximum is given by Wien's displacement law and is a function of the body's absolute temperature. A black-body is one that emits at any temperature the maximum possible amount of radiation at any given wavelength. A black-body will also absorb the maximum possible incident radiation at any given wavelength. A black-body with a temperature at or below room temperature would thus appear absolutely black, as it would not reflect any incident light nor would it emit enough radiation at visible wavelengths for our eyes to detect. Theoretically, a black-body emits electromagnetic radiation over the entire spectrum from very low frequency radio waves to x-rays, creating a continuum of radiation.
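To make Wien's displacement law concrete, the following minimal sketch (using the standard value of Wien's constant; the two example temperatures are illustrative choices, not from the text) computes the wavelength of peak emission:

```python
b = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_K):
    # Wavelength of maximum spectral radiance for a black-body at this temperature.
    return b / temperature_K * 1e9

print(round(peak_wavelength_nm(300)))   # ~9660 nm: room-temperature bodies peak in the infrared
print(round(peak_wavelength_nm(5800)))  # ~500 nm: a Sun-like surface peaks in visible light
```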
The color of a radiating black-body tells the temperature of its radiating surface. It is responsible for the color of stars, which vary from infrared through red, to yellow, to white and to blue-white as the peak radiance passes through those points in the visible spectrum. When the peak is below the visible spectrum the body is black, while when it is above the body is blue-white, since all the visible colors are represented from blue decreasing to red.
Discovery
Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to the astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel, like Ritter after him, used a prism to refract light from the Sun and detected the infrared (beyond the red part of the spectrum) through an increase in the temperature recorded by a thermometer.
In 1801, the German physicist Johann Wilhelm Ritter made the discovery of ultraviolet by noting that the rays from a prism darkened silver chloride preparations more quickly than violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the UV rays were capable of causing chemical reactions.
The first radio waves detected were not from a natural source, but were produced deliberately and artificially by the German scientist Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations in the radio frequency range, following formulas suggested by the equations of James Clerk Maxwell.
Wilhelm Röntgen discovered and named X-rays. While experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. Within a month, he discovered the main properties of X-rays that we understand to this day.
In 1896, Henri Becquerel found that rays emanating from certain minerals penetrated black paper and caused fogging of an unexposed photographic plate. His doctoral student Marie Curie discovered that only certain chemical elements gave off these rays of energy. She named this behavior radioactivity.
Alpha rays (alpha particles) and beta rays (beta particles) were differentiated by Ernest Rutherford through simple experimentation in 1899. Rutherford used a generic pitchblende radioactive source and determined that the rays produced by the source had differing penetrations in materials. One type had short penetration (it was stopped by paper) and a positive charge, which Rutherford named alpha rays. The other was more penetrating (able to expose film through paper but not metal) and had a negative charge, and this type Rutherford named beta. This was the radiation that had been first detected by Becquerel from uranium salts. In 1900, the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays.
Henri Becquerel himself proved that beta rays are fast electrons, while Rutherford and Thomas Royds proved in 1909 that alpha particles are ionized helium. Rutherford and Edward Andrade proved in 1914 that gamma rays are like X-rays, but with shorter wavelengths.
Cosmic ray radiations striking the Earth from outer space were finally definitively recognized and proven to exist in 1912, as the scientist Victor Hess carried an electrometer to various altitudes in a free balloon flight. The nature of these radiations was only gradually understood in later years.
The neutron and neutron radiation were discovered by James Chadwick in 1932. A number of other high energy particulate radiations such as positrons, muons, and pions were discovered by cloud chamber examination of cosmic ray reactions shortly thereafter, and other types of particle radiation were produced artificially in particle accelerators, through the last half of the twentieth century.
Applications
Medicine
Radiation and radioactive substances are used for diagnosis, treatment, and research. X-rays, for example, pass through muscles and other soft tissue but are stopped by dense materials. This property of X-rays enables doctors to find broken bones and to locate cancers that might be growing in the body. Doctors also find certain diseases by injecting a radioactive substance and monitoring the radiation given off as the substance moves through the body. Radiation used for cancer treatment is called ionizing radiation because it forms ions in the cells of the tissues it passes through as it dislodges electrons from atoms. This can kill cells or change genes so the cells cannot grow. Other forms of radiation such as radio waves, microwaves, and light waves are called non-ionizing. They do not have as much energy so they are not able to ionize cells.
Communication
All modern communication systems use forms of electromagnetic radiation. Variations in the intensity of the radiation represent changes in the sound, pictures, or other information being transmitted. For example, a human voice can be sent as a radio wave or microwave by making the wave vary to corresponding variations in the voice. Musicians have also experimented with gamma ray sonification, using nuclear radiation to produce sound and music.
Science
Researchers use radioactive atoms to determine the age of materials that were once part of a living organism. The age of such materials can be estimated by measuring the amount of radioactive carbon they contain in a process called radiocarbon dating. Similarly, using other radioactive elements, the age of rocks and other geological features (even some man-made objects) can be determined; this is called radiometric dating. Environmental scientists use radioactive atoms, known as tracer atoms, to identify the pathways taken by pollutants through the environment.
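As a minimal sketch of the arithmetic behind radiocarbon dating (assuming a carbon-14 half-life of about 5,730 years; the function and parameter names are illustrative), a measured fraction of remaining C-14 translates into an age as follows:

```python
import math

def radiocarbon_age_years(fraction_remaining, half_life_years=5730.0):
    # Solve fraction = 0.5 ** (t / half_life) for t.
    return -half_life_years * math.log(fraction_remaining) / math.log(2)

print(round(radiocarbon_age_years(0.5)))   # ~5730 years: one half-life has elapsed
print(round(radiocarbon_age_years(0.25)))  # ~11460 years: two half-lives have elapsed
```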
Radiation is used to determine the composition of materials in a process called neutron activation analysis. In this process, scientists bombard a sample of a substance with particles called neutrons. Some of the atoms in the sample absorb neutrons and become radioactive. The scientists can identify the elements in the sample by studying the emitted radiation.
Possible damage to health and environment from certain types of radiation
Radiation is not always dangerous, and not all types of radiation are equally dangerous, contrary to several common medical myths. For example, although bananas contain naturally occurring radioactive isotopes, particularly potassium-40 (40K), which emit ionizing radiation when undergoing radioactive decay, the levels of such radiation are far too low to induce radiation poisoning, and bananas are not a radiation hazard. It would not be physically possible to eat enough bananas to cause radiation poisoning, as the radiation dose from bananas is non-cumulative. Radiation is ubiquitous on Earth, and humans are adapted to survive at the normal low-to-moderate levels of radiation found on Earth's surface. The relationship between dose and toxicity is often non-linear, and many substances that are toxic at very high doses actually have neutral or positive health effects, or are biologically essential, at moderate or low doses. There is some evidence to suggest that this is true for ionizing radiation: normal levels of ionizing radiation may serve to stimulate and regulate the activity of DNA repair mechanisms. High enough levels of any kind of radiation will eventually become lethal, however.
Ionizing radiation in certain conditions can damage living organisms, causing cancer or genetic damage.
Non-ionizing radiation in certain conditions also can cause damage to living organisms, such as burns. In 2011, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) released a statement adding radio frequency electromagnetic fields (including microwave and millimetre waves) to their list of things which are possibly carcinogenic to humans.
RWTH Aachen University's EMF-Portal web site presents one of the largest databases on the effects of electromagnetic radiation. As of 12 July 2019, it contained 28,547 publications and 6,369 summaries of individual scientific studies on the effects of electromagnetic fields.
Environmental radioactivity
On Earth there are different sources of radiation, natural as well as artificial. Natural radiation can come from the Sun, Earth itself, or from cosmic radiation.
| Physical sciences | Basics_6 | null |
25875 | https://en.wikipedia.org/wiki/Rheumatoid%20arthritis | Rheumatoid arthritis | Rheumatoid arthritis (RA) is a long-term autoimmune disorder that primarily affects joints. It typically results in warm, swollen, and painful joints. Pain and stiffness often worsen following rest. Most commonly, the wrist and hands are involved, with the same joints typically involved on both sides of the body. The disease may also affect other parts of the body, including skin, eyes, lungs, heart, nerves, and blood. This may result in a low red blood cell count, inflammation around the lungs, and inflammation around the heart. Fever and low energy may also be present. Often, symptoms come on gradually over weeks to months.
While the cause of rheumatoid arthritis is not clear, it is believed to involve a combination of genetic and environmental factors. The underlying mechanism involves the body's immune system attacking the joints. This results in inflammation and thickening of the joint capsule. It also affects the underlying bone and cartilage. The diagnosis is made mostly on the basis of a person's signs and symptoms. X-rays and laboratory testing may support a diagnosis or exclude other diseases with similar symptoms. Other diseases that may present similarly include systemic lupus erythematosus, psoriatic arthritis, and fibromyalgia among others.
The goals of treatment are to reduce pain, decrease inflammation, and improve a person's overall functioning. This may be helped by balancing rest and exercise, the use of splints and braces, or the use of assistive devices. Pain medications, steroids, and NSAIDs are frequently used to help with symptoms. Disease-modifying antirheumatic drugs (DMARDs), such as hydroxychloroquine and methotrexate, may be used to try to slow the progression of disease. Biological DMARDs may be used when the disease does not respond to other treatments. However, they may have a greater rate of adverse effects. Surgery to repair, replace, or fuse joints may help in certain situations.
RA affects about 24.5 million people as of 2015. This is 0.5–1% of adults in the developed world with between 5 and 50 per 100,000 people newly developing the condition each year. Onset is most frequent during middle age and women are affected 2.5 times as frequently as men. It resulted in 38,000 deaths in 2013, up from 28,000 deaths in 1990. The first recognized description of RA was made in 1800 by Dr. Augustin Jacob Landré-Beauvais (1772–1840) of Paris. The term rheumatoid arthritis is based on the Greek for watery and inflamed joints.
Signs and symptoms
RA primarily affects joints, but it also affects other organs in more than 15–25% of cases. Associated problems include cardiovascular disease, osteoporosis, interstitial lung disease, infection, cancer, feeling tired, depression, mental difficulties, and trouble working.
Joints
Arthritis of joints involves inflammation of the synovial membrane. Joints become swollen, tender and warm, and stiffness limits their movement. With time, multiple joints are affected (polyarthritis). Most commonly involved are the small joints of the hands, feet and cervical spine, but larger joints like the shoulder and knee can also be involved. Synovitis can lead to tethering of tissue with loss of movement and erosion of the joint surface causing deformity and loss of function. The fibroblast-like synoviocytes (FLS), highly specialized mesenchymal cells found in the synovial membrane, have an active and prominent role in these pathogenic processes of the rheumatic joints.
RA typically manifests with signs of inflammation, with the affected joints being swollen, warm, painful and stiff, particularly early in the morning on waking or following prolonged inactivity. Increased stiffness early in the morning is often a prominent feature of the disease and typically lasts for more than an hour. Gentle movements may relieve symptoms in early stages of the disease. These signs help distinguish rheumatoid from non-inflammatory problems of the joints, such as osteoarthritis. In arthritis of non-inflammatory causes, signs of inflammation and early morning stiffness are less prominent.
The pain associated with RA is induced at the site of inflammation and classified as nociceptive as opposed to neuropathic. The joints are often affected in a fairly symmetrical fashion, although this is not specific, and the initial presentation may be asymmetrical.
As the pathology progresses, the inflammatory activity leads to tendon tethering and erosion and destruction of the joint surface, which impairs range of movement and leads to deformity. The fingers may develop almost any deformity depending on which joints are most involved. Specific deformities, which also occur in osteoarthritis, include ulnar deviation, boutonniere deformity (also "buttonhole deformity", flexion of proximal interphalangeal joint and extension of distal interphalangeal joint of the hand), swan neck deformity (hyperextension at proximal interphalangeal joint and flexion at distal interphalangeal joint) and "Z-thumb." "Z-thumb" or "Z-deformity" consists of hyperextension of the interphalangeal joint, fixed flexion and subluxation of the metacarpophalangeal joint and gives a "Z" appearance to the thumb. The hammer toe deformity may be seen. In the worst case, the condition is known as arthritis mutilans due to the mutilating nature of the deformities.
Skin
The rheumatoid nodule, which is sometimes in the skin, is the most common non-joint feature and occurs in 30% of people who have RA. It is a type of inflammatory reaction known to pathologists as a "necrotizing granuloma". The initial pathologic process in nodule formation is unknown but may be essentially the same as the synovitis, since similar structural features occur in both. The nodule has a central area of fibrinoid necrosis that may be fissured and which corresponds to the fibrin-rich necrotic material found in and around an affected synovial space. Surrounding the necrosis is a layer of palisading macrophages and fibroblasts, corresponding to the intimal layer in synovium and a cuff of connective tissue containing clusters of lymphocytes and plasma cells, corresponding to the subintimal zone in synovitis. The typical rheumatoid nodule may be a few millimetres to a few centimetres in diameter and is usually found over bony prominences, such as the elbow, the heel, the knuckles, or other areas that sustain repeated mechanical stress. Nodules are associated with a positive RF (rheumatoid factor) titer, ACPA, and severe erosive arthritis. Rarely, these can occur in internal organs or at diverse sites on the body.
Several forms of vasculitis occur in RA, but are mostly seen with long-standing and untreated disease. The most common presentation is due to involvement of small- and medium-sized vessels. Rheumatoid vasculitis can thus commonly present with skin ulceration and vasculitic nerve infarction known as mononeuritis multiplex.
Other, rather rare, skin associated symptoms include pyoderma gangrenosum, Sweet's syndrome, drug reactions, erythema nodosum, lobe panniculitis, atrophy of finger skin, palmar erythema, and skin fragility (often worsened by corticosteroid use).
Diffuse alopecia areata (Diffuse AA) occurs more commonly in people with rheumatoid arthritis. RA is also seen more often in those with relatives who have AA.
Lungs
Lung fibrosis is a recognized complication of rheumatoid arthritis. It is also a rare but well-recognized consequence of therapy (for example with methotrexate and leflunomide). Caplan's syndrome describes lung nodules in individuals with RA and additional exposure to coal dust. Exudative pleural effusions are also associated with RA.
Heart and blood vessels
People with RA are more prone to atherosclerosis, and risk of myocardial infarction (heart attack) and stroke is markedly increased.
Other possible complications that may arise include: pericarditis, endocarditis, left ventricular failure, valvulitis and fibrosis. Many people with RA do not experience the same chest pain that others feel when they have angina or myocardial infarction. To reduce cardiovascular risk, it is crucial to maintain optimal control of the inflammation caused by RA (which may be involved in causing the cardiovascular risk), and to use exercise and medications appropriately to reduce other cardiovascular risk factors such as blood lipids and blood pressure. Doctors who treat people with RA should be sensitive to cardiovascular risk when prescribing anti-inflammatory medications, and may want to consider prescribing routine use of low doses of aspirin if the gastrointestinal effects are tolerable.
Blood
Anemia is by far the most common abnormality of the blood cells, which can be caused by a variety of mechanisms. The chronic inflammation caused by RA leads to raised hepcidin levels, leading to anemia of chronic disease where iron is poorly absorbed and also sequestered into macrophages. The red cells are of normal size and color (normocytic and normochromic).
A low white blood cell count usually only occurs in people with Felty's syndrome with an enlarged liver and spleen. The mechanism of neutropenia is complex. An increased platelet count occurs when inflammation is uncontrolled.
Other
The role of the circadian clock in rheumatoid arthritis suggests a correlation between an early morning rise in circulating levels of pro-inflammatory cytokines, such as interleukin-6, and painful morning joint stiffness.
Kidneys
Renal amyloidosis can occur as a consequence of untreated chronic inflammation. Treatment with penicillamine or gold salts such as sodium aurothiomalate are recognized causes of membranous nephropathy.
Eyes
The eye can be directly affected in the form of episcleritis or scleritis, which when severe can very rarely progress to perforating scleromalacia. Rather more common is the indirect effect of keratoconjunctivitis sicca, which is a dryness of eyes and mouth caused by lymphocyte infiltration of lacrimal and salivary glands. When severe, dryness of the cornea can lead to keratitis and loss of vision as well as being painful. Preventive treatment of severe dryness with measures such as nasolacrimal duct blockage is important.
Liver
Liver problems in people with rheumatoid arthritis may be due to the underlying disease process or as a result of the medications used to treat the disease. A coexisting autoimmune liver disease, such as primary biliary cirrhosis or autoimmune hepatitis may also cause problems.
Neurological
Peripheral neuropathy and mononeuritis multiplex may occur. The most common problem is carpal tunnel syndrome caused by compression of the median nerve by swelling around the wrist.
Rheumatoid disease of the spine can lead to myelopathy.
Atlanto-axial subluxation can occur, owing to erosion of the odontoid process or transverse ligaments in the cervical spine connection to the skull. Such an erosion (>3mm) can give rise to vertebrae slipping over one another and compressing the spinal cord. Clumsiness is initially experienced, but without due care, this can progress to quadriplegia or even death.
Vertigo may be associated with rheumatoid arthritis via the following associations that can cause vertigo:
Ménière's disease
"Biologic disease-modifying antirheumatic drugs" This may not happen in the absence of infection.
Atlanto-axial joint instability can cause symptoms including vertigo and sudden death.
Atypical Cogan's syndrome may be associated with rheumatoid arthritis.
Constitutional symptoms
Constitutional symptoms including fatigue, low grade fever, malaise, morning stiffness, loss of appetite and loss of weight are common systemic manifestations seen in people with active RA.
Bones
Local osteoporosis occurs in RA around inflamed joints. It is postulated to be partially caused by inflammatory cytokines. More general osteoporosis is probably contributed to by immobility, systemic cytokine effects, local cytokine release in bone marrow and corticosteroid therapy.
Cancer
The incidence of lymphoma is increased, although it is uncommon and associated with the chronic inflammation, not the treatment of RA. The risk of non-melanoma skin cancer is increased in people with RA compared to the general population, an association possibly due to the use of immunosuppression agents for treating RA.
Teeth
Periodontitis and tooth loss are common in people with rheumatoid arthritis.
Risk factors
RA is a systemic (whole body) autoimmune disease. Some genetic and environmental factors affect the risk for RA.
Genetic
Worldwide, RA affects approximately 1% of the adult population and occurs in one in 1,000 children. Studies show that RA primarily affects individuals between the ages of 40–60 years and is seen more commonly in females. A family history of RA increases the risk around three to five times; as of 2016, it was estimated that genetics may account for 40–65% of cases of seropositive RA, but only around 20% for seronegative RA. RA is strongly associated with genes of the inherited tissue type major histocompatibility complex (MHC) antigen. HLA-DR4 is the major genetic factor implicated – the relative importance varies across ethnic groups.
Genome-wide association studies examining single-nucleotide polymorphisms have found around one hundred alleles associated with RA risk. Risk alleles within the HLA (particularly HLA-DRB1) genes harbor more risk than other loci. The HLA encodes proteins that control recognition of self- versus non-self molecules. Other risk loci include genes affecting co-stimulatory immune pathways (for example CD28 and CD40), cytokine signaling, lymphocyte receptor activation threshold (e.g., PTPN22), and innate immune activation; these loci appear to have less influence than HLA mutations.
Environmental
There are established epigenetic and environmental risk factors for RA. Smoking is an established risk factor for RA in Caucasian populations, increasing the risk three times compared to non-smokers, particularly in men, heavy smokers, and those who are rheumatoid factor positive. Modest alcohol consumption may be protective.
Silica exposure has been linked to RA.
Negative findings
No infectious agent has been consistently linked with RA and there is no evidence of disease clustering to indicate its infectious cause, but periodontal disease has been consistently associated with RA.
The many negative findings suggest that either the trigger varies, or that it might, in fact, be a chance event inherent with the immune response.
Pathophysiology
RA primarily starts as a state of persistent cellular activation leading to autoimmunity and immune complexes in joints and other organs where it manifests.
The clinical manifestations of disease are primarily inflammation of the synovial membrane and joint damage, and the fibroblast-like synoviocytes play a key role in these pathogenic processes. Three phases of progression of RA are an initiation phase (due to non-specific inflammation), an amplification phase (due to T cell activation), and chronic inflammatory phase, with tissue injury resulting from the cytokines, IL–1, TNF-alpha, and IL–6.
Non-specific inflammation
Factors allowing an abnormal immune response, once initiated, become permanent and chronic. These factors are genetic disorders which change regulation of the adaptive immune response. Genetic factors interact with environmental risk factors for RA, with cigarette smoking as the most clearly defined risk factor.
Other environmental and hormonal factors may explain higher risks for women, including onset after childbirth and hormonal medications. A possibility for increased susceptibility is that negative feedback mechanisms – which normally maintain tolerance – are overtaken by positive feedback mechanisms for certain antigens, such as IgG Fc bound by rheumatoid factor and citrullinated fibrinogen bound by antibodies to citrullinated peptides (ACPA – anti-citrullinated protein antibody). A debate on the relative roles of B-cell produced immune complexes and T cell products in inflammation in RA has continued for 30 years, but neither cell is necessary at the site of inflammation, only autoantibodies to IgG Fc, known as rheumatoid factors, and ACPA, with ACPA having an 80% specificity for diagnosing RA. As with other autoimmune diseases, people with RA have abnormally glycosylated antibodies, which are believed to promote joint inflammation.
Amplification in the synovium
Once the generalized abnormal immune response has become established – which may take several years before any symptoms occur – plasma cells derived from B lymphocytes produce rheumatoid factors and ACPA of the IgG and IgM classes in large quantities. These activate macrophages through Fc receptor and complement binding, which is part of the intense inflammation in RA. Binding of an autoreactive antibody to the Fc receptors is mediated through the antibody's N-glycans, which are altered to promote inflammation in people with RA.
This contributes to local inflammation in a joint, specifically the synovium with edema, vasodilation and entry of activated T-cells, mainly CD4 in microscopically nodular aggregates and CD8 in microscopically diffuse infiltrates.
Synovial macrophages and dendritic cells function as antigen-presenting cells by expressing MHC class II molecules, which establishes the immune reaction in the tissue.
Chronic inflammation
The disease progresses by forming granulation tissue at the edges of the synovial lining, pannus with extensive angiogenesis and enzymes causing tissue damage. The fibroblast-like synoviocytes have a prominent role in these pathogenic processes. The synovium thickens, cartilage and underlying bone disintegrate, and the joint deteriorates, with raised calprotectin levels serving as a biomarker of these events.
Cytokines and chemokines attract and accumulate immune cells, i.e. activated T- and B cells, monocytes and macrophages from activated fibroblast-like synoviocytes, in the joint space. By signalling through RANKL and RANK, they eventually trigger osteoclast production, which degrades bone tissue. The fibroblast-like synoviocytes that are present in the synovium during rheumatoid arthritis display altered phenotype compared to the cells present in normal tissues. The aggressive phenotype of fibroblast-like synoviocytes in rheumatoid arthritis and the effect these cells have on the microenvironment of the joint can be summarized into hallmarks that distinguish them from healthy fibroblast-like synoviocytes. These hallmark features of fibroblast-like synoviocytes in rheumatoid arthritis are divided into seven cell-intrinsic hallmarks and four cell-extrinsic hallmarks. The cell-intrinsic hallmarks are: reduced apoptosis, impaired contact inhibition, increased migratory invasive potential, changed epigenetic landscape, temporal and spatial heterogeneity, genomic instability and mutations, and reprogrammed cellular metabolism. The cell-extrinsic hallmarks of FLS in RA are: promotes osteoclastogenesis and bone erosion, contributes to cartilage degradation, induces synovial angiogenesis, and recruits and stimulates immune cells.
Diagnosis
Imaging
X-rays of the hands and feet are generally performed when many joints are affected. In RA, there may be no changes in the early stages of the disease, or the x-ray may show osteopenia near the joint, soft tissue swelling, and a smaller than normal joint space. As the disease advances, there may be bony erosions and subluxation. Other medical imaging techniques such as magnetic resonance imaging (MRI) and ultrasound are also used in RA.
Technical advances in ultrasonography like high-frequency transducers (10 MHz or higher) have improved the spatial resolution of ultrasound images depicting 20% more erosions than conventional radiography. Color Doppler and power Doppler ultrasound are useful in assessing the degree of synovial inflammation as they can show vascular signals of active synovitis. This is important, since in the early stages of RA, the synovium is primarily affected, and synovitis seems to be the best predictive marker of future joint damage.
Blood tests
When RA is clinically suspected, a physician may test for rheumatoid factor (RF) and anti-citrullinated protein antibodies (ACPAs measured as anti-CCP antibodies).
The test is positive approximately two-thirds of the time, but a negative RF or CCP antibody does not rule out RA; rather, the arthritis is called seronegative, which occurs in approximately a third of people with RA. During the first year of illness, rheumatoid factor is more likely to be negative with some individuals becoming seropositive over time. RF is a non-specific antibody and seen in about 10% of healthy people, in many other chronic infections like hepatitis C, and chronic autoimmune diseases such as Sjögren's syndrome and systemic lupus erythematosus. Therefore, the test is not specific for RA.
Hence, new serological tests check for anti-citrullinated protein antibodies ACPAs. These tests are again positive in 61–75% of all RA cases, but with a specificity of around 95%. As with RF, ACPAs are many times present before symptoms have started.
By far the most common clinical test for ACPAs is the anti-cyclic citrullinated peptide (anti-CCP) ELISA. In 2008, a serological point-of-care test for the early detection of RA combined the detection of RF and anti-MCV with a sensitivity of 72% and specificity of 99.7%.
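To illustrate what such sensitivity and specificity figures mean in practice, the following rough sketch (the 1% prevalence is an assumed illustrative value, not taken from the text) applies Bayes' rule to estimate the positive predictive value of a test with these characteristics:

```python
sensitivity, specificity, prevalence = 0.72, 0.997, 0.01

true_positives = sensitivity * prevalence            # diseased people who test positive
false_positives = (1 - specificity) * (1 - prevalence)  # healthy people who test positive
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # ~0.71: even a highly specific test yields some false positives at low prevalence
```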
To improve the diagnostic capture rate in the early detection of patients with RA and to risk-stratify these individuals, the rheumatology field continues to seek complementary markers to both RF and anti-CCP. 14-3-3η (YWHAH) is one such marker that complements RF and anti-CCP, along with other serological measures like C-reactive protein. In a systematic review, 14-3-3η has been described as a welcome addition to the rheumatology field. The authors indicate that the serum-based 14-3-3η marker is additive to the armamentarium of existing tools available to clinicians, and that there is adequate clinical evidence to support its clinical benefits.
Other blood tests are usually done to differentiate from other causes of arthritis: the erythrocyte sedimentation rate (ESR), C-reactive protein, full blood count, kidney function, liver enzymes and other immunological tests (e.g., antinuclear antibody/ANA) are all performed at this stage. Elevated ferritin levels can reveal hemochromatosis, a mimic of RA, or be a sign of Still's disease, a seronegative, usually juvenile, variant of rheumatoid arthritis.
Classification criteria
In 2010, the ACR/EULAR Rheumatoid Arthritis Classification Criteria were introduced.
The new criteria are not diagnostic criteria, but are classification criteria to identify disease with a high likelihood of developing a chronic form. However, a score of 6 or greater unequivocally classifies a person with a diagnosis of rheumatoid arthritis.
These new classification criteria overruled the "old" ACR criteria of 1987 and are adapted for early RA diagnosis. The "new" classification criteria, jointly published by the American College of Rheumatology (ACR) and the European League Against Rheumatism (EULAR), establish a point value between 0 and 10. Four areas are covered in the diagnosis (a scoring sketch follows the list below):
joint involvement, designating the metacarpophalangeal joints, proximal interphalangeal joints, the interphalangeal joint of the thumb, second through fifth metatarsophalangeal joint and wrist as small joints, and shoulders, elbows, hip joints, knees, and ankles as large joints:
Involvement of 1 large joint gives 0 points
Involvement of 2–10 large joints gives 1 point
Involvement of 1–3 small joints (with or without involvement of large joints) gives 2 points
Involvement of 4–10 small joints (with or without involvement of large joints) gives 3 points
Involvement of more than 10 joints (with involvement of at least 1 small joint) gives 5 points
serological parameters – including the rheumatoid factor as well as ACPA – "ACPA" stands for "anti-citrullinated protein antibody":
Negative RF and negative ACPA gives 0 points
Low-positive RF or low-positive ACPA gives 2 points
High-positive RF or high-positive ACPA gives 3 points
acute phase reactants: 1 point for elevated erythrocyte sedimentation rate, ESR, or elevated CRP value (c-reactive protein)
duration of arthritis: 1 point for symptoms lasting six weeks or longer
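The point system above lends itself to a simple calculation; the following Python function is a minimal sketch of how the four areas combine (the function and parameter names are illustrative and not part of any official implementation), with the result compared against the threshold of 6 mentioned earlier:

```python
def acr_eular_2010_score(small_joints, large_joints, rf, acpa,
                         elevated_esr_or_crp, duration_weeks):
    # rf and acpa each take one of: "negative", "low", "high".
    total_joints = small_joints + large_joints

    # Joint involvement (0-5 points)
    if small_joints >= 1 and total_joints > 10:
        joint_points = 5
    elif 4 <= small_joints <= 10:
        joint_points = 3
    elif 1 <= small_joints <= 3:
        joint_points = 2
    elif 2 <= large_joints <= 10:
        joint_points = 1
    else:
        joint_points = 0  # a single large joint

    # Serology (0-3 points)
    if "high" in (rf, acpa):
        serology_points = 3
    elif "low" in (rf, acpa):
        serology_points = 2
    else:
        serology_points = 0

    acute_phase_points = 1 if elevated_esr_or_crp else 0
    duration_points = 1 if duration_weeks >= 6 else 0

    return joint_points + serology_points + acute_phase_points + duration_points

# Example: 5 small joints, high-positive ACPA, raised CRP, 8 weeks of symptoms
# scores 3 + 3 + 1 + 1 = 8, which is >= 6 and so classifies as RA.
print(acr_eular_2010_score(5, 0, "negative", "high", True, 8))
```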
The new criteria accommodate the growing understanding of RA and the improvements in diagnosing RA and disease treatment. In the "new" criteria, serology and autoimmune diagnostics carry major weight, as ACPA detection is appropriate to diagnose the disease in an early state, before joint destruction occurs. Destruction of the joints viewed in radiological images was a significant point of the ACR criteria from 1987. This criterion is no longer regarded as relevant, as this is just the type of damage that treatment is meant to avoid.
Differential diagnoses
Several other medical conditions can resemble RA, and need to be distinguished from it at the time of diagnosis:
Crystal-induced arthritis (gout and pseudogout) – usually involves particular joints (knee, MTP1, heels) and can be distinguished by aspiration of joint fluid if in doubt. Gout typically shows redness and an asymmetric distribution of affected joints; pain occurs at night, and the onset of pain takes less than an hour.
Osteoarthritis – distinguished by X-rays of the affected joints and blood tests, older age, onset of pain in less than an hour, asymmetric distribution of affected joints, and pain that worsens when the joint is used for longer periods.
Systemic lupus erythematosus (SLE) – distinguished by specific clinical symptoms and blood tests (antibodies against double-stranded DNA)
One of the several types of psoriatic arthritis resembles RA – nail changes and skin symptoms distinguish between them
Lyme disease causes erosive arthritis and may closely resemble RA – it may be distinguished by blood test in endemic areas
Reactive arthritis – asymmetrically involves heel, sacroiliac joints and large joints of the leg. It is usually associated with urethritis, conjunctivitis, iritis, painless buccal ulcers, and keratoderma blennorrhagica.
Axial spondyloarthritis (including ankylosing spondylitis) – this involves the spine, although an RA-like symmetrical small-joint polyarthritis may occur in the context of this condition.
Hepatitis C – RA-like symmetrical small-joint polyarthritis may occur in the context of this condition. Hepatitis C may also induce rheumatoid factor auto-antibodies.
Rarer causes which usually behave differently but may cause joint pains:
Sarcoidosis, amyloidosis, and Whipple's disease can also resemble RA.
Hemochromatosis may cause hand joint arthritis.
Acute rheumatic fever can be differentiated by a migratory pattern of joint involvement and evidence of antecedent streptococcal infection.
Bacterial arthritis (such as by Streptococcus) is usually asymmetric, while RA usually involves both sides of the body symmetrically.
Gonococcal arthritis (a bacterial arthritis) is also initially migratory and can involve tendons around the wrists and ankles.
Sometimes arthritis is in an undifferentiated stage (i.e. none of the above criteria is positive), even if synovitis is witnessed and assessed with ultrasound imaging.
Difficult-to-treat rheumatoid arthritis
Difficult-to-treat rheumatoid arthritis (D2T RA) is a specific classification of RA defined by the European League Against Rheumatism (EULAR).
Signs of illness:
Persistence of signs and symptoms
Drug resistance
Does not respond to two or more biological treatments
Does not respond to anti-rheumatic drugs with different mechanisms of action
Factors contributing to difficult-to-treat disease:
Genetic risk factors
Environmental factors (diet, smoking, physical activity)
Overweight and obesity
Genetic factors
Genetic factors such as HLA-DRB1, TRAF1, PSORS1C1 and microRNA 146a are associated with difficult-to-treat rheumatoid arthritis, and other gene polymorphisms appear to correlate with response to biologic disease-modifying anti-rheumatic drugs (bDMARDs). The FOXO3A gene region has also been reported to be associated with worse disease: the minor allele at FOXO3A elicits a differential monocyte response in RA patients and can increase production of pro-inflammatory cytokines, including TNFα. Polymorphisms in the STAT4, PTPN2, PSORS1C1 and TRAF3IP2 genes have been correlated with response to TNF inhibitors.
HLA-DR1 and HLA-DRB1 gene
The HLA-DRB1 gene is part of a family of genes called the human leukocyte antigen (HLA) complex. The HLA complex is the human version of the major histocompatibility complex (MHC). At least 2,479 different versions of the HLA-DRB1 gene have been identified. The presence of certain HLA-DRB1 alleles seems to predict radiographic damage, which may be partially mediated by ACPA development, as well as elevated serum inflammatory markers and a high swollen-joint count. HLA-DR1 is encoded by HLA-DRB1 risk alleles that share a conserved five-amino-acid sequence correlated with the development of anti-citrullinated protein antibodies; the HLA-DRB1 gene has the strongest correlation with disease development. Susceptibility to and outcome of rheumatoid arthritis (RA) may be associated with particular HLA-DR alleles, but these alleles vary among ethnic groups and geographic areas.
MicroRNAs
MicroRNAs are a factor in the development of difficult-to-treat disease. MicroRNAs usually operate as negative regulators of the expression of target proteins, and their concentrations increase after treatment with biologics (bDMARDs) or other anti-rheumatic drugs. Levels of miRNA before and after anti-TNFα/DMARD combination therapy are potential novel biomarkers for predicting and monitoring outcome. For instance, miRNA-16-5p, miRNA-23-3p, miRNA-125b-5p, miRNA-126-3p, miRNA-146a-5p and miRNA-223-3p were found to be significantly upregulated by anti-TNFα/DMARD combination therapy. Notably, only responder patients showed an increase in these miRNAs after therapy, paralleling the reduction of TNFα, interleukin (IL)-6, IL-17, rheumatoid factor (RF), and C-reactive protein (CRP).
Monitoring progression
Many tools can be used to monitor remission in rheumatoid arthritis.
DAS28: The Disease Activity Score of 28 joints is widely used as an indicator of RA disease activity and response to treatment. Joints included are (bilaterally): proximal interphalangeal joints (10 joints), metacarpophalangeal joints (10), wrists (2), elbows (2), shoulders (2) and knees (2). When looking at these joints, both the number of joints with tenderness upon touching (TEN28) and swelling (SW28) are counted. The erythrocyte sedimentation rate (ESR) is measured, and the affected person makes a subjective assessment (SA) of disease activity during the preceding 7 days on a scale between 0 and 100, where 0 is "no activity" and 100 is "highest activity possible". With these parameters, DAS28 is calculated as:
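The commonly used ESR-based form of the score is DAS28 = 0.56·√(TEN28) + 0.28·√(SW28) + 0.70·ln(ESR) + 0.014·SA (a variant based on C-reactive protein, with different coefficients, is also in use).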
From this, the disease activity of the affected person can be classified as follows:
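A minimal sketch combining the calculation with the commonly used activity cut-offs is shown below; the cut-off values (remission below 2.6, low below 3.2, moderate up to 5.1, high above 5.1) are the widely cited thresholds, supplied here for illustration rather than taken from the text above.

```python
# Illustrative sketch: DAS28-ESR computation and the commonly used activity
# cut-offs (remission < 2.6, low < 3.2, moderate <= 5.1, high > 5.1).
# Cut-off values are the widely cited ones and are assumed for illustration.
import math

def das28_esr(tender28: int, swollen28: int, esr: float, patient_global: float) -> float:
    """tender28/swollen28: joint counts out of 28; esr in mm/h; patient_global on 0-100."""
    return (0.56 * math.sqrt(tender28)
            + 0.28 * math.sqrt(swollen28)
            + 0.70 * math.log(esr)
            + 0.014 * patient_global)

def activity_class(score: float) -> str:
    if score < 2.6:
        return "remission"
    if score < 3.2:
        return "low disease activity"
    if score <= 5.1:
        return "moderate disease activity"
    return "high disease activity"

score = das28_esr(tender28=4, swollen28=3, esr=30, patient_global=40)
print(round(score, 2), activity_class(score))  # about 4.55 -> moderate disease activity
```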
It is not always a reliable indicator of treatment effect. One major limitation is that low-grade synovitis may be missed.
Other: Other tools to monitor remission in rheumatoid arthritis are: ACR-EULAR Provisional Definition of Remission of Rheumatoid arthritis, Simplified Disease Activity Index and Clinical Disease Activity Index. Some scores do not require input from a healthcare professional and allow self-monitoring by the person, like HAQ-DI.
Management
There is no cure for RA, but treatments can improve symptoms and slow the progress of the disease. Disease-modifying treatment has the best results when it is started early and aggressively. A systematic review found that combination therapy with tumor necrosis factor (TNF) and non-TNF biologics plus methotrexate (MTX) resulted in improved disease control, Disease Activity Score (DAS)-defined remission, and functional capacity compared with single treatment with either methotrexate or a biologic alone.
The goals of treatment are to minimize symptoms such as pain and swelling, to prevent bone deformity (for example, bone erosions visible on X-rays), and to maintain day-to-day functioning. This is primarily addressed with disease-modifying antirheumatic drugs (DMARDs) and dosed physical activity; analgesics and physical therapy may be used to help manage pain. RA should generally be treated with at least one specific anti-rheumatic medication, while combination therapies and corticosteroids are common in treatment. The use of benzodiazepines (such as diazepam) to treat the pain is not recommended, as it does not appear to help and is associated with risks.
Lifestyle
Regular exercise is recommended as both safe and useful to maintain muscle strength and overall physical function. Physical activity is beneficial for people with rheumatoid arthritis who experience fatigue; although there has been little to no evidence that exercise improves physical function in the long term, one study found that carefully dosed exercise produced significant improvements in patients with RA. Physical activity increases the production of synovial fluid, which lubricates the joints and reduces friction. Moderate effects have been found for aerobic exercise and resistance training on cardiovascular fitness and muscle strength in RA. Furthermore, physical activity had no detrimental side effects such as increased disease activity in any exercise dimension. It is uncertain whether eating or avoiding specific foods or other specific dietary measures helps improve symptoms, though several studies have shown that high-vegetable diets improve RA symptoms whereas high-meat diets make symptoms worse. Occupational therapy has a positive role to play in improving functional ability in people with rheumatoid arthritis. Weak evidence supports the use of wax baths (thermotherapy) to treat arthritis in the hands.
Educational approaches that inform people about tools and strategies available to help them cope with rheumatoid arthritis may improve a person's psychological status and level of depression in the shorter-term. The use of extra-depth shoes and molded insoles may reduce pain during weight-bearing activities such as walking. Insoles may also prevent the progression of bunions.
Disease-modifying agents
Disease-modifying antirheumatic drugs (DMARDs) are the primary treatment for RA. They are a diverse collection of drugs, grouped by use and convention. They have been found to improve symptoms, decrease joint damage, and improve overall functional abilities. DMARDs should be started early in the disease as they result in disease remission in approximately half of people and improved outcomes overall.
The following drugs are considered DMARDs: methotrexate, sulfasalazine, leflunomide, hydroxychloroquine, TNF inhibitors (certolizumab, adalimumab, infliximab and etanercept), abatacept, anakinra, and auranofin. Additionally, rituximab and tocilizumab are monoclonal antibodies and are also DMARDs. Use of tocilizumab is associated with a risk of increased cholesterol levels.
The most commonly used agent is methotrexate, with other frequently used agents including sulfasalazine and leflunomide. Leflunomide is effective when used for 6–12 months, with similar effectiveness to methotrexate when used for 2 years. Sulfasalazine also appears to be most effective in the short-term treatment of rheumatoid arthritis.
Hydroxychloroquine, in addition to its low toxicity profile, is considered effective for treatment of moderate RA symptoms.
Agents may be used in combination; however, people may experience greater side effects. Methotrexate is the most important and useful DMARD and is usually the first treatment. A combined approach with methotrexate and biologics improves ACR50, HAQ scores and RA remission rates. This benefit from combining methotrexate with biologics occurs both when the combination is the initial treatment and when the drugs are prescribed in a sequential or step-up manner. Triple therapy consisting of methotrexate, sulfasalazine and hydroxychloroquine may also effectively control disease activity. Adverse effects should be monitored regularly, with toxicities including gastrointestinal, hematologic, pulmonary, and hepatic effects. Side effects such as nausea, vomiting or abdominal pain can be reduced by taking folic acid.
Rituximab combined with methotrexate appears to be more effective in improving symptoms compared to methotrexate alone. Rituximab works by decreasing levels of B-cells (immune cells that are involved in inflammation). People taking rituximab had improved pain, function, reduced disease activity and reduced joint damage based on x-ray images. After 6 months, 21% more people had improvement in their symptoms using rituximab and methotrexate.
Biological agents should generally be used only if methotrexate and other conventional agents are not effective after a trial of three months. They are associated with a higher rate of serious infections as compared to other DMARDs. Biological DMARD agents used to treat rheumatoid arthritis include: tumor necrosis factor alpha inhibitors (TNF inhibitors) such as infliximab; interleukin 1 blockers such as anakinra, monoclonal antibodies against B cells such as rituximab, interleukin 6 blockers such as tocilizumab, and T cell co-stimulation blockers such as abatacept. They are often used in combination with either methotrexate or leflunomide. Biologic monotherapy or tofacitinib with methotrexate may improve ACR50, RA remission rates and function. Abatacept should not be used at the same time as other biologics. In those who are well controlled (low disease activity) on TNF inhibitors, decreasing the dose does not appear to affect overall function. Discontinuation of TNF inhibitors (as opposed to gradually lowering the dose) by people with low disease activity may lead to increased disease activity and may affect remission, damage that is visible on an x-ray, and a person's function. People should be screened for latent tuberculosis before starting any TNF inhibitor therapy to avoid reactivation of tuberculosis.
TNF inhibitors and methotrexate appear to have similar effectiveness when used alone, and better results are obtained when they are used together. Golimumab is effective when used with methotrexate. TNF inhibitors may have equivalent effectiveness, with etanercept appearing to be the safest. Injecting etanercept twice a week, in addition to methotrexate, may improve ACR50 and decrease radiographic progression for up to 3 years. Abatacept appears effective for RA, with 20% more people improving with treatment than without, but long-term safety studies are not yet available. Adalimumab slows the time to radiographic progression when used for 52 weeks. However, there is a lack of evidence to distinguish between the biologics available for RA. Issues with the biologics include their high cost and association with infections including tuberculosis. Use of biological agents may reduce fatigue; the mechanism by which biologics reduce fatigue is unclear.
Gold and cyclosporin
Sodium aurothiomalate, auranofin, and cyclosporin are less commonly used due to more frequent adverse effects. However, cyclosporin was found to be effective in progressive RA when used for up to one year.
Anti-inflammatory and analgesic agents
Glucocorticoids can be used in the short term and at the lowest dose possible for flare-ups and while waiting for slow-onset drugs to take effect. Combination of glucocorticoids and conventional therapy has shown a decrease in rate of erosion of bones. Steroids may be injected into affected joints during the initial period of RA, prior to the use of DMARDs or oral steroids.
Non-NSAID analgesics such as paracetamol may be used to help relieve pain symptoms; they do not change the underlying disease. The use of paracetamol may be associated with the risk of developing ulcers.
NSAIDs reduce both pain and stiffness in those with RA but do not affect the underlying disease and appear to have no effect on people's long-term disease course, and thus are no longer first-line agents. NSAIDs should be used with caution in those with gastrointestinal, cardiovascular, or kidney problems. Rofecoxib was withdrawn from the global market as its long-term use was associated with an increased risk of heart attacks and strokes. Use of methotrexate together with NSAIDs is safe, if adequate monitoring is done. COX-2 inhibitors, such as celecoxib, and NSAIDs are equally effective. A 2004 Cochrane review found that people preferred NSAIDs over paracetamol. However, it is yet to be clinically determined whether NSAIDs are more effective than paracetamol.
The neuromodulator agent topical capsaicin may be reasonable to use in an attempt to reduce pain. Nefopam by mouth and cannabis are not recommended as of 2012, as the risks of use appear to be greater than the benefits.
Limited evidence suggests the use of weak oral opioids but the adverse effects may outweigh the benefits.
Alternatively, physical therapy has been tested and shown as an effective aid in reducing pain in patients with RA. As most RA is detected early and treated aggressively, physical therapy plays more of a preventative and compensatory role, aiding in pain management alongside regular rheumatic therapy.
Surgery
Especially for affected fingers, hands, and wrists, synovectomy may be needed to prevent pain or tendon rupture when drug treatment has failed. Severely affected joints may require joint replacement surgery, such as knee replacement. Postoperatively, physiotherapy is always necessary. There is insufficient evidence to support surgical treatment on arthritic shoulders.
Physiotherapy
For people with RA, physiotherapy may be used together with medical management. This may include cold and heat application, electronic stimulation, and hydrotherapy. Although medications improve symptoms of RA, muscle function is not regained when disease activity is controlled.
Physiotherapy promotes physical activity. In RA, physical activity like exercise in the appropriate dosage (frequency, intensity, time, type, volume, progression) and physical activity promotion is effective in improving cardiovascular fitness, muscle strength, and maintaining a long-term active lifestyle. In the short term, resistance exercises, with or without range-of-motion exercises, improve self-reported hand functions. Physical activity promotion according to the public health recommendations should be an integral part of standard care for people with RA and other arthritic diseases. Additionally, the combination of physical activity and cryotherapy shows efficacy for disease activity and pain relief. The combination of aerobic activity and cryotherapy may be an innovative therapeutic strategy to improve the aerobic capacity of arthritis patients and consequently reduce their cardiovascular risk while minimizing pain and disease activity.
Compression gloves
Compression gloves are handwear designed to help prevent the occurrence of various medical disorders relating to blood circulation in the wrists and hands. They can be used to treat the symptoms of arthritis, though the medical benefits may be limited.
Alternative medicine
In general, there is not enough evidence to support any complementary health approaches for RA, with safety concerns for some of them. Some mind and body practices and dietary supplements may help people with symptoms and therefore may be beneficial additions to conventional treatments, but there is not enough evidence to draw conclusions. A systematic review of CAM modalities (excluding fish oil) found that "the available evidence does not support their current use in the management of RA." Studies showing beneficial effects in RA on a wide variety of CAM modalities are often affected by publication bias and are generally not high-quality evidence such as randomized controlled trials (RCTs).
A 2005 Cochrane review states that low level laser therapy can be tried to improve pain and morning stiffness due to rheumatoid arthritis as there are few side-effects.
There is limited evidence that tai chi might improve the range of motion of a joint in persons with rheumatoid arthritis. The evidence for acupuncture is inconclusive with it appearing to be equivalent to sham acupuncture.
A Cochrane review in 2002 showed some benefits of the electrical stimulation as a rehabilitation intervention to improve the power of the hand grip and help to resist fatigue. D‐penicillamine may provide similar benefits as DMARDs but it is also highly toxic. Low-quality evidence suggests the use of therapeutic ultrasound on arthritic hands. Potential benefits include increased grip strength, reduced morning stiffness and number of swollen joints. There is tentative evidence of benefit of transcutaneous electrical nerve stimulation (TENS) in RA. Acupuncture‐like TENS (AL-TENS) may decrease pain intensity and improve muscle power scores.
Low-quality evidence suggests people with active RA may benefit from assistive technology. This may include less discomfort and difficulty, such as when using an eye drop device. Balance training is of unclear benefit.
Dietary supplements
Fatty acids
There has been a growing interest in the role of long-chain omega-3 polyunsaturated fatty acids in reducing inflammation and alleviating the symptoms of RA. Metabolism of omega-3 polyunsaturated fatty acids produces docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA), which inhibit pro-inflammatory eicosanoids and cytokines (TNF-α, IL-1β and IL-6), decreasing both lymphocyte proliferation and reactive oxygen species. These studies showed evidence for significant clinical improvements in RA in inflammatory status and articular index. Gamma-linolenic acid, an omega-6 fatty acid, may reduce pain, tender joint count and stiffness, and is generally safe. For omega-3 polyunsaturated fatty acids (found in fish oil, flax oil and hemp oil), a meta-analysis reported a favorable effect on pain, although confidence in the effect was considered moderate. The same review reported less inflammation but no difference in joint function. A review examined the effect of marine oil omega-3 fatty acids on pro-inflammatory eicosanoid concentrations; leukotriene B4 (LTB4) was lowered in people with rheumatoid arthritis but not in those with non-autoimmune chronic diseases. Fish consumption has no association with RA. A fourth review limited inclusion to trials in which people consumed ≥2.7 g/day of omega-3 fatty acids for more than three months. Use of pain relief medication was decreased, but improvements in tender or swollen joints, morning stiffness and physical function were not changed. Collectively, the current evidence is not strong enough to determine that supplementation with omega-3 fatty acids or regular consumption of fish is an effective treatment for rheumatoid arthritis.
Herbal
The American College of Rheumatology states that no herbal medicines have health claims supported by high-quality evidence and thus they do not recommend their use. There is no scientific basis to suggest that herbal supplements advertised as "natural" are safer for use than conventional medications as both are chemicals. Herbal medications, although labelled "natural", may be toxic or fatal if consumed. Due to the false belief that herbal supplements are always safe, there is sometimes a hesitancy to report their use which may increase the risk of adverse reactions.
Pregnancy
More than 75% of women with rheumatoid arthritis have symptoms improve during pregnancy but may have symptoms worsen after delivery. Methotrexate and leflunomide are teratogenic (harmful to the foetus) and are not used in pregnancy. Women of childbearing age taking these drugs are advised to use contraception to avoid pregnancy and to discontinue the drugs if pregnancy is planned. Low-dose prednisolone, hydroxychloroquine and sulfasalazine are considered safe in pregnant women with rheumatoid arthritis. Prednisolone should be used with caution, as its side effects include infections and fractures.
Vaccinations
People with RA have an increased risk of infections and mortality, and recommended vaccinations can reduce these risks. The inactivated influenza vaccine should be received annually. The pneumococcal vaccine should be administered twice for people under the age of 65 and once for those over 65. Lastly, the live-attenuated zoster vaccine should be administered once after the age of 60, but is not recommended in people on a tumor necrosis factor alpha blocker.
Prognosis
The course of the disease varies greatly. Some people have mild short-term symptoms, but in most the disease is progressive for life. Around 25% will have subcutaneous nodules (known as rheumatoid nodules); this is associated with a poor prognosis.
Prognostic factors
Poor prognostic factors include:
Persistent synovitis
Early erosive disease
Extra-articular findings (including subcutaneous rheumatoid nodules)
Positive serum RF findings
Positive serum anti-CCP autoantibodies
Positive serum 14-3-3η (YWHAH) levels above 0.5 ng/ml
Carriership of HLA-DR4 "Shared Epitope" alleles
Family history of RA
Poor functional status
Socioeconomic factors
Elevated acute phase response (erythrocyte sedimentation rate [ESR], C-reactive protein [CRP])
Increased clinical severity.
Distance from primary care and specialist care in rural communities
Mortality
RA reduces lifespan on average by three to twelve years. Young age at onset, long disease duration, the presence of other health problems, and characteristics of severe RA (such as poor functional ability or overall health status, extensive joint damage on x-rays, the need for hospitalisation, or involvement of organs other than the joints) have been shown to be associated with higher mortality. Positive responses to treatment may indicate a better prognosis. A 2005 study by the Mayo Clinic noted that individuals with RA have a doubled risk of heart disease, independent of other risk factors such as diabetes, excessive alcohol use, and elevated cholesterol, blood pressure and body mass index. The mechanism by which RA causes this increased risk remains unknown; the presence of chronic inflammation has been proposed as a contributing factor. It is possible that the use of new biologic drug therapies extends the lifespan of people with RA and reduces the risk and progression of atherosclerosis. This is based on cohort and registry studies and still remains hypothetical. It is still uncertain whether biologics improve vascular function in RA or not. There was an increase in total cholesterol and HDLc levels and no improvement of the atherogenic index.
Epidemiology
RA affects 0.5–1% of adults in the developed world with between 5 and 50 per 100,000 people newly developing the condition each year. In 2010 it resulted in about 49,000 deaths globally.
Onset is uncommon under the age of 15 and from then on the incidence rises with age until the age of 80. Women are affected three to five times as often as men.
The disease most commonly starts in women between 40 and 50 years of age, and somewhat later in men. RA is a chronic disease, and although spontaneous remission may occur rarely, the usual course consists of persistent symptoms that wax and wane in intensity, along with continued deterioration of joint structures, leading to deformation and disability.
There is an association between periodontitis and rheumatoid arthritis (RA), hypothesised to lead to enhanced generation of RA-related autoantibodies. Oral bacteria that invade the blood may also contribute to chronic inflammatory responses and generation of autoantibodies.
History
The first recognized description of RA in modern medicine was in 1800 by the French physician Augustin Jacob Landré-Beauvais (1772–1840) who was based in the famed Salpêtrière Hospital in Paris. The name "rheumatoid arthritis" itself was coined in 1859 by British rheumatologist Alfred Baring Garrod.
The art of Peter Paul Rubens may possibly depict the effects of RA. In his later paintings, his rendered hands show, in the opinion of some physicians, increasing deformity consistent with the symptoms of the disease. RA appears to some to have been depicted in 16th-century paintings. However, it is generally recognized in art historical circles that the painting of hands in the 16th and 17th century followed certain stylized conventions, most clearly seen in the Mannerist movement. It was conventional, for instance, to show the upheld right hand of Christ in what now appears a deformed posture. These conventions are easily misinterpreted as portrayals of disease.
Historic (though not necessarily effective) treatments for RA have also included: rest, ice, compression and elevation, apple diet, nutmeg, some light exercise every now and then, nettles, bee venom, copper bracelets, rhubarb diet, extractions of teeth, fasting, honey, vitamins, insulin, magnets, and electroconvulsive therapy (ECT).
Etymology
Rheumatoid arthritis is derived from the Greek word ῥεύμα-rheuma (nom.), ῥεύματος-rheumatos (gen.) ("flow, current"). The suffix -oid ("resembling") gives the translation of joint inflammation that resembles rheumatic fever. Rheuma, which means watery discharge, might refer to the fact that the joints are swollen or that the disease may be made worse by wet weather.
Research
Meta-analysis found an association between periodontal disease and RA, but the mechanism of this association remains unclear. Two bacterial species associated with periodontitis are implicated as mediators of protein citrullination in the gums of people with RA.
Vitamin D deficiency is more common in people with rheumatoid arthritis than in the general population. However, whether vitamin D deficiency is a cause or a consequence of the disease remains unclear. One meta-analysis found that vitamin D levels are low in people with rheumatoid arthritis and that vitamin D status correlates inversely with prevalence of rheumatoid arthritis, suggesting that vitamin D deficiency is associated with susceptibility to rheumatoid arthritis.
The fibroblast-like synoviocytes have a prominent role in the pathogenic processes of the rheumatic joints, and therapies that target these cells are emerging as promising therapeutic tools, raising hope for future applications in rheumatoid arthritis.
Possible links with intestinal barrier dysfunction are investigated.
| Biology and health sciences | Specific diseases | Health |
25880 | https://en.wikipedia.org/wiki/Refractive%20index | Refractive index | In optics, the refractive index (or refraction index) of an optical medium is the ratio of the apparent speed of light in the air or vacuum to the speed in the medium. The refractive index determines how much the path of light is bent, or refracted, when entering a material. This is described by Snell's law of refraction, n₁ sin θ₁ = n₂ sin θ₂, where θ₁ and θ₂ are the angle of incidence and angle of refraction, respectively, of a ray crossing the interface between two media with refractive indices n₁ and n₂. The refractive indices also determine the amount of light that is reflected when reaching the interface, as well as the critical angle for total internal reflection, their intensity (Fresnel equations) and Brewster's angle.
The refractive index, n, can be seen as the factor by which the speed and the wavelength of the radiation are reduced with respect to their vacuum values: the speed of light in a medium is v = c/n, and similarly the wavelength in that medium is λ = λ₀/n, where λ₀ is the wavelength of that light in vacuum. This implies that vacuum has a refractive index of 1, and assumes that the frequency (f) of the wave is not affected by the refractive index.
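As a small numerical illustration of these relations, the sketch below takes an assumed medium with n ≈ 1.33 (roughly water) and the sodium D-line wavelength, and also applies Snell's law for a ray entering from vacuum; the specific values are chosen for illustration only.

```python
# Illustrative sketch of the relations described above, using an assumed medium
# with n ~ 1.33 (roughly water); the incident angle is arbitrary.
import math

C = 299_792_458.0           # speed of light in vacuum, m/s
n_medium = 1.33             # assumed refractive index
wavelength_vacuum = 589e-9  # sodium D-line, m

speed_in_medium = C / n_medium                       # v = c / n
wavelength_in_medium = wavelength_vacuum / n_medium  # lambda = lambda_0 / n

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2), here entering from vacuum (n1 = 1)
theta_incidence = math.radians(45.0)
theta_refraction = math.asin(math.sin(theta_incidence) / n_medium)

print(f"speed in medium: {speed_in_medium:.3e} m/s")            # about 2.254e8 m/s
print(f"wavelength in medium: {wavelength_in_medium*1e9:.0f} nm")  # about 443 nm
print(f"refraction angle: {math.degrees(theta_refraction):.1f} degrees")  # about 32.1
```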
The refractive index may vary with wavelength. This causes white light to split into constituent colors when refracted. This is called dispersion. This effect can be observed in prisms and rainbows, and as chromatic aberration in lenses. Light propagation in absorbing materials can be described using a complex-valued refractive index. The imaginary part then handles the attenuation, while the real part accounts for refraction. For most materials the refractive index changes with wavelength by several percent across the visible spectrum. Consequently, refractive indices for materials reported using a single value for must specify the wavelength used in the measurement.
The concept of refractive index applies across the full electromagnetic spectrum, from X-rays to radio waves. It can also be applied to wave phenomena such as sound. In this case, the speed of sound is used instead of that of light, and a reference medium other than vacuum must be chosen.
For lenses (such as eye glasses), a lens made from a high refractive index material will be thinner, and hence lighter, than a conventional lens with a lower refractive index. Such lenses are generally more expensive to manufacture than conventional ones.
Definition
The relative refractive index of an optical medium 2 with respect to another reference medium 1 (n₂₁) is given by the ratio of the speed of light in medium 1 to that in medium 2. This can be expressed as follows: n₂₁ = v₁ / v₂.
If the reference medium 1 is vacuum, then the refractive index of medium 2 is considered with respect to vacuum. It is simply represented as n₂ and is called the absolute refractive index of medium 2.
The absolute refractive index n of an optical medium is defined as the ratio of the speed of light in vacuum, c, and the phase velocity v of light in the medium: n = c/v.
Since c is constant, n is inversely proportional to v: n ∝ 1/v.
The phase velocity is the speed at which the crests or the phase of the wave moves, which may be different from the group velocity, the speed at which the pulse of light or the envelope of the wave moves. Historically air at a standardized pressure and temperature has been common as a reference medium.
History
Thomas Young was presumably the person who first used, and invented, the name "index of refraction", in 1807.
At the same time he changed this value of refractive power into a single number, instead of the traditional ratio of two numbers. The ratio had the disadvantage of appearing in different forms for different authors. Newton, who called it the "proportion of the sines of incidence and refraction", wrote it as a ratio of two numbers, like "529 to 396" (or "nearly 4 to 3"; for water). Hauksbee, who called it the "ratio of refraction", wrote it as a ratio with a fixed numerator, like "10000 to 7451.9" (for urine). Hutton wrote it as a ratio with a fixed denominator, like 1.3358 to 1 (water).
Young did not use a symbol for the index of refraction in 1807. In later years, others started using different symbols for it; the symbol n gradually prevailed.
Typical values
Refractive index also varies with the wavelength of the light, as given by Cauchy's equation. The most general form of this equation is n(λ) = A + B/λ² + C/λ⁴ + ⋯,
where n is the refractive index, λ is the wavelength, and A, B, C, etc., are coefficients that can be determined for a material by fitting the equation to measured refractive indices at known wavelengths. The coefficients are usually quoted for λ as the vacuum wavelength in micrometres.
Usually, it is sufficient to use a two-term form of the equation, n(λ) = A + B/λ²,
where the coefficients A and B are determined specifically for this form of the equation.
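As a numerical sketch, the two-term form can be evaluated directly. The coefficients below (A ≈ 1.5046, B ≈ 0.00420 μm²) are commonly quoted fitted values for a borosilicate crown glass and are used here only as an assumed example.

```python
# Illustrative sketch of the two-term Cauchy equation n(lambda) = A + B / lambda^2,
# with lambda in micrometres. The coefficients are commonly quoted fitted values
# for a borosilicate crown glass, assumed here as an example (not from the text).

def cauchy_two_term(wavelength_um: float, A: float = 1.5046, B: float = 0.00420) -> float:
    return A + B / wavelength_um**2

for wl in (0.486, 0.589, 0.656):   # blue, yellow (sodium D) and red lines, in micrometres
    print(f"{wl*1000:.0f} nm: n = {cauchy_two_term(wl):.4f}")  # about 1.522, 1.517, 1.514
```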
For visible light most transparent media have refractive indices between 1 and 2. A few examples are given in the adjacent table. These values are measured at the yellow doublet D-line of sodium, with a wavelength of 589 nanometers, as is conventionally done. Gases at atmospheric pressure have refractive indices close to 1 because of their low density. Almost all solids and liquids have refractive indices above 1.3, with aerogel as the clear exception. Aerogel is a very low density solid that can be produced with refractive index in the range from 1.002 to 1.265. Moissanite lies at the other end of the range with a refractive index as high as 2.65. Most plastics have refractive indices in the range from 1.3 to 1.7, but some high-refractive-index polymers can have values as high as 1.76.
For infrared light refractive indices can be considerably higher. Germanium is transparent in part of the infrared region and has a refractive index of about 4. A new class of materials, termed "topological insulators", was recently found to have a high refractive index of up to 6 in the near- to mid-infrared frequency range. Moreover, topological insulators are transparent when they have nanoscale thickness. These properties are potentially important for applications in infrared optics.
Refractive index below unity
According to the theory of relativity, no information can travel faster than the speed of light in vacuum, but this does not mean that the refractive index cannot be less than 1. The refractive index measures the phase velocity of light, which does not carry information. The phase velocity is the speed at which the crests of the wave move and can be faster than the speed of light in vacuum, thereby giving a refractive index below 1. This can occur close to resonance frequencies, for absorbing media, in plasmas, and for X-rays. In the X-ray regime the refractive indices are lower than but very close to 1 (with exceptions close to some resonance frequencies).
As an example, water has a refractive index of for X-ray radiation at a photon energy of ( wavelength).
An example of a plasma with an index of refraction less than unity is Earth's ionosphere. Since the refractive index of the ionosphere (a plasma), is less than unity, electromagnetic waves propagating through the plasma are bent "away from the normal" (see Geometric optics) allowing the radio wave to be refracted back toward earth, thus enabling long-distance radio communications. | Physical sciences | Optics | null |
25897 | https://en.wikipedia.org/wiki/Road | Road | A road is a thoroughfare for the conveyance of traffic that mostly has an improved surface for use by vehicles (motorized and non-motorized) and pedestrians. Unlike streets, whose primary function is to serve as public spaces, the main function of roads is transportation.
There are many types of roads, including parkways, avenues, controlled-access highways (freeways, motorways, and expressways), tollways, interstates, highways, thoroughfares, and local roads.
The primary features of roads include lanes, sidewalks (pavement), roadways (carriageways), medians, shoulders, verges, bike paths (cycle paths), and shared-use paths.
Definitions
Historically many roads were simply recognizable routes without any formal construction or maintenance.
The Organization for Economic Co-operation and Development (OECD) defines a road as "a line of communication (travelled way) using a stabilized base other than rails or air strips open to public traffic, primarily for the use of road motor vehicles running on their own wheels", which includes "bridges, tunnels, supporting structures, junctions, crossings, interchanges, and toll roads, but not cycle paths".
The Eurostat, ITF and UNECE Glossary for Transport Statistics Illustrated defines a road as a "Line of communication (traveled way) open to public traffic, primarily for the use of road motor vehicles, using a stabilized base other than rails or air strips. [...] Included are paved roads and other roads with a stabilized base, e.g. gravel roads. Roads also cover streets, bridges, tunnels, supporting structures, junctions, crossings and interchanges. Toll roads are also included. Excluded are dedicated cycle lanes."
The 1968 Vienna Convention on Road Traffic defines a road as the entire surface of any way or street open to public traffic.
In urban areas roads may diverge through a city or village and be named as streets, serving a dual function as urban space easement and route. Modern roads are normally smoothed, paved, or otherwise prepared to allow easy travel.
Australia
Part 2, Division 1, clauses 11–13 of the National Transport Commission Regulations 2006 defines a road in Australia as 'an area that is open to or used by the public and is developed for, or has as one of its main uses, the driving or riding of motor vehicles.'
Further, it defines a shoulder (typically an area of the road outside the edge line, or the curb) and a road-related area which includes green areas separating roads, areas designated for cyclists and areas generally accessible to the public for driving, riding or parking vehicles.
New Zealand
In New Zealand, the definition of a road is broad in common law where the statutory definition includes areas the public has access to, by right or not. Beaches, publicly accessible car parks and yards (even if privately owned), river beds, road shoulders (verges), wharves and bridges are included. However, the definition of a road for insurance purposes may be restricted to reduce risk.
United Kingdom
In the United Kingdom The Highway Code details rules for "road users", but there is some ambiguity between the terms highway and road. For the purposes of the English law, Highways Act 1980, which covers England and Wales but not Scotland or Northern Ireland, road is "any length of highway or of any other road to which the public has access, and includes bridges over which a road passes". This includes footpaths, bridleways and cycle tracks, and also road and driveways on private land and many car parks. Vehicle Excise Duty, a road use tax, is payable on some vehicles used on the public road.
The definition of a road depends on the definition of a highway; there is no formal definition for a highway in the relevant Act. A 1984 ruling said "the land over which a public right of way exists is known as a highway; and although most highways have been made up into roads, and most easements of way exist over footpaths, the presence or absence of a made road has nothing to do with the distinction." Another legal view is that while a highway historically included footpaths, bridleways, driftways, etc., it can now be used to mean those ways that allow the movement of motor vehicles, and the term rights of way can be used to cover the wider usage.
United States
In the United States, laws distinguish between public roads, which are open to public use, and private roads, which are privately controlled.
History
The assertion that the first pathways were the trails made by animals has not been universally accepted; in many cases animals do not follow constant paths. Some believe that some roads originated from following animal trails. The Icknield Way may exemplify this type of road origination, where human and animal both selected the same natural line. By about 10,000 BC human travelers used rough roads/pathways.
The world's oldest known paved road was constructed in Egypt some time between 2600 and 2200 BC.
Corduroy roads (log roads) are found dating to 4000 BC in Glastonbury, England.
The Sweet Track, a timber track causeway in England, is one of the oldest engineered roads discovered and the oldest timber trackway discovered in Northern Europe. It was built in the winter of 3807 BC or the spring of 3806 BC (tree-ring dating – dendrochronology – enabled very precise dating) and was claimed to be the oldest road in the world until the 2009 discovery of a 6,000-year-old trackway in Plumstead, London.
In 500 BC, Darius I the Great started an extensive road system for the Achaemenid Empire (Persia), including the Royal Road, which was one of the finest highways of its time, connecting Sardis (the westernmost major city of the empire) to Susa. The road remained in use after Roman times. These road systems reached as far east as Bactria and India.
In ancient times, transport by river was far easier and faster than transport by road, especially considering the cost of road construction and the difference in carrying capacity between carts and river barges. A hybrid of road transport and ship transport beginning in about 1740 is the horse-drawn boat in which the horse follows a cleared path along the river bank.
From about 312 BC, the Roman Empire built straight strong stone Roman roads throughout Europe and North Africa, in support of its military campaigns. At its peak the Roman Empire was connected by 29 major roads moving out from Rome and covering 78,000 kilometers or 52,964 Roman miles of paved roads.
In the 8th century AD, many roads were built throughout the Arab Empire. The most sophisticated roads were those in Baghdad, which were paved with tar. Tar was derived from petroleum, accessed from oil fields in the region, through the chemical process of destructive distillation.
The Highways Act 1555 in Britain transferred responsibility for maintaining roads from government to local parishes. This resulted in a poor and variable state of roads. To remedy this, the first of the turnpike trusts was established around 1706, to build good roads and collect tolls from passing vehicles. Eventually there were approximately 1,100 trusts in Britain, responsible for a substantial mileage of engineered roads. The Rebecca Riots in Carmarthenshire and Rhayader from 1839 to 1844 contributed to a Royal Commission that led to the demise of the system in 1844, which coincided with the development of the UK railway system.
In the late-19th century roading engineers began to cater for cyclists by building separate lanes alongside roadways.
From the beginning of the 20th century, roads were increasingly built for tourism and also to create jobs. A typical example of the stimulation of tourism is the Great Dolomite Road, while the creation of the panoramic coastal road Strada Costiera between Duino and Barcola, Italy, in 1928 was very much focused on creating jobs.
The Autostrada dei Laghi ("Lakes Motorway") in Italy, the first controlled-access highway built in the world, connecting Milan to Lake Como and Lake Maggiore, and now parts of the A8 and A9 motorways, was devised by Piero Puricelli and was inaugurated in 1924. This motorway, called autostrada, contained only one lane in each direction and no interchanges.
Construction
In transport engineering, subgrade is the native material underneath a constructed road.
Road construction requires the creation of an engineered continuous right-of-way or roadbed, overcoming geographic obstacles and having grades low enough to permit vehicle or foot travel, and may be required to meet standards set by law or official guidelines. The process is often begun with the removal of earth and rock by digging or blasting, construction of embankments, bridges and tunnels, and removal of vegetation (this may involve deforestation) and followed by the laying of pavement material. A variety of road building equipment is employed in road building.
After design, approval, planning, legal, and environmental considerations have been addressed, the alignment of the road is set out by a surveyor. The radii and gradient are designed and staked out to best suit the natural ground levels and minimize the amount of cut and fill. Great care is taken to preserve reference benchmarks.
Roads are designed and built for primary use by vehicular and pedestrian traffic. Storm drainage and environmental considerations are a major concern. Erosion and sediment controls are constructed to prevent detrimental effects. Drainage lines are laid with sealed joints in the road easement with runoff coefficients and characteristics adequate for the land zoning and storm water system. Drainage systems must be capable of carrying the ultimate design flow from the upstream catchment with approval for the outfall from the appropriate authority to a watercourse, creek, river or the sea for drainage discharge.
A borrow pit (a source for obtaining fill, gravel, and rock) and a water source should be located near or within reasonable distance of the road construction site. Approval from local authorities may be required to draw water or for working (crushing and screening) of materials for construction needs. The topsoil and vegetation are removed from the borrow pit and stockpiled for subsequent rehabilitation of the extraction area. Side slopes in the excavation area should be no steeper than one vertical to two horizontal for safety reasons.
Old road surfaces, fences, and buildings may need to be removed before construction can begin. Trees in the road construction area may be marked for retention. These protected trees should not have the topsoil within the area of the tree's drip line removed and the area should be kept clear of construction material and equipment. Compensation or replacement may be required if a protected tree is damaged. Much of the vegetation may be mulched and put aside for use during reinstatement. The topsoil is usually stripped and stockpiled nearby for rehabilitation of newly constructed embankments along the road. Stumps and roots are removed and holes filled as required before the earthwork begins. Final rehabilitation after road construction is completed will include seeding, planting, watering and other activities to reinstate the area to be consistent with the untouched surrounding areas.
Processes during earthwork include excavation, removal of material to spoil, filling, compacting, construction and trimming. If rock or other unsuitable material is discovered it is removed, moisture content is managed and replaced with standard fill compacted to meet the design requirements (generally 90–95% relative compaction). Blasting is not frequently used to excavate the roadbed as the intact rock structure forms an ideal road base. When a depression must be filled to come up to the road grade the native bed is compacted after the topsoil has been removed. The fill is made by the "compacted layer method" where a layer of fill is spread then compacted to specifications, under saturated conditions. The process is repeated until the desired grade is reached.
General fill material should be free of organics, meet minimum California bearing ratio (CBR) results and have a low plasticity index. The lower fill generally comprises sand or a sand-rich mixture with fine gravel, which acts as an inhibitor to the growth of plants or other vegetable matter. The compacted fill also serves as lower-stratum drainage. Select second fill (sieved) should be composed of gravel, decomposed rock or broken rock below a specified particle size and be free of large lumps of clay. Sand clay fill may also be used. The roadbed must be "proof rolled" after each layer of fill is compacted. If a roller passes over an area without creating visible deformation or spring the section is deemed to comply.
Geosynthetics such as geotextiles, geogrids, and geocells are frequently used in the various pavement layers to improve road quality. These materials and methods are used in low-traffic private roadways as well as public roads and highways. Geosynthetics perform four main functions in roads: separation, reinforcement, filtration, and drainage; which increase the pavement performance, reduce construction costs and decrease maintenance.
The completed roadway is finished by paving or left with a gravel or other natural surface. The type of road surface is dependent on economic factors and expected usage. Safety improvements such as traffic signs, crash barriers, raised pavement markers and other forms of road surface marking are installed.
According to a May 2009 report by the American Association of State Highway and Transportation Officials (AASHTO) and TRIP – a national transportation research organization – driving on rough roads costs the average American motorist approximately $400 a year in extra vehicle operating costs. Drivers living in urban areas with populations more than 250,000 are paying upwards of $750 more annually because of accelerated vehicle deterioration, increased maintenance, additional fuel consumption, and tire wear caused by poor road conditions.
When a single carriageway road is converted into dual carriageway by building a second separate carriageway alongside the first, it is usually referred to as duplication, twinning or doubling. The original carriageway is changed from two-way to become one-way, while the new carriageway is one-way in the opposite direction. In the same way as converting railway lines from single track to double track, the new carriageway is not always constructed directly alongside the existing carriageway.
Reallocation
Roads that are intended for use by a particular mode of transport can be reallocated for another mode of transport, i.e. by using traffic signs. For instance, in the ongoing road space reallocation effort, some roads (particularly in city centers) which are intended for use by cars are increasingly being repurposed for cycling and/or walking.
Maintenance
Like all structures, roads deteriorate over time. Deterioration is primarily due to environmental effects such as frost heaves, thermal cracking and oxidation; accumulated damage from vehicles also contributes. According to a series of experiments carried out in the late 1950s, called the AASHO Road Test, it was empirically determined that the effective damage done to the road is roughly proportional to the fourth power of axle weight. A typical tractor-trailer weighing 80,000 pounds (36.287 t), with 8,000 pounds (3.629 t) on the steer axle and 36,000 pounds (16.329 t) on each of the tandem axle groups, is expected to do 7,800 times more damage than a passenger vehicle with 2,000 pounds (0.907 t) on each axle. Potholes on roads are caused by rain damage and vehicle braking or related construction work.
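A rough sketch of this fourth-power relationship is shown below. It treats each tandem group as two 18,000-pound axles and ignores the published axle-group equivalence factors, so the printed ratio comes out higher than the 7,800 figure quoted above; the reference load and this simplification are assumptions made for illustration.

```python
# Simplified sketch of the "fourth power law": each axle's damage contribution
# is taken as proportional to (axle load)^4. The tandem groups are treated as
# two single 18,000 lb axles, which overstates their damage relative to the
# published AASHO equivalence factors, so the result exceeds the quoted 7,800.

def relative_damage(axle_loads_lb, reference_lb=18_000):
    """Sum of (load / reference)^4 over all axles (a rough ESAL-style estimate)."""
    return sum((load / reference_lb) ** 4 for load in axle_loads_lb)

truck = relative_damage([8_000, 18_000, 18_000, 18_000, 18_000])  # steer axle + two tandem pairs
car = relative_damage([2_000, 2_000])                             # two light axles
print(f"truck / car damage ratio: {truck / car:,.0f}")            # roughly 13,250
```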
Pavements are designed for an expected service life or design life. In some parts of the United Kingdom the standard design life is 40 years for new bitumen and concrete pavement. Maintenance is considered in the whole life cost of the road with service at 10, 20 and 30-year milestones. Roads can be and are designed for a variety of lives (8-, 15-, 30-, and 60-year designs). When pavement lasts longer than its intended life, it may have been overbuilt, and the original costs may have been too high. When a pavement fails before its intended design life, the owner may have excessive repair and rehabilitation costs. Some asphalt pavements are designed as perpetual pavements with an expected structural life in excess of 50 years.
Many asphalt pavements built over 35 years ago, despite not being specifically designed as a perpetual pavement, have remained in good condition long past their design life. Many concrete pavements built since the 1950s have significantly outlived their intended design lives. Some roads like Chicago's Wacker Drive, a major two-level (and at one point, three-level) roadway in the downtown area, are being rebuilt with a designed service life of 100 years.
Virtually all roads require some form of maintenance before they come to the end of their service life. Pro-active agencies use pavement management techniques to continually monitor road conditions and schedule preventive maintenance treatments as needed to prolong the lifespan of their roads. Technically advanced agencies monitor the road network surface condition with sophisticated equipment such as laser/inertial profilometers. These measurements include road curvature, cross slope, asperity, roughness, rutting and texture. Software algorithms use this data to recommend maintenance or new construction.
Maintenance treatments for asphalt concrete generally include thin asphalt overlays, crack sealing, surface rejuvenating, fog sealing, micro milling or diamond grinding and surface treatments. Thin surfacing preserves, protects and improves the functional condition of the road while reducing the need for routine maintenance, leading to extended service life without increasing structural capacity.
Older concrete pavements that develop faults can be repaired with a dowel bar retrofit, in which slots are cut in the pavement at each joint, and dowel bars are placed in the slots, which are then filled with concrete patching material. This can extend the life of the concrete pavement for 15 years.
Failure to maintain roads properly can create significant costs to society. A 2009 report released by the American Association of State Highway and Transportation Officials estimated that about 50% of the roads in the US are in bad condition, with urban areas worse. The report estimates that urban drivers pay an average of $746/year on vehicle repairs while the average US motorist pays about $335/year. In contrast, the average motorist pays about $171/year in road maintenance taxes (based on 600 gallons/year and $0.285/gallon tax).
Slab stabilization
Distress and serviceability loss on concrete roads can be caused by loss of support due to voids beneath the concrete pavement slabs. The voids usually occur near cracks or joints due to surface water infiltration. The most common causes of voids are pumping, consolidation, subgrade failure and bridge approach failure. Slab stabilization is a non-destructive method of solving this problem and is usually employed with other concrete pavement restoration methods including patching and diamond grinding. The technique restores support to concrete slabs by filling small voids that develop underneath the concrete slab at joints, cracks or the pavement edge.
The process consists of pumping a cementitious grout or polyurethane mixture through holes drilled through the slab. The grout can fill small voids beneath the slab and/or sub-base. The grout also displaces free water and helps keep water from saturating and weakening support under the joints and slab edge after stabilization is complete. The three steps for this method after finding the voids are locating and drilling holes, grout injection and post-testing the stabilized slabs.
Slab stabilization does not correct depressions, increase the design structural capacity, stop erosion or eliminate faulting. It does, however, restore the slab support, therefore, decreasing deflections under the load. Stabilization should only be performed at joints and cracks where the loss of support exists. Visual inspection is the simplest manner to find voids. Signs that repair is needed are transverse joint faulting, corner breaks and shoulder drop off and lines at or near joints and cracks. Deflection testing is another common procedure used to locate voids. It is recommended to do this testing at night as during cooler temperatures, joints open, aggregate interlock diminishes and load deflections are at their highest.
Testing
Ground penetrating radar pulses electromagnetic waves into the pavement and measures and graphically displays the reflected signal. This can reveal voids and other defects.
The epoxy/core test detects voids by visual and mechanical methods. It consists of drilling a 25 to 50 millimeter hole through the pavement into the sub-base with a dry-bit roto-hammer. Next, a two-part epoxy, dyed for visual clarity, is poured into the hole. Once the epoxy hardens, technicians drill through the hole. If a void is present, the epoxy will stick to the core and provide physical evidence.
Common stabilization materials include pozzolan-cement grout and polyurethane. The requirements for slab stabilization are strength and the ability to flow into or expand to fill small voids. Colloidal mixing equipment is necessary to use the pozzolan-cement grouts. The contractor must place the grout using a positive-displacement injection pump or a non-pulsing progressive cavity pump. A drill is also necessary but it must produce a clean hole with no surface spalling or breakouts. The injection devices must include a grout packer capable of sealing the hole. The injection device must also have a return hose or a fast-control reverse switch, in case workers detect slab movement on the uplift gauge. The uplift beam helps to monitor the slab deflection and has to have sensitive dial gauges.
Joint sealing
Also called joint and crack repair, this method's purpose is to minimize infiltration of surface water and incompressible material into the joint system. Joint sealants are also used to reduce dowel bar corrosion in concrete pavement restoration techniques. Successful resealing consists of old sealant removal, shaping and cleaning the reservoir, installing the backer rod and installing the sealant. Sawing, manual removal, plowing and cutting are methods used to remove the old sealant. Saws are used to shape the reservoir. When cleaning the reservoir, no dust, dirt or traces of old sealant should remain. Thus, it is recommended to water wash, sand-blast and then air blow to remove any sand, dirt or dust. The backer rod installation requires a double-wheeled, steel roller to insert the rod to the desired depth. After inserting the backer rod, the sealant is placed into the joint. There are various materials to choose for this method including hot pour bituminous liquid, silicone and preformed compression seals.
Safety considerations
Careful design and construction of roads can increase road traffic safety and reduce the harm (deaths, injuries, and property damage) on the highway system from traffic collisions.
On neighborhood roads traffic calming, safety barriers, pedestrian crossings and cycle lanes can help protect pedestrians, cyclists, and drivers.
Lane markers in some countries and states are marked with Cat's eyes or Botts dots. Botts dots are not used where it is icy in the winter, because frost and snowplows can break the glue that holds them to the road, although they can be embedded in short, shallow trenches carved in the roadway, as is done in the mountainous regions of California.
For major roads risk can be reduced by providing limited access from properties and local roads, grade separated junctions and median dividers between opposite-direction traffic to reduce the likelihood of head-on collisions.
The placement of energy attenuation devices (e.g. guardrails, wide grassy areas, sand barrels) is also common. Some road fixtures such as road signs and fire hydrants are designed to collapse on impact. Light poles are designed to break at the base rather than violently stop a car that hits them. Highway authorities may also remove larger trees from the immediate vicinity of the road. During heavy rains, road surfaces that are not elevated above the surrounding landscape may flood.
Speed limits can improve road traffic safety and reduce the number of road traffic casualties from traffic collisions. In its World report on road traffic injury prevention, the World Health Organization (WHO) identifies speed control as one of various interventions likely to contribute to a reduction in road casualties.
Road conditions
Road conditions are the collection of factors describing the ease of driving on a particular stretch of road, or on the roads of a particular locality, including the quality of the pavement surface, potholes, road markings, and weather. It has been reported that "[p]roblems of transportation participants and road conditions are the main factors that lead to road traffic accidents". It has further been specifically noted that "weather conditions and road conditions are interlinked as weather conditions affect the road conditions". Specific aspects of road conditions can be of particular importance for particular purposes. For example, for autonomous vehicles such as self-driving cars, significant road conditions can include "shadowing and lighting changes, road surface texture changes, and road markings consisting of circular reflectors, dashed lines, and solid lines".
Various government agencies and private entities, including local news services, track and report on road conditions to the public so that drivers going through a particular area can be aware of hazards that may exist in that area. News agencies, in turn, rely on tips from area residents with respect to certain aspects of road conditions in their coverage area.
Environmental performance
Careful design and construction of a road can reduce any negative environmental impacts.
Water management systems can be used to reduce the effect of pollutants from roads. Rainwater and snowmelt running off roads tend to pick up gasoline, motor oil, heavy metals, trash and other pollutants, resulting in water pollution. Road runoff is a major source of nickel, copper, zinc, cadmium, lead and polycyclic aromatic hydrocarbons (PAHs), which are created as combustion byproducts of gasoline and other fossil fuels.
De-icing chemicals and sand can run off into roadsides, contaminate groundwater and pollute surface waters; and road salts can be toxic to sensitive plants and animals. Sand applied to icy roads can be ground up by traffic into fine particulates and contribute to air pollution.
Roads are a chief source of noise pollution. In the early 1970s, it was recognized that roads can be designed to influence and minimize noise generation. Noise barriers can reduce noise pollution near built-up areas. Regulations can restrict the use of engine braking.
Motor vehicle emissions contribute to air pollution. Concentrations of air pollutants and adverse respiratory health effects are greater near the road than at some distance away from the road. Road dust kicked up by vehicles may trigger allergic reactions. In addition, on-road transportation greenhouse gas emissions are the largest single cause of climate change, scientists say.
Regulation
Right- and left-hand traffic
Traffic flows on the right or on the left side of the road depending on the country. In countries where traffic flows on the right, traffic signs are mostly on the right side of the road, roundabouts and traffic circles go counter-clockwise/anti-clockwise, and pedestrians crossing a two-way road should watch out for traffic from the left first. In countries where traffic flows on the left, the reverse is true.
About 33% of the world by population drive on the left, and 67% keep right. By road distances, about 28% drive on the left, and 72% on the right, even though originally most traffic drove on the left worldwide.
Economics
Transport economics is used to understand both the relationship between the transport system and the wider economy, and the complex effects of the road network structure when there are multiple paths and competing modes for both personal and freight transport (road/rail/air/ferry), and where induced demand can result in increased or decreased transport levels when road provision is increased by building new roads or decreased (for example California State Route 480). Roads are generally built and maintained by the public sector using taxation (although implementation may be through private contractors), or occasionally using road tolls.
Public-private partnerships are a way for communities to address rising costs by injecting private funds into the infrastructure. There are four main models:
design/build
design/build/operate/maintain
design/build/finance/operate
build/own/operate
Society depends heavily on efficient roads. In the European Union (EU) 44% of all goods are moved by trucks over roads and 85% of all people are transported by cars, buses or coaches on roads. The term was also commonly used to refer to roadsteads, waterways that lent themselves to use by shipping.
Construction costs
According to the New York State Thruway Authority, some sample per-mile costs to construct multi-lane roads in several US northeastern states were:
Connecticut Turnpike – $3,449,000 per mile
New Jersey Turnpike – $2,200,000 per mile
Pennsylvania Turnpike (Delaware Extension) – $1,970,000 per mile
Northern Indiana Toll Road – $1,790,000 per mile
Garden State Parkway – $1,720,000 per mile
Massachusetts Turnpike – $1,600,000 per mile
Thruway, New York to Pennsylvania Line – $1,547,000 per mile
Ohio Turnpike – $1,352,000 per mile
Pennsylvania Turnpike (early construction) – $736,000 per mile
Statistics
The United States has the largest network of roads of any country with as of 2009. The Republic of India has the second-largest road system globally with of road (2013). The People's Republic of China is third with of road (2007). The Federative Republic of Brazil has the fourth-largest road system in the world with (2002). See List of countries by road network size. When looking only at expressways, the National Trunk Highway System (NTHS) in China has a total length of at the end of 2006, and 60,300 km at the end of 2008, second only to the United States with in 2005. However, as of 2017, China has 130,000 km of Expressways.
Global connectivity
Eurasia, Africa, North America, South America, and Australia each have an extensive road network that connects most cities.
The North and South American road networks are separated by the Darién Gap, the only interruption in the Pan-American Highway. Eurasia and Africa are connected by roads on the Sinai Peninsula. The European Peninsula is connected to the Scandinavian Peninsula by the Øresund Bridge, and both have many connections to the mainland of Eurasia, including the bridges over the Bosphorus. Antarctica has very few roads and no continent-bridging network, though there are a few ice roads between bases, such as the South Pole Traverse. Bahrain is the only island country to be connected to a continental network by road (the King Fahd Causeway to Saudi Arabia).
Even well-connected road networks are controlled by many different legal jurisdictions, and laws such as which side of the road to drive on vary accordingly.
Many populated domestic islands are connected to the mainland by bridges. A very long example is the Overseas Highway connecting many of the Florida Keys with the continental United States.
Even on mainlands, some settlements have no roads connecting them with the primary continental network, due to natural obstacles like mountains or wetlands, or the high cost compared to the population served. Unpaved roads or a lack of roads are more common in developing countries, and these can become impassable in wet conditions. As of 2014, only 43% of rural Africans had access to an all-season road. Due to steepness, mud, snow, or fords, roads can sometimes be passable only to four-wheel drive vehicles, those with snow chains or snow tires, or those capable of deep wading or amphibious operation.
Most disconnected settlements have local road networks connecting ports, buildings, and other points of interest.
Where demand for travel by road vehicle to a disconnected island or mainland settlement is high, roll-on/roll-off ferries are commonly available if the journey is relatively short. For long-distance trips, passengers usually travel by air and rent a car upon arrival. If facilities are available, vehicles and cargo can also be shipped to many disconnected settlements by boat, or air transport at much greater expense. The island of Great Britain is connected to the European road network by Eurotunnel Shuttle – an example of a car shuttle train which is a service used in other parts of Europe to travel under mountains and over wetlands.
In polar areas, disconnected settlements are often more easily reached by snowmobile or dogsled, because cold weather can produce sea ice that blocks ports and bad weather can prevent flying. For example, resupply aircraft are only flown to Amundsen–Scott South Pole Station from October to February, and many residents of coastal Alaska have bulk cargo shipped in only during the warmer months. Permanent darkness during the winter can also make long-distance travel more dangerous in polar areas. Continental road networks do reach into these areas, such as the Dalton Highway to the North Slope of Alaska, the R21 highway to Murmansk in Russia, and many roads in Scandinavia (though due to fjords water transport is sometimes faster). Large areas of Alaska, Canada, Greenland, and Siberia are sparsely connected. For example, all 25 communities of Nunavut are disconnected from each other and the main North American road network.
Road transport of people and cargo may also be obstructed by border controls and travel restrictions. For example, travel from other parts of Asia to South Korea would require passage through the hostile country of North Korea. Moving between most countries in Africa and Eurasia would require passing through Egypt and Israel, which is a politically sensitive area.
Some places are intentionally car-free, and roads (if present) might be used by bicycles or pedestrians.
Roads are under construction to many remote places, such as the villages of the Annapurna Circuit, and a road was completed in 2013 to Mêdog County. Additional intercontinental and transoceanic fixed links have been proposed, including a Bering Strait crossing that would connect Eurasia-Africa and North America, a Malacca Strait Bridge to the largest island of Indonesia from Asia, and a Strait of Gibraltar crossing to connect Europe and Africa directly.
| Technology | Transportation | null |
25898 | https://en.wikipedia.org/wiki/Roman%20roads | Roman roads | Roman roads (Latin: viae Romanae; singular: via Romana; meaning "Roman way") were physical infrastructure vital to the maintenance and development of the Roman state, built from about 300 BC through the expansion and consolidation of the Roman Republic and the Roman Empire. They provided efficient means for the overland movement of armies, officials, civilians, inland carriage of official communications, and trade goods. Roman roads were of several kinds, ranging from small local roads to broad, long-distance highways built to connect cities, major towns and military bases. These major roads were often stone-paved and metaled, cambered for drainage, and were flanked by footpaths, bridleways and drainage ditches. They were laid along accurately surveyed courses, and some were cut through hills or conducted over rivers and ravines on bridgework. Sections could be supported over marshy ground on rafted or piled foundations.
At the peak of Rome's development, no fewer than 29 great military highways radiated from the capital, and the empire's 113 provinces were interconnected by 372 great roads. The whole comprised more than of roads, of which over were stone-paved. In Gaul alone, no less than of roadways are said to have been improved, and in Britain at least . The courses (and sometimes the surfaces) of many Roman roads survived for millennia; some are overlaid by modern roads.
Roman systems
Livy mentions some of the most familiar roads near Rome, and the milestones on them, at times long before the first paved road—the Appian Way. Unless these allusions are just simple anachronisms, the roads referred to were probably at the time little more than levelled earthen tracks. Thus, the Via Gabiana (during the time of Porsena) is mentioned in about 500 BC; the Via Latina (during the time of Gaius Marcius Coriolanus) in about 490 BC; the Via Nomentana (also known as "Via Ficulensis"), in 449 BC; the Via Labicana in 421 BC; and the Via Salaria in 361 BC.
In the Itinerary of Antoninus, the description of the road system is as follows:
With the exception of some outlying portions, such as Britain north of the Wall, Dacia, and certain provinces east of the Euphrates, the whole Empire was penetrated by these itinera (plural of iter). There is hardly a district to which we might expect a Roman official to be sent, on service either civil or military, where we do not find roads. They reach the Wall in Britain; run along the Rhine, the Danube, and the Euphrates; and cover, as with a network, the interior provinces of the Empire.
A road map of the empire reveals that it was generally laced with a dense network of prepared viae. Beyond its borders there were no paved roads; however, it can be supposed that footpaths and dirt roads allowed some transport. There were, for instance, some pre-Roman ancient trackways in Britain, such as the Ridgeway and the Icknield Way.
Laws and traditions
The Laws of the Twelve Tables, dated to about 450 BC, required that any public road (Latin via) be 8 Roman feet (perhaps about 2.37 m) wide where straight and twice that width where curved. These were probably the minimum widths for a via; in the later republic, widths of around 12 Roman feet were common for public roads in rural regions, permitting the passing of two carts of standard (4 foot) width without interference to pedestrian traffic. Actual practices varied from this standard. The Tables command Romans to build public roads and give wayfarers the right to pass over private land where the road is in disrepair. Building roads that would not need frequent repair therefore became an ideological objective, as well as building them as straight as practicable to construct the shortest possible roads, and thus save on material.
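For reference, the widths given above can be converted to modern units. The sketch below assumes the commonly cited value of roughly 0.296 m for one Roman foot (pes); with that assumption it reproduces the approximately 2.37 m figure mentioned in the text.

```python
# Convert Roman road widths from Roman feet to metres (assumes 1 pes ≈ 0.296 m).
ROMAN_FOOT_M = 0.296

widths_roman_feet = {
    "straight public via (Twelve Tables)": 8,
    "curved public via (Twelve Tables)": 16,
    "typical rural via (later republic)": 12,
}
for label, feet in widths_roman_feet.items():
    print(f"{label}: {feet} Roman ft ≈ {feet * ROMAN_FOOT_M:.2f} m")
# 8 ft ≈ 2.37 m, 16 ft ≈ 4.74 m, 12 ft ≈ 3.55 m
```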
Roman law defined the right to use a road as a servitus, or easement. The ius eundi ("right of going") established a claim to use an iter, or footpath, across private land; the ius agendi ("right of driving"), an actus, or carriage track. A via combined both types of servitutes, provided it was of the proper width, which was determined by an arbiter. The default width was the latitudo legitima of 8 feet. Roman law and tradition forbade the use of vehicles in urban areas, except in certain cases. Married women and government officials on business could ride. The Lex Julia Municipalis restricted commercial carts to night-time access in the city within the walls and within a mile outside the walls.
Types
Roman roads varied from simple corduroy roads to paved roads using deep roadbeds of tamped rubble as an underlying layer to ensure that they kept dry, as the water would flow out from between the stones and fragments of rubble instead of becoming mud in clay soils. According to Ulpian, there were three types of roads:
Viae publicae, consulares, praetoriae or militares
Viae privatae, rusticae, glareae or agrariae
Viae vicinales
Viae publicae, consulares, praetoriae and militares
The first type of road included public high or main roads, constructed and maintained at the public expense, and with their soil vested in the state. Such roads led either to the sea, to a town, to a public river (one with a constant flow), or to another public road. Siculus Flaccus, who lived under Trajan (98–117), calls them viae publicae regalesque, and describes their characteristics as follows:
They are placed under curatores (commissioners), and repaired by redemptores (contractors) at the public expense; a fixed contribution, however, being levied from the neighboring landowners.
These roads bear the names of their constructors (e.g. Via Appia, Cassia, Flaminia).
Roman roads were named after the censor who had ordered their construction or reconstruction. The same person often served afterwards as consul, but the road name is dated to his term as censor. If the road was older than the office of censor or was of unknown origin, it was named for its destination or the region through which it mainly passed. A road was renamed if the censor ordered major work on it, such as paving, repaving, or rerouting. With the term viae regales compare the roads of the Persian kings (who probably organized the first system of public roads) and the King's Highway. With the term viae militariae compare the Icknield Way (Icen-hilde-weg, or "War-way of the Iceni").
There were many other people, besides special officials, who from time to time and for a variety of reasons sought to connect their names with a great public service like that of the roads. Gaius Gracchus, when Tribune of the People (123–122 BC), paved or gravelled many of the public roads and provided them with milestones and mounting-blocks for riders. Gaius Scribonius Curio, when Tribune (50 BC), sought popularity by introducing a Lex Viaria, under which he was to be chief inspector or commissioner for five years. Dio Cassius mentions that the Second Triumvirate obliged the Senators to repair the public roads at their own expense.
Viae privatae, rusticae, glareae and agrariae
The second category included private or country roads, originally constructed by private individuals, in whom their soil was vested and who had the power to dedicate them to the public use. Such roads benefited from a right of way in favor either of the public or of the owner of a particular estate. Under the heading of viae privatae were also included roads leading from the public or high roads to particular estates or settlements; Ulpian considers these to be public roads.
Features off the via were connected to the via by viae rusticae, or secondary roads. Both main or secondary roads might either be paved or left unpaved with a gravel surface, as they were in North Africa. These prepared but unpaved roads were viae glareae or sternendae ("to be strewn"). Beyond the secondary roads were the viae terrenae, "dirt roads".
Viae vicinales
The third category comprised roads at or in villages, districts, or crossroads, leading through or towards a vicus or village. Such roads ran either into a high road or into other viae vicinales, without any direct communication with a high road. They were considered public or private, according to the fact of their original construction out of public or private funds or materials. Such a road, though privately constructed, became a public road when the memory of its private constructors had perished.
Siculus Flaccus describes viae vicinales as roads "de publicis quae divertunt in agros et saepe ad alteras publicas perveniunt" (which turn off the public roads into fields, and often reach to other public roads). The repairing authorities, in this case, were the magistri pagorum or magistrates of the cantons. They could require the neighboring landowners either to furnish laborers for the general repair of the viae vicinales, or to keep in repair, at their own expense, a certain length of road passing through their respective properties.
Governance and financing
With the conquest of Italy, prepared viae were extended from Rome and its vicinity to outlying municipalities, sometimes overlying earlier roads. Building viae was a military responsibility and thus came under the jurisdiction of a consul. The process had a military name, viam munire, as though the via were a fortification. Municipalities, however, were responsible for their own roads, which the Romans called viae vicinales. Roads were not free to use; tolls abounded, especially at bridges. Often they were collected at the city gate. Freight costs were made heavier still by import and export taxes. These were only the charges for using the roads. Costs of services on the journey went up from there.
Financing road building was a Roman government responsibility. Maintenance, however, was generally left to the province. The officials tasked with fund-raising were the curatores viarum. They had a number of methods available to them. Private citizens with an interest in the road could be asked to contribute to its repair. High officials might distribute largesse to be used for roads. Censors, who were in charge of public morals and public works, were expected to fund repairs suâ pecuniâ (with their own money). Beyond those means, taxes were required.
A via connected two cities. Viae were generally centrally placed in the countryside. The construction and care of the public roads, whether in Rome, in Italy, or in the provinces, was, at all periods of Roman history, considered to be a function of the greatest weight and importance. This is clearly shown by the fact that the censors, in some respects the most venerable of Roman magistrates, had the earliest paramount authority to construct and repair all roads and streets. Indeed, all the various functionaries, including emperors, who succeeded the censors in this portion of their duties, may be said to have exercised a devolved censorial jurisdiction.
Costs and civic responsibilities
The devolution to the censorial jurisdictions became a practical necessity, resulting from the growth of the Roman dominions and the diverse labors which detained the censors in the capital city. Certain ad hoc official bodies successively acted as constructing and repairing authorities. In Italy, the censorial responsibility passed to the commanders of the Roman armies and later to special commissioners, and in some cases perhaps to the local magistrates. In the provinces, the consul or praetor and his legates received authority to deal directly with the contractor.
The care of the streets and roads within the Roman territory was committed in the earliest times to the censors. They eventually made contracts for paving the street inside Rome, including the Clivus Capitolinus, with lava, and for laying down the roads outside the city with gravel. Sidewalks were also provided. The aediles, probably by virtue of their responsibility for the freedom of traffic and policing the streets, co-operated with the censors and the bodies that succeeded them.
It would seem that in the reign of Claudius the quaestors had become responsible for the paving of the streets of Rome or at least shared that responsibility with the quattuorviri viarum. It has been suggested that the quaestors were obliged to buy their right to an official career by personal outlay on the streets. There was certainly no lack of precedents for this enforced liberality, and the change made by Claudius may have been a mere change in the nature of the expenditure imposed on the quaestors.
Official bodies
The official bodies which first succeeded the censors in the care of the streets and roads were:
Quattuorviri viis in urbe purgandis, with jurisdiction inside the walls of Rome;
Duoviri viis extra urbem purgandis, with jurisdiction outside the walls.
Both these bodies were probably of ancient origin. The first mention of either body occurs in the Lex Julia Municipalis in 45 BC. The quattuorviri were afterwards called quattuorviri viarum curandarum. The extent of jurisdiction of the duoviri is derived from their full title as duoviri viis extra propiusve urbem Romam passus mille purgandis. Their authority extended over all roads between their respective gates of issue in the city wall and the first milestone beyond.
In case of an emergency in the condition of a particular road, men of influence and liberality were appointed, or voluntarily acted, as curatores or temporary commissioners to superintend the work of repair. The dignity attached to such a curatorship is attested by a passage of Cicero. Among those who performed this duty in connection with particular roads was Julius Caesar, who became curator (67 BC) of the Via Appia and spent his own money liberally upon it. Certain persons appear also to have acted alone and taken responsibility for certain roads.
In the country districts, the magistri pagorum had authority to maintain the viae vicinales. In Rome each householder was legally responsible for the repairs to that portion of the street which passed his own house; it was the duty of the aediles to enforce this responsibility. The portion of any street which passed a temple or public building was repaired by the aediles at the public expense. When a street passed between a public building or temple and a private house, the public treasury and the private owner shared the expense equally.
Changes under Augustus
The governing structure was changed by Augustus, who in the course of his reconstitution of the urban administration, both abolished and created new offices in connection with the maintenance of public works, streets, and aqueducts in and around Rome. The task of maintaining the roads had previously been administered by two groups of minor magistrates, the quattuorviri (a board of four magistrates to oversee the roads inside the city) and the duoviri (a board of two to oversee the roads outside the city proper) who were both part of the collegia known as the vigintisexviri (literally meaning "Twenty-Six Men").
Augustus, finding the collegia ineffective, especially the boards dealing with road maintenance, reduced the number of magistrates from 26 to 20. He abolished the duoviri and later assumed the position of superintendent (according to Dio Cassius) of the road system connecting Rome to the rest of Italy and the provinces beyond. In this capacity he effectively gave himself and any following emperors a paramount authority which had originally belonged to the city censors. The quattuorviri board was kept as it was until at least the reign of Hadrian (117 to 138 AD). Furthermore, he appointed praetorians to the office of "road-maker", assigning each one two lictors and making the office of curator of each of the great public roads a perpetual magistracy rather than a temporary commission.
The persons appointed under the new system were of senatorial or equestrian rank, depending on the relative importance of the roads assigned to them. It was the duty of each curator to issue contracts for the maintenance of his road and to see that the contractor who undertook said work performed it faithfully, as to both quantity and quality. Augustus also authorized the construction of sewers and removed obstructions to traffic, as the aediles did in Rome.
It was in the character of an imperial curator (though probably armed with extraordinary powers) that Corbulo denounced the magistratus and mancipes of the Italian roads to Tiberius. He pursued them and their families with fines and imprisonment and was later rewarded with a consulship by Caligula, who also shared the habit of condemning well-born citizens to work on the roads. Under the rule of Claudius, Corbulo was brought to justice and forced to repay the money which had been extorted from his victims.
Other curatores
Special curatores for a term seem to have been appointed on occasion, even after the institution of the permanent magistrates bearing that title. The emperors who succeeded Augustus exercised a vigilant control over the condition of the public highways. Their names occur frequently in the inscriptions to restorers of roads and bridges. Thus, Vespasian, Titus, Domitian, Trajan, and Septimius Severus were commemorated in this capacity at Emérita. The Itinerary of Antoninus (which was probably a work of much earlier date and republished in an improved and enlarged form under one of the Antonine emperors) remains as standing evidence of the minute care which was bestowed on the service of the public roads.
Construction and engineering
Ancient Rome boasted impressive technological feats, using many advances that were lost during the Middle Ages. Some of these accomplishments would not be rivaled in Europe until the Modern Age. Many practical Roman innovations were adopted from earlier designs. Some of the common, earlier designs incorporated arches.
Practices and terminology
Roman road builders aimed at a regulation width (see Laws and traditions above), but actual widths have been measured at between and more than . Today, the concrete has worn from the spaces around the stones, giving the impression of a very bumpy road, but the original practice was to produce a surface that was no doubt much closer to being flat. Many roads were built to resist rain, freezing and flooding. They were constructed to need as little repair as possible.
Roman road construction emphasized directional straightness. Many long sections are ruler-straight, but it should not be thought that all of them were. Some links in the network were as long as . Gradients of 10%–12% are known in ordinary terrain, and 15%–20% in mountainous country. The Roman emphasis on constructing straight roads often resulted in steep slopes relatively impractical for most commercial traffic; over the years the Romans realized this and built longer but more manageable alternatives to existing roads. Roman roads generally went straight up and down hills, rather than in a serpentine pattern of switchbacks.
As to the standard Imperial terminology that was used, the words were localized for different elements used in construction and varied from region to region. Also, in the course of time, the terms via munita and via publica became identical.
Materials and methods
Viae were distinguished according to their public or private character, as well as according to the materials employed and the methods followed in their construction. Ulpian divided them up in the following fashion:
Via terrena: A plain road of leveled earth.
Via glareata: An earthen road with a gravel surface.
Via munita: A built road, paved with rectangular blocks of local rock or with polygonal blocks of volcanic rock.
According to Isidore of Sevilla, the Romans borrowed the knowledge of construction of viae munitae from the Carthaginians, though certainly inheriting some construction techniques from the Etruscans.
Via terrena
The Viae terrenae were plain roads of leveled earth. These were mere tracks worn down by the feet of humans and animals, and possibly by wheeled carriages.
Via glareata
The Viae glareatae were earthen roads with a gravel surface or a gravel subsurface and paving on top. Livy speaks of the censors of his time as being the first to contract for paving the streets of Rome with flint stones, for laying gravel on the roads outside the city, and for forming raised footpaths at the sides. In these roads, the surface was hardened with gravel, and although pavements were introduced shortly afterwards, the blocks were laid on a bed of small stones. Examples include the Via Praenestina and Via Latina.
Via munita
The best sources of information as regards the construction of a regulation via munita are:
The many existing remains of viae publicae. These are often sufficiently well preserved to show that the rules of construction were, as far as local material allowed, minutely adhered to in practice.
The directions for making pavements given by Vitruvius. The pavement and the via munita were identical in construction, except as regards the top layer, or surface. Pavement consisted of marble or mosaic, and via munita consisted of blocks of stone or volcanic rock.
A passage in Statius describing the repairs of the Via Domitiana, a branch road of the Via Appia leading to Neapolis.
After the civil engineer looked over the site of the proposed road and determined roughly where it should go, the agrimensores went to work surveying the road bed. They used two main devices, the rod and a device called a groma, which helped them obtain right angles. The gromatici, the Roman equivalent of rod men, placed rods and put down a line called the rigor. As they did not possess anything like a transit, a surveyor tried to achieve straightness by looking along the rods and commanding the gromatici to move them as required. Using the gromae they then laid out a grid on the plan of the road. If the surveyor could not see his desired endpoint, a signal fire would often be lit at the endpoint in order to guide the surveyor. The libratores then began their work using ploughs and, sometimes with the help of legionaries, with spades excavated the road bed down to bedrock or at least to the firmest ground they could find. The excavation was called the fossa, the Latin word for ditch. The depth varied according to terrain.
The method varied according to geographic locality, materials available, and terrain, but the plan or ideal at which the engineer aimed was always the same. The road was constructed by filling the fossa. This was done by layering rock over other stones. Into the fossa was placed large amounts of rubble, gravel and stone, whatever fill was available. Sometimes a layer of sand was put down, if it was locally available. When the layers came to within 1 yd (1 m) or so of the surface, the subsurface was covered with gravel and tamped down, a process called pavire, or pavimentare.
The flat surface was then the pavimentum. It could be used as the road, or additional layers could be constructed. A statumen or "foundation" of flat stones set in cement might support the additional layers. The final steps utilized lime-based mortar, which the Romans had discovered. They seem to have mixed the mortar and the stones in the ditch. First a small layer of coarse concrete, the rudus, then a layer of fine concrete, the nucleus, went onto the pavement or statumen. Into or onto the nucleus went a course of polygonal or square paving stones, called the summa crusta. The crusta was crowned for drainage.
An example is found in an early basalt road by the Temple of Saturn on the Clivus Capitolinus. It had travertine paving, polygonal basalt blocks, concrete bedding (substituted for the gravel), and a rain-water gutter.
Engineering works
Romans preferred to engineer solutions to obstacles rather than circumvent them. Outcrops of stone, ravines, or hilly or mountainous terrain called for cuts and tunnels. An example of this is found on the Roman road from Căzănești near the Iron Gates. This road was half carved into the rock, about 5 ft to 5 ft 9 in (1.5 to 1.75 m); the rest of the road, above the Danube, was made from a wooden structure projecting out of the cliff. The road functioned as a towpath, making the Danube navigable. The Tabula Traiana memorial plaque in Serbia is all that remains of the now-submerged road.
Roman bridges were some of the first large and lasting bridges created. River crossings were achieved by bridges, or pontes. Single slabs went over rills. A bridge could be of wood, stone, or both. Wooden bridges were constructed on pilings sunk into the river, or on stone piers. Stone arch bridges were used on larger or more permanent crossings. Most bridges also used concrete, which the Romans were the first to use for bridges. Roman bridges were so well constructed that many remain in use today.
Causeways were built over marshy ground. The road was first marked out with pilings. Between them were sunk large quantities of stone so as to raise the causeway to more than above the marsh. In the provinces, the Romans often did not bother with a stone causeway but used log roads (pontes longi).
Military and citizen utilization
The public road system of the Romans was thoroughly military in its aims and spirit. It was designed to unite and consolidate the conquests of the Roman people, whether within or without the limits of Italy proper. A legion on the march brought its own baggage train (impedimenta) and constructed its own camp (castra) every evening at the side of the road.
Milestones and markers
Milestones divided the Via Appia even before 250 BC into numbered miles, and most viae after 124 BC. The modern word "mile" derives from the Latin milia passuum, "one thousand paces", each of which was five Roman feet, or in total . A milestone, or miliarium, was a circular column on a solid rectangular base, set more than into the ground, standing tall, in diameter, and weighing more than 2 tons. At the base was inscribed the number of the mile relative to the road it was on. In a panel at eye height was the distance to the Roman Forum and various other information about the officials who made or repaired the road and when. These miliaria are valuable historical documents today, and their inscriptions are collected in Volume XVII of the Corpus Inscriptionum Latinarum. Milestones permitted distances and locations to be known and recorded exactly. It was not long before historians began to refer to the milestone at which an event occurred.

The Romans had a preference for standardization wherever possible, so Augustus, after becoming permanent commissioner of roads in 20 BC, set up the miliarium aureum ("golden milestone") near the Temple of Saturn. All roads were considered to begin from this gilded bronze monument. On it were listed all the major cities in the empire and distances to them. Constantine called it the umbilicus Romae ("navel of Rome"), and built a similar—although more complex—monument in Constantinople, the Milion.
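As a rough worked conversion of the units described above (a mile of 1,000 paces, each pace of five Roman feet), again assuming about 0.296 m per Roman foot; the exact modern equivalent varies slightly between sources.

```python
# Length of a Roman mile (mille passus), assuming 1 Roman foot ≈ 0.296 m.
ROMAN_FOOT_M = 0.296
FEET_PER_PACE = 5
PACES_PER_MILE = 1000

roman_mile_m = PACES_PER_MILE * FEET_PER_PACE * ROMAN_FOOT_M
print(f"1 Roman mile ≈ {roman_mile_m:.0f} m ≈ {roman_mile_m / 1000:.2f} km")  # ≈ 1480 m
```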
Itinerary maps and charts
Combined topographical and road-maps may have existed as specialty items in some Roman libraries, but they were expensive, hard to copy and not in general use. Travelers wishing to plan a journey could consult an itinerarium, which in its most basic form was a simple list of cities and towns along a given road and the distances between them. It was only a short step from lists to a master list, or a schematic route-planner in which roads and their branches were represented more or less in parallel, as in the . From this master list, parts could be copied and sold on the streets.
The most thorough of these used different symbols for cities, way stations, water courses, and so on. The Roman government from time to time would produce a master road itinerary. The first known was commissioned in 44 BC by Julius Caesar and Mark Antony. Three Greek geographers, Zenodoxus, Theodotus and Polyclitus, were hired to survey the system and compile a master itinerary; the task required over 25 years, and the resulting stone-engraved master itinerary was set up near the Pantheon. Travelers and itinerary sellers could make copies from it.
Vehicles and transportation
Outside the cities, Romans were avid riders and rode on or drove quite a number of vehicle types, some of which are mentioned here. Carts driven by oxen were used. Horse-drawn carts could travel up to per day, while pedestrians traveled per day. For purposes of description, Roman vehicles can be divided into the car, the coach, and the cart. Cars were used to transport one or two individuals, coaches were used to transport parties, and carts to transport cargo.
Of the cars, the most popular was the carrus, a standard chariot form descending to the Romans from a greater antiquity. The top was open, the front closed. One survives in the Vatican. It carried a driver and a passenger. A carrus with two horses was a biga; three horses, a triga; and four horses a quadriga. The tires were of iron. When not in use, its wheels were removed for easier storage. A more luxurious version, the carpentum, transported women and officials. It had an arched overhead covering of cloth and was drawn by mules. A lighter version, the cisium, equivalent to a gig, was open above and in front and had a seat. Drawn by one or two mules or horses, it was used for cab work, the cab drivers being called cisiani. The builder was a cisarius.
Of the coaches, the mainstay was the raeda or reda, which had four wheels. The high sides formed a sort of box in which seats were placed, with a notch on each side for entry. It carried several people with baggage up to the legal limit of 1,000 Roman librae (pounds), modern equivalent . It was drawn by teams of oxen, horses or mules. A cloth top could be put on for weather, in which case it resembled a covered wagon. The raeda was probably the main vehicle for travel on the roads. Raedae meritoriae were hired coaches. The fiscalis raeda was a government coach. The driver and the builder were both referred to as a raedarius.
Of the carts, the main one was the plaustrum or plostrum. This was simply a platform of boards attached to wheels and a cross-tree. The wheels, or tympana, were solid and were several centimetres (inches) thick. The sides could be built up with boards or rails. A large wicker basket was sometimes placed on it. A two-wheel version existed along with the normal four-wheel type called the plaustrum maius.
The military used a standard wagon. Their transportation service was the cursus clabularis, after the standard wagon, called a carrus clabularius, clabularis, clavularis, or clabulare. It transported the impedimenta (baggage) of a military column.
Way stations and traveler inns
For non-military officials and people on official business who had no legion at their service, the government maintained way stations, or mansiones ("staying places"), for their use. Passports were required for identification. Mansiones were located about apart. There the official traveller found a complete villa dedicated to his use. Often a permanent military camp or a town grew up around the mansio. For non-official travelers in need of refreshment, a private system of "inns" or cauponae were placed near the mansiones. They performed the same functions but were somewhat disreputable, as they were frequented by thieves and prostitutes. Graffiti decorate the walls of the few whose ruins have been found.
Genteel travelers needed something better than cauponae. In the early days of the viae, when little unofficial provision existed, houses placed near the road were required by law to offer hospitality on demand. Frequented houses no doubt became the first tabernae, which were hostels, rather than the "taverns" we know today. As Rome grew, so did its tabernae, becoming more luxurious and acquiring good or bad reputations as the case might be. An example is the Tabernae Caediciae at Sinuessa on the Via Appia. It had a large storage room containing barrels of wine, cheese and ham. Many cities of today grew up around a taberna complex, such as Rheinzabern in the Rhineland, and Saverne in Alsace.
A third system of way stations serviced vehicles and animals: the mutationes ("changing stations"). They were located every . In these complexes, the driver could purchase the services of wheelwrights, cartwrights, and equarii medici, or veterinarians. Using these stations as chariot relays, Tiberius hastened in 24 hours to join his brother, Drusus Germanicus, who was dying of gangrene as a result of a fall from a horse.
Post offices and services
Two postal services were available under the empire, one public and one private. The cursus publicus, founded by Augustus, carried the mail of officials by relay throughout the Roman road system. The vehicle for carrying mail was a cisium with a box, but for special delivery a horse and rider was faster. On average a relay of horses could carry a letter in a day. The postman wore a characteristic leather hat, the petanus. The postal service was a somewhat dangerous occupation, as postmen were a target for bandits and enemies of Rome. Private mail of the well-to-do was carried by tabellarii, an organization of slaves available for a price.
Locations
There are many examples of roads that still follow the route of Roman roads.
Italy
Major roads
Via Aemilia, from Rimini (Ariminum) to Placentia
Via Appia, the Appian way (312 BC), from Rome to Apulia
Via Aurelia (241 BC), from Rome to France
Via Cassia, from Rome to Tuscany
Via Flaminia (220 BC), from Rome to Rimini (Ariminum)
Via Raetia, from Verona north across the Brenner Pass
Via Salaria, from Rome to the Adriatic Sea (in the Marches)
Others
Via Aemilia Scauri (109 BC)
Via Aquillia, branches off the Appia at Capua to the sea at Hipponium (Vibo Valentia)
Via Brixiana, from Cremona to Brescia
Via Canalis, from Udine, Gemona and Val Canale to Villach in Carinthia and then over the Alps to Salzburg or Vienna
Via Claudia Julia Augusta (13 BC)
Via Claudia Nova (47 AD)
Via Clodia, from Rome to Tuscany forming a system with the Cassia
Via Domitiana, coast road from Naples to Formia
Via Flacca
Via Flavia, from Trieste (Tergeste) to Dalmatia
Via Gemina, from Aquileia and Trieste through the Karst to Materija, Obrov, Lipa and Klana, from where, near Rijeka, descending towards Trsat (Tersatica) to continue along the Dalmatian coast
Via Julia Augusta (8 BC), exits Aquileia
Via Labicana, southeast from Rome, forming a system with the Praenestina
Via Latina, southeast from Rome to Casilinum where it joined the Via Appia.
Via Ostiensis, from Rome to Ostia
Via Postumia (148 BC), from Aquileia through Verona across the Apennines to Genoa
Via Popilia (132 BC), two distinct roads, one from Capua to Rhegium and the other from Ariminum through the later Veneto region
Via Praenestina, from Rome to Praeneste
Via Severiana, Terracina to Ostia
Via Tiberina, from Rome to Ocriculum
Via Tiburtina, from Rome to Tibur
Via Traiana, a branch of Via Appia, from Benevento to Brindisi
Via Traiana Nova (Italy), from Lake Bolsena to the Via Cassia. Known by archaeology only
Via Valeria from Tibur to Aternum
Via Valeria (Sicily) from Messina to Syracuse
Other areas
Africa
Main road: from Sala Colonia to Carthage to Alexandria.
In Egypt: Via Hadriana
In Mauretania Tingitana from Tingis southward (see: Roman roads in Morocco)
Albania / North Macedonia / Greece / Turkey
Via Egnatia (146 BC) connecting Dyrrhachium (on Adriatic Sea) to Byzantium via Thessaloniki
Austria / Serbia / Bulgaria / Turkey
Via Militaris (Via Diagonalis, Via Singidunum), connecting Middle Europe and Byzantium
Bulgaria / Romania
Via Pontica
Cyprus
Via Kolossus. Connecting Paphos, the island's Roman capital, with Salamis, the second-largest city and port.
France
In France, a Roman road is called voie romaine in vernacular language.
Via Agrippa
Via Aquitania, from Narbonne, where it connected to the Via Domitia, to the Atlantic Ocean across Toulouse and Bordeaux
Via Domitia (118 BC), from Nîmes to the Pyrenees, where it joins to the Via Augusta at the Col de Panissars
Roman road (Nord), extending from Dunkirk to Cassel in Nord Département
Germania Inferior (Germany, Belgium, Netherlands)
Roman road from Trier to Cologne
Via Belgica (Boulogne-Cologne)
Lower Limes Germanicus
Interconnections between Lower Limes Germanicus and Via Belgica
Middle East
Via Maris
Via Traiana Nova
Petra Roman Road 1st-century Petra, Jordan
Romania
Trajan's bridge and Iron Gates road.
Via Traiana: Porolissum – Napoca – Potaissa – Apulum road.
Via Pontica: Troesmis – Piroboridava – Caput Stenarum – Apulum – Partiscum – Lugio
Spain and Portugal
Iter ab Emerita Asturicam, from Sevilla to Gijón. Later known as Vía de la Plata (plata means "silver" in Spanish, but in this case it is a false cognate of an Arabic word balata), part of the fan of the Way of Saint James. Now it is the A-66 freeway.
Via Augusta, from Cádiz to the Pyrénées, where it joins to the Via Domitia at the Coll de Panissars, near La Jonquera. It passes through Valencia, Tarragona (anciently Tarraco), and Barcelona.
Camiño de Oro, ending in Ourense, capital of the Province of Ourense, passing near the village of Reboledo.
Via Nova (or Via XVIII), from Bracara Augusta to Asturica Augusta
Syria
Road connecting Antioch and Chalcis.
Strata Diocletiana, along the Limes Arabicus, going through Palmyra and Damascus, and south to Arabia.
Trans-Alpine roads
These roads connected modern Italy and Germany:
Via Claudia Augusta (47) from Altinum (now Quarto d'Altino) to Augsburg via the Reschen Pass
Trans-Pyrenean roads
Connecting Hispania and Gallia:
Ab Asturica Burdigalam
Turkey
Roman road in Cilicia in south Turkey
Roman Road of Ankara
United Kingdom
Akeman Street
Camlet Way
Dere Street
Ermine Street
Fen Causeway
Fosse Way
King Street
London-West of England Roman Roads
Peddars Way
Pye Road
Roman road from Silchester to Bath
Stane Street (Chichester)
Stane Street (Colchester)
Stanegate
Via Devana
Watling Street
| Technology | Ground transportation networks | null |
25919 | https://en.wikipedia.org/wiki/Rapids | Rapids | Rapids are sections of a river where the river bed has a relatively steep gradient, causing an increase in water velocity and turbulence. Flow, gradient, constriction, and obstacles are four factors that are needed for a rapid to be created.
Physical factors
Rapids are hydrological features between a run (a smoothly flowing part of a stream) and a cascade. Rapids are characterized by the river becoming shallower with some rocks exposed above the flow surface. As flowing water splashes over and around the rocks, air bubbles become mixed in with it and portions of the surface acquire a white color, forming what is called "whitewater". Rapids occur where the bed material is highly resistant to the erosive power of the stream in comparison with the bed downstream of the rapids. Very young streams flowing across solid rock may be rapids for much of their length. Rapids cause water aeration of the stream or river, resulting in better water quality.
For a rapid to form, a necessary condition is the presence of a gradient, which refers to the river or stream's downward slope. When a river has a larger gradient, the water flows downhill faster. Gradients are typically measured in feet per mile. This impacts the river's flow or discharge, which is measured as a volume of water per unit of time. The faster the water flows, the more likely a rapid will form.
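A small worked example of the gradient measure described above (elevation drop per unit distance, conventionally quoted in feet per mile); the reach length and elevation drop are hypothetical.

```python
# Stream gradient = elevation drop / horizontal distance, in feet per mile.
def gradient_ft_per_mile(elevation_drop_ft: float, distance_miles: float) -> float:
    return elevation_drop_ft / distance_miles

# Hypothetical reach: the river falls 40 ft over a 2-mile stretch.
print(gradient_ft_per_mile(40.0, 2.0), "ft/mi")  # -> 20.0 ft/mi
```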
Rapids are categorized in classes, generally running from I to VI. A Class 5 rapid may be further categorized as Class 5.1-5.9. While Class I rapids are easy to navigate and require little maneuvering, Class VI rapids pose a threat to life with little or no chance of rescue. River rafting sports are carried out where many rapids are present along the course.
Constriction refers to a river flowing through a narrower channel, which increases the velocity of the water. This may also lead to the creation of obstructions through sediment transportation and erosion. Obstacles may result from human activity, natural landslides and earthquakes, or the accumulation of sediment or debris. The more prominent these four factors are in a river, the more likely rapids are to form.
| Physical sciences | Fluvial landforms | null |
25927 | https://en.wikipedia.org/wiki/Rutherfordium | Rutherfordium | Rutherfordium is a synthetic chemical element; it has symbol Rf and atomic number 104. It is named after physicist Ernest Rutherford. As a synthetic element, it is not found in nature and can only be made in a particle accelerator. It is radioactive; the most stable known isotope, 267Rf, has a half-life of about 48 minutes.
In the periodic table, it is a d-block element and the second of the fourth-row transition elements. It is in period 7 and is a group 4 element. Chemistry experiments have confirmed that rutherfordium behaves as the heavier homolog to hafnium in group 4. The chemical properties of rutherfordium are characterized only partly. They compare well with the other group 4 elements, even though some calculations had indicated that the element might show significantly different properties due to relativistic effects.
In the 1960s, small amounts of rutherfordium were produced at Joint Institute for Nuclear Research in the Soviet Union and at Lawrence Berkeley National Laboratory in California. Priority of discovery and hence the name of the element was disputed between Soviet and American scientists, and it was not until 1997 that the International Union of Pure and Applied Chemistry (IUPAC) established rutherfordium as the official name of the element.
Introduction
History
Discovery
Rutherfordium was reportedly first detected in 1964 at the Joint Institute for Nuclear Research at Dubna (Soviet Union at the time). Researchers there bombarded a plutonium-242 target with neon-22 ions; a spontaneous fission activity with half-life 0.3 ± 0.1 seconds was detected and assigned to 260104. Later work found no isotope of element 104 with this half-life, so that this assignment must be considered incorrect.
In 1966–1969, the experiment was repeated. This time, the reaction products were separated by gradient thermochromatography after conversion to chlorides by interaction with ZrCl4. The team identified a spontaneous fission activity contained within a volatile chloride displaying eka-hafnium properties.
242Pu + 22Ne → 264−x104 → 264−x104Cl4
The researchers considered the results to support the 0.3 second half-life. Although it is now known that there is no isotope of element 104 with such a half-life, the chemistry does fit that of element 104, as chloride volatility is much greater in group 4 than in group 3 (or the actinides).
In 1969, researchers at University of California, Berkeley conclusively synthesized the element by bombarding a californium-249 target with carbon-12 ions and measured the alpha decay of 257104, correlated with the daughter decay of nobelium-253:
249Cf + 12C → 257104 + 4 n
They were unable to confirm the 0.3-second half-life for 260104, and instead found a 10–30 millisecond half-life for this isotope, agreeing with the modern value of 21 milliseconds. In 1970, the American team chemically identified element 104 using the ion-exchange separation method, proving it to be a group 4 element and the heavier homologue of hafnium.
The American synthesis was independently confirmed in 1973 and secured the identification of rutherfordium as the parent by the observation of K-alpha X-rays in the elemental signature of the 257104 decay product, nobelium-253.
Naming controversy
As a consequence of the initial competing claims of discovery, an element naming controversy arose. Since the Soviets claimed to have first detected the new element they suggested the name kurchatovium (Ku) in honor of Igor Kurchatov (1903–1960), former head of Soviet nuclear research. This name had been used in books of the Soviet Bloc as the official name of the element. The Americans, however, proposed rutherfordium (Rf) for the new element to honor New Zealand physicist Ernest Rutherford, who is known as the "father" of nuclear physics. In 1992, the IUPAC/IUPAP Transfermium Working Group (TWG) assessed the claims of discovery and concluded that both teams provided contemporaneous evidence to the synthesis of element 104 in 1969, and that credit should be shared between the two groups. In particular, this involved the TWG performing a new retrospective reanalysis of the Russian work in the face of the later-discovered fact that there is no 0.3-second isotope of element 104: they reinterpreted the Dubna results as having been caused by a spontaneous fission branch of 259104.
The American group wrote a scathing response to the findings of the TWG, stating that it had placed too much emphasis on the results from the Dubna group. In particular, they pointed out that the Russian group had altered the details of their claims several times over a period of 20 years, a fact that the Russian team does not deny. They also stressed that the TWG had given too much credence to the chemistry experiments performed by the Russians, considered the TWG's retrospective treatment of the Russian work based on unpublished documents to have been "highly irregular", noted that there was no proof that 259104 had a spontaneous fission branch at all (as of 2021 there still is not), and accused the TWG of not having appropriately qualified personnel on the committee. The TWG responded by saying that this was not the case and, having assessed each point raised by the American group, said that it found no reason to alter its conclusion regarding priority of discovery.
The International Union of Pure and Applied Chemistry (IUPAC) adopted unnilquadium (Unq) as a temporary, systematic element name, derived from the Latin names for digits 1, 0, and 4. In 1994, IUPAC suggested a set of names for elements 104 through 109, in which dubnium (Db) became element 104 and rutherfordium became element 106. This recommendation was criticized by the American scientists for several reasons. Firstly, their suggestions were scrambled: the names rutherfordium and hahnium, originally suggested by Berkeley for elements 104 and 105, were respectively reassigned to elements 106 and 108. Secondly, elements 104 and 105 were given names favored by JINR, despite earlier recognition of LBL as an equal co-discoverer for both of them. Thirdly and most importantly, IUPAC rejected the name seaborgium for element 106, having just approved a rule that an element could not be named after a living person, even though the IUPAC had given the LBNL team the sole credit for its discovery. In 1997, IUPAC renamed elements 104 to 109, and gave elements 104 and 106 the Berkeley proposals rutherfordium and seaborgium. The name dubnium was given to element 105 at the same time. The 1997 names were accepted by researchers and became the standard.
Isotopes
Rutherfordium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Seventeen different isotopes have been reported with atomic masses from 252 to 270 (with the exceptions of 264 and 269). Most of these decay predominantly through spontaneous fission, particularly isotopes with even neutron numbers, while some of the lighter isotopes with odd neutron numbers also have significant alpha decay branches.
Stability and half-lives
Of the isotopes whose half-lives are known, the lighter isotopes usually have shorter half-lives. The three lightest known isotopes have half-lives of under 50 μs, with the lightest reported isotope 252Rf having a half-life shorter than one microsecond. The isotopes 256Rf, 258Rf, and 260Rf are more stable at around 10 ms; 255Rf, 257Rf, 259Rf, and 262Rf live between 1 and 5 seconds; and 261Rf, 265Rf, and 263Rf are more stable, at around 1.1, 1.5, and 10 minutes respectively. The most stable known isotope, 267Rf, is one of the heaviest and has a half-life of about 48 minutes. Rutherfordium isotopes with an odd neutron number tend to have longer half-lives than their even–even neighbors because the odd neutron provides additional hindrance against spontaneous fission.
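As a rough illustration of what these half-lives mean, the standard exponential-decay relation N/N0 = 2^(−t/T½) can be evaluated; a minimal sketch in Python using the roughly 48-minute half-life quoted above (the sample times are arbitrary):

```python
def fraction_remaining(t_minutes, half_life_minutes):
    """Fraction of a radioactive sample left after time t: N/N0 = 2**(-t / T_half)."""
    return 2.0 ** (-t_minutes / half_life_minutes)


half_life_267rf = 48.0  # minutes, the approximate value quoted above
for t in (48.0, 96.0, 240.0):
    print(f"{t:>5.0f} min: {fraction_remaining(t, half_life_267rf):.3f}")
# 48 min -> 0.500, 96 min -> 0.250, 240 min -> 0.031
```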
The lightest isotopes were synthesized by direct fusion between two lighter nuclei and as decay products. The heaviest isotope produced by direct fusion is 262Rf; heavier isotopes have only been observed as decay products of elements with larger atomic numbers. The heavy isotopes 266Rf and 268Rf have also been reported as electron capture daughters of the dubnium isotopes 266Db and 268Db, but have short half-lives, decaying by spontaneous fission. It seems likely that the same is true for 270Rf, a possible daughter of 270Db. These three isotopes remain unconfirmed.
In 1999, American scientists at the University of California, Berkeley, announced that they had succeeded in synthesizing three atoms of 293Og. These parent nuclei were reported to have successively emitted seven alpha particles to form 265Rf nuclei, but their claim was retracted in 2001. This isotope was later discovered in 2010 as the final product in the decay chain of 285Fl.
Predicted properties
Very few properties of rutherfordium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that rutherfordium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of rutherfordium metal remain unknown and only predictions are available.
Chemical
Rutherfordium is the first transactinide element and the second member of the 6d series of transition metals. Calculated values of its ionization potentials, atomic radius, and the radii, orbital energies, and ground levels of its ionized states are similar to those of hafnium and very different from those of lead. It was therefore concluded that rutherfordium's basic properties will resemble those of the other group 4 elements, titanium, zirconium, and hafnium. Some of its properties were determined by gas-phase experiments and aqueous chemistry. The oxidation state +4 is the only stable state for the latter two elements, and rutherfordium should therefore also exhibit a stable +4 state. In addition, rutherfordium is expected to be able to form a less stable +3 state. The standard reduction potential of the Rf4+/Rf couple is predicted to be higher than −1.7 V.
Initial predictions of the chemical properties of rutherfordium were based on calculations which indicated that the relativistic effects on the electron shell might be strong enough that the 7p orbitals would have a lower energy level than the 6d orbitals, giving it a valence electron configuration of 6d1 7s2 7p1 or even 7s2 7p2, and therefore making the element behave more like lead than hafnium. With better calculation methods and experimental studies of the chemical properties of rutherfordium compounds, it could be shown that this does not happen and that rutherfordium instead behaves like the rest of the group 4 elements. Later it was shown in ab initio calculations at a high level of accuracy that the Rf atom has a ground state with the 6d2 7s2 valence configuration and a low-lying excited 6d1 7s2 7p1 state with an excitation energy of only 0.3–0.5 eV.
In an analogous manner to zirconium and hafnium, rutherfordium is projected to form a very stable, refractory oxide, RfO2. It reacts with halogens to form tetrahalides, RfX4, which hydrolyze on contact with water to form oxyhalides RfOX2. The tetrahalides are volatile solids existing as monomeric tetrahedral molecules in the vapor phase.
In the aqueous phase, the Rf4+ ion hydrolyzes less than titanium(IV) and to a similar extent as zirconium and hafnium, thus resulting in the RfO2+ ion. Treatment of the halides with halide ions promotes the formation of complex ions. The use of chloride and bromide ions produces the hexahalide complexes [RfCl6]2− and [RfBr6]2−. For the fluoride complexes, zirconium and hafnium tend to form hepta- and octa- complexes. Thus, for the larger rutherfordium ion, the complexes [RfF6]2−, [RfF7]3−, and [RfF8]4− are possible.
Physical and atomic
Rutherfordium is expected to be a solid under normal conditions and have a hexagonal close-packed crystal structure (c/a = 1.61), similar to its lighter congener hafnium. It should be a metal with density ~17 g/cm3. The atomic radius of rutherfordium is expected to be ~150 pm. Due to relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, Rf+ and Rf2+ ions are predicted to give up 6d electrons instead of 7s electrons, which is the opposite of the behavior of its lighter homologs. When under high pressure (variously calculated as 72 or ~50 GPa), rutherfordium is expected to transition to body-centered cubic crystal structure; hafnium transforms to this structure at 71±1 GPa, but has an intermediate ω structure that it transforms to at 38±8 GPa that should be lacking for rutherfordium.
Experimental chemistry
Gas phase
Early work on the study of the chemistry of rutherfordium focused on gas thermochromatography and measurement of relative deposition temperature adsorption curves. The initial work was carried out at Dubna in an attempt to reaffirm their discovery of the element. Recent work is more reliable regarding the identification of the parent rutherfordium radioisotopes. The isotope 261mRf has been used for these studies, though the long-lived isotope 267Rf (produced in the decay chain of 291Lv, 287Fl, and 283Cn) may be advantageous for future experiments. The experiments relied on the expectation that rutherfordium would be a 6d element in group 4 and should therefore form a volatile molecular tetrachloride, that would be tetrahedral in shape. Rutherfordium(IV) chloride is more volatile than its lighter homologue hafnium(IV) chloride (HfCl4) because its bonds are more covalent.
A series of experiments confirmed that rutherfordium behaves as a typical member of group 4, forming a tetravalent chloride (RfCl4) and bromide (RfBr4) as well as an oxychloride (RfOCl2). A decreased volatility was observed for RfCl4 when potassium chloride is provided as the solid phase instead of gas, highly indicative of the formation of a nonvolatile K2RfCl6 mixed salt.
Aqueous phase
Rutherfordium is expected to have the electron configuration [Rn]5f14 6d2 7s2 and therefore behave as the heavier homologue of hafnium in group 4 of the periodic table. It should therefore readily form a hydrated Rf4+ ion in strong acid solution and should readily form complexes in hydrochloric acid, hydrobromic or hydrofluoric acid solutions.
The most conclusive aqueous chemistry studies of rutherfordium have been performed by the Japanese team at Japan Atomic Energy Research Institute using the isotope 261mRf. Extraction experiments from hydrochloric acid solutions using isotopes of rutherfordium, hafnium, zirconium, as well as the pseudo-group 4 element thorium have proved a non-actinide behavior for rutherfordium. A comparison with its lighter homologues placed rutherfordium firmly in group 4 and indicated the formation of a hexachlororutherfordate complex in chloride solutions, in a manner similar to hafnium and zirconium.
Rf4+ + 6 Cl− → [RfCl6]2−
Very similar results were observed in hydrofluoric acid solutions. Differences in the extraction curves were interpreted as a weaker affinity for fluoride ion and the formation of the hexafluororutherfordate ion, whereas hafnium and zirconium ions complex seven or eight fluoride ions at the concentrations used:
Rf4+ + 6 F− → [RfF6]2−
Experiments performed in mixed sulfuric and nitric acid solutions show that rutherfordium has a much weaker affinity towards forming sulfate complexes than hafnium. This result is in agreement with predictions, which expect rutherfordium complexes to be less stable than those of zirconium and hafnium because of a smaller ionic contribution to the bonding. This arises because rutherfordium has a larger ionic radius (76 pm) than zirconium (71 pm) and hafnium (72 pm), and also because of relativistic stabilisation of the 7s orbital and destabilisation and spin–orbit splitting of the 6d orbitals.
Coprecipitation experiments performed in 2021 studied rutherfordium's behaviour in basic solution containing ammonia or sodium hydroxide, using zirconium, hafnium, and thorium as comparisons. It was found that rutherfordium does not strongly coordinate with ammonia and instead coprecipitates out as a hydroxide, which is probably Rf(OH)4.
| Physical sciences | Group 4 | Chemistry |
25948 | https://en.wikipedia.org/wiki/Refraction | Refraction | In physics, refraction is the redirection of a wave as it passes from one medium to another. The redirection can be caused by the wave's change in speed or by a change in the medium. Refraction of light is the most commonly observed phenomenon, but other waves such as sound waves and water waves also experience refraction. How much a wave is refracted is determined by the change in wave speed and the initial direction of wave propagation relative to the direction of change in speed.
For light, refraction follows Snell's law, which states that, for a given pair of media, the ratio of the sines of the angle of incidence and angle of refraction is equal to the ratio of phase velocities in the two media, or equivalently, to the refractive indices of the two media: sin θ1 / sin θ2 = v1 / v2 = n2 / n1, where θ1 and θ2 are the angles of incidence and refraction, v1 and v2 the phase velocities, and n1 and n2 the refractive indices of the first and second medium respectively.
Optical prisms and lenses use refraction to redirect light, as does the human eye. The refractive index of materials varies with the wavelength of light, and thus the angle of the refraction also varies correspondingly. This is called dispersion and causes prisms and rainbows to divide white light into its constituent spectral colors.
General explanation
A correct explanation of refraction involves two separate parts, both a result of the wave nature of light.
Light slows as it travels through a medium other than vacuum (such as air, glass or water). This is not because of scattering or absorption. Rather, it is because, as an electromagnetic oscillation, light itself causes other electrically charged particles, such as electrons, to oscillate. The oscillating electrons emit their own electromagnetic waves which interact with the original light. The resulting "combined" wave has wave packets that pass an observer at a slower rate. The light has effectively been slowed. When light returns to a vacuum and there are no electrons nearby, this slowing effect ends and its speed returns to c, the speed of light in vacuum.
When light enters a slower medium at an angle, one side of the wavefront is slowed before the other. This asymmetrical slowing of the light causes it to change the angle of its travel. Once light is within the new medium with constant properties, it travels in a straight line again.
Slowing of light
As described above, the speed of light is slower in a medium other than vacuum. This slowing applies to any medium such as air, water, or glass, and is responsible for phenomena such as refraction. When light leaves the medium and returns to a vacuum, and ignoring any effects of gravity, its speed returns to the usual speed of light in vacuum, c.
A correct explanation rests on light's nature as an electromagnetic wave. Because light is an oscillating electrical/magnetic wave, light traveling in a medium causes the electrically charged electrons of the material to also oscillate. (The material's protons also oscillate, but as they are around 2000 times more massive, their movement, and therefore their effect, is far smaller.) A moving electrical charge emits electromagnetic waves of its own. The electromagnetic waves emitted by the oscillating electrons interact with the electromagnetic waves that make up the original light, similar to water waves on a pond, a process known as constructive interference. When two waves interfere in this way, the resulting "combined" wave may have wave packets that pass an observer at a slower rate. The light has effectively been slowed. When the light leaves the material, this interaction with electrons no longer happens, and therefore the wave packet rate (and therefore its speed) returns to normal.
Bending of light
Consider a wave going from one material to another where its speed is slower, as in the figure. If it reaches the interface between the materials at an angle, one side of the wave will reach the second material first and therefore slow down earlier. With one side of the wave going slower, the whole wave will pivot towards that side. This is why a wave will bend away from the surface, or toward the normal, when going into a slower material. In the opposite case of a wave reaching a material where the speed is higher, one side of the wave will speed up and the wave will pivot away from that side.
Another way of understanding the same thing is to consider the change in wavelength at the interface. When the wave goes from one material to another where it has a different speed v, the frequency f of the wave stays the same, but the distance between wavefronts, the wavelength λ = v/f, will change. If the speed is decreased, such as in the figure to the right, the wavelength will also decrease. With an angle between the wave fronts and the interface, and a change in the distance between the wave fronts, the angle must change over the interface to keep the wave fronts intact. From these considerations the relationship between the angle of incidence θ1, the angle of transmission θ2, and the wave speeds v1 and v2 in the two materials can be derived. This is the law of refraction, or Snell's law, and can be written as sin θ1 / sin θ2 = v1 / v2.
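The two consequences of the speed change described above can be illustrated with a minimal sketch in Python; the wave speeds and angle below are hypothetical values chosen only for illustration.

```python
import math


def transmitted_angle_deg(theta1_deg, v1, v2):
    """Solve sin(theta1)/sin(theta2) = v1/v2 for the transmission angle theta2 (degrees)."""
    return math.degrees(math.asin(math.sin(math.radians(theta1_deg)) * v2 / v1))


def transmitted_wavelength(wavelength1, v1, v2):
    """Frequency is unchanged across the interface, so wavelength scales with the speed."""
    return wavelength1 * v2 / v1


# Hypothetical wave hitting the interface at 40 degrees and slowing to half its speed:
print(transmitted_angle_deg(40.0, 2.0, 1.0))   # ~18.7 degrees, pivoted toward the normal
print(transmitted_wavelength(1.0, 2.0, 1.0))   # wavelength halves to 0.5
```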
The phenomenon of refraction can be derived in a more fundamental way from the two- or three-dimensional wave equation. The boundary condition at the interface then requires the tangential component of the wave vector to be identical on the two sides of the interface. Since the magnitude of the wave vector depends on the wave speed, this requires a change in the direction of the wave vector.
The relevant wave speed in the discussion above is the phase velocity of the wave. This is typically close to the group velocity which can be seen as the truer speed of a wave, but when they differ it is important to use the phase velocity in all calculations relating to refraction.
A wave traveling perpendicular to a boundary, i.e. having its wavefronts parallel to the boundary, will not change direction even if the speed of the wave changes.
Dispersion of light
Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass and water have higher refractive indexes than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency (and wavelength), a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies and different wavelengths.
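As a rough numerical illustration of dispersion, Snell's law can be evaluated for two slightly different indices; the values below are only approximately representative of a crown-glass-like material and are not taken from this article.

```python
import math


def refraction_angle_deg(theta1_deg, n1, n2):
    """Snell's law, n1*sin(theta1) = n2*sin(theta2), solved for theta2 in degrees."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))


theta_in = 60.0             # angle of incidence in air (n ~ 1.0)
n_red, n_blue = 1.51, 1.53  # illustrative indices; blue light sees the higher index
print(refraction_angle_deg(theta_in, 1.0, n_red))    # ~35.0 degrees
print(refraction_angle_deg(theta_in, 1.0, n_blue))   # ~34.5 degrees, bent more strongly
```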
Law
For light, the refractive index n of a material is more often used than the wave phase speed v in the material. They are directly related through the speed of light in vacuum c as n = c / v.
In optics, therefore, the law of refraction is typically written as n1 sin θ1 = n2 sin θ2.
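A minimal sketch of this form of the law, together with the critical angle it implies for total internal reflection; the indices for air (about 1) and water (about 1.33) are the approximate values quoted in the next section.

```python
import math


def snell_theta2_deg(n1, theta1_deg, n2):
    """Solve n1*sin(theta1) = n2*sin(theta2) for theta2 (degrees)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))


def critical_angle_deg(n_dense, n_rare):
    """Incidence angle in the denser medium beyond which no refracted ray exists."""
    return math.degrees(math.asin(n_rare / n_dense))


print(snell_theta2_deg(1.0, 45.0, 1.33))   # air to water: ~32.1 degrees
print(critical_angle_deg(1.33, 1.0))       # water-air boundary: ~48.8 degrees
```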
On water
Refraction occurs when light goes through a water surface since water has a refractive index of 1.33 and air has a refractive index of about 1. Looking at a straight object, such as a pencil in the figure here, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is.
The depth that the water appears to be when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water. The opposite correction must be made by an archer fish.
For small angles of incidence (measured from the normal, when sin θ is approximately the same as tan θ), the ratio of apparent to real depth is the ratio of the refractive index of air to that of water. But, as the angle of incidence approaches 90°, the apparent depth approaches zero, although reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases towards the angle of total internal reflection, although the image also fades from view as this limit is approached.
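A minimal sketch of the small-angle apparent-depth relation described above, using the approximate indices already quoted:

```python
def apparent_depth(real_depth_m, n_air=1.0, n_water=1.33):
    """Small-angle approximation: apparent depth = real depth * (n_air / n_water)."""
    return real_depth_m * n_air / n_water


# An object 1.0 m below the surface appears only about 0.75 m deep from above:
print(round(apparent_depth(1.0), 2))   # 0.75
```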
Atmospheric
The refractive index of air depends on the air density and thus varies with air temperature and pressure. Since the pressure is lower at higher altitudes, the refractive index is also lower, causing light rays to refract towards the Earth's surface when traveling long distances through the atmosphere. This shifts the apparent positions of stars slightly when they are close to the horizon and makes the sun visible before it geometrically rises above the horizon during a sunrise.
Temperature variations in the air can also cause refraction of light. This can be seen as a heat haze when hot and cold air is mixed, e.g. over a fire, in engine exhaust, or when opening a window on a cold day. This makes objects viewed through the mixed air appear to shimmer or move around randomly as the hot and cold air moves. This effect is also visible from normal variations in air temperature during a sunny day when using high-magnification telephoto lenses, and often limits the image quality in these cases.
In a similar way, atmospheric turbulence gives rapidly varying distortions in the images of astronomical telescopes, limiting the resolution of terrestrial telescopes that do not use adaptive optics or other techniques for overcoming these atmospheric distortions.
Air temperature variations close to the surface can give rise to other optical phenomena, such as mirages and Fata Morgana. Most commonly, air heated by a hot road on a sunny day deflects light approaching at a shallow angle towards a viewer. This makes the road appear reflective, giving the illusion of water covering the road.
Clinical significance
In medicine, particularly optometry, ophthalmology and orthoptics, refraction (also known as refractometry) is a clinical test in which a phoropter may be used by the appropriate eye care professional to determine the eye's refractive error and the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented to determine which provides the sharpest, clearest vision.
Refractive surgery is a medical procedure to treat common vision disorders.
Mechanical waves
Water
Water waves travel slower in shallower water. This can be used to demonstrate refraction in ripple tanks and also explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their original direction of travel to an angle more normal to the shoreline.
Sound
In underwater acoustics, refraction is the bending or curving of a sound ray that results when the ray passes through a sound speed gradient from a region of one sound speed to a region of a different speed. The amount of ray bending is dependent on the amount of difference between sound speeds, that is, the variation in temperature, salinity, and pressure of the water.
Similar acoustics effects are also found in the Earth's atmosphere. The phenomenon of refraction of sound in the atmosphere has been known for centuries. Beginning in the early 1970s, widespread analysis of this effect came into vogue through the designing of urban highways and noise barriers to address the meteorological effects of bending of sound rays in the lower atmosphere.
| Physical sciences | Optics | null |
25949 | https://en.wikipedia.org/wiki/Recreational%20drug%20use | Recreational drug use | Recreational drug use is the use of one or more psychoactive drugs to induce an altered state of consciousness, either for pleasure or for some other casual purpose or pastime. When a psychoactive drug enters the user's body, it induces an intoxicating effect. Recreational drugs are commonly divided into three categories: depressants (drugs that induce a feeling of relaxation and calmness), stimulants (drugs that induce a sense of energy and alertness), and hallucinogens (drugs that induce perceptual distortions such as hallucination).
In popular practice, recreational drug use is generally tolerated as a social behaviour, rather than perceived as the medical condition of self-medication. However, drug use and drug addiction are severely stigmatized everywhere in the world. Many people also use prescribed and controlled depressants such as opioids, opiates, and benzodiazepines. What controlled substances are considered generally unlawful to possess varies by country, but usually includes cannabis, cocaine, opioids, MDMA, amphetamine, methamphetamine, psychedelics, benzodiazepines, and barbiturates. It is estimated that about 5% of people worldwide aged 15 to 65 (158 million to 351 million) had used controlled drugs at least once.
Common recreational drugs include caffeine, commonly found in coffee, tea, soft drinks, and chocolate; alcohol, commonly found in beer, wine, cocktails, and distilled spirits; nicotine, commonly found in tobacco, tobacco-based products, and electronic cigarettes; cannabis and hashish (with legality of possession varying inter/intra-nationally); and the controlled substances listed as controlled drugs in the Single Convention on Narcotic Drugs (1961) and the Convention on Psychotropic Substances (1971) of the United Nations (UN). Since the early 2000s, the European Union (EU) has developed several comprehensive and multidisciplinary strategies as part of its drug policy in order to prevent the diffusion of recreational drug use and abuse among the European population and raise public awareness on the adverse effects of drugs among all member states of the European Union, as well as conjoined efforts with European law enforcement agencies, such as Europol and EMCDDA, in order to counter organized crime and illegal drug trade in Europe.
Reasons for use
Many researchers have explored the etiology of recreational drug use. Some of the most common theories are: genetics, personality type, psychological problems, self-medication, sex, age, depression, curiosity, boredom, rebelliousness, a sense of belonging to a group, family and attachment issues, history of trauma, failure at school or work, socioeconomic stressors, peer pressure, juvenile delinquency, availability, historical factors, or socio-cultural influences. There has been no consensus on a single cause. Instead, experts tend to apply the biopsychosocial model. Any number of factors may influence an individual's drug use, as they are not mutually exclusive. Regardless of genetics, mental health, or traumatic experiences, social factors play a large role in the exposure to and availability of certain types of drugs and patterns of use.
According to addiction researcher Martin A. Plant, some people go through a period of self-redefinition before initiating recreational drug use. They tend to view using drugs as part of a general lifestyle that involves belonging to a subculture that they associate with heightened status and the challenging of social norms. Plant states: "From the user's point of view there are many positive reasons to become part of the milieu of drug taking. The reasons for drug use appear to have as much to do with needs for friendship, pleasure and status as they do with unhappiness or poverty. Becoming a drug taker, to many people, is a positive affirmation rather than a negative experience".
Evolution
Anthropological research has suggested that humans "may have evolved to counter-exploit plant neurotoxins". The ability to use botanical chemicals to serve the function of endogenous neurotransmitters may have improved survival rates, conferring an evolutionary advantage. A typically restrictive prehistoric diet may have emphasized the apparent benefit of consuming psychoactive drugs, which had themselves evolved to imitate neurotransmitters. Chemical–ecological adaptations and the genetics of hepatic enzymes, particularly cytochrome P450, have led researchers to propose that "humans have shared a co-evolutionary relationship with psychotropic plant substances that is millions of years old."
Health risks
The severity of impact and type of risks that come with recreational drug use vary widely with the drug in question and the amount being used. There are many factors in the environment and within the user that interact with each drug differently. Alcohol is sometimes considered one of the most dangerous recreational drugs. Alcoholic drinks, tobacco products and other nicotine-based products (e.g., electronic cigarettes), and cannabis are regarded by various medical professionals as the most common and widespread gateway drugs. In the United States, Australia, and New Zealand, the general onset of drinking alcohol, tobacco smoking, cannabis smoking, and consumption of multiple drugs most frequently occurs during adolescence and in middle school and secondary school settings.
Some scientific studies in the early 21st century found that a low to moderate level of alcohol consumption, particularly of red wine, might have substantial health benefits such as decreased risk of cardiovascular diseases, stroke, and cognitive decline. This claim has been disputed, specifically by British researcher David Nutt, professor of neuropsychopharmacology at the Imperial College London, who stated that studies showing benefits for "moderate" alcohol consumption in "some middle-aged men" lacked controls for the variable of what the subjects were drinking beforehand. Experts in the United Kingdom have suggested that some psychoactive drugs that may be causing less harm to fewer users (although they are also used less frequently in the first place) are cannabis, psilocybin mushrooms, LSD, and MDMA; however, these drugs have risks and side effects of their own.
Drug harmfulness
Drug harmfulness is defined as the degree to which a psychoactive drug has the potential to cause harm to the user and is measured in several ways, such as by addictiveness and the potential for physical harm. More objectively harmful drugs may be colloquially referred to as "hard drugs", and less harmful drugs as "soft drugs". The term "soft drug" is considered controversial by critics as it may imply the false belief that soft drugs cause lesser or insignificant harm.
Responsible use
Responsible drug use advocates that users should not take drugs at the same time as activities such as driving, swimming, operating machinery, or other activities that are unsafe without a sober state. Responsible drug use is emphasized as a primary prevention technique in harm-reduction drug policies. Harm-reduction policies were popularized in the late 1980s, although they began in the 1970s counter-culture, through cartoons explaining responsible drug use and the consequences of irresponsible drug use to users. Another issue is that the illegality of drugs causes social and economic consequences for users—the drugs may be "cut" with adulterants and the purity varies wildly, making overdoses more likely—and legalization of drug production and distribution could reduce these and other dangers of illegal drug use.
Prevention
In efforts to curtail recreational drug use, governments worldwide introduced several laws prohibiting the possession of almost all varieties of recreational drugs during the 20th century. The "War on Drugs" promoted by the United States, however, is now facing increasing criticism. Evidence is insufficient to tell if behavioral interventions help prevent recreational drug use in children.
One in four adolescents has used an illegal drug, and one in ten of the adolescents who need addiction treatment receives some type of care. School-based programs are the most commonly used method of drug use education; however, the success rates of these intervention programs depend heavily on the commitment of participants and are limited in general.
Demographics
Australia
Alcohol is the most widely used recreational drug in Australia. 86.2% of Australians aged 12 years and over have consumed alcohol at least once in their lifetime, compared to 34.8% of Australians aged 12 years and over who have used cannabis at least once in their lifetime.
United States
From the mid-19th century to the 1930s, American physicians prescribed Cannabis sativa as a prescription drug for various medical conditions. In the 1960s, the counterculture movement introduced the use of psychoactive drugs, including cannabis. Young adults and college students reported recreational use of cannabis, among other drugs, at 20–25%, while the cultural mindset toward its use was open and curious. In 1969, the FBI reported that between the years 1966 and 1968, the number of arrests for marijuana possession, which had been outlawed throughout the United States under the Marijuana Tax Act of 1937, had increased by 98%. Despite acknowledgement that drug use was greatly growing among America's youth during the late 1960s, surveys have suggested that only as much as 4% of the American population had ever smoked marijuana by 1969. By 1972, however, that number would increase to 12%. That number would then double by 1977.
The Controlled Substances Act of 1970 classified marijuana along with heroin and LSD as a Schedule I drug, i.e., having the relatively highest abuse potential and no accepted medical use. Most marijuana at that time came from Mexico, but in 1975 the Mexican government agreed to eradicate the crop by spraying it with the herbicide paraquat, raising fears of toxic side effects. Colombia then became the main supplier. The "zero tolerance" climate of the Reagan and Bush administrations (1981–1993) resulted in passage of strict laws and mandatory sentences for possession of marijuana. The "War on Drugs" thus brought with it a shift from reliance on imported supplies to domestic cultivation, particularly in Hawaii and California. Beginning in 1982, the Drug Enforcement Administration turned increased attention to marijuana farms in the United States, and there was a shift to the indoor growing of plants specially developed for small size and high yield. After over a decade of decreasing use, marijuana smoking began an upward trend once more in the early 1990s, especially among teenagers, but by the end of the decade this upswing had leveled off well below former peaks of use.
Society and culture
Many movements and organizations are advocating for or against the liberalization of the use of recreational drugs, most notably regarding the legalization of marijuana and cannabinoids for medical and/or recreational use. Subcultures have emerged among users of recreational drugs, as well as alternative lifestyles and social movements among those who abstain from them, such as teetotalism and "straight edge".
Since the early 2000s, medical professionals have acknowledged and addressed the problem of the increasing consumption of alcoholic drinks and club drugs (such as MDMA, cocaine, rohypnol, GHB, ketamine, PCP, LSD, and methamphetamine) associated with rave culture among adolescents and young adults in the Western world. Studies have shown that adolescents are more likely than young adults to use multiple drugs, and the consumption of club drugs is highly associated with the presence of criminal behaviors and recent alcohol abuse or dependence.
The prevalence of recreational drugs in human societies is widely reflected in fiction, entertainment, and the arts, subject to prevailing laws and social conventions. For instance, in the music industry, the musical genres hip hop, hardcore rap, and trap, alongside their derivative subgenres and subcultures, are most notorious for having continuously celebrated and promoted drug trafficking, gangster lifestyle, and consumption of alcohol and other drugs since their inception in the United States during the late 1980s–early 1990s. In video games, drugs are portrayed in a variety of ways, including power-ups (cocaine gum replenishes stamina in Red Dead Redemption 2), obstacles to be avoided (such as the Fuzzies in Super Mario World 2: Yoshi's Island that distort the player's view when accidentally consumed), and items to be bought and sold for in-game currency (coke dealing is a big part of Scarface: The World Is Yours). In the Fallout video game franchise, drugs ("chems" in the game) can fill any of the roles mentioned above. Drug trafficking, gang rivalries, and their related criminal underworld also play a big part in the Grand Theft Auto video game franchise.
Common recreational drugs
The following substances are commonly used recreationally:
Alcohol: Most drinking alcohol is ethanol, C2H5OH. Drinking alcohol creates intoxication, relaxation and lowered inhibitions. It is produced by the fermentation of sugars by yeasts to create wine, beer, and distilled liquor (e.g., vodka, rum, gin, etc.). In most areas of the world, it is legal for those over a certain age (18 in most countries). It is an IARC Group 1 carcinogen and a teratogen. Alcohol withdrawal can be life-threatening.
Amphetamines: Used recreationally to provide alertness and a sense of energy. Prescribed for ADHD, narcolepsy, depression, and weight loss. A potent central nervous system stimulant, in the 1940s and 50s methamphetamine was used by Axis and Allied troops in World War II, and, later on, other armies, and by Japanese factory workers. It increases muscle strength and fatigue resistance and improves reaction time. Methamphetamine use can be neurotoxic, which means it damages dopamine neurons. As a result of this brain damage, chronic use can lead to post acute withdrawal syndrome.
Caffeine: Often found in coffee, black tea, energy drinks, some soft drinks (e.g., Coca-Cola, Pepsi, and Mountain Dew, among others), and chocolate. It is the world's most widely consumed psychoactive drug, but has only mild dependence liability for long-term users.
Cannabis: Its common forms include marijuana and hashish, which are smoked, vaporized or eaten. It contains at least 85 cannabinoids. The primary psychoactive component is THC, which mimics the neurotransmitter anandamide, named after the Sanskrit ananda, "joy, bliss, delight". When cannabis is eaten, THC is metabolized into 11-OH-THC, which is the primary psychoactive compound of edible forms of cannabis. THC and 11-OH-THC are partial agonists at the CB1 and CB2 receptors of the endocannabinoid system.
Cocaine: It is available as a white powder, which is insufflated ("sniffed" into the nostrils) or converted into a solution with water and injected. A popular derivative, crack cocaine is typically smoked. When transformed into its freebase form, crack, the cocaine vapour may be inhaled directly. This is thought to increase bioavailability, but has also been found to be toxic, due to the production of methylecgonidine during pyrolysis.
MDMA: Commonly known as ecstasy, it is a common club drug in the rave scene.
Ketamine: An anesthetic used legally by paramedics and doctors in emergency situations for its dissociative and analgesic qualities and illegally in the club drug scene.
Lean: A liquid drug mixture made by mixing codeine-containing cough syrup with soft drinks and sweets. It originated in Houston in the 1990s. Since then, its use has grown, and it is often consumed at parties and in the trap music scene. It typically produces a drowsy feeling.
LSD: A popular ergoline derivative, first synthesized in 1938 by Albert Hofmann. However, he failed to notice its psychedelic effects until 1943. It is a serotonergic psychedelic (partial agonist at serotonin receptors, particularly the 5-HT2A subtype), like psilocin, mescaline and DMT. But LSD is unique in that it is also a partial agonist of dopamine and norepinephrine receptors, particularly the D2 subtypes. LSD (d-lysergic acid diethylamide) is a molecule of the lysergamide family, a subclass of the tryptamine family. In the 1950s, it was used in psychological therapy, and, covertly, by the CIA in Project MKULTRA, in which the drug was administered to unwitting US and Canadian citizens. It played a central role in the 1960s counterculture, and was banned in October 1968 by US President Lyndon B. Johnson.
Nitrous oxide: legally used by dentists as an anxiolytic and anaesthetic, it is also used recreationally by users who obtain it from whipped cream canisters (whippets or whip-its) (see inhalant), as it causes perceptual effects, a "high" and at higher doses, hallucinations.
Opiates and opioids: Available by prescription for pain relief. Commonly used opioids include oxycodone, hydrocodone, codeine, fentanyl, heroin, methadone, and morphine. Opioids have a high potential for addiction and have the ability to induce severe physical withdrawal symptoms upon cessation of frequent use. Heroin can be smoked, insufflated, or turned into a solution with water and injected. Percocet is a prescription opioid containing oxycodone and acetaminophen.
Psilocybin mushrooms: This hallucinogenic drug was an important drug in the psychedelic scene. Until 1963, when it was chemically analysed by Albert Hofmann, it was completely unknown to modern science that Psilocybe semilanceata ("Liberty Cap", common throughout Europe) contains psilocybin, a hallucinogen previously identified only in species native to Mexico, Asia, and North America.
Tobacco: Nicotiana tabacum. Nicotine is the key drug contained in tobacco leaves, which are either smoked, chewed or snuffed. Nicotine crosses the blood–brain barrier in 10–20 seconds and mimics the action of the neurotransmitter acetylcholine at nicotinic acetylcholine receptors in the brain and at the neuromuscular junction. The neuronal forms of the receptor are present both post-synaptically (involved in classical neurotransmission) and pre-synaptically, where they can influence the release of multiple neurotransmitters.
Tranquilizers: barbiturates, benzodiazepines (e.g. alprazolam, diazepam, etc.)(commonly prescribed for anxiety disorders; known to cause dementia and post acute withdrawal syndrome)
"Bath salts": slang term that generally refers to substituted cathinones such as Mephedrone and Methylenedioxypyrovalerone (MDPV), but not always
DMT – primary ingredient in ayahuasca, can also be smoked (inhalation causes a brief effect lasting usually 5 to 15 minutes).
Peyote: A hallucinogenic cactus containing mescaline, native to southwestern Texas and Mexico. Echinopsis pachanoi is a faster-growing cactus that also contains mescaline. Peyote is one of the few narcotics legally available in the United States, for religious use by the Native American Church.
Salvia divinorum: This hallucinogenic Mexican herb in the mint family; not considered recreational, most likely due to the nature of the hallucinations (legal in some jurisdictions)
Synthetic cannabis: "Spice", "K2", JWH-018, AM-2201
Quaaludes: A popular club drug in the 1970s. No longer prescribed or manufactured in many countries but remains popular in South Africa.
Routes of administration
Drugs are often associated with a particular route of administration. Many drugs can be consumed in more than one way. For example, marijuana can be swallowed like food or smoked, and cocaine can be "sniffed" in the nostrils, injected, or, with various modifications, smoked.
inhalation: all intoxicative inhalants (see below) that are gases or solvent vapours that are inhaled through the trachea, as the name suggests
insufflation: also known as "snorting", or "sniffing", this method involves the user placing a powder in the nostrils and breathing in through the nose, so that the drug is absorbed by the mucous membranes. Drugs that are "snorted", or "sniffed", include powdered amphetamines, cocaine, heroin, ketamine, MDMA, and snuff tobacco.
Subcutaneous injection (see also the article Skin popping): injection of drug into the third lowest layer of skin.
Intramuscular injection: injection of drug into a muscle.
intravenous injection (see also the article Drug injection): the user injects a solution of water and the drug into a vein, or less commonly, into the tissue. Drugs that are injected include morphine and heroin, less commonly other opioids. Stimulants like cocaine or methamphetamine may also be injected. In rare cases, users inject other drugs.
oral intake: caffeine, ethanol, cannabis edibles, psilocybin mushrooms, coca tea, poppy tea, laudanum, GHB, ecstasy pills with MDMA or various other substances (mainly stimulants and psychedelics), prescription and over-the-counter drugs (ADHD and narcolepsy medications, benzodiazepines, anxiolytics, sedatives, cough suppressants, morphine, codeine, opioids and others)
sublingual: substances diffuse into the blood through tissues under the tongue. Many psychoactive drugs can be or have been specifically designed for sublingual administration, including barbiturates, benzodiazepines, opioid analgesics with poor gastrointestinal bioavailability, LSD blotters, coca leaves, some hallucinogens. This route of administration is activated when chewing some forms of smokeless tobacco (e.g. dipping tobacco, snus).
intrarectal: administering into the rectum, most water-soluble drugs can be used this way.
smoking (see also the section below): tobacco, cannabis, opium, crystal meth, phencyclidine, crack cocaine, and heroin (diamorphine as freebase) known as chasing the dragon.
transdermal patches with prescription drugs: e.g. methylphenidate (Daytrana) and fentanyl.
Many drugs are taken through various routes. The intravenous route is the most efficient, but also one of the most dangerous. The nasal, rectal, inhalation and smoking routes are safer. The oral route is one of the safest and most comfortable, but often offers relatively low bioavailability.
Types
Depressants
Depressants are psychoactive drugs that temporarily diminish the function or activity of a specific part of the body or mind. Colloquially, depressants are known as "downers", and users generally take them to feel more relaxed and less tense. Examples of these kinds of effects may include anxiolysis, sedation, and hypotension. Depressants are widely used throughout the world as prescription medicines and as illicit substances. When these are used, effects may include anxiolysis (reduction of anxiety), analgesia (pain relief), sedation, somnolence, cognitive/memory impairment, dissociation, muscle relaxation, lowered blood pressure/heart rate, respiratory depression, anesthesia, and anticonvulsant effects. Depressants exert their effects through a number of different pharmacological mechanisms, the most prominent of which include potentiation of GABA or opioid activity, and inhibition of adrenergic, histamine or acetylcholine activity. Some are also capable of inducing feelings of euphoria. The most widely used depressant by far is alcohol (i.e. ethanol).
Stimulants or "uppers", such as amphetamines or cocaine, which increase mental or physical function, have an opposite effect to depressants.
Depressants, in particular alcohol, can precipitate psychosis. A 2019 systematic review and meta-analysis by Murrie et al. found that the rate of transition from opioid, alcohol and sedative induced psychosis to schizophrenia was 12%, 10% and 9% respectively.
Antihistamines
Antihistamines (or "histamine antagonists") inhibit the release or action of histamine. "Antihistamine" can be used to describe any histamine antagonist, but the term is usually reserved for the classical antihistamines that act upon the H1 histamine receptor. Antihistamines are used as treatment for allergies. Allergies are caused by an excessive response of the body to allergens, such as the pollen released by grasses and trees. An allergic reaction causes release of histamine by the body. Other uses of antihistamines are to help with normal symptoms of insect stings even if there is no allergic reaction. Their recreational appeal exists mainly due to their anticholinergic properties, that induce anxiolysis and, in some cases such as diphenhydramine, chlorpheniramine, and orphenadrine, a characteristic euphoria at moderate doses. High dosages taken to induce recreational drug effects may lead to overdoses. Antihistamines are also consumed in combination with alcohol, particularly by youth who find it hard to obtain alcohol. The combination of the two drugs can cause intoxication with lower alcohol doses.
Hallucinations and possibly delirium resembling the effects of Datura stramonium can result if the drug is taken in much higher than therapeutic doses. Antihistamines are widely available over the counter at drug stores (without a prescription), in the form of allergy medication and some cough medicines. They are sometimes used in combination with other substances such as alcohol.
The most common unsupervised use of antihistamines in terms of volume and percentage of the total is perhaps in parallel to the medicinal use of some antihistamines to extend and intensify the effects of opioids and depressants. The most commonly used are hydroxyzine, mainly to extend a supply of other drugs, as in medical use, and the above-mentioned ethanolamine and alkylamine-class first-generation antihistamines, which are – once again as in the 1950s – the subject of medical research into their anti-depressant properties.
For all of the above reasons, the use of medicinal scopolamine for recreational uses is also observed.
Analgesics
Analgesics (also known as "painkillers") are used to relieve pain (achieve analgesia). The word analgesic derives from Greek "αν-" (an-, "without") and "άλγος" (álgos, "pain"). Analgesic drugs act in various ways on the peripheral and central nervous systems; they include paracetamol (also known in the US as acetaminophen), the nonsteroidal anti-inflammatory drugs (NSAIDs) such as the salicylates (e.g. aspirin), and opioid drugs such as hydrocodone, codeine, heroin and oxycodone. Some further examples of the brand name prescription opiates and opioid analgesics that may be used recreationally include Vicodin, Lortab, Norco (hydrocodone), Avinza, Kapanol (morphine), Opana, Paramorphan (oxymorphone), Dilaudid, Palladone (hydromorphone), and OxyContin (oxycodone).
Tranquilizers
The following are examples of tranquilizers (GABAergics):
Barbiturates
Benzodiazepines
Ethanol (drinking alcohol; ethyl alcohol)
Nonbenzodiazepines
Others
carisoprodol (Soma)
chloral hydrate
diethyl ether
ethchlorvynol (Placidyl; "jelly-bellies")
gamma-butyrolactone (GBL, a prodrug to GHB)
gamma-hydroxybutyrate (GHB; G; Xyrem; "Liquid Ecstasy", "Fantasy")
glutethimide (Doriden)
kava (from Piper methysticum; contains kavalactones)
ketamine, a phencyclidine (PCP) analog
meprobamate (Miltown)
methaqualone (Sopor, Mandrax; "Quaaludes")
phenibut
propofol (Diprivan), a general anesthetic
theanine (found in Camellia sinensis, the tea plant)
valerian (from Valeriana officinalis)
Stimulants
Stimulants, also known as "psychostimulants", induce euphoria with improvements in mental and physical function, such as enhanced alertness, wakefulness, and locomotion. Stimulants are also occasionally called "uppers". Depressants or "downers", which decrease mental or physical function, are in stark contrast to stimulants and are considered to be their functional opposites.
Stimulants enhance the activity of the central and peripheral nervous systems. Common effects may include increased alertness, awareness, wakefulness, endurance, productivity, and motivation, arousal, locomotion, heart rate, and blood pressure, and a diminished desire for food and sleep.
Use of stimulants may cause the body to significantly reduce its production of endogenous compounds that fulfill similar functions. Once the effect of the ingested stimulant has worn off the user may feel depressed, lethargic, confused, and dysphoric. This is colloquially termed a "crash" and may promote reuse of the stimulant.
Amphetamines are a significant cause of drug-induced psychosis. Importantly, a 2019 meta-analysis found that 22% of people with amphetamine-induced psychosis transition to a later diagnosis of schizophrenia.
Examples of stimulants include:
Sympathomimetics (catecholaminergics)—e.g. amphetamine, methamphetamine, cocaine, methylphenidate, ephedrine, pseudoephedrine
Entactogens (serotonergics, primarily phenethylamines)—e.g. MDMA (which is also an amphetamine)
Eugeroics, e.g. modafinil
Others
arecoline (found in Areca catechu)
caffeine (found in Coffea spp.)
nicotine (found in Nicotiana spp.)
rauwolscine (found in Rauvolfia serpentina)
yohimbine (Procomil; a tryptamine alkaloid found in Pausinystalia johimbe)
Euphoriants
Alcohol: "Euphoria, the feeling of well-being, has been reported during the early (10–15 min) phase of alcohol consumption" (e.g., beer, wine or spirits)
Cannabis: Tetrahydrocannabinol, the main psychoactive ingredient in this plant, can have sedative and euphoric properties.
Catnip: Catnip contains a sedative known as nepetalactone that activates opioid receptors. In cats it elicits sniffing, licking, chewing, head shaking, rolling, and rubbing which are indicators of pleasure. In humans, however, catnip does not act as a euphoriant.
Stimulants: "Psychomotor stimulants produce locomotor activity (the subject becomes hyperactive), euphoria, (often expressed by excessive talking and garrulous behaviour), and anorexia. The amphetamines are the best known drugs in this category..."
MDMA: The "euphoriant drugs such as MDMA ('ecstasy') and MDEA ('eve')" are popular among young adults. MDMA "users experience short-term feelings of euphoria, rushes of energy and increased tactility" as well as interpersonal connectedness.
Opium: This "drug derived from the unripe seed-pods of the opium poppy…produces drowsiness and euphoria and reduces pain. Morphine and codeine are opium derivatives." Opioids have led to many deaths in the United States, particularly by causing respiratory depression.
Hallucinogens
Hallucinogens can be divided into three broad categories: psychedelics, dissociatives, and deliriants. They can cause subjective changes in perception, thought, emotion and consciousness. Unlike other psychoactive drugs such as stimulants and opioids, hallucinogens do not merely amplify familiar states of mind but also induce experiences that differ from those of ordinary consciousness, often compared to non-ordinary forms of consciousness such as trance, meditation, conversion experiences, and dreams.
Psychedelics, dissociatives, and deliriants have a long worldwide history of use within medicinal and religious traditions. They are used in shamanic forms of ritual healing and divination, in initiation rites, and in the religious rituals of syncretistic movements such as União do Vegetal, Santo Daime, Temple of the True Inner Light, and the Native American Church. When used in religious practice, psychedelic drugs, as well as other substances like tobacco, are referred to as entheogens.
Hallucinogen-induced psychosis occurs when psychosis persists despite no longer being intoxicated with the drug. It is estimated that 26% of people with hallucinogen-induced psychosis will transition to a diagnosis of schizophrenia. This percentage is less than the psychosis transition rate for cannabis (34%) but higher than that of amphetamines (22%).
Starting in the mid-20th century, psychedelic drugs have been the object of extensive attention in the Western world. They have been and are being explored as potential therapeutic agents in treating depression, post-traumatic stress disorder, obsessive–compulsive disorder, alcoholism, and opioid addiction. Yet the most popular, and at the same time most stigmatized, use of psychedelics in Western culture has been associated with the search for direct religious experience, enhanced creativity, personal development, and "mind expansion". The use of psychedelic drugs was a major element of the 1960s counterculture, where it became associated with various social movements and a general atmosphere of rebellion and strife between generations.
Deliriants
atropine (alkaloid found in plants of the family Solanaceae, including datura, deadly nightshade, henbane and mandrake)
dimenhydrinate (Dramamine, an antihistamine)
diphenhydramine (Benadryl, Unisom, Nytol)
hyoscyamine (alkaloid also found in the Solanaceae)
hyoscine hydrobromide (another Solanaceae alkaloid)
myristicin (found in Myristica fragrans ("Nutmeg"))
ibotenic acid (found in Amanita muscaria ("Fly Agaric"); prodrug to muscimol)
muscimol (also found in Amanita muscaria, a GABAergic)
Dissociatives
dextromethorphan (DXM; Robitussin, Delsym, etc.; "Dex", "Robo", "Cough Syrup", "DXM")
"Triple C's, Coricidin, Skittles" refer to a potentially fatal formulation containing both dextromethorphan and chlorpheniramine.
ketamine (K; Ketalar, Ketaset, Ketanest; "Ket", "Kit Kat", "Special-K", "Vitamin K", "Jet Fuel", "Horse Tranquilizer")
methoxetamine (Mex, Mket, Mexi)
phencyclidine (PCP; Sernyl; "Angel Dust", "Rocket Fuel", "Sherm", "Killer Weed", "Super Grass")
nitrous oxide (N2O; "NOS", "Laughing Gas", "Whippets", "Balloons")
Psychedelics
Phenethylamines
2C-B ("Nexus", "Venus", "Eros", "Bees")
2C-E ("Eternity", "Hummingbird")
2C-I ("Infinity")
2C-T-2 ("Rosy")
2C-T-7 ("Blue Mystic", "Lucky 7")
DOB
DOC
DOI
DOM ("Serenity, Tranquility, and Peace" ("STP"))
MDMA ("Ecstasy", "E", "Molly", "Mandy", "MD", "Crystal Love")
mescaline (found in peyote and Trichocereus macrogonus (Peruvian torch, San Pedro cactus))
Tryptamines (including ergolines and lysergamides)
5-MeO-DiPT ("Foxy", "Foxy Methoxy")
5-MeO-DMT (found in various plants like chacruna, jurema, vilca, and yopo)
alpha-methyltryptamine (αMT; Indopan; "Spirals")
bufotenin (secreted by Bufo alvarius, also found in various Amanita mushrooms)
N,N-dimethyltryptamine (N,N-DMT; DMT; "Dimitri", "Disneyland", "Spice"; found in large amounts in Psychotria and in D. cabrerana)
lysergic acid amide (LSA; ergine; found in morning glory and Hawaiian baby woodrose seeds)
lysergic acid diethylamide (LSD; L; Delysid; "Acid", "Sid", "Cid", "Lucy", "Sidney", "Blotters", "Droppers", "Sugar Cubes")
O-Acetylpsilocin (believed to be a prodrug of psilocin)
psilocin (found in psilocybin mushrooms)
psilocybin (also found in psilocybin mushrooms; prodrug to psilocin)
ibogaine (found in Tabernanthe iboga ("Iboga"))
Atypicals
salvinorin A (found in Salvia divinorum, a trans-neoclerodane diterpenoid ("Diviner's Sage", "Lady Salvia", "Salvinorin"))
tetrahydrocannabinol (found in cannabis)
Inhalants
Inhalants are gases, aerosols, or solvents that are breathed in and absorbed through the lungs. While some "inhalant" drugs are used for medical purposes, as in the case of nitrous oxide, a dental anesthetic, inhalants are used as recreational drugs for their intoxicating effect. Most inhalant drugs that are used non-medically are ingredients in household or industrial chemical products that are not intended to be concentrated and inhaled, including organic solvents (found in cleaning products, fast-drying glues, and nail polish removers), fuels (gasoline (petrol) and kerosene), and propellant gases such as Freon and compressed hydrofluorocarbons that are used in aerosol cans such as hairspray, whipped cream, and non-stick cooking spray. A small number of recreational inhalant drugs are pharmaceutical products that are used illicitly, such as anesthetics (ether and nitrous oxide) and volatile anti-angina drugs (alkyl nitrites, more commonly known as "poppers").
The most serious inhalant abuse occurs among children and teens who "[...] live on the streets completely without family ties". Inhalant users inhale vapor or aerosol propellant gases using plastic bags held over the mouth or by breathing from a solvent-soaked rag or an open container. The effects of inhalants range from an alcohol-like intoxication and intense euphoria to vivid hallucinations, depending on the substance and the dosage. Some inhalant users are injured due to the harmful effects of the solvents or gases, or due to other chemicals used in the products inhaled. As with any recreational drug, users can be injured due to dangerous behavior while they are intoxicated, such as driving under the influence. Computer cleaning dusters are dangerous to inhale, because the gases expand and cool rapidly upon being sprayed. In many cases, users have died from hypoxia (lack of oxygen), pneumonia, cardiac failure or arrest, or aspiration of vomit.
Examples include:
Chloroform
Ethyl chloride
Diethyl ether
Ethane and ethylene
Laughing gas (nitrous oxide)
Poppers (alkyl nitrites)
Solvents and propellants (including propane, butane, freon, gasoline, kerosene, toluene) along with the fumes of glues containing them
List of drugs which can be smoked
Plants:
black tar heroin
cannabis
datura and other Solanaceae (formerly smoked to treat asthma)
opium
Salvia divinorum
tobacco
possibly other plants (see the section below)
Substances (also not necessarily psychoactive plants smoked within them):
5-MeO-DMT
Bufotenine
crack cocaine
dimethyltryptamine (DMT)
DiPT
methamphetamine
Methaqualone
phencyclidine (PCP)
synthetic cannabinoids (see also: synthetic cannabis)
many others, including some prescription drugs
List of psychoactive plants, fungi, and animals
Minimally psychoactive plants which contain mainly caffeine and theobromine:
cocoa
coffee
guarana (caffeine in guarana is sometimes called guaranine)
kola
tea (caffeine in tea is sometimes called theine) – also contains theanine
yerba mate (caffeine in yerba mate is sometimes called mateine)
Most known psychoactive plants:
cannabis: cannabinoids
coca: cocaine
kava: kavalactones
khat: cathine and cathinone
nutmeg: myristicin and elemicin
opium poppy: morphine, codeine, and other opiates
Salvia divinorum: salvinorin A
tobacco: nicotine and beta-carboline alkaloids
Solanaceae plants—contain atropine, hyoscyamine, and scopolamine:
datura
deadly nightshade Atropa belladonna
henbane
mandrake (mandragora)
other Solanaceae
Cacti with mescaline:
Peyote
Trichocereus macrogonus, the Peruvian torch cactus, and in particular its variety T. macrogonus var. pachanoi, the San Pedro cactus
Other plants:
Areca catechu (see: betel and paan)—arecoline
Ayahuasca (for DMT)
Calea zacatechichi
damiana
ephedra: ephedrine
kratom: mitragynine, mitraphylline, 7-hydroxymitragynine, raubasine, and corynanthine
Morning glory and Hawaiian Baby Woodrose – lysergic acid amide (LSA, ergine)
Rauvolfia serpentina: rauwolscine
Silene capensis
Tabernanthe iboga ("Iboga")—ibogaine
valerian: valerian (the chemical with the same name)
various plants like chacruna, jurema, vilca, and yopo – 5-MeO-DMT
yohimbe (Pausinystalia johimbe): yohimbine and corynanthine
many others
Fungi:
various Amanita mushrooms: muscimol
Amanita muscaria: ibotenic acid and muscimol
Claviceps purpurea and other Clavicipitaceae: ergotamine (not psychoactive itself but used in synthesis of LSD)
psilocybin mushrooms: psilocybin and psilocin
Psychoactive animals:
hallucinogenic fish
psychoactive toads: Bufo alvarius (Colorado River toad or Sonoran Desert toad) contains bufotenin and 5-MeO-DMT
| Biology and health sciences | Drugs and pharmacology | null |
25977 | https://en.wikipedia.org/wiki/Ideal%20%28ring%20theory%29 | Ideal (ring theory) | In mathematics, and more specifically in ring theory, an ideal of a ring is a special subset of its elements. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3. Addition and subtraction of even numbers preserves evenness, and multiplying an even number by any integer (even or odd) results in an even number; these closure and absorption properties are the defining properties of an ideal. An ideal can be used to construct a quotient ring in a way similar to how, in group theory, a normal subgroup can be used to construct a quotient group.
Among the integers, the ideals correspond one-for-one with the non-negative integers: in this ring, every ideal is a principal ideal consisting of the multiples of a single non-negative number. However, in other rings, the ideals may not correspond directly to the ring elements, and certain properties of integers, when generalized to rings, attach more naturally to the ideals than to the elements of the ring. For instance, the prime ideals of a ring are analogous to prime numbers, and the Chinese remainder theorem can be generalized to ideals. There is a version of unique prime factorization for the ideals of a Dedekind domain (a type of ring important in number theory).
The related, but distinct, concept of an ideal in order theory is derived from the notion of ideal in ring theory. A fractional ideal is a generalization of an ideal, and the usual ideals are sometimes called integral ideals for clarity.
History
Ernst Kummer invented the concept of ideal numbers to serve as the "missing" factors in number rings in which unique factorization fails; here the word "ideal" is in the sense of existing in imagination only, in analogy with "ideal" objects in geometry such as points at infinity.
In 1876, Richard Dedekind replaced Kummer's undefined concept by concrete sets of numbers, sets that he called ideals, in the third edition of Dirichlet's book Vorlesungen über Zahlentheorie, to which Dedekind had added many supplements.
Later the notion was extended beyond number rings to the setting of polynomial rings and other commutative rings by David Hilbert and especially Emmy Noether.
Definitions
Given a ring R, a left ideal is a subset I of R that is a subgroup of the additive group of R and that "absorbs multiplication from the left by elements of R"; that is, I is a left ideal if it satisfies the following two conditions:
(I, +) is a subgroup of (R, +),
For every r in R and every x in I, the product rx is in I.
In other words, a left ideal is a left submodule of R, considered as a left module over itself.
A right ideal is defined similarly, with the condition "rx is in I" replaced by "xr is in I". A two-sided ideal is a left ideal that is also a right ideal.
If the ring is commutative, the three definitions are the same, and one talks simply of an ideal. In the non-commutative case, "ideal" is often used instead of "two-sided ideal".
If I is a left, right or two-sided ideal, the relation x ~ y if and only if
x − y is in I
is an equivalence relation on R, and the set of equivalence classes forms a left, right or bi module denoted R/I and called the quotient of R by I. (It is an instance of a congruence relation and is a generalization of modular arithmetic.)
If the ideal I is two-sided, R/I is a ring, and the function
R → R/I
that associates to each element x of R its equivalence class x + I is a surjective ring homomorphism that has the ideal I as its kernel. Conversely, the kernel of a ring homomorphism is a two-sided ideal. Therefore, the two-sided ideals are exactly the kernels of ring homomorphisms.
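The quotient construction is easiest to see in the integers. The following Python sketch (purely illustrative, not part of the article; the modulus n = 6 is an arbitrary choice) models Z/nZ by canonical representatives and checks, on a finite window of values, that the projection map respects addition and multiplication and has kernel nZ.

# Illustrative sketch: the quotient ring Z/nZ and the projection x -> x mod n.
n = 6  # arbitrary modulus; the two-sided ideal here is nZ

def project(x):
    # Canonical representative of the equivalence class x + nZ.
    return x % n

# The projection is a ring homomorphism: it respects addition and multiplication.
for a in range(-20, 20):
    for b in range(-20, 20):
        assert project(a + b) == (project(a) + project(b)) % n
        assert project(a * b) == (project(a) * project(b)) % n

# Its kernel is exactly the ideal nZ: the integers that project to the zero class.
kernel = [x for x in range(-30, 31) if project(x) == 0]
assert all(x % n == 0 for x in kernel)
print(sorted({project(x) for x in range(-20, 20)}))  # the classes 0, 1, ..., n-1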
Note on convention
By convention, a ring has the multiplicative identity. But some authors do not require a ring to have the multiplicative identity; i.e., for them, a ring is a rng. For a rng R, a left ideal I is a subrng with the additional property that rx is in I for every r in R and every x in I. (Right and two-sided ideals are defined similarly.) For a ring, an ideal (say a left ideal) is rarely a subring; since a subring shares the same multiplicative identity with the ambient ring R, if I were a subring, then 1 would lie in I, and hence r = r1 would lie in I for every r in R; i.e., I = R.
The notion of an ideal does not involve associativity; thus, an ideal is also defined for non-associative rings (often without the multiplicative identity) such as a Lie algebra.
Examples and properties
(For the sake of brevity, some results are stated only for left ideals but are usually also true for right ideals with appropriate notation changes.)
In a ring R, the set R itself forms a two-sided ideal of R called the unit ideal. It is often also denoted by (1) since it is precisely the two-sided ideal generated (see below) by the unity 1. Also, the set consisting of only the additive identity 0R forms a two-sided ideal called the zero ideal and is denoted by (0). Every (left, right or two-sided) ideal contains the zero ideal and is contained in the unit ideal.
A (left, right or two-sided) ideal that is not the unit ideal is called a proper ideal (as it is a proper subset). Note: a left ideal I is proper if and only if it does not contain a unit element, since if u in I is a unit element, then r = (ru⁻¹)u is in I for every r in R. Typically there are plenty of proper ideals. In fact, if R is a skew-field, then (0) and (1) are its only ideals and conversely: that is, a nonzero ring R is a skew-field if (0) and (1) are the only left (or right) ideals. (Proof: if x is a nonzero element, then the principal left ideal Rx (see below) is nonzero and thus Rx = R; i.e., yx = 1 for some nonzero y. Likewise, zy = 1 for some nonzero z. Then z = z(yx) = (zy)x = x, so x is invertible with two-sided inverse y.)
The even integers form an ideal in the ring Z of all integers, since the sum of any two even integers is even, and the product of any integer with an even integer is also even; this ideal is usually denoted by 2Z. More generally, the set of all integers divisible by a fixed integer n is an ideal denoted nZ. In fact, every non-zero ideal of the ring Z is generated by its smallest positive element, as a consequence of Euclidean division, so Z is a principal ideal domain. (A small computational sketch illustrating this appears after this list of examples.)
The set of all polynomials with real coefficients that are divisible by a fixed polynomial is an ideal in the ring of all polynomials with real coefficients.
Take a ring R and a positive integer n. For each 1 ≤ i ≤ n, the set of all n × n matrices with entries in R whose i-th row is zero is a right ideal in the ring of all n × n matrices with entries in R. It is not a left ideal. Similarly, for each 1 ≤ j ≤ n, the set of all n × n matrices whose j-th column is zero is a left ideal but not a right ideal.
The ring of all continuous functions f from the real numbers to the real numbers under pointwise multiplication contains the ideal of all continuous functions that vanish at a given fixed point. Another ideal in this ring is given by those functions that vanish for large enough arguments, i.e. those continuous functions f for which there exists a number L > 0 such that f(x) = 0 whenever |x| > L.
A ring is called a simple ring if it is nonzero and has no two-sided ideals other than the zero ideal and the unit ideal. Thus, a skew-field is simple and a simple commutative ring is a field. The matrix ring over a skew-field is a simple ring.
If f : R → S is a ring homomorphism, then the kernel ker(f) = f⁻¹(0) is a two-sided ideal of R. By definition, f(1) = 1, and thus if S is not the zero ring (so 1 ≠ 0 in S), then ker(f) is a proper ideal. More generally, for each left ideal I of S, the pre-image f⁻¹(I) is a left ideal. If I is a left ideal of R, then f(I) is a left ideal of the subring f(R) of S: unless f is surjective, f(I) need not be an ideal of S; see also Extension and contraction of an ideal below.
Ideal correspondence: Given a surjective ring homomorphism f : R → S, there is a bijective order-preserving correspondence between the left (resp. right, two-sided) ideals of R containing the kernel of f and the left (resp. right, two-sided) ideals of S: the correspondence is given by I ↦ f(I) and the pre-image J ↦ f⁻¹(J). Moreover, for commutative rings, this bijective correspondence restricts to prime ideals, maximal ideals, and radical ideals (see the Types of ideals section for the definitions of these ideals).
(For those who know modules) If M is a left R-module and S is a subset of M, then the annihilator of S is a left ideal. Given ideals A and B of a commutative ring R, the R-annihilator of (B + A)/A is an ideal of R called the ideal quotient of A by B and is denoted by (A : B); it is an instance of idealizer in commutative algebra.
Let (It), for t in a totally ordered index set T, be an ascending chain of left ideals in a ring R; i.e., Is ⊆ It whenever s ≤ t. Then the union of the It is a left ideal of R. (Note: this fact remains true even if R is without the unity 1.)
The above fact together with Zorn's lemma proves the following: if E is a possibly empty subset and I0 is a left ideal that is disjoint from E, then there is an ideal that is maximal among the ideals containing I0 and disjoint from E. (Again this is still valid if the ring R lacks the unity 1.) When R is nonzero, taking I0 = (0) and E = {1}, in particular, there exists a left ideal that is maximal among proper left ideals (often simply called a maximal left ideal); see Krull's theorem for more.
An arbitrary union of ideals need not be an ideal, but the following is still true: given a possibly empty subset X of R, there is the smallest left ideal containing X, called the left ideal generated by X and denoted by RX. Such an ideal exists since it is the intersection of all left ideals containing X. Equivalently, RX is the set of all the (finite) left R-linear combinations of elements of X over R:
RX = { r1x1 + ... + rnxn : n a natural number, each ri in R, each xi in X }
(since such a span is the smallest left ideal containing X.) A right (resp. two-sided) ideal generated by X is defined in the similar way. For "two-sided", one has to use linear combinations from both sides; i.e.,
RXR = { r1x1s1 + ... + rnxnsn : n a natural number, each ri, si in R, each xi in X }
A left (resp. right, two-sided) ideal generated by a single element x is called the principal left (resp. right, two-sided) ideal generated by x and is denoted by Rx (resp. xR, RxR). The principal two-sided ideal RxR is often also denoted by (x). If X = {x1, ..., xn} is a finite set, then RXR is also written as (x1, ..., xn).
There is a bijective correspondence between ideals and congruence relations (equivalence relations that respect the ring structure) on the ring: Given an ideal I of a ring R, let x ~ y if x − y is in I. Then ~ is a congruence relation on R. Conversely, given a congruence relation ~ on R, let I = {x : x ~ 0}. Then I is an ideal of R.
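The following Python sketch (illustrative only, referred to from the integer example above) computes the positive generator of the ideal of Z generated by a finite set of integers — which exists because every ideal of Z is principal — and checks the closure and absorption properties for the ideal of even integers on a small window of values.

from math import gcd
from functools import reduce

def generator(xs):
    # Positive generator of the ideal of Z generated by the integers in xs.
    # Since every ideal of Z is principal, this generator is gcd(x1, ..., xk):
    # Bezout's identity writes the gcd as an integer linear combination of the xi.
    return reduce(gcd, xs, 0)

print(generator([12, 18, 30]))  # 6: the ideal (12, 18, 30) equals 6Z
print(generator([4, 7]))        # 1: 4 and 7 together generate the unit ideal Z

# Closure under addition and absorption of multiplication for 2Z, the even integers,
# checked on a finite window of values.
evens = [x for x in range(-20, 21) if x % 2 == 0]
assert all((a + b) % 2 == 0 for a in evens for b in evens)
assert all((r * a) % 2 == 0 for r in range(-10, 11) for a in evens)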
Types of ideals
To simplify the description all rings are assumed to be commutative. The non-commutative case is discussed in detail in the respective articles.
Ideals are important because they appear as kernels of ring homomorphisms and allow one to define factor rings. Different types of ideals are studied because they can be used to construct different types of factor rings.
Maximal ideal: A proper ideal I is called a maximal ideal if there exists no other proper ideal J with I a proper subset of J. The factor ring of a maximal ideal is a simple ring in general and is a field for commutative rings.
Minimal ideal: A nonzero ideal is called minimal if it contains no other nonzero ideal.
Zero ideal: the ideal (0).
Unit ideal: the whole ring (being the ideal generated by 1).
Prime ideal: A proper ideal I is called a prime ideal if for any a and b in R, if ab is in I, then at least one of a and b is in I. The factor ring of a prime ideal is a prime ring in general and is an integral domain for commutative rings.
Radical ideal or semiprime ideal: A proper ideal I is called radical or semiprime if for any a in R, if aⁿ is in I for some n, then a is in I. The factor ring of a radical ideal is a semiprime ring for general rings, and is a reduced ring for commutative rings.
Primary ideal: An ideal I is called a primary ideal if for all a and b in R, if ab is in I, then at least one of a and bⁿ is in I for some natural number n. Every prime ideal is primary, but not conversely. A semiprime primary ideal is prime.
Principal ideal: An ideal generated by one element.
Finitely generated ideal: This type of ideal is finitely generated as a module.
Primitive ideal: A left primitive ideal is the annihilator of a simple left module.
Irreducible ideal: An ideal is said to be irreducible if it cannot be written as an intersection of ideals that properly contain it.
Comaximal ideals: Two ideals I, J are said to be comaximal if x + y = 1 for some x in I and y in J.
Regular ideal: This term has multiple uses. See the article for a list.
Nil ideal: An ideal is a nil ideal if each of its elements is nilpotent.
Nilpotent ideal: Some power of it is zero.
Parameter ideal: an ideal generated by a system of parameters.
Perfect ideal: A proper ideal I in a Noetherian ring is called a perfect ideal if its grade equals the projective dimension of the associated quotient ring. A perfect ideal is unmixed.
Unmixed ideal: A proper ideal I in a Noetherian ring R is called an unmixed ideal (in height) if the height of I is equal to the height of every associated prime of R/I. (This is stronger than saying that R/I is equidimensional.) | Mathematics | Abstract algebra | null
25987 | https://en.wikipedia.org/wiki/Rickets | Rickets | Rickets, scientific nomenclature: rachitis (from Greek, meaning 'in or of the spine'), is a condition that results in weak or soft bones in children and may have either dietary deficiency or genetic causes. Symptoms include bowed legs, stunted growth, bone pain, large forehead, and trouble sleeping. Complications may include bone deformities, bone pseudofractures and fractures, muscle spasms, or an abnormally curved spine. The analogous condition in adults is osteomalacia.
The most common cause of rickets is a vitamin D deficiency, although hereditary genetic forms also exist. This can result from eating a diet without enough vitamin D, dark skin, too little sun exposure, exclusive breastfeeding without vitamin D supplementation, celiac disease, and certain genetic conditions. Other factors may include not enough calcium or phosphorus. The underlying mechanism involves insufficient calcification of the growth plate. Diagnosis is generally based on blood tests finding a low calcium, low phosphorus, and a high alkaline phosphatase together with X-rays.
Prevention for exclusively breastfed babies is vitamin D supplements. Otherwise, treatment depends on the underlying cause. If due to a lack of vitamin D, treatment is usually with vitamin D and calcium. This generally results in improvements within a few weeks. Bone deformities may also improve over time. Occasionally surgery may be performed to correct bone deformities. Genetic forms of the disease typically require specialized treatment.
Rickets occurs relatively commonly in the Middle East, Africa, and Asia. It is generally uncommon in the United States and Europe, except among certain minority groups, but rates have been increasing among some populations. It begins in childhood, typically between the ages of 3 and 18 months. Rates of disease are equal in males and females. Cases of what is believed to have been rickets have been described since the 1st century, and the condition was widespread in the Roman Empire. The disease was common into the 20th century. Early treatments included the use of cod liver oil.
Signs and symptoms
Signs and symptoms of dietary deficiency rickets can include bone tenderness and a susceptibility to bone fractures, particularly greenstick fractures. Early skeletal deformities can arise in infants, such as soft, thinned skull bones – a condition known as craniotabes – which is the first sign of rickets; skull bossing and delayed closure of the fontanelles may also be present.
Young children may have bowed legs and thickened ankles and wrists; older children may have knock knees. Spinal curvatures of kyphoscoliosis or lumbar lordosis may be present. The pelvic bones may be deformed. A condition known as rachitic rosary can result from the thickening caused by nodules forming on the costochondral joints. This appears as a visible bump in the middle of each rib in a line on each side of the body, somewhat resembling a rosary, which gives rise to its name. The deformity of a pigeon chest may result in the presence of Harrison's groove.
Hypocalcemia, a low level of calcium in the blood, can result in tetany – uncontrolled muscle spasms. Dental problems can also arise.
An X-ray or radiograph of a patient with advanced rickets tends to present in a classic way: bowed legs (outward curve of the long bones of the legs) and a deformed chest. Changes in the skull also occur, causing a distinctive "square headed" appearance known as "caput quadratum". These deformities persist into adult life if not treated. Long-term consequences include permanent curvatures or disfiguration of the long bones, and a curved back.
Cause
Maternal deficiencies may be the cause of overt bone disease from before birth and impairment of bone quality after birth. The primary cause of congenital rickets is vitamin D deficiency in the mother's blood. Vitamin D ensures that serum phosphate and calcium levels are sufficient to facilitate the mineralization of bone. Congenital rickets may also be caused by other maternal diseases, including severe osteomalacia, untreated celiac disease, malabsorption, pre-eclampsia, and premature birth. Rickets in children is similar to osteoporosis in the elderly, with brittle bones. Pre-natal care includes checking vitamin levels and ensuring that any deficiencies are supplemented.
Exclusively breast-fed infants may require rickets prevention by vitamin D supplementation or an increased exposure to sunlight.
In sunny countries such as Nigeria, South Africa, and Bangladesh, there is sufficient endogenous vitamin D due to exposure to the sun. However, the disease occurs among older toddlers and children in these countries, which in these circumstances is attributed to low dietary calcium intakes due to a mainly cereal-based diet.
Those at higher risk for developing rickets include:
Breast-fed infants whose mothers are not exposed to sunlight
Breast-fed infants who are not exposed to sunlight
Breast-fed babies who are exposed to little sunlight
Adolescents, in particular when undergoing the pubertal growth spurt
Any child whose diet does not contain enough vitamin D or calcium
Diseases causing soft bones in infants, like hypophosphatasia or hypophosphatemia, can also lead to rickets.
Strontium is allied with calcium uptake into bones; at excessive dietary levels strontium has a rachitogenic (rickets-producing) action.
Sunlight
Sunlight, especially ultraviolet light, lets human skin cells convert vitamin D from an inactive to active state. In the absence of vitamin D, dietary calcium is not properly absorbed, resulting in hypocalcaemia, leading to skeletal and dental deformities and neuromuscular symptoms, e.g. hyperexcitability. Foods that contain vitamin D include butter, eggs, fish liver oils, margarine, fortified milk and juice, portabella and shiitake mushrooms, and oily fishes such as tuna, herring, and salmon. A rare X-linked dominant form exists called vitamin D-resistant rickets or X-linked hypophosphatemia.
Cases have been reported in Britain in recent years of rickets in children of many social backgrounds caused by insufficient production in the body of vitamin D because the sun's ultraviolet light was not reaching the skin due to use of strong sunblock, too much "covering up" in sunlight, or not getting out into the sun. Other cases have been reported among the children of some ethnic groups in which mothers avoid exposure to the sun for religious or cultural reasons, leading to a maternal shortage of vitamin D, and people with darker skin need more sunlight to maintain vitamin D levels.
Rickets had historically been a problem in London, especially during the Industrial Revolution. Persistent thick fog and heavy industrial smog permeating the city blocked out significant amounts of sunlight, to such an extent that up to 80 percent of children at one time had varying degrees of rickets. It is sometimes known as "the English Disease" in some foreign languages, for example in German, Dutch, Hungarian, and Swedish.
Skin color theory
Rickets is often a result of vitamin D3 deficiency. The correlation between human skin color and latitude is thought to be the result of positive selection to varying levels of solar ultraviolet radiation. Northern latitudes have selection for lighter skin that allows UV rays to produce vitamin D from 7-dehydrocholesterol. Conversely, latitudes near the equator have selection for darker skin that can block the majority of UV radiation to protect from toxic levels of vitamin D, as well as skin cancer.
An anecdote often cited to support this hypothesis is that Arctic populations whose skin is relatively darker for their latitude, such as the Inuit, have a diet that is historically rich in vitamin D. Since these people acquire vitamin D through their diet, there is not a positive selective force to synthesize vitamin D from sunlight.
Environment mismatch: vitamin D deficiency arises from a mismatch between an individual's previous and current environment. This risk of mismatch increases with advances in transportation methods and increases in urban population size at high latitudes.
Similar to the environmental mismatch when dark-skinned people live at high latitudes, rickets can also occur in religious communities that require long garments with hoods and veils. These hoods and veils act as sunlight barriers that prevent individuals from synthesizing vitamin D naturally from the sun.
In a study by Mithal et al., vitamin D insufficiency in various countries was measured by lower serum 25-hydroxyvitamin D (25(OH)D), an indicator of vitamin D insufficiency that can be easily measured. These percentages should be regarded as relative vitamin D levels, and not as predictive evidence for the development of rickets.
Asian immigrants living in Europe have an increased risk for vitamin D deficiency. Vitamin D insufficiency was found in 40% of non-Western immigrants in the Netherlands, and in more than 80% of Turkish and Moroccan immigrants.
The Middle East, despite high rates of sun exposure, has the highest rates of rickets worldwide. This can be explained by limited sun exposure due to cultural practices and a lack of vitamin D supplementation for breast-feeding women. Up to 70% and 80% of adolescent girls in Iran and Saudi Arabia, respectively, have vitamin D insufficiency. Socioeconomic factors that limit a vitamin D rich diet also play a role.
In the United States, vitamin D insufficiency varies dramatically by ethnicity. Among females aged 70 years and older, the prevalence of low serum 25(OH) D levels was 28.5% for non-Hispanic whites, 55% for Mexican Americans, and 68% for non-Hispanic blacks. Among males, the prevalence was 23%, 45%, and 58%, respectively.
A systematic review published in the Cochrane Library looked at children up to three years old in Turkey and China and found a beneficial association between vitamin D and the prevention of rickets. In Turkey, children receiving vitamin D had only a 4% chance of developing rickets compared to children who received no medical intervention. In China, a combination of vitamin D, calcium and nutritional counseling was linked to a decreased risk of rickets.
Parents can supplement their nutritional intake with vitamin D enhanced beverages if they feel their child is at risk for vitamin D deficiency.
A recent review links rickets to exclusive consumption of Neocate baby formula.
Diagnosis
Rickets may be diagnosed with the help of:
Blood tests:
Serum calcium may show low levels of calcium, serum phosphorus may be low, and serum alkaline phosphatase may be high. Bone X-rays may show loss of calcium from bones or changes in the shape or structure of the bones; this can show enlarged limbs and joints.
A bone density scan may be undertaken.
Radiography typically show widening of the zones of provisional calcification of the metaphyses secondary to unmineralized osteoid. Cupping, fraying, and splaying of metaphyses typically appears with growth and continued weight bearing. These changes are seen predominantly at sites of rapid growth, including the proximal humerus, distal radius, distal femur and both the proximal and the distal tibia. Therefore, a skeletal survey for rickets can be accomplished with anteroposterior radiographs of the knees, wrists, and ankles.
In veterinary practice, rickets, osteodystrophy and mineral metabolism disorders are diagnosed using an ultrasound echosteometer designed by M.M. Orlov and A.V. Savinkov.
Types
Vitamin D-related rickets
Vitamin D deficiency
Vitamin D-dependent rickets (VDDR)
Type 1: insufficiency in activation
VDDR1A: 25-Hydroxyvitamin D3 1-alpha-hydroxylase deficiency
VDDR1B: CYP2R1 deficiency
Type 2: resistance to calcitriol
VDDR2A: calcitriol receptor mutation
VDDR2B: unknown nuclear ribonucleoprotein interfering with signal transduction
Type 3: excessive inactivation (CYP3A4 mutation, dominant)
Hypocalcemia-related rickets
Hypocalcemia
Chronic kidney failure (CKD-BMD)
Hypophosphatemia-related rickets
Congenital
Vitamin D-resistant rickets
Autosomal dominant hypophosphatemic rickets (ADHR)
Autosomal recessive hypophosphatemic rickets (ARHR)
Hypophosphatemia (typically secondary to malabsorption)
Fanconi's syndrome
Secondary to other diseases
Tumor-induced osteomalacia
McCune–Albright syndrome
Epidermal nevus syndrome
Dent's disease
Differential diagnosis
Osteochondrodysplasias, also known as genetic bone diseases, may mimic the clinical picture of rickets with regard to the features of bone deformities. The radiologic picture and the laboratory findings of serum calcium, phosphate and alkaline phosphatase are important differentiating factors. Blount's disease is an important differential diagnosis because it causes knee deformities in a similar fashion to rickets, namely bow legs or genu varum. Infants with rickets can have bone fractures. This sometimes leads to child abuse allegations. This issue appears to be more common among exclusively breast-fed infants of black mothers, in winter in temperate climates, who have poor nutrition and receive no vitamin D supplementation. People with darker skin produce less vitamin D than those with lighter skin, for the same amount of sunlight.
Treatment
Diet and sunlight
Treatment involves increasing dietary intake of calcium, phosphates and vitamin D. Exposure to ultraviolet B light (most easily obtained when the sun is highest in the sky), cod liver oil, halibut-liver oil, and viosterol are all sources of vitamin D.
A sufficient amount of ultraviolet B light in sunlight each day and adequate supplies of calcium and phosphorus in the diet can prevent rickets. Darker-skinned people need to be exposed longer to the ultraviolet rays. Replacing vitamin D through ultraviolet light therapy and medicine has been shown to correct rickets.
Recommendations are for 400 international units (IU) of vitamin D a day for infants and children. Children who do not get adequate amounts of vitamin D are at increased risk of rickets. Vitamin D is essential for allowing the body to uptake calcium for use in proper bone calcification and maintenance.
Supplementation
Sufficient vitamin D levels can also be achieved through dietary supplementation and/or exposure to sunlight. Vitamin D3 (cholecalciferol) is the preferred form since it is more readily absorbed than vitamin D2. Most dermatologists recommend vitamin D supplementation as an alternative to unprotected ultraviolet exposure due to the increased risk of skin cancer associated with sun exposure. Endogenous production with full body exposure to sunlight is approximately 250 μg (10,000 IU) per day.
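For orientation, vitamin D amounts are converted between micrograms and international units at 40 IU per microgram; a quick arithmetic check of the figures above (an illustration, not part of the article's sources):

IU_PER_MICROGRAM = 40  # standard conversion factor for vitamin D

def micrograms_to_iu(micrograms):
    return micrograms * IU_PER_MICROGRAM

print(micrograms_to_iu(10))   # 400 IU, the recommended daily amount for infants and children
print(micrograms_to_iu(250))  # 10000 IU, the cited estimate for full-body sun exposure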
According to the American Academy of Pediatrics (AAP), all infants, including those who are exclusively breast-fed, may need vitamin D supplementation until they start drinking sufficient amounts of vitamin D-fortified milk or formula each day.
Despite this recommendation, a recent Cochrane systematic review has found limited evidence that vitamin D plus calcium, or calcium alone compared to vitamin D improves healing in children with nutritional rickets.
Surgery
Occasionally surgery is needed to correct severe and persistent deformities of the lower limbs, especially around the knees namely genu varum and genu valgum. Surgical correction of rachitic deformities can be achieved through osteotomies or guided growth surgery. Guided growth surgery has almost replaced the use of corrective osteotomies. The functional results of guided growth surgery in children with rickets are satisfactory. While bone osteotomies work through acute/immediate correction of the limb deformity, guided growth works through gradual correction.
Epidemiology
In developed countries, rickets is a rare disease (incidence of less than 1 in 200,000). Recently, cases of rickets have been reported among children who are not fed enough vitamin D.
In 2013/2014 there were fewer than 700 cases in England. In 2019 the number of cases hospitalised was said to be the highest in 50 years.
Rickets occurs relatively commonly in the Middle East, Africa, and Asia.
History
Greek physician Soranus of Ephesus, one of the chief representatives of the Methodic school of medicine who practiced in Alexandria and subsequently in Rome, reported deformation of the bones in infants as early as the first and second centuries AD. Rickets was not defined as a specific medical condition until 1645, when an English physician Daniel Whistler gave the earliest known description of the disease.
In 1650 a treatise on rickets was published by Francis Glisson, a physician at Caius College, Cambridge, who said it had first appeared about 30 years previously in the counties of Dorset and Somerset.
In 1857, John Snow suggested rickets, then widespread in Britain, was being caused by the adulteration of bakers' bread with alum.
German pediatrician Kurt Huldschinsky successfully demonstrated in the winter of 1918–1919 how rickets could be treated with ultraviolet lamps.
Between 1918 and 1920, the role of diet in the development of rickets was determined by Edward Mellanby.
In 1923, American physician Harry Steenbock demonstrated that irradiation by ultraviolet light increased the vitamin D content of foods and other organic materials. Steenbock's irradiation technique was used for foodstuffs, but most memorably for milk.
By 1945, rickets had all but been eliminated in the United States.
However, beginning around 2003, rickets reemerged as an issue in the US for some populations, prompting the American Academy of Pediatrics to recommend that all infants have a vitamin D intake of 200 IU per day.
Etymology
The word rickets may derive from an Old English word meaning 'to twist', although because this is conjectural, several major dictionaries simply say "origin unknown". The name rickets is plural in form but usually singular in construction. The Greek word rachitis (meaning 'in or of the spine') was later adopted as the scientific term for rickets, due chiefly to the words' similarity in sound.
| Biology and health sciences | Health and fitness: General | Health |
25989 | https://en.wikipedia.org/wiki/RGB%20color%20model | RGB color model | The RGB color model is an additive color model in which the red, green, and blue primary colors of light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.
The main purpose of the RGB color model is for the sensing, representation, and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography and colored lighting. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors.
RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual red, green, and blue levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management.
Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, OLED, quantum dots, etc.), computer and mobile phone displays, video projectors, multicolor LED displays and large screens such as the Jumbotron. Color printers, on the other hand, are not RGB devices, but subtractive color devices typically using the CMYK color model.
Additive colors
To form a color with RGB, three light beams (one red, one green, and one blue) must be superimposed (for example by emission from a black screen or by reflection from a white screen). Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture.
The RGB color model is additive in the sense that if light beams of differing color (frequency) are superposed in space, their light spectra add up, wavelength for wavelength, to make up a resulting, total spectrum. This is essentially opposite to the subtractive color model, particularly the CMY color model, which applies to paints, inks, dyes and other substances whose color depends on reflecting certain components (frequencies) of the light under which we see them.
In the additive model, if the resulting spectrum, e.g. of superposing three colors, is flat, white color is perceived by the human eye upon direct incidence on the retina. This is in stark contrast to the subtractive model, where the perceived resulting spectrum is what reflecting surfaces, such as dyed surfaces, emit. A dye filters out all colors but its own; two blended dyes filter out all colors but the common color component between them, e.g. green as the common component between yellow and cyan, red as the common component between magenta and yellow, and blue-violet as the common component between magenta and cyan. There is no common color component among magenta, cyan and yellow, thus rendering a spectrum of zero intensity: black.
Zero intensity for each component gives the darkest color (no light, considered the black), and full intensity of each gives a white; the quality of this white depends on the nature of the primary light sources, but if they are properly balanced, the result is a neutral white matching the system's white point. When the intensities for all the components are the same, the result is a shade of gray, darker, or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference of the strongest and weakest of the intensities of the primary colors employed.
When one of the components has the strongest intensity, the color is a hue near this primary color (red-ish, green-ish, or blue-ish), and when two components have the same strongest intensity, then the color is a hue of a secondary color (a shade of cyan, magenta, or yellow). A secondary color is formed by the sum of two primary colors of equal intensity: cyan is green+blue, magenta is blue+red, and yellow is red+green. Every secondary color is the complement of one primary color: cyan complements red, magenta complements green, and yellow complements blue. When all the primary colors are mixed in equal intensities, the result is white.
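A minimal sketch of additive mixing in Python (illustrative only; the 8-bit channel values and the clipping at 255 are assumptions of the example, not part of the model itself) reproduces the secondary colors described above.

def mix_additive(*colors, maximum=255):
    # Sum each channel of the superposed light sources, clipping at full intensity.
    return tuple(min(sum(channel), maximum) for channel in zip(*colors))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_additive(RED, GREEN))        # (255, 255, 0): yellow
print(mix_additive(GREEN, BLUE))       # (0, 255, 255): cyan
print(mix_additive(RED, BLUE))         # (255, 0, 255): magenta
print(mix_additive(RED, GREEN, BLUE))  # (255, 255, 255): white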
The RGB color model itself does not define what is meant by red, green, and blue colorimetrically, and so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green, and blue primaries are defined, the color model then becomes an absolute color space, such as sRGB or Adobe RGB.
Physical principles for the choice of red, green, and blue
The choice of primary colors is related to the physiology of the human eye; good primaries are stimuli that maximize the difference between the responses of the cone cells of the human retina to light of different wavelengths, and that thereby make a large color triangle.
The normal three kinds of light-sensitive photoreceptor cells in the human eye (cone cells) respond most to yellow (long wavelength or L), green (medium or M), and violet (short or S) light (peak wavelengths near 570 nm, 540 nm and 440 nm, respectively). The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive (overall) to yellowish-green light and to differences between hues in the green-to-orange region.
As an example, suppose that light in the orange range of wavelengths (approximately 577 nm to 597 nm) enters the eye and strikes the retina. Light of these wavelengths would activate both the medium and long wavelength cones of the retina, but not equally—the long-wavelength cells will respond more. The difference in the response can be detected by the brain, and this difference is the basis of our perception of orange. Thus, the orange appearance of an object results from light from the object entering our eye and stimulating the different cones simultaneously but to different degrees.
Use of the three primary colors is not sufficient to reproduce all colors; only colors within the color triangle defined by the chromaticities of the primaries can be reproduced by additive mixing of non-negative amounts of those colors of light.
History of RGB color model theory and usage
The RGB color model is based on the Young–Helmholtz theory of trichromatic color vision, developed by Thomas Young and Hermann von Helmholtz in the early to mid-nineteenth century, and on James Clerk Maxwell's color triangle that elaborated that theory.
Photography
The first experiments with RGB in early color photography were made in 1861 by Maxwell himself, and involved the process of combining three color-filtered separate takes. To reproduce the color photograph, three matching projections over a screen in a dark room were necessary.
The additive RGB model and variants such as orange–green–violet were also used in the Autochrome Lumière color plates and other screen-plate technologies such as the Joly color screen and the Paget process in the early twentieth century. Color photography by taking three separate plates was used by other pioneers, such as the Russian Sergey Prokudin-Gorsky in the period 1909 through 1915. Such methods lasted until about 1960 using the expensive and extremely complex tri-color carbro Autotype process.
When employed, the reproduction of prints from three-plate photos was done by dyes or pigments using the complementary CMY model, by simply using the negative plates of the filtered takes: reverse red gives the cyan plate, and so on.
Television
Before the development of practical electronic TV, there were patents on mechanically scanned color systems as early as 1889 in Russia. The color TV pioneer John Logie Baird demonstrated the world's first RGB color transmission in 1928, and also the world's first color broadcast in 1938, in London. In his experiments, scanning and display were done mechanically by spinning colorized wheels.
The Columbia Broadcasting System (CBS) began an experimental RGB field-sequential color system in 1940. Images were scanned electrically, but the system still used a moving part: the transparent RGB color wheel rotating at above 1,200 rpm in synchronism with the vertical scan. The camera and the cathode-ray tube (CRT) were both monochromatic. Color was provided by color wheels in the camera and the receiver. More recently, color wheels have been used in field-sequential projection TV receivers based on the Texas Instruments monochrome DLP imager.
The modern RGB shadow mask technology for color CRT displays was patented by Werner Flechsig in Germany in 1938.
Personal computers
Personal computers of the late 1970s and early 1980s, such as the Apple II and VIC-20, use composite video. The Commodore 64 and the Atari 8-bit computers use S-Video derivatives. IBM introduced a 16-color scheme (4 bits—1 bit each for red, green, blue, and intensity) with the Color Graphics Adapter (CGA) for its IBM PC in 1981, later improved with the Enhanced Graphics Adapter (EGA) in 1984. The first manufacturer of a truecolor graphics card for PCs (the TARGA) was Truevision in 1987, but it was not until the arrival of the Video Graphics Array (VGA) in 1987 that RGB became popular, mainly due to the analog signals in the connection between the adapter and the monitor which allowed a very wide range of RGB colors. In fact, true color had to wait a few more years, because the original VGA cards were palette-driven just like EGA, although with more freedom than EGA; but because the VGA connectors were analog, later variants of VGA (made by various manufacturers under the informal name Super VGA) eventually added true color. In 1992, magazines heavily advertised true-color Super VGA hardware.
RGB devices
RGB and displays
One common application of the RGB color model is the display of colors on a cathode-ray tube (CRT), liquid-crystal display (LCD), plasma display, or organic light emitting diode (OLED) display such as a television, a computer's monitor, or a large scale screen. Each pixel on the screen is built by driving three small and very close but still separated RGB light sources. At common viewing distance, the separate sources are indistinguishable, and the eye perceives a given solid color. All the pixels together, arranged over the rectangular screen surface, compose the color image.
During digital image processing each pixel can be represented in the computer memory or interface hardware (for example, a graphics card) as binary values for the red, green, and blue color components. When properly managed, these values are converted into intensities or voltages via gamma correction to correct the inherent nonlinearity of some devices, such that the intended intensities are reproduced on the display.
The Quattron released by Sharp uses RGB color and adds yellow as a sub-pixel, supposedly allowing an increase in the number of available colors.
Video electronics
RGB is also the term referring to a type of component video signal used in the video electronics industry. It consists of three signals—red, green, and blue—carried on three separate cables/pins. RGB signal formats are often based on modified versions of the RS-170 and RS-343 standards for monochrome video. This type of video signal is widely used in Europe since it is the best quality signal that can be carried on the standard SCART connector. This signal is known as RGBS (4 BNC/RCA terminated cables exist as well), but it is directly compatible with RGBHV used for computer monitors (usually carried on 15-pin cables terminated with 15-pin D-sub or 5 BNC connectors), which carries separate horizontal and vertical sync signals.
Outside Europe, RGB is not very popular as a video signal format; S-Video takes that spot in most non-European regions. However, almost all computer monitors around the world use RGB.
Video framebuffer
A framebuffer is a digital device for computers which stores data in the so-called video memory (comprising an array of Video RAM or similar chips). This data goes either to three digital-to-analog converters (DACs) (for analog monitors), one per primary color or directly to digital monitors. Driven by software, the CPU (or other specialized chips) write the appropriate bytes into the video memory to define the image. Modern systems encode pixel color values by devoting 8 bits to each of the R, G, and B components. RGB information can be either carried directly by the pixel bits themselves or provided by a separate color look-up table (CLUT) if indexed color graphic modes are used.
A CLUT is a specialized RAM that stores R, G, and B values that define specific colors. Each color has its own address (index)—consider it as a descriptive reference number that provides that specific color when the image needs it. The content of the CLUT is much like a palette of colors. Image data that uses indexed color specifies addresses within the CLUT to provide the required R, G, and B values for each specific pixel, one pixel at a time. Of course, before displaying, the CLUT has to be loaded with R, G, and B values that define the palette of colors required for each image to be rendered. Some video applications store such palettes in PAL files (Age of Empires game, for example, uses over half-a-dozen) and can combine CLUTs on screen.
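A minimal sketch of indexed color with a CLUT (the palette values and tiny image below are hypothetical, for illustration only): each pixel stores a small index, and the table supplies the full R, G, B triplet at display time.

# Hypothetical 4-entry color look-up table (CLUT): index -> (R, G, B)
clut = [
    (0, 0, 0),        # 0: black
    (255, 255, 255),  # 1: white
    (200, 30, 30),    # 2: a red tone
    (30, 60, 200),    # 3: a blue tone
]

# An indexed-color "image": each pixel is only an index into the CLUT.
indexed_image = [
    [0, 1, 1, 0],
    [2, 3, 3, 2],
]

# Expanding to full RGB values, as a display would do one pixel at a time.
rgb_image = [[clut[index] for index in row] for row in indexed_image]
print(rgb_image[1][0])  # (200, 30, 30): the pixel at row 1, column 0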
RGB24 and RGB32
This indirect scheme restricts the number of colors available in an image to the number of entries in the CLUT—typically 256—although each entry in an RGB24 CLUT holds 8 bits for each of the R, G, and B primaries, so any of the 16,777,216 possible colors can be assigned to an entry. The advantage is that an indexed-color image file can be significantly smaller than it would be with 8 bits per pixel for each primary.
Modern storage, however, is far less costly, greatly reducing the need to minimize image file size. By using an appropriate combination of red, green, and blue intensities, many colors can be displayed. Current typical display adapters use up to 24 bits of information for each pixel: 8 bits per component multiplied by three components (see the Numeric representations section below), with each primary value of 8 bits taking values 0–255. With this system, 16,777,216 (256³ or 2²⁴) discrete combinations of R, G, and B values are allowed, providing millions of different (though not necessarily distinguishable) hue, saturation, and lightness shades. Increased shading has been implemented in various ways, some formats such as .png and .tga files among others using a fourth grayscale color channel as a masking layer, often called RGB32.
For images with a modest range of brightnesses from the darkest to the lightest, 8 bits per primary color provides good-quality images, but extreme images require more bits per primary color as well as the advanced display technology. For more information see High Dynamic Range (HDR) imaging.
Nonlinearity
In classic CRT devices, the brightness of a given point over the fluorescent screen due to the impact of accelerated electrons is not proportional to the voltages applied to the electron gun control grids, but to an expansive function of that voltage. The amount of this deviation is known as its gamma value (γ), the argument for a power law function, which closely describes this behavior. A linear response is given by a gamma value of 1.0, but actual CRT nonlinearities have a gamma value around 2.0 to 2.5.
Similarly, the intensity of the output on TV and computer display devices is not directly proportional to the R, G, and B applied electric signals (or file data values which drive them through digital-to-analog converters). On a typical standard 2.2-gamma CRT display, an input intensity RGB value of (0.5, 0.5, 0.5) only outputs about 22% of full brightness (1.0, 1.0, 1.0), instead of 50%. To obtain the correct response, a gamma correction is used in encoding the image data, and possibly further corrections as part of the color calibration process of the device. Gamma affects black-and-white TV as well as color. In standard color TV, broadcast signals are gamma corrected.
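A short sketch of the display power law and its correction (assuming the 2.2 gamma value used in the example above; illustrative only):

def display_output(signal, gamma=2.2):
    # Relative light output of an idealized gamma-2.2 display for a normalized signal (0..1).
    return signal ** gamma

def gamma_encode(linear, gamma=2.2):
    # Pre-distort a linear intensity so the display's power law reproduces it.
    return linear ** (1.0 / gamma)

print(round(display_output(0.5), 3))                # ~0.218: a 0.5 signal gives about 22% brightness
print(round(display_output(gamma_encode(0.5)), 3))  # 0.5: gamma-encoding first restores the intended 50%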
RGB and cameras
In color television and video cameras manufactured before the 1990s, the incoming light was separated by prisms and filters into the three RGB primary colors feeding each color into a separate video camera tube (or pickup tube). These tubes are a type of cathode-ray tube, not to be confused with that of CRT displays.
With the arrival of commercially viable charge-coupled device (CCD) technology in the 1980s, first, the pickup tubes were replaced with this kind of sensor. Later, higher scale integration electronics was applied (mainly by Sony), simplifying and even removing the intermediate optics, thereby reducing the size of home video cameras and eventually leading to the development of full camcorders. Current webcams and mobile phones with cameras are the most miniaturized commercial forms of such technology.
Photographic digital cameras that use a CMOS or CCD image sensor often operate with some variation of the RGB model. In a Bayer filter arrangement, green is given twice as many detectors as red and blue (ratio 1:2:1) in order to achieve higher luminance resolution than chrominance resolution. The sensor has a grid of red, green, and blue detectors arranged so that the first row is RGRGRGRG, the next is GBGBGBGB, and that sequence is repeated in subsequent rows. For every channel, missing pixels are obtained by interpolation in the demosaicing process to build up the complete image. Also, other processes used to be applied in order to map the camera RGB measurements into a standard color space as sRGB.
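A small sketch of the Bayer arrangement described above (rows alternating RGRG... and GBGB...; the grid size is arbitrary), confirming the 1:2:1 ratio of red, green, and blue sampling sites:

def bayer_pattern(rows, cols):
    # Color of each sensor site: even rows alternate R, G; odd rows alternate G, B.
    pattern = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if r % 2 == 0:
                row.append('R' if c % 2 == 0 else 'G')
            else:
                row.append('G' if c % 2 == 0 else 'B')
        pattern.append(row)
    return pattern

grid = bayer_pattern(4, 4)
counts = {channel: sum(row.count(channel) for row in grid) for channel in 'RGB'}
print(counts)  # {'R': 4, 'G': 8, 'B': 4}: green has twice as many detectors as red or blue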
RGB and scanners
In computing, an image scanner is a device that optically scans images (printed text, handwriting, or an object) and converts them to a digital image which is transferred to a computer. Among other formats, flat, drum and film scanners exist, and most of them support RGB color. They can be considered the successors of early telephotography input devices, which were able to send consecutive scan lines as analog amplitude modulation signals through standard telephonic lines to appropriate receivers; such systems were in use in the press from the 1920s to the mid-1990s. Color telephotographs were sent as three separated RGB filtered images consecutively.
Currently available scanners typically use CCD or contact image sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. Early color film scanners used a halogen lamp and a three-color filter wheel, so three exposures were needed to scan a single color image. Due to heating problems, the worst of them being the potential destruction of the scanned film, this technology was later replaced by non-heating light sources such as color LEDs.
Numeric representations
A color in the RGB color model is described by indicating how much of each of the red, green, and blue is included. The color is expressed as an RGB triplet (r,g,b), each component of which can vary from zero to a defined maximum value. If all the components are at zero the result is black; if all are at maximum, the result is the brightest representable white.
These ranges may be quantified in several different ways:
From 0 to 1, with any fractional value in between. This representation is used in theoretical analyses, and in systems that use floating point representations.
Each color component value can also be written as a percentage, from 0% to 100%.
In computers, the component values are often stored as unsigned integer numbers in the range 0 to 255, the range that a single 8-bit byte can offer. These are often represented as either decimal or hexadecimal numbers.
High-end digital image equipment is often able to deal with larger integer ranges for each primary color, such as 0..1023 (10 bits), 0..65535 (16 bits) or even larger, by extending the 24 bits (three 8-bit values) to 32-bit, 48-bit, or 64-bit units (more or less independently of the particular computer's word size).
For example, brightest saturated red is written in the different RGB notations as:
{| class="wikitable"
! Notation
! RGB triplet
|-
| Arithmetic
| (1.0, 0.0, 0.0)
|-
| Percentage
| (100%, 0%, 0%)
|-
| Digital 8-bit per channel
| (255, 0, 0) #FF0000 (hexadecimal)
|-
| Digital 12-bit per channel
| (4095, 0, 0) #FFF000000
|-
| Digital 16-bit per channel
| (65535, 0, 0) #FFFF00000000
|-
| Digital 24-bit per channel
| (16777215, 0, 0) #FFFFFF000000000000
|-
| Digital 32-bit per channel
| (4294967295, 0, 0) #FFFFFFFF0000000000000000
|}
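A brief Python sketch (not from the source; the helper name is invented) showing how one color moves between the arithmetic, percentage, and 8-bit digital notations tabulated above:

def to_notations(r, g, b):
    # r, g, b are arithmetic component values in the 0.0–1.0 range.
    percentage = tuple(f"{100 * v:g}%" for v in (r, g, b))
    digital8 = tuple(round(255 * v) for v in (r, g, b))
    hex_code = "#{:02X}{:02X}{:02X}".format(*digital8)
    return percentage, digital8, hex_code

print(to_notations(1.0, 0.0, 0.0))
# (('100%', '0%', '0%'), (255, 0, 0), '#FF0000')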
In many environments, the component values within the ranges are not managed as linear (that is, the numbers are nonlinearly related to the intensities that they represent), as in digital cameras and TV broadcasting and receiving due to gamma correction, for example. Linear and nonlinear transformations are often dealt with via digital image processing. Representations with only 8 bits per component are considered sufficient if gamma correction is used.
Following is the mathematical relationship between RGB space and HSI space (hue, saturation, and intensity: HSI color space), with R, G, and B normalized to the range 0–1:
I = (R + G + B) / 3,  S = 1 − min(R, G, B) / I,  H = cos⁻¹( [(R − G) + (R − B)] / [2·√((R − G)² + (R − B)(G − B))] ).
If B > G, then H = 360° − H.
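The relation can be written as a small Python function (an illustrative sketch of the textbook HSI formulas above, not an excerpt from any particular library):

import math

def rgb_to_hsi(r, g, b):
    # r, g, b in 0.0–1.0; returns hue in degrees, saturation and intensity in 0.0–1.0.
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    denom = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if denom == 0:
        h = 0.0  # hue is undefined for grays; 0 is used here by convention
    else:
        cos_h = ((r - g) + (r - b)) / (2.0 * denom)
        h = math.degrees(math.acos(max(-1.0, min(1.0, cos_h))))
        if b > g:
            h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # (0.0, 1.0, 0.333...) for pure red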
Color depth
The RGB color model is one of the most common ways to encode color in computing, and several different digital representations are in use. The main characteristic of all of them is the quantization of the possible values per component (technically a sample) by using only integer numbers within some range, usually from 0 to some power of two minus one (2n − 1) to fit them into some bit groupings. Encodings of 1, 2, 4, 5, 8, and 16 bits per color are commonly found; the total number of bits used for an RGB color is typically called the color depth.
Geometric representation
Since colors are usually defined by three components, not only in the RGB model but also in other color models such as CIELAB and Y'UV, among others, a three-dimensional volume is described by treating the component values as ordinary Cartesian coordinates in a Euclidean space. For the RGB model, this is represented by a cube using non-negative values within a 0–1 range, assigning black to the origin at the vertex (0, 0, 0), and with increasing intensity values running along the three axes up to white at the vertex (1, 1, 1), diagonally opposite black.
An RGB triplet (r,g,b) represents the three-dimensional coordinate of the point of the given color within the cube or its faces or along its edges. This approach allows computations of the color similarity of two given RGB colors by simply calculating the distance between them: the shorter the distance, the higher the similarity. Out-of-gamut computations can also be performed this way.
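For instance (a minimal Python sketch assuming components in the 0–1 range; not part of the original text), the similarity measure described above is just the Euclidean distance between two triplets:

import math

def rgb_distance(c1, c2):
    # Euclidean distance between two RGB triplets treated as Cartesian points.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

print(rgb_distance((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))   # ≈ 1.414 (red vs. green)
print(rgb_distance((1.0, 0.0, 0.0), (1.0, 0.1, 0.1)))   # ≈ 0.141 (two similar reds)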
Colors in web-page design
Initially, the limited color depth of most video hardware led to a limited color palette of 216 RGB colors, defined by the Netscape Color Cube. The web-safe color palette consists of the 216 (6³) combinations of red, green, and blue where each color can take one of six values (in hexadecimal): #00, #33, #66, #99, #CC or #FF (based on the 0 to 255 range for each value discussed above). These hexadecimal values correspond to 0, 51, 102, 153, 204, and 255 in decimal, or 0%, 20%, 40%, 60%, 80%, and 100% in terms of intensity. This seems fine for splitting up 216 colors into a cube of dimension 6. However, lacking gamma correction, the perceived intensity on a standard 2.5-gamma CRT or LCD is only: 0%, 2%, 10%, 28%, 57%, 100%. See the actual web safe color palette for a visual confirmation that the majority of the colors produced are very dark.
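The palette is easy to enumerate; the following Python sketch (illustrative only) generates all 216 web-safe colors from the six channel levels listed above:

LEVELS = (0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF)   # the six permitted values per channel

web_safe = ["#{:02X}{:02X}{:02X}".format(r, g, b)
            for r in LEVELS for g in LEVELS for b in LEVELS]

print(len(web_safe))    # 216
print(web_safe[:3])     # ['#000000', '#000033', '#000066']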
With the predominance of 24-bit displays, the use of the full 16.7 million colors of the HTML RGB color code no longer poses problems for most viewers. The sRGB color space (a device-independent color space) for HTML was formally adopted as an Internet standard in HTML 3.2, though it had been in use for some time before that. All images and colors are interpreted as being sRGB (unless another color space is specified) and all modern displays can display this color space (with color management built into browsers or operating systems).
The syntax in CSS is:
rgb(#,#,#)
where # equals the proportion of red, green, and blue respectively. This syntax can be used after such selectors as "background-color:" or (for text) "color:".
Wide gamut color is possible in modern CSS, being supported by all major browsers since 2023.
For example, a color on the DCI-P3 color space can be indicated as:
color(display-p3 # # #)
where # equals the proportion of red, green, and blue in 0.0 to 1.0 respectively.
Color management
Proper reproduction of colors, especially in professional environments, requires color management of all the devices involved in the production process, many of them using RGB. Color management results in several transparent conversions between device-independent (sRGB, XYZ, L*a*b*) and device-dependent color spaces (RGB and others, as CMYK for color printing) during a typical production cycle, in order to ensure color consistency throughout the process. Along with the creative processing, such interventions on digital images can damage the color accuracy and image detail, especially where the gamut is reduced. Professional digital devices and software tools allow for 48 bpp (bits per pixel) images to be manipulated (16 bits per channel), to minimize any such damage.
ICC profile compliant applications, such as Adobe Photoshop, use either the Lab color space or the CIE 1931 color space as a Profile Connection Space when translating between color spaces.
RGB model and luminance–chrominance formats relationship
All luminance–chrominance formats used in the different TV and video standards, such as YIQ for NTSC, YUV for PAL, YDbDr for SECAM, and YPbPr for component video, use color difference signals, by which RGB color images can be encoded for broadcasting/recording and later decoded into RGB again for display. These intermediate formats were needed for compatibility with pre-existing black-and-white TV formats. Also, those color difference signals need lower data bandwidth compared to full RGB signals.
Similarly, current high-efficiency digital color image data compression schemes such as JPEG and MPEG store RGB color internally in YCbCr format, a digital luminance–chrominance format based on YPbPr. The use of YCbCr also allows computers to perform lossy subsampling of the chrominance channels (typically to 4:2:2 or 4:1:1 ratios), which reduces the resultant file size.
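As a hedged illustration of such an encoding (a Python sketch of the widely cited full-range BT.601-style conversion used by JPEG; the coefficients are the standard published ones, but real codecs add clamping and chroma subsampling on top of this):

def rgb_to_ycbcr(r, g, b):
    # Full-range BT.601-style conversion (r, g, b in 0–255).
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))   # ≈ (76.2, 85.0, 255.5); Cr would clip to 255 in 8-bit storage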
| Physical sciences | Basics_7 | null |
25995 | https://en.wikipedia.org/wiki/Reciprocating%20engine | Reciprocating engine | A reciprocating engine, more often known as a piston engine, is typically a heat engine that uses one or more reciprocating pistons to convert high temperature and high pressure into a rotating motion. This article describes the common features of all types. The main types are: the internal combustion engine, used extensively in motor vehicles; the steam engine, the mainstay of the Industrial Revolution; and the Stirling engine for niche applications. Internal combustion engines are further classified in two ways: either a spark-ignition (SI) engine, where the spark plug initiates the combustion; or a compression-ignition (CI) engine, where the air within the cylinder is compressed, thus heating it, so that the heated air ignites fuel that is injected then or earlier.
Common features in all types
There may be one or more pistons. Each piston is inside a cylinder, into which a gas is introduced, either already under pressure (e.g. steam engine), or heated inside the cylinder either by ignition of a fuel air mixture (internal combustion engine) or by contact with a hot heat exchanger in the cylinder (Stirling engine). The hot gases expand, pushing the piston to the bottom of the cylinder. This position is also known as the bottom dead center (BDC), or where the piston forms the largest volume in the cylinder. The piston is returned to the cylinder top (top dead center) (TDC) by a flywheel, the power from other pistons connected to the same shaft or (in a double acting cylinder) by the same process acting on the other side of the piston. This is where the piston forms the smallest volume in the cylinder. In most types the expanded or "exhausted" gases are removed from the cylinder by this stroke. The exception is the Stirling engine, which repeatedly heats and cools the same sealed quantity of gas. The stroke is simply the distance between the TDC and the BDC, or the greatest distance that the piston can travel in one direction.
In some designs the piston may be powered in both directions in the cylinder, in which case it is said to be double-acting.
In most types, the linear movement of the piston is converted to a rotating movement via a connecting rod and a crankshaft or by a swashplate or other suitable mechanism. A flywheel is often used to ensure smooth rotation or to store energy to carry the engine through an un-powered part of the cycle. The more cylinders a reciprocating engine has, generally, the more vibration-free (smoothly) it can operate. The power of a reciprocating engine is proportional to the volume of the combined pistons' displacement.
A seal must be made between the sliding piston and the walls of the cylinder so that the high pressure gas above the piston does not leak past it and reduce the efficiency of the engine. This seal is usually provided by one or more piston rings. These are rings made of a hard metal, and are sprung into a circular groove in the piston head. The rings fit closely in the groove and press lightly against the cylinder wall to form a seal, and more heavily when higher combustion pressure moves around to their inner surfaces.
It is common to classify such engines by the number and alignment of cylinders and total volume of displacement of gas by the pistons moving in the cylinders usually measured in cubic centimetres (cm3 or cc) or litres (l) or (L) (US: liter). For example, for internal combustion engines, single and two-cylinder designs are common in smaller vehicles such as motorcycles, while automobiles typically have between four and eight, and locomotives and ships may have a dozen cylinders or more. Cylinder capacities may range from 10 cm3 or less in model engines up to thousands of liters in ships' engines.
The compression ratio affects the performance in most types of reciprocating engine. It is the ratio between the volume of the cylinder, when the piston is at the bottom of its stroke, and the volume when the piston is at the top of its stroke.
The bore/stroke ratio is the ratio of the diameter of the piston, or "bore", to the length of travel within the cylinder, or "stroke". If this is around 1 the engine is said to be "square". If it is greater than 1, i.e. the bore is larger than the stroke, it is "oversquare". If it is less than 1, i.e. the stroke is larger than the bore, it is "undersquare".
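To make displacement and the bore/stroke ratio concrete, here is a small Python sketch (the engine figures are hypothetical, chosen only for illustration):

import math

def displacement_cc(bore_mm, stroke_mm, cylinders):
    # Swept volume of all cylinders in cubic centimetres.
    bore_cm, stroke_cm = bore_mm / 10.0, stroke_mm / 10.0
    return math.pi / 4.0 * bore_cm ** 2 * stroke_cm * cylinders

# Hypothetical four-cylinder engine with 86 mm bore and 86 mm stroke ("square"):
print(round(displacement_cc(86, 86, 4)))   # ≈ 1998 cc, i.e. about 2.0 litres
print(86 / 86)                             # bore/stroke ratio = 1.0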
Cylinders may be aligned in line, in a V configuration, horizontally opposite each other, or radially around the crankshaft. Opposed-piston engines put two pistons working at opposite ends of the same cylinder and this has been extended into triangular arrangements such as the Napier Deltic. Some designs have set the cylinders in motion around the shaft, such as the rotary engine.
In some steam engines, the cylinders may be of varying size with the smallest bore cylinder working the highest pressure steam. This is then fed through one or more, increasingly larger bore cylinders successively, to extract power from the steam at increasingly lower pressures. These engines are called compound engines.
Aside from looking at the power that the engine can produce, the mean effective pressure (MEP) can also be used in comparing the power output and performance of reciprocating engines of the same size. The mean effective pressure is the fictitious pressure which would produce the same amount of net work as was produced during the power stroke of the cycle. This is shown by:
W_net = MEP × A_p × L = MEP × V_d,
where A_p is the total piston area of the engine, L is the stroke length of the pistons, and V_d = A_p × L is the total displacement volume of the engine. Therefore:
MEP = W_net / V_d.
The engine with the larger value of MEP produces more net work per cycle and thus performs more efficiently.
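A minimal Python sketch of that relationship (the work and displacement figures are hypothetical, chosen only to illustrate the arithmetic):

def mean_effective_pressure(net_work_joules, displacement_m3):
    # MEP in pascals: net work per cycle divided by swept (displacement) volume.
    return net_work_joules / displacement_m3

# Hypothetical engine: 2.0 L displacement producing 2000 J of net work per cycle.
print(mean_effective_pressure(2000.0, 0.002))   # 1,000,000 Pa, i.e. 10 bar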
Operations
In steam engines and internal combustion engines, valves are required to allow the entry and exit of gases at the correct times in the piston's cycle. These are worked by cams, eccentrics or cranks driven by the shaft of the engine. Early designs used the D slide valve but this has been largely superseded by piston valve or poppet valve designs. In steam engines the point in the piston cycle at which the steam inlet valve closes is called the cutoff and this can often be controlled to adjust the torque supplied by the engine and improve efficiency. In some steam engines, the action of the valves can be replaced by an oscillating cylinder.
Internal combustion engines operate through a sequence of strokes that admit and remove gases to and from the cylinder. These operations are repeated cyclically and an engine is said to be 2-stroke, 4-stroke or 6-stroke depending on the number of strokes it takes to complete a cycle.
The most common type is 4-stroke, which has following cycles.
Intake: Also known as induction or suction. This stroke of the piston begins at top dead center (TDC) and ends at bottom dead center (BDC). In this stroke the intake valve must be in the open position while the piston pulls an air-fuel mixture into the cylinder by producing vacuum pressure in the cylinder through its downward motion.
Compression: This stroke begins at BDC, or just at the end of the suction stroke, and ends at TDC. In this stroke the piston compresses the air-fuel mixture in preparation for ignition during the power stroke (below). Both the intake and exhaust valves are closed during this stage.
Combustion: Also known as power or ignition. This is the start of the second revolution of the four-stroke cycle. At this point the crankshaft has completed a full 360-degree revolution. While the piston is at TDC (the end of the compression stroke) the compressed air-fuel mixture is ignited by a spark plug (in a gasoline engine) or by heat generated by high compression (diesel engines), forcefully returning the piston to BDC. This stroke produces the mechanical work that turns the crankshaft.
Exhaust: Also known as outlet. During the exhaust stroke, the piston, once again, returns from BDC to TDC while the exhaust valve is open. This action expels the spent air-fuel mixture through the exhaust valve.
History
The reciprocating engine developed in Europe during the 18th century, first as the atmospheric engine then later as the steam engine. These were followed by the Stirling engine and internal combustion engine in the 19th century. Today the most common form of reciprocating engine is the internal combustion engine running on the combustion of petrol, diesel, liquefied petroleum gas (LPG) or compressed natural gas (CNG) and used to power motor vehicles and engine power plants.
One notable reciprocating engine from the World War II era was the 28-cylinder, Pratt & Whitney R-4360 Wasp Major radial engine. It powered the last generation of large piston-engined planes before jet engines and turboprops took over from 1944 onward. It had a total engine capacity of , and a high power-to-weight ratio.
The largest reciprocating engine in production at present, but not the largest ever built, is the Wärtsilä-Sulzer RTA96-C turbocharged two-stroke diesel engine of 2006 built by Wärtsilä. It is used to power the largest modern container ships such as the Emma Mærsk. It is five stories high (), long, and weighs over in its largest 14-cylinder version, producing more than . Each cylinder has a capacity of , making a total capacity of for the largest versions.
Engine capacity
For piston engines, an engine's capacity is the engine displacement, in other words the volume swept by all the pistons of an engine in a single movement. It is generally measured in litres (l) or cubic inches (c.i.d., cu in, or in3) for larger engines, and cubic centimetres (abbreviated cc) for smaller engines. All else being equal, engines with greater capacities are more powerful, and fuel consumption increases accordingly (although this is not true of every reciprocating engine); however, power and fuel consumption are also affected by many factors outside of engine displacement.
Power
Reciprocating engines can be characterized by their specific power, which is typically given in kilowatts per litre of engine displacement (in the U.S. also horsepower per cubic inch). The result offers an approximation of the peak power output of an engine. This is not to be confused with fuel efficiency, since high efficiency often requires a lean fuel-air ratio, and thus lower power density. A modern high-performance car engine makes in excess of .
Other modern non-internal combustion types
Reciprocating engines that are powered by compressed air, steam or other hot gases are still used in some applications such as to drive many modern torpedoes or as pollution-free motive power. Most steam-driven applications use steam turbines, which are more efficient than piston engines.
The French-designed FlowAIR vehicles use compressed air stored in a cylinder to drive a reciprocating engine in a local-pollution-free urban vehicle.
Torpedoes may use a working gas produced by high test peroxide or Otto fuel II, which pressurize without combustion. The Mark 46 torpedo, for example, can travel underwater at fuelled by Otto fuel without oxidant.
Reciprocating quantum heat engine
Quantum heat engines are devices that generate power from heat that flows from a hot to a cold reservoir.
The mechanism of operation of the engine can be described by the laws of quantum mechanics.
Quantum refrigerators are devices that consume power with the purpose to pump heat from a cold to a hot reservoir.
In a reciprocating quantum heat engine, the working medium is a quantum system such as spin systems or a harmonic oscillator.
The Carnot cycle and Otto cycle are the ones most studied.
The quantum versions obey the laws of thermodynamics. In addition, these models can justify the assumptions of endoreversible thermodynamics.
A theoretical study has shown that it is possible and practical to build a reciprocating engine that is composed of a single oscillating atom. This is an area for future research and could have applications in nanotechnology.
Miscellaneous engines
There are a large number of unusual varieties of piston engines that have various claimed advantages, many of which see little if any current use:
Bourke engine
Free-piston engine
IRIS engine
Opposed-piston engine
Axial engine
Cam engine
Revolving cylinder engine
Swing-piston engine
Thermo-magnetic motor
| Technology | Engines | null |
26003 | https://en.wikipedia.org/wiki/Radian | Radian | The radian, denoted by the symbol rad, is the unit of angle in the International System of Units (SI) and is the standard unit of angular measure used in many areas of mathematics. It is defined such that one radian is the angle subtended at the centre of a circle by an arc that is equal in length to the radius. The unit was formerly an SI supplementary unit and is currently a dimensionless SI derived unit, defined in the SI as 1 rad = 1 and expressed in terms of the SI base unit metre (m) as rad = m/m. Angles without explicitly specified units are generally assumed to be measured in radians, especially in mathematical writing.
Definition
One radian is defined as the angle at the center of a circle in a plane that subtends an arc whose length equals the radius of the circle. More generally, the magnitude in radians of a subtended angle is equal to the ratio of the arc length to the radius of the circle; that is, θ = s/r, where θ is the magnitude in radians of the subtended angle, s is arc length, and r is radius. A right angle is exactly π/2 radians.
One complete revolution, expressed as an angle in radians, is the length of the circumference divided by the radius, which is 2πr/r, or 2π. Thus, 2π radians is equal to 360 degrees. The relation can be derived using the formula for arc length, ℓ = 2πr(θ/360°). Since a radian is the measure of an angle that is subtended by an arc of a length equal to the radius of the circle, setting ℓ = r for θ = 1 rad gives r = 2πr(1 rad/360°). This can be further simplified to 1 = 2π(1 rad/360°). Multiplying both sides by 360° gives 360° = 2π rad.
Unit symbol
The International Bureau of Weights and Measures and International Organization for Standardization specify rad as the symbol for the radian. Alternative symbols that were in use in 1909 are c (the superscript letter c, for "circular measure"), the letter r, or a superscript R, but these variants are infrequently used, as they may be mistaken for a degree symbol (°) or a radius (r). Hence an angle of 1.2 radians would be written today as 1.2 rad; archaic notations include 1.2 r, 1.2rad, 1.2c, or 1.2R (with the trailing marker written as a superscript).
In mathematical writing, the symbol "rad" is often omitted. When quantifying an angle in the absence of any symbol, radians are assumed, and when degrees are meant, the degree sign is used.
Dimensional analysis
Plane angle may be defined as θ = s/r, where θ is the magnitude in radians of the subtended angle, s is circular arc length, and r is radius. One radian corresponds to the angle for which s = r, hence 1 rad = 1. However, rad is only to be used to express angles, not to express ratios of lengths in general. A similar calculation using the area of a circular sector gives 1 radian as 1 m²/m² = 1. The key fact is that the radian is a dimensionless unit equal to 1. In SI 2019, the SI radian is defined accordingly as 1 rad = 1. It is a long-established practice in mathematics and across all areas of science to make use of rad = 1.
Giacomo Prando writes "the current state of affairs leads inevitably to ghostly appearances and disappearances of the radian in the dimensional analysis of physical equations". For example, an object hanging by a string from a pulley will rise or drop by y = rθ centimetres, where r is the magnitude of the radius of the pulley in centimetres and θ is the magnitude of the angle through which the pulley turns in radians. When multiplying r by θ, the unit radian does not appear in the product, nor does the unit centimetre—because both factors are magnitudes (numbers). Similarly in the formula for the angular velocity of a rolling wheel, ω = v/r, radians appear in the units of ω but not on the right hand side. Anthony French calls this phenomenon "a perennial problem in the teaching of mechanics". Oberhofer says that the typical advice of ignoring radians during dimensional analysis and adding or removing radians in units according to convention and contextual knowledge is "pedagogically unsatisfying".
In 1993 the American Association of Physics Teachers Metric Committee specified that the radian should explicitly appear in quantities only when different numerical values would be obtained when other angle measures were used, such as in the quantities of angle measure (rad), angular speed (rad/s), angular acceleration (rad/s2), and torsional stiffness (N⋅m/rad), and not in the quantities of torque (N⋅m) and angular momentum (kg⋅m2/s).
At least a dozen scientists between 1936 and 2022 have made proposals to treat the radian as a base unit of measurement for a base quantity (and dimension) of "plane angle". Quincey's review of proposals outlines two classes of proposal. The first option changes the unit of a radius to meters per radian, but this is incompatible with dimensional analysis for the area of a circle, . The other option is to introduce a dimensional constant. According to Quincey this approach is "logically rigorous" compared to SI, but requires "the modification of many familiar mathematical and physical equations". A dimensional constant for angle is "rather strange" and the difficulty of modifying equations to add the dimensional constant is likely to preclude widespread use.
In particular, Quincey identifies Torrens' proposal to introduce a constant η equal to 1 inverse radian (1 rad−1) in a fashion similar to the introduction of the constant ε0. With this change the formula for the angle subtended at the center of a circle, s = rθ, is modified to become s = ηrθ, and the Taylor series for the sine of an angle θ becomes:
Sin θ = sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯,
where x = ηθ = θ/rad is the angle in radians.
The capitalized function Sin is the "complete" function that takes an argument with a dimension of angle and is independent of the units expressed, while sin is the traditional function on pure numbers which assumes its argument is a dimensionless number in radians. The capitalised symbol Sin can be denoted sin if it is clear that the complete form is meant.
Current SI can be considered relative to this framework as a natural unit system where the equation η = 1 is assumed to hold, or similarly, 1 rad = 1. This radian convention allows the omission of η in mathematical formulas.
Defining radian as a base unit may be useful for software, where the disadvantage of longer equations is minimal. For example, the Boost units library defines angle units with a plane_angle dimension, and Mathematica's unit system similarly considers angles to have an angle dimension.
Conversions
Between degrees
As stated, one radian is equal to 180°/π. Thus, to convert from radians to degrees, multiply by 180°/π.
For example: 1 rad × (180°/π) ≈ 57.2958°.
Conversely, to convert from degrees to radians, multiply by π/180°.
For example: 23° × (π/180°) ≈ 0.4014 rad.
Radians can be converted to turns (one turn is the angle corresponding to a revolution) by dividing the number of radians by 2π.
Between gradians
One revolution is 2π radians, which equals one turn, which is by definition 400 gradians (400 gons or 400g). To convert from radians to gradians multiply by 200g/π, and to convert from gradians to radians multiply by π/200 rad. For example, 1.2 rad × (200g/π) ≈ 76.3944g.
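These conversion factors translate directly into code; the following Python sketch (illustrative only) mirrors the multiplications given above:

import math

def rad_to_deg(rad):
    return rad * 180.0 / math.pi

def deg_to_rad(deg):
    return deg * math.pi / 180.0

def rad_to_grad(rad):
    return rad * 200.0 / math.pi

print(rad_to_deg(1.0))    # ≈ 57.2958 degrees
print(deg_to_rad(23.0))   # ≈ 0.4014 radians
print(rad_to_grad(1.2))   # ≈ 76.3944 gradians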
Usage
Mathematics
In calculus and most other branches of mathematics beyond practical geometry, angles are measured in radians. This is because radians have a mathematical naturalness that leads to a more elegant formulation of some important results.
Results in analysis involving trigonometric functions can be elegantly stated when the functions' arguments are expressed in radians. For example, the use of radians leads to the simple limit formula
lim(h→0) (sin h)/h = 1,
which is the basis of many other identities in mathematics, including
d/dx sin x = cos x and d²/dx² sin x = −sin x.
Because of these and other properties, the trigonometric functions appear in solutions to mathematical problems that are not obviously related to the functions' geometrical meanings (for example, the solutions to the differential equation d²y/dx² = −y, the evaluation of the integral ∫ dx/(1 + x²), and so on). In all such cases, it is appropriate that the arguments of the functions are treated as (dimensionless) numbers—without any reference to angles.
The trigonometric functions of angles also have simple and elegant series expansions when radians are used. For example, when x is the angle expressed in radians, the Taylor series for sin x becomes:
sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯.
If y were the angle x but expressed in degrees, i.e. y = 180x/π, then the series would contain messy factors involving powers of π/180:
sin x = (π/180)y − (π/180)³y³/3! + (π/180)⁵y⁵/5! − ⋯.
In a similar spirit, if angles are involved, mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) can be elegantly stated when the functions' arguments are angles expressed in radians (and messy otherwise). More generally, in complex-number theory, the arguments of these functions are (dimensionless, possibly complex) numbers—without any reference to physical angles at all.
Physics
The radian is widely used in physics when angular measurements are required. For example, angular velocity is typically expressed in the unit radian per second (rad/s). One revolution per second corresponds to 2π radians per second.
Similarly, the unit used for angular acceleration is often radian per second per second (rad/s2).
For the purpose of dimensional analysis, the units of angular velocity and angular acceleration are s−1 and s−2 respectively.
Likewise, the phase angle difference of two waves can also be expressed using the radian as the unit. For example, if the phase angle difference of two waves is (n⋅2π) radians, where n is an integer, they are considered to be in phase, whilst if the phase angle difference of two waves is (n⋅2π + π) radians, with n an integer, they are considered to be in antiphase.
A unit of reciprocal radian or inverse radian (rad−1) is involved in derived units such as meter per radian (for angular wavelength) or newton-metre per radian (for torsional stiffness).
Prefixes and variants
Metric prefixes for submultiples are used with radians. A milliradian (mrad) is a thousandth of a radian (0.001 rad), i.e. 10⁻³ rad. There are 2π × 1000 milliradians (≈ 6283.185 mrad) in a circle, so a milliradian is just under 1/6283 of the angle subtended by a full circle. This unit of angular measurement of a circle is in common use by telescopic sight manufacturers using (stadiametric) rangefinding in reticles. The divergence of laser beams is also usually measured in milliradians.
The angular mil is an approximation of the milliradian used by NATO and other military organizations in gunnery and targeting. Each angular mil represents 1/6400 of a circle and is 15/8% or 1.875% smaller than the milliradian. For the small angles typically found in targeting work, the convenience of using the number 6400 in calculation outweighs the small mathematical errors it introduces. In the past, other gunnery systems have used different approximations to 1/2000π; for example Sweden used the 1/6300 streck and the USSR used 1/6000. Being based on the milliradian, the NATO mil subtends roughly 1 m at a range of 1000 m (at such small angles, the curvature is negligible).
Prefixes smaller than milli- are useful in measuring extremely small angles. Microradians (μrad, 10⁻⁶ rad) and nanoradians (nrad, 10⁻⁹ rad) are used in astronomy, and can also be used to measure the beam quality of lasers with ultra-low divergence. More common is the arc second, which is π/648,000 rad (around 4.8481 microradians).
History
Pre-20th century
The idea of measuring angles by the length of the arc was in use by mathematicians quite early. For example, al-Kashi (c. 1400) used so-called diameter parts as units, where one diameter part was 1/60 radian. They also used sexagesimal subunits of the diameter part. Newton in 1672 spoke of "the angular quantity of a body's circular motion", but used it only as a relative measure to develop an astronomical algorithm.
The concept of the radian measure is normally credited to Roger Cotes, who died in 1716. By 1722, his cousin Robert Smith had collected and published Cotes' mathematical writings in a book, Harmonia mensurarum. In a chapter of editorial comments, Smith gave what is probably the first published calculation of one radian in degrees, citing a note of Cotes that has not survived. Smith described the radian in everything but name – "Now this number is equal to 180 degrees as the radius of a circle to the semicircumference, this is as 1 to 3.141592653589" –, and recognized its naturalness as a unit of angular measure.
In 1765, Leonhard Euler implicitly adopted the radian as a unit of angle. Specifically, Euler defined angular velocity as "The angular speed in rotational motion is the speed of that point, the distance of which from the axis of gyration is expressed by one." Euler was probably the first to adopt this convention, referred to as the radian convention, which gives the simple formula for angular velocity ω = v/r. As discussed in § Dimensional analysis, the radian convention has been widely adopted, while dimensionally consistent formulations require the insertion of a dimensional constant, for example ω = v/(ηr).
Prior to the term radian becoming widespread, the unit was commonly called circular measure of an angle. The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson (brother of Lord Kelvin) at Queen's College, Belfast. He had used the term as early as 1871, while in 1869, Thomas Muir, then of the University of St Andrews, vacillated between the terms rad, radial, and radian. In 1874, after a consultation with James Thomson, Muir adopted radian. The name radian was not universally adopted for some time after this. Longmans' School Trigonometry still called the radian circular measure when published in 1890.
In 1893 Alexander Macfarlane wrote "the true analytical argument for the circular ratios is not the ratio of the arc to the radius, but the ratio of twice the area of a sector to the square on the radius." However, the paper was withdrawn from the published proceedings of mathematical congress held in connection with World's Columbian Exposition in Chicago (acknowledged at page 167), and privately published in his Papers on Space Analysis (1894). Macfarlane reached this idea or ratios of areas while considering the basis for hyperbolic angle which is analogously defined.
As an SI unit
As Paul Quincey et al. write, "the status of angles within the International System of Units (SI) has long been a source of controversy and confusion." In 1960, the General Conference on Weights and Measures (CGPM) established the SI and the radian was classified as a "supplementary unit" along with the steradian. This special class was officially regarded "either as base units or as derived units", as the CGPM could not reach a decision on whether the radian was a base unit or a derived unit. Richard Nelson writes "This ambiguity [in the classification of the supplemental units] prompted a spirited discussion over their proper interpretation." In May 1980 the Consultative Committee for Units (CCU) considered a proposal for making radians an SI base unit, using a constant , but turned it down to avoid an upheaval to current practice.
In October 1980 the CGPM decided that supplementary units were dimensionless derived units for which the CGPM allowed the freedom of using them or not using them in expressions for SI derived units, on the basis that "[no formalism] exists which is at the same time coherent and convenient and in which the quantities plane angle and solid angle might be considered as base quantities" and that "[the possibility of treating the radian and steradian as SI base units] compromises the internal coherence of the SI based on only seven base units". In 1995 the CGPM eliminated the class of supplementary units and defined the radian and the steradian as "dimensionless derived units, the names and symbols of which may, but need not, be used in expressions for other SI derived units, as is convenient". Mikhail Kalinin writing in 2019 has criticized the 1980 CGPM decision as "unfounded" and says that the 1995 CGPM decision used inconsistent arguments and introduced "numerous discrepancies, inconsistencies, and contradictions in the wordings of the SI".
At the 2013 meeting of the CCU, Peter Mohr gave a presentation on alleged inconsistencies arising from defining the radian as a dimensionless unit rather than a base unit. CCU President Ian M. Mills declared this to be a "formidable problem" and the CCU Working Group on Angles and Dimensionless Quantities in the SI was established. The CCU met in 2021, but did not reach a consensus. A small number of members argued strongly that the radian should be a base unit, but the majority felt the status quo was acceptable or that the change would cause more problems than it would solve. A task group was established to "review the historical use of SI supplementary units and consider whether reintroduction would be of benefit", among other activities.
| Physical sciences | Angle | null |
26073 | https://en.wikipedia.org/wiki/Right%20ascension | Right ascension | Right ascension (abbreviated RA; symbol α) is the angular distance of a particular point measured eastward along the celestial equator from the Sun at the March equinox to the (hour circle of the) point in question above the Earth.
When paired with declination, these astronomical coordinates specify the location of a point on the celestial sphere in the equatorial coordinate system.
An old term, right ascension (Latin: ascensio recta) refers to the ascension, or the point on the celestial equator that rises with any celestial object as seen from Earth's equator, where the celestial equator intersects the horizon at a right angle; an early definition reads "Ascensio recta Solis, stellæ, aut alterius cujusdam signi, est gradus æquatorus cum quo simul exoritur in sphæra recta", roughly translated, "Right ascension of the Sun, stars, or any other sign, is the degree of the equator that rises together in a right sphere". It contrasts with oblique ascension, the point on the celestial equator that rises with any celestial object as seen from most latitudes on Earth, where the celestial equator intersects the horizon at an oblique angle.
Explanation
Right ascension is the celestial equivalent of terrestrial longitude. Both right ascension and longitude measure an angle from a primary direction (a zero point) on an equator. Right ascension is measured from the Sun at the March equinox i.e. the First Point of Aries, which is the place on the celestial sphere where the Sun crosses the celestial equator from south to north at the March equinox and is currently located in the constellation Pisces. Right ascension is measured continuously in a full circle from that alignment of Earth and Sun in space, that equinox, the measurement increasing towards the east.
As seen from Earth (except at the poles), objects noted to have RA = 12h are longest visible (appear throughout the night) at the March equinox; those with RA = 0h (apart from the Sun) do so at the September equinox. On those dates at midnight, such objects will reach ("culminate" at) their highest point (their meridian). How high depends on their declination; if 0° declination (i.e. on the celestial equator) then at Earth's equator they are directly overhead (at zenith).
Any angular unit could have been chosen for right ascension, but it is customarily measured in hours (h), minutes (m), and seconds (s), with 24h being equivalent to a full circle. Astronomers have chosen this unit to measure right ascension because they measure a star's location by timing its passage through the highest point in the sky as the Earth rotates. The line which passes through the highest point in the sky, called the meridian, is the projection of a longitude line onto the celestial sphere. Since a complete circle contains 24h of right ascension or 360° (degrees of arc), 1/24 of a circle is measured as 1h of right ascension, or 15°; 1/1440 of a circle is measured as 1m of right ascension, or 15 minutes of arc (also written as 15′); and 1/86400 of a circle contains 1s of right ascension, or 15 seconds of arc (also written as 15″). A full circle, measured in right-ascension units, contains 24 × 60 × 60 = 86400s, or 1440m, or 24h.
Because right ascensions are measured in hours (of rotation of the Earth), they can be used to time the positions of objects in the sky. For example, if a star with RA = 1h 30m 00s is at its meridian, then a star with RA = 20h 00m 00s will be at its meridian (at its apparent highest point) 18.5 sidereal hours later.
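The hour–degree relationship above is easy to express in code; this Python sketch (illustrative only) converts a right ascension given in hours, minutes, and seconds to degrees and reproduces the 18.5-hour interval of the example:

def ra_to_degrees(hours, minutes, seconds):
    # 24h of right ascension correspond to 360°, so 1h = 15°.
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

def ra_to_hours(hours, minutes, seconds):
    return hours + minutes / 60.0 + seconds / 3600.0

print(ra_to_degrees(1, 30, 0))                         # 22.5 degrees
print(ra_to_hours(20, 0, 0) - ra_to_hours(1, 30, 0))   # 18.5 sidereal hours between meridian passages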
Sidereal hour angle, used in celestial navigation, is similar to right ascension but increases westward rather than eastward. Usually measured in degrees (°), it is the complement of right ascension with respect to 24h. It is important not to confuse sidereal hour angle with the astronomical concept of hour angle, which measures the angular distance of an object westward from the local meridian.
Symbols and abbreviations
Effects of precession
The Earth's axis traces a small circle (relative to its celestial equator) slowly westward about the celestial poles, completing one cycle in about 26,000 years. This movement, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates (including right ascension) are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch. Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch. Right ascension for "fixed stars" on the equator increases by about 3.1 seconds per year or 5.1 minutes per century, but for fixed stars away from the equator the rate of change can be anything from negative infinity to positive infinity. (To this must be added the proper motion of a star.) Over a precession cycle of 26,000 years, "fixed stars" that are far from the ecliptic poles increase in right ascension by 24h, or about 5.6 min per century, whereas stars within 23.5° of an ecliptic pole undergo a net change of 0h. The right ascension of Polaris is increasing quickly: in AD 2000 it was 2.5h, but when it gets closest to the north celestial pole in 2100 its right ascension will be 6h. The North Ecliptic Pole in Draco and the South Ecliptic Pole in Dorado are always at right ascension 18h and 6h respectively.
The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix "J" indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian epochs B1875.0, B1900.0, and B1950.0.
History
The concept of right ascension has been known at least as far back as Hipparchus who measured stars in equatorial coordinates in the 2nd century BC. But Hipparchus and his successors made their star catalogs in ecliptic coordinates, and the use of RA was limited to special cases.
With the invention of the telescope, it became possible for astronomers to observe celestial objects in greater detail, provided that the telescope could be kept pointed at the object for a period of time. The easiest way to do that is to use an equatorial mount, which allows the telescope to be aligned with one of its two pivots parallel to the Earth's axis. A motorized clock drive often is used with an equatorial mount to cancel out the Earth's rotation. As the equatorial mount became widely adopted for observation, the equatorial coordinate system, which includes right ascension, was adopted at the same time for simplicity. Equatorial mounts could then be accurately pointed at objects with known right ascension and declination by the use of setting circles. The first star catalog to use right ascension and declination was John Flamsteed's Historia Coelestis Britannica (1712, 1725).
| Physical sciences | Celestial sphere: General | Astronomy |
26088 | https://en.wikipedia.org/wiki/Red%20wolf | Red wolf | The red wolf (Canis rufus) is a canine native to the southeastern United States. Its size is intermediate between the coyote (Canis latrans) and gray wolf (Canis lupus).
The red wolf's taxonomic classification as being a separate species has been contentious for nearly a century, being classified either as a subspecies of the gray wolf Canis lupus rufus, or a coywolf (a genetic admixture of wolf and coyote). Because of this, it is sometimes excluded from endangered species lists, despite its critically low numbers. Under the Endangered Species Act of 1973, the U.S. Fish and Wildlife Service recognizes the red wolf as an endangered species and grants it protected status. Since 1996, the IUCN has listed the red wolf as a Critically Endangered species; however, it is not listed in the CITES Appendices of endangered species.
History
Red wolves were once distributed throughout the southeastern and south-central United States from the Atlantic Ocean to central Texas, southeastern Oklahoma and southwestern Illinois in the west, and in the north from the Ohio River Valley, northern Pennsylvania, southern New York, and extreme southern Ontario in Canada south to the Gulf of Mexico. The red wolf was nearly driven to extinction by the mid-1900s due to aggressive predator-control programs, habitat destruction, and extensive hybridization with coyotes. By the late 1960s, it occurred in small numbers in the Gulf Coast of western Louisiana and eastern Texas.
Fourteen of these survivors were selected to be the founders of a captive-bred population, which was established in the Point Defiance Zoo and Aquarium between 1974 and 1980. After a successful experimental relocation to Bulls Island off the coast of South Carolina in 1978, the red wolf was declared extinct in the wild in 1980 so that restoration efforts could proceed. In 1987, the captive animals were released into the Alligator River National Wildlife Refuge (ARNWR) on the Albemarle Peninsula in North Carolina, with a second unsuccessful release taking place two years later in the Great Smoky Mountains National Park. Of 63 red wolves released from 1987 to 1994, the population rose to as many as 100–120 individuals in 2012, but due to the lack of regulation enforcement by the US Fish and Wildlife Service, the population has declined to 40 individuals in 2018, about 14 in 2019 and 8 as of October 2021. No wild litters were born between 2019 and 2020.
Under pressure from conservation groups, the US Fish and Wildlife Service resumed reintroductions in 2021 and increased protection. In 2022, the first wild litter was born since 2018. As of 2023, there are between 15 and 17 wild red wolves in ARNWR.
Description and behavior
The red wolf's appearance is typical of the genus Canis, and is generally intermediate in size between the coyote and gray wolf, though some specimens may overlap in size with small gray wolves. A study of Canis morphometrics conducted in eastern North Carolina reported that red wolves are morphometrically distinct from coyotes and hybrids. Adults measure 136–165 cm (53.5–65 in) in length, comprising a tail of about 37 cm (14.6 in). Their weight ranges from 20 to 39 kg (44–85 lbs) with males averaging 29 kg (64 lbs) and females 25 kg (55 lbs). Its pelage is typically more reddish and sparsely furred than the coyote's and gray wolf's, though melanistic individuals do occur. Its fur is generally tawny to grayish in color, with light markings around the lips and eyes. The red wolf has been compared by some authors to the greyhound in general form, owing to its relatively long and slender limbs. The ears are also proportionately larger than the coyote's and gray wolf's. The skull is typically narrow, with a long and slender rostrum, a small braincase and a well developed sagittal crest. Its cerebellum is unlike that of other Canis species, being closer in form to that of canids of the Vulpes and Urocyon genera, thus indicating that the red wolf is one of the more plesiomorphic members of its genus.
The red wolf is more sociable than the coyote, but less so than the gray wolf. It mates in January–February, with an average of 6–7 pups being born in March, April, and May. It is monogamous, with both parents participating in the rearing of young. Denning sites include hollow tree trunks, along stream banks and the abandoned earths of other animals. By the age of six weeks, the pups distance themselves from the den, and reach full size at the age of one year, becoming sexually mature two years later.
Using long-term data on red wolf individuals of known pedigree, it was found that inbreeding among first-degree relatives was rare. A likely mechanism for avoidance of inbreeding is independent dispersal trajectories from the natal pack. Many of the young wolves spend time alone or in small non-breeding packs composed of unrelated individuals. The union of two unrelated individuals in a new home range is the predominant pattern of breeding pair formation. Inbreeding is avoided because it results in progeny with reduced fitness (inbreeding depression) that is predominantly caused by the homozygous expression of recessive deleterious alleles.
Prior to its extinction in the wild, the red wolf's diet consisted of rabbits, rodents, and nutria (an introduced species). In contrast, the red wolves from the restored population rely on white-tailed deer, pig, raccoon, rice rats, muskrats, nutria, rabbits and carrion. White-tailed deer were largely absent from the last wild refuge of red wolves on the Gulf Coast between Texas and Louisiana (where specimens were trapped from the last wild population for captive breeding), which likely accounts for the discrepancy in their dietary habits listed here. Historical accounts of wolves in the southeast by early explorers such as William Hilton, who sailed along the Cape Fear River in what is now North Carolina in 1644, also note that they ate deer.
Range and habitat
The originally recognized red wolf range extended throughout the southeastern United States from the Atlantic and Gulf Coasts, north to the Ohio River Valley and central Pennsylvania, and west to Central Texas and southeastern Missouri. Research into paleontological, archaeological and historical specimens of red wolves by Ronald Nowak expanded their known range to include land south of the Saint Lawrence River in Canada, along the eastern seaboard, and west to Missouri and mid-Illinois, terminating in the southern latitudes of Central Texas.
Given their wide historical distribution, red wolves probably used a large suite of habitat types at one time. The last naturally occurring population used coastal prairie marshes, swamps, and agricultural fields used to grow rice and cotton. However, this environment probably does not typify preferred red wolf habitat. Some evidence shows the species was found in highest numbers in the once extensive bottom-land river forests and swamps of the southeastern United States. Red wolves reintroduced into northeastern North Carolina have used habitat types ranging from agricultural lands to forest/wetland mosaics characterized by an overstory of pine and an understory of evergreen shrubs. This suggests that red wolves are habitat generalists and can thrive in most settings where prey populations are adequate and persecution by humans is slight.
Extirpation in the wild
In 1940, the biologist Stanley P. Young noted that the red wolf was still common in eastern Texas, where more than 800 had been caught in 1939 because of their attacks on livestock. He did not believe that they could be exterminated because of their habit of living concealed in thickets. In 1962 a study of skull morphology of wild Canis in the states of Arkansas, Louisiana, Oklahoma, and Texas indicated that the red wolf existed in only a few populations due to hybridization with the coyote. The explanation was that either the red wolf could not adapt to changes to its environment due to human land-use along with its accompanying influx of competing coyotes from the west, or that the red wolf was being hybridized out of existence by the coyote.
Reintroduced habitat
Since 1987, red wolves have been released into northeastern North Carolina, where they roam 1.7 million acres. These lands span five counties (Dare, Hyde, Tyrrell, Washington, and Beaufort) and include three national wildlife refuges, a U.S. Air Force bombing range, and private land. The red wolf recovery program is unique for a large carnivore reintroduction in that more than half of the land used for reintroduction lies on private property. Approximately are federal and state lands, and are private lands.
Beginning in 1991, red wolves were also released into the Great Smoky Mountains National Park in eastern Tennessee. However, due to exposure to environmental disease (parvovirus), parasites, and competition (with coyotes as well as intraspecific aggression), the red wolf was unable to successfully establish a wild population in the park. Low prey density was also a problem, forcing the wolves to leave the park boundaries in pursuit of food in lower elevations. In 1998, the FWS took away the remaining red wolves in the Great Smoky Mountains National Park, relocating them to Alligator River National Wildlife Refuge in eastern North Carolina. Other red wolves have been released on the coastal islands in Florida, Mississippi, and South Carolina as part of the captive breeding management plan. St. Vincent Island in Florida is currently the only active island propagation site.
Captive breeding and reintroduction
After the passage of the Endangered Species Act of 1973, formal efforts backed by the U.S. Fish and Wildlife Service began to save the red wolf from extinction, when a captive-breeding program was established at the Point Defiance Zoological Gardens, Tacoma, Washington. Four hundred animals were captured from southwestern Louisiana and southeastern Texas from 1973 to 1980 by the USFWS.
Measurements, vocalization analyses, and skull X-rays were used to distinguish red wolves from coyotes and red wolf × coyote hybrids. Of the 400 canids captured, only 43 were believed to be red wolves and sent to the breeding facility. The first litters were produced in captivity in May 1977. Some of the pups were determined to be hybrids, and they and their parents were removed from the program. Of the original 43 animals, only 17 were considered pure red wolves and since three were unable to breed, 14 became the breeding stock for the captive-breeding program. These 14 were so closely related that they had the genetic effect of being only eight individuals.
In 1996, the red wolf was listed by the International Union for Conservation of Nature as a critically endangered species.
20th century releases
1976 release in Cape Romain National Wildlife Refuge
In December 1976, two wolves were released onto Cape Romain National Wildlife Refuge's Bulls Island in South Carolina with the intent of testing and honing reintroduction methods. They were not released with the intent of beginning a permanent population on the island. The first experimental translocation lasted for 11 days, during which a mated pair of red wolves was monitored day and night with remote telemetry. A second experimental translocation was tried in 1978 with a different mated pair, and they were allowed to remain on the island for close to nine months. After that, a larger project was executed in 1987 to reintroduce a permanent population of red wolves back to the wild in the Alligator River National Wildlife Refuge (ARNWR) on the eastern coast of North Carolina. Also in 1987, Bulls Island became the first island breeding site. Pups were raised on the island and relocated to North Carolina until 2005.
1986 release in Alligator River National Wildlife Refuge
In September 1987, four male-female pairs of red wolves were released in the Alligator River National Wildlife Refuge, in northeastern North Carolina, and designated as an experimental population. Since then, the experimental population has grown and the recovery area expanded to include four national wildlife refuges, a Department of Defense bombing range, state-owned lands, and private lands, encompassing about .
1989 release on Horn Island, Mississippi
In 1989, the second island propagation project was initiated with release of a population on Horn Island off the Mississippi coast. This population was removed in 1998 because of a likelihood of encounters with humans. The third island propagation project introduced a population on St. Vincent Island, Florida, offshore between Cape San Blas and Apalachicola, Florida, in 1990, and in 1997, the fourth island propagation program introduced a population to Cape St. George Island, Florida, south of Apalachicola.
1991 release in the Great Smoky Mountains
In 1991, two pairs were reintroduced into the Great Smoky Mountains National Park, where the last known red wolf was killed in 1905. Despite some early success, the wolves were relocated to eastern North Carolina in 1998, ending the effort to reintroduce the species to the park.
21st century status
Over 30 facilities participate in the red wolf Species Survival Plan and oversee the breeding and reintroduction of over 150 wolves.
In 2007, the USFWS estimated that 300 red wolves remained in the world, with 207 of those in captivity. By late 2020, the number of wild individuals had shrunk to only about 7 radio-collared and a dozen uncollared individuals, with no wild pups born since 2018. This decline has been linked to shooting and poisoning of wolves by landowners, and suspended conservation efforts by the USFWS.
A 2019 analysis by the Center for Biological Diversity of available habitat throughout the red wolf's former range found that over 20,000 square miles of public land across five sites had viable habitat into which red wolves could be reintroduced in the future. These sites were chosen based on prey levels, isolation from coyotes and human development, and connectivity with other sites. They include the Apalachicola and Osceola National Forests along with the Okefenokee National Wildlife Refuge and nearby protected lands; numerous national parks and national forests in the Appalachian Mountains, including the Monongahela, George Washington & Jefferson, Cherokee, Pisgah, Nantahala, Chattahoochee, and Talladega National Forests along with Shenandoah National Park and the lower elevations of Great Smoky Mountains National Park; Croatan National Forest and Hofmann Forest on the North Carolina coast; and the Ozark, Ouachita, and Mark Twain National Forests in the central United States.
In late 2018, two canids that are largely coyote were found on Galveston Island, Texas, with red wolf alleles (gene variants) left over from a ghost population of red wolves. Since these alleles come from a different population than the red wolves in the North Carolina captive breeding program, there has been a proposal to selectively cross-breed the Galveston Island coyotes into the captive red wolf population. Another study published around the same time, analyzing canid scat and hair samples in southwestern Louisiana, found genetic evidence of red wolf ancestry in about 55% of sampled canids, with one individual having between 78 and 100% red wolf ancestry, suggesting the possibility that red wolf genes persist in the wild that are not present in the captive population.
From 2015 to 2019, no red wolves were released into the wild. In March 2020, however, the FWS released a new breeding pair of red wolves, including a young male from St. Vincent Island, Florida, into the Alligator River National Wildlife Refuge. The pair did not produce a litter of pups in the wild. On March 1, 2021, two male red wolves from Florida were paired with two female wild red wolves from eastern North Carolina and released into the wild. One of the male wolves was killed by a car shortly after being released. On April 30 and May 1, four adult red wolves were released into the wild and four red wolf pups were fostered by a wild female red wolf. With the addition of the eight released wolves, the total number of red wolves living in the wild amounted to nearly thirty individuals, including a dozen wolves not wearing radio collars.
A study published in 2020 reported camera traps recorded "the presence of a large canid possessing wolf-like characters" in northeast Texas and later hair samples and tracks from the area indicated the presence of red wolves.
By fall of 2021, a total of six red wolves had been killed, including the four adults released in the spring. Three of the released adults had been killed in vehicle collisions, and the fourth had been shot by a landowner who feared the wolf was after his chickens; the other two wolves died of unknown causes. These losses dropped the number of wolves in the wild to about 20 individuals. In the winter of 2021–2022, the Fish and Wildlife Service selected nine captive adult red wolves to be released into the wild. A family of five red wolves was released into the Pocosin Lakes National Wildlife Refuge, while two new breeding pairs of adult wolves were released into the Alligator River National Wildlife Refuge. The release of these new wolves brought the number of wild red wolves in eastern North Carolina to just under 30 individuals.
On April 22, 2022, one of the breeding pairs of adult red wolves produced a litter of six wolf pups, four females and two males. This new litter of red wolf pups became the first litter born in the wild since 2018. As of 2023, there are between 15 and 17 wild red wolves in Alligator River National Wildlife Refuge.
Existing population
In April and May 2023, two captive male red wolves were paired with two wild female wolves in acclimation pens and were later released into the wild. At the same time, the wild breeding pair that produced a litter of pups the previous year gave birth to a second litter of five pups, two males and three females. A male wolf pup from a captive litter was fostered into the pack, and with this new addition the family of red wolves, named the Milltail pack by the FWS, grew to 13 wild individuals. These six new pups brought the wild population of red wolves up to 23–25 individuals.
In May 2023, two families of red wolves were placed in acclimation pens to be released into the wild in the Pocosin Lakes National Wildlife Refuge in Tyrrell County. One family consisted of a breeding pair and three pups, while the other consisted of a breeding pair, a yearling female, and four young pups that were born in the acclimation pen. In early June 2023, the two families of red wolves were released to roam through PLNWR. With the addition of these two packs, the wild population of red wolves increased to about 35 individuals. In addition to the wild population, there are approximately 270 red wolves in zoos and captive breeding programs across the U.S.
Coyote × re-introduced red wolf issues
Interbreeding with the coyote has been recognized as a threat affecting the restoration of red wolves. Adaptive management efforts are making progress in reducing the threat of coyotes to the red wolf population in northeastern North Carolina. Other threats, such as habitat fragmentation, disease, and human-caused mortality, are of concern in the restoration of red wolves. Efforts to reduce the threats are presently being explored.
By 1999, introgression of coyote genes was recognized as the single greatest threat to wild red wolf recovery, and an adaptive management plan that included coyote sterilization proved successful, reducing coyote gene introgression to less than 4% of the wild red wolf population by 2015.
Since the 2014 programmatic review, the USFWS ceased implementing the red wolf adaptive management plan that was responsible for preventing red wolf hybridization with coyotes and allowed the release of captive-born red wolves into the wild population. Since then, the wild population has decreased from 100–115 red wolves to less than 30. Despite the controversy over the red wolf's status as a unique taxon as well as the USFWS' apparent disinterest towards wolf conservation in the wild, the vast majority of public comments (including NC residents) submitted to the USFWS in 2017 over their new wolf management plan were in favor of the original wild conservation plan.
A 2016 genetic study of canid scats found that despite high coyote density inside the Red Wolf Experimental Population Area (RWEPA), hybridization occurs rarely (4% are hybrids).
Contested killing of re-introduced red wolves
High wolf mortality related to anthropogenic causes appeared to be the main factor limiting wolf dispersal westward from the RWEPA. High anthropogenic wolf mortality similarly limits expansion of eastern wolves outside of protected areas in south-eastern Canada.
In 2012, the Southern Environmental Law Center filed a lawsuit against the North Carolina Wildlife Resources Commission for jeopardizing the existence of the wild red wolf population by allowing nighttime hunting of coyotes in the five-county restoration area in eastern North Carolina. A 2014 court-approved settlement agreement banned nighttime hunting of coyotes and required permits for, and reporting of, coyote hunting. In response to the settlement, the North Carolina Wildlife Resources Commission adopted a resolution requesting that the USFWS remove all wild red wolves from private lands, terminate recovery efforts, and declare red wolves extinct in the wild. This resolution came in the wake of a 2014 programmatic review of the red wolf conservation program conducted by the Wildlife Management Institute, which described the reintroduction of the red wolf as a remarkable achievement. The report indicated that red wolves could be released and survive in the wild, but that illegal killing of red wolves threatens the long-term persistence of the population. The report stated that the USFWS needed to update its red wolf recovery plan, thoroughly evaluate its strategy for preventing coyote hybridization, and increase its public outreach.
In 2014, the USFWS issued the first take permit for a red wolf to a private landowner. Since then, the USFWS issued several other take permits to landowners in the five-county restoration area. During June 2015, a landowner shot and killed a female red wolf after being authorized a take permit, causing a public outcry. In response, the Southern Environmental Law Center filed a lawsuit against the USFWS for violating the Endangered Species Act.
By 2016, the red wolf population of North Carolina had declined to 45–60 wolves. The largest cause of this decline was gunshot mortality.
In June 2018, the USFWS announced a proposal that would limit the wolves' safe range to only Alligator River National Wildlife Refuge, where only about 35 wolves remain, thus allowing hunting on private land. In November 2018, Chief Judge Terrence W. Boyle found that the USFWS had violated its congressional mandate to protect the red wolf, and ruled that USFWS had no power to give landowners the right to shoot them.
Relationship to humans
Since before European colonization of the Americas, the red wolf has featured prominently in Cherokee spiritual beliefs, where it is known as wa'ya (ᏩᏯ), and is said to be the companion of Kana'ti, the hunter and father of the Aniwaya or Wolf Clan. Traditionally, Cherokee people generally avoid killing red wolves, as such an act is believed to bring about the vengeance of the killed animals' pack-mates.
Taxonomy
The taxonomic status of the red wolf is debated. It has been described as either a species with a distinct lineage, a recent hybrid of the gray wolf and the coyote, an ancient hybrid of the gray wolf and the coyote which warrants species status, or a distinct species that has undergone recent hybridization with the coyote.
The naturalists John James Audubon and John Bachman were the first to suggest that the wolves of the southern United States were different from wolves found in other regions of the country. In 1851 they recorded the "Black American Wolf" as C. l. var. ater that existed in Florida, South Carolina, North Carolina, Kentucky, southern Indiana, southern Missouri, Louisiana, and northern Texas. They also recorded the "Red Texan Wolf" as C. l. var. rufus that existed from northern Arkansas, through Texas, and into Mexico. In 1912 the zoologist Gerrit Smith Miller Jr. noted that the designation ater was unavailable and recorded these wolves as C. l. floridanus.
In 1937, the zoologist Edward Alphonso Goldman proposed a new species of wolf Canis rufus. Three subspecies of red wolf were originally recognized by Goldman, with two of these subspecies now being extinct. The Florida black wolf (Canis rufus floridanus) (Maine to Florida) has been extinct since 1908 and the Mississippi Valley red wolf (Canis rufus gregoryi) (south-central United States) was declared extinct by 1980. By the 1970s, the Texas red wolf (Canis rufus rufus) existed only in the coastal prairies and marshes of extreme southeastern Texas and southwestern Louisiana. These were removed from the wild to form a captive breeding program and reintroduced into eastern North Carolina in 1987.
In 1967, the zoologists Barbara Lawrence and William H. Bossert believed that the case for classifying C. rufus as a species was based too heavily on the small red wolves of central Texas, where hybridization with the coyote was known to occur. They said that if an adequate number of specimens had been included from Florida, then the separation of C. rufus from C. lupus would have been unlikely. The taxonomic reference Catalogue of Life classifies the red wolf as a subspecies of Canis lupus. The mammalogist W. Christopher Wozencraft, writing in Mammal Species of the World (2005), regarded the red wolf as a hybrid of the gray wolf and the coyote but, because of its uncertain status, compromised by listing it as a subspecies of the gray wolf, Canis lupus rufus.
In 2021, the American Society of Mammalogists considered the red wolf as its own species (Canis rufus).
Taxonomic debate
When European settlers first arrived to North America, the coyote's range was limited to the western half of the continent. They existed in the arid areas and across the open plains, including the prairie regions of the midwestern states. Early explorers found some in Indiana and Wisconsin. From the mid-1800s onward, coyotes began expanding beyond their original range.
The taxonomic debate regarding North American wolves can be summarised as follows:
Fossil evidence
The paleontologist Ronald M. Nowak notes that the oldest fossil remains of the red wolf are 10,000 years old and were found in Florida near Melbourne, Brevard County, Withlacoochee River, Citrus County, and Devil's Den Cave, Levy County. He notes that there are only a few, but questionable, fossil remains of the gray wolf found in the southeastern states. He proposes that following the extinction of the dire wolf, the coyote appears to have been displaced from the southeastern US by the red wolf until the last century, when the extirpation of wolves allowed the coyote to expand its range. He also proposes that the ancestor of all North American and Eurasian wolves was C. mosbachensis, which lived in the Middle Pleistocene 700,000–300,000 years ago.
C. mosbachensis was a wolf that once lived across Eurasia before going extinct. It was smaller than most North American wolf populations and smaller than C. rufus, and has been described as being similar in size to the small Indian wolf, Canis lupus pallipes. He further proposes that C. mosbachensis invaded North America where it became isolated by the later glaciation and there gave rise to C. rufus. In Eurasia, C. mosbachensis evolved into C. lupus, which later invaded North America.
The paleontologist Xiaoming Wang, an expert on the natural history of the genus Canis, looked at red wolf fossil material but could not state whether or not it was a separate species. He said that Nowak had put together more morphometric data on red wolves than anybody else, but that Nowak's statistical analysis of the data revealed a red wolf that is difficult to deal with. Wang proposes that studies of ancient DNA taken from fossils might help settle the debate.
Morphological evidence
In 1771, the English naturalist Mark Catesby referred to Florida and the Carolinas when he wrote that "The Wolves in America are like those of Europe, in shape and colour, but are somewhat smaller." They were described as being more timid and less voracious. In 1791 the American naturalist William Bartram wrote in his book Travels about a wolf which he had encountered in Florida that was larger than a dog, but was black in contrast to the larger yellow-brown wolves of Pennsylvania and Canada. In 1851 the naturalists John James Audubon and John Bachman described the "Red Texan Wolf" in detail. They noted that it could be found in Florida and other southeastern states, but it differed from other North American wolves and named it Canis lupus rufus. It was described as being more fox-like than the gray wolf, but retaining the same "sneaking, cowardly, yet ferocious disposition".
In 1905, the mammalogist Vernon Bailey referred to the "Texan Red Wolf" with the first use of the name Canis rufus. In 1937 the zoologist Edward Goldman undertook a morphological study of southeastern wolf specimens. He noted that their skulls and dentition differed from those of gray wolves and closely approached those of coyotes. He identified the specimens as all belonging to the one species, which he referred to as Canis rufus. Goldman then examined a large number of southeastern wolf specimens and identified three subspecies, noting that their colors included black, gray, and cinnamon-buff.
It is difficult to distinguish the red wolf from a red wolf × coyote hybrid. During the 1960s, two studies of the skull morphology of wild Canis in the southeastern states found them to belong to the red wolf, the coyote, or many variations in between. The conclusion was that there had been recent massive hybridization with the coyote. In contrast, another 1960s study of Canis morphology concluded that the red wolf, eastern wolf, and domestic dog were closer to the gray wolf than the coyote, while still remaining clearly distinct from each other. The study regarded these three canids as subspecies of the gray wolf. However, the study noted that "red wolf" specimens taken from the edge of the range they shared with the coyote could not be attributed to any one species because the cranial variation was very wide. The study proposed further research to ascertain whether hybridization had occurred.
In 1971, a study of the skulls of C. rufus, C. lupus and C. latrans indicated that C. rufus was distinguishable by being intermediate in size and shape between the gray wolf and the coyote. A re-examination of museum canid skulls collected from central Texas between 1915 and 1918 showed variations spanning from C. rufus through to C. latrans. The study proposes that, owing to human modification of habitat, the red wolf had disappeared from this region by 1930 and had been replaced by a hybrid swarm. By 1969, this hybrid swarm was moving eastwards into eastern Texas and Louisiana.
In the late 19th century, sheep farmers in Kerr County, Texas, stated that the coyotes in the region were larger than normal coyotes, and they believed them to be a gray wolf and coyote cross. In 1970, the wolf mammalogist L. David Mech proposed that the red wolf was a hybrid of the gray wolf and coyote. However, a 1971 study compared the cerebellum of six Canis species and found that the red wolf's cerebellum indicated a distinct species: it was closest to that of the gray wolf, but also showed some characteristics more primitive than those found in any of the other Canis species. In 2014, a three-dimensional morphometric study of Canis species accepted only six red wolf specimens for analysis from those on offer, due to the impact of hybridization on the others.
DNA studies
Different DNA studies may give conflicting results because of the specimens selected, the technology used, and the assumptions made by the researchers.
Phylogenetic trees compiled using different genetic markers have given conflicting results on the relationship between the wolf, dog and coyote. One study based on SNPs (single-nucleotide mutations), and another based on nuclear gene sequences (taken from the cell nucleus), showed dogs clustering with coyotes and separate from wolves. Another study based on SNPs showed wolves clustering with coyotes and separate from dogs. Other studies based on a number of markers show the more widely accepted result of wolves clustering with dogs separate from coyotes. These results demonstrate that caution is needed when interpreting the results provided by genetic markers.
Genetic marker evidence
In 1980, a study used gel electrophoresis to look at fragments of DNA taken from dogs, coyotes, and wolves from the red wolf's core range. The study found that a unique allele (a variant form of a gene) associated with lactate dehydrogenase could be found in red wolves, but not in dogs or coyotes. The study suggests that this allele survives in the red wolf. The study did not compare gray wolves for the existence of this allele.
Mitochondrial DNA (mDNA) passes along the maternal line and can date back thousands of years. In 1991, a study of red wolf mDNA indicated that red wolf genotypes match those known to belong to the gray wolf or the coyote. The study concluded that the red wolf is either a wolf × coyote hybrid or a species that has hybridized with the wolf and coyote across its entire range. The study proposed that the red wolf is a southeastern subspecies of the gray wolf that has undergone hybridization due to an expanding coyote population, but that, being unique and threatened, it should remain protected. This conclusion led to debate for the remainder of the decade.
In 2000, a study looked at red wolves and eastern Canadian wolves. The study agreed that these two wolves readily hybridize with the coyote. The study used eight microsatellites (genetic markers taken from across the genome of a specimen). The phylogenetic tree produced from the genetic sequences showed red wolves and eastern Canadian wolves clustering together, and this cluster grouped more closely with the coyote than with the gray wolf. A further analysis using mDNA sequences indicated the presence of coyote ancestry in both of these wolves, and that the two wolves had diverged from the coyote 150,000–300,000 years ago. No gray wolf sequences were detected in the samples. The study proposes that these findings are inconsistent with the two wolves being subspecies of the gray wolf, that red wolves and eastern Canadian wolves evolved in North America after having diverged from the coyote, and that this is why they are more likely to hybridize with coyotes.
In 2009, a study of eastern Canadian wolves using microsatellites, mDNA, and the paternally-inherited yDNA markers found that the eastern Canadian wolf was a unique ecotype of the gray wolf that had undergone recent hybridization with other gray wolves and coyotes. It could find no evidence to support the findings of the earlier 2000 study regarding the eastern Canadian wolf. The study did not include the red wolf.
In 2011, a study compared the genetic sequences of 48,000 single nucleotide polymorphisms (mutations) taken from the genomes of canids from around the world. The comparison indicated that the red wolf was about 76% coyote and 24% gray wolf, with hybridization having occurred 287–430 years ago. The eastern wolf was 58% gray wolf and 42% coyote, with hybridization having occurred 546–963 years ago. The study rejected the theory of a common ancestry for the red and eastern wolves. However, the next year, a study reviewed a subset of the 2011 study's single-nucleotide polymorphism (SNP) data and proposed that its methodology had skewed the results and that the red and eastern wolves are not hybrids but are in fact the same species, separate from the gray wolf. The 2012 study proposed that there are three true Canis species in North America: the gray wolf, the western coyote, and the red wolf / eastern wolf. The eastern wolf was represented by the Algonquin wolf. The Great Lakes wolf was found to be a hybrid of the eastern wolf and the gray wolf. Finally, the study found the eastern coyote itself to be yet another hybrid, between the western coyote and the eastern (Algonquin) wolf (for more on eastern North American wolf-coyote hybrids, see coywolf).
Also in 2011, a scientific literature review was undertaken to help assess the taxonomy of North American wolves. One of the findings proposed was that the eastern wolf is supported as a separate species by morphological and genetic data. Genetic data supports a close relationship between the eastern and red wolves, but not close enough to support these as one species. It was "likely" that these were the separate descendants of a common ancestor shared with coyotes. This review was published in 2012. In 2014, the National Center for Ecological Analysis and Synthesis was invited by the United States Fish and Wildlife Service to provide an independent review of its proposed rule relating to gray wolves. The center's panel findings were that the proposed rule depended heavily upon a single analysis contained in a scientific literature review by Chambers et al. (2011), that that study was not universally accepted, that the issue was "not settled", and that the rule does not represent the "best available science".
Brzeski et al. (2016) conducted an mDNA analysis of three ancient (300–1,900 years old) wolf-like samples from the southeastern United States and found that they grouped with the coyote clade, although their teeth were wolf-like. The study proposed three possibilities: the specimens were coyotes, which would mean that coyotes had occupied this region continuously rather than intermittently; they were a North American-evolved red wolf lineage related to coyotes; or they were ancient coyote–wolf hybrids. Ancient hybridization between wolves and coyotes would likely have been due to natural events or early human activities, not landscape changes associated with European colonization, because of the age of these samples. Coyote–wolf hybrids may have occupied the southeastern United States for a long time, filling an important niche as a medium-large predator.
Whole-genome evidence
In July 2016, a whole-genome DNA study proposed, based on the assumptions made, that all of the North American wolves and coyotes diverged from a common ancestor 6,000–117,000 years ago. The study also indicated that all North American wolves have a significant amount of coyote ancestry and all coyotes some degree of wolf ancestry, and that the red wolf and Great Lakes region wolf are highly admixed with different proportions of gray wolf and coyote ancestry. One test indicated a wolf/coyote divergence time of 51,000 years before present, matching other studies indicating that the extant wolf came into being around this time. Another test indicated that the red wolf diverged from the coyote between 55,000 and 117,000 years before present and the Great Lakes region wolf 32,000 years before present. Other tests and modelling showed various divergence ranges, and the overall conclusion was a range of between 6,000 and 117,000 years before present. The study found that coyote ancestry was highest in red wolves from the southeast of the United States and lowest among the Great Lakes region wolves.
The theory proposed was that this pattern matched the south-to-north disappearance of the wolf due to European colonization and its resulting loss of habitat. Bounties led to the extirpation of wolves initially in the southeast, and as the wolf population declined wolf-coyote admixture increased. Later, this process occurred in the Great Lakes region with the influx of coyotes replacing wolves, followed by the expansion of coyotes and their hybrids across the wider region. The red wolf may possess some genomic elements that were unique to gray wolf and coyote lineages from the American South. The proposed timing of the wolf/coyote divergence conflicts with the finding of a coyote-like specimen in strata dated to 1 million years before present, and red wolf fossil specimens dating back 10,000 years ago. The study concluded by stating that because of the extirpation of gray wolves in the American Southeast, "the reintroduced population of red wolves in eastern North Carolina is doomed to genetic swamping by coyotes without the extensive management of hybrids, as is currently practiced by the USFWS."
In September 2016, the USFWS announced a program of changes to the red wolf recovery program and "will begin implementing a series of actions based on the best and latest scientific information". The service will secure the captive population which is regarded as not sustainable, determine new sites for additional experimental wild populations, revise the application of the existing experimental population rule in North Carolina, and complete a comprehensive Species Status Assessment.
In 2017, a group of canid researchers challenged the recent finding that the red wolf and the eastern wolf were the result of recent coyote-wolf hybridization. The group highlighted that no testing had been undertaken to ascertain the time period in which hybridization had occurred and that, by the previous study's own figures, the hybridization could not have occurred recently and instead supports a much more ancient hybridization. The group also found deficiencies in the previous study's selection of specimens and in the findings drawn from the different techniques used. The group therefore argued that both the red wolf and the eastern wolf remain genetically distinct North American taxa. This was rebutted by the authors of the earlier study. Another study in late 2018 of wild canids in southwestern Louisiana also supported the red wolf as a separate species, citing distinct red wolf DNA within hybrid canids.
In 2019, a literature review of the previous studies was undertaken by the National Academies of Sciences, Engineering, and Medicine. The position of the National Academies is that the historical red wolf forms a valid taxonomic species, the modern red wolf is distinct from wolves and coyotes, and modern red wolves trace some of their ancestry to historic red wolves. The species Canis rufus is supported for the modern red wolf, unless genomic evidence from historical red wolf specimens changes this assessment, due to a lack of continuity between the historic and the modern red wolves.
Wolf genome
Genetic studies relating to wolves or dogs have inferred phylogenetic relationships based on the only reference genome available, that of the Boxer dog. In 2017, the first reference genome of the wolf Canis lupus lupus was mapped to aid future research. In 2018, a study looked at the genomic structure and admixture of North American wolves, wolf-like canids, and coyotes using specimens from across their entire range that mapped the largest dataset of nuclear genome sequences against the wolf reference genome. The study supports the findings of previous studies that North American gray wolves and wolf-like canids were the result of complex gray wolf and coyote mixing. A polar wolf from Greenland and a coyote from Mexico represented the purest specimens. The coyotes from Alaska, California, Alabama, and Quebec show almost no wolf ancestry. Coyotes from Missouri, Illinois, and Florida exhibit 5–10% wolf ancestry. There was 40%:60% wolf to coyote ancestry in red wolves, 60%:40% in Eastern timber wolves, and 75%:25% in the Great Lakes wolves. There was 10% coyote ancestry in Mexican wolves and Atlantic Coast wolves, 5% in Pacific Coast and Yellowstone wolves, and less than 3% in Canadian archipelago wolves.
The study shows that the genomic ancestry of red, eastern timber and Great Lakes wolves were the result of admixture between modern gray wolves and modern coyotes. This was then followed by development into local populations. Individuals within each group showed consistent levels of coyote to wolf inheritance, indicating that this was the result of relatively ancient admixture. The eastern timber wolf (Algonquin Provincial Park) is genetically closely related to the Great Lakes wolf (Minnesota, Isle Royale National Park). If a third canid had been involved in the admixture of the North American wolf-like canids, then its genetic signature would have been found in coyotes and wolves, which it has not.
Gray wolves suffered a species-wide population bottleneck (reduction) approximately 25,000 YBP during the Last Glacial Maximum. This was followed by a single population of modern wolves expanding out of a Beringia refuge to repopulate the wolf's former range, replacing the remaining Late Pleistocene wolf populations across Eurasia and North America as they did so. This implies that if the coyote and red wolf were derived from this invasion, their histories date only tens of thousands and not hundreds of thousands of years ago, which is consistent with other studies.
The Endangered Species Act provides protection to endangered species, but does not provide protection for endangered admixed individuals, even if these serve as reservoirs for extinct genetic variation. Researchers on both sides of the red wolf debate argue that admixed canids warrant full protection under this Act.
Separate species that can be strengthened from hybrids
In 2020, a study conducted DNA sequencing of canines across the southeastern US to detect those with any red wolf ancestry. The study found that red wolf ancestry exists in the coyote populations of southwestern Louisiana and southeastern Texas, and was also newly detected in North Carolina. These populations possess unique red wolf alleles not found in the current captive red wolf population. The study proposes that the expanding coyotes admixed with red wolves to gain genetic material suited to the southeastern environment that would aid their adaptation to it, and that surviving red wolves admixed with coyotes because the red wolves were suffering from inbreeding.
In 2021, a study conducted DNA sequencing of canines across the remnant red wolf hybrid zone of southwestern Louisiana and southeastern Texas. The study found red wolf ancestry in the coyote genomes that increases to as much as 60% along a westward gradient. This was due to introgression from the remnant red wolf population over the past 100 years. The study proposes that coyotes expanded into the gulf region and admixed with red wolves before the red wolf went extinct in the wild due to loss of habitat and persecution. In the past two decades the hybrid region has expanded. The study presented genetic evidence that the red wolf is a separate species, based on the structure of one of the loci of its X chromosome, which is accepted as a marker for distinct species. As such, the study suggested that the introgressed red wolf ancestry could be de-introgressed and used as a basis for breeding further red wolves from the hybrids.
Pre-dates the coyote in North America
In 2021, a study of mitochondrial genomes sourced from specimens dated before the 20th century revealed that red wolves could be found across North America. With the arrival of the gray wolf between 80,000 and 60,000 years ago, the red wolf's range shrank to the eastern forests and California, and the coyote replaced the red wolf mid-continent between 60,000 and 30,000 years ago. The coyote expanded into California at the beginning of the Holocene era 12,000–10,000 years ago and admixed with the red wolf, phenotypically replacing them. The study proposes that the red wolf may pre-date the coyote in North America.
| Biology and health sciences | Canines | Animals |
26118 | https://en.wikipedia.org/wiki/Roof | Roof | A roof (plural: roofs or rooves) is the top covering of a building, including all materials and constructions necessary to support it on the walls of the building or on uprights, providing protection against rain, snow, sunlight, extremes of temperature, and wind. A roof is part of the building envelope.
The characteristics of a roof are dependent upon the purpose of the building that it covers, the available roofing materials and the local traditions of construction and wider concepts of architectural design and practice, and may also be governed by local or national legislation. In most countries, a roof protects primarily against rain. A verandah may be roofed with material that protects against sunlight but admits the other elements. The roof of a garden conservatory protects plants from cold, wind, and rain, but admits light.
A roof may also provide additional living space, for example, a roof garden.
Etymology
The word descends from an Old English term meaning 'roof, ceiling, top, summit; heaven, sky', also used figuratively for the 'highest point of something', from a Proto-Germanic root (compare the Dutch, Middle High German, and Old Norse cognates meaning 'deckhouse, cabin, coffin-lid', 'penthouse', and 'boat shed' respectively). There are no apparent connections outside the Germanic family. "English alone has retained the word in a general sense, for which the other languages use forms corresponding to OE. thatch".
Design elements
The elements in the design of a roof are:
the material
the construction
the durability
The material of a roof may range from banana leaves, wheaten straw or seagrass to laminated glass, copper (see: copper roofing), aluminium sheeting and pre-cast concrete. In many parts of the world ceramic roof tiles have been the predominant roofing material for centuries, if not millennia. Other roofing materials include asphalt, coal tar pitch, EPDM rubber, Hypalon, polyurethane foam, PVC, slate, Teflon fabric, TPO, and wood shakes and shingles.
The construction of a roof is determined by its method of support, how the space beneath it is bridged, and whether or not the roof is pitched. The pitch is the angle at which the roof rises from its lowest to its highest point. Most US domestic architecture, except in very dry regions, has roofs that are sloped, or pitched. Although modern construction elements such as drainpipes may remove the need for pitch, roofs are pitched for reasons of tradition and aesthetics. The pitch is therefore partly a matter of style and partly of practicality.
Some types of roofing, for example thatch, require a steep pitch in order to be waterproof and durable. Other types of roofing, for example pantiles, are unstable on a steeply pitched roof but provide excellent weather protection at a relatively low angle. In regions where there is little rain, an almost flat roof with a slight run-off provides adequate protection against an occasional downpour. Drainpipes also remove the need for a sloping roof.
A person that specializes in roof construction is called a roofer.
The durability of a roof is a matter of concern because the roof is often the least accessible part of a building for purposes of repair and renewal, while its damage or destruction can have serious effects.
Form
The shape of roofs differs greatly from region to region. The main factors which influence the shape of roofs are the climate and the materials available for roof structure and the outer covering.
The basic shapes of roofs are flat, mono-pitched, gabled, mansard, hipped, butterfly, arched and domed. There are many variations on these types. Roofs constructed of flat sections that are sloped are referred to as pitched roofs (generally if the angle exceeds 10 degrees). Pitched roofs, including gabled, hipped and skillion roofs, make up the greatest number of domestic roofs. Some roofs follow organic shapes, either by architectural design or because a flexible material such as thatch has been used in the construction.
Parts
There are two parts to a roof: its supporting structure and its outer skin, or uppermost weatherproof layer. In a minority of buildings, the outer layer is also a self-supporting structure.
The roof structure is generally supported upon walls, although some building styles, for example, geodesic and A-frame, blur the distinction between wall and roof.
Support
The supporting structure of a roof usually comprises beams that are long and of strong, fairly rigid material such as timber, and since the mid-19th century, cast iron or steel. In countries that use bamboo extensively, the flexibility of the material causes a distinctive curving line to the roof, characteristic of Oriental architecture.
Timber lends itself to a great variety of roof shapes. The timber structure can fulfil an aesthetic as well as practical function, when left exposed to view.
Stone lintels have been used to support roofs since prehistoric times, but cannot bridge large distances. The stone arch came into extensive use in the ancient Roman period and in variant forms could be used to span wide spaces. The stone arch or vault, with or without ribs, dominated the roof structures of major architectural works for about 2,000 years, only giving way to iron beams with the Industrial Revolution and the designing of such buildings as Paxton's Crystal Palace, completed 1851.
With continual improvements in steel girders, these became the major structural support for large roofs, and eventually for ordinary houses as well. Another form of girder is the reinforced concrete beam, in which metal rods are encased in concrete, giving it greater strength under tension.
Roof supports can also accommodate living space, as can be seen in roof decking. Roof decking refers to spaces within the roof structure that are converted into rooms.
Outer layer
This part of the roof shows great variation dependent upon availability of material. In vernacular architecture, roofing material is often vegetation, such as thatches, the most durable being sea grass with a life of perhaps 40 years. In many Asian countries bamboo is used both for the supporting structure and the outer layer where split bamboo stems are laid turned alternately and overlapped. In areas with an abundance of timber, wooden shingles, shakes and boards are used, while in some countries the bark of certain trees can be peeled off in thick, heavy sheets and used for roofing.
The 20th century saw the manufacture of composition asphalt shingles, which range from thin 20-year shingles to the thickest, limited-lifetime shingles, the cost depending on the thickness and durability of the shingle. When a layer of shingles wears out, they are usually stripped, along with the underlay and roofing nails, allowing a new layer to be installed. An alternative method is to install another layer directly over the worn layer. While this method is faster, it does not allow the roof sheathing to be inspected and water damage, often associated with worn shingles, to be repaired. Having multiple layers of old shingles under a new layer causes roofing nails to be located further from the sheathing, weakening their hold. The greatest concern with this method is that the weight of the extra material could exceed the dead load capacity of the roof structure and cause collapse. Because of this, jurisdictions which use the International Building Code prohibit the installation of new roofing on top of an existing roof that has two or more applications of any type of roof covering; the existing roofing material must be removed before installing a new roof.
Slate is an ideal and durable material, while in the Swiss Alps roofs are made from huge slabs of stone, several inches thick. The slate roof is often considered the best type of roofing. A slate roof may last 75 to 150 years, or even longer. However, slate roofs are often expensive to install – in the US, for example, a slate roof may have the same cost as the rest of the house. Often, the first part of a slate roof to fail is the fixing nails; they corrode, allowing the slates to slip. In the UK, this condition is known as "nail sickness". Because of this problem, fixing nails made of stainless steel or copper are recommended, and even these must be protected from the weather.
Asbestos, usually in bonded corrugated panels, has been used widely in the 20th century as an inexpensive, non-flammable roofing material with excellent insulating properties. Health and legal issues involved in the mining and handling of asbestos products means that it is no longer used as a new roofing material. However, many asbestos roofs continue to exist, particularly in South America and Asia.
Roofs made of cut turf (modern ones known as green roofs, traditional ones as sod roofs) have good insulating properties and are increasingly encouraged as a way of "greening" the Earth. The soil and vegetation function as living insulation, moderating building temperatures. Adobe roofs are roofs of clay, mixed with binding material such as straw or animal hair, and plastered on lathes to form a flat or gently sloped roof, usually in areas of low rainfall.
In areas where clay is plentiful, roofs of baked tiles have been the major form of roofing. The casting and firing of roof tiles is an industry that is often associated with brickworks. While the shape and colour of tiles was once regionally distinctive, now tiles of many shapes and colours are produced commercially, to suit the taste and pocketbook of the purchaser. Concrete roof tiles are also a common choice, being available in many different styles and shapes.
Sheet metal in the form of copper and lead has also been used for many hundreds of years. Both are expensive but durable; the vast copper roof of Chartres Cathedral, oxidised to a pale green colour, has been in place for hundreds of years. Lead, which is sometimes used for church roofs, was most commonly used as flashing in valleys and around chimneys on domestic roofs, particularly those of slate. Copper was used for the same purpose.
In the 19th century, iron, electroplated with zinc to improve its resistance to rust, became a light-weight, easily transported, waterproofing material. Its low cost and easy application made it the most accessible commercial roofing, worldwide. Since then, many types of metal roofing have been developed. Steel shingle or standing-seam roofs last about 50 years or more depending on both the method of installation and the moisture barrier (underlayment) used and are between the cost of shingle roofs and slate roofs.
In the 20th century, a large number of roofing materials were developed, including roofs based on bitumen (already used in previous centuries), on rubber and on a range of synthetics such as thermoplastic and on fibreglass.
Functions
A roof assembly has more than one function. It may provide any or all of the following functions:
1. To shed water i.e., prevent water from standing on the roof surface. Water standing on the roof surface increases the live load on the roof structure, which is a safety issue. Standing water also contributes to premature deterioration of most roofing materials. Some roofing manufacturers' warranties are rendered void due to standing water.
2. To protect the building interior from the effects of weather elements such as rain, wind, sun, heat and snow.
3. To provide thermal insulation. Most modern commercial/industrial roof assemblies incorporate insulation boards or batt insulation. In most cases, the International Building Code and International Residential Code establish the minimum R-value required within the roof assembly (a brief worked illustration of R-values follows this list).
4. To perform for the expected service life. All standard roofing materials have established histories of their respective longevity, based on anecdotal evidence. Most roof materials will last long after the manufacturer's warranty has expired, given adequate ongoing maintenance, and absent storm damage. Metal and tile roofs may last fifty years or more. Asphalt shingles may last 30–50 years. Coal tar built-up roofs may last forty or more years. Single-ply roofs may last twenty or more years.
5. Provide a desired, unblemished appearance. Some roofs are selected not only for the above functions, but also for aesthetics, similar to wall cladding. Premium prices are often paid for certain systems because of their attractive appearance and "curb appeal."
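As a rough illustration of the R-value mentioned in item 3 (the layer values below are hypothetical examples, not code requirements), the thermal resistances of layers stacked in series simply add, and the assembly's overall heat-transfer coefficient is their reciprocal:

\[
R_{\text{total}} = R_{\text{deck}} + R_{\text{insulation}} + R_{\text{membrane}}, \qquad U = \frac{1}{R_{\text{total}}}
\]

For instance, a deck at R-1, rigid insulation boards at R-20, and a membrane at R-0.5 would give \(R_{\text{total}} = 21.5\) (in units of ft²·°F·h/BTU), i.e. \(U \approx 0.047\).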
Insulation
Because the purpose of a roof is to secure people and their possessions from climatic elements, the insulating properties of a roof are a consideration in its structure and the choice of roofing material.
Some roofing materials, particularly those of natural fibrous material, such as thatch, have excellent insulating properties. For those that do not, extra insulation is often installed under the outer layer. In developed countries, the majority of dwellings have a ceiling installed under the structural members of the roof. The purpose of a ceiling is to insulate against heat and cold, noise, dirt and often from the droppings and lice of birds that frequently choose roofs as nesting places.
Concrete tiles can be used as insulation. When installed with a space left between the tiles and the roof surface, they can reduce heating caused by the sun.
Forms of insulation are felt or plastic sheeting, sometimes with a reflective surface, installed directly below the tiles or other material; synthetic foam batting laid above the ceiling and recycled paper products and other such materials that can be inserted or sprayed into roof cavities. Cool roofs are becoming increasingly popular, and in some cases are mandated by local codes. Cool roofs are defined as roofs with both high reflectivity and high thermal emittance.
Poorly insulated and ventilated roofing can suffer from problems such as the formation of ice dams around the overhanging eaves in cold weather, causing water from melted snow on upper parts of the roof to penetrate the roofing material. Ice dams occur when heat escapes through the uppermost part of the roof, and the snow at those points melts, refreezing as it drips along the shingles, and collecting in the form of ice at the lower points. This can result in structural damage from stress, including the destruction of gutter and drainage systems.
Drainage
The primary job of most roofs is to keep out water. The large area of a roof repels a lot of water, which must be directed in some suitable way, so that it does not cause damage or inconvenience.
Flat roofs of adobe dwellings generally have a very slight slope. In Middle Eastern countries, where the roof may be used for recreation, it is often walled, and drainage holes must be provided to stop water from pooling and seeping through the porous roofing material.
While flat roofs are more prone to drainage issues, poorly designed or textured sloping roofs can face similar problems. Standing water on a roof can lead to mold growth, which is highly damaging to both the building’s structure and the health of its occupants. Repairing drainage issues is significantly less costly than fixing the damage caused by mold.
Similar problems, although on a very much larger scale, confront the builders of modern commercial properties which often have flat roofs. Because of the very large nature of such roofs, it is essential that the outer skin be of a highly impermeable material. Most industrial and commercial structures have conventional roofs of low pitch.
In general, the pitch of the roof is proportional to the amount of precipitation. Houses in areas of low rainfall frequently have roofs of low pitch, while those in areas of high rainfall and snow have steep roofs. The longhouses of Papua New Guinea, for example, are roof-dominated architecture, with high roofs sweeping almost to the ground. The high steeply-pitched roofs of Germany and Holland are typical in regions of snowfall. In parts of North America such as Buffalo, New York, United States, or Montreal, Quebec, Canada, there is a required minimum slope of 6 in 12 (1:2, an angle of roughly 27°).
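The angle quoted above follows from the rise-to-run ratio by basic trigonometry (this assumes the common rise-over-run convention for describing roof slope):

\[
\theta = \arctan\!\left(\frac{\text{rise}}{\text{run}}\right) = \arctan\!\left(\frac{6}{12}\right) \approx 26.6^{\circ}
\]

By the same reasoning, the 3-in-12 slope mentioned below corresponds to roughly 14°.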
There are regional building styles which contradict this trend, the stone roofs of the Alpine chalets being usually of gentler incline. These buildings tend to accumulate a large amount of snow on them, which is seen as a factor in their insulation. The pitch of the roof is in part determined by the roofing material available, a pitch of 3 in 12 (1:4) or greater slope generally being covered with asphalt shingles, wood shake, corrugated steel, slate or tile.
The water repelled by the roof during a rainstorm is potentially damaging to the building that the roof protects. If it runs down the walls, it may seep into the mortar or through panels. If it lies around the foundations it may cause seepage to the interior, rising damp or dry rot. For this reason most buildings have a system in place to protect the walls of a building from most of the roof water. Overhanging eaves are commonly employed for this purpose. Most modern roofs and many old ones have systems of valleys, gutters, waterspouts, waterheads and drainpipes to remove the water from the vicinity of the building. In many parts of the world, roofwater is collected and stored for domestic use.
Areas prone to heavy snow benefit from a metal roof because their smooth surfaces shed the weight of snow more easily and resist the force of wind better than a wood shingle or a concrete tile roof.
Solar roofs
Newer systems include solar shingles which generate electricity as well as cover the roof. There are also solar systems available that generate hot water or hot air and which can also act as a roof covering. More complex systems may carry out all of these functions: generate electricity, recover thermal energy, and also act as a roof covering.
Solar systems can be integrated with roofs by:
integration in the covering of pitched roofs, e.g. solar shingles,
mounting on an existing roof, e.g. solar panel on a tile roof,
integration in a flat roof membrane using heat welding (e.g. PVC) or
mounting on a flat roof with a construction and additional weight to prevent uplift from wind.
| Technology | Architectural elements | null |
26123 | https://en.wikipedia.org/wiki/Real-time%20operating%20system | Real-time operating system | A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in multitasking or multiprogramming environments. All operations must verifiably complete within given time and resource constraints or else fail safe. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks, and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts.
Characteristics
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is "jitter". A "hard" real-time operating system (hard RTOS) has less jitter than a "soft" real-time operating system (soft RTOS); a late answer is a wrong answer in a hard RTOS while a late answer is acceptable in a soft RTOS. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS.
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.
Design philosophies
An RTOS is an operating system in which the time taken to process an input stimulus is less than the time lapsed until the next input stimulus of the same type.
The most common designs are:
Event-driven – switches tasks only when an event of higher priority needs servicing; called preemptive priority, or priority scheduling.
Time-sharing – switches tasks on a regular clocked interrupt, and on events; called round-robin.
Time sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of a machine.
Early CPU designs needed many cycles to switch tasks during which the CPU could do nothing else useful. Because switching took so long, early OSes tried to minimize wasting CPU time by avoiding unnecessary task switching.
Scheduling
In typical designs, a task has three states:
Running (executing on the CPU);
Ready (ready to be executed);
Blocked (waiting for an event, I/O for example).
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU core. The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready to be executed state (resource starvation).
Usually, the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited, and, in some cases, all interrupts are disabled, but the choice of data structure depends also on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority, so that finding the highest priority task to run does not require traversing the list. Instead, inserting a task requires walking the list.
During this search, preemption should not be inhibited. Long critical sections should be divided into smaller pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority task, that high priority task can be inserted and run immediately before the low priority task is inserted.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.
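To make the ready-list discussion concrete, here is a minimal Python sketch (not taken from any particular RTOS) of a priority-sorted ready list in which dispatching the highest-priority task is constant-time while insertion walks the list; the task names and the lower-number-means-higher-priority convention are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    name: str
    priority: int                         # lower number = higher priority (a common RTOS convention)
    next: Optional["Task"] = field(default=None, repr=False)

class ReadyList:
    """Ready list kept sorted by priority: dispatch is O(1), insertion walks the list."""
    def __init__(self):
        self.head: Optional[Task] = None

    def insert(self, task: Task) -> None:
        # Walk the list to find the insertion point (the part that should stay preemptible,
        # per the discussion above).
        prev, cur = None, self.head
        while cur is not None and cur.priority <= task.priority:
            prev, cur = cur, cur.next
        task.next = cur
        if prev is None:
            self.head = task
        else:
            prev.next = task

    def pop_highest(self) -> Optional[Task]:
        # The highest-priority ready task is always at the head.
        task = self.head
        if task is not None:
            self.head, task.next = task.next, None
        return task

ready = ReadyList()
for t in [Task("logger", 5), Task("motor_ctrl", 1), Task("ui", 3)]:
    ready.insert(t)
print(ready.pop_highest().name)           # motor_ctrl
```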
In advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
Algorithms
Some commonly used RTOS scheduling algorithms are:
Cooperative scheduling
Preemptive scheduling
Rate-monotonic scheduling (a minimal priority-assignment sketch follows this list)
Round-robin scheduling
Fixed-priority pre-emptive scheduling, an implementation of preemptive time slicing
Fixed-Priority Scheduling with Deferred Preemption
Fixed-Priority Non-preemptive Scheduling
Critical section preemptive scheduling
Static-time scheduling
Earliest Deadline First approach
Stochastic digraphs with multi-threaded graph traversal
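As a small illustration of one entry in the list above, the following Python sketch applies rate-monotonic priority assignment (shorter period gives higher priority) and the Liu–Layland sufficient utilization bound to a hypothetical task set; the task names, execution times, and periods are made-up values.

```python
# Hypothetical periodic task set: (name, worst-case execution time, period), all in the same time unit.
tasks = [("sensor", 1.0, 4.0), ("control", 1.5, 6.0), ("telemetry", 2.0, 12.0)]

# Rate-monotonic priority assignment: the shorter the period, the higher the priority.
by_priority = sorted(tasks, key=lambda t: t[2])

# Liu & Layland sufficient (not necessary) schedulability test for fixed-priority RM scheduling.
utilization = sum(c / p for _, c, p in tasks)
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)

print("priority order:", [name for name, _, _ in by_priority])
print(f"utilization = {utilization:.3f}, bound = {bound:.3f}")
print("passes the sufficient test:", utilization <= bound)
```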
Intertask communication and resource sharing
A multitasking operating system like Unix is poor at real-time tasks. The scheduler gives the highest priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical job will have access to enough resources. Multitasking systems must manage sharing data and hardware resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or hardware resource simultaneously. There are three common approaches to resolve this problem:
Temporarily masking/disabling interrupts
General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because the user program could then control the CPU for as long as it wished. Some modern CPUs do not allow user mode code to disable interrupts, as such control is considered a key operating system resource. Many embedded systems and RTOSs, however, allow the application itself to run in kernel mode for greater system call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention.
On single-processor systems, an application running in kernel mode and masking interrupts is the lowest overhead method to prevent simultaneous access to a shared resource. While interrupts are masked and the current task does not make a blocking OS call, the current task has exclusive use of the CPU since no other task or interrupt can take control, so the critical section is protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute. Temporarily masking interrupts should only be done when the longest path through the critical section is shorter than the desired maximum interrupt latency. Typically this method of protection is used only when the critical section is just a few instructions and contains no loops. This method is ideal for protecting hardware bit-mapped registers when the bits are controlled by different tasks.
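A minimal sketch of this pattern, assuming a MicroPython-style machine.disable_irq()/machine.enable_irq() API (other systems expose different calls); the shared flag byte and bit layout are illustrative assumptions.

```python
# Interrupt masking around a very short critical section, assuming a MicroPython-style API;
# the exact calls differ between RTOSes and are shown here only for illustration.
import machine

shared_flags = bytearray(1)   # e.g. a bit-mapped register image shared with an interrupt handler

def set_flag(bit: int) -> None:
    state = machine.disable_irq()      # mask interrupts: begin critical section
    try:
        shared_flags[0] |= (1 << bit)  # a few instructions only: no loops, no blocking calls
    finally:
        machine.enable_irq(state)      # unmask; pending interrupts now run
```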
Mutexes
When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it is better to use mechanisms also available on general-purpose operating systems, such as a mutex and OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors.
A (non-recursive) mutex is either locked or unlocked. When a task has locked the mutex, all other tasks must wait for the mutex to be unlocked by its owner, the original thread. A task may set a timeout on its wait for a mutex. There are several well-known problems with mutex-based designs, such as priority inversion and deadlocks.
In priority inversion a high priority task waits because a low priority task has a mutex, but the lower priority task is not given CPU time to finish its work. A typical solution is to have the task that owns a mutex 'inherit' the priority of the highest waiting task. But this simple approach gets more complex when there are multiple levels of waiting: task A waits for a mutex locked by task B, which waits for a mutex locked by task C. Handling multiple levels of inheritance causes other code to run in high priority context and thus can cause starvation of medium-priority threads.
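A toy Python model (not a real RTOS API) of priority inheritance: when a higher-priority task blocks on the mutex, the owner temporarily runs at the waiter's priority, so a medium-priority task cannot starve it. The task names and the higher-number-means-higher-priority convention are assumptions made for this example.

```python
class Task:
    def __init__(self, name: str, priority: int):
        self.name = name
        self.base_priority = priority
        self.priority = priority              # effective priority; may be boosted temporarily

class PIMutex:
    """Toy priority-inheritance mutex: the owner inherits the highest waiter's priority."""
    def __init__(self):
        self.owner = None
        self.waiters = []

    def lock(self, task: Task) -> bool:
        if self.owner is None:
            self.owner = task
            return True                       # acquired immediately
        self.waiters.append(task)
        # Boost the owner so a medium-priority task cannot keep it off the CPU.
        self.owner.priority = max(self.owner.priority,
                                  max(w.priority for w in self.waiters))
        return False                          # caller would block here in a real RTOS

    def unlock(self, task: Task) -> None:
        assert task is self.owner
        task.priority = task.base_priority    # drop the inherited boost
        if self.waiters:
            nxt = max(self.waiters, key=lambda w: w.priority)
            self.waiters.remove(nxt)
            self.owner = nxt
        else:
            self.owner = None

low, high = Task("low", 1), Task("high", 10)
m = PIMutex()
m.lock(low)             # low-priority task takes the mutex
m.lock(high)            # high-priority task blocks; low now runs boosted
print(low.priority)     # 10
```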
In a deadlock, two or more tasks lock mutexes without timeouts and then wait forever for the other task's mutex, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two mutexes, but in the opposite order. Deadlock is prevented by careful design.
Message passing
The other approach to resource sharing is for tasks to send messages in an organized message passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than semaphore systems, simple message-based systems avoid most protocol deadlock hazards, and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
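A minimal Python sketch of the message-passing approach using the standard queue and threading modules: one manager task owns the resource, and client tasks interrogate or manipulate it only by sending messages. The "increment" operation and the counter resource are illustrative assumptions.

```python
import queue
import threading

requests = queue.Queue()            # the managing task's incoming message queue

def resource_manager():
    """Only this task touches the shared resource; all other tasks send it messages."""
    resource = {"counter": 0}
    while True:
        msg = requests.get()        # block until a request arrives
        if msg is None:             # shutdown sentinel
            break
        op, reply_q = msg
        if op == "increment":
            resource["counter"] += 1
            reply_q.put(resource["counter"])

manager = threading.Thread(target=resource_manager, daemon=True)
manager.start()

reply = queue.Queue()
requests.put(("increment", reply))  # a client task manipulates the resource via a message
print(reply.get())                  # 1
requests.put(None)                  # stop the manager
```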
Interrupt handlers and the scheduler
Since an interrupt handler blocks the highest priority task from running, and since real-time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context.
An OS maintains catalogues of objects it manages such as threads, mutexes, memory, and so on. Updates to this catalogue must be strictly controlled. For this reason, it can be problematic when an interrupt handler calls an OS function while the application is in the act of also doing so. The OS function called from an interrupt handler could find the object database to be in an inconsistent state because of the application's update. There are two major approaches to deal with this problem: the unified architecture and the segmented architecture. RTOSs implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is that interrupt latency increases, potentially losing interrupts. The segmented architecture does not make direct OS calls but delegates the OS related work to a separate handler. This handler runs at a higher priority than any thread but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes which implement the segmented architecture are more predictable and can deal with higher interrupt rates compared to the unified architecture.
Similarly, the System Management Mode on x86 compatible hardware can take a lot of time before it returns control to the operating system.
Memory allocation
Memory allocation is more critical in a real-time operating system than in other operating systems.
First, for stability there cannot be memory leaks (memory that is allocated but not freed after use). The device should work indefinitely, without ever needing a reboot. For this reason, dynamic memory allocation is frowned upon. Whenever possible, all required memory allocation is specified statically at compile time.
Another reason to avoid dynamic memory allocation is memory fragmentation. With frequent allocation and releasing of small chunks of memory, a situation may occur where available memory is divided into several sections and the RTOS cannot allocate a large enough contiguous block of memory, although there is enough free memory. Secondly, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block, which is unacceptable in an RTOS since memory allocation has to occur within a certain amount of time.
Because mechanical disks have much longer and more unpredictable response times, swapping to disk files is not used, for the same reasons that dynamic RAM allocation is avoided, as discussed above.
The simple fixed-size-blocks algorithm works quite well for simple embedded systems because of its low overhead.
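A minimal Python sketch of a fixed-size-blocks pool: all memory is reserved up front, allocation and freeing are constant-time index operations, and fragmentation cannot occur. The block size and count are arbitrary example values.

```python
class FixedBlockPool:
    """Fixed-size-blocks allocator: O(1) alloc/free and no fragmentation,
    at the cost of reserving all memory up front."""
    def __init__(self, block_size: int, block_count: int):
        self.block_size = block_size
        self.pool = bytearray(block_size * block_count)   # one statically sized arena
        self.free_blocks = list(range(block_count))       # indices of free blocks

    def alloc(self) -> int | None:
        # Constant time: pop a free index; there is no list of variable-sized holes to scan.
        return self.free_blocks.pop() if self.free_blocks else None

    def view(self, index: int) -> memoryview:
        start = index * self.block_size
        return memoryview(self.pool)[start:start + self.block_size]

    def free(self, index: int) -> None:
        self.free_blocks.append(index)

pool = FixedBlockPool(block_size=64, block_count=8)
i = pool.alloc()
pool.view(i)[0] = 0xFF
pool.free(i)
```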
| Technology | Operating systems | null |
26132 | https://en.wikipedia.org/wiki/Rankine%20scale | Rankine scale | The Rankine scale ( ) is an absolute scale of thermodynamic temperature named after the University of Glasgow engineer and physicist Macquorn Rankine, who proposed it in 1859.
History
Similar to the Kelvin scale, which was first proposed in 1848, zero on the Rankine scale is absolute zero, but a temperature difference of one Rankine degree (°R or °Ra) is defined as equal to one Fahrenheit degree, rather than the Celsius degree used on the Kelvin scale. In converting from kelvin to degrees Rankine, 1 K = 9/5 °R or 1 K = 1.8 °R. A temperature of 0 K (−273.15 °C; −459.67 °F) is equal to 0 °R.
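The relations above can be expressed as small conversion helpers; a minimal Python sketch (the function names are arbitrary):

```python
def kelvin_to_rankine(k: float) -> float:
    return k * 9.0 / 5.0          # one kelvin equals 1.8 Rankine degrees

def fahrenheit_to_rankine(f: float) -> float:
    return f + 459.67             # both scales use the same size of degree

print(f"{kelvin_to_rankine(273.15):.2f} °R")      # 491.67 °R, the freezing point of water
print(f"{fahrenheit_to_rankine(-459.67):.2f} °R") # 0.00 °R, absolute zero
```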
Usage
The Rankine scale is used in engineering systems where heat computations are done using degrees Fahrenheit.
The symbol for degrees Rankine is °R (or °Ra if necessary to distinguish it from the Rømer and Réaumur scales). By analogy with the SI unit kelvin, some authors term the unit Rankine, omitting the degree symbol.
Some temperatures relating the Rankine scale to other temperature scales are shown in the table below.
| Physical sciences | Temperature | Basics and measurement |
26176 | https://en.wikipedia.org/wiki/Rayleigh%20scattering | Rayleigh scattering | Rayleigh scattering ( ) is the scattering or deflection of light, or other electromagnetic radiation, by particles with a size much smaller than the wavelength of the radiation. For light frequencies well below the resonance frequency of the scattering medium (normal dispersion regime), the amount of scattering is inversely proportional to the fourth power of the wavelength (e.g., a blue color is scattered much more than a red color as light propagates through air). The phenomenon is named after the 19th-century British physicist Lord Rayleigh (John William Strutt).
Rayleigh scattering results from the electric polarizability of the particles. The oscillating electric field of a light wave acts on the charges within a particle, causing them to move at the same frequency. The particle, therefore, becomes a small radiating dipole whose radiation we see as scattered light. The particles may be individual atoms or molecules; it can occur when light travels through transparent solids and liquids, but is most prominently seen in gases.
Rayleigh scattering of sunlight in Earth's atmosphere causes diffuse sky radiation, which is the reason for the blue color of the daytime and twilight sky, as well as the yellowish to reddish hue of the low Sun. Sunlight is also subject to Raman scattering, which changes the rotational state of the molecules and gives rise to polarization effects.
Scattering by particles with a size comparable to, or larger than, the wavelength of the light is typically treated by the Mie theory, the discrete dipole approximation and other computational techniques. Rayleigh scattering applies to particles that are small with respect to wavelengths of light, and that are optically "soft" (i.e., with a refractive index close to 1). Anomalous diffraction theory applies to optically soft but larger particles.
History
In 1869, while attempting to determine whether any contaminants remained in the purified air he used for infrared experiments, John Tyndall discovered that bright light scattering off nanoscopic particulates was faintly blue-tinted. He conjectured that a similar scattering of sunlight gave the sky its blue hue, but he could not explain the preference for blue light, nor could atmospheric dust explain the intensity of the sky's color.
In 1871, Lord Rayleigh published two papers on the color and polarization of skylight to quantify Tyndall's effect in water droplets in terms of the tiny particulates' volumes and refractive indices. In 1881, with the benefit of James Clerk Maxwell's 1865 proof of the electromagnetic nature of light, he showed that his equations followed from electromagnetism. In 1899, he showed that they applied to individual molecules, with terms containing particulate volumes and refractive indices replaced with terms for molecular polarizability.
Small size parameter approximation
The size of a scattering particle is often parameterized by the ratio
x = 2πr / λ,
where r is the particle's radius, λ is the wavelength of the light and x is a dimensionless parameter that characterizes the particle's interaction with the incident radiation such that: Objects with x ≫ 1 act as geometric shapes, scattering light according to their projected area. At the intermediate x ≃ 1 of Mie scattering, interference effects develop through phase variations over the object's surface. Rayleigh scattering applies to the case when the scattering particle is very small (x ≪ 1, with a particle size < 1/10 of wavelength) and the whole surface re-radiates with the same phase. Because the particles are randomly positioned, the scattered light arrives at a particular point with a random collection of phases; it is incoherent and the resulting intensity is just the sum of the squares of the amplitudes from each particle and therefore proportional to the inverse fourth power of the wavelength and the sixth power of its size. The wavelength dependence is characteristic of dipole scattering and the volume dependence will apply to any scattering mechanism. In detail, the intensity of light scattered by any one of the small spheres of radius r and refractive index n from a beam of unpolarized light of wavelength λ and intensity I0 is given by
I = I0 × (1 + cos^2 θ) / (2R^2) × (2π/λ)^4 × ((n^2 − 1) / (n^2 + 2))^2 × r^6,
where R is the distance to the particle and θ is the scattering angle. Averaging this over all angles gives the Rayleigh scattering cross-section of the particles in air:
Here n is the refractive index of the spheres that approximate the molecules of the gas; the index of the gas surrounding the spheres is neglected, an approximation that introduces an error of less than 0.05%.
The fraction of light scattered by scattering particles over the unit travel length (e.g., meter) is the number of particles per unit volume N times the cross-section. For example, air has a refractive index of 1.0002793 at atmospheric pressure, where there are about molecules per cubic meter, and therefore the major constituent of the atmosphere, nitrogen, has a Rayleigh cross section of at a wavelength of 532 nm (green light). This means that about a fraction 10−5 of the light will be scattered for every meter of travel.
The strong wavelength dependence of the scattering (~λ−4) means that shorter (blue) wavelengths are scattered more strongly than longer (red) wavelengths.
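A short worked example of this λ−4 dependence and of the size parameter defined above; the 450 nm and 700 nm wavelengths and the ~0.2 nm molecular radius are rough illustrative values, not measured quantities.

```python
from math import pi

# How much more strongly is blue light scattered than red, given the ~1/λ^4 dependence?
blue_nm, red_nm = 450.0, 700.0
ratio = (red_nm / blue_nm) ** 4
print(f"blue is scattered about {ratio:.1f}x more strongly than red")   # ~5.9x

# Size parameter for a molecule-sized scatterer (rough assumed radius of 0.2 nm).
r_nm = 0.2
x = 2 * pi * r_nm / blue_nm
print(f"size parameter x = {x:.4f} (x << 1, so the Rayleigh regime applies)")
```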
From molecules
The expression above can also be written in terms of individual molecules by expressing the dependence on refractive index in terms of the molecular polarizability α, proportional to the dipole moment induced by the electric field of the light. In this case, the Rayleigh scattering intensity for a single particle is given in CGS-units by
and in SI-units by
Effect of fluctuations
When the dielectric constant of a certain region of volume is different from the average dielectric constant of the medium , then any incident light will be scattered according to the following equation
where represents the variance of the fluctuation in the dielectric constant .
Cause of the blue color of the sky
The blue color of the sky is a consequence of three factors:
the blackbody spectrum of sunlight coming into the Earth's atmosphere,
Rayleigh scattering of that light off oxygen and nitrogen molecules, and
the response of the human visual system.
The strong wavelength dependence of the Rayleigh scattering (~λ−4) means that shorter (blue) wavelengths are scattered more strongly than longer (red) wavelengths. This results in the indirect blue and violet light coming from all regions of the sky. The human eye responds to this wavelength combination as if it were a combination of blue and white light.
Some of the scattering can also be from sulfate particles. For years after large Plinian eruptions, the blue cast of the sky is notably brightened by the persistent sulfate load of the stratospheric gases. Some works of the artist J. M. W. Turner may owe their vivid red colours to the eruption of Mount Tambora in his lifetime.
In locations with little light pollution, the moonlit night sky is also blue, because moonlight is reflected sunlight, with a slightly lower color temperature due to the brownish color of the Moon. The moonlit sky is not perceived as blue, however, because at low light levels human vision comes mainly from rod cells that do not produce any color perception (Purkinje effect).
Of sound in amorphous solids
Rayleigh scattering is also an important mechanism of wave scattering in amorphous solids such as glass, and is responsible for acoustic wave damping and phonon damping in glasses and granular matter at low or not too high temperatures. This is because in glasses at higher temperatures the Rayleigh-type scattering regime is obscured by the anharmonic damping (typically with a ~λ−2 dependence on wavelength), which becomes increasingly more important as the temperature rises.
In amorphous solids – glasses – optical fibers
Rayleigh scattering is an important component of the scattering of optical signals in optical fibers. Silica fibers are glasses, disordered materials with microscopic variations of density and refractive index. These give rise to energy losses due to the scattered light, with the following coefficient:
where n is the refraction index, p is the photoelastic coefficient of the glass, k is the Boltzmann constant, and β is the isothermal compressibility. Tf is a fictive temperature, representing the temperature at which the density fluctuations are "frozen" in the material.
In porous materials
Rayleigh-type λ−4 scattering can also be exhibited by porous materials. An example is the strong optical scattering by nanoporous materials. The strong contrast in refractive index between pores and solid parts of sintered alumina results in very strong scattering, with light completely changing direction every five micrometers on average. The λ−4-type scattering is caused by the nanoporous structure (a narrow pore size distribution around ~70 nm) obtained by sintering monodispersive alumina powder.
| Physical sciences | Optics | Physics |
26194 | https://en.wikipedia.org/wiki/Restriction%20enzyme | Restriction enzyme | A restriction enzyme, restriction endonuclease, REase, ENase or restrictase is an enzyme that cleaves DNA into fragments at or near specific recognition sites within molecules known as restriction sites. Restriction enzymes are one class of the broader endonuclease group of enzymes. Restriction enzymes are commonly classified into five types, which differ in their structure and whether they cut their DNA substrate at their recognition site, or if the recognition and cleavage sites are separate from one another. To cut DNA, all restriction enzymes make two incisions, once through each sugar-phosphate backbone (i.e. each strand) of the DNA double helix.
These enzymes are found in bacteria and archaea and provide a defense mechanism against invading viruses. Inside a prokaryote, the restriction enzymes selectively cut up foreign DNA in a process called restriction digestion; meanwhile, host DNA is protected by a modification enzyme (a methyltransferase) that modifies the prokaryotic DNA and blocks cleavage. Together, these two processes form the restriction modification system.
More than 3,600 restriction endonucleases are known which represent over 250 different specificities. Over 3,000 of these have been studied in detail, and more than 800 of these are available commercially. These enzymes are routinely used for DNA modification in laboratories, and they are a vital tool in molecular cloning.
History
The term restriction enzyme originated from the studies of phage λ, a virus that infects bacteria, and the phenomenon of host-controlled restriction and modification of such bacterial phage or bacteriophage. The phenomenon was first identified in work done in the laboratories of Salvador Luria, Jean Weigle and Giuseppe Bertani in the early 1950s. It was found that, for a bacteriophage λ that can grow well in one strain of Escherichia coli, for example E. coli C, when grown in another strain, for example E. coli K, its yields can drop significantly, by as much as three to five orders of magnitude. The host cell, in this example E. coli K, is known as the restricting host and appears to have the ability to reduce the biological activity of the phage λ. If a phage becomes established in one strain, the ability of that phage to grow also becomes restricted in other strains. In the 1960s, it was shown in work done in the laboratories of Werner Arber and Matthew Meselson that the restriction is caused by an enzymatic cleavage of the phage DNA, and the enzyme involved was therefore termed a restriction enzyme.
The restriction enzymes studied by Arber and Meselson were type I restriction enzymes, which cleave DNA randomly away from the recognition site. In 1970, Hamilton O. Smith, Thomas Kelly and Kent Wilcox isolated and characterized the first type II restriction enzyme, HindII, from the bacterium Haemophilus influenzae. Restriction enzymes of this type are more useful for laboratory work as they cleave DNA at the site of their recognition sequence and are the most commonly used as a molecular biology tool. Later, Daniel Nathans and Kathleen Danna showed that cleavage of simian virus 40 (SV40) DNA by restriction enzymes yields specific fragments that can be separated using polyacrylamide gel electrophoresis, thus showing that restriction enzymes can also be used for mapping DNA. For their work in the discovery and characterization of restriction enzymes, the 1978 Nobel Prize for Physiology or Medicine was awarded to Werner Arber, Daniel Nathans, and Hamilton O. Smith. The discovery of restriction enzymes allows DNA to be manipulated, leading to the development of recombinant DNA technology that has many applications, for example, allowing the large scale production of proteins such as human insulin used by diabetic patients.
Origins
Restriction enzymes likely evolved from a common ancestor and became widespread via horizontal gene transfer. In addition, there is mounting evidence that restriction endonucleases evolved as a selfish genetic element.
Recognition site
Restriction enzymes recognize a specific sequence of nucleotides and produce a double-stranded cut in the DNA. The recognition sequences can also be classified by the number of bases in the recognition site, usually between 4 and 8 bases; the length of the sequence determines how often the site will appear by chance in any given genome, e.g., a 4-base-pair sequence would theoretically occur once every 4^4 = 256 bp, a 6-base sequence once every 4^6 = 4,096 bp, and an 8-base sequence once every 4^8 = 65,536 bp. Many of them are palindromic, meaning the base sequence reads the same backwards and forwards. In theory, there are two types of palindromic sequences that can be possible in DNA. The mirror-like palindrome is similar to those found in ordinary text, in which a sequence reads the same forward and backward on a single strand of DNA, as in GTAATG. The inverted repeat palindrome is also a sequence that reads the same forward and backward, but the forward and backward sequences are found in complementary DNA strands (i.e., of double-stranded DNA), as in GTATAC (GTATAC being complementary to CATATG). Inverted repeat palindromes are more common and have greater biological importance than mirror-like palindromes.
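A minimal Python sketch of the two ideas in this paragraph: testing whether a recognition sequence is an inverted-repeat palindrome (equal to its own reverse complement) and estimating how often a site of a given length occurs by chance, assuming equal and independent base frequencies.

```python
def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def is_inverted_repeat_palindrome(seq: str) -> bool:
    # True when the sequence equals its own reverse complement (e.g. GAATTC for EcoRI).
    return seq == reverse_complement(seq)

def expected_spacing(seq_len: int) -> int:
    # With equal, independent base frequencies, a site of length n recurs every 4**n bp on average.
    return 4 ** seq_len

for site in ("GAATTC", "CCCGGG", "GTAATG"):
    print(site, is_inverted_repeat_palindrome(site),
          f"~1 per {expected_spacing(len(site)):,} bp")
# GAATTC and CCCGGG are inverted-repeat palindromes; GTAATG (mirror-like) is not.
```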
EcoRI digestion produces "sticky" ends,
whereas SmaI restriction enzyme cleavage produces "blunt" ends:
Recognition sequences in DNA differ for each restriction enzyme, producing differences in the length, sequence and strand orientation (5' end or 3' end) of a sticky-end "overhang" of an enzyme restriction.
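A toy Python simulation of the difference between sticky- and blunt-end cutters, tracking only the top strand: EcoRI is modelled as cutting G^AATTC and SmaI as cutting CCC^GGG; the input sequence is an arbitrary example.

```python
def digest(dna: str, site: str, cut_offset: int) -> list[str]:
    """Cut at every occurrence of `site`, cleaving `cut_offset` bases into the site (top strand only)."""
    fragments, start = [], 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + cut_offset])
        start = pos + cut_offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

dna = "TTGAATTCAAACCCGGGTT"
print(digest(dna, "GAATTC", 1))   # EcoRI cuts G^AATTC, leaving 5' AATT overhangs (sticky ends)
print(digest(dna, "CCCGGG", 3))   # SmaI cuts CCC^GGG in the middle, producing blunt ends
```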
Different restriction enzymes that recognize the same sequence are known as neoschizomers. These often cleave in different locales of the sequence. Different enzymes that recognize and cleave in the same location are known as isoschizomers.
Types
Naturally occurring restriction endonucleases are categorized into five groups (Types I, II, III, IV, and V) based on their composition and enzyme cofactor requirements, the nature of their target sequence, and the position of their DNA cleavage site relative to the target sequence. DNA sequence analysis of restriction enzymes, however, shows great variation, indicating that there are more than four types. All types of enzymes recognize specific short DNA sequences and carry out the endonucleolytic cleavage of DNA to give specific fragments with terminal 5'-phosphates. They differ in their recognition sequence, subunit composition, cleavage position, and cofactor requirements, as summarised below:
Type I enzymes () cleave at sites remote from a recognition site; require both ATP and S-adenosyl-L-methionine to function; multifunctional protein with both restriction digestion and methylase () activities.
Type II enzymes () cleave within or at short specific distances from a recognition site; most require magnesium; single function (restriction digestion) enzymes independent of methylase.
Type III enzymes () cleave at sites a short distance from a recognition site; require ATP (but do not hydrolyse it); S-adenosyl-L-methionine stimulates the reaction but is not required; exist as part of a complex with a modification methylase ().
Type IV enzymes target modified DNA, e.g. methylated, hydroxymethylated and glucosyl-hydroxymethylated DNA
Type V enzymes utilize guide RNAs (gRNAs)
Type I
Type I restriction enzymes were the first to be identified, initially in two different strains (K-12 and B) of E. coli. These enzymes cut at a site that differs, and is a random distance (at least 1000 bp) away, from their recognition site. Cleavage at these random sites follows a process of DNA translocation, which shows that these enzymes are also molecular motors. The recognition site is asymmetrical and is composed of two specific portions—one containing 3–4 nucleotides, and another containing 4–5 nucleotides—separated by a non-specific spacer of about 6–8 nucleotides. These enzymes are multifunctional and are capable of both restriction digestion and modification activities, depending upon the methylation status of the target DNA. The cofactors S-Adenosyl methionine (AdoMet), hydrolyzed adenosine triphosphate (ATP), and magnesium (Mg2+) ions, are required for their full activity. Type I restriction enzymes possess three subunits called HsdR, HsdM, and HsdS; HsdR is required for restriction digestion; HsdM is necessary for adding methyl groups to host DNA (methyltransferase activity), and HsdS is important for specificity of the recognition (DNA-binding) site in addition to both restriction digestion (DNA cleavage) and modification (DNA methyltransferase) activity.
Type II
Typical type II restriction enzymes differ from type I restriction enzymes in several ways. They form homodimers, with recognition sites that are usually undivided and palindromic and 4–8 nucleotides in length. They recognize and cleave DNA at the same site, and they do not use ATP or AdoMet for their activity—they usually require only Mg2+ as a cofactor. These enzymes cleave the phosphodiester bonds of double-helix DNA, either at the center of both strands to yield a blunt end, or at a staggered position leaving overhangs called sticky ends. These are the most commonly available and used restriction enzymes. In the 1990s and early 2000s, new enzymes from this family were discovered that did not follow all the classical criteria of this enzyme class, and new subfamily nomenclature was developed to divide this large family into subcategories based on deviations from typical characteristics of type II enzymes. These subgroups are defined using a letter suffix.
Type IIB restriction enzymes (e.g., BcgI and BplI) are multimers, containing more than one subunit. They cleave DNA on both sides of their recognition to cut out the recognition site. They require both AdoMet and Mg2+ cofactors. Type IIE restriction endonucleases (e.g., NaeI) cleave DNA following interaction with two copies of their recognition sequence. One recognition site acts as the target for cleavage, while the other acts as an allosteric effector that speeds up or improves the efficiency of enzyme cleavage. Similar to type IIE enzymes, type IIF restriction endonucleases (e.g. NgoMIV) interact with two copies of their recognition sequence but cleave both sequences at the same time. Type IIG restriction endonucleases (e.g., RM.Eco57I) do have a single subunit, like classical Type II restriction enzymes, but require the cofactor AdoMet to be active. Type IIM restriction endonucleases, such as DpnI, are able to recognize and cut methylated DNA. Type IIS restriction endonucleases (e.g. FokI) cleave DNA at a defined distance from their non-palindromic asymmetric recognition sites; this characteristic is widely used to perform in-vitro cloning techniques such as Golden Gate cloning. These enzymes may function as dimers. Similarly, Type IIT restriction enzymes (e.g., Bpu10I and BslI) are composed of two different subunits. Some recognize palindromic sequences while others have asymmetric recognition sites.
Type III
Type III restriction enzymes (e.g., EcoP15) recognize two separate non-palindromic sequences that are inversely oriented. They cut DNA about 20–30 base pairs after the recognition site. These enzymes contain more than one subunit and require AdoMet and ATP cofactors for their roles in DNA methylation and restriction digestion, respectively. They are components of prokaryotic DNA restriction-modification mechanisms that protect the organism against invading foreign DNA. Type III enzymes are hetero-oligomeric, multifunctional proteins composed of two subunits, Res () and Mod (). The Mod subunit recognises the DNA sequence specific for the system and is a modification methyltransferase; as such, it is functionally equivalent to the M and S subunits of type I restriction endonuclease. Res is required for restriction digestion, although it has no enzymatic activity on its own. Type III enzymes recognise short 5–6 bp-long asymmetric DNA sequences and cleave 25–27 bp downstream to leave short, single-stranded 5' protrusions. They require the presence of two inversely oriented unmethylated recognition sites for restriction digestion to occur. These enzymes methylate only one strand of the DNA, at the N-6 position of adenine residues, so newly replicated DNA will have only one strand methylated, which is sufficient to protect against restriction digestion. Type III enzymes belong to the beta-subfamily of N6 adenine methyltransferases, containing the nine motifs that characterise this family, including motif I, the AdoMet binding pocket (FXGXG), and motif IV, the catalytic region (S/D/N (PP) Y/F).
Type IV
Type IV enzymes recognize modified, typically methylated DNA and are exemplified by the McrBC and Mrr systems of E. coli.
Type V
Type V restriction enzymes (e.g., the cas9-gRNA complex from CRISPRs) utilize guide RNAs to target specific non-palindromic sequences found on invading organisms. They can cut DNA of variable length, provided that a suitable guide RNA is provided. The flexibility and ease of use of these enzymes make them promising for future genetic engineering applications.
Artificial restriction enzymes
Artificial restriction enzymes can be generated by fusing a natural or engineered DNA-binding domain to a nuclease domain (often the cleavage domain of the type IIS restriction enzyme FokI). Such artificial restriction enzymes can target large DNA sites (up to 36 bp) and can be engineered to bind to desired DNA sequences. Zinc finger nucleases are the most commonly used artificial restriction enzymes and are generally used in genetic engineering applications, but can also be used for more standard gene cloning applications. Other artificial restriction enzymes are based on the DNA binding domain of TAL effectors.
In 2013, a new technology CRISPR-Cas9, based on a prokaryotic viral defense system, was engineered for editing the genome, and it was quickly adopted in laboratories. For more detail, read CRISPR (Clustered regularly interspaced short palindromic repeats).
In 2017, a group from University of Illinois reported using an Argonaute protein taken from Pyrococcus furiosus (PfAgo) along with guide DNA to edit DNA in vitro as artificial restriction enzymes.
Artificial ribonucleases that act as restriction enzymes for RNA have also been developed. A PNA-based system, called a PNAzyme, has a Cu(II)-2,9-dimethylphenanthroline group that mimics ribonucleases for specific RNA sequence and cleaves at a non-base-paired region (RNA bulge) of the targeted RNA formed when the enzyme binds the RNA. This enzyme shows selectivity by cleaving only at one site that either does not have a mismatch or is kinetically preferred out of two possible cleavage sites.
Nomenclature
Since their discovery in the 1970s, many restriction enzymes have been identified; for example, more than 3500 different Type II restriction enzymes have been characterized. Each enzyme is named after the bacterium from which it was isolated, using a naming system based on bacterial genus, species and strain. For example, the name of the EcoRI restriction enzyme was derived as shown in the box.
Applications
Isolated restriction enzymes are used to manipulate DNA for different scientific applications.
They are used to assist insertion of genes into plasmid vectors during gene cloning and protein production experiments. For optimal use, plasmids that are commonly used for gene cloning are modified to include a short polylinker sequence (called the multiple cloning site, or MCS) rich in restriction recognition sequences. This allows flexibility when inserting gene fragments into the plasmid vector; restriction sites contained naturally within genes influence the choice of endonuclease for digesting the DNA, since it is necessary to avoid restriction of wanted DNA while intentionally cutting the ends of the DNA. To clone a gene fragment into a vector, both plasmid DNA and gene insert are typically cut with the same restriction enzymes, and then glued together with the assistance of an enzyme known as a DNA ligase.
Restriction enzymes can also be used to distinguish gene alleles by specifically recognizing single base changes in DNA known as single-nucleotide polymorphisms (SNPs). This is however only possible if a SNP alters the restriction site present in the allele. In this method, the restriction enzyme can be used to genotype a DNA sample without the need for expensive gene sequencing. The sample is first digested with the restriction enzyme to generate DNA fragments, and then the different sized fragments separated by gel electrophoresis. In general, alleles with correct restriction sites will generate two visible bands of DNA on the gel, and those with altered restriction sites will not be cut and will generate only a single band. A DNA map by restriction digest can also be generated that can give the relative positions of the genes. The different lengths of DNA generated by restriction digest also produce a specific pattern of bands after gel electrophoresis, and can be used for DNA fingerprinting.
In a similar manner, restriction enzymes are used to digest genomic DNA for gene analysis by Southern blot. This technique allows researchers to identify how many copies (or paralogues) of a gene are present in the genome of one individual, or how many gene mutations (polymorphisms) have occurred within a population. The latter example is called restriction fragment length polymorphism (RFLP).
Artificial restriction enzymes created by linking the FokI DNA cleavage domain with an array of DNA binding proteins or zinc finger arrays, denoted zinc finger nucleases (ZFN), are a powerful tool for host genome editing due to their enhanced sequence specificity. ZFN work in pairs, their dimerization being mediated in-situ through the FokI domain. Each zinc finger array (ZFA) is capable of recognizing 9–12 base pairs, making for 18–24 for the pair. A 5–7 bp spacer between the cleavage sites further enhances the specificity of ZFN, making them a safe and more precise tool that can be applied in humans. A recent Phase I clinical trial of ZFN for the targeted abolition of the CCR5 co-receptor for HIV-1 has been undertaken.
Others have proposed using the bacteria R-M system as a model for devising human anti-viral gene or genomic vaccines and therapies since the RM system serves an innate defense-role in bacteria by restricting tropism by bacteriophages. There is research on REases and ZFN that can cleave the DNA of various human viruses, including HSV-2, high-risk HPVs and HIV-1, with the ultimate goal of inducing target mutagenesis and aberrations of human-infecting viruses. The human genome already contains remnants of retroviral genomes that have been inactivated and harnessed for self-gain. Indeed, the mechanisms for silencing active L1 genomic retroelements by the three prime repair exonuclease 1 (TREX1) and excision repair cross complementing 1(ERCC) appear to mimic the action of RM-systems in bacteria, and the non-homologous end-joining (NHEJ) that follows the use of ZFN without a repair template.
Examples
Examples of restriction enzymes include:
Key:
* = blunt ends
N = C or G or T or A
W = A or T
| Biology and health sciences | Molecular biology | Biology |
26214 | https://en.wikipedia.org/wiki/Reverse%20transcriptase | Reverse transcriptase | A reverse transcriptase (RT) is an enzyme used to convert an RNA genome to DNA, a process termed reverse transcription. Reverse transcriptases are used by viruses such as HIV and hepatitis B to replicate their genomes, by retrotransposon mobile genetic elements to proliferate within the host genome, and by eukaryotic cells to extend the telomeres at the ends of their linear chromosomes. Contrary to a widely held belief, the process does not violate the flow of genetic information as described by the classical central dogma, as transfers of information from RNA to DNA are explicitly held possible.
Retroviral RT has three sequential biochemical activities: RNA-dependent DNA polymerase activity, ribonuclease H (RNase H), and DNA-dependent DNA polymerase activity. Collectively, these activities enable the enzyme to convert single-stranded RNA into double-stranded cDNA. In retroviruses and retrotransposons, this cDNA can then integrate into the host genome, from which new RNA copies can be made via host-cell transcription. The same sequence of reactions is widely used in the laboratory to convert RNA to DNA for use in molecular cloning, RNA sequencing, polymerase chain reaction (PCR), or genome analysis.
History
Reverse transcriptases were discovered by Howard Temin at the University of Wisconsin–Madison in Rous sarcoma virions and independently isolated by David Baltimore in 1970 at MIT from two RNA tumour viruses: murine leukemia virus and again Rous sarcoma virus. For their achievements, they shared the 1975 Nobel Prize in Physiology or Medicine (with Renato Dulbecco).
Well-studied reverse transcriptases include:
HIV-1 reverse transcriptase from human immunodeficiency virus type 1 () has two subunits, which have respective molecular weights of 66 and 51 kDa.
M-MLV reverse transcriptase from the Moloney murine leukemia virus is a single 75 kDa monomer.
AMV reverse transcriptase from the avian myeloblastosis virus also has two subunits, a 63 kDa subunit and a 95 kDa subunit.
Telomerase reverse transcriptase that maintains the telomeres of eukaryotic chromosomes.
Function in viruses
The enzymes are encoded and used by viruses that use reverse transcription as a step in the process of replication. Reverse-transcribing RNA viruses, such as retroviruses, use the enzyme to reverse-transcribe their RNA genomes into DNA, which is then integrated into the host genome and replicated along with it. Reverse-transcribing DNA viruses, such as the hepadnaviruses, can allow RNA to serve as a template in assembling and making DNA strands. HIV infects humans with the use of this enzyme. Without reverse transcriptase, the viral genome would not be able to incorporate into the host cell, resulting in failure to replicate.
Process of reverse transcription or retrotranscription
Reverse transcriptase creates double-stranded DNA from an RNA template.
In virus species with reverse transcriptase lacking DNA-dependent DNA polymerase activity, creation of double-stranded DNA can possibly be done by host-encoded DNA polymerase δ, mistaking the viral DNA-RNA for a primer and synthesizing a double-stranded DNA by a similar mechanism as in primer removal, where the newly synthesized DNA displaces the original RNA template.
The process of reverse transcription, also called retrotranscription, is extremely error-prone, and it is during this step that mutations may occur. Such mutations may cause drug resistance.
Retroviral reverse transcription
Retroviruses, also referred to as class VI ssRNA-RT viruses, are RNA reverse-transcribing viruses with a DNA intermediate. Their genomes consist of two molecules of positive-sense single-stranded RNA with a 5' cap and 3' polyadenylated tail. Examples of retroviruses include the human immunodeficiency virus (HIV) and the human T-lymphotropic virus (HTLV). Creation of double-stranded DNA occurs in the cytosol as a series of these steps:
Lysyl tRNA acts as a primer and hybridizes to a complementary part of the virus RNA genome called the primer binding site or PBS.
Reverse transcriptase then adds DNA nucleotides onto the 3' end of the primer, synthesizing DNA complementary to the U5 (non-coding region) and R region (a direct repeat found at both ends of the RNA molecule) of the viral RNA.
A domain on the reverse transcriptase enzyme called RNAse H degrades the U5 and R regions on the 5' end of the RNA.
The tRNA primer then "jumps" to the 3' end of the viral genome, and the newly synthesised DNA strand hybridizes to the complementary R region on the RNA.
The complementary DNA (cDNA) added in (2) is further extended.
The majority of viral RNA is degraded by RNAse H, leaving only the PP sequence.
Synthesis of the second DNA strand begins, using the remaining PP fragment of viral RNA as a primer.
The tRNA primer leaves and a "jump" happens. The PBS from the second strand hybridizes with the complementary PBS on the first strand.
Both strands are extended to form a complete double-stranded DNA copy of the original viral RNA genome, which can then be incorporated into the host's genome by the enzyme integrase.
Creation of double-stranded DNA also involves strand transfer, in which there is a translocation of short DNA product from initial RNA-dependent DNA synthesis to acceptor template regions at the other end of the genome, which are later reached and processed by the reverse transcriptase for its DNA-dependent DNA activity.
Retroviral RNA is arranged from the 5' terminus to the 3' terminus. The site where the primer is annealed to viral RNA is called the primer-binding site (PBS). The RNA 5' end to the PBS site is called U5, and the RNA 3' end to the PBS is called the leader. The tRNA primer is unwound between 14 and 22 nucleotides and forms a base-paired duplex with the viral RNA at the PBS. The fact that the PBS is located near the 5' terminus of viral RNA is unusual because reverse transcriptase synthesizes DNA from the 3' end of the primer in the 5' to 3' direction (with respect to the newly synthesized DNA strand). Therefore, the primer and reverse transcriptase must be relocated to the 3' end of viral RNA. To accomplish this repositioning, multiple steps and various enzymes, including DNA polymerase, ribonuclease H (RNase H) and polynucleotide-unwinding activities, are needed.
The HIV reverse transcriptase also has ribonuclease activity that degrades the viral RNA during the synthesis of cDNA, as well as DNA-dependent DNA polymerase activity that copies the sense cDNA strand into an antisense DNA to form a double-stranded viral DNA intermediate (vDNA). The HIV viral RNA structural elements regulate the progression of reverse transcription.
In cellular life
Self-replicating stretches of eukaryotic genomes known as retrotransposons utilize reverse transcriptase to move from one position in the genome to another via an RNA intermediate. They are found abundantly in the genomes of plants and animals. Telomerase is another reverse transcriptase found in many eukaryotes, including humans, which carries its own RNA template; this RNA is used as a template for DNA replication.
Initial reports of reverse transcriptase in prokaryotes came as far back as 1971 in France (Beljanski et al., 1971a, 1972) and a few years later in the USSR (Romashchenko 1977). These have since been broadly described as part of bacterial Retrons, distinct sequences that code for reverse transcriptase, and are used in the synthesis of msDNA. In order to initiate synthesis of DNA, a primer is needed. In bacteria, the primer is synthesized during replication.
Valerian Dolja of Oregon State argues that viruses, due to their diversity, have played an evolutionary role in the development of cellular life, with reverse transcriptase playing a central role.
Structure
The reverse transcriptase employs a "right hand" structure similar to that found in other viral nucleic acid polymerases. In addition to the transcription function, retroviral reverse transcriptases have a domain belonging to the RNase H family, which is vital to their replication. By degrading the RNA template, it allows the other strand of DNA to be synthesized. Some fragments from the digestion also serve as the primer for the DNA polymerase (either the same enzyme or a host protein), responsible for making the other (plus) strand.
Replication fidelity
There are three different replication systems during the life cycle of a retrovirus. The first process is the reverse transcriptase synthesis of viral DNA from viral RNA, which then forms newly made complementary DNA strands. The second replication process occurs when host cellular DNA polymerase replicates the integrated viral DNA. Lastly, RNA polymerase II transcribes the proviral DNA into RNA, which will be packed into virions. Mutation can occur during one or all of these replication steps.
Reverse transcriptase has a high error rate when transcribing RNA into DNA since, unlike most other DNA polymerases, it has no proofreading ability. This high error rate allows mutations to accumulate at an accelerated rate relative to proofread forms of replication. The commercially available reverse transcriptases produced by Promega are quoted by their manuals as having error rates in the range of 1 in 17,000 bases for AMV and 1 in 30,000 bases for M-MLV.
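A back-of-envelope calculation using the error rates quoted above; the 9,700-base template length is an assumed round figure for an HIV-1-sized genome, used only for illustration.

```python
# Rough expectation of misincorporated bases per full-length reverse-transcribed template,
# using the error rates quoted above; the template length is an assumed illustrative value.
template_length = 9_700
for enzyme, error_rate in [("AMV", 1 / 17_000), ("M-MLV", 1 / 30_000)]:
    expected_errors = template_length * error_rate
    print(f"{enzyme}: ~{expected_errors:.2f} errors expected per full-length cDNA")
```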
Other than creating single-nucleotide polymorphisms, reverse transcriptases have also been shown to be involved in processes such as transcript fusions, exon shuffling and creating artificial antisense transcripts. It has been speculated that this template switching activity of reverse transcriptase, which can be demonstrated completely in vivo, may have been one of the causes for finding several thousand unannotated transcripts in the genomes of model organisms.
Template switching
Two RNA genomes are packaged into each retrovirus particle, but, after an infection, each virus generates only one provirus. After infection, reverse transcription is accompanied by template switching between the two genome copies (copy choice recombination). There are two models that suggest why RNA transcriptase switches templates. The first, the forced copy-choice model, proposes that reverse transcriptase changes the RNA template when it encounters a nick, implying that recombination is obligatory to maintaining virus genome integrity. The second, the dynamic choice model, suggests that reverse transcriptase changes templates when the RNAse function and the polymerase function are not in sync rate-wise, implying that recombination occurs at random and is not in response to genomic damage. A study by Rawson et al. supported both models of recombination. From 5 to 14 recombination events per genome occur at each replication cycle. Template switching (recombination) appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes.
Applications
Antiviral drugs
As HIV uses reverse transcriptase to copy its genetic material and generate new viruses (part of a retrovirus proliferation circle), specific drugs have been designed to disrupt the process and thereby suppress its growth. Collectively, these drugs are known as reverse-transcriptase inhibitors and include the nucleoside and nucleotide analogues zidovudine (trade name Retrovir), lamivudine (Epivir) and tenofovir (Viread), as well as non-nucleoside inhibitors, such as nevirapine (Viramune).
Molecular biology
Reverse transcriptase is commonly used in research to apply the polymerase chain reaction technique to RNA in a technique called reverse transcription polymerase chain reaction (RT-PCR). The classical PCR technique can be applied only to DNA strands, but, with the help of reverse transcriptase, RNA can be transcribed into DNA, thus making PCR analysis of RNA molecules possible. Reverse transcriptase is used also to create cDNA libraries from mRNA. The commercial availability of reverse transcriptase greatly improved knowledge in the area of molecular biology, as, along with other enzymes, it allowed scientists to clone, sequence, and characterise RNA.
| Biology and health sciences | Molecular biology | Biology |
26229 | https://en.wikipedia.org/wiki/Riboflavin | Riboflavin | Riboflavin, also known as vitamin B2, is a vitamin found in food and sold as a dietary supplement. It is essential to the formation of two major coenzymes, flavin mononucleotide and flavin adenine dinucleotide. These coenzymes are involved in energy metabolism, cellular respiration, and antibody production, as well as normal growth and development. The coenzymes are also required for the metabolism of niacin, vitamin B6, and folate. Riboflavin is prescribed to treat corneal thinning, and taken orally, may reduce the incidence of migraine headaches in adults.
Riboflavin deficiency is rare and is usually accompanied by deficiencies of other vitamins and nutrients. It may be prevented or treated by oral supplements or by injections. As a water-soluble vitamin, any riboflavin consumed in excess of nutritional requirements is not stored; it is either not absorbed or is absorbed and quickly excreted in urine, causing the urine to have a bright yellow tint. Natural sources of riboflavin include meat, fish and fowl, eggs, dairy products, green vegetables, mushrooms, and almonds. Some countries require its addition to grains.
In its purified, solid form, it is a water-soluble yellow-orange crystalline powder. In addition to its function as a vitamin, it is used as a food coloring agent. Biosynthesis takes place in bacteria, fungi and plants, but not animals. Industrial synthesis of riboflavin was initially achieved using a chemical process, but current commercial manufacturing relies on fermentation methods using strains of fungi and genetically modified bacteria.
Definition
Riboflavin, also known as vitamin B2, is a water-soluble vitamin and is one of the B vitamins. Unlike folate and vitamin B6, which occur in several chemically related forms known as vitamers, riboflavin is only one chemical compound. It is a starting compound in the synthesis of the coenzymes flavin mononucleotide (FMN, also known as riboflavin-5'-phosphate) and flavin adenine dinucleotide (FAD). FAD is the more abundant form of flavin, reported to bind to about 75% of the flavin-dependent proteins encoded across all species (the flavoproteome), and serves as a co-enzyme for 84% of human-encoded flavoproteins.
In its purified, solid form, riboflavin is a yellow-orange crystalline powder with a slight odor and bitter taste. It is soluble in polar solvents, such as water and aqueous sodium chloride solutions, and slightly soluble in alcohols. It is not soluble in non-polar or weakly polar organic solvents such as chloroform, benzene or acetone. In solution or during dry storage as a powder, riboflavin is heat stable if not exposed to light. When heated to decompose, it releases toxic fumes containing nitric oxide.
Functions
Riboflavin is essential to the formation of two major coenzymes, FMN and FAD. These coenzymes are involved in energy metabolism, cell respiration, antibody production, growth and development. Riboflavin is essential for the metabolism of carbohydrates, protein and fats. FAD contributes to the conversion of tryptophan to niacin (vitamin B3), while the conversion of vitamin B6 to the coenzyme pyridoxal 5'-phosphate requires FMN. Riboflavin is involved in maintaining normal circulating levels of homocysteine; in riboflavin deficiency, homocysteine levels increase, elevating the risk of cardiovascular diseases.
Redox reactions
Redox reactions are processes that involve the transfer of electrons. The flavin coenzymes support the function of roughly 70-80 flavoenzymes in humans (and hundreds more across all organisms, including those encoded by archeal, bacterial and fungal genomes) that are responsible for one- or two-electron redox reactions which capitalize on the ability of flavins to be converted between oxidized, half-reduced and fully reduced forms. FAD is also required for the activity of glutathione reductase, an essential enzyme in the formation of the endogenous antioxidant, glutathione.
Micronutrient metabolism
Riboflavin, FMN, and FAD are involved in the metabolism of niacin, vitamin B6, and folate. The synthesis of the niacin-containing coenzymes, NAD and NADP, from tryptophan involves the FAD-dependent enzyme, kynurenine 3-monooxygenase. Dietary deficiency of riboflavin can decrease the production of NAD and NADP, thereby promoting niacin deficiency. Conversion of vitamin B6 to its coenzyme, pyridoxal 5'-phosphate, involves the enzyme, pyridoxine 5'-phosphate oxidase, which requires FMN. An enzyme involved in folate metabolism, 5,10-methylenetetrahydrofolate reductase, requires FAD to form the amino acid, methionine, from homocysteine.
Riboflavin deficiency appears to impair the metabolism of the dietary mineral, iron, which is essential to the production of hemoglobin and red blood cells. Alleviating riboflavin deficiency in people who are deficient in both riboflavin and iron improves the effectiveness of iron supplementation for treating iron-deficiency anemia.
Synthesis
Biosynthesis
Biosynthesis takes place in bacteria, fungi and plants, but not animals. The biosynthetic precursors to riboflavin are ribulose 5-phosphate and guanosine triphosphate. The former is converted to L-3,4-dihydroxy-2-butanone-4-phosphate while the latter is transformed in a series of reactions that lead to 5-amino-6-(D-ribitylamino)uracil. These two compounds are then the substrates for the penultimate step in the pathway, catalysed by the enzyme lumazine synthase, which produces 6,7-dimethyl-8-ribityllumazine.
In the final step of the biosynthesis, two molecules of 6,7-dimethyl-8-ribityllumazine are combined by the enzyme riboflavin synthase in a dismutation reaction. This generates one molecule of riboflavin and one of 5-amino-6-(D-ribitylamino) uracil. The latter is recycled to the previous reaction in the sequence.
Conversions of riboflavin to the cofactors FMN and FAD are carried out by the enzymes riboflavin kinase and FAD synthetase acting sequentially.
Industrial synthesis
The industrial-scale production of riboflavin uses various microorganisms, including filamentous fungi such as Ashbya gossypii, Candida famata and Candida flaveri, as well as the bacteria Corynebacterium ammoniagenes and Bacillus subtilis. B. subtilis, genetically modified both to increase riboflavin production and to introduce an antibiotic (ampicillin) resistance marker, is employed at a commercial scale to produce riboflavin for feed and food fortification. By 2012, over 4,000 tonnes per annum were produced by such fermentation processes.
In the presence of high concentrations of hydrocarbons or aromatic compounds, some bacteria overproduce riboflavin, possibly as a protective mechanism. One such organism is Micrococcus luteus (American Type Culture Collection strain number ATCC 49442), which develops a yellow color due to production of riboflavin while growing on pyridine, but not when grown on other substrates, such as succinic acid.
Laboratory synthesis
The first total synthesis of riboflavin was carried out by Richard Kuhn's group. A substituted aniline, produced by reductive amination using D-ribose, was condensed with alloxan in the final step.
Uses
Treatment of corneal thinning
Keratoconus is the most common form of corneal ectasia, a progressive thinning of the cornea. The condition is treated by corneal collagen cross-linking, which increases corneal stiffness. Cross-linking is achieved by applying a topical riboflavin solution to the cornea, which is then exposed to ultraviolet A light.
Migraine prevention
In its 2012 guidelines, the American Academy of Neurology stated that high-dose riboflavin (400 mg) is "probably effective and should be considered for migraine prevention," a recommendation also provided by the UK National Migraine Centre. A 2017 review reported that daily riboflavin taken at 400 mg per day for at least three months may reduce the frequency of migraine headaches in adults. Research on high-dose riboflavin for migraine prevention or treatment in children and adolescents is inconclusive, and so supplements are not recommended.
Food coloring
Riboflavin is used as a food coloring (yellow-orange crystalline powder), and is designated with the E number, E101, in Europe for use as a food additive.
Dietary recommendations
The National Academy of Medicine updated the Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for riboflavin in 1998. The EARs for riboflavin for women and men aged 14 and over are 0.9 mg/day and 1.1 mg/day, respectively; the RDAs are 1.1 and 1.3 mg/day, respectively. RDAs are higher than EARs to provide adequate intake levels for individuals with higher than average requirements. The RDA during pregnancy is 1.4 mg/day and the RDA for lactating females is 1.6 mg/day. For infants up to the age of 12 months, the Adequate Intake (AI) is 0.3–0.4 mg/day and for children aged 1–13 years the RDA increases with age from 0.5 to 0.9 mg/day. As for safety, the IOM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of riboflavin there is no UL, as there are no human data for adverse effects from high doses. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs).
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men aged 15 and older the PRI is set at 1.6 mg/day. The PRI during pregnancy is 1.9 mg/day and the PRI for lactating females is 2.0 mg/day. For children aged 1–14 years the PRIs increase with age from 0.6 to 1.4 mg/day. These PRIs are higher than the U.S. RDAs. The EFSA also considered the maximum safe intake and, like the U.S. National Academy of Medicine, decided that there was not sufficient information to set a UL.
Safety
In humans, there is no evidence for riboflavin toxicity produced by excessive intakes and absorption becomes less efficient as dosage increases. Any excess riboflavin is excreted via the kidneys into urine, resulting in a bright yellow color known as flavinuria. During a clinical trial on the effectiveness of riboflavin for treating the frequency and severity of migraines, subjects were given up to 400 mg of riboflavin orally per day for periods of 3–12 months. Abdominal pains and diarrhea were among the side effects reported.
Labeling
For U.S. food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value (%DV). For riboflavin labeling purposes 100% of the Daily Value was 1.7 mg, but as of May 27, 2016, it was revised to 1.3 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake.
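As a rough illustration of the labeling arithmetic described above, the sketch below converts the riboflavin content of a serving into a percent of Daily Value using the 1.3 mg reference in effect since 2016; the function name and the sample serving amount are hypothetical, chosen only for this example.

    def riboflavin_percent_dv(mg_per_serving, daily_value_mg=1.3):
        """Return riboflavin content as a percent of the U.S. Daily Value (1.3 mg)."""
        return 100.0 * mg_per_serving / daily_value_mg

    # Example: a serving containing 0.4 mg of riboflavin
    print(round(riboflavin_percent_dv(0.4)))  # -> 31 (percent of Daily Value)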
Sources
The United States Department of Agriculture, Agricultural Research Service maintains a food composition database from which riboflavin content in hundreds of foods can be searched.
The white flour produced after milling of wheat has only 67% of its original riboflavin amount left, so white flour is enriched in some countries. Riboflavin is also added to ready-to-eat breakfast cereals. It is difficult to incorporate riboflavin into liquid products because it has poor solubility in water, hence the requirement for riboflavin-5'-phosphate (FMN, also called E101 when used as colorant), a more soluble form of riboflavin. The enrichment of bread and ready-to-eat breakfast cereals contributes significantly to the dietary supply of the vitamin. Free riboflavin is naturally present in animal-sourced foods along with protein-bound FMN and FAD. Cows' milk contains mainly free riboflavin, but both FMN and FAD are present at low concentrations.
Fortification
Some countries require or recommend fortification of grain foods. As of 2024, 57 countries, mostly in North and South America and southeast Africa, require food fortification of wheat flour or maize (corn) flour with riboflavin or riboflavin-5'-phosphate sodium. The amounts stipulated range from 1.3 to 5.75 mg/kg. An additional 16 countries have a voluntary fortification program. For example, the Indian government recommends 4.0 mg/kg for "maida" (white) and "atta" (whole wheat) flour.
Absorption, metabolism, excretion
More than 90% of riboflavin in the diet is in the form of protein-bound FMN and FAD. Exposure to gastric acid in the stomach releases the coenzymes, which are subsequently enzymatically hydrolyzed in the proximal small intestine to release free riboflavin.
Absorption occurs via a rapid active transport system, with some additional passive diffusion occurring at high concentrations. Bile salts facilitate uptake, so absorption is improved when the vitamin is consumed with a meal. The majority of newly absorbed riboflavin is taken up by the liver on the first pass, indicating that postprandial appearance of riboflavin in blood plasma may underestimate absorption. Three riboflavin transporter proteins have been identified: RFVT1 is present in the small intestine and also in the placenta; RFVT2 is highly expressed in brain and salivary glands; and RFVT3 is most highly expressed in the small intestine, testes, and prostate. Infants with mutations in the genes encoding these transport proteins can be treated with riboflavin administered orally.
Riboflavin is reversibly converted to FMN and then FAD. From riboflavin to FMN is the function of zinc-requiring riboflavin kinase; the reverse is accomplished by a phosphatase. From FMN to FAD is the function of magnesium-requiring FAD synthase; the reverse is accomplished by a pyrophosphatase. FAD appears to be an inhibitory end-product that down-regulates its own formation.
When excess riboflavin is absorbed by the small intestine, it is quickly removed from the blood and excreted in urine. Urine color is used as a hydration status biomarker and, under normal conditions, correlates with urine specific gravity and urine osmolality. However, riboflavin supplementation in large excess of requirements causes urine to appear more yellow than normal. With normal dietary intake, about two-thirds of urinary output is riboflavin, the remainder having been partially metabolized to hydroxymethylriboflavin from oxidation within cells, and as other metabolites. When consumption exceeds the ability to absorb, riboflavin passes into the large intestine, where it is catabolized by bacteria to various metabolites that can be detected in feces. There is speculation that unabsorbed riboflavin could affect the large intestine microbiome.
Deficiency
Prevalence
Riboflavin deficiency is uncommon in the United States and in other countries with wheat flour or corn meal fortification programs. From data collected in biannual surveys of the U.S. population, for ages 20 and over, 22% of women and 19% of men reported consuming a supplement that contained riboflavin, typically a vitamin-mineral multi-supplement. For the non-supplement users, the dietary intake of adult women averaged 1.74 mg/day and men 2.44 mg/day. These amounts exceed the RDAs for riboflavin of 1.1 and 1.3 mg/day respectively. For all age groups, on average, consumption from food exceeded the RDAs. A 2001-02 U.S. survey reported that less than 3% of the population consumed less than the Estimated Average Requirement of riboflavin.
Signs and symptoms
Riboflavin deficiency (also called ariboflavinosis) results in stomatitis, symptoms of which include chapped and fissured lips, inflammation of the corners of the mouth (angular stomatitis), sore throat, painful red tongue, and hair loss. The eyes can become itchy, watery, bloodshot, and sensitive to light. Riboflavin deficiency is associated with anemia. Prolonged riboflavin insufficiency may cause degeneration of the liver and nervous system. Riboflavin deficiency may increase the risk of preeclampsia in pregnant women. Deficiency of riboflavin during pregnancy can result in fetal birth defects, including heart and limb deformities.
Risk factors
People at risk of having low riboflavin levels include alcoholics, vegetarian athletes, and practitioners of veganism. Pregnant or lactating women and their infants may also be at risk, if the mother avoids meat and dairy products. Anorexia and lactose intolerance increase the risk of riboflavin deficiency. People with physically demanding lives, such as athletes and laborers, may require higher riboflavin intake. The conversion of riboflavin into FAD and FMN is impaired in people with hypothyroidism, adrenal insufficiency, and riboflavin transporter deficiency.
Causes
Riboflavin deficiency is usually found together with other nutrient deficiencies, particularly of other water-soluble vitamins. A deficiency of riboflavin can be primary (i.e. caused by poor vitamin sources in the regular diet) or secondary, which may be a result of conditions that affect absorption in the intestine. Secondary deficiencies are typically caused by the body not being able to use the vitamin, or by an increased rate of excretion of the vitamin. Diet patterns that increase risk of deficiency include veganism and low-dairy vegetarianism. Diseases such as cancer, heart disease and diabetes may cause or exacerbate riboflavin deficiency.
There are rare genetic defects that compromise riboflavin absorption, transport, metabolism or use by flavoproteins. One of these is riboflavin transporter deficiency, previously known as Brown–Vialetto–Van Laere syndrome. Variants of the genes SLC52A2 and SLC52A3, which code for the transporter proteins RFVT2 and RFVT3 respectively, are defective. Infants and young children present with muscle weakness, cranial nerve deficits including hearing loss, sensory symptoms including sensory ataxia, feeding difficulties, and respiratory distress caused by a sensorimotor axonal neuropathy and cranial nerve pathology. When untreated, infants with riboflavin transporter deficiency have labored breathing and are at risk of dying in the first decade of life. Treatment with oral supplementation of high amounts of riboflavin is lifesaving.
Other inborn errors of metabolism include riboflavin-responsive multiple acyl-CoA dehydrogenase deficiency, also known as a subset of glutaric acidemia type 2, and the C677T variant of the methylenetetrahydrofolate reductase enzyme, which in adults has been associated with risk of high blood pressure.
Diagnosis and assessment
The assessment of riboflavin status is essential for confirming cases with non-specific symptoms whenever deficiency is suspected. Total riboflavin excretion in healthy adults with normal riboflavin intake is about 120 micrograms per day, while excretion of less than 40 micrograms per day indicates deficiency. Riboflavin excretion rates decrease as a person ages, but increase during periods of chronic stress and the use of some prescription drugs.
Indicators used in humans are erythrocyte glutathione reductase (EGR), erythrocyte flavin concentration and urinary excretion. The erythrocyte glutathione reductase activity coefficient (EGRAC) provides a measure of tissue saturation and long-term riboflavin status. Results are expressed as an activity coefficient ratio, determined by enzyme activity with and without the addition of FAD to the culture medium. An EGRAC of 1.0 to 1.2 indicates that adequate amounts of riboflavin are present; 1.2 to 1.4 is considered low; greater than 1.4 indicates deficiency. For the less sensitive "erythrocyte flavin method", values greater than 400 nmol/L are considered adequate and values below 270 nmol/L are considered deficient. Urinary excretion is expressed as nmol of riboflavin per gram of creatinine. Low status is defined as excretion in the range of 50 to 72 nmol/g; deficiency is below 50 nmol/g. Urinary excretion load tests have been used to determine dietary requirements. For adult men, as oral doses were increased from 0.5 mg to 1.1 mg, there was a modest linear increase in urinary riboflavin, reaching 100 micrograms for a subsequent 24-hour urine collection. Beyond a load dose of 1.1 mg, urinary excretion increased rapidly, so that with a dose of 2.5 mg, urinary output was 800 micrograms for a 24-hour urine collection.
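The EGRAC cutoffs quoted above lend themselves to a simple classification. The sketch below is a minimal illustration using those thresholds; the function and category names are chosen here for clarity and are not taken from any clinical standard.

    def classify_egrac(egrac):
        """Interpret an erythrocyte glutathione reductase activity coefficient.

        Thresholds follow the values quoted in the text:
        1.0-1.2 adequate, 1.2-1.4 low, above 1.4 deficient.
        """
        if egrac <= 1.2:
            return "adequate"
        elif egrac <= 1.4:
            return "low"
        else:
            return "deficient"

    print(classify_egrac(1.1))   # adequate
    print(classify_egrac(1.35))  # low
    print(classify_egrac(1.6))   # deficient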
History
The name "riboflavin" comes from "ribose" (the sugar whose reduced form, ribitol, forms part of its structure) and "flavin", the ring-moiety that imparts the yellow color to the oxidized molecule (from Latin flavus, "yellow"). The reduced form, which occurs in metabolism along with the oxidized form, appears as orange-yellow needles or crystals. The earliest reported identification, predating any concept of vitamins as essential nutrients, was by Alexander Wynter Blyth. In 1879, Blyth isolated a water-soluble component of cows' milk whey, which he named "lactochrome", that fluoresced yellow-green when exposed to light.
In the early 1900s, several research laboratories were investigating constituents of foods, essential to maintain growth in rats. These constituents were initially divided into fat-soluble "vitamine" A and water-soluble "vitamine" B. (The "e" was dropped in 1920.) Vitamin B was further thought to have two components, a heat-labile substance called B1 and a heat-stable substance called B2. Vitamin B2 was tentatively identified to be the factor necessary for preventing pellagra, but that was later confirmed to be due to niacin (vitamin B3) deficiency. The confusion was due to the fact that riboflavin (B2) deficiency causes stomatitis symptoms similar to those seen in pellagra, but without the widespread peripheral skin lesions. For this reason, early in the history of identifying riboflavin deficiency in humans the condition was sometimes called "pellagra sine pellagra" (pellagra without pellagra).
In 1935, Paul Gyorgy, in collaboration with chemist Richard Kuhn and physician T. Wagner-Jauregg, reported that rats kept on a B2-free diet were unable to gain weight. Isolation of B2 from yeast revealed the presence of a bright yellow-green fluorescent product that restored normal growth when fed to rats. The growth restored was directly proportional to the intensity of the fluorescence. This observation enabled the researchers to develop a rapid chemical bioassay in 1933, and then isolate the factor from egg white, calling it ovoflavin. The same group then isolated a similar preparation from whey and called it lactoflavin. In 1934, Kuhn's group identified the chemical structure of these flavins as identical, settled on "riboflavin" as a name, and were also able to synthesize the vitamin.
Circa 1937, riboflavin was also referred to as "Vitamin G". In 1938, Richard Kuhn was awarded the Nobel Prize in Chemistry for his work on vitamins, which had included B2 and B6. In 1939, it was confirmed that riboflavin is essential for human health through a clinical trial conducted by William H. Sebrell and Roy E. Butler. Women fed a diet low in riboflavin developed stomatitis and other signs of deficiency, which were reversed when treated with synthetic riboflavin. The symptoms returned when the supplements were stopped.
| Biology and health sciences | Vitamins | Health |
26262 | https://en.wikipedia.org/wiki/Redshift | Redshift | In physics, a redshift is an increase in the wavelength, and corresponding decrease in the frequency and photon energy, of electromagnetic radiation (such as light). The opposite change, a decrease in wavelength and increase in frequency and energy, is known as a blueshift, or negative redshift. The terms derive from the colours red and blue which form the extremes of the visible light spectrum. The main causes of electromagnetic redshift in astronomy and cosmology are the relative motions of radiation sources, which give rise to the relativistic Doppler effect, and gravitational potentials, which gravitationally redshift escaping radiation. All sufficiently distant light sources show cosmological redshift corresponding to recession speeds proportional to their distances from Earth, a fact known as Hubble's law that implies the universe is expanding.
All redshifts can be understood under the umbrella of frame transformation laws. Gravitational waves, which also travel at the speed of light, are subject to the same redshift phenomena. The value of a redshift is often denoted by the letter z, corresponding to the fractional change in wavelength (positive for redshifts, negative for blueshifts), and by the wavelength ratio 1 + z (which is greater than 1 for redshifts and less than 1 for blueshifts).
Examples of strong redshifting are a gamma ray perceived as an X-ray, or initially visible light perceived as radio waves. Subtler redshifts are seen in the spectroscopic observations of astronomical objects, and are used in terrestrial technologies such as Doppler radar and radar guns.
Other physical processes exist that can lead to a shift in the frequency of electromagnetic radiation, including scattering and optical effects; however, the resulting changes are distinguishable from (astronomical) redshift and are not generally referred to as such (see section on physical optics and radiative transfer).
History
The history of the subject began in the 19th century, with the development of classical wave mechanics and the exploration of phenomena which are associated with the Doppler effect. The effect is named after the Austrian mathematician, Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. In 1845, the hypothesis was tested and confirmed for sound waves by the Dutch scientist Christophorus Buys Ballot. Doppler correctly predicted that the phenomenon would apply to all waves and, in particular, suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. Before this was verified, it was found that stellar colors were primarily due to a star's temperature, not motion. Only later was Doppler vindicated by verified redshift observations.
The Doppler redshift was first described by French physicist Hippolyte Fizeau in 1848, who noted the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the "Doppler–Fizeau effect". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by the method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines, using solar rotation, about 0.1 Å in the red. In 1887, Vogel and Scheiner discovered the "annual Doppler effect", the yearly change in the Doppler shift of stars located near the ecliptic, due to the orbital velocity of the Earth. In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors.
In the earlier part of the twentieth century, Slipher, Wirtz and others made the first measurements of the redshifts and blueshifts of galaxies beyond the Milky Way. They initially interpreted these redshifts and blueshifts as being due to random motions, but later Lemaître (1927) and Hubble (1929), using previous data, discovered a roughly linear correlation between the increasing redshifts of, and distances to, galaxies. Lemaître realized that these observations could be explained by a mechanism of producing redshifts seen in Friedmann's solutions to Einstein's equations of general relativity. The correlation between redshifts and distances arises in all expanding models.
Beginning with observations in 1912, Vesto Slipher discovered that most spiral galaxies, then mostly thought to be spiral nebulae, had considerable redshifts. Slipher first reported on his measurement in the inaugural volume of the Lowell Observatory Bulletin. Three years later, he wrote a review in the journal Popular Astronomy. In it he stated that "the early discovery that the great Andromeda spiral had the quite exceptional velocity of –300 km(/s) showed the means then available, capable of investigating not only the spectra of the spirals but their velocities as well."
Slipher reported the velocities for 15 spiral nebulae spread across the entire celestial sphere, all but three having observable "positive" (that is recessional) velocities. Subsequently, Edwin Hubble discovered an approximate relationship between the redshifts of such "nebulae", and the distances to them, with the formulation of his eponymous Hubble's law. Milton Humason worked on those observations with Hubble. These observations corroborated Alexander Friedmann's 1922 work, in which he derived the Friedmann–Lemaître equations. They are now considered to be strong evidence for an expanding universe and the Big Bang theory.
Arthur Eddington used the term "red-shift" as early as 1923, although the word does not appear unhyphenated until about 1934, when Willem de Sitter used it.
Measurement, characterization, and interpretation
The spectrum of light that comes from a source (see idealized spectrum illustration top-right) can be measured. To determine the redshift, one searches for features in the spectrum such as absorption lines, emission lines, or other variations in light intensity. If found, these features can be compared with known features in the spectrum of various chemical compounds found in experiments where that compound is located on Earth. A very common atomic element in space is hydrogen.
The spectrum of originally featureless light shone through hydrogen will show a signature spectrum specific to hydrogen that has features at regular intervals. If restricted to absorption lines it would look similar to the illustration (top right). If the same pattern of intervals is seen in an observed spectrum from a distant source but occurring at shifted wavelengths, it can be identified as hydrogen too. If the same spectral line is identified in both spectra—but at different wavelengths—then the redshift can be calculated using the table below.
Determining the redshift of an object in this way requires a frequency or wavelength range. In order to calculate the redshift, one has to know the wavelength of the emitted light in the rest frame of the source: in other words, the wavelength that would be measured by an observer located adjacent to and comoving with the source. Since in astronomical applications this measurement cannot be done directly, because that would require traveling to the distant star of interest, the method using spectral lines described here is used instead. Redshifts cannot be calculated by looking at unidentified features whose rest-frame frequency is unknown, or with a spectrum that is featureless or white noise (random fluctuations in a spectrum).
Redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequency) of an object. In astronomy, it is customary to refer to this change using a dimensionless quantity called z. If λ represents wavelength and f represents frequency (note that λf = c, where c is the speed of light), then z is defined by the equations:

z = (λ_observed − λ_emitted) / λ_emitted = (f_emitted − f_observed) / f_observed

or, equivalently, 1 + z = λ_observed / λ_emitted = f_emitted / f_observed.
After z is measured, the distinction between redshift and blueshift is simply a matter of whether z is positive or negative. For example, Doppler effect blueshifts (z < 0) are associated with objects approaching (moving closer to) the observer with the light shifting to greater energies. Conversely, Doppler effect redshifts (z > 0) are associated with objects receding (moving away) from the observer with the light shifting to lower energies. Likewise, gravitational blueshifts are associated with light emitted from a source residing within a weaker gravitational field as observed from within a stronger gravitational field, while gravitational redshifting implies the opposite conditions.
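As a minimal numerical illustration of the definition just given, the sketch below computes z from an emitted and an observed wavelength and labels the result as a redshift or blueshift; the example wavelengths (the hydrogen-alpha rest wavelength of 656.3 nm, shifted to an arbitrary observed value) are illustrative only.

    def redshift(lambda_observed, lambda_emitted):
        """Return z = (lambda_observed - lambda_emitted) / lambda_emitted."""
        return (lambda_observed - lambda_emitted) / lambda_emitted

    # Hydrogen-alpha rest wavelength of 656.3 nm observed at 666.0 nm:
    z = redshift(666.0, 656.3)
    print(f"z = {z:.4f}", "redshift" if z > 0 else "blueshift")  # z = 0.0148 redshift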
Physical origins
A redshift can occur due to relative motion of the source and observer, due to the expansion of the cosmos after emission, or due to the effect of mass-energy density on the space between the emitter and observer. The following sections explain each origin.
Doppler effect
If a source of the light is moving away from an observer, then redshift (z > 0) occurs; if the source moves towards the observer, then blueshift (z < 0) occurs. This is true for all electromagnetic waves and is explained by the Doppler effect. Consequently, this type of redshift is called the Doppler redshift. If the source moves away from the observer with velocity v, which is much less than the speed of light (v ≪ c), the redshift is given by

z ≈ v/c (since γ ≈ 1)

where c is the speed of light. In the classical Doppler effect, the frequency of the source is not modified, but the recessional motion causes the illusion of a lower frequency.
A more complete treatment of the Doppler redshift requires considering relativistic effects associated with motion of sources close to the speed of light. A complete derivation of the effect can be found in the article on the relativistic Doppler effect. In brief, objects moving close to the speed of light will experience deviations from the above formula due to the time dilation of special relativity, which can be corrected for by introducing the Lorentz factor γ into the classical Doppler formula as follows (for motion solely in the line of sight):

1 + z = (1 + v/c) γ
This phenomenon was first observed in a 1938 experiment performed by Herbert E. Ives and G.R. Stilwell, called the Ives–Stilwell experiment.
Since the Lorentz factor is dependent only on the magnitude of the velocity, this causes the redshift associated with the relativistic correction to be independent of the orientation of the source movement. In contrast, the classical part of the formula is dependent on the projection of the movement of the source into the line-of-sight, which yields different results for different orientations. If θ is the angle between the direction of relative motion and the direction of emission in the observer's frame (zero angle is directly away from the observer), the full form for the relativistic Doppler effect becomes:

1 + z = (1 + (v/c) cos θ) γ
and for motion solely in the line of sight (θ = 0°), this equation reduces to:

1 + z = √((1 + v/c) / (1 − v/c))
For the special case that the light is moving at right angle (θ = 90°) to the direction of relative motion in the observer's frame, the relativistic redshift is known as the transverse redshift, and a redshift of 1 + z = 1 / √(1 − v²/c²)
is measured, even though the object is not moving away from the observer. Even when the source is moving towards the observer, if there is a transverse component to the motion then there is some speed at which the dilation just cancels the expected blueshift and at higher speed the approaching source will be redshifted.
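A small numerical sketch of the formulas above, assuming motion described by a speed v and an angle theta in the observer's frame (theta = 0 meaning directly away from the observer, as in the text):

    import math

    C = 299_792_458.0  # speed of light, m/s

    def doppler_redshift(v, theta=0.0):
        """Relativistic Doppler redshift for speed v (m/s) and angle theta (radians).

        Uses 1 + z = (1 + (v/c) * cos(theta)) * gamma, so theta = 0 is motion
        directly away from the observer and theta = pi/2 gives the transverse case.
        """
        beta = v / C
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        return (1.0 + beta * math.cos(theta)) * gamma - 1.0

    print(doppler_redshift(0.1 * C))               # line-of-sight recession at 0.1c, ~0.106
    print(doppler_redshift(0.1 * C, math.pi / 2))  # purely transverse motion, ~0.005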
Cosmic expansion
The observations of increasing redshifts from more and more distant galaxies can be modeled assuming a homogeneous and isotropic universe combined with general relativity. This cosmological redshift can be written as a function of a(t), the time-dependent cosmic scale factor:

1 + z = a(t_now) / a(t_then)
The scale factor is monotonically increasing as time passes. Thus z is positive, close to zero for local stars, and increasing for distant galaxies that appear redshifted.
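A one-line sketch of the scale-factor relation just given; the variable names and the example value are illustrative only.

    def cosmological_redshift(a_then, a_now=1.0):
        """Redshift of light emitted when the scale factor was a_then,
        observed today when the scale factor is a_now (conventionally 1):
        z = a_now / a_then - 1."""
        return a_now / a_then - 1.0

    # Light emitted when the universe was half its present size:
    print(cosmological_redshift(0.5))  # -> 1.0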
Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called cosmic time–redshift relation. Denote a density ratio as Ω₀:

Ω₀ = ρ / ρ_crit

with ρ_crit the critical density demarcating a universe that eventually crunches from one that simply expands. This density is about three hydrogen atoms per cubic meter of space. At large redshifts, 1 + z > 1/Ω₀, one finds:

t(z) ≈ 2 / (3 H₀ Ω₀^(1/2) (1 + z)^(3/2))

where H₀ is the present-day Hubble constant, and z is the redshift.
The cosmological redshift is commonly attributed to stretching of the wavelengths of photons due to the stretching of space. This interpretation can be misleading.
As required by general relativity, the cosmological expansion of space has no effect on local physics. There is no term related to expansion in Maxwell's equations that govern light propagation. The cosmological redshift can instead be interpreted as an accumulation of infinitesimal Doppler shifts along the trajectory of the light.
There are several websites for calculating various times and distances from redshift, as the precise calculations require numerical integrals for most values of the parameters.
Distinguishing between cosmological and local effects
For small cosmological redshifts, additional Doppler redshifts and blueshifts due to the peculiar motions of the galaxies relative to one another cause a wide scatter from the standard Hubble law. The resulting situation can be illustrated by the Expanding Rubber Sheet Universe, a common cosmological analogy used to describe the expansion of the universe. If two objects are represented by ball bearings and spacetime by a stretching rubber sheet, the Doppler effect is caused by rolling the balls across the sheet to create peculiar motion. The cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched.
The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity. Describing the cosmological expansion origin of redshift, cosmologist Edward Robert Harrison said, "Light leaves a galaxy, which is stationary in its local region of space, and is eventually received by observers who are stationary in their own local region of space. Between the galaxy and the observer, light travels through vast regions of expanding space. As a result, all wavelengths of the light are stretched by the expansion of space. It is as simple as that..." Steven Weinberg clarified, "The increase of wavelength from emission to absorption of light does not depend on the rate of change of [the scale factor] at the times of emission or absorption, but on the increase of in the whole period from emission to absorption."
If the universe were contracting instead of expanding, we would see distant galaxies blueshifted by an amount proportional to their distance instead of redshifted.
Gravitational redshift
In the theory of general relativity, there is time dilation within a gravitational well. This is known as the gravitational redshift or Einstein Shift. The theoretical derivation of this effect follows from the Schwarzschild solution of the Einstein equations, which yields the following formula for redshift associated with a photon traveling in the gravitational field of an uncharged, nonrotating, spherically symmetric mass:

1 + z = 1 / √(1 − 2GM / (r c²))

where
G is the gravitational constant,
M is the mass of the object creating the gravitational field,
r is the radial coordinate of the source (which is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate), and
c is the speed of light.
This gravitational redshift result can be derived from the assumptions of special relativity and the equivalence principle; the full theory of general relativity is not required.
The effect is very small but measurable on Earth using the Mössbauer effect and was first observed in the Pound–Rebka experiment. However, it is significant near a black hole, and as an object approaches the event horizon the red shift becomes infinite. It is also the dominant cause of large angular-scale temperature fluctuations in the cosmic microwave background radiation (see Sachs–Wolfe effect).
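A minimal sketch of the Schwarzschild formula quoted above; the solar mass and radius used in the example are approximate values supplied for illustration.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    C = 299_792_458.0    # speed of light, m/s

    def gravitational_redshift(mass, r):
        """Redshift of light escaping from radial coordinate r (m) outside a
        nonrotating, uncharged, spherically symmetric mass (kg) to infinity:
        1 + z = 1 / sqrt(1 - 2GM / (r c^2))."""
        return 1.0 / math.sqrt(1.0 - 2.0 * G * mass / (r * C**2)) - 1.0

    # Light leaving the surface of the Sun (approximate mass and radius):
    print(gravitational_redshift(1.99e30, 6.96e8))  # roughly 2e-6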
Summary table
Several important special-case formulae for redshift apply in certain special spacetime geometries, as summarized in the following table. In all cases the magnitude of the shift (the value of z) is independent of the wavelength.
Observations in astronomy
The redshift observed in astronomy can be measured because the emission and absorption spectra for atoms are distinctive and well known, calibrated from spectroscopic experiments in laboratories on Earth. When the redshift of various absorption and emission lines from a single astronomical object is measured, is found to be remarkably constant. Although distant objects may be slightly blurred and lines broadened, it is by no more than can be explained by thermal or mechanical motion of the source. For these reasons and others, the consensus among astronomers is that the redshifts they observe are due to some combination of the three established forms of Doppler-like redshifts. Alternative hypotheses and explanations for redshift such as tired light are not generally considered plausible.
Spectroscopy, as a measurement, is considerably more difficult than simple photometry, which measures the brightness of astronomical objects through certain filters. When photometric data is all that is available (for example, the Hubble Deep Field and the Hubble Ultra Deep Field), astronomers rely on a technique for measuring photometric redshifts. Due to the broad wavelength ranges in photometric filters and the necessary assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can be substantial, and they are much less reliable than spectroscopic determinations.
However, photometry does at least allow a qualitative characterization of a redshift. For example, if a Sun-like spectrum had a redshift of z = 1, it would be brightest in the infrared (1000 nm) rather than at the blue-green (500 nm) color associated with the peak of its blackbody spectrum, and the light intensity will be reduced in the filter by a factor of (1 + z)² = 4. Both the photon count rate and the photon energy are redshifted. (See K correction for more details on the photometric consequences of redshift.)
Local observations
In nearby objects (within our Milky Way galaxy) observed redshifts are almost always related to the line-of-sight velocities associated with the objects being observed. Observations of such redshifts and blueshifts have enabled astronomers to measure velocities and parametrize the masses of the orbiting stars in spectroscopic binaries, a method first employed in 1868 by British astronomer William Huggins. Similarly, small redshifts and blueshifts detected in the spectroscopic measurements of individual stars are one way astronomers have been able to diagnose and measure the presence and characteristics of planetary systems around other stars and have even made very detailed differential measurements of redshifts during planetary transits to determine precise orbital parameters.
Finely detailed measurements of redshifts are used in helioseismology to determine the precise movements of the photosphere of the Sun. Redshifts have also been used to make the first measurements of the rotation rates of planets, velocities of interstellar clouds, the rotation of galaxies, and the dynamics of accretion onto neutron stars and black holes which exhibit both Doppler and gravitational redshifts. The temperatures of various emitting and absorbing objects can be obtained by measuring Doppler broadening—effectively redshifts and blueshifts over a single emission or absorption line. By measuring the broadening and shifts of the 21-centimeter hydrogen line in different directions, astronomers have been able to measure the recessional velocities of interstellar gas, which in turn reveals the rotation curve of our Milky Way. Similar measurements have been performed on other galaxies, such as Andromeda. As a diagnostic tool, redshift measurements are one of the most important spectroscopic measurements made in astronomy.
Extragalactic observations
The most distant objects exhibit larger redshifts corresponding to the Hubble flow of the universe. The largest-observed redshift, corresponding to the greatest distance and furthest back in time, is that of the cosmic microwave background radiation; the numerical value of its redshift is about z = 1089 (z = 0 corresponds to present time), and it shows the state of the universe about 13.8 billion years ago, and 379,000 years after the initial moments of the Big Bang.
The luminous point-like cores of quasars were the first "high-redshift" objects discovered before the improvement of telescopes allowed for the discovery of other high-redshift galaxies.
For galaxies more distant than the Local Group and the nearby Virgo Cluster, but within a thousand megaparsecs or so, the redshift is approximately proportional to the galaxy's distance. This correlation was first observed by Edwin Hubble and has come to be known as Hubble's law. Vesto Slipher was the first to discover galactic redshifts, in about 1912, while Hubble correlated Slipher's measurements with distances he measured by other means to formulate his Law. Hubble's law follows in part from the Copernican principle. Because it is usually not known how luminous objects are, measuring the redshift is easier than more direct distance measurements, so redshift is sometimes in practice converted to a crude distance measurement using Hubble's law.
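As the paragraph above notes, redshift is sometimes converted to a crude distance for nearby galaxies. The sketch below shows that low-redshift approximation (d ≈ cz/H₀); the Hubble constant of 70 km/s per Mpc is an assumed round value, not taken from the text.

    C_KM_S = 299_792.458  # speed of light in km/s

    def crude_distance_mpc(z, hubble_constant=70.0):
        """Approximate distance in megaparsecs from a small redshift,
        using Hubble's law with H0 in km/s per Mpc. Valid only for z << 1."""
        return C_KM_S * z / hubble_constant

    print(crude_distance_mpc(0.01))  # roughly 43 Mpc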
Gravitational interactions of galaxies with each other and clusters cause a significant scatter in the normal plot of the Hubble diagram. The peculiar velocities associated with galaxies superimpose a rough trace of the mass of virialized objects in the universe. This effect leads to such phenomena as nearby galaxies (such as the Andromeda Galaxy) exhibiting blueshifts as we fall towards a common barycenter, and redshift maps of clusters showing a fingers of god effect due to the scatter of peculiar velocities in a roughly spherical distribution. This added component gives cosmologists a chance to measure the masses of objects independent of the mass-to-light ratio (the ratio of a galaxy's mass in solar masses to its brightness in solar luminosities), an important tool for measuring dark matter.
The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. However, when the universe was much younger, the expansion rate, and thus the Hubble "constant", was larger than it is today. For more distant galaxies, then, whose light has been travelling to us for much longer times, the approximation of constant expansion rate fails, and the Hubble law becomes a non-linear integral relationship and dependent on the history of the expansion rate since the emission of the light from the galaxy in question. Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content.
While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, observations beginning in 1988 of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate.
Highest redshifts
Currently, the objects with the highest known redshifts are galaxies and the objects producing gamma ray bursts. The most reliable redshifts are from spectroscopic data, and the highest-confirmed spectroscopic redshift of a galaxy is that of JADES-GS-z14-0 with a redshift of z = 14.32, corresponding to 290 million years after the Big Bang. The previous record was held by GN-z11, with a redshift of z = 11.1, corresponding to 400 million years after the Big Bang, and by UDFy-38135539 at a redshift of z = 8.6, corresponding to 600 million years after the Big Bang.
Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift of about z = 7.5. The most distant-observed gamma-ray burst with a spectroscopic redshift measurement was GRB 090423, which had a redshift of z = 8.2. The most distant-known quasar, ULAS J1342+0928, is at z = 7.54. The highest-known redshift radio galaxy (TGSS1530) is at a redshift of z = 5.72 and the highest-known redshift molecular material is the detection of emission from the CO molecule from the quasar SDSS J1148+5251 at z = 6.42.
Extremely red objects (EROs) are astronomical sources of radiation that radiate energy in the red and near infrared part of the electromagnetic spectrum. These may be starburst galaxies that have a high redshift accompanied by reddening from intervening dust, or they could be highly redshifted elliptical galaxies with an older (and therefore redder) stellar population. Objects that are even redder than EROs are termed hyper extremely red objects (HEROs).
The cosmic microwave background has a redshift of about z = 1089, corresponding to an age of approximately 379,000 years after the Big Bang and a proper distance of more than 46 billion light-years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of 20 < z < 100. Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (at a redshift in excess of z ≈ 10^10) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of z ≈ 10^25.
In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at z = 6.60. Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it.
Redshift surveys
With advent of automated telescopes and improvements in spectroscopes, a number of collaborations have been made to map the universe in redshift space. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure properties of the large-scale structure of the universe. The Great Wall, a vast supercluster of galaxies over 500 million light-years wide, provides a dramatic example of a large-scale structure that redshift surveys can detect.
The first redshift survey was the CfA Redshift Survey, started in 1977 with the initial data collection completed in 1982. More recently, the 2dF Galaxy Redshift Survey determined the large-scale structure of one section of the universe, measuring redshifts for over 220,000 galaxies; data collection was completed in 2002, and the final data set was released 30 June 2003. The Sloan Digital Sky Survey (SDSS) is ongoing as of 2013 and aims to measure the redshifts of around 3 million objects. SDSS has recorded redshifts for galaxies as high as 0.8, and has been involved in the detection of quasars beyond z = 6. The DEEP2 Redshift Survey uses the Keck telescopes with the new "DEIMOS" spectrograph; a follow-up to the pilot program DEEP1, DEEP2 is designed to measure faint galaxies with redshifts 0.7 and above, and it is therefore planned to provide a high-redshift complement to SDSS and 2dF.
Effects from physical optics or radiative transfer
The interactions and phenomena summarized in the subjects of radiative transfer and physical optics can result in shifts in the wavelength and frequency of electromagnetic radiation. In such cases, the shifts correspond to a physical energy transfer to matter or other photons rather than being by a transformation between reference frames. Such shifts can be from such physical phenomena as coherence effects or the scattering of electromagnetic radiation whether from charged elementary particles, from particulates, or from fluctuations of the index of refraction in a dielectric medium as occurs in the radio phenomenon of radio whistlers. While such phenomena are sometimes referred to as "redshifts" and "blueshifts", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as "reddening" rather than "redshifting" which, as a term, is normally reserved for the effects discussed above.
In many circumstances scattering causes radiation to redden because entropy results in the predominance of many low-energy photons over few high-energy ones (while conserving total energy). Except possibly under carefully controlled conditions, scattering does not produce the same relative change in wavelength across the whole spectrum; that is, any calculated is generally a function of wavelength. Furthermore, scattering from random media generally occurs at many angles, and is a function of the scattering angle. If multiple scattering occurs, or the scattering particles have relative motion, then there is generally distortion of spectral lines as well.
In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening—similarly Rayleigh scattering causes the atmospheric reddening of the Sun seen in the sunrise or sunset and causes the rest of the sky to have a blue color. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line of sight.
Blueshift
The opposite of a redshift is a blueshift. A blueshift is any decrease in wavelength (increase in energy), with a corresponding increase in frequency, of an electromagnetic wave. In visible light, this shifts a color towards the blue end of the spectrum.
Doppler blueshift
Doppler blueshift is caused by movement of a source towards the observer. The term applies to any decrease in wavelength and increase in frequency caused by relative motion, even outside the visible spectrum. Only objects moving at near-relativistic speeds toward the observer are noticeably bluer to the naked eye, but the wavelength of any reflected or emitted photon or other particle is shortened in the direction of travel.
Doppler blueshift is used in astronomy to determine relative motion:
The Andromeda Galaxy is moving toward our own Milky Way galaxy within the Local Group; thus, when observed from Earth, its light is undergoing a blueshift.
Components of a binary star system will be blueshifted when moving towards Earth.
When observing spiral galaxies, the side spinning toward us will have a slight blueshift relative to the side spinning away from us (see Tully–Fisher relation).
Blazars are known to propel relativistic jets toward us, emitting synchrotron radiation and bremsstrahlung that appears blueshifted.
Nearby stars such as Barnard's Star are moving toward us, resulting in a very small blueshift.
Doppler blueshift of distant objects with a high z can be subtracted from the much larger cosmological redshift to determine relative motion in the expanding universe.
Gravitational blueshift
Unlike the relative Doppler blueshift, caused by movement of a source towards the observer and thus dependent on the received angle of the photon, gravitational blueshift is absolute and does not depend on the received angle of the photon:
It is a natural consequence of conservation of energy and mass–energy equivalence, and was confirmed experimentally in 1959 with the Pound–Rebka experiment. Gravitational blueshift contributes to cosmic microwave background (CMB) anisotropy via the Sachs–Wolfe effect: when a gravitational well evolves while a photon is passing, the amount of blueshift on approach will differ from the amount of gravitational redshift as it leaves the region.
Blue outliers
There are faraway active galaxies that show a blueshift in their [O III] emission lines. One of the largest blueshifts is found in the narrow-line quasar, PG 1543+489, which has a relative velocity of -1150 km/s. These types of galaxies are called "blue outliers".
Cosmological blueshift
In a hypothetical universe undergoing a runaway Big Crunch contraction, a cosmological blueshift would be observed, with galaxies further away being increasingly blueshifted—the exact opposite of the actually observed cosmological redshift in the present expanding universe.
| Physical sciences | Physical cosmology | null |
26286 | https://en.wikipedia.org/wiki/Rocket-propelled%20grenade | Rocket-propelled grenade | A rocket-propelled grenade (RPG) is a shoulder-fired rocket weapon that launches rockets equipped with an explosive warhead. Most RPGs can be carried by an individual soldier, and are frequently used as anti-tank weapons. The warhead is affixed to a rocket motor that propels the RPG towards the target, and the round is stabilized in flight by fins. Some types of RPG are reloadable with new rocket-propelled grenades, while others are single-use. RPGs are generally loaded from the front.
RPGs with high-explosive anti-tank (HEAT) warheads are very effective against lightly armored vehicles such as armored personnel carriers (APCs) and armored cars. However, modern, heavily-armored vehicles, such as upgraded APCs and main battle tanks, are generally too well-protected (with thick composite or reactive armor) to be penetrated by an RPG, unless less armored sections of the vehicle are exploited. Various warheads are also capable of causing secondary damage to vulnerable systems (especially sights, tracks, rear and roof of turrets) and other unarmored targets.
The term "rocket-propelled grenade" is from the Russian acronym РПГ (Ручной Противотанковый Гранатомёт, Ruchnoy Protivotankovy Granatomyot), meaning "handheld anti-tank grenade launcher", the name given to early Russian designs.
History
Predecessor weapons
The static nature of trench warfare in World War I encouraged the use of shielded defenses, even including personal armor, that were impenetrable by standard rifle ammunition. This led to some isolated experiments with higher caliber rifles, similar to elephant guns, using armor-piercing ammunition. The first tanks, the British Mark I, could be penetrated by these weapons under the right conditions. Mark IV tanks, however, had slightly thicker armor. In response, the Germans rushed to create an upgraded version of these early anti-armor rifles, the Tankgewehr M1918, the first anti-tank rifle. In the inter-war years, tank armor continued to increase overall, to the point that anti-tank rifles could no longer be effective against anything but light tanks; any rifle made powerful enough for heavier tanks would exceed the ability of a soldier to carry and fire the weapon.
Even with the first tanks, artillery officers often used field guns depressed to fire directly at armored targets. However, this practice expended much valuable ammunition and was of increasingly limited effectiveness as tank armor became thicker. This led to the concept of anti-tank guns, a form of artillery specifically designed to destroy armored fighting vehicles, normally from static defensive positions (that is, immobile during a battle).
The first dedicated anti-tank artillery began appearing in the 1920s, and by World War II it was a common feature in most armies. In order to penetrate armor, these guns fired specialized ammunition from proportionally longer barrels to achieve a higher muzzle velocity than field guns. Most anti-tank guns were developed in the 1930s as improvements in tanks were noted, and nearly every major arms manufacturer produced one type or another.
Anti-tank guns deployed during World War II were manned by specialist infantry rather than artillery crews, and issued to infantry units accordingly. The anti-tank guns of the 1930s were of small caliber; nearly all major armies possessing them used 37 mm ammunition, except for the British Army, which had developed the 40 mm Ordnance QF 2-pounder. As World War II progressed, the appearance of heavier tanks rendered these weapons obsolete and anti-tank guns likewise began firing larger calibre and more effective armor-piercing shells. Although a number of large caliber guns were developed during the war that were capable of knocking out the most heavily armored tanks, they proved slow to set up and difficult to conceal. The latter generation of low-recoil anti-tank weapons, which allowed projectiles the size of an artillery shell to be fired from a man's shoulder, was considered a far more viable option for arming infantrymen.
First shaped charge, portable weapons
The RPG has its roots in the 20th century with the early development of the explosive shaped charge, in which the explosive is made with a conical hollow, which concentrates its power on the impact point. Before the adoption of the shaped charge, anti-tank guns and tank guns relied primarily on the kinetic energy of metal projectiles to defeat armor. Soldier-carried anti-tank rifles such as the Boys anti-tank rifle could be used against lightly-armored tankettes and light armored vehicles. However, as tank armor increased in thickness and effectiveness, the anti-tank guns needed to defeat them became increasingly heavy, cumbersome and expensive. During World War II, as tank armor continued to thicken, progressively larger-calibre anti-tank guns were developed to defeat it.
While larger anti-tank guns were more effective, their weight meant that they were increasingly mounted on wheeled, towed carriages, so infantry operating on foot could find themselves without anti-tank guns and effectively defenseless against tanks. With anti-tank rifles no longer effective, armies needed a human-portable weapon, one that a single soldier could carry and fire, to let infantry defeat enemy armor when no towed guns were available. Initial attempts to put such weapons in the hands of the infantry resulted in weapons like the Soviet RPG-40 "blast effect" hand grenade (where "RPG" stood for ruchnaya protivotankovaya granata, meaning hand-held anti-tank grenade). Being hand thrown, however, such weapons still had to be used at suicidally close range to be effective. What was needed was a means of delivering a shaped charge warhead from a distance. Different approaches to this goal led to the anti-tank spigot mortar, the recoilless rifle, and, with the development of practical rocketry, the RPG.
Research occasioned by World War II produced such weapons as the American bazooka, the British/Allied PIAT and the German Panzerfaust, which combined portability with effectiveness against armored vehicles such as tanks. The Soviet-developed RPG-7 is the most widely distributed, recognizable and used RPG in the world. Its basic design was developed by the Soviets shortly after World War II as the RPG-2, which resembles the bazooka in being reloadable and the Panzerfaust in firing an oversized grenade that protrudes from a smaller launch tube with a recoilless launch. Unlike RPG-7 rounds, however, RPG-2 rounds have no propulsion beyond the launch charge; RPG-7 rounds also carry a sustainer motor, making them rocket-propelled grenades in the literal sense.
Soviet RPGs were used extensively during the Vietnam War (by the Vietnam People's Army and Vietcong), during the Soviet invasion of Afghanistan by the Mujahideen, and against South African forces in Angola and Namibia (formerly South West Africa) by SWAPO guerrillas during what the South Africans called the South African Border War. In the 2000s, they were still being used widely in conflict areas such as Chechnya, Iraq, and Sri Lanka. Militants have also used RPGs against helicopters: Taliban fighters shot down U.S. CH-47 Chinook helicopters in June 2005 and August 2011, and Somali militiamen shot down two U.S. UH-60 Black Hawk helicopters during the Battle of Mogadishu in 1993.
RPGs were used by militants to destroy "hundreds" of vehicles (AFVs, armored Humvees, etc.) during the War in Afghanistan (2001–2021).
Design
RPG warheads used against tanks and other armored vehicles typically carry a shaped charge: an explosive charge shaped to focus the energy of the detonation. Various types are used to penetrate tank armor; a typical modern lined shaped charge can penetrate steel armor to a depth of seven or more times the diameter of the charge (charge diameters, CD), and depths of 10 CD and above have been achieved. Despite the popular misconception that shaped charges "melt" tank armor, the effect does not depend on heating or melting: the superplastic metal jet formed when the charge strikes armor is produced by sudden, intense mechanical stress, and it penetrates through kinetic energy rather than by melting its way through.
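As a rough worked example of that rule of thumb (the charge diameter here is an assumed figure, not the specification of any particular round): a lined shaped charge 90 mm in diameter penetrating 7 charge diameters would defeat about 7 × 90 mm ≈ 630 mm of steel, and about 900 mm at 10 CD.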
An RPG comprises two main parts: the launcher and a rocket equipped with a warhead that follows a ballistic trajectory after the rocket motor has completed its burn. The most common types of warheads are high explosive (HE) and high-explosive anti-tank (HEAT) rounds. HE rounds can be used against troops or unarmored structures or vehicles. HEAT rounds can be used against armored vehicles. These warheads are affixed to a rocket motor and stabilized in flight with fins. Some types of RPG are single-use disposable units, such as the RPG-22 and M72 LAW; with these units, once the rocket is fired, the entire launcher is disposed of. Others are reloadable, such as the Soviet RPG-7 and the Israeli B-300. With reloadable RPGs, a new rocket can be inserted into the muzzle of the weapon after firing.
An issue with the earliest RPG weapon systems, such as the German Panzerschreck, was that the rocket motor's exhaust could harm the operator; the weapon therefore featured a metal shield attached to the launch tube to protect the operator's face from the blast. In later designs such as the RPG-7, the rocket exits the launcher under a low-powered gunpowder charge, and the main rocket motor ignites only after the rocket has travelled a short distance. In some other designs, the propellant charge burns completely within the tube.
An RPG is an inexpensive way for an infantryman to safely deliver an explosive payload or warhead over a short distance with reasonable accuracy. Substantially more expensive guided anti-tank missiles are used at longer ranges, or when accuracy or an overflying top attack is paramount. Anti-tank missiles such as the Malyutka can be guided by the operator after firing using sights, heat sensors or IR signatures; an RPG, however, is not guided towards the target, nor can its rocket be controlled in flight after being aimed and launched. While the lack of active targeting or after-firing guidance can be viewed as a weakness, it also makes RPGs cheap and hard to defend against with electronic countermeasures or decoys. For example, if a soldier or other fighter launches an RPG at a hovering helicopter, flares released to confuse optical seekers, chaff released to confuse radar, and signal jamming will have no effect on the in-flight RPG warhead, even though these measures might protect against more sophisticated surface-to-air missiles.
Warheads
The HEAT (high-explosive anti-tank) round is a standard shaped charge warhead, similar in concept to those used in many tank cannon rounds. In this type of warhead, the shape of the explosive within the warhead focuses the detonation energy on a copper (or similar metal) lining, collapsing it and propelling some of the metal forward at very high velocity in a highly plastic state. The resulting narrow jet of metal can defeat armor equivalent to several hundred millimeters of RHA, such as that used on light and medium armored vehicles. Heavily armored vehicles such as main battle tanks, however, are generally too well protected to be penetrated by an RPG unless weaker sections of the armor are exploited. Various warheads are also capable of causing secondary damage to vulnerable systems (especially sights, tracks, and the rear and roof of turrets) and other soft targets. The warhead detonates on impact or when its fuse runs out; the fuse is usually set to the maximum burn time of the rocket motor, but it can be shortened for improvised anti-aircraft use.
Specialized warheads are available for illumination, smoke, tear gas, and white phosphorus. Russia, China, and many former Warsaw Pact nations have also developed a fuel-air explosive (thermobaric) warhead. Another recent development is a tandem HEAT warhead capable of penetrating reactive armor.
So-called PRIGs (propelled recoilless improvised grenade) were improvised warheads used by the Provisional IRA.
Effectiveness
The RPG-29 uses a tandem-charge high-explosive anti-tank warhead to penetrate explosive reactive armor (ERA) as well as composite armor behind it.
In August 2006, in al-Amarah in Iraq, a Soviet RPG-29 damaged the front underside of a Challenger 2 tank, detonating ERA in the area of the driver's cabin. The driver lost part of a foot and two other crew members were injured, but the driver was able to reverse the tank to an aid post. The incident was not made public until May 2007; in response to accusations, the MoD said, "We have never claimed that the Challenger 2 is impenetrable." Since then, the ERA has been replaced with Dorchester armor blocks and the steel underbelly has been lined with additional armor as part of the 'Streetfighter' upgrade, a direct response to this incident. In May 2008, The New York Times disclosed that an American M1 tank had also been damaged by an RPG-29 in Iraq. The U.S. Army rates the RPG-29 threat to American armor as high and has refused to allow the newly formed Iraqi Army to buy it, fearing that it would fall into the hands of insurgents.
Various armies and manufacturers have developed add-on tank armor and other systems for urban combat, such as the Tank Urban Survival Kit (TUSK) for M1 Abrams, slat armor for the Stryker, ERA kit for the FV432, AZUR for Leclerc, and others. Similar solutions are active protection systems (APS), engaging and destroying closing projectiles, such as the Russian Drozd and Arena, as well as the recent Israeli Trophy Active Protection System.
The RPG-30 was designed to address the threat of active protection systems on tanks by using a false target to trick the APS. The RPG-30 closely resembles the RPG-27 in that it is a man-portable, disposable anti-tank rocket launcher with a single-shot capacity. Unlike the RPG-27, however, it carries a smaller-diameter precursor round in a side barrel in addition to the main round in the main tube. This precursor round acts as a false target, tricking the target's active protection system into engaging it and allowing the main round a clear path into the target while the APS is stuck in the 0.2–0.4 second delay it needs before its next engagement. Recent German systems have reduced this reaction delay to mere milliseconds, cancelling the advantage.
The PG-30 is the main round of the RPG-30. The round is a tandem shaped charge with a weight of and has a range of and a stated penetration capability in excess of rolled homogeneous armor (RHA) (after ERA), reinforced concrete, , and of soil. Reactive armor, including explosive reactive armor (ERA), can be defeated with multiple hits into the same place, such as by tandem-charge weapons, which fire two or more shaped charges in rapid succession.
Protection
An early method of disabling shaped charges, developed during World War II, was to apply thin skirt armor or wire mesh at a distance around the hull and turret of the tank. The skirt or mesh armor (cage armor) triggers the RPG warhead on contact, and much of the energy the shaped charge produces dissipates before it reaches the main armor of the vehicle. Well-sloped armor also gives some protection, because the shaped charge is forced to penetrate a greater thickness of armor owing to the oblique angle. The benefits of cage armor are still considered significant on modern battlefields in the Middle East, and although similar effects can be obtained using spaced armor, either as part of the original design or as appliqué armor fitted later, cage armor is preferred for its low weight and ease of repair.
Today, technologically advanced armies have implemented composite armors such as Chobham armour, which provide superior protection to steel. For added protection, vehicles may be retrofitted with reactive armor; on impact, reactive tiles explode or deform, disrupting the normal function of the shaped charge. Russian and Israeli vehicles also use active protection systems such as Drozd, Arena APS or Trophy. Such a system detects and shoots down incoming projectiles before they reach the vehicle. As in all arms races, these developments in armor countermeasures have led to the development of RPG rounds designed specifically to defeat them, with methods such as a tandem-charge warhead, which has two shaped charges, of which the first is meant to activate any reactive armor, and the second to penetrate the vehicle.
Weapons by country
Soviet Union and Russian Federation
Specific types of RPGs (current, past and under development) include:
Anti-personnel explosives
RPG-7: Reloadable RPG launcher, TBG-7V thermobaric rocket and OG-7V fragmentation grenade
RPG-27 "Tavolga": One-shot disposable RPG launcher, RShG-1 thermobaric rockets
RShG-1
RShG-2
RPG-29 "Vampir": Reloadable RPG launcher, TBG-29 thermobaric rockets
RPO Rys
RPO-A Shmel
MGK Bur
MRO-A
Anti-tank explosives
RPG-1
RPG-2
RPG-7: Reloadable RPG launcher, PG-7VL with ≈ RHA penetration, PG-7VR with ≈ RHA penetration after ERA
RPG-16: Reloadable RPG launcher, PG-16 with ≈ RHA penetration, higher accuracy and four times the range of the RPG-7
RPG-18 "Muha (Fly)": One-shot disposable RPG launcher, PG-18 with ≈ RHA penetration
RPG-22 "Netto (Nett)": One-shot disposable RPG launcher, PG-22 with ≈ RHA penetration
RPG-26 "Aglen": One-shot disposable RPG launcher, PG-26 with ≈ RHA penetration
RPG-27 "Tavolga": One-shot disposable RPG launcher, PG-27 with ≈ RHA penetration after ERA
RPG-28 "Klyukva": One-shot disposable RPG launcher, with ≈ RHA penetration after ERA
RPG-29 "Vampir": Reloadable RPG launcher, PG-29V with ≈ RHA penetration after ERA
RPG-30: One-shot disposable RPG launcher with an additional 'precursor' sub-munition intended to defeat active protection systems such as Trophy; ≈ RHA penetration after active protection
RPG-32 "Barkas": Latest variant of RPG Caliber, PG-32V with ≈ RHA penetration
Bunker buster explosives
RMG: One-shot disposable RPG launcher of 105 mm (4.1 in) caliber, PG-27V with ≈ RHA penetration
United States
The United States Army developed a lightweight antitank weapon (LAW) in the mid-1950s, and by 1961 the M72 LAW was in use. It is a shoulder-fired, disposable, recoilless rocket launcher with a HEAT warhead that is easy to use and effective against armored vehicles, and it fires a fin-stabilized rocket. It was used during the Vietnam War and is still in use today. In response to the threat of thicker armor, this weapon was replaced by the AT4 recoilless rifle, a larger and non-collapsible, albeit still single-shot, weapon.
The United States Army and Marine Corps also use reloadable launchers: the M3 Multi-role Anti-armor Anti-tank Weapon System (MAAWS), derived from the 84 mm Carl Gustaf, and the 83 mm Shoulder-Launched Multipurpose Assault Weapon (SMAW), derived from the Israeli B-300. Unlike the RPG, both are reloaded from the breech end rather than the muzzle.
Bazooka
M72 LAW
M3 Multi-role Anti-armor Anti-tank Weapon System (MAAWS)
Mk 153 Shoulder-Launched Multipurpose Assault Weapon (SMAW)
M141 Bunker Defeat Munition
PSRL-1
France
LRAC F1
RAC 112 (Apilas)
Germany
Raketenpanzerbüchse Panzerschreck
Panzerfaust 2
Panzerfaust 3
Israel
B-300 (SMAW)
IMI Shipon
MATADOR
Spain
C90-CR (M3)
Czechoslovakia
RPG-75
Poland
RPG-76 Komar
Serbia
M90 Stršljen
Yugoslavia
M79 Osa
M80 Zolja
China
Type 69 RPG
PF-89
Palestine
Al-Bana
Batar
Yasin
Ukraine
RK-4 Ingul
Tactics
One of the first instances of the weapon being used by militants was on January 13, 1975, at Orly Airport in France, when Carlos the Jackal, together with another member of the PFLP, fired two Soviet RPG-7 grenades at an Israeli El Al airliner. Both missed the target, and one hit a Yugoslav Airlines DC-9 instead.
In Afghanistan, Mujahideen guerrillas used RPG-7s to destroy Soviet vehicles. To ensure a kill, two to four RPG operators would be assigned to each vehicle, and an armored-vehicle hunter-killer team could have as many as 15 RPGs. In areas where vehicles were confined to a single path (a mountain road, swamps, snow, urban areas), RPG teams trapped convoys by destroying the first and last vehicles in line, preventing movement of the others. This tactic was especially effective in cities. Convoys learned to avoid approaches with overhangs and to send infantrymen forward in hazardous areas to detect the RPG teams.
Multiple shooters were also effective against heavy tanks with reactive armor: The first shot would be against the driver's viewing prisms. Following shots would be in pairs, one to set off the reactive armor, the second to penetrate the tank's armor. Favored weak spots were the top and rear of the turret.
Afghan fighters sometimes used RPG-7s at extreme range, detonating the rounds with their 4.5-second self-destruct timers, as a method of long-distance approach denial against infantry and reconnaissance. The most noteworthy use of RPGs against aircraft in Afghanistan occurred on August 6, 2011, when Taliban fighters shot down a U.S. CH-47 Chinook helicopter, killing all 38 personnel on board, including members of SEAL Team Six. An earlier anti-aircraft kill by the Taliban occurred during Operation Red Wings on June 28, 2005, when a Chinook helicopter was destroyed by an unguided RPG.
In the period following the 2003 invasion of Iraq, the RPG-7 became a favorite weapon of the insurgent forces fighting U.S. troops. Since most of the readily available RPG-7 rounds cannot penetrate M1 Abrams tank armor from almost any angle, it is primarily effective against soft-skinned or lightly armored vehicles, and infantry. Even if the RPG hit does not completely disable the tank or kill the crew, it can still damage external equipment, lowering the tank's effectiveness or forcing the crew to abandon and destroy it. Newer RPG-7 rounds are more capable, and in August 2006, an RPG-29 round penetrated the frontal ERA of a Challenger 2 tank during an engagement in al-Amarah, Iraq, and wounded several crew members.
RPGs were a main tool of the FMLN's guerrilla forces in the Salvadoran Civil War. For example, during the June 19, 1986, overrun of the San Miguel army base, FMLN sappers dressed only in black shorts, their faces blacked out with grease, sneaked through the barbed wire at night, avoiding the searchlights, and made it to within firing range of the outer wall. Using RPGs to initiate the attack, they blew through the wall and killed a number of Salvadoran soldiers. They eliminated the outermost sentries and searchlights with the rockets, then reached the inner wall, which they also punched through, and were able to create mayhem as their comrades attacked from the outside.
During the First (1994–1996) and Second Chechen Wars (1999–2009), Chechen rebels used RPGs to attack Russian tanks from basements and high rooftops. This tactic was effective because tank main guns could not be depressed or raised far enough to return fire; in addition, armor on the very top and bottom of tanks is usually the weakest. Russian forces had to rely on artillery suppression, good crew gunners and infantry screens to prevent such attacks. Tank columns were eventually protected by attached self-propelled anti-aircraft guns (ZSU-23-4 Shilka, 9K22 Tunguska) used in the ground role to suppress and destroy Chechen ambushes. Chechen fighters formed independent "cells" that worked together to destroy a specific Russian armored target. Each cell contained small arms and some form of RPG (RPG-7V or RPG-18, for example). The small arms were used to button the tank up and keep any accompanying infantry occupied, while the RPG gunner struck at the tank. While doing so, other teams would attempt to fire at the target in order to overwhelm the Russians' ability to respond effectively. To further increase the chance of success, the teams took up positions at different elevations where possible. Firing from the third floor or higher allowed good shots at the weakest armor (the top). When the Russians began fielding tanks fitted with explosive reactive armor (ERA), the Chechens had to adapt their tactics, because the RPGs they had access to were unlikely to destroy such a tank.
Using RPGs as improvised anti-aircraft batteries has proved successful in Somalia, Afghanistan, and Chechnya. Helicopters are typically ambushed as they land, take off or hover. In Afghanistan, the Mujahideen often modified RPGs for use against Soviet helicopters by adding a curved pipe to the rear of the launcher tube, which diverted the backblast, allowing the RPG to be fired upward at aircraft from a prone position. This made the operator less visible prior to firing and decreased the risk of injury from hot exhaust gases. The Mujahideen also utilized the 4.5-second timer on RPG rounds to make the weapon function as part of a flak battery, using multiple launchers to increase hit probabilities. At the time, Soviet helicopters countered the threat from RPGs at landing zones by first clearing them with anti-personnel saturation fire. The Soviets also varied the number of accompanying helicopters (two or three) in an effort to upset Afghan force estimations and preparation. In response, the Mujahideen prepared dug-in firing positions with top cover, and again, Soviet forces altered their tactics by using air-dropped thermobaric fuel-air bombs on such landing zones. As the U.S.-supplied Stinger surface-to-air missiles became available to them, the Afghans abandoned RPG attacks as the smart missiles proved especially efficient in the destruction of unarmed Soviet transport helicopters, such as Mil Mi-17. In Somalia, both of the UH-60 Black Hawk helicopters lost by U.S. forces during the Battle of Mogadishu in 1993 were downed by RPG-7s.
| Technology | Missiles | null |
26298 | https://en.wikipedia.org/wiki/Radiometric%20dating | Radiometric dating | Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. The method compares the abundance of a naturally occurring radioactive isotope within the material to the abundance of its decay products, which form at a known constant rate of decay. The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. By allowing the establishment of geological timescales, it provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied.
Fundamentals
Radioactive decay
All ordinary matter is made up of combinations of chemical elements, each with its own atomic number, indicating the number of protons in the atomic nucleus. Additionally, elements may exist in different isotopes, with each isotope of an element differing in the number of neutrons in the nucleus. A particular isotope of a particular element is called a nuclide. Some nuclides are inherently unstable. That is, at some point in time, an atom of such a nuclide will undergo radioactive decay and spontaneously transform into a different nuclide. This transformation may be accomplished in a number of different ways, including alpha decay (emission of alpha particles) and beta decay (electron emission, positron emission, or electron capture). Another possibility is spontaneous fission into two or more nuclides.
While the moment in time at which a particular nucleus decays is unpredictable, a collection of atoms of a radioactive nuclide decays exponentially at a rate described by a parameter known as the half-life, usually given in units of years when discussing dating techniques. After one half-life has elapsed, one half of the atoms of the nuclide in question will have decayed into a "daughter" nuclide or decay product. In many cases, the daughter nuclide itself is radioactive, resulting in a decay chain, eventually ending with the formation of a stable (nonradioactive) daughter nuclide; each step in such a chain is characterized by a distinct half-life. In these cases, usually the half-life of interest in radiometric dating is the longest one in the chain, which is the rate-limiting factor in the ultimate transformation of the radioactive nuclide into its stable daughter. Isotopic systems that have been exploited for radiometric dating have half-lives ranging from only about 10 years (e.g., tritium) to over 100 billion years (e.g., samarium-147).
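For illustration, a minimal Python sketch of this exponential decay (the nuclide and the elapsed times below are chosen arbitrarily as examples) is:

```python
import math

def remaining_fraction(elapsed_years: float, half_life_years: float) -> float:
    """Fraction of the original parent nuclide left after elapsed_years.

    Uses N(t)/N0 = exp(-lambda * t) with lambda = ln(2) / half-life,
    which is equivalent to (1/2) ** (elapsed_years / half_life_years).
    """
    decay_constant = math.log(2) / half_life_years
    return math.exp(-decay_constant * elapsed_years)

# After one half-life, half of the parent remains; after ten half-lives,
# roughly one part in a thousand remains.
half_life = 5730.0  # years; carbon-14 is used here purely as an example
for n in (1, 2, 10):
    print(n, "half-lives:", remaining_fraction(n * half_life, half_life))
```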
For most radioactive nuclides, the half-life depends solely on nuclear properties and is essentially constant. This is known because decay constants measured by different techniques give consistent values within analytical errors and the ages of the same materials are consistent from one method to another. It is not affected by external factors such as temperature, pressure, chemical environment, or presence of a magnetic or electric field. The only exceptions are nuclides that decay by the process of electron capture, such as beryllium-7, strontium-85, and zirconium-89, whose decay rate may be affected by local electron density. For all other nuclides, the proportion of the original nuclide to its decay products changes in a predictable way as the original nuclide decays over time. This predictability allows the relative abundances of related nuclides to be used as a clock to measure the time from the incorporation of the original nuclides into a material to the present.
Decay constant determination
The radioactive decay constant, the probability that an atom will decay per year, is the solid foundation of the common measurement of radioactivity. The accuracy and precision of the determination of an age (and a nuclide's half-life) depend on the accuracy and precision of the decay constant measurement. The in-growth method is one way of measuring the decay constant of a system: it involves accumulating daughter nuclides. Unfortunately, for nuclides with low decay constants (which are useful for dating very old samples), long periods of time (decades) are required to accumulate enough decay products in a single sample to measure them accurately. A faster method uses particle counters to determine the alpha, beta or gamma activity and then divides that activity by the number of radioactive nuclides present. However, it is challenging and expensive to determine the number of radioactive nuclides accurately. Alternatively, decay constants can be determined by comparing isotope data for rocks of known age. This method requires at least one of the isotope systems to be very precisely calibrated, such as the Pb–Pb system.
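A minimal sketch of the particle-counting approach described above, with invented activity and atom counts as assumptions (not measurements from any real experiment), is:

```python
import math

def decay_constant_from_activity(activity_per_year: float, n_atoms: float) -> float:
    """Estimate the decay constant lambda (per year) as activity / N,
    since the activity of a radioactive nuclide is A = lambda * N."""
    return activity_per_year / n_atoms

# Assumed example values:
activity = 4.8e7   # decays per year registered for the sample
n_atoms = 1.0e18   # independently determined number of parent atoms
lam = decay_constant_from_activity(activity, n_atoms)
half_life = math.log(2) / lam
print(f"lambda ~ {lam:.3e} per year, half-life ~ {half_life:.3e} years")
```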
Accuracy of radiometric dating
The basic equation of radiometric dating requires that neither the parent nuclide nor the daughter product can enter or leave the material after its formation. The possible confounding effects of contamination of parent and daughter isotopes have to be considered, as do the effects of any loss or gain of such isotopes since the sample was created. It is therefore essential to have as much information as possible about the material being dated and to check for possible signs of alteration. Precision is enhanced if measurements are taken on multiple samples from different locations of the rock body. Alternatively, if several different minerals can be dated from the same sample and are assumed to be formed by the same event and were in equilibrium with the reservoir when they formed, they should form an isochron. This can reduce the problem of contamination. In uranium–lead dating, the concordia diagram is used which also decreases the problem of nuclide loss. Finally, correlation between different isotopic dating methods may be required to confirm the age of a sample. For example, the age of the Amitsoq gneisses from western Greenland was determined to be 3.60 ± 0.05 Ga (billion years ago) using uranium–lead dating and 3.56 ± 0.10 Ga (billion years ago) using lead–lead dating, results that are consistent with each other.
Accurate radiometric dating generally requires that the parent has a long enough half-life that it will be present in significant amounts at the time of measurement (except as described below under "Dating with short-lived extinct radionuclides"), the half-life of the parent is accurately known, and enough of the daughter product is produced to be accurately measured and distinguished from the initial amount of the daughter present in the material. The procedures used to isolate and analyze the parent and daughter nuclides must be precise and accurate. This normally involves isotope-ratio mass spectrometry.
The precision of a dating method depends in part on the half-life of the radioactive isotope involved. For instance, carbon-14 has a half-life of 5,730 years. After an organism has been dead for 60,000 years, so little carbon-14 is left that accurate dating cannot be established. On the other hand, the concentration of carbon-14 falls off so steeply that the age of relatively young remains can be determined precisely to within a few decades.
Closure temperature
The closure temperature or blocking temperature represents the temperature below which the mineral is a closed system for the studied isotopes. If a material that selectively rejects the daughter nuclide is heated above this temperature, any daughter nuclides that have been accumulated over time will be lost through diffusion, resetting the isotopic "clock" to zero. As the mineral cools, the crystal structure begins to form and diffusion of isotopes is less easy. At a certain temperature, the crystal structure has formed sufficiently to prevent diffusion of isotopes. Thus an igneous or metamorphic rock or melt, which is slowly cooling, does not begin to exhibit measurable radioactive decay until it cools below the closure temperature. The age that can be calculated by radiometric dating is thus the time at which the rock or mineral cooled to closure temperature. This temperature varies for every mineral and isotopic system, so a system can be closed for one mineral but open for another. Dating of different minerals and/or isotope systems (with differing closure temperatures) within the same rock can therefore enable the tracking of the thermal history of the rock in question with time, and thus the history of metamorphic events may become known in detail. These temperatures are experimentally determined in the lab by artificially resetting sample minerals using a high-temperature furnace. This field is known as thermochronology or thermochronometry.
The age equation
The mathematical expression that relates radioactive decay to geologic time is
$D^* = D_0 + N(t)\left(e^{\lambda t} - 1\right)$
where
$t$ is the age of the sample,
$D^*$ is the number of atoms of the radiogenic daughter isotope in the sample,
$D_0$ is the number of atoms of the daughter isotope in the original or initial composition,
$N(t)$ is the number of atoms of the parent isotope in the sample at time $t$ (the present), given by $N(t) = N_0\,e^{-\lambda t}$, and
$\lambda$ is the decay constant of the parent isotope, equal to the inverse of the radioactive half-life of the parent isotope times the natural logarithm of 2.
The equation is most conveniently expressed in terms of the measured quantity $N(t)$ rather than the constant initial value $N_0$.
To calculate the age, it is assumed that the system is closed (neither parent nor daughter isotopes have been lost from the system), that $D_0$ is either negligible or can be accurately estimated, that $\lambda$ is known to high precision, and that one has accurate and precise measurements of $D^*$ and $N(t)$.
The above equation makes use of information on the composition of parent and daughter isotopes at the time the material being tested cooled below its closure temperature. This is well established for most isotopic systems. However, construction of an isochron does not require information on the original compositions, using merely the present ratios of the parent and daughter isotopes to a standard isotope. An isochron plot is used to solve the age equation graphically and calculate the age of the sample and the original composition.
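When $D_0$ is negligible, the age equation can be rearranged to $t = \tfrac{1}{\lambda}\ln\left(1 + D^*/N(t)\right)$. The following Python sketch illustrates that calculation with assumed, illustrative inputs (not data from any particular sample):

```python
import math

def radiometric_age(daughter_parent_ratio: float, half_life_years: float) -> float:
    """Age t from t = (1/lambda) * ln(1 + D*/N), assuming negligible initial daughter (D0 = 0)."""
    lam = math.log(2) / half_life_years
    return math.log(1.0 + daughter_parent_ratio) / lam

# Assumed example: a mineral whose radiogenic-daughter to parent atom ratio is 0.5,
# in a system with a half-life of 1.25 billion years.
age = radiometric_age(daughter_parent_ratio=0.5, half_life_years=1.25e9)
print(f"age ~ {age / 1e9:.2f} billion years")
```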
Modern dating methods
Radiometric dating has been carried out since 1905 when it was invented by Ernest Rutherford as a method by which one might determine the age of the Earth. In the century since then the techniques have been greatly improved and expanded. Dating can now be performed on samples as small as a nanogram using a mass spectrometer. The mass spectrometer was invented in the 1940s and began to be used in radiometric dating in the 1950s. It operates by generating a beam of ionized atoms from the sample under test. The ions then travel through a magnetic field, which diverts them into different sampling sensors, known as "Faraday cups," depending on their mass and level of ionization. On impact in the cups, the ions set up a very weak current that can be measured to determine the rate of impacts and the relative concentrations of different atoms in the beams.
Uranium–lead dating method
Uranium–lead radiometric dating involves using uranium-235 or uranium-238 to date a substance's absolute age. This scheme has been refined to the point that the error margin in dates of rocks can be less than two million years in two-and-a-half billion years. An error margin of 2–5% has been achieved on younger Mesozoic rocks.
Uranium–lead dating is often performed on the mineral zircon (ZrSiO4), though it can be used on other materials, such as baddeleyite and monazite (see: monazite geochronology). Zircon and baddeleyite incorporate uranium atoms into their crystalline structure as substitutes for zirconium, but strongly reject lead. Zircon has a very high closure temperature, is resistant to mechanical weathering and is very chemically inert. Zircon also forms multiple crystal layers during metamorphic events, which each may record an isotopic age of the event. In situ micro-beam analysis can be achieved via laser ICP-MS or SIMS techniques.
One of its great advantages is that any sample provides two clocks, one based on uranium-235's decay to lead-207 with a half-life of about 700 million years, and one based on uranium-238's decay to lead-206 with a half-life of about 4.5 billion years, providing a built-in crosscheck that allows accurate determination of the age of the sample even if some of the lead has been lost. This can be seen in the concordia diagram, where the samples plot along an errorchron (straight line) which intersects the concordia curve at the age of the sample.
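A rough sketch of this two-clock crosscheck, using the approximate half-lives quoted above and invented radiogenic Pb/U atom ratios (not data from any real zircon), might look like:

```python
import math

def pb_u_age(pb_u_ratio: float, half_life_years: float) -> float:
    """Age from a radiogenic Pb/U atom ratio, assuming no initial lead and a closed system."""
    lam = math.log(2) / half_life_years
    return math.log(1.0 + pb_u_ratio) / lam

# Invented, purely illustrative ratios for a single zircon grain:
age_235 = pb_u_age(pb_u_ratio=12.1, half_life_years=7.0e8)    # 207Pb/235U clock
age_238 = pb_u_age(pb_u_ratio=0.493, half_life_years=4.5e9)   # 206Pb/238U clock
print(f"207Pb/235U age ~ {age_235 / 1e9:.2f} Ga")
print(f"206Pb/238U age ~ {age_238 / 1e9:.2f} Ga")
# Agreement between the two ages (concordance) suggests the system stayed closed.
```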
Samarium–neodymium dating method
This involves the alpha decay of 147Sm to 143Nd with a half-life of 1.06 × 10¹¹ years. Accuracy levels of within twenty million years in ages of two-and-a-half billion years are achievable.
Potassium–argon dating method
This involves electron capture or positron decay of potassium-40 to argon-40. Potassium-40 has a half-life of 1.3 billion years, so this method is applicable to the oldest rocks. Radioactive potassium-40 is common in micas, feldspars, and hornblendes, though the closure temperature is fairly low in these materials, about 350 °C (mica) to 500 °C (hornblende).
Rubidium–strontium dating method
This is based on the beta decay of rubidium-87 to strontium-87, with a half-life of 50 billion years. This scheme is used to date old igneous and metamorphic rocks, and has also been used to date lunar samples. Closure temperatures are so high that they are not a concern. Rubidium–strontium dating is not as precise as the uranium–lead method, with errors of 30 to 50 million years for a 3-billion-year-old sample. Application of in situ analysis (laser-ablation ICP-MS) within single mineral grains in faults has shown that the Rb–Sr method can be used to decipher episodes of fault movement.
Uranium–thorium dating method
A relatively short-range dating technique is based on the decay of uranium-234 into thorium-230, a substance with a half-life of about 80,000 years. It is accompanied by a sister process, in which uranium-235 decays into protactinium-231, which has a half-life of 32,760 years.
While uranium is water-soluble, thorium and protactinium are not, and so they are selectively precipitated into ocean-floor sediments, from which their ratios are measured. The scheme has a range of several hundred thousand years. A related method is ionium–thorium dating, which measures the ratio of ionium (thorium-230) to thorium-232 in ocean sediment.
Radiocarbon dating method
Radiocarbon dating is also simply called carbon-14 dating. Carbon-14 is a radioactive isotope of carbon, with a half-life of 5,730 years (which is very short compared with the above isotopes), and decays into nitrogen. In other radiometric dating methods, the heavy parent isotopes were produced by nucleosynthesis in supernovas, meaning that any parent isotope with a short half-life should be extinct by now. Carbon-14, though, is continuously created through collisions of neutrons generated by cosmic rays with nitrogen in the upper atmosphere and thus remains at a near-constant level on Earth. The carbon-14 ends up as a trace component in atmospheric carbon dioxide (CO2).
A carbon-based life form acquires carbon during its lifetime. Plants acquire it through photosynthesis, and animals acquire it from consumption of plants and other animals. When an organism dies, it ceases to take in new carbon-14, and the existing isotope decays with a characteristic half-life (5730 years). The proportion of carbon-14 left when the remains of the organism are examined provides an indication of the time elapsed since its death. This makes carbon-14 an ideal dating method to date the age of bones or the remains of an organism. The carbon-14 dating limit lies around 58,000 to 62,000 years.
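Equivalently, the age follows from the measured fraction of carbon-14 remaining via $t = -\tfrac{t_{1/2}}{\ln 2}\ln\left(N/N_0\right)$. A small Python sketch (the fractions below are assumed values, not real measurements) shows the calculation and why the method becomes impractical beyond roughly 60,000 years:

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Years since death, given the fraction of the original carbon-14 remaining."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(fraction_remaining)

print(radiocarbon_age(0.5))      # one half-life: ~5,730 years
print(radiocarbon_age(0.01))     # ~38,000 years
print(radiocarbon_age(0.0008))   # ~59,000 years, near the practical limit
```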
The rate of creation of carbon-14 appears to be roughly constant, as cross-checks of carbon-14 dating with other dating methods show it gives consistent results. However, local eruptions of volcanoes or other events that give off large amounts of carbon dioxide can reduce local concentrations of carbon-14 and give inaccurate dates. The releases of carbon dioxide into the biosphere as a consequence of industrialization have also depressed the proportion of carbon-14 by a few percent; in contrast, the amount of carbon-14 was increased by above-ground nuclear bomb tests that were conducted into the early 1960s. Also, an increase in the solar wind or the Earth's magnetic field above the current value would depress the amount of carbon-14 created in the atmosphere.
Fission track dating method
This involves inspection of a polished slice of a material to determine the density of "track" markings left in it by the spontaneous fission of uranium-238 impurities. The uranium content of the sample has to be known, but that can be determined by placing a plastic film over the polished slice of the material, and bombarding it with slow neutrons. This causes induced fission of 235U, as opposed to the spontaneous fission of 238U. The fission tracks produced by this process are recorded in the plastic film. The uranium content of the material can then be calculated from the number of tracks and the neutron flux.
This scheme has application over a wide range of geologic dates. For dates up to a few million years micas, tektites (glass fragments from volcanic eruptions), and meteorites are best used. Older materials can be dated using zircon, apatite, titanite, epidote and garnet which have a variable amount of uranium content. Because the fission tracks are healed by temperatures over about 200 °C the technique has limitations as well as benefits. The technique has potential applications for detailing the thermal history of a deposit.
Chlorine-36 dating method
Large amounts of otherwise rare 36Cl (half-life ~300ky) were produced by irradiation of seawater during atmospheric detonations of nuclear weapons between 1952 and 1958. The residence time of 36Cl in the atmosphere is about 1 week. Thus, as an event marker of 1950s water in soil and ground water, 36Cl is also useful for dating waters less than 50 years before the present. 36Cl has seen use in other areas of the geological sciences, including dating ice and sediments.
Luminescence dating methods
Luminescence dating methods are not radiometric dating methods in that they do not rely on abundances of isotopes to calculate age. Instead, they are a consequence of background radiation on certain minerals. Over time, ionizing radiation is absorbed by mineral grains in sediments and archaeological materials such as quartz and potassium feldspar. The radiation causes charge to remain within the grains in structurally unstable "electron traps". Exposure to sunlight or heat releases these charges, effectively "bleaching" the sample and resetting the clock to zero. The trapped charge accumulates over time at a rate determined by the amount of background radiation at the location where the sample was buried. Stimulating these mineral grains using either light (optically stimulated luminescence or infrared stimulated luminescence dating) or heat (thermoluminescence dating) causes a luminescence signal to be emitted as the stored unstable electron energy is released, the intensity of which varies depending on the amount of radiation absorbed during burial and specific properties of the mineral.
These methods can be used to date the age of a sediment layer, as layers deposited on top would prevent the grains from being "bleached" and reset by sunlight. Pottery shards can be dated to the last time they experienced significant heat, generally when they were fired in a kiln.
Other methods
Other methods include:
Argon–argon (Ar–Ar)
Iodine–xenon (I–Xe)
Lanthanum–barium (La–Ba)
Lead–lead (Pb–Pb)
Lutetium–hafnium (Lu–Hf)
Hafnium–tungsten dating (Hf-W)
Potassium–calcium (K–Ca)
Rhenium–osmium (Re–Os)
Uranium–uranium (U–U)
Krypton–krypton (Kr–Kr)
Beryllium (10Be–9Be)
Dating with decay products of short-lived extinct radionuclides
Absolute radiometric dating requires a measurable fraction of parent nucleus to remain in the sample rock. For rocks dating back to the beginning of the solar system, this requires extremely long-lived parent isotopes, making measurement of such rocks' exact ages imprecise. To be able to distinguish the relative ages of rocks from such old material, and to get a better time resolution than that available from long-lived isotopes, short-lived isotopes that are no longer present in the rock can be used.
At the beginning of the solar system, there were several relatively short-lived radionuclides like 26Al, 60Fe, 53Mn, and 129I present within the solar nebula. These radionuclides—possibly produced by the explosion of a supernova—are extinct today, but their decay products can be detected in very old material, such as that which constitutes meteorites. By measuring the decay products of extinct radionuclides with a mass spectrometer and using isochron plots, it is possible to determine relative ages of different events in the early history of the solar system. Dating methods based on extinct radionuclides can also be calibrated with the U–Pb method to give absolute ages. Thus both the approximate age and a high time resolution can be obtained. Generally a shorter half-life leads to a higher time resolution at the expense of timescale.
The 129I – 129Xe chronometer
129I beta-decays to 129Xe with a half-life of about 16 million years. The iodine–xenon chronometer is an isochron technique. Samples are exposed to neutrons in a nuclear reactor. This converts 127I, the only stable isotope of iodine, into 128Xe via neutron capture followed by beta decay (of 128I). After irradiation, samples are heated in a series of steps and the xenon isotopic signature of the gas evolved in each step is analysed. When a consistent 129Xe/128Xe ratio is observed across several consecutive temperature steps, it can be interpreted as corresponding to the time at which the sample stopped losing xenon.
Samples of a meteorite called Shallowater are usually included in the irradiation to monitor the conversion efficiency from 127I to 128Xe. The difference between the measured 129Xe/128Xe ratios of the sample and Shallowater then corresponds to the different 129I/127I ratios when each stopped losing xenon. This in turn corresponds to a difference in closure age in the early solar system.
The 26Al – 26Mg chronometer
Another example of short-lived extinct radionuclide dating is the 26Al–26Mg chronometer, which can be used to estimate the relative ages of chondrules. 26Al decays to 26Mg with a half-life of 720,000 years. The dating is simply a question of finding the deviation from the natural abundance of 26Mg (the product of 26Al decay) in comparison with the ratio of the stable isotopes 27Al/24Mg.
The excess of 26Mg (often designated 26Mg*) is found by comparing the measured magnesium isotopic composition with that of other Solar System materials.
The 26Al–26Mg chronometer gives an estimate of the time period for formation of primitive meteorites of only a few million years (1.4 million years for chondrule formation).
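As a sketch of how such relative ages follow from inferred initial ratios (the 26Al/27Al values below are assumptions for illustration only): if two objects formed with initial 26Al/27Al ratios $R_1$ and $R_2$, the younger formed $\Delta t = \ln(R_1/R_2)/\lambda$ after the older.

```python
import math

HALF_LIFE_AL26 = 720_000.0  # years

def relative_age_years(ratio_older: float, ratio_younger: float) -> float:
    """Time between the formation of two objects, from their inferred initial 26Al/27Al ratios."""
    lam = math.log(2) / HALF_LIFE_AL26
    return math.log(ratio_older / ratio_younger) / lam

# Assumed, illustrative initial ratios (not measured values):
print(relative_age_years(5.0e-5, 1.3e-5))  # ~1.4 million years
```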
A terminology issue
In a July 2022 paper in the journal Applied Geochemistry, the authors proposed that the terms "parent isotope" and "daughter isotope" be avoided in favor of the more descriptive "precursor isotope" and "product isotope", analogous to "precursor ion" and "product ion" in mass spectrometry.
| Physical sciences | Geochronology | Earth science |
26301 | https://en.wikipedia.org/wiki/Rocket | Rocket | A rocket (from , and so named for its shape) is a vehicle that uses jet propulsion to accelerate without using any surrounding air. A rocket engine produces thrust by reaction to exhaust expelled at high speed. Rocket engines work entirely from propellant carried within the vehicle; therefore a rocket can fly in the vacuum of space. Rockets work more efficiently in a vacuum and incur a loss of thrust due to the opposing pressure of the atmosphere.
Multistage rockets are capable of attaining escape velocity from Earth and therefore can achieve unlimited maximum altitude. Compared with airbreathing engines, rockets are lightweight and powerful and capable of generating large accelerations. To control their flight, rockets rely on momentum, airfoils, auxiliary reaction engines, gimballed thrust, momentum wheels, deflection of the exhaust stream, propellant flow, spin, or gravity.
Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology for the Space Age, including setting foot on the Moon. Rockets are now used for fireworks, missiles and other weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight, and space exploration.
Chemical rockets are the most common type of high power rocket, typically creating a high speed exhaust by the combustion of fuel with an oxidizer. The stored propellant can be a simple pressurized gas or a single liquid fuel that disassociates in the presence of a catalyst (monopropellant), two liquids that spontaneously react on contact (hypergolic propellants), two liquids that must be ignited to react (like kerosene (RP1) and liquid oxygen, used in most liquid-propellant rockets), a solid combination of fuel with oxidizer (solid fuel), or solid fuel with liquid or gaseous oxidizer (hybrid propellant system). Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks.
History
In China, gunpowder-powered rockets evolved in medieval China under the Song dynasty by the 13th century. They also developed an early form of multiple rocket launcher during this time. The Mongols adopted Chinese rocket technology and the invention spread via the Mongol invasions to the Middle East and to Europe in the mid-13th century. According to Joseph Needham, the Song navy used rockets in a military exercise dated to 1245. Internal-combustion rocket propulsion is mentioned in a reference to 1264, recording that the "ground-rat", a type of firework, had frightened the Empress-Mother Gongsheng at a feast held in her honor by her son the Emperor Lizong. Subsequently, rockets are included in the military treatise Huolongjing, also known as the Fire Drake Manual, written by the Chinese artillery officer Jiao Yu in the mid-14th century. This text mentions the first known multistage rocket, the 'fire-dragon issuing from the water' (Huo long chu shui), thought to have been used by the Chinese navy.
Medieval and early modern rockets were used militarily as incendiary weapons in sieges. Between 1270 and 1280, Hasan al-Rammah wrote al-furusiyyah wa al-manasib al-harbiyya (The Book of Military Horsemanship and Ingenious War Devices), which included 107 gunpowder recipes, 22 of them for rockets.
In Europe, Roger Bacon mentioned firecrackers made in various parts of the world in the Opus Majus of 1267. Between 1280 and 1300, the Liber Ignium gave instructions for constructing devices that are similar to firecrackers based on second hand accounts. Konrad Kyeser described rockets in his military treatise Bellifortis around 1405. Giovanni Fontana, a Paduan engineer in 1420, created rocket-propelled animal figures.
The name "rocket" comes from the Italian rocchetta, meaning "bobbin" or "little spindle", given due to the similarity in shape to the bobbin or spool used to hold the thread from a spinning wheel.
Leonhard Fronsperger and Conrad Haas adopted the Italian term into German in the mid-16th century; "rocket" appears in English by the early 17th century.
Artis Magnae Artilleriae pars prima, an important early modern work on rocket artillery, by Casimir Siemienowicz, was first printed in Amsterdam in 1650.
The Mysorean rockets were the first successful iron-cased rockets, developed in the late 18th century in the Kingdom of Mysore (part of present-day India) under the rule of Hyder Ali.
The Congreve rocket was a British weapon designed and developed by Sir William Congreve in 1804. This rocket was based directly on the Mysorean rockets, used compressed powder and was fielded in the Napoleonic Wars. It was Congreve rockets to which Francis Scott Key was referring, when he wrote of the "rockets' red glare" while held captive on a British ship that was laying siege to Fort McHenry in 1814. Together, the Mysorean and British innovations increased the effective range of military rockets from .
The first mathematical treatment of the dynamics of rocket propulsion is due to William Moore (1813). In 1814, Congreve published a book in which he discussed the use of multiple rocket launching apparatus. In 1815 Alexander Dmitrievich Zasyadko constructed rocket-launching platforms, which allowed rockets to be fired in salvos (6 rockets at a time), and gun-laying devices. William Hale in 1844 greatly increased the accuracy of rocket artillery. Edward Mounier Boxer further improved the Congreve rocket in 1865.
William Leitch first proposed the concept of using rockets to enable human spaceflight in 1861. Leitch's rocket spaceflight description was first provided in his 1861 essay "A Journey Through Space", which was later published in his book God's Glory in the Heavens (1862). Konstantin Tsiolkovsky later (in 1903) also conceived this idea, and extensively developed a body of theory that has provided the foundation for subsequent spaceflight development.
The British Royal Flying Corps designed a guided rocket during World War I. Archibald Low stated "...in 1917 the Experimental Works designed an electrically steered rocket… Rocket experiments were conducted under my own patents with the help of Cdr. Brock." The patent "Improvements in Rockets" was raised in July 1918 but not published until February 1923 for security reasons. Firing and guidance controls could be either wire or wireless. The propulsion and guidance rocket efflux emerged from the deflecting cowl at the nose.
In 1920, Professor Robert Goddard of Clark University published proposed improvements to rocket technology in A Method of Reaching Extreme Altitudes. In 1923, Hermann Oberth (1894–1989) published Die Rakete zu den Planetenräumen (The Rocket into Planetary Space). Modern rockets originated in 1926 when Goddard attached a supersonic (de Laval) nozzle to a high pressure combustion chamber. These nozzles turn the hot gas from the combustion chamber into a cooler, hypersonic, highly directed jet of gas, more than doubling the thrust and raising the engine efficiency from 2% to 64%. His use of liquid propellants instead of gunpowder greatly lowered the weight and increased the effectiveness of rockets.
In 1921, the Soviet research and development laboratory Gas Dynamics Laboratory began developing solid-propellant rockets, which resulted in a first launch in 1928 that flew for approximately 1,300 metres. In 1931 these rockets were used for the world's first successful jet-assisted takeoff of aircraft, and they became the prototypes for the Katyusha rocket launcher used during World War II.
In 1929, Fritz Lang's German science fiction film Woman in the Moon was released. It showcased the use of a multi-stage rocket, and also pioneered the concept of a rocket launch pad (a rocket standing upright against a tall building before launch having been slowly rolled into place) and the rocket-launch countdown clock. The Guardian film critic Stephen Armstrong states Lang "created the rocket industry". Lang was inspired by the 1923 book The Rocket into Interplanetary Space by Hermann Oberth, who became the film's scientific adviser and later an important figure in the team that developed the V-2 rocket. The film was thought to be so realistic that it was banned by the Nazis when they came to power for fear it would reveal secrets about the V-2 rockets.
In 1943 production of the V-2 rocket began in Germany. It was designed by the Peenemünde Army Research Center with Wernher von Braun serving as the technical director. The V-2 became the first artificial object to travel into space by crossing the Kármán line with the vertical launch of MW 18014 on 20 June 1944. Doug Millard, space historian and curator of space technology at the Science Museum, London, where a V-2 is exhibited in the main exhibition hall, states: "The V-2 was a quantum leap of technological change. We got to the Moon using V-2 technology but this was technology that was developed with massive resources, including some particularly grim ones. The V-2 programme was hugely expensive in terms of lives, with the Nazis using slave labour to manufacture these rockets". In parallel with the German guided-missile programme, rockets were also used on aircraft, either for assisting horizontal take-off (RATO), vertical take-off (Bachem Ba 349 "Natter") or for powering them (Me 163, see list of World War II guided missiles of Germany). The Allies' rocket programs were less technological, relying mostly on unguided missiles like the Soviet Katyusha rocket in the artillery role, and the American anti tank bazooka projectile. These used solid chemical propellants.
The Americans captured a large number of German rocket scientists, including Wernher von Braun, in 1945, and brought them to the United States as part of Operation Paperclip. After World War II scientists used rockets to study high-altitude conditions, by radio telemetry of temperature and pressure of the atmosphere, detection of cosmic rays, and further techniques; note too the Bell X-1, the first crewed vehicle to break the sound barrier (1947). Independently, in the Soviet Union's space program research continued under the leadership of the chief designer Sergei Korolev (1907–1966).
During the Cold War rockets became extremely important militarily with the development of modern intercontinental ballistic missiles (ICBMs).
The 1960s saw rapid development of rocket technology, particularly in the Soviet Union (Vostok, Soyuz, Proton) and in the United States (e.g. the X-15). Rockets came into use for space exploration. American crewed programs (Project Mercury, Project Gemini and later the Apollo programme) culminated in 1969 with the first crewed landing on the Moon – using equipment launched by the Saturn V rocket.
Types
Vehicle configurations
Rocket vehicles are often constructed in the archetypal tall thin "rocket" shape that takes off vertically, but there are actually many different types of rockets including:
tiny models such as balloon rockets, water rockets, skyrockets or small solid rockets that can be purchased at a hobby store
missiles
space rockets such as the enormous Saturn V used for the Apollo program
rocket cars
rocket bikes
rocket-powered aircraft (including rocket-assisted takeoff of conventional aircraft – RATO)
rocket sleds
rocket trains
rocket torpedoes
rocket-powered jet packs
rapid escape systems such as ejection seats and launch escape systems
space probes
Design
A rocket design can be as simple as a cardboard tube filled with black powder, but to make an efficient, accurate rocket or missile involves overcoming a number of difficult problems. The main difficulties include cooling the combustion chamber, pumping the fuel (in the case of a liquid fuel), and controlling and correcting the direction of motion.
Components
Rockets consist of a propellant, a place to put propellant (such as a propellant tank), and a nozzle. They may also have one or more rocket engines, directional stabilization device(s) (such as fins, vernier engines or engine gimbals for thrust vectoring, gyroscopes) and a structure (typically monocoque) to hold these components together. Rockets intended for high speed atmospheric use also have an aerodynamic fairing such as a nose cone, which usually holds the payload.
As well as these components, rockets can have any number of other components, such as wings (rocketplanes), parachutes, wheels (rocket cars), even, in a sense, a person (rocket belt). Vehicles frequently possess navigation systems and guidance systems that typically use satellite navigation and inertial navigation systems.
Engines
Rocket engines employ the principle of jet propulsion. The rocket engines powering rockets come in a great variety of different types; a comprehensive list can be found in the main article, Rocket engine. Most current rockets are chemically powered rockets (usually internal combustion engines, but some employ a decomposing monopropellant) that emit a hot exhaust gas. A rocket engine can use gas propellants, solid propellant, liquid propellant, or a hybrid mixture of both solid and liquid. Some rockets use heat or pressure that is supplied from a source other than the chemical reaction of propellant(s), such as steam rockets, solar thermal rockets, nuclear thermal rocket engines or simple pressurized rockets such as water rocket or cold gas thrusters. With combustive propellants a chemical reaction is initiated between the fuel and the oxidizer in the combustion chamber, and the resultant hot gases accelerate out of a rocket engine nozzle (or nozzles) at the rearward-facing end of the rocket. The acceleration of these gases through the engine exerts force ("thrust") on the combustion chamber and nozzle, propelling the vehicle (according to Newton's Third Law). This actually happens because the force (pressure times area) on the combustion chamber wall is unbalanced by the nozzle opening; this is not the case in any other direction. The shape of the nozzle also generates force by directing the exhaust gas along the axis of the rocket.
Propellant
Rocket propellant is mass that is stored, usually in some form of propellant tank or casing, prior to being used as the propulsive mass that is ejected from a rocket engine in the form of a fluid jet to produce thrust. For chemical rockets, the propellants are often a fuel, such as liquid hydrogen or kerosene, burned with an oxidizer, such as liquid oxygen or nitric acid, to produce large volumes of very hot gas. The oxidiser is either kept separate and mixed in the combustion chamber, or comes premixed, as with solid rockets.
Sometimes the propellant is not burned but still undergoes a chemical reaction, and can be a 'monopropellant' such as hydrazine, nitrous oxide or hydrogen peroxide that can be catalytically decomposed to hot gas.
Alternatively, an inert propellant can be used that is externally heated, such as in steam rockets, solar thermal rockets or nuclear thermal rockets.
For smaller rockets, such as attitude control thrusters, where high performance is less necessary, a pressurised fluid is used as the propellant and simply escapes the spacecraft through a propelling nozzle.
Pendulum rocket fallacy
The first liquid-fuel rocket, constructed by Robert H. Goddard, differed significantly from modern rockets. The rocket engine was at the top and the fuel tank at the bottom of the rocket, based on Goddard's belief that the rocket would achieve stability by "hanging" from the engine like a pendulum in flight. However, the rocket veered off course and crashed away from the launch site, indicating that the rocket was no more stable than one with the rocket engine at the base.
Uses
Rockets or other similar reaction devices carrying their own propellant must be used when there is no other substance (land, water, or air) or force (gravity, magnetism, light) that a vehicle may usefully employ for propulsion, such as in space. In these circumstances, it is necessary to carry all the propellant to be used.
However, they are also useful in other situations:
Military
Some military weapons use rockets to propel warheads to their targets. A rocket and its payload together are generally referred to as a missile when the weapon has a guidance system (not all missiles use rocket engines; some use other engines such as jets) or as a rocket if it is unguided. Anti-tank and anti-aircraft missiles use rocket engines to engage targets at high speed at a range of several miles, while intercontinental ballistic missiles can be used to deliver multiple nuclear warheads from thousands of miles away, and anti-ballistic missiles try to stop them. Rockets have also been tested for reconnaissance, such as the Ping-Pong rocket, which was launched to surveil enemy targets; however, reconnaissance rockets have never come into wide use in the military.
Science and research
Sounding rockets are commonly used to carry instruments that take readings at high altitudes above the surface of the Earth.
The first images of Earth from space were obtained from a V-2 rocket in 1946 (flight #13).
Rocket engines are also used to propel rocket sleds along a rail at extremely high speed. The world record for this is Mach 8.5.
Spaceflight
Larger rockets are normally launched from a launch pad that provides stable support until a few seconds after ignition. Due to their high exhaust velocity, rockets are particularly useful when very high speeds are required, such as orbital speed (approximately 7.8 km/s). Spacecraft delivered into orbital trajectories become artificial satellites, which are used for many commercial purposes. Indeed, rockets remain the only way to launch spacecraft into orbit and beyond. They are also used to rapidly accelerate spacecraft when they change orbits or de-orbit for landing. Also, a rocket may be used to soften a hard parachute landing immediately before touchdown (see retrorocket).
Rescue
Rockets have been used to propel a line to a stricken ship so that a breeches buoy can be used to rescue those on board. They are also used to launch emergency flares.
Some crewed rockets, notably the Saturn V and Soyuz, have launch escape systems. This is a small, usually solid-fueled rocket that is capable of pulling the crewed capsule away from the main vehicle towards safety at a moment's notice. These types of systems have been operated several times, both in testing and in flight, and operated correctly each time.
This was the case when the Safety Assurance System (Soviet nomenclature) successfully pulled away the L3 capsule during three of the four failed launches of the Soviet Moon rocket, N1 vehicles 3L, 5L and 7L. In all three cases the capsule, albeit uncrewed, was saved from destruction. Only the three aforementioned N1 rockets had functional Safety Assurance Systems. The remaining vehicle, 6L, had dummy upper stages and therefore no escape system, giving the N1 booster a 100% success rate for egress from a failed launch.
A successful escape of a crewed capsule occurred when the launch vehicle of Soyuz T-10a, on a mission to the Salyut 7 space station, exploded on the pad and the escape system pulled the capsule to safety.
Solid rocket propelled ejection seats are used in many military aircraft to propel crew away to safety from a vehicle when flight control is lost.
Hobby, sport, and entertainment
A model rocket is a small rocket designed to reach low altitudes and be recovered by a variety of means.
According to the United States National Association of Rocketry (NAR) Safety Code, model rockets are constructed of paper, wood, plastic and other lightweight materials. The code also provides guidelines for motor use, launch site selection, launch methods, launcher placement, recovery system design and deployment and more. Since the early 1960s, a copy of the Model Rocket Safety Code has been provided with most model rocket kits and motors. Despite its inherent association with extremely flammable substances and objects with a pointed tip traveling at high speeds, model rocketry historically has proven to be a very safe hobby and has been credited as a significant source of inspiration for children who eventually become scientists and engineers.
Hobbyists build and fly a wide variety of model rockets. Many companies produce model rocket kits and parts but due to their inherent simplicity some hobbyists have been known to make rockets out of almost anything. Rockets are also used in some types of consumer and professional fireworks. A water rocket is a type of model rocket using water as its reaction mass. The pressure vessel (the engine of the rocket) is usually a used plastic soft drink bottle. The water is forced out by a pressurized gas, typically compressed air. It is an example of Newton's third law of motion.
The scale of amateur rocketry can range from a small rocket launched in one's own backyard to rockets that have reached space. Amateur rocketry is split into three categories according to total engine impulse: low-power, mid-power, and high-power.
Hydrogen peroxide rockets are used to power jet packs and have been used to power cars; a rocket car holds the all-time (albeit unofficial) drag racing record.
Corpulent Stump is the most powerful non-commercial rocket ever launched on an Aerotech engine in the United Kingdom.
Flight
Launches for orbital spaceflights, or into interplanetary space, are usually from a fixed location on the ground, but would also be possible from an aircraft or ship.
Rocket launch technologies include the entire set of systems needed to successfully launch a vehicle, not just the vehicle itself, but also the firing control systems, mission control center, launch pad, ground stations, and tracking stations needed for a successful launch or recovery or both. These are often collectively referred to as the "ground segment".
Orbital launch vehicles commonly take off vertically, and then begin to progressively lean over, usually following a gravity turn trajectory.
Once above the majority of the atmosphere, the vehicle then angles the rocket jet, pointing it largely horizontally but somewhat downwards, which permits the vehicle to gain and then maintain altitude while increasing horizontal speed. As the speed grows, the vehicle will become more and more horizontal until at orbital speed, the engine will cut off.
All current vehicles stage, that is, jettison hardware on the way to orbit. Although vehicles have been proposed which would be able to reach orbit without staging, none have ever been constructed, and, if powered only by rockets, the exponentially increasing fuel requirements of such a vehicle would make its useful payload tiny or nonexistent. Most current and historical launch vehicles "expend" their jettisoned hardware, typically by allowing it to crash into the ocean, but some have recovered and reused jettisoned hardware, either by parachute or by propulsive landing.
When launching a spacecraft to orbit, a "dogleg" is a guided, powered turn during the ascent phase that causes a rocket's flight path to deviate from a "straight" path. A dogleg is necessary if the desired launch azimuth, to reach a desired orbital inclination, would take the ground track over land (or over a populated area; e.g. Russia usually does launch over land, but over unpopulated areas), or if the rocket is trying to reach an orbital plane that does not reach the latitude of the launch site. Doglegs are undesirable due to the extra onboard fuel required, causing a heavier load and a reduction of vehicle performance.
Noise
Rocket exhaust generates a significant amount of acoustic energy. As the supersonic exhaust collides with the ambient air, shock waves are formed. The sound intensity from these shock waves depends on the size of the rocket as well as the exhaust velocity. The sound intensity of large, high performance rockets could potentially kill at close range.
The Space Shuttle generated 180 dB of noise around its base. To combat this, NASA developed a sound suppression system which can flow water at rates up to 900,000 gallons per minute (57 m3/s) onto the launch pad. The water reduces the noise level from 180 dB down to 142 dB (the design requirement is 145 dB). Without the sound suppression system, acoustic waves would reflect off of the launch pad towards the rocket, vibrating the sensitive payload and crew. These acoustic waves can be so severe as to damage or destroy the rocket.
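Since decibel values are logarithmic, the quoted drop from 180 dB to 142 dB is a much larger change than it may appear. A quick Python check of the corresponding acoustic power ratio (a standard dB calculation, not a figure taken from this article):

# A reduction of 38 dB corresponds to a power ratio of 10**(38/10).
reduction_db = 180 - 142
print(f"acoustic power reduced by a factor of about {10 ** (reduction_db / 10):.0f}")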
Noise is generally most intense when a rocket is close to the ground, since the noise from the engines radiates up away from the jet, as well as reflecting off the ground. This noise can be reduced somewhat by flame trenches with roofs, by water injection around the jet and by deflecting the jet at an angle.
For crewed rockets various methods are used to reduce the sound intensity for the passengers, and typically the placement of the astronauts far away from the rocket engines helps significantly. For the passengers and crew, when a vehicle goes supersonic the sound cuts off as the sound waves are no longer able to keep up with the vehicle.
Physics
Operation
The effect of the combustion of propellant in the rocket engine is to increase the internal energy of the resulting gases, utilizing the stored chemical energy in the fuel. As the internal energy increases, pressure increases, and a nozzle is used to convert this energy into directed kinetic energy. This produces thrust against the ambient environment to which these gases are released. The ideal direction of motion of the exhaust is directly opposite the intended direction of travel, so as to cause thrust. At the top end of the combustion chamber the hot, energetic gas fluid cannot move forward, and so it pushes upward against the top of the rocket engine's combustion chamber. As the combustion gases approach the exit of the combustion chamber, they increase in speed. The effect of the convergent part of the rocket engine nozzle on the high pressure fluid of combustion gases is to cause the gases to accelerate to high speed. The higher the speed of the gases, the lower the pressure of the gas (Bernoulli's principle or conservation of energy) acting on that part of the combustion chamber. In a properly designed engine, the flow will reach Mach 1 at the throat of the nozzle, at which point the speed of the flow increases further. Beyond the throat of the nozzle, a bell-shaped expansion part of the engine allows the expanding gases to push against that part of the rocket engine. Thus, the bell part of the nozzle gives additional thrust. Simply expressed, for every action there is an equal and opposite reaction, according to Newton's third law, with the result that the exiting gases produce a reaction force on the rocket, causing it to accelerate.
In a closed chamber, the pressures are equal in each direction and no acceleration occurs. If an opening is provided in the bottom of the chamber then the pressure is no longer acting on the missing section. This opening permits the exhaust to escape. The remaining pressures give a resultant thrust on the side opposite the opening, and these pressures are what push the rocket along.
The shape of the nozzle is important. Consider a balloon propelled by air coming out of a tapering nozzle. In such a case the combination of air pressure and viscous friction is such that the nozzle does not push the balloon but is pulled by it. Using a convergent/divergent nozzle gives more force since the exhaust also presses on it as it expands outwards, roughly doubling the total force. If propellant gas is continuously added to the chamber then these pressures can be maintained for as long as propellant remains. Note that in the case of liquid propellant engines, the pumps moving the propellant into the combustion chamber must maintain a pressure larger than that of the combustion chamber, typically on the order of 100 atmospheres.
As a side effect, these pressures on the rocket also act on the exhaust in the opposite direction and accelerate this exhaust to very high speeds (according to Newton's Third Law). From the principle of conservation of momentum, the speed of the exhaust of a rocket determines how much momentum increase is created for a given amount of propellant. This is called the rocket's specific impulse. Because a rocket, propellant and exhaust in flight, without any external perturbations, may be considered as a closed system, the total momentum is always constant. Therefore, the faster the net speed of the exhaust in one direction, the greater the speed the rocket can achieve in the opposite direction. This is especially true since the rocket body's mass is typically far lower than the final total exhaust mass.
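As a rough illustration of this momentum bookkeeping, the following Python sketch expels propellant in small increments at a fixed exhaust speed, checks that the combined momentum of rocket plus exhaust stays constant, and compares the resulting speed with the ideal value from the Tsiolkovsky rocket equation discussed later. All numerical values are illustrative assumptions, not figures from this article.

import math

m0 = 1000.0            # initial rocket mass including propellant, kg (assumed)
m_dry = 250.0          # mass left when the propellant is exhausted, kg (assumed)
v_exhaust = 2500.0     # exhaust speed relative to the rocket, m/s (assumed)
dm = 0.1               # propellant expelled per step, kg

m = m0
v = 0.0                # rocket velocity in the launch frame, m/s
p_exhaust = 0.0        # running momentum of all expelled exhaust

while m > m_dry:
    v_gas = v - v_exhaust                  # exhaust velocity in the launch frame
    p_exhaust += dm * v_gas
    v = (m * v - dm * v_gas) / (m - dm)    # momentum conservation fixes the new rocket velocity
    m -= dm

print(f"final rocket speed: {v:.1f} m/s")
print(f"ideal rocket-equation value: {v_exhaust * math.log(m0 / m_dry):.1f} m/s")
print(f"total momentum of rocket plus exhaust (stays ~0): {m * v + p_exhaust:.3f}")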
Forces on a rocket in flight
The general study of the forces on a rocket is part of the field of ballistics. Spacecraft are further studied in the subfield of astrodynamics.
Flying rockets are primarily affected by the following:
Thrust from the engine(s)
Gravity from celestial bodies
Drag if moving in atmosphere
Lift; usually relatively small effect except for rocket-powered aircraft
In addition, the inertia and centrifugal pseudo-force can be significant due to the path of the rocket around the center of a celestial body; when high enough speeds in the right direction and altitude are achieved a stable orbit or escape velocity is obtained.
These forces, with a stabilizing tail (the empennage) present, will, unless deliberate control efforts are made, naturally cause the vehicle to follow a roughly parabolic trajectory termed a gravity turn, and this trajectory is often used at least during the initial part of a launch. (This is true even if the rocket engine is mounted at the nose.) Vehicles can thus maintain a low or even zero angle of attack, which minimizes transverse stress on the launch vehicle, permitting a weaker, and hence lighter, launch vehicle.
Drag
Drag is a force opposite to the direction of the rocket's motion relative to any air it is moving through. This slows the speed of the vehicle and produces structural loads. The deceleration forces for fast-moving rockets are calculated using the drag equation.
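The drag equation referred to here is the standard one, F_d = ½ ρ v² C_d A. A minimal Python sketch follows, with assumed values for a small rocket; none of these numbers come from this article.

# Standard drag equation: F_d = 0.5 * rho * v**2 * C_d * A
def drag_force(rho, v, c_d, area):
    """Aerodynamic drag in newtons."""
    return 0.5 * rho * v**2 * c_d * area

rho_sea_level = 1.225   # air density at sea level, kg/m^3
v = 300.0               # assumed rocket speed, m/s
c_d = 0.5               # assumed drag coefficient
area = 0.03             # assumed frontal area, m^2 (roughly a 20 cm diameter body)

print(f"{drag_force(rho_sea_level, v, c_d, area):.0f} N")   # ~830 N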
Drag can be minimised by an aerodynamic nose cone and by using a shape with a high ballistic coefficient (the "classic" rocket shape—long and thin), and by keeping the rocket's angle of attack as low as possible.
During a launch, as the vehicle speed increases, and the atmosphere thins, there is a point of maximum aerodynamic drag called max Q. This determines the minimum aerodynamic strength of the vehicle, as the rocket must avoid buckling under these forces.
Net thrust
A typical rocket engine can handle a significant fraction of its own mass in propellant each second, with the propellant leaving the nozzle at several kilometres per second. This means that the thrust-to-weight ratio of a rocket engine, and often of the entire vehicle, can be very high, in extreme cases over 100. This compares with other jet propulsion engines, whose thrust-to-weight ratio can exceed 5 for some of the better engines.
The net thrust of a rocket is:

$F_n = \dot{m}\, v_e$

where $\dot{m}$ is the propellant mass flow rate (kg/s) and $v_e$ is the effective exhaust velocity (m/s).
The effective exhaust velocity is more or less the speed the exhaust leaves the vehicle, and in the vacuum of space, the effective exhaust velocity is often equal to the actual average exhaust speed along the thrust axis. However, the effective exhaust velocity allows for various losses, and notably, is reduced when operated within an atmosphere.
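A minimal Python sketch of the thrust relation above; the flow rate and exhaust velocity are assumed, illustrative values rather than figures from this article.

def net_thrust(mdot, v_e):
    """Thrust in newtons from propellant flow rate (kg/s) and effective exhaust velocity (m/s)."""
    return mdot * v_e

# Assumed values: 250 kg/s of propellant at an effective exhaust velocity of 3,000 m/s.
print(f"{net_thrust(250.0, 3000.0) / 1000:.0f} kN")   # 750 kN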
The rate of propellant flow through a rocket engine is often deliberately varied over a flight, to provide a way to control the thrust and thus the airspeed of the vehicle. This, for example, allows minimization of aerodynamic losses and can limit the increase of g-forces due to the reduction in propellant load.
Total impulse
Impulse is defined as a force acting on an object over time, which in the absence of opposing forces (gravity and aerodynamic drag) changes the momentum (the product of mass and velocity) of the object. As such, it is the best performance-class (payload mass and terminal velocity capability) indicator of a rocket, rather than takeoff thrust, mass, or "power". The total impulse of a rocket (stage) burning its propellant is:

$I = \int F \, dt$

When there is fixed thrust, this is simply:

$I = F\, t$
The total impulse of a multi-stage rocket is the sum of the impulses of the individual stages.
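A short Python sketch of these definitions: it integrates a made-up thrust curve numerically (trapezoidal rule) and sums the constant-thrust impulses of two hypothetical stages. All numbers are assumptions for illustration only.

# (time s, thrust N) pairs for an assumed motor thrust curve
thrust_curve = [(0.0, 0.0), (0.5, 1200.0), (2.0, 1000.0), (5.0, 900.0), (5.2, 0.0)]

total_impulse = 0.0
for (t0, f0), (t1, f1) in zip(thrust_curve, thrust_curve[1:]):
    total_impulse += 0.5 * (f0 + f1) * (t1 - t0)   # trapezoidal integration of F over t
print(f"total impulse of the curve: {total_impulse:.0f} N*s")

# Constant thrust (I = F * t) summed across two assumed stages:
stages = [(1200.0, 5.0), (300.0, 8.0)]             # (thrust N, burn time s)
print(f"multi-stage total: {sum(f * t for f, t in stages):.0f} N*s")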
Specific impulse
As can be seen from the thrust equation, the effective speed of the exhaust controls the amount of thrust produced from a particular quantity of fuel burnt per second.
An equivalent measure, the net impulse per weight unit of propellant expelled, is called specific impulse, $I_{sp}$, and this is one of the most important figures that describes a rocket's performance. It is defined such that it is related to the effective exhaust velocity by:

$v_e = I_{sp} \cdot g_0$

where $g_0$ is the standard acceleration due to gravity at the Earth's surface (about 9.81 m/s²) and $v_e$ is the effective exhaust velocity (m/s).
Thus, the greater the specific impulse, the greater the net thrust and performance of the engine. $I_{sp}$ is determined by measurement while testing the engine. In practice the effective exhaust velocities of rockets vary but can be extremely high, ~4,500 m/s, about 15 times the sea-level speed of sound in air.
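A small Python helper showing the conversion between specific impulse and effective exhaust velocity given above; the Isp values in the loop are assumed, round numbers chosen only to span typical engine classes.

G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity from specific impulse via v_e = Isp * g0."""
    return isp_seconds * G0

for isp in (250, 350, 450):   # assumed Isp values for illustration
    print(f"Isp = {isp} s  ->  v_e ~ {exhaust_velocity(isp):.0f} m/s")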
Delta-v (rocket equation)
The delta-v capacity of a rocket is the theoretical total change in velocity that a rocket can achieve without any external interference (without air drag or gravity or other forces).
When $v_e$ is constant, the delta-v that a rocket vehicle can provide can be calculated from the Tsiolkovsky rocket equation:

$\Delta v = v_e \ln \dfrac{m_0}{m_1}$

where $m_0$ is the initial total mass including propellant (the wet mass), $m_1$ is the final total mass (the dry mass), and $v_e$ is the effective exhaust velocity.
When launched from the Earth, practical delta-vs for a single rocket carrying payloads can be a few km/s. Some theoretical designs have rockets with delta-vs over 9 km/s.
The required delta-v can also be calculated for a particular manoeuvre; for example the delta-v to launch from the surface of the Earth to low Earth orbit is about 9.7 km/s, which leaves the vehicle with a sideways speed of about 7.8 km/s at an altitude of around 200 km. In this manoeuvre about 1.9 km/s is lost in air drag, gravity drag and gaining altitude.
The ratio $\dfrac{m_0}{m_1}$ is sometimes called the mass ratio.
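A minimal Python sketch of the rocket equation. It reuses the Saturn V first-stage figures quoted in the Staging section below (specific impulse 263 seconds, mass ratio about 10) and reproduces the roughly 5.9 km/s delta-v mentioned there.

import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(v_e, m0, m1):
    """Ideal velocity change from the Tsiolkovsky equation."""
    return v_e * math.log(m0 / m1)

v_e = 263 * G0                                      # exhaust velocity from Isp = 263 s
print(f"{delta_v(v_e, 10.0, 1.0) / 1000:.1f} km/s") # ~5.9 km/s for a mass ratio of 10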
Mass ratios
Almost all of a launch vehicle's mass consists of propellant. Mass ratio is, for any 'burn', the ratio between the rocket's initial mass and its final mass. Everything else being equal, a high mass ratio is desirable for good performance, since it indicates that the rocket is lightweight and hence performs better, for essentially the same reasons that low weight is desirable in sports cars.
Rockets as a group have the highest thrust-to-weight ratio of any type of engine, and this helps vehicles achieve high mass ratios, which improves the performance of flights. The higher the ratio, the less engine mass needs to be carried. This permits the carrying of even more propellant, enormously improving the delta-v. Alternatively, some rockets, such as those used for rescue scenarios or racing, carry relatively little propellant and payload and thus need only a lightweight structure and instead achieve high accelerations. For example, the Soyuz escape system can produce 20 g.
Achievable mass ratios are highly dependent on many factors such as propellant type, the design of engine the vehicle uses, structural safety margins and construction techniques.
The highest mass ratios are generally achieved with liquid rockets, and these types are usually used for orbital launch vehicles, a situation which calls for a high delta-v. Liquid propellants generally have densities similar to water (with the notable exceptions of liquid hydrogen and liquid methane), and these types are able to use lightweight, low pressure tanks and typically run high-performance turbopumps to force the propellant into the combustion chamber.
Some notable mass fractions are found in the following table (some aircraft are included for comparison purposes):
Staging
Thus far, the required velocity (delta-v) to achieve orbit has not been attained by any single rocket stage, because the propellant, tankage, structure, guidance, valves, engines and so on take a minimum percentage of take-off mass that is too great for the propellant carried to deliver that delta-v while carrying a reasonable payload. Since single-stage-to-orbit has so far not been achievable, orbital rockets always have more than one stage.
For example, the first stage of the Saturn V, carrying the weight of the upper stages, was able to achieve a mass ratio of about 10, and achieved a specific impulse of 263 seconds. This gives a delta-v of around 5.9 km/s whereas around 9.4 km/s delta-v is needed to achieve orbit with all losses allowed for.
This problem is frequently solved by staging: the rocket sheds excess weight (usually empty tankage and associated engines) during launch. Staging is either serial, where the rockets light after the previous stage has fallen away, or parallel, where rockets burn together and then detach when they burn out.
The maximum speed that can be achieved with staging is theoretically limited only by the speed of light. However, the payload that can be carried goes down geometrically with each extra stage needed, while the additional delta-v for each stage is simply additive.
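A short Python sketch of how stage delta-vs add. The first stage mirrors the Saturn V first-stage figures quoted above; the upper-stage numbers are assumptions for illustration.

import math

G0 = 9.80665
# (specific impulse s, mass ratio) per stage; the second entry is an assumed upper stage.
stages = [(263, 10.0), (421, 5.0)]

stage_dvs = [isp * G0 * math.log(mr) for isp, mr in stages]
for i, dv in enumerate(stage_dvs, start=1):
    print(f"stage {i}: {dv / 1000:.1f} km/s")
print(f"total delta-v: {sum(stage_dvs) / 1000:.1f} km/s")   # stage delta-vs simply add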
Acceleration and thrust-to-weight ratio
From Newton's second law, the acceleration, $a$, of a vehicle is simply:

$a = \dfrac{F_n}{m}$

where $m$ is the instantaneous mass of the vehicle and $F_n$ is the net force acting on the rocket (mostly thrust, but air drag and other forces can play a part).
As the remaining propellant decreases, rocket vehicles become lighter and their acceleration tends to increase until the propellant is exhausted. This means that much of the speed change occurs towards the end of the burn when the vehicle is much lighter. However, the thrust can be throttled to offset or vary this if needed. Discontinuities in acceleration also occur when stages burn out, often starting at a lower acceleration with each new stage firing.
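A minimal Python sketch of this effect, assuming constant thrust and ignoring gravity and drag; every number here is an illustrative assumption rather than data from this article.

thrust = 750_000.0    # assumed constant thrust, N
m_dry = 20_000.0      # assumed dry mass, kg
m_prop = 60_000.0     # assumed propellant load, kg
mdot = 250.0          # assumed propellant flow, kg/s (burn time 240 s)

t = 0
while m_prop > 0:
    m = m_dry + m_prop
    if t % 60 == 0:
        # acceleration a = F / m rises as the vehicle gets lighter
        print(f"t = {t:3d} s   mass = {m:6.0f} kg   a = {thrust / m:5.1f} m/s^2")
    m_prop -= mdot
    t += 1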
Peak accelerations can be increased by designing the vehicle with a reduced mass, usually achieved by a reduction in the fuel load and tankage and associated structures, but obviously this reduces range, delta-v and burn time. Still, for some applications that rockets are used for, a high peak acceleration applied for just a short time is highly desirable.
The minimal mass of vehicle consists of a rocket engine with minimal fuel and structure to carry it. In that case the thrust-to-weight ratio of the rocket engine limits the maximum acceleration that can be designed. It turns out that rocket engines generally have truly excellent thrust to weight ratios (137 for the NK-33 engine; some solid rockets are over 1000), and nearly all really high-g vehicles employ or have employed rockets.
The high accelerations that rockets naturally possess means that rocket vehicles are often capable of vertical takeoff, and in some cases, with suitable guidance and control of the engines, also vertical landing. For these operations to be done it is necessary for a vehicle's engines to provide more than the local gravitational acceleration.
Energy
Energy efficiency
The energy density of a typical rocket propellant is often around one-third that of conventional hydrocarbon fuels; the bulk of the mass is (often relatively inexpensive) oxidizer. Nevertheless, at take-off the rocket has a great deal of energy in the fuel and oxidizer stored within the vehicle. It is of course desirable that as much of the energy of the propellant end up as kinetic or potential energy of the body of the rocket as possible.
Energy from the fuel is lost in air drag and gravity drag and is used for the rocket to gain altitude and speed. However, much of the lost energy ends up in the exhaust.
In a chemical propulsion device, the engine efficiency is simply the ratio of the kinetic power of the exhaust gases to the power available from the chemical reaction:

$\eta_c = \dfrac{\tfrac{1}{2}\,\dot{m}\, v_e^{2}}{P_\text{chem}}$

where $\dot{m}$ is the propellant flow rate, $v_e$ the exhaust velocity, and $P_\text{chem}$ the chemical power released by combustion of the propellant.
100% efficiency within the engine (engine efficiency $\eta_c = 100\%$) would mean that all the heat energy of the combustion products is converted into kinetic energy of the jet. This is not possible, but the near-adiabatic high expansion ratio nozzles that can be used with rockets come surprisingly close: when the nozzle expands the gas, the gas is cooled and accelerated, and an energy efficiency of up to 70% can be achieved. Most of the rest is heat energy in the exhaust that is not recovered. The high efficiency is a consequence of the fact that rocket combustion can be performed at very high temperatures and the gas is finally released at much lower temperatures, so giving good Carnot efficiency.
However, engine efficiency is not the whole story. In common with the other jet-based engines, but particularly in rockets due to their high and typically fixed exhaust speeds, rocket vehicles are extremely inefficient at low speeds irrespective of the engine efficiency. The problem is that at low speeds, the exhaust carries away a huge amount of kinetic energy rearward. This phenomenon is termed propulsive efficiency ($\eta_p$).
However, as speeds rise, the resultant exhaust speed goes down, and the overall vehicle energetic efficiency rises, reaching a peak of around 100% of the engine efficiency when the vehicle is travelling exactly at the same speed that the exhaust is emitted. In this case the exhaust would ideally stop dead in space behind the moving vehicle, taking away zero energy, and from conservation of energy, all the energy would end up in the vehicle. The efficiency then drops off again at even higher speeds as the exhaust ends up traveling forwards – trailing behind the vehicle.
From these principles it can be shown that the propulsive efficiency $\eta_p$ for a rocket moving at speed $u$ with an exhaust velocity $c$ is:

$\eta_p = \dfrac{2\,\frac{u}{c}}{1 + \left(\frac{u}{c}\right)^{2}}$

And the overall (instantaneous) energy efficiency $\eta$ is:

$\eta = \eta_p\, \eta_c$
For example, from the equation, with an $\eta_c$ of 0.7, a rocket flying at Mach 0.85 (which most aircraft cruise at) with an exhaust velocity of Mach 10 would have a predicted overall energy efficiency of 5.9%, whereas a conventional, modern, air-breathing jet engine achieves closer to 35% efficiency. Thus a rocket would need about 6x more energy; and allowing for the specific energy of rocket propellant being around one third that of conventional air fuel, roughly 18x more mass of propellant would need to be carried for the same journey. This is why rockets are rarely if ever used for general aviation.
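A small Python sketch of the propulsive-efficiency formula as reconstructed above, showing that it peaks at u/c = 1, i.e. when the vehicle travels at the exhaust speed, as described earlier; the speed ratios sampled are arbitrary.

def eta_p(speed_ratio):
    """Propulsive efficiency as a function of u/c, using the formula given above."""
    return 2 * speed_ratio / (1 + speed_ratio ** 2)

for r in (0.1, 0.5, 1.0, 2.0, 5.0):   # assumed speed ratios for illustration
    print(f"u/c = {r:3.1f}  ->  eta_p = {eta_p(r):.2f}")
# eta_p = 1.00 at u/c = 1, so the overall efficiency peaks at the engine efficiency there.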
Since the energy ultimately comes from fuel, these considerations mean that rockets are mainly useful when a very high speed is required, such as ICBMs or orbital launch. For example, NASA's Space Shuttle fired its engines for around 8.5 minutes, consuming 1,000 tonnes of solid propellant (containing 16% aluminium) and an additional 2,000,000 litres of liquid propellant (106,261 kg of liquid hydrogen fuel) to lift the 100,000 kg vehicle (including the 25,000 kg payload) to an altitude of 111 km and an orbital velocity of 30,000 km/h. At this altitude and velocity, the vehicle had a kinetic energy of about 3 TJ and a potential energy of roughly 200 GJ. Given the initial energy of 20 TJ, the Space Shuttle was about 16% energy efficient at launching the orbiter.
Thus jet engines, which have a better match between speed and jet exhaust speed (such as turbofans, in spite of their lower exhaust velocity), dominate for subsonic and supersonic atmospheric use, while rockets work best at hypersonic speeds. On the other hand, rockets serve in many short-range, relatively low-speed military applications where their low-speed inefficiency is outweighed by their extremely high thrust and hence high accelerations.
Oberth effect
One subtle feature of rockets relates to energy. A rocket stage, while carrying a given load, is capable of giving a particular delta-v. This delta-v means that the speed increases (or decreases) by a particular amount, independent of the initial speed. However, because kinetic energy is a square law on speed, this means that the faster the rocket is travelling before the burn the more orbital energy it gains or loses.
This fact is used in interplanetary travel. It means that the amount of delta-v to reach other planets, over and above that to reach escape velocity can be much less if the delta-v is applied when the rocket is travelling at high speeds, close to the Earth or other planetary surface; whereas waiting until the rocket has slowed at altitude multiplies up the effort required to achieve the desired trajectory.
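A minimal Python sketch of the effect: the same delta-v applied at a higher speed adds more kinetic energy, since the gain is m·Δv·(v + Δv/2). The mass, delta-v and speeds are assumed, illustrative values.

m = 1000.0    # spacecraft mass, kg (assumed)
dv = 1000.0   # delta-v of the burn, m/s (assumed)

for v in (1_000.0, 8_000.0, 11_000.0):   # assumed speeds at which the burn is performed
    gain = 0.5 * m * ((v + dv) ** 2 - v ** 2)   # equals m * dv * (v + dv / 2)
    print(f"burn at {v:6.0f} m/s  ->  kinetic energy gain {gain / 1e9:.1f} GJ")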
Safety, reliability and accidents
The reliability of rockets, as for all physical systems, is dependent on the quality of engineering design and construction.
Because of the enormous chemical energy in rocket propellants (greater energy by weight than explosives, but lower than gasoline), consequences of accidents can be severe. Most space missions have some problems. In 1986, following the Space Shuttle Challenger disaster, American physicist Richard Feynman, having served on the Rogers Commission, estimated that the chance of an unsafe condition for a launch of the Shuttle was very roughly 1%; more recently the historical per person-flight risk in orbital spaceflight has been calculated to be around 2% or 4%.
In May 2003 the astronaut office made clear its position on the need and feasibility of improving crew safety for future NASA crewed missions indicating their "consensus that an order of magnitude reduction in the risk of human life during ascent, compared to the Space Shuttle, is both achievable with current technology and consistent with NASA's focus on steadily improving rocket reliability".
Costs and economics
The costs of rockets can be roughly divided into propellant costs, the costs of obtaining and/or producing the 'dry mass' of the rocket, and the costs of any required support equipment and facilities.
Most of the takeoff mass of a rocket is normally propellant. However, propellant is seldom more than a few times more expensive than gasoline per kilogram, and although substantial amounts are needed, for all but the very cheapest rockets it turns out that the propellant costs are usually comparatively small, although not completely negligible. Taking the costs of liquid oxygen and liquid hydrogen into account, the Space Shuttle in 2009 had a liquid propellant expense of approximately $1.4 million for each launch that cost $450 million from other expenses (with 40% of the mass of propellants used by it being liquids in the external fuel tank, 60% solids in the SRBs).
Even though a rocket's non-propellant, dry mass is often only between 5–20% of total mass, nevertheless this cost dominates. For hardware with the performance used in orbital launch vehicles, expenses of $2000–$10,000+ per kilogram of dry weight are common, primarily from engineering, fabrication, and testing; raw materials amount to typically around 2% of total expense. For most rockets except reusable ones (shuttle engines) the engines need not function more than a few minutes, which simplifies design.
Extreme performance requirements for rockets reaching orbit correlate with high cost, including intensive quality control to ensure reliability despite the limited safety factors allowable for weight reasons. Components produced in small numbers, if not individually machined, prevent amortization of R&D and facility costs over mass production to the degree seen in more pedestrian manufacturing. Amongst liquid-fueled rockets, complexity can be influenced by how much hardware must be lightweight: pressure-fed engines can have a part count two orders of magnitude lower than pump-fed engines, but they require heavier tanks to withstand greater pressure, and as a consequence they are most often used only in small maneuvering thrusters.
To change the preceding factors for orbital launch vehicles, proposed methods have included mass-producing simple rockets in large quantities or on large scale, or developing reusable rockets meant to fly very frequently to amortize their up-front expense over many payloads, or reducing rocket performance requirements by constructing a non-rocket spacelaunch system for part of the velocity to orbit (or all of it but with most methods involving some rocket use).
The costs of support equipment, range costs and launch pads generally scale up with the size of the rocket, but vary less with launch rate, and so may be considered to be approximately a fixed cost.
Rockets in applications other than launch to orbit (such as military rockets and rocket-assisted take off), commonly not needing comparable performance and sometimes mass-produced, are often relatively inexpensive.
2010s emerging private competition
Since the early 2010s, new private options for obtaining spaceflight services emerged, bringing substantial price pressure into the existing market.
| Technology | Space | null |
26310 | https://en.wikipedia.org/wiki/Reproduction | Reproduction | Reproduction (or procreation or breeding) is the biological process by which new individual organisms – "offspring" – are produced from their "parent" or parents. There are two forms of reproduction: asexual and sexual.
In asexual reproduction, an organism can reproduce without the involvement of another organism. Asexual reproduction is not limited to single-celled organisms. The cloning of an organism is a form of asexual reproduction. By asexual reproduction, an organism creates a genetically similar or identical copy of itself. The evolution of sexual reproduction is a major puzzle for biologists. The two-fold cost of sexual reproduction is that only 50% of organisms reproduce and organisms only pass on 50% of their genes.
Sexual reproduction typically requires the sexual interaction of two specialized reproductive cells, called gametes, which contain half the number of chromosomes of normal cells and are created by meiosis, with typically a male fertilizing a female of the same species to create a fertilized zygote. This produces offspring organisms whose genetic characteristics are derived from those of the two parental organisms.
Asexual
Asexual reproduction is a process by which organisms create genetically similar or identical copies of themselves without the contribution of genetic material from another organism. Bacteria divide asexually via binary fission; viruses take control of host cells to produce more viruses; Hydras (invertebrates of the order Hydroidea) and yeasts are able to reproduce by budding. These organisms often do not possess different sexes, and they are capable of "splitting" themselves into two or more copies of themselves. Most plants have the ability to reproduce asexually and the ant species Mycocepurus smithii is thought to reproduce entirely by asexual means.
Some species that are capable of reproducing asexually, like hydra, yeast (see Mating of yeasts) and jellyfish, may also reproduce sexually. For instance, most plants are capable of vegetative reproduction (reproduction without seeds or spores) but can also reproduce sexually. Likewise, bacteria may exchange genetic information by conjugation.
Other ways of asexual reproduction include parthenogenesis, fragmentation and spore formation that involves only mitosis. Parthenogenesis is the growth and development of an embryo or seed without fertilization. Parthenogenesis occurs naturally in some species, including lower plants (where it is called apomixis), invertebrates (e.g. water fleas, aphids, some bees and parasitic wasps), and vertebrates (e.g. some reptiles, some fish, and, very rarely, domestic birds).
Sexual
Sexual reproduction is a biological process that creates a new organism by combining the genetic material of two organisms in a process that starts with meiosis, a specialized type of cell division. Each of two parent organisms contributes half of the offspring's genetic makeup by creating haploid gametes. Most organisms form two different types of gametes. In these anisogamous species, the two sexes are referred to as male (producing sperm or microspores) and female (producing ova or megaspores). In isogamous species, the gametes are similar or identical in form (isogametes), but may have separable properties and then may be given other different names (see isogamy). Because both gametes look alike, they generally cannot be classified as male or female. For example, in the green alga, Chlamydomonas reinhardtii, there are so-called "plus" and "minus" gametes. A few types of organisms, such as many fungi and the ciliate Paramecium aurelia, have more than two "sexes", called mating types.
Most animals (including humans) and plants reproduce sexually. Sexually reproducing organisms have different sets of genes for every trait (called alleles). Offspring inherit one allele for each trait from each parent. Thus, offspring have a combination of the parents' genes. It is believed that "the masking of deleterious alleles favors the evolution of a dominant diploid phase in organisms that alternate between haploid and diploid phases" where recombination occurs freely.
Bryophytes reproduce sexually, but the larger and commonly-seen organisms are haploid and produce gametes. The gametes fuse to form a zygote which develops into a sporangium, which in turn produces haploid spores. The diploid stage is relatively small and short-lived compared to the haploid stage, i.e. haploid dominance. The advantage of diploidy, heterosis, only exists in the diploid life generation. Bryophytes retain sexual reproduction despite the fact that the haploid stage does not benefit from heterosis. This may be an indication that the sexual reproduction has advantages other than heterosis, such as genetic recombination between members of the species, allowing the expression of a wider range of traits and thus making the population more able to survive environmental variation.
Allogamy
Allogamy is the fertilization of flowers through cross-pollination; this occurs when a flower's ovum is fertilized by spermatozoa from the pollen of a different plant's flower. Pollen may be transferred through pollen vectors or abiotic carriers such as wind. Fertilization begins when the pollen is brought to a female gamete through the pollen tube. Allogamy is also known as cross-fertilization, in contrast to autogamy or geitonogamy, which are methods of self-fertilization.
Autogamy
Self-fertilization, also known as autogamy, occurs in hermaphroditic organisms where the two gametes fused in fertilization come from the same individual, e.g., many vascular plants, some foraminiferans, some ciliates. The term "autogamy" is sometimes substituted for autogamous pollination (not necessarily leading to successful fertilization) and describes self-pollination within the same flower, distinguished from geitonogamous pollination, transfer of pollen to a different flower on the same flowering plant, or within a single monoecious gymnosperm plant.
Mitosis and meiosis
Mitosis and meiosis are types of cell division. Mitosis occurs in somatic cells, while meiosis occurs in germ cells to produce gametes.
Mitosis
The resultant number of cells in mitosis is twice the number of original cells. The number of chromosomes in the offspring cells is the same as that of the parent cell.
Meiosis
The resultant number of cells is four times the number of original cells. This results in cells with half the number of chromosomes present in the parent cell. A diploid cell duplicates itself, then undergoes two divisions (tetraploid to diploid to haploid), in the process forming four haploid cells. This process occurs in two phases, meiosis I and meiosis II.
Gametogenesis
Animals, including mammals, produce gametes (sperm and egg) by means of meiosis in gonads (testicles in males and ovaries in females). Sperm are produced by spermatogenesis and eggs are produced by oogenesis. During gametogenesis in mammals numerous genes encoding proteins that participate in DNA repair mechanisms exhibit enhanced or specialized expression. Male germ cells produced in the testes of animals are capable of special DNA repair processes that function during meiosis to repair DNA damages and to maintain the integrity of the genomes that are to be passed on to progeny. Such DNA repair processes include homologous recombinational repair as well as non-homologous end joining. Oocytes located in the primordial follicle of the ovary are in a non-growing prophase arrested state, but are able to undergo highly efficient homologous recombinational repair of DNA damages including double-strand breaks. These repair processes allow the integrity of the genome to be maintained and offspring health to be protected.
Same-sex
Scientific research is currently investigating the possibility of same-sex procreation, which would produce offspring with equal genetic contributions from either two females or two males. The obvious approaches, subject to a growing amount of activity, are female sperm and male eggs. In 2004, by altering the function of a few genes involved with imprinting, Japanese scientists combined two mouse eggs to produce daughter mice, and in 2018 Chinese scientists created 29 female mice from two female mice mothers but were unable to produce viable offspring from two father mice. Researchers noted that there is little chance these techniques would be applied to humans in the near future.
Strategies
There are a wide range of reproductive strategies employed by different species. Some animals, such as the human and northern gannet, do not reach sexual maturity for many years after birth and even then produce few offspring. Others reproduce quickly; but, under normal circumstances, most offspring do not survive to adulthood. For example, a rabbit (mature after 8 months) can produce 10–30 offspring per year, and a fruit fly (mature after 10–14 days) can produce up to 900 offspring per year. These two main strategies are known as K-selection (few offspring) and r-selection (many offspring). Which strategy is favoured by evolution depends on a variety of circumstances. Animals with few offspring can devote more resources to the nurturing and protection of each individual offspring, thus reducing the need for many offspring. On the other hand, animals with many offspring may devote fewer resources to each individual offspring; for these types of animals it is common for many offspring to die soon after birth, but enough individuals typically survive to maintain the population. Some organisms such as honey bees and fruit flies retain sperm in a process called sperm storage thereby increasing the duration of their fertility.
Other types
Polycyclic animals reproduce intermittently throughout their lives.
Semelparous organisms reproduce only once in their lifetime, such as annual plants (including all grain crops), and certain species of salmon, spider, bamboo and century plant. Often, they die shortly after reproduction. This is often associated with r-strategists.
Iteroparous organisms produce offspring in successive (e.g. annual or seasonal) cycles, such as perennial plants. Iteroparous animals survive over multiple seasons (or periodic condition changes). This is more associated with K-strategists.
Asexual vs. sexual reproduction
Organisms that reproduce through asexual reproduction tend to grow in number exponentially. However, because they rely on mutation for variations in their DNA, all members of the species have similar vulnerabilities. Organisms that reproduce sexually yield a smaller number of offspring, but the large amount of variation in their genes makes them less susceptible to disease.
Many organisms can reproduce sexually as well as asexually. Aphids, slime molds, sea anemones, some species of starfish (by fragmentation), and many plants are examples. When environmental factors are favorable, asexual reproduction is employed to exploit suitable conditions for survival such as an abundant food supply, adequate shelter, favorable climate, absence of disease, optimum pH or a proper mix of other lifestyle requirements. Populations of these organisms increase exponentially via asexual reproductive strategies to take full advantage of the rich supply of resources.
When food sources have been depleted, the climate becomes hostile, or individual survival is jeopardized by some other adverse change in living conditions, these organisms switch to sexual forms of reproduction. Sexual reproduction ensures a mixing of the gene pool of the species. The variations found in offspring of sexual reproduction allow some individuals to be better suited for survival and provide a mechanism for selective adaptation to occur. The meiosis stage of the sexual cycle also allows especially effective repair of DNA damages (see Meiosis). In addition, sexual reproduction usually results in the formation of a life stage that is able to endure the conditions that threaten the offspring of an asexual parent. Thus, seeds, spores, eggs, pupae, cysts or other "over-wintering" stages of sexual reproduction ensure the survival during unfavorable times and the organism can "wait out" adverse situations until a swing back to suitability occurs.
Life without
The existence of life without reproduction is the subject of some speculation. The biological study of how the origin of life produced reproducing organisms from non-reproducing elements is called abiogenesis. Whether or not there were several independent abiogenetic events, biologists believe that the last universal ancestor to all present life on Earth lived about 3.5 billion years ago.
Scientists have speculated about the possibility of creating life non-reproductively in the laboratory. Several scientists have succeeded in producing simple viruses from entirely non-living materials. However, viruses are often regarded as not alive. Being nothing more than a bit of RNA or DNA in a protein capsule, they have no metabolism and can only replicate with the assistance of a hijacked cell's metabolic machinery.
The production of a truly living organism (e.g. a simple bacterium) with no ancestors would be a much more complex task, but may well be possible to some degree according to current biological knowledge. A synthetic genome has been transferred into an existing bacterium where it replaced the native DNA, resulting in the artificial production of a new M. mycoides organism.
There is some debate within the scientific community over whether this cell can be considered completely synthetic on the grounds that the chemically synthesized genome was an almost 1:1 copy of a naturally occurring genome and the recipient cell was a naturally occurring bacterium. The Craig Venter Institute maintains the term "synthetic bacterial cell" but also clarifies "...we do not consider this to be "creating life from scratch" but rather we are creating new life out of already existing life using synthetic DNA". Venter plans to patent his experimental cells, stating that "they are pretty clearly human inventions". Its creators suggest that building 'synthetic life' would allow researchers to learn about life by building it, rather than by tearing it apart. They also propose to stretch the boundaries between life and machines until the two overlap to yield "truly programmable organisms". Researchers involved stated that the creation of "true synthetic biochemical life" is relatively close in reach with current technology and cheap compared to the effort needed to place man on the Moon.
Lottery principle
Sexual reproduction has many drawbacks, since it requires far more energy than asexual reproduction and diverts the organisms from other pursuits, and there is some argument about why so many species use it. George C. Williams used lottery tickets as an analogy in one explanation for the widespread use of sexual reproduction. He argued that asexual reproduction, which produces little or no genetic variety in offspring, was like buying many tickets that all have the same number, limiting the chance of "winning" – that is, producing surviving offspring. Sexual reproduction, he argued, was like purchasing fewer tickets but with a greater variety of numbers and therefore a greater chance of success. The point of this analogy is that since asexual reproduction does not produce genetic variations, there is little ability to quickly adapt to a changing environment. The lottery principle is less accepted these days because of evidence that asexual reproduction is more prevalent in unstable environments, the opposite of what it predicts.
| Biology and health sciences | Biology | null |
26350 | https://en.wikipedia.org/wiki/Radiation%20therapy | Radiation therapy | Radiation therapy or radiotherapy (RT, RTx, or XRT) is a treatment using ionizing radiation, generally provided as part of cancer therapy to either kill or control the growth of malignant cells. It is normally delivered by a linear particle accelerator. Radiation therapy may be curative in a number of types of cancer if they are localized to one area of the body, and have not spread to other parts. It may also be used as part of adjuvant therapy, to prevent tumor recurrence after surgery to remove a primary malignant tumor (for example, early stages of breast cancer). Radiation therapy is synergistic with chemotherapy, and has been used before, during, and after chemotherapy in susceptible cancers. The subspecialty of oncology concerned with radiotherapy is called radiation oncology. A physician who practices in this subspecialty is a radiation oncologist.
Radiation therapy is commonly applied to the cancerous tumor because of its ability to control cell growth. Ionizing radiation works by damaging the DNA of cancerous tissue leading to cellular death. To spare normal tissues (such as skin or organs which radiation must pass through to treat the tumor), shaped radiation beams are aimed from several angles of exposure to intersect at the tumor, providing a much larger absorbed dose there than in the surrounding healthy tissue. Besides the tumor itself, the radiation fields may also include the draining lymph nodes if they are clinically or radiologically involved with the tumor, or if there is thought to be a risk of subclinical malignant spread. It is necessary to include a margin of normal tissue around the tumor to allow for uncertainties in daily set-up and internal tumor motion. These uncertainties can be caused by internal movement (for example, respiration and bladder filling) and movement of external skin marks relative to the tumor position.
Radiation oncology is the medical specialty concerned with prescribing radiation, and is distinct from radiology, the use of radiation in medical imaging and diagnosis. Radiation may be prescribed by a radiation oncologist with intent to cure or for adjuvant therapy. It may also be used as palliative treatment (where cure is not possible and the aim is for local disease control or symptomatic relief) or as therapeutic treatment (where the therapy has survival benefit and can be curative). It is also common to combine radiation therapy with surgery, chemotherapy, hormone therapy, immunotherapy or some mixture of the four. Most common cancer types can be treated with radiation therapy in some way.
The precise treatment intent (curative, adjuvant, neoadjuvant therapeutic, or palliative) will depend on the tumor type, location, and stage, as well as the general health of the patient. Total body irradiation (TBI) is a radiation therapy technique used to prepare the body to receive a bone marrow transplant. Brachytherapy, in which a radioactive source is placed inside or next to the area requiring treatment, is another form of radiation therapy that minimizes exposure to healthy tissue during procedures to treat cancers of the breast, prostate, and other organs. Radiation therapy has several applications in non-malignant conditions, such as the treatment of trigeminal neuralgia, acoustic neuromas, severe thyroid eye disease, pterygium, pigmented villonodular synovitis, and prevention of keloid scar growth, vascular restenosis, and heterotopic ossification. The use of radiation therapy in non-malignant conditions is limited partly by worries about the risk of radiation-induced cancers.
Medical uses
It is estimated that half of the US' 1.2M invasive cancer cases diagnosed in 2022 received radiation therapy in their treatment program. Different cancers respond to radiation therapy in different ways.
The response of a cancer to radiation is described by its radiosensitivity.
Highly radiosensitive cancer cells are rapidly killed by modest doses of radiation. These include leukemias, most lymphomas, and germ cell tumors.
The majority of epithelial cancers are only moderately radiosensitive, and require a significantly higher dose of radiation (60–70 Gy) to achieve a radical cure.
Some types of cancer are notably radioresistant, that is, much higher doses are required to produce a radical cure than may be safe in clinical practice. Renal cell cancer and melanoma are generally considered to be radioresistant but radiation therapy is still a palliative option for many patients with metastatic melanoma. Combining radiation therapy with immunotherapy is an active area of investigation and has shown some promise for melanoma and other cancers.
It is important to distinguish the radiosensitivity of a particular tumor, which to some extent is a laboratory measure, from the radiation "curability" of a cancer in actual clinical practice. For example, leukemias are not generally curable with radiation therapy, because they are disseminated through the body. Lymphoma may be radically curable if it is localized to one area of the body. Similarly, many of the common, moderately radioresponsive tumors are routinely treated with curative doses of radiation therapy if they are at an early stage; examples include non-melanoma skin cancer, head and neck cancer, breast cancer, non-small cell lung cancer, cervical cancer, anal cancer, and prostate cancer. With the exception of oligometastatic disease, metastatic cancers are incurable with radiation therapy because it is not possible to treat the whole body.
Modern radiation therapy relies on a CT scan to identify the tumor and surrounding normal structures and to perform dose calculations for the creation of a complex radiation treatment plan. The patient receives small skin marks to guide the placement of treatment fields. Patient positioning is crucial at this stage as the patient will have to be placed in an identical position during each treatment. Many patient positioning devices have been developed for this purpose, including masks and cushions which can be molded to the patient. Image-guided radiation therapy is a method that uses imaging to correct for positional errors of each treatment session.
The response of a tumor to radiation therapy is also related to its size. Due to complex radiobiology, very large tumors are affected less by radiation compared to smaller tumors or microscopic disease. Various strategies are used to overcome this effect. The most common technique is surgical resection prior to radiation therapy. This is most commonly seen in the treatment of breast cancer with wide local excision or mastectomy followed by adjuvant radiation therapy. Another method is to shrink the tumor with neoadjuvant chemotherapy prior to radical radiation therapy. A third technique is to enhance the radiosensitivity of the cancer by giving certain drugs during a course of radiation therapy. Examples of radiosensitizing drugs include cisplatin, nimorazole, and cetuximab.
The impact of radiotherapy varies between different types of cancer and different groups. For example, for breast cancer after breast-conserving surgery, radiotherapy has been found to halve the rate at which the disease recurs. In pancreatic cancer, radiotherapy has increased survival times for inoperable tumors.
Side effects
Radiation therapy (RT) is in itself painless, but has iatrogenic side effect risks. Many low-dose palliative treatments (for example, radiation therapy to bony metastases) cause minimal or no side effects, although short-term pain flare-up can be experienced in the days following treatment due to oedema compressing nerves in the treated area. Higher doses can cause varying side effects during treatment (acute side effects), in the months or years following treatment (long-term side effects), or after re-treatment (cumulative side effects). The nature, severity, and longevity of side effects depends on the organs that receive the radiation, the treatment itself (type of radiation, dose, fractionation, concurrent chemotherapy), and the patient. Serious radiation complications may occur in 5% of RT cases. Acute (near immediate) or sub-acute (2 to 3 months post RT) radiation side effects may develop after 50 Gy RT dosing. Late or delayed radiation injury (6 months to decades) may develop after 65 Gy.
Most side effects are predictable and expected. Side effects from radiation are usually limited to the area of the patient's body that is under treatment. Side effects are dose-dependent; for example, higher doses of head and neck radiation can be associated with cardiovascular complications, thyroid dysfunction, and pituitary axis dysfunction. Modern radiation therapy aims to reduce side effects to a minimum and to help the patient understand and deal with side effects that are unavoidable.
The main side effects reported are fatigue and skin irritation, like a mild to moderate sunburn. The fatigue often sets in during the middle of a course of treatment and can last for weeks after treatment ends. The irritated skin will heal, but may not be as elastic as it was before.
Acute side effects
Nausea and vomiting
This is not a general side effect of radiation therapy, and mechanistically is associated only with treatment of the stomach or abdomen (which commonly react a few hours after treatment), or with radiation therapy to certain nausea-producing structures in the head during treatment of certain head and neck tumors, most commonly the vestibules of the inner ears. As with any distressing treatment, some patients vomit immediately during radiotherapy, or even in anticipation of it, but this is considered a psychological response. Nausea for any reason can be treated with antiemetics.
Damage to the epithelial surfaces
Epithelial surfaces may sustain damage from radiation therapy. Depending on the area being treated, this may include the skin, the oral, pharyngeal, and bowel mucosa, and the ureter. The rates of onset of damage and recovery from it depend upon the turnover rate of epithelial cells. Typically the skin starts to become pink and sore several weeks into treatment. The reaction may become more severe during the treatment and for up to about one week following the end of radiation therapy, and the skin may break down. Although this moist desquamation is uncomfortable, recovery is usually quick. Skin reactions tend to be worse in areas where there are natural folds in the skin, such as underneath the female breast, behind the ear, and in the groin.
Mouth, throat and stomach sores
If the head and neck area is treated, temporary soreness and ulceration commonly occur in the mouth and throat. If severe, this can affect swallowing, and the patient may need painkillers and nutritional support/food supplements. The esophagus can also become sore if it is treated directly, or if, as commonly occurs, it receives a dose of collateral radiation during treatment of lung cancer. When treating liver malignancies and metastases, it is possible for collateral radiation to cause gastric or duodenal ulcers. This collateral radiation is commonly caused by non-targeted delivery (reflux) of the radioactive agents being infused. Methods, techniques and devices are available to lower the occurrence of this type of adverse side effect.
Intestinal discomfort
The lower bowel may be treated directly with radiation (treatment of rectal or anal cancer) or be exposed by radiation therapy to other pelvic structures (prostate, bladder, female genital tract). Typical symptoms are soreness, diarrhoea, and nausea. Nutritional interventions may be able to help with diarrhoea associated with radiotherapy. Studies in people having pelvic radiotherapy as part of anticancer treatment for a primary pelvic cancer found that changes in dietary fat, fibre and lactose during radiotherapy reduced diarrhoea at the end of treatment.
Swelling
As part of the general inflammation that occurs, swelling of soft tissues may cause problems during radiation therapy. This is a concern during treatment of brain tumors and brain metastases, especially where there is pre-existing raised intracranial pressure or where the tumor is causing near-total obstruction of a lumen (e.g., trachea or main bronchus). Surgical intervention may be considered prior to treatment with radiation. If surgery is deemed unnecessary or inappropriate, the patient may receive steroids during radiation therapy to reduce swelling.
Infertility
The gonads (ovaries and testicles) are very sensitive to radiation. They may be unable to produce gametes following direct exposure to most normal treatment doses of radiation. Treatment planning for all body sites is designed to minimize, if not completely exclude, the dose to the gonads if they are not the primary area of treatment.
Late side effects
Late side effects occur months to years after treatment and are generally limited to the area that has been treated. They are often due to damage of blood vessels and connective tissue cells. Many late effects are reduced by fractionating treatment into smaller parts.
Fibrosis
Tissues which have been irradiated tend to become less elastic over time due to a diffuse scarring process.
Epilation
Epilation (hair loss) may occur on any hair-bearing skin with doses above 1 Gy. It only occurs within the radiation field(s). Hair loss may be permanent with a single dose of 10 Gy, but if the dose is fractionated, permanent hair loss may not occur until the dose exceeds 45 Gy.
Dryness
The salivary glands and tear glands have a radiation tolerance of about 30 Gy in 2 Gy fractions, a dose which is exceeded by most radical head and neck cancer treatments. Dry mouth (xerostomia) and dry eyes (xerophthalmia) can become irritating long-term problems and severely reduce the patient's quality of life. Similarly, sweat glands in treated skin (such as the armpit) tend to stop working, and the naturally moist vaginal mucosa is often dry following pelvic irradiation.
Chronic sinus drainage
Radiation therapy treatments to the head and neck regions for soft tissue, palate or bone cancer can cause chronic sinus tract draining and fistulae from the bone.
Lymphedema
Lymphedema, a condition of localized fluid retention and tissue swelling, can result from damage to the lymphatic system sustained during radiation therapy. It is the most commonly reported complication in breast radiation therapy patients who receive adjuvant axillary radiotherapy following surgery to clear the axillary lymph nodes.
Cancer
Radiation is a potential cause of cancer, and secondary malignancies are seen in some patients. Cancer survivors are already more likely than the general population to develop malignancies due to a number of factors including lifestyle choices, genetics, and previous radiation treatment. It is difficult to directly quantify the rates of these secondary cancers from any single cause. Studies have found radiation therapy as the cause of secondary malignancies for only a small minority of patients, e.g., exposure to ionizing radiation is an identified risk factor for subsequent glioma; see main topic Glioma#Causes. The combined risk of a radiation-induced glioblastoma or astrocytoma within 15 years of the initial radiotherapy is 0.5–2.7%.
New techniques such as proton beam therapy and carbon ion radiotherapy, which aim to reduce the dose to healthy tissues, will lower these risks. Radiation-induced secondary malignancies typically start to occur 4–6 years following treatment, although some haematological malignancies may develop within 3 years. In the vast majority of cases, this risk is greatly outweighed by the reduction in risk conferred by treating the primary cancer, even in pediatric malignancies, which carry a higher burden of secondary malignancies.
Cardiovascular disease
Radiation can increase the risk of heart disease and death, as observed in previous breast cancer RT regimens. Therapeutic radiation increases the risk of a subsequent cardiovascular event (i.e., heart attack or stroke) by 1.5 to 4 times a person's normal rate, depending on aggravating factors. The increase is dose dependent, related to the RT's dose strength, volume and location. Use of concomitant chemotherapy, e.g. anthracyclines, is an aggravating risk factor. The occurrence rate of RT-induced cardiovascular disease is estimated at between 10 and 30%.
Cardiovascular late side effects have been termed radiation-induced heart disease (RIHD) and radiation-induced cardiovascular disease (RIVD). Symptoms are dose dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side effect symptoms. Most radiation-induced cardiovascular diseases occur 10 or more years post treatment, making causality determinations more difficult.
Cognitive decline
Radiation applied to the head may cause cognitive decline, which was especially apparent in young children between the ages of 5 and 11. Studies found, for example, that the IQ of 5-year-old children declined each year after treatment by several IQ points.
Radiation enteropathy
The gastrointestinal tract can be damaged following abdominal and pelvic radiotherapy. Atrophy, fibrosis and vascular changes produce malabsorption, diarrhea, steatorrhea and bleeding, with bile acid diarrhea and vitamin B12 malabsorption commonly found due to ileal involvement. Pelvic radiation disease includes radiation proctitis, producing bleeding, diarrhea and urgency, and can also cause radiation cystitis when the bladder is affected.
Lung injury
Radiation-induced lung injury (RILI) encompasses radiation pneumonitis and pulmonary fibrosis. Lung tissue is sensitive to ionizing radiation, tolerating only 18–20 Gy, a fraction of typical therapeutic dosage levels. The lung's terminal airways and associated alveoli can become damaged, preventing effective respiratory gas exchange. The adverse effects of radiation are often asymptomatic with clinically significant RILI occurrence rates varying widely in literature, affecting 5–25% of those treated for thoracic and mediastinal malignancies and 1–5% of those treated for breast cancer.
Radiation-induced polyneuropathy
Radiation treatments may damage nerves near the target area or within the delivery path, as nerve tissue is also radiosensitive. Nerve damage from ionizing radiation occurs in phases: the initial phase results from microvascular injury, capillary damage and nerve demyelination. Subsequent damage occurs from vascular constriction and nerve compression due to uncontrolled fibrous tissue growth caused by radiation. Radiation-induced polyneuropathy, ICD-10-CM code G62.82, occurs in approximately 1–5% of those receiving radiation therapy.
Depending upon the irradiated zone, late effect neuropathy may occur in either the central nervous system (CNS) or the peripheral nervous system (PNS). In the CNS for example, cranial nerve injury typically presents as a visual acuity loss 1–14 years post treatment. In the PNS, injury to the plexus nerves presents as radiation-induced brachial plexopathy or radiation-induced lumbosacral plexopathy appearing up to 3 decades post treatment.
Myokymia (muscle cramping, spasms or twitching) may develop. Radiation-induced nerve injury, chronic compressive neuropathies and polyradiculopathies are the most common causes of myokymic discharges. Clinically, the majority of patients receiving radiation therapy have measurable myokymic discharges within their field of radiation, which present as focal or segmental myokymia. Common areas affected include the arms, legs or face, depending upon the location of the nerve injury. Myokymia is more frequent when radiation doses exceed 10 gray (Gy).
Radiation necrosis
Radiation necrosis is the death of healthy tissue near the irradiated site. It is a type of coagulative necrosis that occurs because the radiation directly or indirectly damages blood vessels in the area, which reduces the blood supply to the remaining healthy tissue, causing it to die by ischemia, similar to what happens in an ischemic stroke. Because it is an indirect effect of the treatment, it occurs months to decades after radiation exposure. Radiation necrosis most commonly presents as osteoradionecrosis, vaginal radionecrosis, soft tissue radionecrosis, or laryngeal radionecrosis.
Cumulative side effects
Cumulative effects from this process should not be confused with long-term effects – when short-term effects have disappeared and long-term effects are subclinical, reirradiation can still be problematic. These doses are calculated by the radiation oncologist and many factors are taken into account before the subsequent radiation takes place.
Effects on reproduction
During the first two weeks after fertilization, radiation therapy is lethal but not teratogenic. High doses of radiation during pregnancy induce anomalies, impaired growth and intellectual disability, and there may be an increased risk of childhood leukemia and other tumors in the offspring.
In males previously having undergone radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. However, the use of assisted reproductive technologies and micromanipulation techniques might increase this risk.
Effects on pituitary system
Hypopituitarism commonly develops after radiation therapy for sellar and parasellar neoplasms, extrasellar brain tumors, head and neck tumors, and following whole body irradiation for systemic malignancies. 40–50% of children treated for childhood cancer develop some endocrine side effect. Radiation-induced hypopituitarism mainly affects growth hormone and gonadal hormones. In contrast, adrenocorticotrophic hormone (ACTH) and thyroid stimulating hormone (TSH) deficiencies are the least common among people with radiation-induced hypopituitarism. Changes in prolactin secretion are usually mild, and vasopressin deficiency appears to be very rare as a consequence of radiation.
Effects on subsequent surgery
Delayed tissue injury with impaired wound-healing capability often develops after doses in excess of 65 Gy. A diffuse injury pattern occurs because of the way external beam radiotherapy distributes dose: while the targeted tumor receives the majority of the radiation, healthy tissue at increasing distances from the center of the tumor is also irradiated owing to beam divergence. These wounds demonstrate progressive, proliferative endarteritis, inflamed arterial linings that disrupt the tissue's blood supply. Such tissue ends up chronically hypoxic, fibrotic, and without an adequate nutrient and oxygen supply. Surgery of previously irradiated tissue has a very high failure rate; for example, women who have received radiation for breast cancer develop late-effect chest wall tissue fibrosis and hypovascularity, making successful reconstruction and healing difficult, if not impossible.
Radiation therapy accidents
There are rigorous procedures in place to minimise the risk of accidental overexposure of radiation therapy to patients. However, mistakes do occasionally occur; for example, the radiation therapy machine Therac-25 was responsible for at least six accidents between 1985 and 1987, in which patients were given up to one hundred times the intended dose; two people were killed directly by the radiation overdoses. From 2005 to 2010, a hospital in Missouri overexposed 76 patients (most with brain cancer) because new radiation equipment had been set up incorrectly.
Although medical errors are exceptionally rare, radiation oncologists, medical physicists and other members of the radiation therapy treatment team are working to eliminate them. In 2010 the American Society for Radiation Oncology (ASTRO) launched a safety initiative called Target Safely that, among other things, aimed to record errors nationwide so that doctors can learn from each and every mistake and prevent them from recurring. ASTRO also publishes a list of questions for patients to ask their doctors about radiation safety to ensure every treatment is as safe as possible.
Use in non-cancerous diseases
Radiation therapy is used to treat early-stage Dupuytren's disease and Ledderhose disease. When Dupuytren's disease is at the nodules-and-cords stage, or when fingers are at a minimal deformation stage of less than 10 degrees, radiation therapy is used to prevent further progress of the disease. Radiation therapy is also used after surgery in some cases to prevent the disease continuing to progress. Low doses of radiation are used, typically three gray of radiation for five days, with a break of three months followed by another phase of three gray of radiation for five days.
Technique
Mechanism of action
Radiation therapy works by damaging the DNA of cancer cells, which can cause them to undergo mitotic catastrophe. This DNA damage is caused by one of two types of energy: photons or charged particles. The damage occurs through either direct or indirect ionization of the atoms that make up the DNA chain. Indirect ionization happens as a result of the ionization of water, forming free radicals, notably hydroxyl radicals, which then damage the DNA.
In photon therapy, most of the radiation effect is through free radicals. Cells have mechanisms for repairing single-strand DNA damage and double-stranded DNA damage. However, double-stranded DNA breaks are much more difficult to repair, and can lead to dramatic chromosomal abnormalities and genetic deletions. Targeting double-stranded breaks increases the probability that cells will undergo cell death. Cancer cells are generally less differentiated and more stem cell-like; they reproduce more than most healthy differentiated cells, and have a diminished ability to repair sub-lethal damage. Single-strand DNA damage is then passed on through cell division; damage to the cancer cells' DNA accumulates, causing them to die or reproduce more slowly.
One of the major limitations of photon radiation therapy is that the cells of solid tumors become deficient in oxygen. Solid tumors can outgrow their blood supply, causing a low-oxygen state known as hypoxia. Oxygen is a potent radiosensitizer, increasing the effectiveness of a given dose of radiation by forming DNA-damaging free radicals. Tumor cells in a hypoxic environment may be as much as 2 to 3 times more resistant to radiation damage than those in a normal oxygen environment. Much research has been devoted to overcoming hypoxia including the use of high pressure oxygen tanks, hyperthermia therapy (heat therapy which dilates blood vessels to the tumor site), blood substitutes that carry increased oxygen, hypoxic cell radiosensitizer drugs such as misonidazole and metronidazole, and hypoxic cytotoxins (tissue poisons), such as tirapazamine. Newer research approaches are currently being studied, including preclinical and clinical investigations into the use of an oxygen diffusion-enhancing compound such as trans sodium crocetinate as a radiosensitizer.
Charged particles such as protons and boron, carbon, and neon ions can cause direct damage to cancer cell DNA through high LET (linear energy transfer) and have an antitumor effect independent of tumor oxygen supply, because these particles act mostly via direct energy transfer, usually causing double-stranded DNA breaks. Due to their relatively large mass, protons and other charged particles have little lateral side scatter in the tissue – the beam does not broaden much, stays focused on the tumor shape, and delivers only a small dose to the surrounding tissue. They also more precisely target the tumor using the Bragg peak effect. See proton therapy for a good example of the different effects of intensity-modulated radiation therapy (IMRT) vs. charged particle therapy. This procedure reduces damage to healthy tissue between the charged particle radiation source and the tumor and sets a finite range for tissue damage after the tumor has been reached. In contrast, IMRT's use of uncharged particles causes its energy to damage healthy cells when it exits the body. This exiting damage is not therapeutic, can increase treatment side effects, and increases the probability of secondary cancer induction. This difference is very important in cases where the close proximity of other organs makes any stray ionization very damaging (for example, head and neck cancers). This X-ray exposure is especially bad for children, due to their growing bodies; depending on a multitude of factors, they are around 10 times more sensitive to developing secondary malignancies after radiotherapy than adults.
Dose
The amount of radiation used in photon radiation therapy is measured in grays (Gy), and varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy.
Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers). Many other factors are considered by radiation oncologists when selecting a dose, including whether the patient is receiving chemotherapy, patient comorbidities, whether radiation therapy is being administered before or after surgery, and the degree of success of surgery.
Delivery parameters of a prescribed dose are determined during treatment planning (part of dosimetry). Treatment planning is generally performed on dedicated computers using specialized treatment planning software. Depending on the radiation delivery method, several angles or sources may be used to sum to the total necessary dose. The planner will try to design a plan that delivers a uniform prescription dose to the tumor and minimizes dose to surrounding healthy tissues.
In radiation therapy, three-dimensional dose distributions may be evaluated using the dosimetry technique known as gel dosimetry.
Fractionation
The total dose is fractionated (spread out over time) for several important reasons. Fractionation allows normal cells time to recover, while tumor cells are generally less efficient in repair between fractions. Fractionation also allows tumor cells that were in a relatively radio-resistant phase of the cell cycle during one treatment to cycle into a sensitive phase of the cycle before the next fraction is given. Similarly, tumor cells that were chronically or acutely hypoxic (and therefore more radioresistant) may reoxygenate between fractions, improving the tumor cell kill.
Fractionation regimens are individualised between different radiation therapy centers and even between individual doctors. In North America, Australia, and Europe, the typical fractionation schedule for adults is 1.8 to 2 Gy per day, five days a week. In some cancer types, prolonging the fractionation schedule for too long can allow the tumor to begin repopulating, and for these tumor types, including head-and-neck and cervical squamous cell cancers, radiation treatment is preferably completed within a certain amount of time. For children, a typical fraction size may be 1.5 to 1.8 Gy per day, as smaller fraction sizes are associated with reduced incidence and severity of late-onset side effects in normal tissues.
In some cases, two fractions per day are used near the end of a course of treatment. This schedule, known as a concomitant boost regimen or hyperfractionation, is used on tumors that regenerate more quickly when they are smaller. In particular, tumors in the head-and-neck demonstrate this behavior.
Patients receiving palliative radiation to treat uncomplicated painful bone metastasis should not receive more than a single fraction of radiation. A single treatment gives comparable pain relief and morbidity outcomes to multiple-fraction treatments, and for patients with limited life expectancy, a single treatment is best to improve patient comfort.
Schedules for fractionation
One fractionation schedule that is increasingly being used and continues to be studied is hypofractionation. This is a radiation treatment in which the total dose of radiation is divided into a smaller number of larger doses. Typical doses vary significantly by cancer type, from 2.2 Gy/fraction to 20 Gy/fraction, the latter being typical of stereotactic treatments (stereotactic ablative body radiotherapy, or SABR – also known as SBRT, or stereotactic body radiotherapy) for subcranial lesions, or SRS (stereotactic radiosurgery) for intracranial lesions. The rationale of hypofractionation is to reduce the probability of local recurrence by denying clonogenic cells the time they require to reproduce, and also to exploit the radiosensitivity of some tumors. In particular, stereotactic treatments are intended to destroy clonogenic cells by a process of ablation, i.e., the delivery of a dose intended to destroy clonogenic cells directly, rather than to interrupt the process of clonogenic cell division repeatedly (apoptosis), as in routine radiotherapy.
Estimation of dose based on target sensitivity
Different cancer types have different radiation sensitivity. While predicting the sensitivity based on genomic or proteomic analyses of biopsy samples has proven challenging, the predictions of radiation effect on individual patients from genomic signatures of intrinsic cellular radiosensitivity have been shown to associate with clinical outcome. An alternative approach to genomics and proteomics was offered by the discovery that radiation protection in microbes is offered by non-enzymatic complexes of manganese and small organic metabolites. The content and variation of manganese (measurable by electron paramagnetic resonance) were found to be good predictors of radiosensitivity, and this finding extends also to human cells. An association was confirmed between total cellular manganese contents and their variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients.
Types
Historically, the three main divisions of radiation therapy are:
external beam radiation therapy (EBRT or XRT) or teletherapy;
brachytherapy or sealed source radiation therapy; and
systemic radioisotope therapy or unsealed source radiotherapy.
The differences relate to the position of the radiation source: external beam radiation comes from outside the body, brachytherapy uses sealed radioactive sources placed precisely in the area under treatment, and systemic radioisotopes are given by infusion or oral ingestion. Brachytherapy can use temporary or permanent placement of radioactive sources. The temporary sources are usually placed by a technique called afterloading. In afterloading, a hollow tube or applicator is placed surgically in the organ to be treated, and the sources are loaded into the applicator after the applicator is implanted. This minimizes radiation exposure to health care personnel.
Particle therapy is a special case of external beam radiation therapy where the particles are protons or heavier ions.
A review of radiation therapy randomised clinical trials from 2018 to 2021 found many practice-changing results and new concepts emerging from RCTs, identifying techniques that improve the therapeutic ratio and techniques that lead to more tailored treatments, stressing the importance of patient satisfaction, and identifying areas that require further study.
External beam radiation therapy
The following three sections refer to treatment using X-rays.
Conventional external beam radiation therapy
Historically, conventional external beam radiation therapy (2DXRT) was delivered via two-dimensional beams using kilovoltage therapy X-ray units, medical linear accelerators that generate high-energy X-rays, or machines that were similar to a linear accelerator in appearance but used a sealed radioactive source. 2DXRT mainly consists of a single beam of radiation delivered to the patient from several directions: often front or back, and both sides.
Conventional refers to the way the treatment is planned or simulated on a specially calibrated diagnostic X-ray machine known as a simulator because it recreates the linear accelerator actions (or sometimes by eye), and to the usually well-established arrangements of the radiation beams to achieve a desired plan. The aim of simulation is to accurately target or localize the volume which is to be treated. This technique is well established and is generally quick and reliable. The worry is that some high-dose treatments may be limited by the radiation toxicity capacity of healthy tissues which lie close to the target tumor volume.
An example of this problem is seen in radiation of the prostate gland, where the sensitivity of the adjacent rectum limited the dose which could be safely prescribed using 2DXRT planning to such an extent that tumor control may not be easily achievable. Prior to the invention of CT, physicians and physicists had limited knowledge about the true radiation dosage delivered to both cancerous and healthy tissue. For this reason, 3-dimensional conformal radiation therapy has become the standard treatment for almost all tumor sites. More recently, other forms of imaging are used, including MRI, PET, SPECT and ultrasound.
Stereotactic radiation
Stereotactic radiation is a specialized type of external beam radiation therapy. It uses focused radiation beams targeting a well-defined tumor using extremely detailed imaging scans. Radiation oncologists perform stereotactic treatments, often with the help of a neurosurgeon for tumors in the brain or spine.
There are two types of stereotactic radiation. Stereotactic radiosurgery (SRS) is when doctors use a single or several stereotactic radiation treatments of the brain or spine. Stereotactic body radiation therapy (SBRT) refers to one or several stereotactic radiation treatments to the body, such as to the lungs.
Some doctors say an advantage of stereotactic treatments is that they deliver the right amount of radiation to the cancer in a shorter amount of time than traditional treatments, which can often take 6 to 11 weeks. In addition, treatments are given with extreme accuracy, which should limit the effect of the radiation on healthy tissues. One problem with stereotactic treatments is that they are only suitable for certain small tumors.
Stereotactic treatments can be confusing because many hospitals call the treatments by the name of the manufacturer rather than calling it SRS or SBRT. Brand names for these treatments include Axesse, Cyberknife, Gamma Knife, Novalis, Primatom, Synergy, X-Knife, TomoTherapy, Trilogy and Truebeam. This list changes as equipment manufacturers continue to develop new, specialized technologies to treat cancers.
Virtual simulation, and 3-dimensional conformal radiation therapy
The planning of radiation therapy treatment has been revolutionized by the ability to delineate tumors and adjacent normal structures in three dimensions using specialized CT and/or MRI scanners and planning software.
Virtual simulation, the most basic form of planning, allows more accurate placement of radiation beams than is possible using conventional X-rays, where soft-tissue structures are often difficult to assess and normal tissues difficult to protect.
An enhancement of virtual simulation is 3-dimensional conformal radiation therapy (3DCRT), in which the profile of each radiation beam is shaped to fit the profile of the target from a beam's eye view (BEV) using a multileaf collimator (MLC) and a variable number of beams. When the treatment volume conforms to the shape of the tumor, the relative toxicity of radiation to the surrounding normal tissues is reduced, allowing a higher dose of radiation to be delivered to the tumor than conventional techniques would allow.
Intensity-modulated radiation therapy (IMRT)
Intensity-modulated radiation therapy (IMRT) is an advanced type of high-precision radiation that is the next generation of 3DCRT. IMRT also improves the ability to conform the treatment volume to concave tumor shapes, for example when the tumor is wrapped around a vulnerable structure such as the spinal cord or a major organ or blood vessel. Computer-controlled X-ray accelerators distribute precise radiation doses to malignant tumors or specific areas within the tumor. The pattern of radiation delivery is determined using highly tailored computing applications to perform optimization and treatment simulation (treatment planning). The radiation dose is made to conform to the 3-D shape of the tumor by controlling, or modulating, the radiation beam's intensity. The radiation dose intensity is elevated near the gross tumor volume while radiation among the neighboring normal tissues is decreased or avoided completely. This results in better tumor targeting, lessened side effects, and improved treatment outcomes compared with 3DCRT.
3DCRT is still used extensively for many body sites but the use of IMRT is growing in more complicated body sites such as CNS, head and neck, prostate, breast, and lung. Unfortunately, IMRT is limited by its need for additional time from experienced medical personnel. This is because physicians must manually delineate the tumors one CT image at a time through the entire disease site which can take much longer than 3DCRT preparation. Then, medical physicists and dosimetrists must be engaged to create a viable treatment plan. Also, the IMRT technology has only been used commercially since the late 1990s even at the most advanced cancer centers, so radiation oncologists who did not learn it as part of their residency programs must find additional sources of education before implementing IMRT.
Proof of improved survival benefit from either of these two techniques over conventional radiation therapy (2DXRT) is growing for many tumor sites, but the ability to reduce toxicity is generally accepted. This is particularly the case for head and neck cancers in a series of pivotal trials performed by Professor Christopher Nutting of the Royal Marsden Hospital. Both techniques enable dose escalation, potentially increasing usefulness. There has been some concern, particularly with IMRT, about increased exposure of normal tissue to radiation and the consequent potential for secondary malignancy. Overconfidence in the accuracy of imaging may increase the chance of missing lesions that are invisible on the planning scans (and therefore not included in the treatment plan) or that move between or during a treatment (for example, due to respiration or inadequate patient immobilization). New techniques are being developed to better control this uncertainty – for example, real-time imaging combined with real-time adjustment of the therapeutic beams. This new technology is called image-guided radiation therapy or four-dimensional radiation therapy.
Another technique is the real-time tracking and localization of one or more small electric devices implanted inside or close to the tumor. There are various types of medical implantable devices that are used for this purpose. The device can be a magnetic transponder which senses the magnetic field generated by several transmitting coils, and then transmits the measurements back to the positioning system to determine the location. The implantable device can also be a small wireless transmitter sending out an RF signal, which is then received by a sensor array and used for localization and real-time tracking of the tumor position.
A well-studied issue with IMRT is the "tongue and groove effect", which results in unwanted underdosing due to irradiating through extended tongues and grooves of overlapping MLC (multileaf collimator) leaves. While solutions to this issue have been developed, which either reduce the TG effect to negligible amounts or remove it completely, they depend upon the method of IMRT being used and some of them carry costs of their own. Some texts distinguish "tongue and groove error" from "tongue or groove error", according to whether both sides or one side of the aperture is occluded.
Volumetric modulated arc therapy (VMAT)
Volumetric modulated arc therapy (VMAT) is a radiation technique introduced in 2007 which can achieve highly conformal dose distributions on target volume coverage and sparing of normal tissues. The distinguishing feature of this technique is that it modifies three parameters during the treatment. VMAT delivers radiation by rotating the gantry (usually through one or more 360° arcs), changing the speed and shape of the beam with a multileaf collimator (MLC) (a "sliding window" system of movement), and varying the fluence output rate (dose rate) of the medical linear accelerator. VMAT has an advantage in patient treatment, compared with conventional static-field intensity-modulated radiotherapy (IMRT), of reduced radiation delivery times. Comparisons between VMAT and conventional IMRT for their sparing of healthy tissues and organs at risk (OAR) depend upon the cancer type. In the treatment of nasopharyngeal, oropharyngeal and hypopharyngeal carcinomas, VMAT provides equivalent or better protection of the organs at risk. In the treatment of prostate cancer the OAR protection result is mixed, with some studies favoring VMAT and others favoring IMRT.
Temporally feathered radiation therapy (TFRT)
Temporally feathered radiation therapy (TFRT) is a radiation technique introduced in 2018 which aims to use the inherent non-linearities in normal tissue repair to allow for sparing of these tissues without affecting the dose delivered to the tumor. The application of this technique, which has yet to be automated, has been described carefully to enhance the ability of departments to perform it, and in 2021 it was reported as feasible in a small clinical trial, though its efficacy has yet to be formally studied.
Automated planning
Automated treatment planning has become an integrated part of radiotherapy treatment planning. There are, in general, two approaches to automated planning. 1) Knowledge-based planning, where the treatment planning system has a library of high-quality plans from which it can predict the dose-volume histograms of the target and the organs at risk. 2) The other approach is commonly called protocol-based planning, where the treatment planning system tries to mimic an experienced treatment planner and, through an iterative process, evaluates the plan quality on the basis of the protocol.
Particle therapy
In particle therapy (proton therapy being one example), energetic ionizing particles (protons or carbon ions) are directed at the target tumor. The dose increases while the particle penetrates the tissue, up to a maximum (the Bragg peak) that occurs near the end of the particle's range, and it then drops to (almost) zero. The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue.
Auger therapy
Auger therapy (AT) makes use of a very high dose of ionizing radiation in situ that provides molecular modifications at an atomic scale. AT differs from conventional radiation therapy in several respects: it neither relies upon radioactive nuclei to cause cellular radiation damage at a cellular dimension, nor engages multiple external pencil beams from different directions that zero in to deliver a dose to the targeted area with reduced dose outside the targeted tissue/organ locations. Instead, the in situ delivery of a very high dose at the molecular level using AT aims for in situ molecular modifications involving molecular breakages and molecular re-arrangements, such as a change of stacking structures as well as of cellular metabolic functions related to those molecular structures.
Motion compensation
In many types of external beam radiotherapy, motion can negatively impact the treatment delivery by moving target tissue out of, or other healthy tissue into, the intended beam path. Some form of patient immobilisation is common, to prevent large movements of the body during treatment; however, this cannot prevent all motion, for example that resulting from breathing. Several techniques have been developed to account for motion like this. Deep inspiration breath-hold (DIBH) is commonly used for breast treatments where it is important to avoid irradiating the heart. In DIBH the patient holds their breath after breathing in to provide a stable position for the treatment beam to be turned on. This can be done automatically using an external monitoring system such as a spirometer or a camera and markers. The same monitoring techniques, as well as 4DCT imaging, can also be used for respiratory-gated treatment, where the patient breathes freely and the beam is only engaged at certain points in the breathing cycle. Other techniques include using 4DCT imaging to plan treatments with margins that account for motion, and active movement of the treatment couch, or beam, to follow motion.
Contact X-ray brachytherapy
Contact X-ray brachytherapy (also called "CXB", "electronic brachytherapy" or the "Papillon technique") is a type of radiation therapy using low-energy (50 kVp) kilovoltage X-rays applied directly to the tumor to treat rectal cancer. The process involves first performing an endoscopic examination to identify the tumor in the rectum, then inserting the treatment applicator through the anus into the rectum and placing it against the cancerous tissue. Finally, a treatment tube is inserted into the applicator to deliver high doses of X-rays (30 Gy) directly onto the tumor, given three times at two-week intervals over a four-week period. It is typically used for treating early rectal cancer in patients who may not be candidates for surgery. A 2015 NICE review found the main side effects to be bleeding, which occurred in about 38% of cases, and radiation-induced ulcers, which occurred in 27% of cases.
Brachytherapy (sealed source radiotherapy)
Brachytherapy is delivered by placing radiation source(s) inside or next to the area requiring treatment. Brachytherapy is commonly used as an effective treatment for cervical, prostate, breast, and skin cancer and can also be used to treat tumors in many other body sites.
In brachytherapy, radiation sources are precisely placed directly at the site of the cancerous tumor. This means that the irradiation only affects a very localized area – exposure to radiation of healthy tissues further away from the sources is reduced. These characteristics of brachytherapy provide advantages over external beam radiation therapy – the tumor can be treated with very high doses of localized radiation, whilst reducing the probability of unnecessary damage to surrounding healthy tissues. A course of brachytherapy can often be completed in less time than other radiation therapy techniques. This can help reduce the chance of surviving cancer cells dividing and growing in the intervals between each radiation therapy dose.
As one example of the localized nature of breast brachytherapy, the SAVI device delivers the radiation dose through multiple catheters, each of which can be individually controlled. This approach decreases the exposure of healthy tissue and resulting side effects, compared both to external beam radiation therapy and older methods of breast brachytherapy.
Radionuclide therapy
Radionuclide therapy (also known as systemic radioisotope therapy, radiopharmaceutical therapy, or molecular radiotherapy) is a form of targeted therapy. Targeting can be due to the chemical properties of the isotope, such as radioiodine, which is specifically absorbed by the thyroid gland a thousandfold better than by other bodily organs. Targeting can also be achieved by attaching the radioisotope to another molecule or antibody to guide it to the target tissue. The radioisotopes are delivered through infusion (into the bloodstream) or ingestion. Examples are the infusion of metaiodobenzylguanidine (MIBG) to treat neuroblastoma, of oral iodine-131 to treat thyroid cancer or thyrotoxicosis, and of hormone-bound lutetium-177 and yttrium-90 to treat neuroendocrine tumors (peptide receptor radionuclide therapy).
Another example is the injection of radioactive yttrium-90 or holmium-166 microspheres into the hepatic artery to radioembolize liver tumors or liver metastases. These microspheres are used for the treatment approach known as selective internal radiation therapy. The microspheres are approximately 30 μm in diameter (about one-third the diameter of a human hair) and are delivered directly into the artery supplying blood to the tumors. These treatments begin by guiding a catheter up through the femoral artery in the leg, navigating to the desired target site and administering treatment. The blood feeding the tumor will carry the microspheres directly to the tumor, enabling a more selective approach than traditional systemic chemotherapy. There are currently three different kinds of microspheres: SIR-Spheres, TheraSphere and QuiremSpheres.
A major use of systemic radioisotope therapy is in the treatment of bone metastasis from cancer. The radioisotopes travel selectively to areas of damaged bone, and spare normal undamaged bone. Isotopes commonly used in the treatment of bone metastasis are radium-223, strontium-89 and samarium (153Sm) lexidronam.
In 2002, the United States Food and Drug Administration (FDA) approved ibritumomab tiuxetan (Zevalin), which is an anti-CD20 monoclonal antibody conjugated to yttrium-90.
In 2003, the FDA approved the tositumomab/iodine (131I) tositumomab regimen (Bexxar), which is a combination of an iodine-131 labelled and an unlabelled anti-CD20 monoclonal antibody.
These medications were the first agents of what is known as radioimmunotherapy, and they were approved for the treatment of refractory non-Hodgkin's lymphoma.
Intraoperative radiotherapy
Intraoperative radiation therapy (IORT) is applying therapeutic levels of radiation to a target area, such as a cancer tumor, while the area is exposed during surgery.
Rationale
The rationale for IORT is to deliver a high dose of radiation precisely to the targeted area with minimal exposure of surrounding tissues which are displaced or shielded during the IORT. Conventional radiation techniques such as external beam radiotherapy (EBRT) following surgical removal of the tumor have several drawbacks: The tumor bed where the highest dose should be applied is frequently missed due to the complex localization of the wound cavity even when modern radiotherapy planning is used. Additionally, the usual delay between the surgical removal of the tumor and EBRT may allow a repopulation of the tumor cells. These potentially harmful effects can be avoided by delivering the radiation more precisely to the targeted tissues leading to immediate sterilization of residual tumor cells. Another aspect is that wound fluid has a stimulating effect on tumor cells. IORT was found to inhibit the stimulating effects of wound fluid.
History
Medicine has used radiation therapy as a treatment for cancer for more than 100 years, with its earliest roots traced from the discovery of X-rays in 1895 by Wilhelm Röntgen. Emil Grubbe of Chicago was possibly the first American physician to use X-rays to treat cancer, beginning in 1896.
The field of radiation therapy began to grow in the early 1900s largely due to the groundbreaking work of Nobel Prize–winning scientist Marie Curie (1867–1934), who discovered the radioactive elements polonium and radium in 1898. This began a new era in medical treatment and research. Through the 1920s the hazards of radiation exposure were not understood, and little protection was used. Radium was believed to have wide curative powers and radiotherapy was applied to many diseases.
Prior to World War 2, the only practical sources of radiation for radiotherapy were radium, its "emanation", radon gas, and the X-ray tube. External beam radiotherapy (teletherapy) began at the turn of the century with relatively low voltage (<150 kV) X-ray machines. It was found that while superficial tumors could be treated with low voltage X-rays, more penetrating, higher energy beams were required to reach tumors inside the body, requiring higher voltages. Orthovoltage X-rays, which used tube voltages of 200-500 kV, began to be used during the 1920s. To reach the most deeply buried tumors without exposing intervening skin and tissue to dangerous radiation doses required rays with energies of 1 MV or above, called "megavolt" radiation. Producing megavolt X-rays required voltages on the X-ray tube of 3 to 5 million volts, which required huge expensive installations. Megavoltage X-ray units were first built in the late 1930s but because of cost were limited to a few institutions. One of the first, installed at St. Bartholomew's hospital, London in 1937 and used until 1960, used a 30 foot long X-ray tube and weighed 10 tons. Radium produced megavolt gamma rays, but was extremely rare and expensive due to its low occurrence in ores. In 1937 the entire world supply of radium for radiotherapy was 50 grams, valued at £800,000, or $50 million in 2005 dollars.
The invention of the nuclear reactor in the Manhattan Project during World War 2 made possible the production of artificial radioisotopes for radiotherapy. Cobalt therapy, teletherapy machines using megavolt gamma rays emitted by cobalt-60, a radioisotope produced by irradiating ordinary cobalt metal in a reactor, revolutionized the field between the 1950s and the early 1980s. Cobalt machines were relatively cheap, robust and simple to use, although due to its 5.27 year half-life the cobalt had to be replaced about every 5 years.
Medical linear particle accelerators, developed since the 1940s, began replacing X-ray and cobalt units in the 1980s and these older therapies are now declining. The first medical linear accelerator was used at the Hammersmith Hospital in London in 1953. Linear accelerators can produce higher energies, have more collimated beams, and do not produce radioactive waste with its attendant disposal problems like radioisotope therapies.
With Godfrey Hounsfield's invention of computed tomography (CT) in 1971, three-dimensional planning became a possibility and created a shift from 2-D to 3-D radiation delivery. CT-based planning allows physicians to more accurately determine the dose distribution using axial tomographic images of the patient's anatomy. The advent of new imaging technologies, including magnetic resonance imaging (MRI) in the 1970s and positron emission tomography (PET) in the 1980s, has moved radiation therapy from 3-D conformal to intensity-modulated radiation therapy (IMRT) and to image-guided radiation therapy tomotherapy. These advances allowed radiation oncologists to better see and target tumors, which have resulted in better treatment outcomes, more organ preservation and fewer side effects.
While access to radiotherapy is improving globally, more than half of patients in low- and middle-income countries still did not have access to the therapy as of 2017.
| Biology and health sciences | Medical procedures | null |
26378 | https://en.wikipedia.org/wiki/Rift%20Valley%20fever | Rift Valley fever | Rift Valley fever (RVF) is a viral disease of humans and livestock that can cause mild to severe symptoms. The mild symptoms may include: fever, muscle pains, and headaches which often last for up to a week. The severe symptoms may include: loss of sight beginning three weeks after the infection, infections of the brain causing severe headaches and confusion, and bleeding together with liver problems which may occur within the first few days. Those who have bleeding have a chance of death as high as 50%.
The disease is caused by the RVF virus. It is spread by either touching infected animal blood, breathing in the air around an infected animal being butchered, drinking raw milk from an infected animal, or the bite of infected mosquitoes. Animals like cows, sheep, goats, and camels may be affected. In these animals it is spread mostly by mosquitoes. It does not appear that one person can infect another. The disease is diagnosed by finding antibodies against the virus or the virus itself in the blood.
Prevention of the disease in humans is accomplished by vaccinating animals against the disease. This must be done before an outbreak occurs because if it is done during an outbreak it may worsen the situation. Stopping the movement of animals during an outbreak may also be useful, as may decreasing mosquito numbers and avoiding their bites. There is a human vaccine; however, as of 2010, it is not widely available. There is no specific treatment and medical efforts are supportive.
Outbreaks of the disease have only occurred in Africa and Arabia. Outbreaks usually occur during periods of increased rain which increases the number of mosquitoes. The disease was first reported among livestock in Rift Valley of Kenya in the early 1900s, and the virus was first isolated in 1931.
Signs and symptoms
In humans, the virus can cause several syndromes. Usually, they have either no symptoms or only a mild illness with fever, headache, muscle pains, and liver abnormalities. In a small percentage of cases (< 2%), the illness can progress to hemorrhagic fever syndrome, meningoencephalitis (inflammation of the brain and tissues lining the brain), or affect the eye. Patients who become ill usually experience fever, generalised weakness, back pain, dizziness, and weight loss at the onset of the illness. Typically, people recover within two to seven days after onset. About 1% of people with the disease die of it. In livestock, the fatality level is significantly higher. Pregnant livestock infected with RVF abort virtually 100% of foetuses. An epizootic (animal disease epidemic) of RVF is usually first indicated by a wave of unexplained abortions.
Other signs in livestock include vomiting and diarrhea, respiratory disease, fever, lethargy, anorexia, and sudden death in young animals.
Cause
Virology
The virus belongs to the Bunyavirales order. This is an order of enveloped negative single-stranded RNA viruses. All Bunyaviruses have an outer lipid envelope with two glycoproteins—G(N) and G(C)—required for cell entry. They deliver their genome into the host-cell cytoplasm by fusing their envelope with an endosomal membrane.
The virus' G(C) protein has a class II membrane fusion protein architecture similar to that found in flaviviruses and alphaviruses. This structural similarity suggests that there may be a common origin for these viral families.
The virus' 11.5 kb tripartite genome is composed of single-stranded RNA. As a Phlebovirus, it has an ambisense genome. Its L and M segments are negative-sense, but its S segment is ambisense. These three genome segments code for six major proteins: L protein (viral polymerase), the two glycoproteins G(N) and G(C), the nucleocapsid N protein, and the nonstructural NSs and NSm proteins.
Transmission
The virus is transmitted through mosquito vectors, as well as through contact with the tissue of infected animals. Two species—Culex tritaeniorhynchus and Aedes vexans—are known to transmit the virus. Other potential vectors include Aedes caspius, Aedes mcintosh, Aedes ochraceus, Culex pipiens, Culex antennatus, Culex perexiguus, Culex zombaensis and Culex quinquefasciatus. Contact with infected tissue is considered to be the main source of human infections. The virus has been isolated from two bat species: the Peter's epauletted fruit bat (Micropteropus pusillus) and the aba roundleaf bat (Hipposideros abae), which are believed to be reservoirs for the virus.
Pathogenesis
Although many components of the RVFV's RNA play an important role in the virus' pathology, the nonstructural protein encoded on the S segment (NSs) is the only component that has been found to directly affect the host. NSs antagonizes the host interferon (IFN) antiviral response. IFNs are essential for the immune system to fight off viral infections in a host. This inhibitory mechanism is believed to work through several routes. The first is competitive inhibition of the formation of the transcription factor: NSs interacts with and binds to a subunit of this transcription factor that is needed by RNA polymerase I and II. This interaction causes competitive inhibition with another transcription factor component and prevents the assembly of the transcription factor complex, which results in suppression of the host antiviral response. Transcription suppression is believed to be a second mechanism of this inhibitory process. It occurs when a region of NSs interacts with and binds to the host protein SAP30, forming a complex. This complex reduces histone acetylation, which is needed for transcriptional activation of the IFN promoter, so IFN expression is obstructed. Lastly, NSs is also known to affect the regular activity of the double-stranded RNA-dependent protein kinase R, a protein involved in cellular antiviral responses in the host. When RVFV gains access to the host's DNA, NSs forms a filamentous structure in the nucleus. This allows the virus to interact with specific areas of the host's DNA relating to segregation defects and induction of chromosome continuity. This increases host infectivity and decreases the host's antiviral response.
Diagnosis
Diagnosis relies on viral isolation from tissues, or serological testing with an ELISA. Other methods of diagnosis include nucleic acid testing (NAT), cell culture, and IgM antibody assays. As of September 2016, the Kenya Medical Research Institute (KEMRI) had developed a product called Immunoline, designed to diagnose the disease in humans much faster than previous methods.
Prevention
A person's chances of becoming infected can be reduced by taking measures to decrease contact with blood, body fluids, or tissues of infected animals and protect against mosquitoes and other bloodsucking insects. The use of mosquito repellents and bed nets are two effective methods. For persons working with animals in RVF-endemic areas, wearing protective equipment to avoid any exposure to blood or tissues of animals that may potentially be infected is an important protective measure. Potentially, establishing environmental monitoring and case surveillance systems may aid in the prediction and control of future RVF outbreaks.
No vaccines are currently available for humans. While a vaccine has been developed for humans, it has only been used experimentally for scientific personnel in high-risk environments. Trials of several vaccines, such as NDBR-103 and TSI-GSD 200, are ongoing. Different types of vaccines for veterinary use are available. The killed vaccines are impractical in routine animal field vaccination because of the need for multiple injections. Live vaccines require a single injection but are known to cause birth defects and abortions in sheep and induce only low-level protection in cattle. The live-attenuated vaccine, MP-12, has demonstrated promising results in laboratory trials in domesticated animals, but more research is needed before the vaccine can be used in the field. The live-attenuated clone 13 vaccine was recently registered and used in South Africa. Alternative vaccines using molecular recombinant constructs are in development and show promising results.
A vaccine has been conditionally approved for use in animals in the US. It has been shown that knockout of the NSs and NSm nonstructural proteins of this virus produces an effective vaccine in sheep as well.
Epidemiology
RVF outbreaks occur across sub-Saharan Africa, with outbreaks occurring elsewhere infrequently. Outbreaks of this disease usually correspond with the warm phases of the El Niño/Southern Oscillation. During this time there is an increase in rainfall, flooding, and greenness of the vegetation index, which leads to an increase in mosquito vectors. RVFV can be transmitted vertically in mosquitoes, meaning that the virus can be passed from the mother to her offspring. During dry conditions, the virus can remain viable for many years in the egg. Mosquitoes lay their eggs in water, where they eventually hatch. As water is essential for mosquito eggs to hatch, rainfall and flooding cause an increase in the mosquito population and an increased potential for the virus.
The first documented outbreak was identified in Kenya in 1931, in sheep, cattle, and humans; another severe outbreak in the country in 1950–1951 involved 100,000 deaths in livestock and an unrecorded number of humans with fever. An outbreak occurred in South Africa in 1974–1976, with more than 500,000 infected animals and the first deaths in humans. In Egypt in 1977–78, an estimated 200,000 people were infected and there were at least 594 deaths. In Kenya in 1998, the virus killed more than 400 people. Since then, there have been outbreaks in Saudi Arabia and Yemen (2000), East Africa (2006–2007), Sudan (2007), South Africa (2010), Uganda (2016), Kenya (2018), Mayotte (2018–2019) and Kenya again (2020–2021); as of 2022, an outbreak is ongoing in Burundi.
Biological weapon
Rift Valley fever was one of more than a dozen agents that the United States researched as potential biological weapons before the nation suspended its biological weapons program in 1969.
Research
The disease is one of several identified by WHO as a likely cause of a future epidemic in a new plan developed after the Ebola epidemic for urgent research and development toward new diagnostic tests, vaccines and medicines.
| Biology and health sciences | Viral diseases | Health |
26390 | https://en.wikipedia.org/wiki/Riemann%20integral | Riemann integral | In the branch of mathematics known as real analysis, the Riemann integral, created by Bernhard Riemann, was the first rigorous definition of the integral of a function on an interval. It was presented to the faculty at the University of Göttingen in 1854, but not published in a journal until 1868. For many functions and practical applications, the Riemann integral can be evaluated by the fundamental theorem of calculus or approximated by numerical integration, or simulated using Monte Carlo integration.
Overview
Imagine you have a curve on a graph, and the curve stays above the x-axis between two points, a and b. The area under that curve, from a to b, is what we want to figure out. This area can be described as the set of all points (x, y) on the graph that follow these rules: a ≤ x ≤ b (the x-coordinate is between a and b) and 0 < y < f(x) (the y-coordinate is between 0 and the height of the curve f(x)). Mathematically, this region can be expressed in set-builder notation as \{(x, y) : a \le x \le b,\ 0 < y < f(x)\}.
To measure this area, we use a Riemann integral, which is written as \int_a^b f(x)\,dx.
This notation means “the integral of f(x) from a to b,” and it represents the exact area under the curve f(x) and above the x-axis, between x = a and x = b.
The idea behind the Riemann integral is to break the area into small, simple shapes (like rectangles), add up their areas, and then make the rectangles smaller and smaller to get a better estimate. In the end, when the rectangles are infinitely small, the sum gives the exact area, which is what the integral represents.
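To make the rectangle idea concrete, here is a small C-style sketch, not part of the original article; the integrand x², the interval [0, 1], the value of n, and the helper name riemann_midpoint_sum are choices made only for this illustration. It adds up the areas of n rectangles whose heights are sampled at the midpoints of equal sub-intervals; as n grows, the sum approaches the exact value 1/3.
#include <stdio.h>
// example integrand, assumed only for this sketch
static double f(double x) { return x * x; }
// midpoint Riemann sum of f over [a, b] with n equal sub-intervals
static double riemann_midpoint_sum(double a, double b, int n) {
    double dx = (b - a) / n;             // width of every rectangle
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double t = a + (i + 0.5) * dx;   // tag chosen at the midpoint
        sum += f(t) * dx;                // signed area of one rectangle
    }
    return sum;
}
int main(void) {
    for (int n = 10; n <= 10000; n *= 10)
        printf("n = %5d  sum = %.6f\n", n, riemann_midpoint_sum(0.0, 1.0, n));
    return 0;   // the printed sums approach 1/3
}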
If the curve dips below the x-axis, the integral gives a signed area. This means the integral adds the part above the x-axis as positive and subtracts the part below the x-axis as negative. So, the result of \int_a^b f(x)\,dx can be positive, negative, or zero, depending on how much of the curve is above or below the x-axis.
Definition
Partitions of an interval
A partition of an interval [a, b] is a finite sequence of numbers of the form a = x_0 < x_1 < x_2 < \dots < x_n = b.
Each [x_i, x_{i+1}] is called a sub-interval of the partition. The mesh or norm of a partition is defined to be the length of the longest sub-interval, that is, \max_i (x_{i+1} - x_i).
A tagged partition P(x, t) of an interval [a, b] is a partition together with a choice of a sample point within each sub-interval: that is, numbers t_0, \dots, t_{n-1} with t_i \in [x_i, x_{i+1}] for each i. The mesh of a tagged partition is the same as that of an ordinary partition.
Suppose that two partitions P(x, t) and Q(y, s) are both partitions of the interval [a, b]. We say that Q(y, s) is a refinement of P(x, t) if for each integer i, with i ∈ [0, n], there exists an integer r(i) such that x_i = y_{r(i)} and such that t_i = s_j for some j with j ∈ [r(i), r(i+1)). That is, a refinement breaks up some of the sub-intervals and adds sample points where necessary, "refining" the accuracy of the partition.
We can turn the set of all tagged partitions into a directed set by saying that one tagged partition is greater than or equal to another if the former is a refinement of the latter.
Riemann sum
Let f be a real-valued function defined on the interval [a, b]. The Riemann sum of f with respect to the tagged partition x_0, \dots, x_n together with t_0, \dots, t_{n-1} is \sum_{i=0}^{n-1} f(t_i)\,(x_{i+1} - x_i).
Each term in the sum is the product of the value of the function at a given point and the length of an interval. Consequently, each term represents the (signed) area of a rectangle with height f(t_i) and width x_{i+1} - x_i. The Riemann sum is the (signed) area of all the rectangles.
Closely related concepts are the lower and upper Darboux sums. These are similar to Riemann sums, but the tags are replaced by the infimum and supremum (respectively) of f on each sub-interval: the lower sum \sum_{i=0}^{n-1} \big(\inf_{x \in [x_i, x_{i+1}]} f(x)\big)(x_{i+1} - x_i) and the upper sum \sum_{i=0}^{n-1} \big(\sup_{x \in [x_i, x_{i+1}]} f(x)\big)(x_{i+1} - x_i).
If f is continuous, then the lower and upper Darboux sums for an untagged partition are equal to the Riemann sum for that partition, where the tags are chosen to be the minimum or maximum (respectively) of f on each subinterval. (When f is discontinuous on a subinterval, there may not be a tag that achieves the infimum or supremum on that subinterval.) The Darboux integral, which is similar to the Riemann integral but based on Darboux sums, is equivalent to the Riemann integral.
Riemann integral
Loosely speaking, the Riemann integral is the limit of the Riemann sums of a function as the partitions get finer. If the limit exists then the function is said to be integrable (or more specifically Riemann-integrable). The Riemann sum can be made as close as desired to the Riemann integral by making the partition fine enough.
One important requirement is that the mesh of the partitions must become smaller and smaller, so that it has the limit zero. If this were not so, then we would not be getting a good approximation to the function on certain subintervals. In fact, this is enough to define an integral. To be specific, we say that the Riemann integral of f exists and equals s if the following condition holds:
For all ε > 0, there exists δ > 0 such that for any tagged partition x_0, \dots, x_n and t_0, \dots, t_{n-1} whose mesh is less than δ, we have \left| \left( \sum_{i=0}^{n-1} f(t_i)\,(x_{i+1} - x_i) \right) - s \right| < \varepsilon.
Unfortunately, this definition is very difficult to use. It would help to develop an equivalent definition of the Riemann integral which is easier to work with. We develop this definition now, with a proof of equivalence following. Our new definition says that the Riemann integral of f exists and equals s if the following condition holds:
For all ε > 0, there exists a tagged partition y_0, \dots, y_m and r_0, \dots, r_{m-1} such that for any tagged partition x_0, \dots, x_n and t_0, \dots, t_{n-1} which is a refinement of y_0, \dots, y_m and r_0, \dots, r_{m-1}, we have \left| \left( \sum_{i=0}^{n-1} f(t_i)\,(x_{i+1} - x_i) \right) - s \right| < \varepsilon.
Both of these mean that eventually, the Riemann sum of f with respect to any partition gets trapped close to s. Since this is true no matter how close we demand the sums be trapped, we say that the Riemann sums converge to s. These definitions are actually a special case of a more general concept, a net.
As we stated earlier, these two definitions are equivalent. In other words, s works in the first definition if and only if s works in the second definition. To show that the first definition implies the second, start with an ε, and choose a δ that satisfies the condition. Choose any tagged partition whose mesh is less than δ. Its Riemann sum is within ε of s, and any refinement of this partition will also have mesh less than δ, so the Riemann sum of the refinement will also be within ε of s.
To show that the second definition implies the first, it is easiest to use the Darboux integral. First, one shows that the second definition is equivalent to the definition of the Darboux integral; for this see the Darboux integral article. Now we will show that a Darboux integrable function satisfies the first definition. Fix , and choose a partition such that the lower and upper Darboux sums with respect to this partition are within of the value of the Darboux integral. Let
If , then is the zero function, which is clearly both Darboux and Riemann integrable with integral zero. Therefore, we will assume that . If , then we choose such that
If , then we choose to be less than one. Choose a tagged partition and with mesh smaller than . We must show that the Riemann sum is within of .
To see this, choose an interval . If this interval is contained within some , then
where and are respectively, the infimum and the supremum of f on . If all intervals had this property, then this would conclude the proof, because each term in the Riemann sum would be bounded by a corresponding term in the Darboux sums, and we chose the Darboux sums to be near . This is the case when , so the proof is finished in that case.
Therefore, we may assume that . In this case, it is possible that one of the is not contained in any . Instead, it may stretch across two of the intervals determined by . (It cannot meet three intervals because is assumed to be smaller than the length of any one interval.) In symbols, it may happen that
(We may assume that all the inequalities are strict because otherwise we are in the previous case by our assumption on the length of .) This can happen at most times.
To handle this case, we will estimate the difference between the Riemann sum and the Darboux sum by subdividing the partition at . The term in the Riemann sum splits into two terms:
Suppose, without loss of generality, that . Then
so this term is bounded by the corresponding term in the Darboux sum for . To bound the other term, notice that
It follows that, for some (indeed any) ,
Since this happens at most times, the distance between the Riemann sum and a Darboux sum is at most . Therefore, the distance between the Riemann sum and is at most .
Examples
Let f be the function which takes the value 1 at every point. Any Riemann sum of f on [0, 1] will have the value 1, therefore the Riemann integral of f on [0, 1] is 1.
Let I_Q be the indicator function of the rational numbers in [0, 1]; that is, I_Q takes the value 1 on rational numbers and 0 on irrational numbers. This function does not have a Riemann integral. To prove this, we will show how to construct tagged partitions whose Riemann sums get arbitrarily close to both zero and one.
To start, let and be a tagged partition (each is between and ). Choose . The have already been chosen, and we can't change the value of at those points. But if we cut the partition into tiny pieces around each , we can minimize the effect of the . Then, by carefully choosing the new tags, we can make the value of the Riemann sum turn out to be within of either zero or one.
Our first step is to cut up the partition. There are of the , and we want their total effect to be less than . If we confine each of them to an interval of length less than , then the contribution of each to the Riemann sum will be at least and at most . This makes the total sum at least zero and at most . So let be a positive number less than . If it happens that two of the are within of each other, choose smaller. If it happens that some is within of some , and is not equal to , choose smaller. Since there are only finitely many and , we can always choose sufficiently small.
Now we add two cuts to the partition for each . One of the cuts will be at , and the other will be at . If one of these leaves the interval [0, 1], then we leave it out. will be the tag corresponding to the subinterval
If is directly on top of one of the , then we let be the tag for both intervals:
We still have to choose tags for the other subintervals. We will choose them in two different ways. The first way is to always choose a rational point, so that the Riemann sum is as large as possible. This will make the value of the Riemann sum at least . The second way is to always choose an irrational point, so that the Riemann sum is as small as possible. This will make the value of the Riemann sum at most .
Since we started from an arbitrary partition and ended up as close as we wanted to either zero or one, it is false to say that we are eventually trapped near some number , so this function is not Riemann integrable. However, it is Lebesgue integrable. In the Lebesgue sense its integral is zero, since the function is zero almost everywhere. But this is a fact that is beyond the reach of the Riemann integral.
There are even worse examples. is equivalent (that is, equal almost everywhere) to a Riemann integrable function, but there are non-Riemann integrable bounded functions which are not equivalent to any Riemann integrable function. For example, let be the Smith–Volterra–Cantor set, and let be its indicator function. Because is not Jordan measurable, is not Riemann integrable. Moreover, no function equivalent to is Riemann integrable: , like , must be zero on a dense set, so as in the previous example, any Riemann sum of has a refinement which is within of 0 for any positive number . But if the Riemann integral of exists, then it must equal the Lebesgue integral of , which is . Therefore, is not Riemann integrable.
Similar concepts
It is popular to define the Riemann integral as the Darboux integral. This is because the Darboux integral is technically simpler and because a function is Riemann-integrable if and only if it is Darboux-integrable.
Some calculus books do not use general tagged partitions, but limit themselves to specific types of tagged partitions. If the type of partition is limited too much, some non-integrable functions may appear to be integrable.
One popular restriction is the use of "left-hand" and "right-hand" Riemann sums. In a left-hand Riemann sum, t_i = x_i for all i, and in a right-hand Riemann sum, t_i = x_{i+1} for all i. Alone this restriction does not impose a problem: we can refine any partition in a way that makes it a left-hand or right-hand sum by subdividing it at each t_i. In more formal language, the set of all left-hand Riemann sums and the set of all right-hand Riemann sums is cofinal in the set of all tagged partitions.
Another popular restriction is the use of regular subdivisions of an interval. For example, the nth regular subdivision of [0, 1] consists of the intervals [0, 1/n], [1/n, 2/n], \dots, [(n-1)/n, 1].
Again, alone this restriction does not impose a problem, but the reasoning required to see this fact is more difficult than in the case of left-hand and right-hand Riemann sums.
However, combining these restrictions, so that one uses only left-hand or right-hand Riemann sums on regularly divided intervals, is dangerous. If a function is known in advance to be Riemann integrable, then this technique will give the correct value of the integral. But under these conditions the indicator function I_Q will appear to be integrable on [0, 1] with integral equal to one: Every endpoint of every subinterval will be a rational number, so the function will always be evaluated at rational numbers, and hence it will appear to always equal one. The problem with this definition becomes apparent when we try to split the integral into two pieces. The following equation ought to hold:
If we use regular subdivisions and left-hand or right-hand Riemann sums, then the two terms on the left are equal to zero, since every endpoint except 0 and 1 will be irrational, but as we have seen the term on the right will equal 1.
As defined above, the Riemann integral avoids this problem by refusing to integrate I_Q. The Lebesgue integral is defined in such a way that all these integrals are 0.
Properties
Linearity
The Riemann integral is a linear transformation; that is, if f and g are Riemann-integrable on [a, b] and α and β are constants, then \int_a^b (\alpha f(x) + \beta g(x))\,dx = \alpha \int_a^b f(x)\,dx + \beta \int_a^b g(x)\,dx.
Because the Riemann integral of a function is a number, this makes the Riemann integral a linear functional on the vector space of Riemann-integrable functions.
Integrability
A bounded function on a compact interval [a, b] is Riemann integrable if and only if it is continuous almost everywhere (the set of its points of discontinuity has measure zero, in the sense of Lebesgue measure). This is the Lebesgue–Vitali theorem (of characterization of the Riemann integrable functions). It was proven independently by Giuseppe Vitali and by Henri Lebesgue in 1907, and uses the notion of measure zero, but makes use of neither Lebesgue's general measure nor integral.
The integrability condition can be proven in various ways, one of which is sketched below.
In particular, any set that is at most countable has Lebesgue measure zero, and thus a bounded function (on a compact interval) with only finitely or countably many discontinuities is Riemann integrable. Another sufficient criterion for Riemann integrability over [a, b], which does not involve the concept of measure, is the existence of a right-hand (or left-hand) limit at every point of [a, b) (or (a, b]).
An indicator function of a bounded set is Riemann-integrable if and only if the set is Jordan measurable. The Riemann integral can be interpreted measure-theoretically as the integral with respect to the Jordan measure.
If a real-valued function is monotone on the interval [a, b] it is Riemann integrable, since its set of discontinuities is at most countable, and therefore of Lebesgue measure zero. If a real-valued function on [a, b] is Riemann integrable, it is Lebesgue integrable. That is, Riemann-integrability is a stronger (meaning more difficult to satisfy) condition than Lebesgue-integrability. The converse does not hold; not all Lebesgue-integrable functions are Riemann integrable.
The Lebesgue–Vitali theorem does not imply that all types of discontinuity carry the same weight in obstructing the Riemann integrability of a real-valued bounded function on [a, b]. In fact, certain discontinuities have absolutely no role on the Riemann integrability of the function—a consequence of the classification of the discontinuities of a function.
If (f_n) is a uniformly convergent sequence on [a, b] with limit f, then Riemann integrability of all f_n implies Riemann integrability of f, and \int_a^b f\,dx = \lim_{n\to\infty} \int_a^b f_n\,dx.
However, the Lebesgue monotone convergence theorem (on a monotone pointwise limit) does not hold for Riemann integrals. Thus, in Riemann integration, taking limits under the integral sign is far more difficult to logically justify than in Lebesgue integration.
Generalizations
It is easy to extend the Riemann integral to functions with values in the Euclidean vector space \mathbb{R}^n for any n. The integral is defined component-wise; in other words, if f = (f_1, \dots, f_n) then \int f = \left( \int f_1, \dots, \int f_n \right).
In particular, since the complex numbers are a real vector space, this allows the integration of complex valued functions.
The Riemann integral is only defined on bounded intervals, and it does not extend well to unbounded intervals. The simplest possible extension is to define such an integral as a limit, in other words, as an improper integral: \int_{-\infty}^{\infty} f(x)\,dx = \lim_{a \to -\infty,\ b \to \infty} \int_a^b f(x)\,dx.
This definition carries with it some subtleties, such as the fact that it is not always equivalent to compute the Cauchy principal value \lim_{a \to \infty} \int_{-a}^{a} f(x)\,dx.
For example, consider the sign function f(x) = \operatorname{sgn}(x), which is 0 at x = 0, 1 for x > 0, and −1 for x < 0. By symmetry, \int_{-a}^{a} f(x)\,dx = 0
always, regardless of a. But there are many ways for the interval of integration to expand to fill the real line, and other ways can produce different results; in other words, the multivariate limit does not always exist. We can compute, for example, \int_{-a}^{2a} f(x)\,dx = a.
In general, this improper Riemann integral is undefined. Even standardizing a way for the interval to approach the real line does not work because it leads to disturbingly counterintuitive results. If we agree (for instance) that the improper integral should always be \lim_{a \to \infty} \int_{-a}^{a} f(x)\,dx,
then the integral of the translation f(x − 1) is −2, so this definition is not invariant under shifts, a highly undesirable property. In fact, not only does this function not have an improper Riemann integral, its Lebesgue integral is also undefined (it equals ∞ − ∞).
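The shift-dependence described above can be checked numerically. The following C-style sketch is not from the article; the midpoint rule, the step count, and the helper names integrate and sgn_shifted are assumptions made only for this illustration. It evaluates both symmetric integrals over [−a, a]: the one of sgn(x) stays at 0, while the one of the translated function sgn(x − 1) settles near −2.
#include <stdio.h>
// the sign function from the text: 0 at 0, 1 for positive x, -1 for negative x
static double sgn(double x) { return (x > 0) - (x < 0); }
static double sgn_shifted(double x) { return sgn(x - 1.0); }   // translated by 1
// midpoint-rule approximation of the integral of g over [lo, hi]
static double integrate(double (*g)(double), double lo, double hi, int n) {
    double dx = (hi - lo) / n, sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += g(lo + (i + 0.5) * dx) * dx;
    return sum;
}
int main(void) {
    for (double a = 10.0; a <= 1000.0; a *= 10.0)
        printf("a = %6.0f   sgn: %7.4f   sgn(x-1): %7.4f\n",
               a, integrate(sgn, -a, a, 400000), integrate(sgn_shifted, -a, a, 400000));
    return 0;   // the first column stays 0, the second approaches -2
}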
Unfortunately, the improper Riemann integral is not powerful enough. The most severe problem is that there are no widely applicable theorems for commuting improper Riemann integrals with limits of functions. In applications such as Fourier series it is important to be able to approximate the integral of a function using integrals of approximations to the function. For proper Riemann integrals, a standard theorem states that if f_n is a sequence of functions that converge uniformly to f on a compact set [a, b], then \lim_{n \to \infty} \int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.
On non-compact intervals such as the real line, this is false. For example, take f_n(x) to be n^{-1} on (0, n] and zero elsewhere. For all n we have: \int f_n\,dx = 1.
The sequence (f_n) converges uniformly to the zero function, and clearly the integral of the zero function is zero. Consequently, \int f\,dx = 0 \ne 1 = \lim_{n\to\infty} \int f_n\,dx.
This demonstrates that for integrals on unbounded intervals, uniform convergence of a function is not strong enough to allow passing a limit through an integral sign. This makes the Riemann integral unworkable in applications (even though the Riemann integral assigns both sides the correct value), because there is no other general criterion for exchanging a limit and a Riemann integral, and without such a criterion it is difficult to approximate integrals by approximating their integrands.
A better route is to abandon the Riemann integral for the Lebesgue integral. The definition of the Lebesgue integral is not obviously a generalization of the Riemann integral, but it is not hard to prove that every Riemann-integrable function is Lebesgue-integrable and that the values of the two integrals agree whenever they are both defined. Moreover, a function f defined on a bounded interval is Riemann-integrable if and only if it is bounded and the set of points where f is discontinuous has Lebesgue measure zero.
An integral which is in fact a direct generalization of the Riemann integral is the Henstock–Kurzweil integral.
Another way of generalizing the Riemann integral is to replace the factors x_{i+1} − x_i in the definition of a Riemann sum by something else; roughly speaking, this gives the interval of integration a different notion of length. This is the approach taken by the Riemann–Stieltjes integral.
In multivariable calculus, the Riemann integrals for functions from \mathbb{R}^n to \mathbb{R} are multiple integrals.
Comparison with other theories of integration
The Riemann integral is unsuitable for many theoretical purposes. Some of the technical deficiencies in Riemann integration can be remedied with the Riemann–Stieltjes integral, and most disappear with the Lebesgue integral, though the latter does not have a satisfactory treatment of improper integrals. The gauge integral is a generalisation of the Lebesgue integral that is at the same time closer to the Riemann integral.
These more general theories allow for the integration of more "jagged" or "highly oscillating" functions whose Riemann integral does not exist; but the theories give the same value as the Riemann integral when it does exist.
In educational settings, the Darboux integral offers a simpler definition that is easier to work with; it can be used to introduce the Riemann integral. The Darboux integral is defined whenever the Riemann integral is, and always gives the same result. Conversely, the gauge integral is a simple but more powerful generalization of the Riemann integral and has led some educators to advocate that it should replace the Riemann integral in introductory calculus courses.
| Mathematics | Integral calculus | null |
26397 | https://en.wikipedia.org/wiki/Red%E2%80%93black%20tree | Red–black tree | In computer science, a red–black tree is a self-balancing binary search tree data structure noted for fast storage and retrieval of ordered information. The nodes in a red–black tree hold an extra "color" bit, often drawn as red and black, which helps ensure that the tree is always approximately balanced.
When the tree is modified, the new tree is rearranged and "repainted" to restore the coloring properties that constrain how unbalanced the tree can become in the worst case. The properties are designed such that this rearranging and recoloring can be performed efficiently.
The (re-)balancing is not perfect, but guarantees searching in O(log n) time, where n is the number of entries in the tree. The insert and delete operations, along with tree rearrangement and recoloring, also execute in O(log n) time.
Tracking the color of each node requires only one bit of information per node because there are only two colors (due to memory alignment present in some programming languages, the real memory consumption may differ). The tree does not contain any other data specific to it being a red–black tree, so its memory footprint is almost identical to that of a classic (uncolored) binary search tree. In some cases, the added bit of information can be stored at no added memory cost.
History
In 1972, Rudolf Bayer invented a data structure that was a special order-4 case of a B-tree. These trees maintained all paths from root to leaf with the same number of nodes, creating perfectly balanced trees. However, they were not binary search trees. Bayer called them a "symmetric binary B-tree" in his paper and later they became popular as 2–3–4 trees or even 2–3 trees.
In a 1978 paper, "A Dichromatic Framework for Balanced Trees", Leonidas J. Guibas and Robert Sedgewick derived the red–black tree from the symmetric binary B-tree. The color "red" was chosen because it was the best-looking color produced by the color laser printer available to the authors while working at Xerox PARC. Another response from Guibas states that it was because of the red and black pens available to them to draw the trees.
In 1993, Arne Andersson introduced the idea of a right leaning tree to simplify insert and delete operations.
In 1999, Chris Okasaki showed how to make the insert operation purely functional. Its balance function needed to take care of only 4 unbalanced cases and one default balanced case.
The original algorithm used 8 unbalanced cases, but Sedgewick reduced that to 6 unbalanced cases. Sedgewick showed that the insert operation can be implemented in just 46 lines of Java code.
In 2008, Sedgewick proposed the left-leaning red–black tree, leveraging Andersson’s idea that simplified the insert and delete operations. Sedgewick originally allowed nodes whose two children are red, making his trees more like 2–3–4 trees, but later the restriction forbidding such nodes was added, making the new trees more like 2–3 trees. Sedgewick implemented the insert algorithm in just 33 lines, significantly shortening his original 46 lines of code.
Terminology
A red–black tree is a special type of binary search tree, used in computer science to organize pieces of comparable data, such as text fragments or numbers (as e.g. the numbers in figures 1 and 2). The nodes carrying keys and/or data are frequently called "internal nodes", but to make this very specific they are also called non-NIL nodes in this article.
The leaf nodes of red–black trees (NIL in figure 1) do not contain keys or data. These "leaves" need not be explicit individuals in computer memory: a NULL pointer can—as in all binary tree data structures—encode the fact that there is no child node at this position in the (parent) node. Nevertheless, by their position in the tree, these objects stand in relations to other nodes that are relevant to the RB structure: such an object may have a parent, a sibling (i.e., the other child of its parent), an uncle, even a nephew node; and it may be a child—but never a parent—of another node.
It is not really necessary to attribute a "color" to these end-of-path objects, because the condition "is NIL or BLACK" is implied by the condition "is NIL" (see also this remark).
Figure 2 shows the conceptually same red–black tree without these NIL leaves. To arrive at the same notion of a path, one must notice that e.g., 3 paths run through the node 1, namely a path through 1left plus 2 added paths through 1right, namely the paths through 6left and 6right. This way, these ends of the paths are also docking points for new nodes to be inserted, fully equivalent to the NIL leaves of figure 1.
Instead, to save a marginal amount of execution time, these (possibly many) NIL leaves may be implemented as pointers to one unique (and black) sentinel node (instead of pointers of value NULL).
As a conclusion, the fact that a child does not exist (is not a true node, does not contain data) can in all occurrences be specified by the very same NULL pointer or as the very same pointer to a sentinel node. Throughout this article, either choice is called NIL node and has the constant value NIL.
The black depth of a node is defined as the number of black nodes from the root to that node (i.e. the number of black ancestors). The black height of a red–black tree is the number of black nodes in any path from the root to the leaves, which, by requirement 4, is constant (alternatively, it could be defined as the black depth of any leaf node).
The black height of a node is the black height of the subtree rooted by it. In this article, the black height of a NIL node shall be set to 0, because its subtree is empty as suggested by figure 2, and its tree height is also 0.
Properties
In addition to the requirements imposed on a binary search tree, the following must be satisfied by a red–black tree:
Every node is either red or black.
All NIL nodes (figure 1) are considered black.
A red node does not have a red child.
Every path from a given node to any of its descendant NIL nodes goes through the same number of black nodes.
(Conclusion) If a node N has exactly one child, the child must be red, because if it were black, its NIL descendants would sit at a different black depth than N's NIL child, violating requirement 4.
Some authors, e.g. Cormen & al., claim "the root is black" as fifth requirement; but not Mehlhorn & Sanders or Sedgewick & Wayne. Since the root can always be changed from red to black, this rule has little effect on analysis.
This article also omits it, because it slightly disturbs the recursive algorithms and proofs.
As an example, every perfect binary tree that consists only of black nodes is a red–black tree.
The read-only operations, such as search or tree traversal, do not affect any of the requirements. In contrast, the modifying operations insert and delete easily maintain requirements 1 and 2, but with respect to the other requirements some extra effort must be made, to avoid introducing a violation of requirement 3, called a red-violation, or of requirement 4, called a black-violation.
The requirements enforce a critical property of red–black trees: the path from the root to the farthest leaf is no more than twice as long as the path from the root to the nearest leaf. The result is that the tree is height-balanced. Since operations such as inserting, deleting, and finding values require worst-case time proportional to the height of the tree, this upper bound on the height allows red–black trees to be efficient in the worst case, namely logarithmic in the number n of entries, i.e. in O(log n) time, which is not the case for ordinary binary search trees. For a mathematical proof see section Proof of bounds.
Red–black trees, like all binary search trees, allow quite efficient sequential access (e.g. in-order traversal, that is: in the order Left–Root–Right) of their elements. But they also support asymptotically optimal direct access via a traversal from root to leaf, resulting in O(log n) search time.
Analogy to 2–3–4 trees
Red–black trees are similar in structure to 2–3–4 trees, which are B-trees of order 4. In 2–3–4 trees, each node can contain between 1 and 3 values and have between 2 and 4 children. These 2–3–4 nodes correspond to black node – red children groups in red-black trees, as shown in figure 3. It is not a 1-to-1 correspondence, because 3-nodes have two equivalent representations: the red child may lie either to the left or right. The left-leaning red-black tree variant makes this relationship exactly 1-to-1, by only allowing the left child representation. Since every 2–3–4 node has a corresponding black node, invariant 4 of red-black trees is equivalent to saying that the leaves of a 2–3–4 tree all lie at the same level.
Despite structural similarities, operations on red–black trees are more economical than on B-trees. B-trees require management of vectors of variable length, whereas red–black trees are simply binary trees.
Applications and related data structures
Red–black trees offer worst-case guarantees for insertion time, deletion time, and search time. Not only does this make them valuable in time-sensitive applications such as real-time applications, but it makes them valuable building blocks in other data structures that provide worst-case guarantees. For example, many data structures used in computational geometry are based on red–black trees, and the Completely Fair Scheduler and epoll system call of the Linux kernel use red–black trees.
The AVL tree is another structure supporting search, insertion, and removal. AVL trees can be colored red–black, and thus are a subset of red–black trees. The worst-case height of AVL trees is 0.720 times the worst-case height of red–black trees, so AVL trees are more rigidly balanced. The performance measurements of Ben Pfaff with realistic test cases in 79 runs find AVL to RB ratios between 0.677 and 1.077, median at 0.947, and geometric mean 0.910. The performance of WAVL trees lies in between that of AVL trees and red–black trees.
Red–black trees are also particularly valuable in functional programming, where they are one of the most common persistent data structures, used to construct associative arrays and sets that can retain previous versions after mutations. The persistent version of red–black trees requires O(log n) space for each insertion or deletion, in addition to O(log n) time.
For every 2–3–4 tree, there are corresponding red–black trees with data elements in the same order. The insertion and deletion operations on 2–3–4 trees are also equivalent to color-flipping and rotations in red–black trees. This makes 2–3–4 trees an important tool for understanding the logic behind red–black trees, and this is why many introductory algorithm texts introduce 2–3–4 trees just before red–black trees, even though 2–3–4 trees are not often used in practice.
In 2008, Sedgewick introduced a simpler version of the red–black tree called the left-leaning red–black tree by eliminating a previously unspecified degree of freedom in the implementation. The LLRB maintains an additional invariant that all red links must lean left except during inserts and deletes. Red–black trees can be made isometric to either 2–3 trees, or 2–3–4 trees, for any sequence of operations. The 2–3–4 tree isometry was described in 1978 by Sedgewick. With 2–3–4 trees, the isometry is resolved by a "color flip," corresponding to a split, in which the red color of two children nodes leaves the children and moves to the parent node.
The original description of the tango tree, a type of tree optimised for fast searches, specifically uses red–black trees as part of its data structure.
As of Java 8, the HashMap has been modified such that instead of using a LinkedList to store different elements with colliding hashcodes, a red–black tree is used. This results in the improvement of time complexity of searching such an element from O(n) to O(log n), where n is the number of elements with colliding hashcodes.
Operations
The read-only operations, such as search or tree traversal, on a red–black tree require no modification from those used for binary search trees, because every red–black tree is a special case of a simple binary search tree. However, the immediate result of an insertion or removal may violate the properties of a red–black tree, the restoration of which is called rebalancing so that red–black trees become self-balancing.
It requires in the worst case a small number, O(log n) in Big O notation, where n is the number of objects in the tree, and on average or amortized O(1), a constant number, of color changes (which are very quick in practice); and no more than three tree rotations (two for insertion).
If the example implementation below is not suitable, other implementations with explanations may be found in Ben Pfaff’s annotated C library GNU libavl (v2.0.3 as of June 2019).
The details of the insert and removal operations will be demonstrated with example C++ code, which uses the type definitions, macros below, and the helper function for rotations:
#include <assert.h> // for the assert() used in RotateDirRoot below
// Basic type definitions:
enum color_t { BLACK, RED };
struct RBnode { // node of red–black tree
RBnode* parent; // == NIL if root of the tree
RBnode* child[2]; // == NIL if child is empty
// The index is:
// LEFT := 0, if (key < parent->key)
// RIGHT := 1, if (key > parent->key)
enum color_t color;
int key;
};
#define NIL NULL // null pointer or pointer to sentinel node
#define LEFT 0
#define RIGHT 1
#define left child[LEFT]
#define right child[RIGHT]
struct RBtree { // red–black tree
RBnode* root; // == NIL if tree is empty
};
// Get the child direction (∈ { LEFT, RIGHT })
// of the non-root non-NIL RBnode* N:
#define childDir(N) ( N == (N->parent)->right ? RIGHT : LEFT )
RBnode* RotateDirRoot(
RBtree* T, // red–black tree
RBnode* P, // root of subtree (may be the root of T)
int dir) { // dir ∈ { LEFT, RIGHT }
RBnode* G = P->parent;
RBnode* S = P->child[1-dir];
RBnode* C;
assert(S != NIL); // pointer to true node required
C = S->child[dir];
P->child[1-dir] = C; if (C != NIL) C->parent = P;
S->child[ dir] = P; P->parent = S;
S->parent = G;
if (G != NULL)
G->child[ P == G->right ? RIGHT : LEFT ] = S;
else
T->root = S;
return S; // new root of subtree
}
#define RotateDir(N,dir) RotateDirRoot(T,N,dir)
#define RotateLeft(N) RotateDirRoot(T,N,LEFT)
#define RotateRight(N) RotateDirRoot(T,N,RIGHT)
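As a reading aid, and not part of the original example code, the following sketch uses the definitions above to check requirements 3 and 4 from the Properties section while computing the black height defined in the Terminology section. The function name and the convention of returning -1 on a violation are assumptions made only for this sketch.
// Returns the black height of the subtree rooted in N,
// or -1 if a red-violation (requirement 3) or a
// black-violation (requirement 4) occurs in that subtree.
int BlackHeightOrViolation(RBnode* N) {
    if (N == NIL)
        return 0; // the black height of a NIL subtree is 0 (see Terminology)
    if (N->color == RED &&
        ((N->left != NIL && N->left->color == RED) ||
         (N->right != NIL && N->right->color == RED)))
        return -1; // a red node with a red child: red-violation
    int lh = BlackHeightOrViolation(N->left);
    int rh = BlackHeightOrViolation(N->right);
    if (lh < 0 || rh < 0 || lh != rh)
        return -1; // unequal black heights below N: black-violation
    return lh + (N->color == BLACK ? 1 : 0);
}
A tree T then satisfies requirements 3 and 4 exactly when BlackHeightOrViolation(T->root) is non-negative; the returned number is the black height of the tree.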
| Mathematics | Data structures and types | null |
26404 | https://en.wikipedia.org/wiki/Risk%20management | Risk management | Risk management is the identification, evaluation, and prioritization of risks, followed by the minimization, monitoring, and control of the impact or probability of those risks occurring. Risks can come from various sources (i.e., threats) including uncertainty in international markets, political instability, dangers of project failures (at any phase in design, development, production, or sustaining of life-cycles), legal liabilities, credit risk, accidents, natural causes and disasters, deliberate attack from an adversary, or events of uncertain or unpredictable root-cause.
There are two types of events, viz. risks and opportunities. Negative events can be classified as risks while positive events are classified as opportunities. Risk management standards have been developed by various institutions, including the Project Management Institute, the National Institute of Standards and Technology, actuarial societies, and International Organization for Standardization. Methods, definitions and goals vary widely according to whether the risk management method is in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety. Certain risk management standards have been criticized for having no measurable improvement on risk, whereas the confidence in estimates and decisions seems to increase.
Strategies to manage threats (uncertainties with negative consequences) typically include avoiding the threat, reducing the negative effect or probability of the threat, transferring all or part of the threat to another party, and even retaining some or all of the potential or actual consequences of a particular threat. The opposite of these strategies can be used to respond to opportunities (uncertain future states with benefits).
As a professional role, a risk manager will "oversee the organization's comprehensive insurance and risk management program, assessing and identifying risks that could impede the reputation, safety, security, or financial success of the organization", and then develop plans to minimize and / or mitigate any negative (financial) outcomes. Risk Analysts support the technical side of the organization's risk management approach: once risk data has been compiled and evaluated, analysts share their findings with their managers, who use those insights to decide among possible solutions.
| Technology | Basics | null |
26441 | https://en.wikipedia.org/wiki/Red%20panda | Red panda | The red panda (Ailurus fulgens), also known as the lesser panda, is a small mammal native to the eastern Himalayas and southwestern China. It has dense reddish-brown fur with a black belly and legs, white-lined ears, a mostly white muzzle and a ringed tail. Its head-to-body length is with a tail, and it weighs between . It is well adapted to climbing due to its flexible joints and curved semi-retractile claws.
The red panda was formally described in 1825. The two currently recognised subspecies, the Himalayan and the Chinese red panda, genetically diverged about 250,000 years ago. The red panda's place on the evolutionary tree has been debated, but modern genetic evidence places it in close affinity with raccoons, weasels, and skunks. It is not closely related to the giant panda, which is a bear, though both possess elongated wrist bones or "false thumbs" used for grasping bamboo. The evolutionary lineage of the red panda (Ailuridae) stretches back around , as indicated by extinct fossil relatives found in Eurasia and North America.
The red panda inhabits coniferous forests as well as temperate broadleaf and mixed forests, favouring steep slopes with dense bamboo cover close to water sources. It is solitary and largely arboreal. It feeds mainly on bamboo shoots and leaves, but also on fruits and blossoms. Red pandas mate in early spring, with the females giving birth to litters of up to four cubs in summer. It is threatened by poaching as well as destruction and fragmentation of habitat due to deforestation. The species has been listed as Endangered on the IUCN Red List since 2015. It is protected in all range countries.
Community-based conservation programmes have been initiated in Nepal, Bhutan and northeastern India; in China, it benefits from nature conservation projects. Regional captive breeding programmes for the red panda have been established in zoos around the world. It is featured in animated movies, video games, comic books and as the namesake of companies and music bands.
Etymology
The origin of the name panda is uncertain, but one of the most likely theories is that it derived from the Nepali word "ponya". The word or means "ball of the foot" and "claws". The Nepali phrase "nigalya ponya" has been translated as "bamboo-footed" and is thought to be the red panda's Nepali name; in English, it was simply called panda, and was the only animal known under this name for more than 40 years; it became known as the red panda or lesser panda to distinguish it from the giant panda, which was formally described and named in 1869.
The genus name Ailurus is adopted from the Ancient Greek word meaning 'cat'. The specific epithet fulgens is Latin for 'shining, bright'.
Taxonomy
The red panda was described and named in 1825 by Frederic Cuvier, who gave it its current scientific name Ailurus fulgens. Cuvier's description was based on zoological specimens, including skin, paws, jawbones and teeth "from the mountains north of India", as well as an account by Alfred Duvaucel. The red panda was described earlier by Thomas Hardwicke in 1821, but his paper was only published in 1827. In 1902, Oldfield Thomas described a skull of a male red panda specimen under the name Ailurus fulgens styani in honour of Frederick William Styan who had collected this specimen in Sichuan.
Subspecies and species
The modern red panda is the only recognised species in the genus Ailurus. It is traditionally divided into two subspecies: the Himalayan red panda (A. f. fulgens) and the Chinese red panda (A. f. styani). The Himalayan subspecies has a straighter profile, a lighter coloured forehead and ochre-tipped hairs on the lower back and rump. The Chinese subspecies has a more curved forehead and sloping snout, a darker coat with a less white face and more contrast between the tail rings.
In 2020, results of a genetic analysis of red panda samples showed that the red panda populations in the Himalayas and China were separated about 250,000 years ago. The researchers suggested that the two subspecies should be treated as distinct species. Red pandas in southeastern Tibet and northern Myanmar were found to be part of styani, while those of southern Tibet were of fulgens in the strict sense. DNA sequencing of 132 red panda faecal samples collected in Northeast India and China also showed two distinct clusters indicating that the Siang River constitutes the boundary between the Himalayan and Chinese red pandas. They probably diverged due to glaciation events on the southern Tibetan Plateau in the Pleistocene.
Phylogeny
The placement of the red panda on the evolutionary tree has been debated. In the early 20th century, various scientists placed it in the family Procyonidae with raccoons and their allies. At the time, most prominent biologists also considered the red panda to be related to the giant panda, which would eventually be found to be a bear. A 1982 study examined the similarities and differences in the skull between the red panda and the giant panda, other bears and procyonids, and placed the species in its own family Ailuridae. The author of the study considered the red panda to be more closely related to bears. A 1995 mitochondrial DNA analysis revealed that the red panda has close affinities with procyonids. Further genetic studies in 2005, 2018 and 2021 have placed the red panda within the clade Musteloidea, which also includes Procyonidae, Mustelidae (weasels and relatives) and Mephitidae (skunks and relatives).
Fossil record
The family Ailuridae appears to have evolved in Europe in either the Late Oligocene or Early Miocene, about . The earliest member Amphictis is known from its skull and may have been around the same size as the modern species. Its dentition consists of sharp premolars and carnassials (P4 and m1) and molars adapted for grinding (M1, M2 and m2), suggesting that it had a generalised carnivorous diet. Its placement within Ailuridae is based on the grooves on the side of its canine teeth. Other early or basal ailurids include Alopecocyon and Simocyon, whose fossils have been found throughout Eurasia and North America dating from the Middle Miocene, the latter of which survived into the Early Pliocene. Both have similar teeth to Amphictis and thus had a similar diet. The puma-sized Simocyon was likely a tree-climber and shared a "false thumb"—an extended wrist bone—with the modern species, suggesting the appendage was an adaptation to arboreal locomotion and not to feed on bamboo.
Later and more advanced ailurids are classified in the subfamily Ailurinae and are known as the "true" red pandas. These animals were smaller and more adapted for an omnivorous or herbivorous diet. The earliest known true panda is Magerictis from the Middle Miocene of Spain and known only from one tooth, a lower second molar. The tooth shows both ancestral and new characteristics having a relatively low and simple crown but also a lengthened crushing surface with developed tooth cusps like later species. Later ailurines include Pristinailurus bristoli which lived in eastern North America from the late Miocene to the Early Pliocene and species of the genus Parailurus which first appear in Early Pliocene Europe, spreading across Eurasia into North America. These animals are classified as a sister taxon to the lineage of the modern red panda. In contrast to the herbivorous modern species, these ancient pandas were likely omnivores, with highly cusped molars and sharp premolars.
The earliest fossil record of the modern genus Ailurus dates no earlier than the Pleistocene and appears to have been limited to Asia. The modern red panda's lineage became adapted for a specialised bamboo diet, having molar-like premolars and more elevated cusps. The false thumb would secondarily gain a function in feeding.
Genomics
Analysis of 53 red panda samples from Sichuan and Yunnan showed a high level of genetic diversity. The full genome of the red panda was sequenced in 2017. Researchers have compared it to the genome of the giant panda to learn the genetics of convergent evolution, as both species have false thumbs and are adapted for a specialised bamboo diet despite having the digestive system of a carnivore. Both pandas show modifications to certain limb development genes (DYNC2H1 and PCNT), which may play roles in the development of the thumbs. In switching from a carnivorous to a herbivorous diet, both species have reactivated taste receptor genes used for detecting bitterness, though the specific genes are different.
Description
The red panda's coat is mainly red or orange-brown with a black belly and legs. The muzzle, cheeks, brows and inner ear margins are mostly white while the bushy tail has red and buff ring patterns and a dark brown tip. The colouration appears to serve as camouflage in habitat with red moss and white lichen-covered trees. The guard hairs are longer and rougher while the dense undercoat is fluffier with shorter hairs. The guard hairs on the back have a circular cross-section and are long. It has moderately long whiskers around the mouth, lower jaw and chin. The hair on the soles of the paws allows the animal to walk in snow.
The red panda has a relatively small head, though proportionally larger than in similarly sized raccoons, with a reduced snout and triangular ears, and limbs of nearly equal length. It has a head-body length of with a tail. The Himalayan red panda is recorded to weigh , while the Chinese red panda weighs for females and for males. It has five curved digits on each foot, each with curved semi-retractile claws that aid in climbing. The pelvis and hindlimbs have flexible joints, adaptations for an arboreal quadrupedal lifestyle. While not prehensile, the tail helps the animal balance while climbing.
The forepaws possess a "false thumb", which is an extension of a wrist bone, the radial sesamoid found in many carnivorans. This thumb allows the animal to grip onto bamboo stalks and both the digits and wrist bones are highly flexible. The red panda shares this feature with the giant panda, which has a larger sesamoid that is more compressed at the sides. In addition, the red panda's sesamoid has a more sunken tip while the giant panda's curves in the middle. These features give the giant panda more developed dexterity.
The red panda's skull is wide, and its lower jaw is robust. However, because it eats leaves and stems, which are not as tough, it has smaller chewing muscles than the giant panda. The digestive system of the red panda is only 4.2 times its body length, with a simple stomach, no noticeable divide between the ileum and colon, and no caecum.
Both sexes have paired anal glands that emit a secretion consisting of long-chain fatty acids, cholesterol, squalene and 2-Piperidinone; the latter is the most odoriferous compound and is perceived by humans as having an ammoniacal or pepper-like odour.
Distribution and habitat
The red panda inhabits Nepal, the states of Sikkim, West Bengal and Arunachal Pradesh in India, Bhutan, southern Tibet, northern Myanmar and China's Sichuan and Yunnan provinces. The global potential habitat of the red panda has been estimated to comprise at most; this habitat is located in the temperate climate zone of the Himalayas with a mean annual temperature range of . Throughout this range, it has been recorded at elevations of .
In Nepal, it lives in six protected area complexes within the Eastern Himalayan broadleaf forests ecoregion. The westernmost records to date were obtained in three community forests in Kalikot District in 2019. Panchthar and Ilam Districts represent its easternmost range in the country, where its habitat in forest patches is surrounded by villages, livestock pastures and roads. The metapopulation in protected areas and wildlife corridors in the Kangchenjunga landscape of Sikkim and northern West Bengal is partly connected through old-growth forests outside protected areas. Forests in this landscape are dominated by Himalayan oaks (Quercus lamellosa and Q. semecarpifolia), Himalayan birch, Himalayan fir, Himalayan maple with bamboo, Rhododendron and some black juniper shrub growing in the understoreys. Records in Bhutan, Arunachal Pradesh's Pangchen Valley, West Kameng and Shi Yomi districts indicate that it frequents habitats with Yushania and Thamnocalamus bamboo, medium-sized Rhododendron, whitebeam and chinquapin trees. In China, it inhabits the Hengduan Mountains subalpine conifer forests and Qionglai-Minshan conifer forests in the Hengduan, Qionglai, Xiaoxiang, Daxiangling and Liangshan Mountains in Sichuan. In the adjacent Yunnan province, it was recorded only in the northwestern montane part.
The red panda prefers microhabitats within of water sources. Fallen logs and tree stumps are important habitat features, as they facilitate access to bamboo leaves. Red pandas have been recorded to use steep slopes of more than 20° and stumps exceeding a diameter of . Red pandas observed in Phrumsengla National Park used foremost easterly and southerly slopes with a mean slope of 34° and a canopy cover of 66 per cent that were overgrown with bamboo about in height. In Dafengding Nature Reserve, it prefers steep south-facing slopes in winter and inhabits forests with bamboo tall. In Gaoligongshan National Nature Reserve, it inhabits mixed coniferous forest with a dense canopy cover of more than 75 per cent, steep slopes and a density of at least . In some parts of China, the red panda coexists with the giant panda. In Fengtongzhai and Yele National Nature Reserves, red panda microhabitat is characterised by steep slopes with lots of bamboo stems, shrubs, fallen logs and stumps, whereas the giant panda prefers gentler slopes with taller but lesser amounts of bamboo and less habitat features overall. Such niche separation lessens competition between the two bamboo-eating species.
Behaviour and ecology
The red panda is difficult to observe in the wild, and most studies on its behaviour have taken place in captivity. The red panda appears to be both nocturnal and crepuscular, sleeping in between periods of activity at night. It typically rests or sleeps in trees or other elevated spaces, stretched out prone on a branch with legs dangling when it is hot, and curled up with its hindlimb over the face when it is cold. It is adapted for climbing and descends to the ground head-first with the hindfeet holding on to the middle of the tree trunk. It moves quickly on the ground by trotting or bounding.
Social spacing
Adult pandas are generally solitary and territorial. Individuals mark their home range or territorial boundaries with urine, faeces and secretions from the anal and surrounding glands. Scent-marking is usually done on the ground, with males marking more often and for longer periods. In China's Wolong National Nature Reserve, the home range of a radio-collared female was , while that of a male was . A one-year-long monitoring study of ten red pandas in eastern Nepal showed that the four males had median home ranges of and the six females of within a forest cover of at least . The females travelled per day and the males . In the mating season from January to March, adults travelled a mean of and subadults a mean of . They all had larger home ranges in areas with low forest cover and reduced their activity in areas that were disturbed by people, livestock and dogs.
Diet and feeding
The red panda is largely herbivorous and feeds primarily on bamboo, mainly the genera Phyllostachys, Sinarundinaria, Thamnocalamus and Chimonobambusa. It also feeds on fruits, blossoms, acorns, eggs, birds and small mammals. Bamboo leaves may be the most abundant food item year-round and the only food they can access during winter. In Wolong National Nature Reserve, leaves of the bamboo species Bashania fangiana were found in nearly 94 per cent of analysed droppings, and its shoots were found in 59 per cent of the droppings found in June.
The diet of red pandas monitored at three sites in Singalila National Park for two years consisted of 40–83 per cent Yushania maling and 51–91.2 per cent Thamnocalamus spathiflorus bamboos supplemented by bamboo shoots, Actinidia strigosa fruits and seasonal berries. In this national park, red panda droppings also contained remains of silky rose and bramble fruit species in the summer season, Actinidia callosa in the post-monsoon season, and Merrilliopanax alpinus, the whitebeam species Sorbus cuspidata and tree rhododendron in both seasons. Droppings were found with 23 plant species including the stone oak species Lithocarpus pachyphyllus, Campbell's magnolia, the chinquapin species Castanopsis tribuloides, Himalayan birch, Litsea sericea and the holly species Ilex fragilis. In Nepal's Rara National Park, Thamnocalamus was found in all the droppings sampled, both before and after the monsoon. Its summer diet in Dhorpatan Hunting Reserve also includes some lichens and barberries. In Bhutan's Jigme Dorji National Park, red panda faeces found in the fruiting season contained seeds of Himalayan ivy.
The red panda grabs food with one of its front paws and usually eats sitting down or standing. When foraging for bamboo, it grabs the plant by the stem and pulls it down towards its jaws. It bites the leaves with the side of the cheek teeth and then shears, chews and swallows. Smaller food like blossoms, berries and small leaves are eaten differently, being clipped by the incisors. Having the gastrointestinal tract of a carnivore, the red panda cannot properly digest bamboo, which passes through its gut in two to four hours. Hence, it must consume large amounts of the most nutritious plant matter. It eats over of fresh leaves or of fresh shoots in a day with crude proteins and fats being the most easily digested. Digestion is highest in summer and fall but lowest in winter, and is easier for shoots than leaves. The red panda's metabolic rate is comparable to other mammals of its size, despite its poor diet. The red panda digests almost a third of dry matter, which is more efficient than the giant panda digesting 17 per cent. Microbes in the gut may aid in its processing of bamboo; the microbiota community in the red panda is less diverse than in other mammals.
Communication
At least seven different vocalisations have been recorded from the red panda, comprising growls, barks, squeals, hoots, bleats, grunts and twitters. Growling, barking, grunting and squealing are produced during fights and aggressive chasing. Hooting is made in response to being approached by another individual. Bleating is associated with scent-marking and sniffing. Males may bleat during mating, while females twitter. During both play fighting and aggressive fighting, individuals curve their backs and tails while slowly moving their heads up and down. They then turn their heads while jaw-clapping, move their heads laterally and lift a forepaw to strike. They stand on their hind legs, raise the forelimbs above the head and then pounce. Two red pandas may "stare" at each other from a distance.
Reproduction and parenting
Red pandas are long-day breeders, reproducing after the winter solstice as daylight grows longer. Mating thus takes place from January to March, with births occurring from May to August. Reproduction is delayed by six months for captive pandas in the southern hemisphere. Oestrus lasts a day, and females can enter oestrus multiple times a season, but it is not known how long the intervals between each cycle last.
As the reproductive season begins, males and females interact more, and will rest, move, and feed near each other. An oestrous female will spend more time marking and males will inspect her anogenital region. Receptive females make tail-flicks and position themselves in a lordosis pose, with the front lowered to the ground and the spine curved. Copulation involves the male mounting the female from behind and on top, though face-to-face matings as well as belly-to-back matings while lying on the sides also occur. The male will grab the female by the sides with his front paws instead of biting her neck. Intromission is 2–25 minutes long, and the couple groom each other between each bout.
Gestation lasts about 131 days. Prior to giving birth, the female selects a denning site, such as a tree, log or stump hollow or rock crevice, and builds a nest using material from nearby, such as twigs, sticks, branches, bark bits, leaves, grass and moss. Litters typically consist of one to four cubs that are born fully furred but blind. They are entirely dependent on their mother for the first three to four months until they first leave the nest. They nurse for their first five months. The bond between mother and offspring lasts until the next mating season. Cubs are fully grown at around 12 months and at around 18 months they reach sexual maturity. Two radio-collared cubs in eastern Nepal separated from their mothers at the age of 7–8 months and left their birth areas three weeks later. They reached new home ranges within 26–42 days and became residents after exploring them for 42–44 days.
Mortality and diseases
The red panda's lifespan in captivity reaches 14 years. They have been recorded falling prey to leopards in the wild. Faecal samples of red panda collected in Nepal contained parasitic protozoa, amoebozoans, roundworms, trematodes and tapeworms. Roundworms, tapeworms and coccidia were also found in red panda scat collected in Rara and Langtang National Parks. Fourteen red pandas at the Knoxville Zoo suffered from severe ringworm, so the tails of two were amputated. Chagas disease was reported as the cause of death of a red panda kept in a Kansas zoo. Amdoparvovirus was detected in the scat of six red pandas in the Sacramento Zoo. Eight captive red pandas in a Chinese zoo suffered from shortness of breath and fever shortly before they died of pneumonia; autopsy revealed that they had antibodies to the protozoans Toxoplasma gondii and Sarcocystis species indicating that they were intermediate hosts. A captive red panda in the Chengdu Research Base of Giant Panda Breeding died of unknown reasons; an autopsy showed that its kidneys, liver and lungs were damaged by a bacterial infection caused by Escherichia coli.
Threats
The red panda is primarily threatened by the destruction and fragmentation of its habitat, the causes of which include increasing human population, deforestation, the unlawful taking of non-wood forest material and disturbances by herders and livestock. Trampling by livestock inhibits bamboo growth, and clearcutting decreases the ability of some bamboo species to regenerate. The cut lumber stock in Sichuan alone reached in 1958–1960, and around of red panda habitat were logged between the mid-1970s and late 1990s. Throughout Nepal, the red panda habitat outside protected areas is negatively affected by solid waste, livestock trails and herding stations, and people collecting firewood and medicinal plants. Threats identified in Nepal's Lamjung District include grazing by livestock during seasonal transhumance, human-made forest fires and the collection of bamboo as cattle fodder in winter. Vehicular traffic is a significant barrier to red panda movement between habitat patches.
Poaching is also a major threat. In Nepal, 121 red panda skins were confiscated between 2008 and 2018. Traps meant for other wildlife have been recorded killing red pandas. In Myanmar, the red panda is threatened by hunting using guns and traps; since roads to the border with China were built starting in the early 2000s, red panda skins and live animals have been traded and smuggled across the border. In southwestern China, the red panda is hunted for its fur, especially for the highly valued bushy tails, from which hats are produced. The red panda population in China has been reported to have decreased by 40 per cent over the last 50 years, and the populations in western Himalayan areas are considered to be smaller. Between 2005 and 2017, 35 live and seven dead red pandas were confiscated in Sichuan, and several traders were sentenced to 3–12 years of imprisonment. A month-long survey of 65 shops in nine Chinese counties in the spring of 2017 revealed only one in Yunnan offered hats made of red panda skins, and red panda tails were offered in an online forum.
Conservation
The red panda is listed in CITES Appendix I and protected in all range countries; hunting is illegal. It has been listed as Endangered on the IUCN Red List since 2008 because the global population is estimated at 10,000 individuals, with a decreasing population trend. A large extent of its habitat is part of protected areas.
A red panda anti-poaching unit and community-based monitoring have been established in Langtang National Park. Members of Community Forest User Groups also protect and monitor red panda habitats in other parts of Nepal. Community outreach programs have been initiated in eastern Nepal using information boards, radio broadcasting and the annual International Red Panda Day in September; several schools endorsed a red panda conservation manual as part of their curricula.
Since 2010, community-based conservation programmes have been initiated in 10 districts in Nepal that aim to help villagers reduce their dependence on natural resources through improved herding and food processing practices and alternative income possibilities. The Nepali government ratified a five-year Red Panda Conservation Action Plan in 2019. From 2016 to 2019, of high-elevation rangeland in Merak, Bhutan, was restored and fenced in cooperation with 120 herder families to protect the red panda forest habitat and improve communal land. Villagers in Arunachal Pradesh established two community conservation areas to protect the red panda habitat from disturbance and exploitation of forest resources. China has initiated several projects to protect its environment and wildlife, including Grain for Green, The Natural Forest Protection Project and the National Wildlife/Natural Reserve Construction Project. For the last project, the red panda is not listed as a key species for protection but may benefit from the protection of the giant panda and golden snub-nosed monkey, with which it overlaps in range.
In captivity
The London Zoo received two red pandas in 1869 and 1876, the first of which was caught in Darjeeling. The Calcutta Zoo received a live red panda in 1877, the Philadelphia Zoo in 1906, and Artis and Cologne Zoos in 1908. In 1908, the first captive red panda cubs were born in an Indian zoo. In 1940, the San Diego Zoo imported four red pandas from India that had been caught in Nepal; their first litter was born in 1941. Cubs that were born later were sent to other zoos; by 1969, about 250 red pandas had been exhibited in zoos. The Taronga Conservation Society started keeping red pandas in 1977.
In 1978, a breed registry, the International Red Panda Studbook, was set up, followed by the Red Panda European Endangered Species Programme in 1985. Members of international zoos ratified a global master plan for the captive breeding of the red panda in 1993. By late 2015, 219 red pandas lived in 42 zoos in Japan. The Padmaja Naidu Himalayan Zoological Park participates in the Red Panda Species Survival Plan and kept about 25 red pandas by 2016. By the end of 2019, 182 European zoos kept 407 red pandas. Regional captive breeding programmes have also been established in North American, Australasian and South African zoos.
Cultural significance
The red panda's role in the culture and folklore of local people is limited. A drawing of a red panda exists on a 13th-century Chinese scroll. In Nepal's Taplejung District, red panda claws are used for treating epilepsy; its skin is used in rituals for treating sick people, making hats, scarecrows and decorating houses. In western Nepal, Magar shamans use its skin and fur in their ritual dresses and believe that it protects against evil spirits. People in central Bhutan consider red pandas to be reincarnations of Buddhist monks. Some tribal people in northeast India and the Yi people believe that it brings good luck to wear red panda tails or hats made of its fur. In China, the fur is used for local cultural ceremonies. At weddings, the bridegroom traditionally carries the hide. Hats made of red panda tails are also used by local newlyweds as a "good-luck charm".
The red panda was recognised as the state animal of Sikkim in the early 1990s and was the mascot of the Darjeeling Tea Festival. It has been featured on stamps and coins issued by several red panda range states. Anthropomorphic red pandas feature in animated movies and TV series such as The White Snake Enchantress, Bamboo Bears, Barbie as the Island Princess, DreamWorks' Kung Fu Panda franchise, Aggretsuko and Disney/Pixar's Turning Red, and in several video games and comic books. It is the namesake of the Firefox browser and has been used as the namesake of music bands and of companies. Its appearance has been used for plush toys, t-shirts, postcards and other items.
| Biology and health sciences | Other carnivora | Animals |
26446 | https://en.wikipedia.org/wiki/Recreational%20mathematics | Recreational mathematics | Recreational mathematics is mathematics carried out for recreation (entertainment) rather than as a strictly research-and-application-based professional activity or as a part of a student's formal education. Although it is not necessarily limited to being an endeavor for amateurs, many topics in this field require no knowledge of advanced mathematics. Recreational mathematics involves mathematical puzzles and games, often appealing to children and untrained adults and inspiring their further study of the subject.
The Mathematical Association of America (MAA) includes recreational mathematics as one of its seventeen Special Interest Groups, commenting:
Mathematical competitions (such as those sponsored by mathematical associations) are also categorized under recreational mathematics.
Topics
Some of the more well-known topics in recreational mathematics are Rubik's Cubes, magic squares, fractals, logic puzzles and mathematical chess problems, but this area of mathematics includes the aesthetics and culture of mathematics, peculiar or amusing stories and coincidences about mathematics, and the personal lives of mathematicians.
Mathematical games
Mathematical games are multiplayer games whose rules, strategies, and outcomes can be studied and explained using mathematics. The players of the game may not need to use explicit mathematics in order to play mathematical games. For example, Mancala is studied in the mathematical field of combinatorial game theory, but no mathematics is necessary in order to play it.
Mathematical puzzles
Mathematical puzzles require mathematics in order to solve them. They have specific rules, as do multiplayer games, but mathematical puzzles do not usually involve competition between two or more players. Instead, in order to solve such a puzzle, the solver must find a solution that satisfies the given conditions.
Logic puzzles and classical ciphers are common examples of mathematical puzzles. Cellular automata and fractals are also considered mathematical puzzles, even though the solver only interacts with them by providing a set of initial conditions.
As they often include or require game-like features or thinking, mathematical puzzles are sometimes also called mathematical games.
Mathemagics
Magic tricks based on mathematical principles can produce self-working but surprising effects. For instance, a mathemagician might use the combinatorial properties of a deck of playing cards to guess a volunteer's selected card, or Hamming codes to identify whether a volunteer is lying.
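The article does not describe a specific trick, but a standard self-working example of this kind is the 21-card trick, in which a spectator's card is found purely by dealing into three piles three times and always placing the indicated pile in the middle. The sketch below is an illustrative Python simulation only (the function names are invented for this example and do not come from the article's sources).

```python
def deal_into_columns(cards):
    """Deal 21 cards one at a time, left to right, into three columns of seven."""
    return [cards[i::3] for i in range(3)]

def gather(columns, chosen_col):
    """Stack the three columns with the indicated column in the middle."""
    others = [c for i, c in enumerate(columns) if i != chosen_col]
    return others[0] + columns[chosen_col] + others[1]

def twenty_one_card_trick(cards, secret):
    """Return the secret card's final position after three deal-and-gather rounds."""
    for _ in range(3):
        cols = deal_into_columns(cards)
        chosen_col = next(i for i, c in enumerate(cols) if secret in c)
        cards = gather(cols, chosen_col)
    return cards.index(secret)

deck = list(range(21))                          # stand-ins for 21 playing cards
print(twenty_one_card_trick(deck, secret=13))   # always 10, i.e. the 11th card
```

Whatever card is chosen, three rounds of this procedure leave it in the 11th position, which is what allows the performer to "divine" it without ever seeing it.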
Other activities
Other curiosities and pastimes of non-trivial mathematical interest include:
patterns in juggling
the sometimes profound algorithmic and geometrical characteristics of origami
patterns and process in creating string figures such as Cat's cradles, etc.
fractal-generating software
Online blogs, podcasts, and YouTube channels
There are many blogs and audio or video series devoted to recreational mathematics. Among the notable are the following:
Cut-the-knot by Alexander Bogomolny
Futility Closet by Greg Ross
Mathologer by Burkard Polster
The videos of Vi Hart
Stand-Up Maths by Matt Parker
Numberphile by Brady Haran
Publications
The journal Eureka published by the mathematical society of the University of Cambridge is one of the oldest publications in recreational mathematics. It has been published 60 times since 1939 and authors have included many famous mathematicians and scientists such as Martin Gardner, John Conway, Roger Penrose, Ian Stewart, Timothy Gowers, Stephen Hawking and Paul Dirac.
The Journal of Recreational Mathematics was the largest publication on this topic from its founding in 1968 until 2014 when it ceased publication.
Mathematical Games (1956 to 1981) was the title of a long-running Scientific American column on recreational mathematics by Martin Gardner. He inspired several generations of mathematicians and scientists through his interest in mathematical recreations. "Mathematical Games" was succeeded by 25 "Metamagical Themas" columns (1981-1983), a similarly distinguished, but shorter-running, column by Douglas Hofstadter, then by 78 "Mathematical Recreations" and "Computer Recreations" columns (1984 to 1991) by A. K. Dewdney, then by 96 "Mathematical Recreations" columns (1991 to 2001) by Ian Stewart, and most recently "Puzzling Adventures" by Dennis Shasha.
The Recreational Mathematics Magazine, published by the Ludus Association, is electronic and semiannual, and focuses on results that provide amusing, witty but nonetheless original and scientifically profound mathematical nuggets. Issues are published at the exact moments of the equinoxes.
People
Prominent practitioners and advocates of recreational mathematics have included professional and amateur mathematicians:
| Mathematics | Basics | null |
26463 | https://en.wikipedia.org/wiki/Rigel | Rigel | Rigel is a blue supergiant star in the constellation of Orion. It has the Bayer designation β Orionis, which is Latinized to Beta Orionis and abbreviated Beta Ori or β Ori. Rigel is the brightest and most massive component, and the eponym, of a star system of at least four stars that appear as a single blue-white point of light to the naked eye. This system is located at a distance of approximately from the Sun.
A star of spectral type B8 Ia, Rigel is 120,000 times as luminous as the Sun, and is 18 to 24 times as massive, depending on the method and assumptions used. Its radius is more than seventy times that of the Sun, and its surface temperature is . Due to its stellar wind, Rigel's mass-loss is estimated to be ten million times that of the Sun. With an estimated age of seven to nine million years, Rigel has exhausted its core hydrogen fuel, expanded, and cooled to become a supergiant. It is expected to end its life as a type II supernova, leaving a neutron star or a black hole as a final remnant, depending on the initial mass of the star.
Rigel varies slightly in brightness, its apparent magnitude ranging from 0.05 to 0.18. It is classified as an Alpha Cygni variable due to the amplitude and periodicity of its brightness variation, as well as its spectral type. Its intrinsic variability is caused by pulsations in its unstable atmosphere. Rigel is generally the seventh-brightest star in the night sky and the brightest star in Orion, though it is occasionally outshone by Betelgeuse, which varies over a larger range.
A triple-star system is separated from Rigel by an angle of . It has an apparent magnitude of 6.7, making it 1/400th as bright as Rigel. Two stars in the system can be seen by large telescopes, and the brighter of the two is a spectroscopic binary. These three stars are all blue-white main-sequence stars, each three to four times as massive as the Sun. Rigel and the triple system orbit a common center of gravity with a period estimated to be 24,000 years. The inner stars of the triple system orbit each other every 10 days, and the outer star orbits the inner pair every 63 years. A much fainter star, separated from Rigel and the others by nearly an arc minute, may be part of the same star system.
Nomenclature
In 2016, the International Astronomical Union (IAU) included the name "Rigel" in the IAU Catalog of Star Names. According to the IAU, this proper name applies only to the primary component A of the Rigel system. The system is listed variously in historical astronomical catalogs as or For simplicity, Rigel's companions are referred to as Rigel B, C, and D; the IAU describes such names as "useful nicknames" that are "unofficial". In modern comprehensive catalogs, the whole multiple star system is known as or
The designation of Rigel as β Orionis (Latinized to beta Orionis) was made by Johann Bayer in 1603. The "beta" designation is usually given to the second-brightest star in each constellation, but Rigel is almost always brighter than α Orionis (Betelgeuse). Astronomer J.B. Kaler speculated that Bayer assigned letters during a rare period when variable star Betelgeuse temporarily outshone Rigel, resulting in Betelgeuse being designated "alpha" and Rigel designated "beta". However, closer examination of Bayer's method shows that he did not strictly order the stars by brightness, but instead grouped them first by magnitude, then by declination. Rigel and Betelgeuse were both classed as first magnitude, and in Orion the stars of each class appear to have been ordered north to south.
Rigel has many other stellar designations taken from various catalogs, including the Flamsteed designation (19 Ori), the Bright Star Catalogue entry HR 1713, and the Henry Draper Catalogue number HD 34085. These designations frequently appear in the scientific literature, but rarely in popular writing.
Rigel is listed in the General Catalogue of Variable Stars, but its familiar Bayer designation is used instead of a separate variable star designation.
Observation
Rigel is an intrinsic variable star with an apparent magnitude ranging from 0.05 to 0.18. It is typically the seventh-brightest star in the celestial sphere, excluding the Sun, although occasionally fainter than Betelgeuse. Rigel appears slightly blue-white and has a B-V color index of −0.06. It contrasts strongly with reddish Betelgeuse.
Culminating every year at midnight on 12 December, and at 9:00pm on 24 January, Rigel is visible on winter evenings in the Northern Hemisphere and on summer evenings in the Southern Hemisphere. In the Southern Hemisphere, Rigel is the first bright star of Orion visible as the constellation rises. Correspondingly, it is also the first star of Orion to set in most of the Northern Hemisphere. The star is a vertex of the "Winter Hexagon", an asterism that includes Aldebaran, Capella, Pollux, Procyon, and Sirius. Rigel is a prominent equatorial navigation star, being easily located and readily visible in all the world's oceans (the exception is the area north of the 82nd parallel north).
Spectroscopy
Rigel's spectral type is a defining point of the classification sequence for supergiants. The overall spectrum is typical for a late B class star, with strong absorption lines of the hydrogen Balmer series as well as neutral helium lines and some of heavier elements such as oxygen, calcium, and magnesium. The luminosity class for B8 stars is estimated from the strength and narrowness of the hydrogen spectral lines, and Rigel is assigned to the bright supergiant class Ia. Variations in the spectrum have resulted in the assignment of different classes to Rigel, such as B8 Ia, B8 Iab, and B8 Iae.
As early as 1888, the heliocentric radial velocity of Rigel, as estimated from the Doppler shifts of its spectral lines, was seen to vary. This was confirmed and interpreted at the time as being due to a spectroscopic companion with a period of about 22 days. The radial velocity has since been measured to vary by about around a mean of .
In 1933, the Hα line in Rigel's spectrum was seen to be unusually weak and shifted towards shorter wavelengths, while there was a narrow emission spike about to the long wavelength side of the main absorption line. This is now known as a P Cygni profile after a star that shows this feature strongly in its spectrum. It is associated with mass loss where there is simultaneously emission from a dense wind close to the star and absorption from circumstellar material expanding away from the star.
The unusual Hα line profile is observed to vary unpredictably. It is a normal absorption line around a third of the time. About a quarter of the time, it is a double-peaked line, that is, an absorption line with an emission core or an emission line with an absorption core. About a quarter of the time it has a P Cygni profile; most of the rest of the time, the line has an inverse P Cygni profile, where the emission component is on the short wavelength side of the line. Rarely, there is a pure emission Hα line. The line profile changes are interpreted as variations in the quantity and velocity of material being expelled from the star. Occasional very high-velocity outflows have been inferred, and, more rarely, infalling material. The overall picture is one of large looping structures arising from the photosphere and driven by magnetic fields.
Variability
Rigel has been known to vary in brightness since at least 1930. The small amplitude of Rigel's brightness variation requires photoelectric or CCD photometry to be reliably detected. This brightness variation has no obvious period. Observations over 18 nights in 1984 showed variations at red, blue, and yellow wavelengths of up to 0.13 magnitudes on timescales of a few hours to several days, but again no clear period. Rigel's color index varies slightly, but this is not significantly correlated with its brightness variations.
From analysis of Hipparcos satellite photometry, Rigel is identified as belonging to the Alpha Cygni class of variable stars, defined as "non-radially pulsating supergiants of the Bep–Aep Ia spectral types". In those spectral types, the 'e' indicates that it displays emission lines in its spectrum, while the 'p' means it has an unspecified spectral peculiarity. Alpha Cygni type variables are generally considered to be irregular or have quasi-periods. Rigel was added to the General Catalogue of Variable Stars in the 74th name-list of variable stars on the basis of the Hipparcos photometry, which showed variations with a photographic amplitude of 0.039 magnitudes and a possible period of 2.075 days. Rigel was observed with the Canadian MOST satellite for nearly 28 days in 2009. Milli-magnitude variations were observed, and gradual changes in flux suggest the presence of long-period pulsation modes.
Mass loss
From observations of the variable Hα spectral line, Rigel's mass-loss rate due to stellar wind is estimated to be solar masses per year (/yr)—about ten million times more than the mass-loss rate from the Sun. More detailed optical and K band infrared spectroscopic observations, together with VLTI interferometry, were taken from 2006 to 2010. Analysis of the Hα and Hγ line profiles, and measurement of the regions producing the lines, show that Rigel's stellar wind varies greatly in structure and strength. Loop and arm structures were also detected within the wind. Calculations of mass loss from the Hγ line give in 2006–7 and in 2009–10. Calculations using the Hα line give lower results, around . The terminal wind velocity is . It is estimated that Rigel has lost about three solar masses () since beginning life as a star of seven to nine million years ago.
Distance
Rigel's distance from the Sun is somewhat uncertain, different estimates being obtained by different methods. Old estimates placed it 166 parsecs (or 541 light years) away from the Sun. The 2007 Hipparcos new reduction of Rigel's parallax is , giving a distance of with a margin of error of about 9%. Rigel B, usually considered to be physically associated with Rigel and at the same distance, has a Gaia Data Release 3 parallax of , suggesting a distance around . However, the measurements for this object may be unreliable.
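The conversion behind such figures is simple enough to show in a few lines: a parallax p in arcseconds corresponds to a distance of 1/p parsecs. The sketch below is illustrative only; the parallax used is a round placeholder, not the published Hipparcos or Gaia value.

```python
PARSEC_IN_LY = 3.2616  # light-years per parsec (approximate)

def distance_from_parallax(parallax_mas):
    """Convert a parallax in milliarcseconds to parsecs and light-years."""
    parallax_arcsec = parallax_mas / 1000.0
    d_pc = 1.0 / parallax_arcsec
    return d_pc, d_pc * PARSEC_IN_LY

# Round illustrative value only (not Rigel's measured parallax):
d_pc, d_ly = distance_from_parallax(4.0)
print(f"{d_pc:.0f} pc  ~  {d_ly:.0f} ly")  # 250 pc ~ 815 ly
```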
Indirect distance estimation methods have also been employed. For example, Rigel is believed to be in a region of nebulosity, its radiation illuminating several nearby clouds. Most notable of these is the 5°-long IC 2118 (Witch Head Nebula), located at an angular separation of 2.5° from the star, or a projected distance of away. From measures of other nebula-embedded stars, IC 2118's distance is estimated to be .
Rigel is an outlying member of the Orion OB1 association, which is located at a distance of up to from Earth. It is a member of the loosely defined Taurus-Orion R1 Association, somewhat closer at . Rigel is thought to be considerably closer than most of the members of Orion OB1 and the Orion Nebula. Betelgeuse and Saiph lie at a similar distance to Rigel, although Betelgeuse is a runaway star with a complex history and might have originally formed in the main body of the association.
Stellar system
Hierarchical scheme for Rigel's components
The star system of which Rigel is a part has at least four components. Rigel (sometimes called Rigel A to distinguish from the other components) has a visual companion, which is likely a close triple-star system. A fainter star at a wider separation might be a fifth component of the Rigel system.
William Herschel discovered Rigel to be a visual double star on 1 October 1781, cataloguing it as star 33 in the "second class of double stars" in his Catalogue of Double Stars, usually abbreviated to HII33, or as H233 in the Washington Double Star Catalogue. Friedrich Georg Wilhelm von Struve first measured the relative position of the companion in 1822, cataloguing the visual pair as Σ 668. The secondary star is often referred to as Rigel B or β Orionis B. The angular separation of Rigel B from Rigel A is 9.5 arc seconds to its south along position angle 204°. Although not particularly faint at visual magnitude 6.7, the overall difference in brightness from Rigel A (about 6.6 magnitudes or 440 times fainter) makes it a challenging target for telescope apertures smaller than .
At Rigel's estimated distance, Rigel B's projected separation from Rigel A is over 2,200 astronomical units (AU). Since its discovery, there has been no sign of orbital motion, although both stars share a similar common proper motion. The pair would have an estimated orbital period of 24,000 years. Gaia Data Release 2 (DR2) contains a somewhat unreliable parallax for Rigel B, placing it at about , further away than the Hipparcos distance for Rigel, but similar to the Taurus-Orion R1 association. There is no parallax for Rigel in Gaia DR2. The Gaia DR2 proper motions for Rigel B and the Hipparcos proper motions for Rigel are both small, although not quite the same.
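An order-of-magnitude check on such a period can be made with Kepler's third law in solar units (P² = a³/M). The values below are placeholders: the projected separation quoted above stands in for the true semi-major axis, and the total system mass is an assumed round figure, not a published measurement.

```python
import math

def orbital_period_years(semi_major_axis_au, total_mass_msun):
    """Kepler's third law in solar units: P [yr] = sqrt(a[AU]^3 / M[M_sun])."""
    return math.sqrt(semi_major_axis_au ** 3 / total_mass_msun)

# Assumed inputs: ~2,200 AU separation as a stand-in for the semi-major axis,
# and a total system mass of roughly 30 solar masses.
print(f"{orbital_period_years(2200, 30):,.0f} years")
# roughly 19,000 years -- the same order as the 24,000-year estimate above
```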
In 1871, Sherburne Wesley Burnham suspected Rigel B to be a binary system, and in 1878, he resolved it into two components. This visual companion is designated as component C (Rigel C), with a measured separation from component B that varies from less than to around . In 2009, speckle interferometry showed the two almost identical components separated by , with visual magnitudes of 7.5 and 7.6, respectively. Their estimated orbital period is 63 years. Burnham listed the Rigel multiple system as β555 in his double star catalog or BU555 in modern use.
Component B is a double-lined spectroscopic binary system, which shows two sets of spectral lines combined within its single stellar spectrum. Periodic changes observed in relative positions of these lines indicate an orbital period of 9.86 days. The two spectroscopic components Rigel Ba and Rigel Bb cannot be resolved in optical telescopes but are known to both be hot stars of spectral type around B9. This spectroscopic binary, together with the close visual component Rigel C, is likely a physical triple-star system, although Rigel C cannot be detected in the spectrum, which is inconsistent with its observed brightness.
In 1878, Burnham found another possibly associated star of approximately 13th magnitude. He listed it as component D of β555, although it is unclear whether it is physically related or a coincidental alignment. Its 2017 separation from Rigel was , almost due north at a position angle of 1°. Gaia DR2 finds it to be a 12th magnitude sunlike star at approximately the same distance as Rigel. Likely a K-type main-sequence star, this star would have an orbital period of around 250,000 years, if it is part of the Rigel system.
A spectroscopic companion to Rigel was reported on the basis of radial velocity variations, and its orbit was even calculated, but subsequent work suggests the star does not exist and that observed pulsations are intrinsic to Rigel itself.
Physical characteristics
Rigel is a blue supergiant that has exhausted the hydrogen fuel in its core, expanded and cooled as it moved away from the main sequence across the upper part of the Hertzsprung–Russell diagram. When it was on the main sequence, its effective temperature would have been around . Rigel's complex variability at visual wavelengths is caused by stellar pulsations similar to those of Deneb. Further observations of radial velocity variations indicate that it simultaneously oscillates in at least 19 non-radial modes with periods ranging from about 1.2 to 74 days.
Estimation of many physical characteristics of blue supergiant stars, including Rigel, is challenging due to their rarity and uncertainty about how far they are from the Sun. As such, their characteristics are mainly estimated from theoretical stellar evolution models. Its effective temperature can be estimated from the spectral type and color to be around . A mass of at an age of million years has been estimated by comparing evolutionary tracks, while atmospheric modeling from the spectrum gives a mass of .
Although Rigel is often considered the most luminous star within 1,000 light-years of the Sun, its energy output is poorly known. Using the Hipparcos distance of , the estimated relative luminosity for Rigel is about 120,000 times that of the Sun (), but another recently published distance of suggests an even higher luminosity of . Other calculations based on theoretical stellar evolutionary models of Rigel's atmosphere give luminosities anywhere between and , while summing the spectral energy distribution from historical photometry with the Hipparcos distance suggests a luminosity as low as . A 2018 study using the Navy Precision Optical Interferometer measured the angular diameter as . After correcting for limb darkening, the angular diameter is found to be , yielding a radius of . An older measurement of the angular diameter gives , equivalent to a radius of at . These radii are calculated assuming the Hipparcos distance of ; adopting a distance of leads to a significantly larger size. Older distance estimates were mostly far lower than modern estimates, leading to lower radius estimates; a 1922 estimate by John Stanley Plaskett gave Rigel a diameter of 25 million miles, or approximately , smaller than its neighbor Aldebaran.
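The angular-diameter-to-radius conversion used in such studies follows the small-angle relation R = (θ/2)·d. The sketch below uses placeholder inputs chosen only to land near the "more than seventy solar radii" figure quoted earlier; they are not the published measurements.

```python
PC_M = 3.0857e16              # metres per parsec
R_SUN = 6.957e8               # solar radius in metres
MAS_TO_RAD = 1e-3 / 206265.0  # milliarcseconds to radians

def radius_from_angular_diameter(theta_mas, distance_pc):
    """Physical radius (solar radii) of a star with angular diameter theta_mas
    at distance_pc parsecs, using the small-angle approximation R = (theta/2) * d."""
    theta_rad = theta_mas * MAS_TO_RAD
    radius_m = 0.5 * theta_rad * distance_pc * PC_M
    return radius_m / R_SUN

# Placeholder inputs, not the measured values from the cited studies:
print(round(radius_from_angular_diameter(2.6, 260)))  # ~73 solar radii
```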
Due to their closeness to each other and ambiguity of the spectrum, little is known about the intrinsic properties of the members of the Rigel BC triple system. All three stars seem to be near equally hot B-type main-sequence stars that are three to four times as massive as the Sun.
Evolution
Stellar evolution models suggest the pulsations of Rigel are powered by nuclear reactions in a hydrogen-burning shell that is at least partially non-convective. These pulsations are stronger and more numerous in stars that have evolved through a red supergiant phase and then increased in temperature to again become a blue supergiant. This is due to the decreased mass and increased levels of fusion products at the surface of the star.
Rigel is likely to be fusing helium in its core. Due to strong convection of helium produced in the core while Rigel was on the main sequence and in the hydrogen-burning shell since it became a supergiant, the fraction of helium at the surface has increased from 26.6% when the star formed to 32% now. The surface abundances of carbon, nitrogen, and oxygen seen in the spectrum are compatible with a post-red supergiant star only if its internal convection zones are modeled using non-homogeneous chemical conditions, known as the Ledoux criterion.
Rigel is expected to eventually end its stellar life as a type II supernova. It is one of the closest known potential supernova progenitors to Earth, and would be expected to have a maximum apparent magnitude of around (about the same brightness as a quarter Moon or around 300 times brighter than Venus ever gets). The supernova would leave behind either a black hole or a neutron star.
Etymology and cultural significance
The earliest known recording of the name Rigel is in the Alfonsine tables of 1521. It is derived from the Arabic name , "the left leg (foot) of Jauzah" (i.e. rijl meaning "leg, foot"), which can be traced to the 10th century. "Jauzah" was a proper name for Orion; an alternative Arabic name was , "the foot of the great one", from which stems the rarely used variant names Algebar or Elgebar. The Alphonsine tables saw its name split into "Rigel" and "Algebar", with the note, et dicitur Algebar. Nominatur etiam Rigel. Alternate spellings from the 17th century include Regel by Italian astronomer Giovanni Battista Riccioli, Riglon by German astronomer Wilhelm Schickard, and Rigel Algeuze or Algibbar by English scholar Edmund Chilmead.
With the constellation representing the mythological Greek huntsman Orion, Rigel is his knee or (as its name suggests) foot; with the nearby star Beta Eridani marking Orion's footstool. Rigel is presumably the star known as "Aurvandil's toe" in Norse mythology. In the Caribbean, Rigel represented the severed leg of the folkloric figure Trois Rois, himself represented by the three stars of Orion's Belt. The leg had been severed with a cutlass by the maiden Bįhi (Sirius). The Lacandon people of southern Mexico knew it as tunsel ("little woodpecker").
Rigel was known as Yerrerdet-kurrk to the Wotjobaluk koori of southeastern Australia, and held to be the mother-in-law of Totyerguil (Altair). The distance between them signified the taboo preventing a man from approaching his mother-in-law. The indigenous Boorong people of northwestern Victoria named Rigel as Collowgullouric Warepil. The Wardaman people of northern Australia know Rigel as the Red Kangaroo Leader Unumburrgu and chief conductor of ceremonies in a songline when Orion is high in the sky. Eridanus, the river, marks a line of stars in the sky leading to it, and the other stars of Orion are his ceremonial tools and entourage. Betelgeuse is Ya-jungin "Owl Eyes Flicking", watching the ceremonies.
The Māori people of New Zealand named Rigel as Puanga, said to be a daughter of Rehua (Antares), the chief of all-stars. Its heliacal rising presages the appearance of Matariki (the Pleiades) in the dawn sky, marking the Māori New Year in late May or early June. The Moriori people of the Chatham Islands, as well as some Māori groups in New Zealand, mark the start of their New Year with Rigel rather than the Pleiades. Puaka is a southern name variant used in the South Island.
In Japan, the Minamoto or Genji clan chose Rigel and its white color as its symbol, calling the star Genji-boshi (), while the Taira or Heike clan adopted Betelgeuse and its red color. The two powerful families fought the Genpei War; the stars were seen as facing off against each other and kept apart only by the three stars of Orion's Belt.
In modern culture
The MS Rigel was originally a Norwegian ship, built in Copenhagen in 1924. It was requisitioned by the Germans during World War II and sunk in 1944 while being used to transport prisoners of war. Two US Navy ships have borne the name USS Rigel. The SSM-N-6 Rigel was a cruise missile program for the US Navy that was cancelled in 1953 before reaching deployment.
The Rigel Skerries are a chain of small islands in Antarctica, renamed after originally being called Utskjera. They were given their current name as Rigel was used as an astrofix. Mount Rigel, elevation , is also in Antarctica.
| Physical sciences | Notable stars | null |
26469 | https://en.wikipedia.org/wiki/General%20recursive%20function | General recursive function | In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function.
Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms.
The subset of all total recursive functions with values in {0, 1} is known in computational complexity theory as the complexity class R.
Definition
The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the minimization operator μ.
The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive, and to be non-primitive.
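To make that last point concrete, the Ackermann function can be written down directly by a double recursion. The short Python sketch below (an illustration, not part of the article's sources) computes it for small arguments, showing that it is perfectly computable and total even though it grows faster than every primitive recursive function.

```python
def ackermann(m, n):
    """Ackermann-Peter function: total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Keep the arguments tiny -- both the values and the call depth explode quickly.
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```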
Primitive or "basic" functions:
Constant functions C_n^k: For each natural number n and every k, C_n^k(x_1, ..., x_k) = n.
Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.
Successor function S: S(x) = x + 1.
Projection function P_i^k (also called the Identity function I_i^k): For all natural numbers i and k such that 1 ≤ i ≤ k: P_i^k(x_1, ..., x_k) = x_i.
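For concreteness, the three families of basic functions just listed can be modelled directly in code. The sketch below is only an illustration (the names const, succ and proj are invented here); each k-ary function is represented as an ordinary Python function of k natural-number arguments.

```python
def const(n, k):
    """Constant function C_n^k: ignores its k arguments and returns n."""
    return lambda *xs: n

def succ(x):
    """Successor function S(x) = x + 1."""
    return x + 1

def proj(i, k):
    """Projection P_i^k: returns the i-th of its k arguments (1-indexed)."""
    return lambda *xs: xs[i - 1]

print(const(7, 3)(4, 5, 6))    # 7
print(succ(41))                # 42
print(proj(2, 3)(10, 20, 30))  # 20
```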
Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result):
Composition operator ∘ (also called the substitution operator): Given an m-ary function h(x_1, ..., x_m) and m k-ary functions g_1(x_1, ..., x_k), ..., g_m(x_1, ..., x_k):
h ∘ (g_1, ..., g_m) = f, where f(x_1, ..., x_k) = h(g_1(x_1, ..., x_k), ..., g_m(x_1, ..., x_k)).
This means that f(x_1, ..., x_k) is defined only if g_1(x_1, ..., x_k), ..., g_m(x_1, ..., x_k) and h(g_1(x_1, ..., x_k), ..., g_m(x_1, ..., x_k)) are all defined. (A short code sketch illustrating the three operators follows the minimization operator below.)
Primitive recursion operator ρ: Given the k-ary function g(x_1, ..., x_k) and the (k+2)-ary function h(y, z, x_1, ..., x_k):
ρ(g, h) = f, where the (k+1)-ary function f is defined by f(0, x_1, ..., x_k) = g(x_1, ..., x_k) and f(y+1, x_1, ..., x_k) = h(y, f(y, x_1, ..., x_k), x_1, ..., x_k).
This means that f(y, x_1, ..., x_k) is defined only if g(x_1, ..., x_k) and h(z, f(z, x_1, ..., x_k), x_1, ..., x_k) are defined for all z < y.
Minimization operator μ: Given a (k+1)-ary function f(y, x_1, ..., x_k), the k-ary function μ(f) is defined by:
μ(f)(x_1, ..., x_k) = z, where z is the smallest natural number such that f(z, x_1, ..., x_k) = 0 and f(i, x_1, ..., x_k) is defined and greater than zero for every i < z.
Intuitively, minimisation seeks (beginning the search from 0 and proceeding upwards) the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which f is not defined, then the search never terminates, and μ(f) is not defined for the argument (x_1, ..., x_k).
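The three operators can also be sketched executably. The code below is an illustration only (the names compose, prim_rec and mu are not standard), modelling each partial function as a Python function and "undefined" as non-termination; succ and isqrt are invented helpers used just to exercise the operators.

```python
def compose(h, *gs):
    """Composition: f(x1,...,xk) = h(g1(x1,...,xk), ..., gm(x1,...,xk))."""
    return lambda *xs: h(*(g(*xs) for g in gs))

def prim_rec(g, h):
    """Primitive recursion: f(0, xs) = g(xs); f(y+1, xs) = h(y, f(y, xs), xs)."""
    def f(y, *xs):
        acc = g(*xs)
        for i in range(y):
            acc = h(i, acc, *xs)
        return acc
    return f

def mu(f):
    """Minimisation: least z with f(z, xs) == 0, searching upward from 0.
    If no such z exists the loop never returns, mirroring partiality."""
    def g(*xs):
        z = 0
        while f(z, *xs) != 0:
            z += 1
        return z
    return g

succ = lambda x: x + 1                                  # successor, for the examples
add = prim_rec(lambda a: a, lambda y, acc, a: acc + 1)  # add(b, a) = b + a
add2 = compose(succ, succ)                              # add2(x) = S(S(x))
isqrt = mu(lambda z, x: abs(z * z - x))                 # defined only for perfect squares

print(add(3, 4), add2(5), isqrt(49))  # 7 7 7
# isqrt(50) would never return: the resulting function is partial.
```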
While some textbooks use the μ-operator as defined here, others demand that the μ-operator is applied to total functions only. Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's Normal Form Theorem (see below). The only difference is, that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total.
The strong equality relation ≃ can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that
f(x_1, ..., x_k) ≃ g(x_1, ..., x_k)
holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined.
Examples
Examples not involving the minimization operator can be found at Primitive recursive function#Examples.
The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive.
The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator.
Total recursive function
A general recursive function is called total recursive function if it is defined for every input, or, equivalently, if it can be computed by a total Turing machine. There is no way to computably tell if a given general recursive function is total - see Halting problem.
Equivalence with other models of computability
In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function.
The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values).
Normal form theorem
A normal form theorem due to Kleene says that for each k there are primitive recursive functions U(y) and T(e, x_1, ..., x_k, y) such that for any μ-recursive function f(x_1, ..., x_k) with k free variables there is an e such that
f(x_1, ..., x_k) ≃ U(μy T(e, x_1, ..., x_k, y)).
The number e is called an index or Gödel number for the function f. A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function.
Minsky observes that the expression U(μy T(e, x, y)) defined above is in essence the μ-recursive equivalent of the universal Turing machine:
Symbolism
A number of different symbolisms are used in the literature. An advantage to using the symbolism is a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form. In the following the string of parameters x1, ..., xn is abbreviated as x:
Constant function: Kleene uses " C_q^n(x) = q " and Boolos-Burgess-Jeffrey (2002) (B-B-J) use the abbreviation " const_n( x ) = n ":
e.g. C_13^7 ( r, s, t, u, v, w, x ) = 13
e.g. const_13 ( r, s, t, u, v, w, x ) = 13
Successor function: Kleene uses x' and S for "Successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows:
S(a) = a +1 =def a', where 1 =def 0', 2 =def 0 ' ', etc.
Identity function: Kleene (1952) uses " U_i^n " to indicate the identity function over the variables x_i; B-B-J use the identity function id_i^n over the variables x_1 to x_n:
U_i^n( x ) = id_i^n( x ) = x_i
e.g. U_3^7 = id_3^7 ( r, s, t, u, v, w, x ) = t
Composition (Substitution) operator: Kleene uses a bold-face S_n^m (not to be confused with his S for "successor" ! ). The superscript "m" refers to the mth of function "fm", whereas the subscript "n" refers to the nth variable "xn":
If we are given h( x )= g( f1(x), ... , fm(x) )
h(x) = S_n^m(g, f1, ... , fm )
In a similar manner, but without the sub- and superscripts, B-B-J write:
h(x)= Cn[g, f1 ,..., fm](x)
Primitive Recursion: Kleene uses the symbol " Rn(base step, induction step) " where n indicates the number of variables, B-B-J use " Pr(base step, induction step)(x)". Given:
base step: h( 0, x )= f( x ), and
induction step: h( y+1, x ) = g( y, h(y, x),x )
Example: primitive recursion definition of a + b:
base step: f( 0, a ) = a = U(a)
induction step: f( b' , a ) = ( f ( b, a ) )' = g( b, f( b, a), a ) = g( b, c, a ) = c' = S(U( b, c, a ))
R2 { U(a), S [ (U( b, c, a ) ] }
Pr{ U(a), S[ (U( b, c, a ) ] }
Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice reversal of variables a and b). He starts with 3 initial functions
S(a) = a'
U(a) = a
U( b, c, a ) = c
g(b, c, a) = S(U( b, c, a )) = c'
base step: h( 0, a ) = U(a)
induction step: h( b', a ) = g( b, h( b, a ), a )
He arrives at:
a+b = R2[ U, S'(S, U) ]
Examples
Fibonacci number
McCarthy 91 function
| Mathematics | Computability theory | null |
26471 | https://en.wikipedia.org/wiki/Rat | Rat | Rats are various medium-sized, long-tailed rodents. Species of rats are found throughout the order Rodentia, but stereotypical rats are found in the genus Rattus. Other rat genera include Neotoma (pack rats), Bandicota (bandicoot rats) and Dipodomys (kangaroo rats).
Rats are typically distinguished from mice by their size. Usually the common name of a large muroid rodent will include the word "rat", while a smaller muroid's name will include "mouse". The common terms rat and mouse are not taxonomically specific. There are 56 known species of rats in the world.
Species and description
The best-known rat species are the black rat (Rattus rattus) and the brown rat (Rattus norvegicus). This group, generally known as the Old World rats or true rats, originated in Asia. Rats are bigger than most Old World mice, which are their relatives, but seldom weigh over in the wild.
The term rat is also used in the names of other small mammals that are not true rats. Examples include the North American pack rats (aka wood rats) and a number of species loosely called kangaroo rats. Rats such as the bandicoot rat (Bandicota bengalensis) are murine rodents related to true rats but are not members of the genus Rattus.
Male rats are called bucks; unmated females, does, pregnant or parent females, dams; and infants, kittens or pups. A group of rats is referred to as a mischief.
The common species are opportunistic survivors and often live with and near humans; therefore, they are known as commensals. They may cause substantial food losses, especially in developing countries. However, the widely distributed and problematic commensal species of rats are a minority in this diverse genus. Many species of rats are island endemics, some of which have become endangered due to habitat loss or competition with the brown, black, or Polynesian rat.
Wild rodents, including rats, can carry many different zoonotic pathogens, such as Leptospira, Toxoplasma gondii, and Campylobacter. The Black Death is traditionally believed to have been caused by the microorganism Yersinia pestis, carried by the tropical rat flea (Xenopsylla cheopis), which preyed on black rats living in European cities during the epidemic outbreaks of the Middle Ages; the rats served as transport hosts for the infected fleas. Another zoonotic disease linked to the rat is foot-and-mouth disease.
Rats become sexually mature at age 6 weeks, but reach social maturity at about 5 to 6 months of age. The average lifespan of rats varies by species, but many only live about a year due to predation.
The black and brown rats diverged from other Old World rats in the forests of Asia during the beginning of the Pleistocene.
Rat tails
The characteristic long tail of most rodents is a feature that has been extensively studied in various rat species models, which suggest three primary functions of this structure: thermoregulation, minor proprioception, and a nocifensive-mediated degloving response. Rodent tails—particularly in rat models—have been implicated with a thermoregulation function that follows from its anatomical construction. This particular tail morphology is evident across the family Muridae, in contrast to the bushier tails of Sciuridae, the squirrel family. The tail is hairless and thin skinned but highly vascularized, thus allowing for efficient countercurrent heat exchange with the environment. The high muscular and connective tissue densities of the tail, along with ample muscle attachment sites along its plentiful caudal vertebrae, facilitate specific proprioceptive senses to help orient the rodent in a three-dimensional environment. Murids have evolved a unique defense mechanism termed degloving that allows for escape from predation through the loss of the outermost integumentary layer on the tail. However, this mechanism is associated with multiple pathologies that have been the subject of investigation.
Multiple studies have explored the thermoregulatory capacity of rodent tails by subjecting test organisms to varying levels of physical activity and quantifying heat conduction via the animals' tails. One study demonstrated a significant disparity in heat dissipation from a rat's tail relative to its abdomen. This observation was attributed to the higher proportion of vascularity in the tail, as well as its higher surface-area-to-volume ratio, which directly relates to heat's ability to dissipate via the skin. These findings were confirmed in a separate study analyzing the relationships of heat storage and mechanical efficiency in rodents that exercise in warm environments. In this study, the tail was a focal point in measuring heat accumulation and modulation.
On the other hand, the tail's ability to function as a proprioceptive sensor and modulator has also been investigated. As aforementioned, the tail demonstrates a high degree of muscularization and subsequent innervation that ostensibly collaborate in orienting the organism. Specifically, this is accomplished by coordinated flexion and extension of tail muscles to produce slight shifts in the organism's center of mass, orientation, etc., which ultimately assists it with achieving a state of proprioceptive balance in its environment. Further mechanobiological investigations of the constituent tendons in the tail of the rat have identified multiple factors that influence how the organism navigates its environment with this structure. A particular example is that of a study in which the morphology of these tendons is explicated in detail. Namely, cell viability tests of tendons of the rat's tail demonstrate a higher proportion of living fibroblasts that produce the collagen for these fibers. As in humans, these tendons contain a high density of golgi tendon organs that help the animal assess stretching of muscle in situ and adjust accordingly by relaying the information to higher cortical areas associated with balance, proprioception, and movement.
The characteristic tail of murids also displays a unique defense mechanism known as degloving in which the outer layer of the integument can be detached in order to facilitate the animal's escape from a predator. This evolutionary selective pressure has persisted despite a multitude of pathologies that can manifest upon shedding part of the tail and exposing more interior elements to the environment. Paramount among these are bacterial and viral infection, as the high density of vascular tissue within the tail becomes exposed upon avulsion or similar injury to the structure. The degloving response is a nocifensive response, meaning that it occurs when the animal is subjected to acute pain, such as when a predator snatches the organism by the tail.
As pets
Specially bred rats have been kept as pets at least since the late 19th century. Pet rats are typically variants of the species brown rat, but black rats and giant pouched rats are also sometimes kept. Pet rats behave differently from their wild counterparts depending on how many generations they have been kept as pets. Pet rats do not pose any more of a risk of zoonotic diseases than pets such as cats or dogs. Tamed rats are generally friendly and can be taught to perform selected behaviors.
Selective breeding has brought about different color and marking varieties in rats. Genetic mutations have also created different fur types, such as rex and hairless. A congenital malformation propagated through selective breeding produced the dumbo rat, a popular pet choice because of its low, saucer-shaped ears. A breeding standard exists for rat fanciers wishing to breed and show their rats at a rat show.
As subjects for scientific research
In 1895, Clark University in Worcester, Massachusetts, established a population of domestic albino brown rats to study the effects of diet and for other physiological studies. Over the years, rats have been used in many experimental studies, adding to our understanding of genetics, diseases, the effects of drugs, and other topics that have provided a great benefit for the health and wellbeing of humankind.
The aortic arches of the rat are among the most commonly studied in murine models due to marked anatomical homology to the human cardiovascular system. Both rat and human aortic arches exhibit subsequent branching of the brachiocephalic trunk, left common carotid artery, and left subclavian artery, as well as geometrically similar, nonplanar curvature in the aortic branches. Aortic arches studied in rats exhibit abnormalities similar to those of humans, including altered pulmonary arteries and double or absent aortic arches. Despite the existing anatomical analogy in the intrathoracic position of the heart itself, the murine model of the heart and its structures remains a valuable tool for studies of human cardiovascular conditions.
The rat's larynx has been used in experiments that involve inhalation toxicity, allograft rejection, and irradiation responses. One study described four features of the rat's larynx. The first was the location and attachments of the thyroarytenoid muscle and of two newly named muscles, the alar cricoarytenoid muscle and the superior cricoarytenoid muscle, the latter of which ran from the arytenoid to a midline tubercle on the cricoid; these newly named muscles are not seen in the human larynx. In addition, the location and configuration of the laryngeal alar cartilage were described. The second feature was the way in which the newly named muscles appear to be analogous to muscles in the human larynx. The third was that a clear understanding of how motor end plates (MEPs) are distributed in each of the laryngeal muscles helps in understanding the effects of botulinum toxin injection: the MEPs in the posterior cricoarytenoid, lateral cricoarytenoid, cricothyroid, and superior cricoarytenoid muscles were concentrated mostly at the midbelly, as were those of the medial thyroarytenoid muscle, whereas the MEPs of the lateral thyroarytenoid muscle were concentrated at the anterior third of the belly. The fourth feature was how the MEPs were distributed in the thyroarytenoid muscle.
Laboratory rats have also proved valuable in psychological studies of learning and other mental processes (Barnett 2002), as well as to understand group behavior and overcrowding (with the work of John B. Calhoun on behavioral sink). A 2007 study found rats to possess metacognition, a mental ability previously only documented in humans and some primates.
Domestic rats differ from wild rats in many ways. They are calmer and less likely to bite; they can tolerate greater crowding; they breed earlier and produce more offspring; and their brains, livers, kidneys, adrenal glands, and hearts are smaller (Barnett 2002).
Brown rats are often used as model organisms for scientific research. Since the publication of the rat genome sequence, and other advances, such as the creation of a rat SNP chip, and the production of knockout rats, the laboratory rat has become a useful genetic tool, although not as popular as mice. Entirely new breeds or "lines" of brown rats, such as the Wistar rat, have been bred for use in laboratories. Much of the genome of Rattus norvegicus has been sequenced.
When it comes to conducting tests related to intelligence, learning, and drug abuse, rats are a popular choice due to their high intelligence, ingenuity, aggressiveness, and adaptability. Their psychology seems in many ways similar to that of humans. Inspired by B.F. Skinner’s famous box which dispensed food pellets when rats pushed a lever, photographer Augustin Lignier gave two rats periodic, unpredictable rewards for pressing a button. He likened their repeated button-pressing behaviors to people’s fascinations with digital and social media.
General intelligence
Early studies found evidence both for and against measurable intelligence using the "g factor" in rats. Part of the difficulty of understanding animal cognition, generally, is determining what to measure. One aspect of intelligence is the ability to learn, which can be measured using a maze like the T-maze. Experiments done in the 1920s showed that some rats performed better than others in maze tests, and if these rats were selectively bred, their offspring also performed better, suggesting that in rats an ability to learn was heritable in some way.
As food
Rat meat is a food that, while taboo in some cultures, is a dietary staple in others.
Working rats
Rats have been used as working animals. Tasks for working rats include the sniffing of gunpowder residue, demining, acting and animal-assisted therapy. Rats have a keen sense of smell and are easy to train. These characteristics have been employed, for example, by the Belgian non-governmental organization APOPO, which trains rats (specifically African giant pouched rats) to detect landmines and diagnose tuberculosis through smell.
As pests
Rats have long been considered deadly pests. Once considered a modern myth, the rat flood in India occurs every fifty years, as armies of bamboo rats descend upon rural areas and devour everything in their path. Rats have long been held up as the chief villain in the spread of the Bubonic Plague; however, recent studies show that rats alone could not account for the rapid spread of the disease through Europe in the Middle Ages. Still, the Centers for Disease Control does list nearly a dozen diseases directly linked to rats.
Most urban areas battle rat infestations. A 2015 study by the American Housing Survey (AHS) found that eighteen percent of homes in Philadelphia showed evidence of rodents. Boston, New York City, and Washington, D.C., also demonstrated significant rodent infestations. Indeed, rats in New York City are famous for their size and prevalence. The urban legend that the rat population in Manhattan equals that of its human population was definitively refuted by Robert Sullivan in his book Rats but illustrates New Yorkers' awareness of the presence, and on occasion boldness and cleverness, of the rodents. New York has specific regulations for eradicating rats; multifamily residences and commercial businesses must use a specially trained and licensed rat catcher.
Chicago was declared the "rattiest city" in the US by the pest control company Orkin in 2020, for the sixth consecutive time, followed by Los Angeles, New York, Washington, DC, and San Francisco. To help combat the problem, a Chicago animal shelter has placed more than 1,000 feral cats (sterilized and vaccinated) outside homes and businesses since 2012, where they hunt and catch rats while also acting as a deterrent simply by their presence.
Rats have the ability to swim up sewer pipes into toilets. Rats will infest any area that provides shelter and easy access to sources of food and water, including under sinks, near garbage, and inside walls or cabinets.
In the spread of disease
Rats can serve as zoonotic vectors for certain pathogens and thus spread disease, such as bubonic plague, Lassa fever, leptospirosis, and Hantavirus infection. Researchers studying New York City wastewater have also cited rats as the potential source of "cryptic" SARS-CoV-2 lineages, due to unknown viral RNA fragments in sewage matching mutations previously shown to make SARS-CoV-2 more adept at rodent-based transmission.
Rats are also associated with human dermatitis because they are frequently infested with blood feeding rodent mites such as the tropical rat mite (Ornithonyssus bacoti) and spiny rat mite (Laelaps echidnina), which will opportunistically bite and feed on humans, where the condition is known as rat mite dermatitis.
As invasive species
When introduced into locations where rats previously did not exist, they can wreak an enormous degree of environmental degradation. Rattus rattus, the black rat, is considered to be one of the world's worst invasive species. Also known as the ship rat, it has been carried worldwide as a stowaway on seagoing vessels for millennia and has usually accompanied men to any new area visited or settled by human beings by sea. Rats first got to countries such as America and Australia by stowing away on ships. The similar species Rattus norvegicus, the brown rat or wharf rat, has also been carried worldwide by ships in recent centuries.
The ship or wharf rat has contributed to the extinction of many species of wildlife, including birds, small mammals, reptiles, invertebrates, and plants, especially on islands. True rats are omnivorous, capable of eating a wide range of plant and animal foods, and have a very high birth rate. When introduced to a new area, they quickly reproduce to take advantage of the new food supply. In particular, they prey on the eggs and young of forest birds, which on isolated islands often have no other predators and thus have no fear of predators. Some experts believe that rats are to blame for between forty percent and sixty percent of all seabird and reptile extinctions, with ninety percent of those occurring on islands. Thus man has indirectly caused the extinction of many species by accidentally introducing rats to new areas.
Rat-free areas
Rats are found in nearly all areas of Earth which are inhabited by human beings. The only rat-free continent is Antarctica, which is too cold for rat survival outdoors, and its lack of human habitation does not provide buildings to shelter them from the weather. However, rats have been introduced to many of the islands near Antarctica, and because of their destructive effect on native flora and fauna, efforts to eradicate them are ongoing. In particular, Bird Island (just off rat-infested South Georgia Island), where breeding seabirds could be badly affected if rats were introduced, is subject to special measures and regularly monitored for rat invasions.
As part of island restoration, some islands' rat populations have been eradicated to protect or restore the ecology. Hawadax Island, Alaska was declared rat free after 229 years and Campbell Island, New Zealand after almost 200 years. Breaksea Island in New Zealand was declared rat free in 1988 after an eradication campaign based on a successful trial on the smaller Hawea Island nearby.
In January 2015, an international "Rat Team" (organized by the South Georgia Heritage Trust) set sail from the Falkland Islands for the British Overseas Territory of South Georgia and the South Sandwich Islands on board a ship carrying three helicopters and 100 tons of rat poison with the objective of "reclaiming the island for its seabirds". Rats had wiped out more than 90% of the seabirds on South Georgia, and the sponsors hoped that once the rats were gone, it would regain its former status as home to the greatest concentration of seabirds in the world.
The Canadian province of Alberta is notable for being the largest inhabited area on Earth which is free of true rats due to very aggressive government rat control policies. It has large numbers of native pack rats, also called bushy-tailed wood rats, but they are forest-dwelling vegetarians which are much less destructive than true rats.
Alberta was settled by Europeans relatively late in North American history and only became a province in 1905. Black rats cannot survive in its climate at all, and brown rats must live near people and in their structures to survive the winters. There are numerous predators in Canada's vast natural areas which will eat non-native rats, so it took until 1950 for invading rats to make their way over land from Eastern Canada. Immediately upon their arrival at the eastern border with Saskatchewan, the Alberta government implemented an extremely aggressive rat control program to stop them from advancing further. A systematic detection and eradication system was used throughout a control zone about long and wide along the eastern border to eliminate rat infestations before the rats could spread further into the province. Shotguns, bulldozers, high explosives, poison gas, and incendiaries were used to destroy rats. Numerous farm buildings were destroyed in the process. Initially, tons of arsenic trioxide were spread around thousands of farm yards to poison rats, but soon after the program commenced the rodenticide and medical drug warfarin was introduced, which is much safer for people and more effective at killing rats than arsenic.
Forceful government control measures, strong public support and enthusiastic citizen participation continue to keep rat infestations to a minimum. The effectiveness has been aided by a similar but newer program in Saskatchewan which prevents rats from even reaching the Alberta border. Alberta still employs an armed rat patrol to control rats along Alberta's borders. About ten single rats are found and killed per year, and occasionally a large localized infestation has to be dug out with heavy machinery, but the number of permanent rat infestations is zero.
In culture
Ancient Romans did not generally differentiate between rats and mice, instead referring to the former as mus maximus (big mouse) and the latter as mus minimus (little mouse).
On the Isle of Man, there is a taboo against the word "rat".
Asian cultures
The rat (sometimes referred to as a mouse) is the first of the twelve animals of the Chinese zodiac. People born in this year are expected to possess qualities associated with rats, including creativity, intelligence, honesty, generosity, ambition, a quick temper and wastefulness. People born in a year of the rat are said to get along well with "monkeys" and "dragons", and to get along poorly with "horses".
In Indian tradition, rats are seen as the vehicle of Ganesha, and a rat's statue is always found in a temple of Ganesh. In the northwestern Indian city of Deshnoke, the rats at the Karni Mata Temple are held to be destined for reincarnation as Sadhus (Hindu holy men). The attending priests feed milk and grain to the rats, of which the pilgrims also partake.
European cultures
European associations with the rat are generally negative. For instance, "Rats!" is used as a substitute for various vulgar interjections in the English language. These associations do not draw, per se, from any biological or behavioral trait of the rat, but possibly from the association of rats (and fleas) with the 14th-century medieval plague called the Black Death. Rats are seen as vicious, unclean, parasitic animals that steal food and spread disease. In 1522, the rats in Autun, France were charged and put on trial for destroying crops. However, some people in European cultures keep rats as pets and conversely find them to be tame, clean, intelligent, and playful.
Rats are often used in scientific experiments; animal rights activists allege the treatment of rats in this context is cruel. The term "lab rat" is used, typically in a self-effacing manner, to describe a person whose job function requires them to spend a majority of their work time engaged in bench-level research (such as postgraduate students in the sciences).
Terminology
Rats are frequently blamed for damaging food supplies and other goods, or spreading disease. Their reputation has carried into common parlance: in the English language, "rat" is often an insult or is generally used to signify an unscrupulous character; it is also used, as a synonym for the term nark, to mean an individual who works as a police informant or who has turned state's evidence. Writer/director Preston Sturges created the humorous alias "Ratskywatsky" for a soldier who seduced, impregnated, and abandoned the heroine of his 1944 film, The Miracle of Morgan's Creek. It is a term (noun and verb) in criminal slang for an informant – "to rat on someone" is to betray them by informing the authorities of a crime or misdeed they committed. Describing a person as "rat-like" usually implies he or she is unattractive and suspicious.
Among trade unions, the word "rat" is also a term for nonunion employers or breakers of union contracts, and this is why unions use inflatable rats.
Fiction
Depictions of rats in fiction are historically inaccurate and negative. The most common falsehood is the squeaking almost always heard in otherwise realistic portrayals (i.e. nonanthropomorphic). While the recordings may be of actual squeaking rats, the noise is uncommon – they may do so only if distressed, hurt, or annoyed. Normal vocalizations are very high-pitched, well outside the range of human hearing. Rats are also often cast in vicious and aggressive roles when in fact, their shyness helps keep them undiscovered for so long in an infested home.
The actual portrayals of rats vary from negative to positive with a majority in the negative and ambiguous. The rat plays a villain in several mouse societies; from Brian Jacques's Redwall and Robin Jarvis's The Deptford Mice, to the roles of Disney's Professor Ratigan and Kate DiCamillo's Roscuro and Botticelli. They have often been used as a mechanism in horror; being the titular evil in stories like The Rats or H.P. Lovecraft's The Rats in the Walls and in films like Willard and Ben. Another terrifying use of rats is as a method of torture, for instance in Room 101 in George Orwell's Nineteen Eighty-Four or The Pit and the Pendulum by Edgar Allan Poe.
Selfish helpfulness, offered only for a price, has also been attributed to fictional rats. Templeton, from E. B. White's Charlotte's Web, repeatedly reminds the other characters that he is only involved because it means more food for him, and the cellar-rat of John Masefield's The Midnight Folk requires bribery to be of any assistance.
By contrast, the rats appearing in the Doctor Dolittle books tend to be highly positive and likeable characters, many of whom tell their remarkable life stories in the Mouse and Rat Club established by the animal-loving doctor.
Some fictional works use rats as the main characters. Notable examples include the society created by O'Brien's Mrs. Frisby and the Rats of NIMH, and others include Doctor Rat, and Rizzo the Rat from The Muppets. Pixar's 2007 animated film Ratatouille is about a rat described by Roger Ebert as "earnest... lovable, determined, [and] gifted" who lives with a Parisian garbage-boy-turned-chef.
Mon oncle d'Amérique ("My American Uncle"), a 1980 French film, illustrates Henri Laborit's theories on evolutionary psychology and human behaviors by using short sequences in the storyline showing lab rat experiments.
In Harry Turtledove's science fiction novel Homeward Bound, humans unintentionally introduce rats to the ecology at the home world of an alien race which previously invaded Earth and introduced some of its own fauna into its environment. A. Bertram Chandler pitted the space-bound protagonist of a long series of novels, Commodore Grimes, against giant, intelligent rats who took over several stellar systems and enslaved their human inhabitants. "The Stainless Steel Rat" is the nickname of the (human) protagonist of a series of humorous science fiction novels by Harry Harrison.
Wererats, therianthropic creatures able to take the shape of a rat, have appeared in the fantasy or horror genre since the 1970s. The term is a neologism coined in analogy to werewolf. The concept has since become common in role playing games like Dungeons & Dragons and fantasy fiction like the Anita Blake series.
The Pied Piper
One of the oldest and most historic stories about rats is "The Pied Piper of Hamelin", in which a rat-catcher leads away an infestation with enchanted music. The piper is later refused payment, so he in turn leads away the town's children. This tale, traced to Germany around the late 13th century, has inspired adaptations in film, theatre, literature, and even opera. The subject of much research, some theories have intertwined the tale with events related to the Black Death, in which black rats played an important role. Fictional works based on the tale that focus heavily on the rat aspect include Pratchett's The Amazing Maurice and his Educated Rodents and the Belgian graphic novel The Ball of the Dead Rat. Furthermore, a linguistic phenomenon in which a wh-expression drags an entire encompassing phrase with it to the front of the clause has been named pied-piping after "The Pied Piper of Hamelin" (see also pied-piping with inversion).
| Biology and health sciences | Rodents | null |
26477 | https://en.wikipedia.org/wiki/Rust | Rust | Rust is an iron oxide, a usually reddish-brown oxide formed by the reaction of iron and oxygen in the catalytic presence of water or air moisture. Rust consists of hydrous iron(III) oxides (Fe2O3·nH2O) and iron(III) oxide-hydroxide (FeO(OH), Fe(OH)3), and is typically associated with the corrosion of refined iron.
Given sufficient time, any iron mass, in the presence of water and oxygen, could eventually convert entirely to rust. Surface rust is commonly flaky and friable, and provides no passivational protection to the underlying iron, unlike the formation of patina on copper surfaces. Rusting is the common term for corrosion of elemental iron and its alloys such as steel. Many other metals undergo similar corrosion, but the resulting oxides are not commonly called "rust".
Several forms of rust are distinguishable both visually and by spectroscopy, and form under different circumstances. Other forms of rust include the result of reactions between iron and chloride in an environment deprived of oxygen. Rebar used in underwater concrete pillars, which generates green rust, is an example. Although rusting is generally a negative aspect of iron, a particular form of rusting, known as stable rust, causes the object to have a thin coating of rust over the top. If kept in low relative humidity, it makes the "stable" layer protective to the iron below, but not to the extent of other oxides such as aluminium oxide on aluminium.
History
Rust in the early oceans is thought to have formed when dissolved oxygen reacted with iron; the resulting iron oxides sank and accumulated beneath the seafloor, forming banded iron formations from 2.5 to 2.2 billion years ago. These deposits were later uplifted, and the iron ores they contain were ultimately transformed into the iron and steel that effectively fuelled the Industrial Revolution.
Chemical reactions
Rust is a general name for a complex of oxides and hydroxides of iron, which occur when iron or some alloys that contain iron are exposed to oxygen and moisture for a long period of time. Over time, the oxygen combines with the metal, forming new compounds collectively called rust, in a process called rusting. Rusting is an oxidation reaction specifically occurring with iron. Other metals also corrode via similar oxidation, but such corrosion is not called rusting.
The main catalyst for the rusting process is water. Iron or steel structures might appear to be solid, but water molecules can penetrate the microscopic pits and cracks in any exposed metal. The hydrogen atoms present in water molecules can combine with other elements to form acids, which will eventually cause more metal to be exposed. If chloride ions are present, as is the case with saltwater, the corrosion is likely to occur more quickly. Meanwhile, the oxygen atoms combine with metallic atoms to form the destructive oxide compound. These iron compounds are brittle and crumbly and replace strong metallic iron, reducing the strength of the object.
Oxidation of iron
When iron is in contact with water and oxygen, it rusts. If salt is present, for example in seawater or salt spray, the iron tends to rust more quickly, as a result of chemical reactions. Iron metal is relatively unaffected by pure water or by dry oxygen. As with other metals, like aluminium, a tightly adhering oxide coating, a passivation layer, protects the bulk iron from further oxidation. The conversion of the passivating ferrous oxide layer to rust results from the combined action of two agents, usually oxygen and water.
Other degrading solutions are sulfur dioxide in water and carbon dioxide in water. Under these corrosive conditions, iron hydroxide species are formed. Unlike ferrous oxides, the hydroxides do not adhere to the bulk metal. As they form and flake off from the surface, fresh iron is exposed, and the corrosion process continues until either all of the iron is consumed or all of the oxygen, water, carbon dioxide or sulfur dioxide in the system are removed or consumed.
When iron rusts, the oxides take up more volume than the original metal; this expansion can generate enormous forces, damaging structures made with iron. See economic effect for more details.
Associated reactions
The rusting of iron is an electrochemical process that begins with the transfer of electrons from iron to oxygen. The iron is the reducing agent (gives up electrons) while the oxygen is the oxidizing agent (gains electrons). The rate of corrosion is affected by water and accelerated by electrolytes, as illustrated by the effects of road salt on the corrosion of automobiles. The key reaction is the reduction of oxygen:
O2 + 4 e− + 2 H2O → 4 OH−
Because it forms hydroxide ions, this process is strongly affected by the presence of acid. Likewise, the corrosion of most metals by oxygen is accelerated at low pH. Providing the electrons for the above reaction is the oxidation of iron that may be described as follows:
Fe → Fe2+ + 2 e−
The following redox reaction also occurs in the presence of water and is crucial to the formation of rust:
4 Fe2+ + O2 → 4 Fe3+ + 2 O2−
In addition, the following multistep acid–base reactions affect the course of rust formation:
Fe2+ + 2 H2O ⇌ Fe(OH)2 + 2 H+
Fe3+ + 3 H2O ⇌ Fe(OH)3 + 3 H+
as do the following dehydration equilibria:
Fe(OH)2 ⇌ FeO + H2O
Fe(OH)3 ⇌ FeO(OH) + H2O
2 FeO(OH) ⇌ Fe2O3 + H2O
From the above equations, it is also seen that the corrosion products are dictated by the availability of water and oxygen. With limited dissolved oxygen, iron(II)-containing materials are favoured, including FeO and black lodestone or magnetite (Fe3O4). High oxygen concentrations favour ferric materials with the nominal formula Fe(OH)3−xOx/2. The nature of rust changes with time, reflecting the slow rates of the reactions of solids.
Furthermore, these complex processes are affected by the presence of other ions, such as Ca2+, which serve as electrolytes which accelerate rust formation, or combine with the hydroxides and oxides of iron to precipitate a variety of Ca, Fe, O, OH species.
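Combining the iron oxidation, oxygen reduction, and dehydration steps listed above gives the overall stoichiometry commonly quoted in textbooks for rust formation (added here as a summary, with the hydration number x varying by conditions):

4 Fe + 3 O2 + 2x H2O → 2 Fe2O3·xH2O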
The onset of rusting can also be detected in the laboratory with the use of ferroxyl indicator solution. The solution detects both Fe2+ ions and hydroxyl ions. The formation of Fe2+ ions and of hydroxyl ions is indicated by blue and pink patches respectively.
Prevention
Because of the widespread use and importance of iron and steel products, the prevention or slowing of rust is the basis of major economic activities in a number of specialized technologies. A brief overview of methods is presented here; for detailed coverage, see the cross-referenced articles.
Rust is permeable to air and water, therefore the interior metallic iron beneath a rust layer continues to corrode. Rust prevention thus requires coatings that preclude rust formation.
Rust-resistant alloys
Stainless steel forms a passivation layer of chromium(III) oxide. Similar passivation behavior occurs with magnesium, titanium, zinc, zinc oxides, aluminium, polyaniline, and other electroactive conductive polymers.
Special "weathering steel" alloys such as Cor-Ten rust at a much slower rate than normal, because the rust adheres to the surface of the metal in a protective layer. Designs using this material must include measures that avoid worst-case exposures since the material still continues to rust slowly even under near-ideal conditions.
Galvanization
Galvanization consists of an application on the object to be protected of a layer of metallic zinc by either hot-dip galvanizing or electroplating. Zinc is traditionally used because it is cheap, adheres well to steel, and provides cathodic protection to the steel surface in case of damage of the zinc layer; in more corrosive environments (such as salt water), cadmium plating is preferred. Where the coating is damaged, the zinc acts as a sacrificial anode and corrodes instead of the underlying protected metal. The protective zinc layer is consumed by this action, and thus galvanization provides protection only for a limited period of time.
More modern coatings add aluminium to the coating as zinc-alume; aluminium will migrate to cover scratches and thus provide protection for a longer period. These approaches rely on the aluminium and zinc oxides protecting a once-scratched surface, rather than oxidizing as a sacrificial anode as in traditional galvanized coatings. In some cases, such as very aggressive environments or long design life, both zinc and a coating are applied to provide enhanced corrosion protection.
Typical galvanization of steel products that are to be subjected to normal day-to-day weathering in an outside environment consists of a hot-dipped 85 μm zinc coating. Under normal weather conditions, this will deteriorate at a rate of 1 μm per year, giving approximately 85 years of protection.
Cathodic protection
Cathodic protection is a technique used to inhibit corrosion on buried or immersed structures by supplying an electrical charge that suppresses the electrochemical reaction. If correctly applied, corrosion can be stopped completely. In its simplest form, it is achieved by attaching a sacrificial anode, thereby making the iron or steel the cathode in the cell formed. The sacrificial anode must be made from something with a more negative electrode potential than the iron or steel, commonly zinc, aluminium, or magnesium. The sacrificial anode will eventually corrode away, ceasing its protective action unless it is replaced in a timely manner.
Cathodic protection can also be provided by using an applied electrical current; this is known as impressed current cathodic protection (ICCP).
Coatings and painting
Rust formation can be controlled with coatings, such as paint, lacquer, varnish, or wax tapes that isolate the iron from the environment. Large structures with enclosed box sections, such as ships and modern automobiles, often have a wax-based product (technically a "slushing oil") injected into these sections. Such treatments usually also contain rust inhibitors. Covering steel with concrete can provide some protection to steel because of the alkaline pH environment at the steel–concrete interface. However, rusting of steel in concrete can still be a problem, as expanding rust can fracture concrete from within.
As a closely related example, iron clamps were used to join marble blocks during a restoration attempt of the Parthenon in Athens, Greece, in 1898, but caused extensive damage to the marble by the rusting and swelling of unprotected iron. The ancient Greek builders had used a similar fastening system for the marble blocks during construction, however, they also poured molten lead over the iron joints for protection from seismic shocks as well as from corrosion. This method was successful for the 2500-year-old structure, but in less than a century the crude repairs were in imminent danger of collapse.
When only temporary protection is needed for storage or transport, a thin layer of oil, grease or a special mixture such as Cosmoline can be applied to an iron surface. Such treatments are extensively used when "mothballing" a steel ship, automobile, or other equipment for long-term storage.
Special anti-seize lubricant mixtures are available and are applied to metallic threads and other precision machined surfaces to protect them from rust. These compounds usually contain grease mixed with copper, zinc, or aluminium powder, and other proprietary ingredients.
Bluing
Bluing is a technique that can provide limited resistance to rusting for small steel items, such as firearms; for it to be successful, a water-displacing oil is rubbed onto the blued steel.
Inhibitors
Corrosion inhibitors, such as gas-phase or volatile inhibitors, can be used to prevent corrosion inside sealed systems. They are not effective when air circulation disperses them, and brings in fresh oxygen and moisture.
Humidity control
Rust can be avoided by controlling the moisture in the atmosphere. An example of this is the use of silica gel packets to control humidity in equipment shipped by sea.
Treatment
Rust removal from small iron or steel objects by electrolysis can be done in a home workshop using simple materials: a plastic bucket filled with an electrolyte of washing soda dissolved in tap water; a length of rebar suspended vertically in the solution to act as an anode; another length laid across the top of the bucket to support the workpiece; baling wire to suspend the object in the solution from the horizontal rebar; and a battery charger as a power source, with the positive terminal clamped to the anode and the negative terminal clamped to the object to be treated, which becomes the cathode. Hydrogen and oxygen gases are produced at the cathode and anode respectively; this mixture is flammable/explosive. Care should also be taken to avoid hydrogen embrittlement. Overvoltage also produces small amounts of ozone, which is highly toxic, so a low-voltage phone charger is a far safer source of DC current. The effects of hydrogen on global warming have also recently come under scrutiny.
Rust may be treated with commercial products known as rust converter which contain tannic acid or phosphoric acid which combines with rust; removed with organic acids like citric acid and vinegar or the stronger hydrochloric acid; or removed with chelating agents as in some commercial formulations or even a solution of molasses.
Economic effect
Rust is associated with the degradation of iron-based tools and structures. As rust has a much higher volume than the originating mass of iron, its buildup can also cause failure by forcing apart adjacent parts — a phenomenon sometimes known as "rust packing". It was the cause of the collapse of the Mianus river bridge in 1983, when the bearings rusted internally and pushed one corner of the road slab off its support.
Rust was an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed in less than a minute, killing 46 drivers and passengers on the bridge at the time. The Kinzua Bridge in Pennsylvania was blown down by a tornado in 2003, largely because the central base bolts holding the structure to the ground had rusted away, leaving the bridge anchored by gravity alone.
Reinforced concrete is also vulnerable to rust damage. Internal pressure caused by expanding corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges and buildings.
Cultural symbolism
Rust is a commonly used metaphor for slow decay due to neglect, since it gradually converts robust iron and steel metal into a soft crumbling powder. A wide section of the industrialized American Midwest and American Northeast, once dominated by steel foundries, the automotive industry, and other manufacturers, has experienced harsh economic cutbacks that have caused the region to be dubbed the "Rust Belt".
In music, literature, and art, rust is associated with images of faded glory, neglect, decay, and ruin.
| Physical sciences | Redox reactions | Chemistry |
26478 | https://en.wikipedia.org/wiki/Real%20analysis | Real analysis | In mathematics, the branch of real analysis studies the behavior of real numbers, sequences and series of real numbers, and real functions. Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability.
Real analysis is distinguished from complex analysis, which deals with the study of complex numbers and their functions.
Scope
Construction of the real numbers
The theorems of real analysis rely on the properties of the real number system, which must be established. The real number system consists of an uncountable set (ℝ), together with two binary operations denoted + and ⋅, and a total order denoted ≤. The operations make the real numbers a field, and, along with the order, an ordered field. The real number system is the unique complete ordered field, in the sense that any other complete ordered field is isomorphic to it. Intuitively, completeness means that there are no 'gaps' (or 'holes') in the real numbers. This property distinguishes the real numbers from other ordered fields (e.g., the rational numbers ℚ) and is critical to the proof of several key properties of functions of the real numbers. The completeness of the reals is often conveniently expressed as the least upper bound property (see below).
Order properties of the real numbers
The real numbers have various lattice-theoretic properties that are absent in the complex numbers. Also, the real numbers form an ordered field, in which sums and products of positive numbers are also positive. Moreover, the ordering of the real numbers is total, and the real numbers have the least upper bound property: Every nonempty subset of ℝ that has an upper bound has a least upper bound that is also a real number. These order-theoretic properties lead to a number of fundamental results in real analysis, such as the monotone convergence theorem, the intermediate value theorem and the mean value theorem.
However, while the results in real analysis are stated for real numbers, many of these results can be generalized to other mathematical objects. In particular, many ideas in functional analysis and operator theory generalize properties of the real numbers – such generalizations include the theories of Riesz spaces and positive operators. Mathematicians also apply these ideas by considering real and imaginary parts of complex sequences, or by pointwise evaluation of operator sequences.
Topological properties of the real numbers
Many of the theorems of real analysis are consequences of the topological properties of the real number line. The order properties of the real numbers described above are closely related to these topological properties. As a topological space, the real numbers have a standard topology, which is the order topology induced by the order ≤. Alternatively, by defining the metric or distance function d(x, y) = |x − y| using the absolute value function, the real numbers become the prototypical example of a metric space. The topology induced by the metric d turns out to be identical to the standard topology induced by the order ≤. Theorems like the intermediate value theorem that are essentially topological in nature can often be proved in the more general setting of metric or topological spaces rather than in ℝ only. Often, such proofs tend to be shorter or simpler compared to classical proofs that apply direct methods.
Sequences
A sequence is a function whose domain is a countable, totally ordered set. The domain is usually taken to be the natural numbers, although it is occasionally convenient to also consider bidirectional sequences indexed by the set of all integers, including negative indices.
Of interest in real analysis, a real-valued sequence, here indexed by the natural numbers, is a map a : ℕ → ℝ, n ↦ a_n. Each a_n is referred to as a term (or, less commonly, an element) of the sequence. A sequence is rarely denoted explicitly as a function; instead, by convention, it is almost always notated as if it were an ordered ∞-tuple, with individual terms or a general term enclosed in parentheses: (a_n) = (a_1, a_2, a_3, ...).
A sequence (a_n) that tends to a limit (i.e., lim_{n→∞} a_n exists) is said to be convergent; otherwise it is divergent. (See the section on limits and convergence for details.) A real-valued sequence (a_n) is bounded if there exists M > 0 such that |a_n| ≤ M for all n ∈ ℕ. A real-valued sequence (a_n) is monotonically increasing or decreasing if
a_n ≤ a_{n+1} for all n ∈ ℕ, or
a_n ≥ a_{n+1} for all n ∈ ℕ
holds, respectively. If either holds, the sequence is said to be monotonic. The monotonicity is strict if the chained inequalities still hold with ≤ or ≥ replaced by < or >.
Given a sequence (a_n), another sequence (b_k) is a subsequence of (a_n) if b_k = a_{n_k} for all positive integers k, where (n_k) is a strictly increasing sequence of natural numbers.
Limits and convergence
Roughly speaking, a limit is the value that a function or a sequence "approaches" as the input or index approaches some value. (This value can include the symbols ±∞ when addressing the behavior of a function or sequence as the variable increases or decreases without bound.) The idea of a limit is fundamental to calculus (and mathematical analysis in general) and its formal definition is used in turn to define notions like continuity, derivatives, and integrals. (In fact, the study of limiting behavior has been used as a characteristic that distinguishes calculus and mathematical analysis from other branches of mathematics.)
The concept of limit was informally introduced for functions by Newton and Leibniz, at the end of the 17th century, for building infinitesimal calculus. For sequences, the concept was introduced by Cauchy, and made rigorous, at the end of the 19th century by Bolzano and Weierstrass, who gave the modern ε-δ definition, which follows.
Definition. Let f be a real-valued function defined on E ⊆ ℝ. We say that f(x) tends to L as x approaches x₀, or that the limit of f(x) as x approaches x₀ is L, if, for any ε > 0, there exists δ > 0 such that for all x ∈ E, 0 < |x − x₀| < δ implies that |f(x) − L| < ε. We write this symbolically as
f(x) → L as x → x₀, or as lim_{x→x₀} f(x) = L.
Intuitively, this definition can be thought of in the following way: We say that f(x) → L as x → x₀ when, given any positive number ε, no matter how small, we can always find a δ such that we can guarantee that f(x) and L are less than ε apart, as long as x (in the domain of f) is a real number that is less than δ away from x₀ but distinct from x₀. The purpose of the last stipulation, which corresponds to the condition 0 < |x − x₀| in the definition, is to ensure that lim_{x→x₀} f(x) = L does not imply anything about the value of f(x₀) itself. Actually, x₀ does not even need to be in the domain of f in order for lim_{x→x₀} f(x) to exist.
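As a concrete illustration of the definition (an added example, not part of the original text), consider the affine function f(x) = 3x + 1 near x₀ = 2:

\lim_{x \to 2} (3x + 1) = 7, \quad \text{since for any } \varepsilon > 0, \text{ taking } \delta = \varepsilon/3 \text{ gives } 0 < |x - 2| < \delta \implies |(3x + 1) - 7| = 3|x - 2| < \varepsilon.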
In a slightly different but related context, the concept of a limit applies to the behavior of a sequence (a_n) when n becomes large.
Definition. Let (a_n) be a real-valued sequence. We say that (a_n) converges to a if, for any ε > 0, there exists a natural number N such that n ≥ N implies that |a_n − a| < ε. We write this symbolically as
a_n → a as n → ∞, or as lim_{n→∞} a_n = a;
if (a_n) fails to converge, we say that (a_n) diverges.
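For instance (an added worked example), the sequence a_n = 1/n converges to 0:

\left| \tfrac{1}{n} - 0 \right| < \varepsilon \quad \text{whenever } n \ge N := \lfloor 1/\varepsilon \rfloor + 1, \text{ since then } n > 1/\varepsilon.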
Generalizing to a real-valued function of a real variable, a slight modification of this definition (replacement of sequence (a_n) and term a_n by function f and value f(x), and of the natural numbers N and n by real numbers M and x, respectively) yields the definition of the limit of f(x) as x increases without bound, notated lim_{x→∞} f(x). Reversing the inequality x ≥ M to x ≤ M gives the corresponding definition of the limit of f(x) as x decreases without bound, lim_{x→−∞} f(x).
Sometimes, it is useful to conclude that a sequence converges, even though the value to which it converges is unknown or irrelevant. In these cases, the concept of a Cauchy sequence is useful.
Definition. Let (a_n) be a real-valued sequence. We say that (a_n) is a Cauchy sequence if, for any ε > 0, there exists a natural number N such that m, n ≥ N implies that |a_m − a_n| < ε.
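For example (added for illustration), the partial sums s_n = \sum_{k=1}^{n} 2^{-k} form a Cauchy sequence:

|s_m - s_n| = \sum_{k=n+1}^{m} 2^{-k} < 2^{-n} \le 2^{-N} \le \varepsilon \quad \text{for all } m > n \ge N, \text{ where } N \text{ is chosen so that } 2^{-N} \le \varepsilon.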
It can be shown that a real-valued sequence is Cauchy if and only if it is convergent. This property of the real numbers is expressed by saying that the real numbers endowed with the standard metric, d(x, y) = |x − y|, form a complete metric space. In a general metric space, however, a Cauchy sequence need not converge.
In addition, for real-valued sequences that are monotonic, it can be shown that the sequence is bounded if and only if it is convergent.
Uniform and pointwise convergence for sequences of functions
In addition to sequences of numbers, one may also speak of sequences of functions on E ⊆ ℝ, that is, infinite, ordered families of functions f_n : E → ℝ, denoted (f_n)_{n=1}^∞, and their convergence properties. However, in the case of sequences of functions, there are two kinds of convergence, known as pointwise convergence and uniform convergence, that need to be distinguished.
Roughly speaking, pointwise convergence of functions f_n to a limiting function f, denoted f_n → f, simply means that given any x ∈ E, f_n(x) → f(x) as n → ∞. In contrast, uniform convergence is a stronger type of convergence, in the sense that a uniformly convergent sequence of functions also converges pointwise, but not conversely. Uniform convergence requires members of the family of functions, f_n, to fall within some error ε of f for every value of x ∈ E, whenever n ≥ N, for some integer N. For a family of functions to uniformly converge, sometimes denoted f_n ⇉ f, such a value of N must exist for any ε > 0 given, no matter how small. Intuitively, we can visualize this situation by imagining that, for a large enough N, the functions f_N, f_{N+1}, f_{N+2}, ... are all confined within a 'tube' of width 2ε about f (that is, between f − ε and f + ε) for every value x in their domain E.
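A standard example separating the two notions (added here for illustration) is f_n(x) = x^n on [0, 1]:

f_n(x) = x^n \to f(x) = \begin{cases} 0, & 0 \le x < 1 \\ 1, & x = 1 \end{cases} \quad \text{pointwise, but } \sup_{x \in [0,1]} |f_n(x) - f(x)| = 1 \not\to 0,

so the convergence is not uniform; the limit f is discontinuous even though every f_n is continuous.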
The distinction between pointwise and uniform convergence is important when exchanging the order of two limiting operations (e.g., taking a limit, a derivative, or integral) is desired: in order for the exchange to be well-behaved, many theorems of real analysis call for uniform convergence. For example, a sequence of continuous functions (see below) is guaranteed to converge to a continuous limiting function if the convergence is uniform, while the limiting function may not be continuous if convergence is only pointwise. Karl Weierstrass is generally credited for clearly defining the concept of uniform convergence and fully investigating its implications.
Compactness
Compactness is a concept from general topology that plays an important role in many of the theorems of real analysis. The property of compactness is a generalization of the notion of a set being closed and bounded. (In the context of real analysis, these notions are equivalent: a set in Euclidean space is compact if and only if it is closed and bounded.) Briefly, a closed set contains all of its boundary points, while a set is bounded if there exists a real number such that the distance between any two points of the set is less than that number. In ℝ, sets that are closed and bounded, and therefore compact, include the empty set, any finite number of points, closed intervals, and their finite unions. However, this list is not exhaustive; for instance, the set {1/n : n ∈ ℕ} ∪ {0} is a compact set; the Cantor ternary set is another example of a compact set. On the other hand, the set {1/n : n ∈ ℕ} is not compact because it is bounded but not closed, as the boundary point 0 is not a member of the set. The set [0, ∞) is also not compact because it is closed but not bounded.
For subsets of the real numbers, there are several equivalent definitions of compactness.
Definition. A set E ⊆ ℝ is compact if it is closed and bounded.
This definition also holds for Euclidean space of any finite dimension, ℝⁿ, but it is not valid for metric spaces in general. The equivalence of the definition with the definition of compactness based on subcovers, given later in this section, is known as the Heine-Borel theorem.
A more general definition that applies to all metric spaces uses the notion of a subsequence (see above).
Definition. A set E in a metric space is compact if every sequence in E has a subsequence that converges to a point of E.
This particular property is known as subsequential compactness. In ℝ, a set is subsequentially compact if and only if it is closed and bounded, making this definition equivalent to the one given above. Subsequential compactness is equivalent to the definition of compactness based on subcovers for metric spaces, but not for topological spaces in general.
The most general definition of compactness relies on the notion of open covers and subcovers, which is applicable to topological spaces (and thus to metric spaces and ℝ as special cases). In brief, a collection of open sets U_α is said to be an open cover of a set X if the union of these sets is a superset of X. This open cover is said to have a finite subcover if a finite subcollection of the U_α could be found that also covers X.
Definition. A set X in a topological space is compact if every open cover of X has a finite subcover.
Compact sets are well-behaved with respect to properties like convergence and continuity. For instance, any Cauchy sequence in a compact metric space is convergent. As another example, the image of a compact metric space under a continuous map is also compact.
Continuity
A function from the set of real numbers to the real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve with no "holes" or "jumps".
There are several ways to make this intuition mathematically rigorous. Several definitions of varying levels of generality can be given. In cases where two or more definitions are applicable, they are readily shown to be equivalent to one another, so the most convenient definition can be used to determine whether a given function f is continuous or not. In the first definition given below, f : I → ℝ is a function defined on a non-degenerate interval I of the set of real numbers as its domain. Some possibilities include I = ℝ, the whole set of real numbers, an open interval I = (a, b), or a closed interval I = [a, b]. Here, a and b are distinct real numbers, and we exclude the case of I being empty or consisting of only one point, in particular.
Definition. If I ⊆ ℝ is a non-degenerate interval, we say that f : I → ℝ is continuous at p ∈ I if lim_{x→p} f(x) = f(p). We say that f is a continuous map if f is continuous at every p ∈ I.
In contrast to the requirements for f to have a limit at a point p, which do not constrain the behavior of f at p itself, the following two conditions, in addition to the existence of lim_{x→p} f(x), must also hold in order for f to be continuous at p: (i) f must be defined at p, i.e., p is in the domain of f; and (ii) f(x) → f(p) as x → p. The definition above actually applies to any domain E that does not contain an isolated point, or equivalently, where every p ∈ E is a limit point of E. A more general definition applying to f : X → ℝ with a general domain X ⊆ ℝ is the following:
Definition. If X is an arbitrary subset of ℝ, we say that f : X → ℝ is continuous at p ∈ X if, for any ε > 0, there exists δ > 0 such that for all x ∈ X, |x − p| < δ implies that |f(x) − f(p)| < ε. We say that f is a continuous map if f is continuous at every p ∈ X.
A consequence of this definition is that f is trivially continuous at any isolated point p ∈ X. This somewhat unintuitive treatment of isolated points is necessary to ensure that our definition of continuity for functions on the real line is consistent with the most general definition of continuity for maps between topological spaces (which includes metric spaces and ℝ in particular as special cases). This definition, which extends beyond the scope of our discussion of real analysis, is given below for completeness.
Definition. If X and Y are topological spaces, we say that f : X → Y is continuous at p ∈ X if f⁻¹(V) is a neighborhood of p in X for every neighborhood V of f(p) in Y. We say that f is a continuous map if f⁻¹(U) is open in X for every open U in Y.
(Here, f⁻¹(S) refers to the preimage of S under f.)
Uniform continuity
Definition. If X is a subset of the real numbers, we say a function f : X → ℝ is uniformly continuous on X if, for any ε > 0, there exists a δ > 0 such that for all x, y ∈ X, |x − y| < δ implies that |f(x) − f(y)| < ε.
Explicitly, when a function is uniformly continuous on , the choice of needed to fulfill the definition must work for all of for a given . In contrast, when a function is continuous at every point (or said to be continuous on ), the choice of may depend on both and . In contrast to simple continuity, uniform continuity is a property of a function that only makes sense with a specified domain; to speak of uniform continuity at a single point is meaningless.
On a compact set, it is easily shown that all continuous functions are uniformly continuous. If is a bounded noncompact subset of , then there exists that is continuous but not uniformly continuous. As a simple example, consider defined by . By choosing points close to 0, we can always make for any single choice of , for a given .
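The failure of uniform continuity in this example can be made quantitative. The sketch below is illustrative; the closed-form expression it uses follows from a short computation and is an assumption of the sketch, not a statement from the source. It prints, for f(x) = 1/x, the largest δ that works at each point p for a fixed ε, and that δ shrinks to 0 as p does, so no single δ serves the whole interval (0, 1).

def largest_delta(p, eps):
    # For f(x) = 1/x, the binding constraint comes from x slightly below p:
    # requiring delta / ((p - delta) * p) <= eps gives delta = eps * p**2 / (1 + eps * p).
    return eps * p * p / (1 + eps * p)

eps = 0.1
for p in [0.5, 0.1, 0.01, 0.001]:
    print(p, largest_delta(p, eps))  # the workable delta shrinks toward 0 with p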
Absolute continuity
Definition. Let $I$ be an interval on the real line. A function $f: I \to \mathbb{R}$ is said to be absolutely continuous on $I$ if for every positive number $\varepsilon$, there is a positive number $\delta$ such that whenever a finite sequence of pairwise disjoint sub-intervals $(x_1, y_1), \ldots, (x_n, y_n)$ of $I$ satisfies
$$\sum_{k=1}^{n} (y_k - x_k) < \delta$$
then
$$\sum_{k=1}^{n} |f(y_k) - f(x_k)| < \varepsilon.$$
Absolutely continuous functions are continuous: consider the case n = 1 in this definition. The collection of all absolutely continuous functions on I is denoted AC(I). Absolute continuity is a fundamental concept in the Lebesgue theory of integration, allowing the formulation of a generalized version of the fundamental theorem of calculus that applies to the Lebesgue integral.
Differentiation
The notion of the derivative of a function or differentiability originates from the concept of approximating a function near a given point using the "best" linear approximation. This approximation, if it exists, is unique and is given by the line that is tangent to the function at the given point $a$, and the slope of the line is the derivative of the function at $a$.
A function $f$ is differentiable at $a$ if the limit
$$f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}$$
exists. This limit is known as the derivative of $f$ at $a$, and the function $f'$, possibly defined on only a subset of $\mathbb{R}$, is the derivative (or derivative function) of $f$. If the derivative exists everywhere, the function is said to be differentiable.
As a simple consequence of the definition, $f$ is continuous at $a$ if it is differentiable there. Differentiability is therefore a stronger regularity condition (condition describing the "smoothness" of a function) than continuity, and it is possible for a function to be continuous on the entire real line but not differentiable anywhere (see Weierstrass's nowhere differentiable continuous function). It is possible to discuss the existence of higher-order derivatives as well, by finding the derivative of a derivative function, and so on.
One can classify functions by their differentiability class. The class $C^0$ (sometimes $C^0(I)$ to indicate the interval of applicability) consists of all continuous functions. The class $C^1$ consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a $C^1$ function is exactly a function whose derivative exists and is of class $C^0$. In general, the classes $C^k$ can be defined recursively by declaring $C^0$ to be the set of all continuous functions and declaring $C^k$ for any positive integer $k$ to be the set of all differentiable functions whose derivative is in $C^{k-1}$. In particular, $C^k$ is contained in $C^{k-1}$ for every $k$, and there are examples to show that this containment is strict. Class $C^\infty$ is the intersection of the sets $C^k$ as $k$ varies over the non-negative integers, and the members of this class are known as the smooth functions. Class $C^\omega$ consists of all analytic functions, and is strictly contained in $C^\infty$ (see bump function for a smooth function that is not analytic).
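A brief numerical sketch (illustrative only; the function choices are arbitrary) of the difference quotient from the definition above: for a differentiable function the quotients settle toward the derivative as h shrinks, while for |x| at 0 the two one-sided quotients disagree, so no derivative exists there.

def diff_quotient(f, a, h):
    # (f(a + h) - f(a)) / h, the quotient whose limit as h -> 0 defines f'(a)
    return (f(a + h) - f(a)) / h

cube = lambda x: x ** 3
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h, diff_quotient(cube, 2.0, h))   # approaches f'(2) = 12

for h in [1e-3, -1e-3]:
    print(h, diff_quotient(abs, 0.0, h))    # +1 versus -1: |x| is not differentiable at 0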
Series
A series formalizes the imprecise notion of taking the sum of an endless sequence of numbers. The idea that taking the sum of an "infinite" number of terms can lead to a finite result was counterintuitive to the ancient Greeks and led to the formulation of a number of paradoxes by Zeno and other philosophers. The modern notion of assigning a value to a series avoids dealing with the ill-defined notion of adding an "infinite" number of terms. Instead, the finite sum of the first $n$ terms of the sequence, known as a partial sum, is considered, and the concept of a limit is applied to the sequence of partial sums as $n$ grows without bound. The series is assigned the value of this limit, if it exists.
Given an (infinite) sequence $(a_n)$, we can define an associated series as the formal mathematical object $a_1 + a_2 + a_3 + \cdots$, sometimes simply written as $\sum a_n$. The partial sums of a series $\sum a_n$ are the numbers $s_n = a_1 + a_2 + \cdots + a_n$. A series $\sum a_n$ is said to be convergent if the sequence consisting of its partial sums, $(s_n)$, is convergent; otherwise it is divergent. The sum of a convergent series is defined as the number $s = \lim_{n \to \infty} s_n$.
The word "sum" is used here in a metaphorical sense as a shorthand for taking the limit of a sequence of partial sums and should not be interpreted as simply "adding" an infinite number of terms. For instance, in contrast to the behavior of finite sums, rearranging the terms of an infinite series may result in convergence to a different number (see the article on the Riemann rearrangement theorem for further discussion).
An example of a convergent series is a geometric series which forms the basis of one of Zeno's famous paradoxes:
$$\sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.$$
In contrast, the harmonic series has been known since the Middle Ages to be a divergent series:
$$\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots = \infty.$$
(Here, "$= \infty$" is merely a notational convention to indicate that the partial sums of the series grow without bound.)
A series $\sum a_n$ is said to converge absolutely if $\sum |a_n|$ is convergent. A convergent series $\sum a_n$ for which $\sum |a_n|$ diverges is said to converge non-absolutely. It is easily shown that absolute convergence of a series implies its convergence. On the other hand, an example of a series that converges non-absolutely is the alternating harmonic series
$$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2.$$
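These examples can be checked numerically. The sketch below is illustrative; the helper name partial_sum and the truncation points are arbitrary choices.

import math

def partial_sum(term, n):
    # s_n = term(1) + term(2) + ... + term(n)
    s = 0.0
    for k in range(1, n + 1):
        s += term(k)
    return s

print(partial_sum(lambda k: 0.5 ** k, 50))                              # geometric series: tends to 1
print(partial_sum(lambda k: 1.0 / k, 10**6), math.log(10**6))           # harmonic series: grows roughly like ln n
print(partial_sum(lambda k: (-1) ** (k - 1) / k, 10**6), math.log(2))   # alternating harmonic: tends to ln 2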
Taylor series
The Taylor series of a real or complex-valued function f(x) that is infinitely differentiable at a real or complex number a is the power series
$$f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \frac{f'''(a)}{3!}(x - a)^3 + \cdots,$$
which can be written in the more compact sigma notation as
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n,$$
where n! denotes the factorial of n and $f^{(n)}(a)$ denotes the nth derivative of f evaluated at the point a. The derivative of order zero, $f^{(0)}$, is defined to be f itself, and $(x - a)^0$ and 0! are both defined to be 1. In the case that $a = 0$, the series is also called a Maclaurin series.
A Taylor series of f about point a may diverge, converge at only the point a, converge for all x such that $|x - a| < R$ (the largest such R for which convergence is guaranteed is called the radius of convergence), or converge on the entire real line. Even a converging Taylor series may converge to a value different from the value of the function at that point. If the Taylor series at a point has a nonzero radius of convergence, and sums to the function in the disc of convergence, then the function is analytic. The analytic functions have many fundamental properties. In particular, an analytic function of a real variable extends naturally to a function of a complex variable. It is in this way that the exponential function, the logarithm, the trigonometric functions and their inverses are extended to functions of a complex variable.
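As a hedged illustration of these convergence behaviours (the function choices are made here, not in the source), the sketch below sums truncated Taylor series: the Maclaurin series of the exponential converges everywhere, while the geometric series for 1/(1 − x) converges only inside its radius of convergence, |x| < 1.

import math

def taylor_exp(x, terms):
    # Partial sum of the Maclaurin series of exp: sum of x**n / n! for n < terms
    return sum(x ** n / math.factorial(n) for n in range(terms))

for terms in [2, 5, 10, 20]:
    print(terms, taylor_exp(1.0, terms), math.exp(1.0))   # partial sums approach e

geom = lambda x, terms: sum(x ** n for n in range(terms))
print(geom(0.5, 60), 1 / (1 - 0.5))   # inside the radius of convergence: agrees with 1/(1 - x)
print(geom(2.0, 60))                  # outside the radius: partial sums blow up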
Fourier series
A Fourier series decomposes periodic functions or periodic signals into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines (or complex exponentials). The study of Fourier series typically occurs within the branch of mathematical analysis known as Fourier analysis.
Integration
Integration is a formalization of the problem of finding the area bound by a curve and the related problems of determining the length of a curve or volume enclosed by a surface. The basic strategy to solving problems of this type was known to the ancient Greeks and Chinese, and was known as the method of exhaustion. Generally speaking, the desired area is bounded from above and below, respectively, by increasingly accurate circumscribing and inscribing polygonal approximations whose exact areas can be computed. By considering approximations consisting of a larger and larger ("infinite") number of smaller and smaller ("infinitesimal") pieces, the area bound by the curve can be deduced, as the upper and lower bounds defined by the approximations converge around a common value.
The spirit of this basic strategy can easily be seen in the definition of the Riemann integral, in which the integral is said to exist if upper and lower Riemann (or Darboux) sums converge to a common value as thinner and thinner rectangular slices ("refinements") are considered. Though the machinery used to define it is much more elaborate compared to the Riemann integral, the Lebesgue integral was defined with similar basic ideas in mind. Compared to the Riemann integral, the more sophisticated Lebesgue integral allows area (or length, volume, etc.; termed a "measure" in general) to be defined and computed for much more complicated and irregular subsets of Euclidean space, although there still exist "non-measurable" subsets for which an area cannot be assigned.
Riemann integration
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let $[a, b]$ be a closed interval of the real line; then a tagged partition of $[a, b]$ is a finite sequence
$$a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b.$$
This partitions the interval $[a, b]$ into $n$ sub-intervals $[x_{i-1}, x_i]$ indexed by $i = 1, \ldots, n$, each of which is "tagged" with a distinguished point $t_i \in [x_{i-1}, x_i]$. For a function $f$ bounded on $[a, b]$, we define the Riemann sum of $f$ with respect to the tagged partition as
$$\sum_{i=1}^{n} f(t_i)\, \Delta_i,$$
where $\Delta_i = x_i - x_{i-1}$ is the width of sub-interval $i$. Thus, each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, $\max_{i} \Delta_i$. We say that the Riemann integral of $f$ on $[a, b]$ is $S$ if for any $\varepsilon > 0$ there exists $\delta > 0$ such that, for any tagged partition with mesh less than $\delta$, we have
$$\left| S - \sum_{i=1}^{n} f(t_i)\, \Delta_i \right| < \varepsilon.$$
This is sometimes denoted . When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum is known as the upper (respectively, lower) Darboux sum. A function is Darboux integrable if the upper and lower Darboux sums can be made to be arbitrarily close to each other for a sufficiently small mesh. Although this definition gives the Darboux integral the appearance of being a special case of the Riemann integral, they are, in fact, equivalent, in the sense that a function is Darboux integrable if and only if it is Riemann integrable, and the values of the integrals are equal. In fact, calculus and real analysis textbooks often conflate the two, introducing the definition of the Darboux integral as that of the Riemann integral, due to the slightly easier to apply definition of the former.
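A short computational sketch of this definition (illustrative; the integrand and partition sizes are arbitrary choices) evaluates Riemann sums for f(x) = x² over uniform partitions of [0, 1]. Because f is increasing there, tagging each sub-interval at its left or right endpoint yields the lower and upper Darboux sums, and both approach the exact value 1/3 as the mesh shrinks.

def riemann_sum(f, a, b, n, tag):
    # Riemann sum over a uniform partition of [a, b] into n sub-intervals;
    # 'tag' chooses the distinguished point inside each sub-interval [x0, x1].
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        x0 = a + i * width
        total += f(tag(x0, x0 + width)) * width
    return total

square = lambda x: x * x
for n in [10, 100, 1000, 10000]:
    lower = riemann_sum(square, 0.0, 1.0, n, lambda x0, x1: x0)  # lower Darboux sum
    upper = riemann_sum(square, 0.0, 1.0, n, lambda x0, x1: x1)  # upper Darboux sum
    print(n, lower, upper)   # both squeeze toward 1/3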
The fundamental theorem of calculus asserts that integration and differentiation are inverse operations in a certain sense.
Lebesgue integration and measure
Lebesgue integration is a mathematical construction that extends the integral to a larger class of functions; it also extends the domains on which these functions can be defined. The concept of a measure, an abstraction of length, area, or volume, is central to the Lebesgue integral and to probability theory.
Distributions
Distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.
Relation to complex analysis
Real analysis is an area of analysis that studies concepts such as sequences and their limits, continuity, differentiation, integration and sequences of functions. By definition, real analysis focuses on the real numbers, often including positive and negative infinity to form the extended real line. Real analysis is closely related to complex analysis, which studies broadly the same properties of complex numbers. In complex analysis, it is natural to define differentiation via holomorphic functions, which have a number of useful properties, such as repeated differentiability, expressibility as power series, and satisfying the Cauchy integral formula.
In real analysis, it is usually more natural to consider differentiable, smooth, or harmonic functions, which are more widely applicable, but may lack some more powerful properties of holomorphic functions. However, results such as the fundamental theorem of algebra are simpler when expressed in terms of complex numbers.
Techniques from the theory of analytic functions of a complex variable are often used in real analysis – such as evaluation of real integrals by residue calculus.
Important results
Important results include the Bolzano–Weierstrass and Heine–Borel theorems, the intermediate value theorem and mean value theorem, Taylor's theorem, the fundamental theorem of calculus, the Arzelà–Ascoli theorem, the Stone–Weierstrass theorem, Fatou's lemma, and the monotone convergence and dominated convergence theorems.
Generalizations and related areas of mathematics
Various ideas from real analysis can be generalized from the real line to broader or more abstract contexts. These generalizations link real analysis to other disciplines and subdisciplines. For instance, generalization of ideas like continuous functions and compactness from real analysis to metric spaces and topological spaces connects real analysis to the field of general topology, while generalization of finite-dimensional Euclidean spaces to infinite-dimensional analogs led to the concepts of Banach spaces and Hilbert spaces and, more generally, to functional analysis. Georg Cantor's investigation of sets and sequences of real numbers, mappings between them, and the foundational issues of real analysis gave birth to naive set theory. The study of issues of convergence for sequences of functions eventually gave rise to Fourier analysis as a subdiscipline of mathematical analysis. Investigation of the consequences of generalizing differentiability from functions of a real variable to ones of a complex variable gave rise to the concept of holomorphic functions and the inception of complex analysis as another distinct subdiscipline of analysis. On the other hand, the generalization of integration from the Riemann sense to that of Lebesgue led to the formulation of the concept of abstract measure spaces, a fundamental concept in measure theory. Finally, the generalization of integration from the real line to curves and surfaces in higher dimensional space brought about the study of vector calculus, whose further generalization and formalization played an important role in the evolution of the concepts of differential forms and smooth (differentiable) manifolds in differential geometry and other closely related areas of geometry and topology.
| Mathematics | Calculus and analysis | null |
26537 | https://en.wikipedia.org/wiki/Rose | Rose | A rose is either a woody perennial flowering plant of the genus Rosa (), in the family Rosaceae (), or the flower it bears. There are over three hundred species and tens of thousands of cultivars. They form a group of plants that can be erect shrubs, climbing, or trailing, with stems that are often armed with sharp prickles. Their flowers vary in size and shape and are usually large and showy, in colours ranging from white through yellows and reds. Most species are native to Asia, with smaller numbers native to Europe, North America, and Northwest Africa. Species, cultivars and hybrids are all widely grown for their beauty and often are fragrant. Roses have acquired cultural significance in many societies. Rose plants range in size from compact, miniature roses to climbers that can reach seven meters in height. Different species hybridize easily, and this has been used in the development of the wide range of garden roses.
Etymology
The name rose comes from Latin rosa, which was perhaps borrowed from Oscan, from Greek ῥόδον rhódon (Aeolic βρόδον wródon), itself borrowed from Old Persian wrd- (wurdi), related to Avestan varəδa, Sogdian ward, Parthian wâr.
Botany
The leaves are borne alternately on the stem. In most species, they are long, pinnate, with (3–) 5–9 (−13) leaflets and basal stipules; the leaflets usually have a serrated margin, and often a few small prickles on the underside of the stem. Most roses are deciduous but a few (particularly from Southeast Asia) are evergreen or nearly so.
The flowers of most species have five petals, with the exception of Rosa omeiensis and Rosa sericea, which usually have only four. Each petal is divided into two distinct lobes and is usually white or pink, though in a few species yellow or red. Beneath the petals are five sepals (or in the case of some Rosa omeiensis and Rosa sericea, four). These may be long enough to be visible when viewed from above and appear as green points alternating with the rounded petals. There are multiple superior ovaries that develop into achenes. Roses are insect-pollinated in nature.
The aggregate fruit of the rose is a berry-like structure called a rose hip. Many of the domestic cultivars do not produce hips, as the flowers are so tightly petalled that they do not provide access for pollination. The hips of most species are red, but a few (e.g. Rosa pimpinellifolia) have dark purple to black hips. Each hip comprises an outer fleshy layer, the hypanthium, which contains 5–160 "seeds" (technically dry single-seeded fruits called achenes) embedded in a matrix of fine, but stiff, hairs. Rose hips of some species, especially the dog rose (Rosa canina) and rugosa rose (R. rugosa), are very rich in vitamin C, among the richest sources of any plant. The hips are eaten by fruit-eating birds such as thrushes and waxwings, which then disperse the seeds in their droppings.
The sharp growths along a rose stem, though commonly called "thorns", are technically prickles, outgrowths of the epidermis (the outer layer of tissue of the stem), unlike true thorns, which are modified stems. Rose prickles are typically sickle-shaped hooks, which aid the rose in hanging onto other vegetation when growing over it. Some species such as Rosa rugosa and R. pimpinellifolia have densely packed straight prickles, probably an adaptation to reduce browsing by animals, but also possibly an adaptation to trap wind-blown sand and so reduce erosion and protect their roots (both of these species grow naturally on coastal sand dunes). Despite the presence of prickles, roses are frequently browsed by deer. A few species of roses have only vestigial prickles that have no points.
Plant geneticist Zachary Lippman of Cold Spring Harbor Laboratory found that prickles are controlled by the LOG gene. Blocking the LOG gene in roses reduced the thorns (large prickles) into tiny buds.
Evolution
The oldest remains of roses are from the Late Eocene Florissant Formation of Colorado. Roses were present in Europe by the early Oligocene.
Today's garden roses come from 18th-century China. Among the old Chinese garden roses, the Old Blush group is the most primitive, while newer groups are the most diverse.
Genome
A study of the patterns of natural selection in the genome of roses indicated that genes related to DNA damage repair and stress adaptation have been positively selected, likely during their domestication. This rapid evolution may reflect an adaptation to genome confliction resulting from frequent intra- and inter-species hybridization and switching environmental conditions of growth.
Species
The genus Rosa is composed of 140–180 species and divided into four subgenera:
Hulthemia (formerly Simplicifoliae, meaning "with single leaves") containing two species from Southwest Asia, Rosa persica and Rosa berberifolia, which are the only roses without compound leaves or stipules.
Hesperrhodos (from the Greek for "western rose") contains Rosa minutifolia and Rosa stellata, from North America.
Platyrhodon (from the Greek for "flaky rose", referring to flaky bark) with one species from east Asia, Rosa roxburghii (also known as the chestnut rose).
Rosa (the type subgenus, sometimes incorrectly called Eurosa) containing all the other roses. This subgenus is subdivided into 11 sections.
Banksianae – white and yellow flowered roses from China.
Bracteatae – three species, two from China and one from India.
Caninae – pink and white flowered species from Asia, Europe and North Africa.
Carolinae – white, pink, and bright pink flowered species all from North America.
Chinensis – white, pink, yellow, red and mixed-colour roses from China and Burma.
Gallicanae – pink to crimson and striped flowered roses from western Asia and Europe.
Gymnocarpae – one species in western North America (Rosa gymnocarpa), others in east Asia.
Laevigatae – a single white flowered species from China.
Pimpinellifoliae – white, pink, bright yellow, mauve and striped roses from Asia and Europe.
Rosa (syn. sect. Cinnamomeae) – white, pink, lilac, mulberry and red roses from everywhere but North Africa.
Synstylae – white, pink, and crimson flowered roses from all areas.
Ecology
Some birds, particularly finches, eat the seeds.
Pests and diseases
Wild roses are host plants for a number of pests and diseases. Many of these affect other plants, including other genera of the Rosaceae.
Cultivated roses are often subject to severe damage from insect, arachnid and fungal pests and diseases. In many cases they cannot be usefully grown without regular treatment to control these problems.
Uses
Roses are best known as ornamental plants grown for their flowers in the garden and sometimes indoors. They have also been used for commercial perfumery and commercial cut flower crops. Some are used as landscape plants, for hedging and for other utilitarian purposes such as game cover and slope stabilization.
Ornamental plants
The majority of ornamental roses are hybrids that were bred for their flowers. A few, mostly species roses are grown for attractive or scented foliage (such as Rosa glauca and R. rubiginosa), ornamental thorns (such as R. sericea) or for their showy fruit (such as R. moyesii).
Ornamental roses have been cultivated for millennia, with the earliest known cultivation dating from at least 500 BC in Mediterranean countries, Persia, and China. It is estimated that 30 to 35 thousand rose hybrids and cultivars have been bred and selected for garden use as flowering plants. Most are double-flowered with many or all of the stamens having morphed into additional petals.
In the early 19th century the Empress Josephine of France patronized the development of rose breeding at her gardens at Malmaison. As early as 1840, a collection numbering over one thousand different cultivars, varieties and species had been assembled, when a rosarium was planted by Loddiges nursery for Abney Park Cemetery, an early Victorian garden cemetery and arboretum in England.
Cut flowers
Roses are a popular crop for both domestic and commercial cut flowers. Generally they are harvested and cut when in bud, and held in refrigerated conditions until ready for display at their point of sale.
In temperate climates, cut roses are often grown in greenhouses, and in warmer countries they may also be grown under cover in order to ensure that the flowers are not damaged by weather and that pest and disease control can be carried out effectively. Significant quantities are grown in some tropical countries, and these are shipped by air to markets across the world.
Some kinds of roses are artificially coloured using dyed water, as with rainbow roses.
Perfume
Rose perfumes are made from rose oil (also called attar of roses), which is a mixture of volatile essential oils obtained by steam distilling the crushed petals of roses. An associated product is rose water which is used for cooking, cosmetics, medicine and religious practices. The production technique originated in Persia and then spread through Arabia and India, and more recently into eastern Europe. In Bulgaria, Iran and Germany, damask roses (Rosa × damascena 'Trigintipetala') are used. In other parts of the world Rosa × centifolia is commonly used. The oil is transparent pale yellow or yellow-grey in colour. 'Rose Absolute' is solvent-extracted with hexane and produces a darker oil, dark yellow to orange in colour. The weight of oil extracted is about one three-thousandth to one six-thousandth of the weight of the flowers; for example, about two thousand flowers are required to produce one gram of oil.
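The stated yield can be sanity-checked with a rough calculation; the sketch below is illustrative only, and the assumed flower mass of about 2 grams is a hypothetical figure, not taken from the source.

flower_mass_g = 2.0                      # assumed average mass of one rose flower (hypothetical)
for ratio in (1 / 3000, 1 / 6000):       # oil yield as a fraction of flower weight, per the text
    flowers_per_gram = 1.0 / (flower_mass_g * ratio)
    print(round(flowers_per_gram))       # roughly 1500 to 3000 flowers per gram of oil

That range is consistent with the figure of about two thousand flowers per gram quoted above.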
The main constituents of attar of roses are the fragrant alcohols geraniol and L-citronellol and rose camphor, an odorless solid composed of alkanes, which separates from rose oil. β-Damascenone is also a significant contributor to the scent.
Food and drink
Rose hips are high in vitamin C, are, after the removal of the irritant hairs, edible raw, and occasionally are made into jam, jelly, marmalade, and soup, or brewed for tea. They are also pressed and filtered to make rose hip syrup. Rose hips are also used to produce rose hip seed oil, which is used in skin products and some makeup products.
Rose water has a very distinctive flavour and is used in Middle Eastern, Persian, and South Asian cuisine—especially in sweets such as Turkish delight, barfi, baklava, halva, gulab jamun, knafeh, and nougat. Rose petals or flower buds are sometimes used to flavour ordinary tea, or combined with other herbs to make herbal teas. A sweet preserve of rose petals called gulkand is common in the Indian subcontinent. The leaves and washed roots are also sometimes used to make tea.
In France, there is much use of rose syrup, most commonly made from an extract of rose petals. In the Indian subcontinent, Rooh Afza, a concentrated squash made with roses, is popular, as are rose-flavoured frozen desserts such as ice cream and kulfi.
The flower stems and young shoots are edible, as are the petals (sans the white or green bases). The latter are usually used as flavouring or to add their scent to food. Other minor uses include candied rose petals.
Rose creams (rose-flavoured fondant covered in chocolate, often topped with a crystallised rose petal) are a traditional English confectionery widely available from numerous producers in the UK.
Under the American Federal Food, Drug, and Cosmetic Act, only certain Rosa species, varieties, and parts are listed as generally recognized as safe (GRAS).
Rose absolute: Rosa alba L., Rosa centifolia L., Rosa damascena Mill., Rosa gallica L., and vars. of these spp.
Rose (otto of roses, attar of roses): Ditto
Rose buds
Rose flowers
Rose fruit (hips)
Rose leaves: Rosa spp.
As a food ingredient
The rose hip, usually from R. canina, is used as a minor source of vitamin C. Diarrhodon (Gr διάρροδον, "compound of roses", from ῥόδων, "of roses") is a name given to various compounds in which red roses are an ingredient.
Art and symbolism
The long cultural history of the rose has led to it being used often as a symbol. In ancient Greece, the rose was closely associated with the goddess Aphrodite. In the Iliad, Aphrodite protects the body of Hector using the "immortal oil of the rose" and the archaic Greek lyric poet Ibycus praises a beautiful youth saying that Aphrodite nursed him "among rose blossoms". The second-century AD Greek travel writer Pausanias associates the rose with the story of Adonis and states that the rose is red because Aphrodite wounded herself on one of its thorns and stained the flower red with her blood. Book Eleven of the ancient Roman novel The Golden Ass by Apuleius contains a scene in which the goddess Isis, who is identified with Venus, instructs the main character, Lucius, who has been transformed into a donkey, to eat rose petals from a crown of roses worn by a priest as part of a religious procession in order to regain his humanity. French writer René Rapin invented a myth in which a beautiful Corinthian queen named Rhodanthe ("she with rose flowers") was besieged inside a temple of Artemis by three ardent suitors who wished to worship her as a goddess; the god Apollo then transformed her into a rosebush.
Following the Christianization of the Roman Empire, the rose became identified with the Virgin Mary. The colour of the rose and the number of roses received has symbolic representation. The rose symbol eventually led to the creation of the rosary and other devotional prayers in Christianity.
Ever since the 1400s, the Franciscans have had a Crown Rosary of the Seven Joys of the Blessed Virgin Mary. In the 1400s and 1500s, the Carthusians promoted the idea of sacred mysteries associated with the rose symbol and rose gardens. Albrecht Dürer's painting The Feast of the Rosary (1506) depicts the Virgin Mary distributing garlands of roses to her devotees.
Roses symbolised the Houses of York and Lancaster in a conflict known as the Wars of the Roses. Subsequently, roses of the corresponding colours have been used as emblems for the English counties of Yorkshire and Lancashire.
The Tudor rose combines the colours of the roses of York and Lancaster, and is an emblem of the Tudor dynasty and of England.
Roses are a favored subject in art and appear in portraits, illustrations, on stamps, as ornaments or as architectural elements. The Luxembourg-born Belgian artist and botanist Pierre-Joseph Redouté is known for his detailed watercolours of flowers, particularly roses.
Henri Fantin-Latour was also a prolific painter of still life, particularly flowers including roses. The rose 'Fantin-Latour' was named after the artist.
Other impressionists including Claude Monet, Paul Cézanne and Pierre-Auguste Renoir have paintings of roses among their works. In the 19th century, for example, artists associated the city of Trieste with a certain rare white rose, and this rose developed as the city's symbol. It was not until 2021 that the rose, which was believed to be extinct, was rediscovered there.
In 1986 President Ronald Reagan signed legislation to make the rose the floral emblem of the United States.
The rose is often exchanged on St. Valentine's Day and is widely used as a symbol of the occasion.
| Biology and health sciences | Rosales | null |
26561 | https://en.wikipedia.org/wiki/Rank%20%28linear%20algebra%29 | Rank (linear algebra) | In linear algebra, the rank of a matrix is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of . This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by . There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
The rank of $A$ is commonly denoted by $\operatorname{rank}(A)$ or $\operatorname{rk}(A)$; sometimes the parentheses are not written, as in $\operatorname{rank} A$.
Main definitions
In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these.
The column rank of $A$ is the dimension of the column space of $A$, while the row rank of $A$ is the dimension of the row space of $A$.
A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in § Proofs that column rank = row rank, below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of $A$.
A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank.
The rank of a linear map or operator $\Phi$ is defined as the dimension of its image: $$\operatorname{rank}(\Phi) = \dim(\operatorname{im}(\Phi)),$$ where $\dim$ is the dimension of a vector space, and $\operatorname{im}$ is the image of a map.
Examples
The matrix
has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third is a linear combination of the first two (the first column plus the second), the three columns are linearly dependent so the rank must be less than 3.
The matrix
has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose
of has rank 1. Indeed, since the column vectors of are the row vectors of the transpose of , the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., .
Computing the rank of a matrix
Rank from row echelon forms
A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows.
For example, the matrix given by
can be put in reduced row-echelon form by using the following elementary row operations:
The final matrix (in reduced row echelon form) has two non-zero rows and thus the rank of matrix is 2.
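The pivot-counting procedure described above can be sketched in a few lines of code; the matrix used here is an illustrative stand-in (the article's own example matrix is not reproduced), and the function name is an arbitrary choice.

def rank_by_elimination(rows, tol=1e-12):
    # Reduce a copy of the matrix to row echelon form and count the pivots found.
    m = [list(map(float, r)) for r in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank, pivot_row = 0, 0
    for col in range(n_cols):
        # find a row at or below pivot_row with a usable entry in this column
        pivot = next((r for r in range(pivot_row, n_rows) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        for r in range(pivot_row + 1, n_rows):
            factor = m[r][col] / m[pivot_row][col]
            for c in range(col, n_cols):
                m[r][c] -= factor * m[pivot_row][c]
        rank += 1
        pivot_row += 1
    return rank

A = [[1, 2, 3],
     [2, 4, 6],   # twice the first row
     [1, 1, 1]]
print(rank_by_elimination(A))   # 2: only two linearly independent rows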
Computation
When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
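A hedged sketch of the tolerance-based approach just described, using NumPy's SVD; the matrix and the particular tolerance formula are illustrative choices (NumPy's matrix_rank applies a similar default threshold).

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])   # third row = first + second, so the exact rank is 2

s = np.linalg.svd(A, compute_uv=False)                # singular values, largest first
tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]     # one common tolerance choice
print(s)                                              # the smallest value is only roundoff noise
print(int(np.sum(s > tol)), np.linalg.matrix_rank(A)) # both report rank 2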
Proofs that column rank = row rank
Proof using row reduction
The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in § Rank from row echelon forms. Here is a variant of this proof:
It is straightforward to show that neither the row rank nor the column rank are changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix are the number of its nonzero entries.
We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005). The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014).
Proof using linear combinations
Let be an matrix. Let the column rank of be , and let be any basis for the column space of . Place these as the columns of an matrix . Every column of can be expressed as a linear combination of the columns in . This means that there is an matrix such that . is the matrix whose th column is formed from the coefficients giving the th column of as a linear combination of the columns of . In other words, is the matrix which contains the multiples for the bases of the column space of (which is ), which are then used to form as a whole. Now, each row of is given by a linear combination of the rows of . Therefore, the rows of form a spanning set of the row space of and, by the Steinitz exchange lemma, the row rank of cannot exceed . This proves that the row rank of is less than or equal to the column rank of . This result can be applied to any matrix, so apply the result to the transpose of . Since the row rank of the transpose of is the column rank of and the column rank of the transpose of is the row rank of , this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of . (Also see Rank factorization.)
Proof using orthogonality
Let be an matrix with entries in the real numbers whose row rank is . Therefore, the dimension of the row space of is . Let be a basis of the row space of . We claim that the vectors are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients :
where . We make two observations: (a) is a linear combination of vectors in the row space of , which implies that belongs to the row space of , and (b) since , the vector is orthogonal to every row vector of and, hence, is orthogonal to every vector in the row space of . The facts (a) and (b) together imply that is orthogonal to itself, which proves that or, by the definition of ,
But recall that the were chosen as a basis of the row space of and so are linearly independent. This implies that . It follows that are linearly independent.
Now, each is obviously a vector in the column space of . So, is a set of linearly independent vectors in the column space of and, hence, the dimension of the column space of (i.e., the column rank of ) must be at least as big as . This proves that row rank of is no larger than the column rank of . Now apply this result to the transpose of to get the reverse inequality and conclude as in the previous proof.
Alternative definitions
In all the definitions in this section, the matrix $A$ is taken to be an $m \times n$ matrix over an arbitrary field $F$.
Dimension of image
Given the matrix $A$, there is an associated linear mapping
$$f : F^n \to F^m$$
defined by
$$f(x) = Ax.$$
The rank of $A$ is the dimension of the image of $f$. This definition has the advantage that it can be applied to any linear map without need for a specific matrix.
Rank in terms of nullity
Given the same linear mapping $f$ as above, the rank is $n$ minus the dimension of the kernel of $f$. The rank–nullity theorem states that this definition is equivalent to the preceding one.
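The theorem can be checked numerically for a particular matrix; the sketch below (illustrative, using NumPy, with an arbitrary sample matrix) reads a kernel basis off the right singular vectors and confirms that rank plus nullity equals the number of columns.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # a 2 x 3 matrix, so the domain has dimension n = 3

n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# Rows of Vt whose singular value is zero (or absent, for a wide matrix) span the kernel.
_, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
mask = np.concatenate([s <= tol, np.ones(n - len(s), dtype=bool)])
kernel_basis = Vt[mask]
print(rank, len(kernel_basis), rank + len(kernel_basis) == n)   # 2 1 True
print(np.allclose(A @ kernel_basis.T, 0))                       # the basis vectors really lie in the kernel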
Column rank – dimension of column space
The rank of is the maximal number of linearly independent columns of ; this is the dimension of the column space of (the column space being the subspace of generated by the columns of , which is in fact just the image of the linear map associated to ).
Row rank – dimension of row space
The rank of is the maximal number of linearly independent rows of ; this is the dimension of the row space of .
Decomposition rank
The rank of is the smallest integer such that can be factored as , where is an matrix and is a matrix. In fact, for all integers , the following are equivalent:
the column rank of is less than or equal to ,
there exist columns of size such that every column of is a linear combination of ,
there exist an matrix and a matrix such that (when is the rank, this is a rank factorization of ),
there exist rows of size such that every row of is a linear combination of ,
the row rank of is less than or equal to .
Indeed, the following equivalences are obvious: .
For example, to prove (3) from (2), take to be the matrix whose columns are from (2).
To prove (2) from (3), take to be the columns of .
It follows from the equivalence that the row rank is equal to the column rank.
As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map is the minimal dimension of an intermediate space such that can be written as the composition of a map and a map . Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details.
Rank in terms of singular values
The rank of $A$ equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in $\Sigma$ in the singular value decomposition $A = U \Sigma V^{*}$.
Determinantal rank – size of largest non-vanishing minor
The rank of is the largest order of any non-zero minor in . (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix.
A non-vanishing -minor ( submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of vectors has dimension , then of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of vectors has dimension , then of these vectors span the space and there is a set of coordinates on which they are linearly independent).
Tensor rank – minimum number of simple tensors
The rank of is the smallest number such that can be written as a sum of rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product of a column vector and a row vector . This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition.
Properties
We assume that is an matrix, and we define the linear map by as above.
The rank of an $m \times n$ matrix is a nonnegative integer and cannot be greater than either $m$ or $n$; that is, $\operatorname{rank}(A) \le \min(m, n)$. A matrix that has rank $\min(m, n)$ is said to have full rank; otherwise, the matrix is rank deficient.
Only a zero matrix has rank zero.
$f$ is injective (or "one-to-one") if and only if $A$ has rank $n$ (in this case, we say that $A$ has full column rank).
$f$ is surjective (or "onto") if and only if $A$ has rank $m$ (in this case, we say that $A$ has full row rank).
If is a square matrix (i.e., ), then is invertible if and only if has rank (that is, has full rank).
If is any matrix, then
If is an matrix of rank , then
If is an matrix of rank , then
The rank of is equal to if and only if there exists an invertible matrix and an invertible matrix such that where denotes the identity matrix.
Sylvester's rank inequality: if $A$ is an $m \times n$ matrix and $B$ is $n \times k$, then $\operatorname{rank}(A) + \operatorname{rank}(B) - n \le \operatorname{rank}(AB)$. This is a special case of the next inequality.
The inequality due to Frobenius: if $AB$, $ABC$ and $BC$ are defined, then $\operatorname{rank}(AB) + \operatorname{rank}(BC) \le \operatorname{rank}(B) + \operatorname{rank}(ABC)$.
Subadditivity: $\operatorname{rank}(A + B) \le \operatorname{rank}(A) + \operatorname{rank}(B)$ when $A$ and $B$ are of the same dimension. As a consequence, a rank-$k$ matrix can be written as the sum of $k$ rank-1 matrices, but not fewer.
The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.)
If $A$ is a matrix over the real numbers then the rank of $A$ and the rank of its corresponding Gram matrix are equal. Thus, for real matrices $$\operatorname{rank}(A^{\mathrm T} A) = \operatorname{rank}(A A^{\mathrm T}) = \operatorname{rank}(A) = \operatorname{rank}(A^{\mathrm T}).$$ This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by vectors $x$ for which $A^{\mathrm T} A x = 0$. If this condition is fulfilled, we also have $0 = x^{\mathrm T} A^{\mathrm T} A x = \left| A x \right|^2$, so $A x = 0$.
If is a matrix over the complex numbers and denotes the complex conjugate of and the conjugate transpose of (i.e., the adjoint of ), then
Applications
One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has $k$ free parameters where $k$ is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions.
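A small sketch of the Rouché–Capelli check (illustrative; the helper name and the sample systems are arbitrary choices) compares the rank of the coefficient matrix with that of the augmented matrix.

import numpy as np

def classify_system(A, b):
    # Compare rank(A) with rank([A | b]) to classify the linear system A x = b.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.hstack([A, b]))
    if r_aug > r:
        return "inconsistent (no solution)"
    if r == A.shape[1]:
        return "unique solution"
    return f"infinitely many solutions ({A.shape[1] - r} free parameter(s))"

A = [[1, 1], [2, 2]]
print(classify_system(A, [1, 3]))                  # inconsistent (no solution)
print(classify_system(A, [1, 2]))                  # infinitely many solutions (1 free parameter(s))
print(classify_system([[1, 0], [0, 1]], [5, 7]))   # unique solution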
In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.
In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function.
Generalization
There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist.
Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices.
There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.
Matrices as tensors
Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details.
The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination, and that this definition does agree with matrix rank as here discussed.
| Mathematics | Linear algebra | null |
26573 | https://en.wikipedia.org/wiki/Rabbit | Rabbit | Rabbits are small mammals in the family Leporidae (which also includes the hares), which is in the order Lagomorpha (which also includes pikas). They are familiar throughout the world as a small herbivore, a prey animal, a domesticated form of livestock, and a pet, having a widespread effect on ecologies and cultures. The most widespread rabbit genera are Oryctolagus and Sylvilagus. The former, Oryctolagus, includes the European rabbit, Oryctolagus cuniculus, which is the ancestor of the hundreds of breeds of domestic rabbit and has been introduced on every continent except Antarctica. The latter, Sylvilagus, includes over 13 wild rabbit species, among them the cottontails and tapetis. Wild rabbits not included in Oryctolagus and Sylvilagus include several species of limited distribution, including the pygmy rabbit, volcano rabbit, and Sumatran striped rabbit.
Rabbits are a paraphyletic grouping, and do not constitute a clade, as hares (belonging to the genus Lepus) are nested within the Leporidae clade and are not described as rabbits. Although once considered rodents, lagomorphs diverged earlier and have a number of traits rodents lack, including two extra incisors. Similarities between rabbits and rodents were once attributed to convergent evolution, but studies in molecular biology have found a common ancestor between lagomorphs and rodents and place them in the clade Glires.
Rabbit physiology is suited to escaping predators and surviving in various habitats, living either alone or in groups in nests or burrows. As prey animals, rabbits are constantly aware of their surroundings, having a wide field of vision and ears with high surface area to detect potential predators. The ears of a rabbit are essential for thermoregulation and contain a high density of blood vessels. The bone structure of a rabbit's hind legs, which is longer than that of the fore legs, allows for quick hopping, which is beneficial for escaping predators and can provide powerful kicks if captured. Rabbits are typically nocturnal and often sleep with their eyes open. They reproduce quickly, having short pregnancies, large litters of four to twelve kits, and no particular mating season; however, the mortality rate of rabbit embryos is high, and there exist several widespread diseases that affect rabbits, such as rabbit hemorrhagic disease and myxomatosis. In some regions, especially Australia, rabbits have caused ecological problems and are regarded as a pest.
Humans have used rabbits as livestock since at least the first century BC in ancient Rome, raising them for their meat, fur and wool. The various breeds of the European rabbit have been developed to suit each of these products; the practice of raising and breeding rabbits as livestock is known as cuniculture. Rabbits are seen in human culture globally, appearing as a symbol of fertility, cunning, and innocence in major religions, historical and contemporary art.
Terminology and etymology
The word rabbit derives from the Middle English ("young of the coney"), a borrowing from the Walloon , which was a diminutive of the French or Middle Dutch ("rabbit"), a term of unknown origin. The term coney is a term for an adult rabbit used until the 18th century; rabbit once referred only to the young animals. More recently, the term kit or kitten has been used to refer to a young rabbit. The endearing word bunny is attested by the 1680s as a diminutive of bun, a term used in Scotland to refer to rabbits and squirrels.
Coney is derived from cuniculus, a Latin term referring to rabbits which has been in use from at least the first century BCE in Hispania. The word cuniculus may originate from a diminutive form of the word for "dog" in the Celtic languages.
A group of rabbits is known as a colony, nest, or warren, though the latter term more commonly refers to where the rabbits live. A group of baby rabbits produced from a single mating is referred to as a litter and a group of domestic rabbits living together is sometimes called a herd.
A male rabbit is called a buck, as are male goats and deer, derived from the Old English or , meaning "he-goat" or "male deer", respectively. A female is called a doe, derived from the Old English , related to ("to suck").
Taxonomy and evolution
Rabbits and hares were formerly classified in the order Rodentia (rodents) until 1912, when they were moved into the order Lagomorpha (which also includes pikas). Since 1945, there has been support for the clade Glires that includes both rodents and lagomorphs, though the two groups have always been closely associated in taxonomy; fossil, DNA, and retrotransposon studies in the 2000s have solidified support for the clade. Studies in paleontology and molecular biology suggest that rodents and lagomorphs diverged at the start of the Tertiary period.
The extant species of family Leporidae, of which there are more than 70, are contained within 11 genera, one of which is Lepus, the hares. There are 32 extant species within Lepus. The cladogram is from Matthee et al., 2004, based on nuclear and mitochondrial gene analysis.
Classification
Order Lagomorpha
Family Leporidae (in part):
Genus Brachylagus
Pygmy rabbit, Brachylagus idahoensis
Genus Bunolagus
Riverine rabbit, Bunolagus monticularis
Genus Caprolagus
Hispid hare, Caprolagus hispidus
Genus Lepus
Genus Nesolagus
Sumatran striped rabbit, Nesolagus netscheri
Annamite striped rabbit, Nesolagus timminsi
Genus Oryctolagus
European rabbit, Oryctolagus cuniculus
Genus Pentalagus
Amami rabbit/Ryūkyū rabbit, Pentalagus furnessi
Genus Poelagus
Bunyoro rabbit, Poelagus marjorita
Genus Pronolagus
Natal red rock hare, Pronolagus crassicaudatus
Jameson's red rock hare, Pronolagus randensis
Smith's red rock hare, Pronolagus rupestris
Hewitt's red rock hare, Pronolagus saundersiae
Genus Romerolagus
Volcano rabbit, Romerolagus diazi
Genus Sylvilagus
Andean tapeti, Sylvilagus andinus
Swamp rabbit, Sylvilagus aquaticus
Desert cottontail, Sylvilagus audubonii
Brush rabbit, Sylvilagus bachmani
Common tapeti, Sylvilagus brasiliensis
Mexican cottontail, Sylvilagus cunicularis
Dice's cottontail, Sylvilagus dicei
Eastern cottontail, Sylvilagus floridanus
Central American tapeti, Sylvilagus gabbi
Tres Marias cottontail, Sylvilagus graysoni
Robust cottontail, Sylvilagus holzneri
Omilteme cottontail, Sylvilagus insonus
Mountain cottontail, Sylvilagus nuttallii
Appalachian cottontail, Sylvilagus obscurus
Marsh rabbit, Sylvilagus palustris
Santa Marta tapeti, Sylvilagus sanctaemartae
Coastal tapeti, Sylvilagus tapetillus
New England cottontail, Sylvilagus transitionalis
Venezuelan lowland rabbit, Sylvilagus varynaensis
Differences from hares
The term rabbit is typically used for all Leporidae species, excluding the genus Lepus. Members of that genus are known as hares or jackrabbits.
Lepus species are precocial, born relatively mature and mobile with hair and good vision out in the open air, while rabbit species are altricial, born hairless and blind in burrows and buried nests. Hares are also generally larger than rabbits, and have longer pregnancies. Hares and some rabbits live relatively solitary lives above the ground in open grassy areas, interacting mainly during breeding season. Some rabbit species group together to reduce their chance of being preyed upon, and the European rabbit will form large social groups in burrows, which are grouped together to form warrens. Burrowing by hares varies by location, and is more prominent in younger members of the genus; many rabbit species that do not dig their own burrows will use the burrows of other animals.
Rabbits and hares have historically not occupied the same locations, and only became sympatric relatively recently; historic accounts describe antagonistic relationships between rabbits and hares, specifically between the European hare and European or cottontail rabbits, but scientific literature since 1956 has found no evidence of aggression or undue competition between rabbits and hares. When they appear in the same habitat, rabbits and hares can co-exist on similar diets. Hares will notably force other hare species out of an area to control resources, but are not territorial. When faced with predators, hares will escape by outrunning them, whereas rabbits, being smaller and less able to reach the high speeds of longer-legged hares, will try to seek cover.
Descendants of the European rabbit are commonly bred as livestock and kept as pets, whereas no hares have been domesticated, though populations have been introduced to non-native habitats for use as a food source. The breed known as the Belgian hare is actually a domestic rabbit which has been selectively bred to resemble a hare, most likely from Flemish Giant stock originally. Common names of hare and rabbit species may also be confused; "jackrabbits" refer to hares, and the hispid hare is a rabbit.
Domestication
Rabbits, specifically the European rabbit (Oryctolagus cuniculus) species, have long been domesticated. The European rabbit has been widely kept as livestock, starting in ancient Rome from at least the first century BC. Selective breeding, which began in the Middle Ages, has generated a wide variety of rabbit breeds, of which many (since the early 19th century) are also kept as pets. Some strains of European rabbit have been bred specifically as research subjects, such as the New Zealand white.
As livestock, European rabbits are bred for their meat and fur. The earliest breeds were important sources of meat, and so were bred to be larger than wild rabbits at younger ages, but domestic rabbits in modern times range in size from dwarf to giant. Rabbit fur, produced as a byproduct of meat production but occasionally selected for as in the case of the Rex rabbit, can be found in a broad range of coat colors and patterns, some of which are produced via dyeing. Some breeds are raised for their wool, such as the Angora rabbit breeds; their fur is sheared, combed or plucked, and the fibers are spun into yarn.
Biology
Evolution
The earliest ancestor of rabbits and hares lived 55 million years ago in what is now Mongolia. Because the rabbit's epiglottis is engaged over the soft palate except when swallowing, the rabbit is an obligate nasal breather. As lagomorphs, rabbits have two sets of incisor teeth, one behind the other, a manner in which they differ from rodents, which only have one set of incisors. Another difference is that for rabbits, all of their teeth continue to grow, whereas for most rodents, only their incisors continue to grow. Carl Linnaeus originally grouped rabbits and rodents under the class Glires; later, they were separated as the scientific consensus is that many of their similarities were a result of convergent evolution. DNA analysis and the discovery of a common ancestor have supported the view that they share a common lineage, so rabbits and rodents are now often grouped together in the clade or superorder Glires.
Morphology
Since speed and agility are a rabbit's main defenses against predators, rabbits have large hind leg bones and well-developed musculature. Though plantigrade at rest, rabbits are on their toes while running, assuming a more digitigrade posture. Rabbits use their strong claws for digging and (along with their teeth) for defense. Each front foot has four toes plus a dewclaw. Each hind foot has four toes (but no dewclaw).
Most wild rabbits (especially compared to hares) have relatively full, egg-shaped bodies. The soft coat of the wild rabbit is agouti in coloration (or, rarely, melanistic), which aids in camouflage. The tail of the rabbit (with the exception of the cottontail species) is dark on top and white below. Cottontails have white on the top of their tails.
As a result of the position of the eyes in its skull and the size of the cornea, the rabbit has a panoramic field of vision that encompasses nearly 360 degrees. However, there is a blind spot at the bridge of the nose, and because of this, rabbits cannot see what is below their mouth and rely on their lips and whiskers to determine what they are eating. Blinking occurs 2 to 4 times an hour.
Hind limb elements
The anatomy of rabbits' hind limbs is structurally similar to that of other land mammals and contributes to their specialized form of locomotion. The bones of the hind limbs consist of long bones (the femur, tibia, fibula, and phalanges) as well as short bones (the tarsals). These bones are created through endochondral ossification during fetal development. As in most land mammals, the round head of the femur articulates with the acetabulum of the os coxae, the hip bone. The femur articulates with the tibia, but not the fibula, which is fused to the tibia. The tibia and fibula articulate with the tarsals of the pes, commonly called the foot. The hind limbs of the rabbit are longer than the front limbs, which allows them to produce their hopping form of locomotion. Longer hind limbs are capable of producing faster speeds; hares, which have longer legs than cottontail rabbits, are able to move considerably faster. The hind feet have four long toes, webbed to prevent them from spreading when hopping, that allow for digitigrade movement. Rabbits do not have paw pads on their feet like most other animals that use digitigrade locomotion. Instead, they have coarse compressed hair that offers protection.
Musculature
Rabbits have muscled hind legs, divided into three main parts (foot, thigh, and leg), that allow for maximum force, maneuverability, and acceleration. The hind limbs of a rabbit are an exaggerated feature. They are much longer and can provide more force than the forelimbs, which are structured like brakes to take the brunt of the landing after a leap. The force put out by the hind limbs comes both from the structural anatomy of the fused tibia and fibula and from the muscular features.
Bone formation and removal, from a cellular standpoint, is directly correlated to hind limb muscles. Action pressure from muscles creates force that is then distributed through the skeletal structures. Rabbits that generate less force, putting less stress on bones, are more prone to osteoporosis due to bone rarefaction. In rabbits, the more fibers in a muscle, the more resistant it is to fatigue. For example, hares have a greater resistance to fatigue than cottontails. The muscles of a rabbit's hind limbs can be classified into four main categories: hamstrings, quadriceps, dorsiflexors, and plantar flexors. The quadriceps muscles are in charge of force production when jumping. Complementing these muscles are the hamstrings, which aid in short bursts of action. These muscles play off of one another in the same way as the plantar flexors and dorsiflexors, contributing to the generation and actions associated with force.
Ears
Within the order of lagomorphs, the ears are used to detect and avoid predators. In the family Leporidae, the ears are typically longer than they are wide, and are in general relatively long compared to other mammals.
According to Allen's rule, endothermic animals adapted to colder climates have shorter, thicker limbs and appendages than those of similar animals adapted to warm climates. The rule was originally derived by comparing the ear lengths of Lepus species across the various climates of North America. Subsequent studies show that this rule remains true in the Leporidae for the ears specifically, in that the surface area of rabbits' and hares' ears is enlarged in warm climates; the ears are an important structure for aiding thermoregulation as well as for detecting predators, due to the way the outer, middle, and inner ear muscles coordinate with one another. The ear muscles also aid in maintaining balance and movement when fleeing predators.
The auricle, also known as the pinna, is a rabbit's outer ear. The rabbit's pinnae represent a fair part of the body surface area. It is theorized that the ears aid in the dispersion of heat at ambient temperatures above a certain threshold, with rabbits in warmer climates having longer pinnae for this reason. Another theory is that the ears function as shock absorbers that could aid and stabilize rabbits' vision when fleeing predators, but this has typically only been seen in hares. The rest of the outer ear has bent canals that lead to the eardrum or tympanic membrane.
The middle ear, separated from the outer ear by the eardrum and set in the back of the rabbit's skull, contains three bones: the hammer, anvil, and stirrup, collectively called ossicles, which act to decrease sound before it hits the inner ear; in general, the ossicles act as a barrier to the inner ear for sound energy.
Inner ear fluid, called endolymph, receives the sound energy after it passes through the ossicles. The inner ear comprises two parts: the cochlea, which uses sound waves from the ossicles, and the vestibular apparatus, which manages the rabbit's position in regard to movement. Within the cochlea a basilar membrane contains sensory hair structures that send nerve signals to the brain, allowing it to recognize different sound frequencies. Within the vestibular apparatus three semicircular canals help detect angular motion.
Thermoregulation
The pinnae, which contain a vascular network and arteriovenous shunts, aid in thermoregulation. A rabbit maintains its body temperature within a narrow optimal range; if its body temperature rises above or falls below this range, the rabbit must make efforts to return to homeostasis. Homeostasis of body temperature is maintained by changing the amount of blood flow that passes through the highly vascularized ears, as rabbits have few to no sweat glands. Rabbits may also regulate their temperature by resting in depressions in the ground, known as forms.
Respiratory system
The rabbit's nasal cavity lies dorsal to the oral cavity, and the two compartments are separated by the hard and soft palate. The nasal cavity itself is separated into a left and right side by a cartilage barrier, and it is covered in fine hairs that trap dust before it can enter the respiratory tract. As the rabbit breathes, air flows in through the nostrils along the alar folds. From there, the air moves into the nasal cavity, also known as the nasopharynx, through the larynx, down the trachea, and into the lungs. The larynx functions as the rabbit's voice box, which enables it to produce a wide variety of sounds. The trachea is a long tube embedded with cartilaginous rings that prevent the tube from collapsing as air moves in and out of the lungs. The trachea then splits into a left and right bronchus, which meet the lungs at a structure called the hilum. From there, the bronchi split into progressively more narrow and numerous branches. The bronchi branch into bronchioles, into respiratory bronchioles, and ultimately terminate at the alveolar ducts. The branching that is typically found in rabbit lungs is a clear example of monopodial branching, in which smaller branches divide out laterally from a larger central branch.
The structure of the rabbit's nasal and oral cavities necessitates breathing through the nose. This is due to the fact that the epiglottis is fixed to the backmost portion of the soft palate. Within the oral cavity, a layer of tissue sits over the opening of the glottis, which blocks airflow from the oral cavity to the trachea. The epiglottis functions to prevent the rabbit from aspirating on its food. Further, the presence of a soft and hard palate allow the rabbit to breathe through its nose while it feeds.
Rabbits' lungs are divided into four lobes: the cranial, middle, caudal, and accessory lobes. The right lung is made up of all four lobes, while the left lung only has two: the cranial and caudal lobes. To provide space for the heart, the left cranial lobe of the lungs is significantly smaller than that of the right. The diaphragm is a muscular structure that lies caudal to the lungs and contracts to facilitate respiration.
Diet and digestion
Rabbits are strict herbivores and are suited to a diet high in fiber, mostly in the form of cellulose. They will typically graze grass upon waking up and emerging from a burrow, and will move on to consume vegetation and other plants throughout the waking period; rabbits have been known to eat a wide variety of plants, including tree leaves and fruits, though consumption of fruit and lower fiber foods is common for pet rabbits where natural vegetation is scarce.
Easily digestible food is processed in the gastrointestinal tract and expelled as regular feces. To get nutrients out of hard-to-digest fiber, rabbits ferment fiber in the cecum (part of the gastrointestinal tract) and then expel the contents as cecotropes, which are reingested (cecotrophy or refection). The nutrients in the cecotropes are then absorbed in the small intestine. Soft cecotropes are usually consumed during periods of rest in underground burrows.
Rabbits cannot vomit; therefore, if buildup occurs within the intestines (often due to a diet with insufficient fibre), intestinal blockage can occur.
Reproduction
The adult male reproductive system forms the same way as in most mammals, with the seminiferous tubular compartment containing the Sertoli cells and an interstitial compartment that contains the Leydig cells. The Leydig cells produce testosterone, which maintains libido and creates secondary sex characteristics such as the genital tubercle and penis. The Sertoli cells trigger the production of anti-Müllerian hormone, which causes the Müllerian duct to regress. In an adult male rabbit, the sheath of the penis is cylinder-like and can be extruded as early as two months of age. The scrotal sacs lie lateral to the penis and contain epididymal fat pads which protect the testes. Between 10 and 14 weeks, the testes descend and are able to retract into the pelvic cavity to thermoregulate. Furthermore, the secondary sex characteristics, such as the testes, are complex and secrete many compounds. These compounds include fructose, citric acid, minerals, and a uniquely high amount of catalase, all of which affect the characteristics of rabbit semen; for instance, citric acid is positively correlated with agglutination, and high amounts of catalase protect against premature capacitation.
The adult female reproductive tract is bipartite, which prevents an embryo from translocating between uteri. The female urethra and vagina open into a urogenital sinus with a single urogenital opening. The two uterine horns communicate with two cervixes and form one vaginal canal. In addition to having a bipartite tract, the female rabbit does not go through an estrus cycle; instead, ovulation is induced by mating.
The average female rabbit becomes sexually mature at three to eight months of age and can conceive at any time of the year for the duration of her life. Egg and sperm production can begin to decline after three years, with some species such as those in genus Oryctolagus completely stopping reproduction at 6 years of age. During mating, the male rabbit will insert his penis into the female from behind, make rapid pelvic thrusts until ejaculation, and throw himself backward off the female. Copulation lasts only 20–40 seconds.
The rabbit gestation period is short and ranges from 27 to 30 days. A longer gestation period will generally yield a smaller litter, while a shorter gestation period will yield a larger one. The size of a single litter can range from 1 to 12 kits, depending on species. After birth, the only role of males is to protect the young from other rabbits, and the mother will leave the young in the nest most of the day, returning to nurse them once every 24 hours. The female can become pregnant again as early as the next day.
After mating, the doe will begin to dig a burrow or prepare a nest before giving birth. Between three days and a few hours before giving birth, another series of hormonal changes will cause her to prepare the nest structure. The doe will first gather grass for a structure, and an elevation in prolactin shortly before birth will cause her to shed fur, which she then uses to line the nest, providing insulation for the newborn kits.
The mortality rates of embryos are high in rabbits and can be due to infection, trauma, poor nutrition and environmental stress. A high fertility rate is necessary to counter this. More than half of rabbit pregnancies are aborted, causing embryos to be resorbed into the mother's body; vitamin deficiencies are a major cause of abortions in domestic rabbits.
Sleep
Rabbits may appear to be crepuscular, but many species are naturally inclined towards nocturnal activity. In 2011, the average sleep time of a rabbit in captivity was calculated at 8.4 hours per day; previous studies have estimated sleep periods as long as 11.4 hours on average, undergoing both slow-wave and rapid eye movement sleep. Newborn rabbits will sleep for 22 hours a day before leaving the nest. As with other prey animals, rabbits often sleep with their eyes open, so that sudden movements will awaken the rabbit to respond to potential danger.
Diseases and immunity
In addition to being at risk of disease from common pathogens such as Bordetella bronchiseptica and Escherichia coli, rabbits can contract the virulent, species-specific myxoma virus, which causes myxomatosis, and a form of calicivirus which causes rabbit hemorrhagic disease. Myxomatosis is more hazardous to pet rabbits, as wild rabbits often have some immunity. Among the parasites that infect rabbits are tapeworms (such as Taenia serialis), external parasites (including fleas and mites), coccidia species, Encephalitozoon cuniculi, and Toxoplasma gondii. Domesticated rabbits with a diet lacking in high-fiber sources, such as hay and grass, are susceptible to potentially lethal gastrointestinal stasis. Rabbits and hares are almost never found to be infected with rabies and have not been known to transmit rabies to humans.
Rabbit hemorrhagic disease (RHD) is a highly infectious rabbit-specific disease caused by strains of rabbit hemorrhagic disease virus (RHDV), including type 2 (RHDV2). The disease was first described in domestic Angora rabbits imported from Germany to Jiangsu, China in 1984, and quickly spread to Korea, Italy, and the rest of Europe. The disease spread to the Americas from 1988, first appearing in rabbits imported to Mexico, but subsequent outbreaks were infrequent, as RHDV only affected the European rabbit species. RHDV2, a strain of RHD-causing virus that affects both domestic and wild lagomorphs, such as hares, was detected for the first time in France in 2010. RHDV2 has since spread to the rest of Europe, Canada, Australia, and the United States.
Ecology
Rabbits are prey animals. In Mediterranean Europe, for example, rabbits are the main prey of red foxes, badgers, and Iberian lynxes. To avoid predation and to navigate underground, rabbits have heightened senses (compared to humans) and are constantly aware of their surroundings. If confronted by a potential threat, a rabbit may freeze and observe, then warn others in the warren with powerful thumps on the ground from a hind foot. Rabbits have a remarkably wide field of vision, and a good deal of it is devoted to overhead scanning. A rabbit eye has no fovea, but a "visual streak", a horizontal line in the middle of the retina where both rod and cone cell densities are the highest. This allows them to scan the horizon with little head turning.
Rabbits survive predation by burrowing (in some species), and hopping away to dense cover. Their strong teeth allow them to bite to escape a struggle.
The longest-lived rabbit on record, a domesticated European rabbit living in Tasmania, died at age 18. The lifespan of wild rabbits is much shorter; the average longevity of an eastern cottontail, for instance, is about one to five years. The various species of rabbit have been recorded as living from four to 13 years in captivity.
Habitat and range
Rabbit habitats include forests, steppes, plateaus, deserts, and swamps. Some species, such as the volcano rabbit (Romerolagus diazi), have an especially limited distribution due to their habitat needs. Rabbits live in groups, or colonies, varying in behavior depending on species and often using the burrows of other animals or creating nests in holes. The European rabbit notably lives in extensive burrow networks called warrens.
Rabbits are native to North America, southwestern Europe, Southeast Asia, Sumatra, some islands of Japan, and parts of Africa and South America. They are not naturally found in most of Eurasia, where a number of species of hares are present. A 2003 study on domestic rabbits in China found that "(so-called) Chinese rabbits were introduced from Europe", and that "genetic diversity in Chinese rabbits was very low".
Rabbits first entered South America relatively recently, as part of the Great American Interchange. Much of the continent was considered to have just one species of rabbit, the tapeti, and most of South America's Southern Cone had no rabbits until the late 19th century, when the European rabbit, which has been introduced to many places around the world, arrived there.
Rabbits have been launched into orbit.
Marking
Both sexes of rabbits often rub their chins on objects with their scent gland located under the chin. This is the rabbit's way of marking their territory or possessions for other rabbits to recognize by depositing scent gland secretions. Rabbits who have bonded will respect each other's smell, which indicates a territorial border. Rabbits also have scent glands that produce a strong-smelling waxy substance near their anuses. Territorial marking by scent glands has been documented among both domestic and wild rabbit species.
Environmental problems
Rabbits, particularly the European rabbit, have been a source of environmental problems when introduced into the wild by humans. As a result of their appetites, and the rate at which they breed, feral rabbit depredation can be problematic for agriculture. Gassing (fumigation of warrens), barriers (fences), shooting, snaring, and ferreting have been used to control rabbit populations, but the most effective measures are diseases such as myxomatosis and calicivirus. In Europe, where domestic rabbits are farmed on a large scale, they can be protected against myxomatosis and calicivirus via vaccination. Rabbits in Australia and New Zealand are considered to be such a pest that landowners are legally obliged to control them.
Rabbits are reported to be able to catch fire and spread wildfires, but the efficiency and relevance of this mechanism have been doubted by forest experts, who contend that a burning rabbit could only move a few meters. Knowledge of fire-spreading rabbits is based on anecdotes, as there is no known scientific investigation of the subject.
As food and clothing
Humans have hunted rabbits for food since at least the onset of the Last Glacial Maximum, and wild rabbits and hares are still hunted for their meat as game. Hunting is accomplished with the aid of trained falcons, ferrets, or dogs (a common hunting breed being beagles), as well as with snares, rifles and other guns. A caught rabbit may be dispatched with a sharp blow to the back of its head, a practice from which the term rabbit punch is derived.
Wild leporids comprise a small portion of global rabbit-meat consumption. Domesticated descendants of the European rabbit (Oryctolagus cuniculus) that are bred and kept as livestock (a practice called cuniculture) account for the estimated 200 million tons of rabbit meat produced annually. Approximately 1.2 billion rabbits are slaughtered each year for meat worldwide. In 1994, the countries with the highest per-capita consumption of rabbit meat were Malta, Italy, and Cyprus. The largest producers of rabbit meat were China, Russia, Italy (specifically Veneto), France, and Spain. Rabbit meat was once a common commodity in Sydney, with European rabbits having been introduced intentionally to Australia for hunting purposes, but consumption declined after the myxomatosis virus was intentionally introduced to control the exploding population of feral rabbits in the area.
In the United Kingdom, fresh rabbits are sold in butcher shops and markets, and some supermarkets sell frozen rabbit meat. It is sold in farmers markets there, including the Borough Market in London. Rabbit meat is a feature of Moroccan cuisine, where it is cooked in a tajine with "raisins and grilled almonds added a few minutes before serving". In China, rabbit meat is particularly popular in Sichuan cuisine, with its stewed rabbit, spicy diced rabbit, BBQ-style rabbit, and even spicy rabbit heads, which have been compared to spicy duck neck. In the United States, rabbits sold as food are typically the domestic New Zealand, Belgian, and Chinese rabbits, or Scottish hares.
An infectious disease associated with rabbits-as-food is tularemia (also known as rabbit fever), which may be contracted from an infected rabbit. The disease can cause symptoms of fever, skin ulcers and enlarged lymph nodes, and can occasionally lead to pneumonia or throat infection. Secondary vectors of tularemia include ticks and biting flies, which may be present in the fur of a caught rabbit. Inhaling the bacteria during the skinning process increases the risk of getting tularemia; preventative measures against this include the use of gloves and face masks. Prior to the development of antibiotics, such as doxycycline and gentamicin, the death rate associated with tularemia infections was 60%, which has since decreased to less than 4%.
In addition to their meat, domestic rabbits are used for their wool and fur for clothing, as well as their nitrogen-rich manure and their high-protein milk. Production industries have developed domesticated rabbit breeds (such as the Angora rabbit) for the purpose of meeting these needs. In 1986, the number of rabbit skins produced annually in France was as high as 70 million, compared to 25 million mink pelts produced at the same time. However, rabbit fur is on the whole a byproduct of rabbit meat production, whereas minks are bred primarily for fur production.
In culture
Rabbits are often posited by scholars as symbols of fertility, sexuality and spring, though they have been variously interpreted throughout history. Up until the end of the 18th century, it was widely believed that rabbits and hares were hermaphrodites, contributing to a possible view of rabbits as "sexually aberrant". The Easter Bunny is a figure from German folklore that then spread to America and later other parts of the world and is similar to Santa Claus, albeit both with softened roles compared to earlier incarnations of the figures.
The rabbit's role as a prey animal with few defenses evokes vulnerability and innocence in folklore and modern children's stories, and rabbits appear as sympathetic characters, able to connect easily with youth, though this particular symbolic depiction only became popular in the 1930s following the massive popularization of the pet rabbit decades before. Additionally, they have not been limited to sympathetic depictions since then, as in literature such as Watership Down and the works of Ariel Dorfman. With its reputation as a prolific breeder, the rabbit juxtaposes sexuality with innocence, as in the Playboy Bunny. The rabbit has also been used as a symbol of playfulness and endurance, as represented by the Energizer Bunny and the Duracell Bunny.
Folklore and mythology
The rabbit often appears in folklore as the trickster archetype, as he uses his cunning to outwit his enemies. In Central Africa, the common hare (Kalulu) is described as a trickster figure, and in Aztec mythology, a pantheon of four hundred rabbit gods known as Centzon Totochtin, led by Ometochtli or Two Rabbit, represented fertility, parties, and drunkenness. Rabbits in the Americas varied in mythological symbolism: in Aztec mythology, they were also associated with the moon, and in Anishinaabe traditional beliefs, held by the Ojibwe and some other Native American peoples, Nanabozho, or Great Rabbit, is an important deity related to the creation of the world. More broadly, a rabbit's foot may be carried as an amulet, believed to bring protection and good luck. This belief is found in many parts of the world, with the earliest use recorded in Europe.
Rabbits also appear in Chinese, Vietnamese, Japanese and Korean mythology, though rabbits are a relatively new introduction to some of these regions. In Chinese folklore, rabbits accompany Chang'e on the Moon, and the moon rabbit is a prominent symbol in the Mid-Autumn Festival. In the Chinese New Year, the zodiacal rabbit or hare is one of the twelve celestial animals in the Chinese zodiac. At the time of the zodiacal cycles becoming associated with animals in the Han dynasty, only hares were native to China, with the currently extant breeds of rabbit in China being of European origin. The Vietnamese zodiac includes a zodiacal cat in place of the rabbit. The most common explanation is that the ancient Vietnamese word for "rabbit" (mao) sounds like the Chinese word for "cat" (卯, mao). In Japanese tradition, rabbits live on the Moon where they make mochi. This comes from interpreting the pattern of dark patches on the moon as a rabbit standing on tiptoes on the left pounding on an usu, a Japanese mortar. In Korean mythology, as in Japanese, rabbits live on the moon making rice cakes ("tteok" in Korean).
Rabbits have also appeared in religious symbolism. Buddhism, Christianity, and Judaism have associations with an ancient circular motif called the three rabbits (or "three hares"). Its meaning ranges from "peace and tranquility" to the Holy Trinity. The tripartite symbol also appears in heraldry. In Jewish folklore, rabbits are associated with cowardice, a usage still current in contemporary Israeli spoken Hebrew. The original Hebrew word (shfanim, שפנים) refers to the hyrax, but early translations to English interpreted the word to mean "rabbit", as no hyraxes were known to northern Europe.
Modern times
The rabbit as trickster is a part of American popular culture, as Br'er Rabbit (from African-American folktales and, later, Disney animation) and Bugs Bunny (the cartoon character from Warner Bros.), for example.
Anthropomorphized rabbits have appeared in film and literature, in Alice's Adventures in Wonderland (the White Rabbit and the March Hare characters), in Watership Down (including the film and television adaptations), in Rabbit Hill (by Robert Lawson), and in the Peter Rabbit stories (by Beatrix Potter). In the 1920s, Oswald the Lucky Rabbit was a popular cartoon character.
On the Isle of Portland in Dorset, UK, the rabbit is said to be unlucky, and speaking the creature's name can cause upset among older island residents. This is thought to date back to early times in the local quarrying industry, where, to save space, extracted stones that were not fit for sale were set aside in what became tall, unstable walls. The local rabbits' tendency to burrow there would weaken the walls, and their collapse would result in injuries or even death. In the local culture to this day, the rabbit (when he has to be referred to) may instead be called a "long ears" or "underground mutton" so as not to risk bringing a downfall upon oneself.
In other parts of Britain and in North America, "Rabbit rabbit rabbit" is one variant of an apotropaic or talismanic superstition that involves saying or repeating the word "rabbit" (or "rabbits" or "white rabbits" or some combination thereof) out loud upon waking on the first day of each month, because doing so is believed to ensure good fortune for the duration of that month.
The "rabbit test" is a term first used in 1949 for the Friedman test, an early diagnostic tool for detecting a pregnancy in humans. It is a common misconception (or perhaps an urban legend) that the test-rabbit would die if the woman was pregnant. This led to the phrase "the rabbit died" becoming a euphemism for a positive pregnancy test.
Many modern children's stories and cartoons portray rabbits as particularly fond of eating carrots, largely due to the popularity of Bugs Bunny, whose carrot-eating habit was modeled after Peter Warne, the character played by Clark Gable in the 1934 romantic comedy It Happened One Night. This is misleading, as wild rabbits do not naturally prefer carrots over other plants. Carrots are high in sugar, and excessive consumption can be unhealthy; nevertheless, this false perception has led some owners of domestic rabbits to feed their animals a carrot-heavy diet.
| Biology and health sciences | Lagomorphs | null |
26590 | https://en.wikipedia.org/wiki/Relay | Relay | A relay is an electrically operated switch. It consists of a set of input terminals for a single or multiple control signals, and a set of operating contact terminals. The switch may have any number of contacts in multiple contact forms, such as make contacts, break contacts, or combinations thereof.
Relays are used where it is necessary to control a circuit by an independent low-power signal, or where several circuits must be controlled by one signal. Relays were first used in long-distance telegraph circuits as signal repeaters: they refresh the signal coming in from one circuit by transmitting it on another circuit. Relays were used extensively in telephone exchanges and early computers to perform logical operations.
The traditional electromechanical form of a relay uses an electromagnet to close or open the contacts, but relays using other operating principles have also been invented, such as in solid-state relays which use semiconductor properties for control without relying on moving parts. Relays with calibrated operating characteristics and sometimes multiple operating coils are used to protect electrical circuits from overload or faults; in modern electric power systems these functions are performed by digital instruments still called protective relays or safety relays.
Latching relays require only a single pulse of control power to operate the switch persistently. Another pulse applied to a second set of control terminals, or a pulse with opposite polarity, resets the switch, while repeated pulses of the same kind have no effect. Magnetic latching relays are useful in applications where interrupted power should not affect the circuits that the relay is controlling.
History
In 1809 an electrolytic relay was designed as an alarm for an electrochemical telegraph by Samuel Thomas von Sömmerring.
Electrical relays got their start mainly in application to telegraphs. American scientist Joseph Henry is often cited to have invented a relay in 1835 in order to improve his version of the electrical telegraph, developed earlier in 1831. However, Henry never published any of these experiments and dating for his relay experiments is based solely on the words of Henry himself and his students, often decades later.
In March 1837 Edward Davy deposited a letter with the British Secretary for the Society of Arts containing his ideas for an electromagnetic relay, which, even if it was not the first, was considered more practical than previous designs, being a ‘make-and-break’ type rather than being based on the use of mercury. He did this two months before Charles Wheatstone and William Cooke filed their first patent for their telegraph system and would file a patent for the same idea a year later.
However, an official patent covering such a device, now called a relay, was not issued until 1840, as part of Samuel Morse's telegraph patent. The mechanism described acted as a digital amplifier, repeating the telegraph signal, and thus allowing signals to be propagated as far as desired.
The word relay appears in the context of electromagnetic operations from 1860 onwards.
Basic design and operation
A simple electromagnetic relay consists of a coil of wire wrapped around a soft iron core (a solenoid), an iron yoke which provides a low reluctance path for magnetic flux, a movable iron armature, and one or more sets of contacts (there are two contacts in the relay pictured). The armature is hinged to the yoke and mechanically linked to one or more sets of moving contacts. The armature is held in place by a spring so that when the relay is de-energized there is an air gap in the magnetic circuit. In this condition, one of the two sets of contacts in the relay pictured is closed, and the other set is open. Other relays may have more or fewer sets of contacts depending on their function. The relay in the picture also has a wire connecting the armature to the yoke. This ensures continuity of the circuit between the moving contacts on the armature, and the circuit track on the printed circuit board (PCB) via the yoke, which is soldered to the PCB.
When an electric current is passed through the coil it generates a magnetic field that activates the armature, and the consequent movement of the movable contact(s) either makes or breaks (depending upon construction) a connection with a fixed contact. If the set of contacts was closed when the relay was de-energized, then the movement opens the contacts and breaks the connection, and vice versa if the contacts were open. When the current to the coil is switched off, the armature is returned by a force, approximately half as strong as the magnetic force, to its relaxed position. Usually this force is provided by a spring, but gravity is also used commonly in industrial motor starters. Most relays are manufactured to operate quickly. In a low-voltage application this reduces noise; in a high voltage or current application it reduces arcing.
When the coil is energized with direct current, a flyback diode or snubber resistor is often placed across the coil to dissipate the energy from the collapsing magnetic field (back EMF) at deactivation, which would otherwise generate a voltage spike dangerous to semiconductor circuit components. Such diodes were not widely used before the application of transistors as relay drivers, but soon became ubiquitous as early germanium transistors were easily destroyed by this surge. Some automotive relays include a diode inside the relay case. Resistors, while more durable than diodes, are less efficient at eliminating voltage spikes generated by relays and therefore not as commonly used.
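As a rough, general illustration (standard inductor behaviour, not figures for any particular relay), the energy that the suppression component must absorb and the voltage that would otherwise be induced when the coil current is interrupted follow from the coil inductance L and the coil current I:

E = \frac{1}{2} L I^{2}, \qquad v = -L \frac{di}{dt}

Because the current collapses very quickly when the driver switches off, di/dt is large and the induced voltage can reach many times the supply voltage unless the diode or snubber provides a path for the decaying current.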
If the relay is driving a large, or especially a reactive load, there may be a similar problem of surge currents around the relay output contacts. In this case a snubber circuit (a capacitor and resistor in series) across the contacts may absorb the surge. Suitably rated capacitors and the associated resistor are sold as a single packaged component for this commonplace use.
If the coil is designed to be energized with alternating current (AC), some method is used to split the flux into two out-of-phase components which add together, increasing the minimum pull on the armature during the AC cycle. Typically this is done with a small copper "shading ring" crimped around a portion of the core that creates the delayed, out-of-phase component, which holds the contacts during the zero crossings of the control voltage.
Contact materials for relays vary by application. Materials with low contact resistance may be oxidized by the air, or may tend to "stick" instead of cleanly parting when opening. Contact material may be optimized for low electrical resistance, high strength to withstand repeated operations, or high capacity to withstand the heat of an arc. Where very low resistance is required, or low thermally-induced voltages are desired, gold-plated contacts may be used, along with palladium and other non-oxidizing, semi-precious metals. Silver or silver-plated contacts are used for signal switching. Mercury-wetted relays make and break circuits using a thin, self-renewing film of liquid mercury. For higher-power relays switching many amperes, such as motor circuit contactors, contacts are made with a mixture of silver and cadmium oxide, providing low contact resistance and high resistance to the heat of arcing. Contacts used in circuits carrying scores or hundreds of amperes may include additional structures for heat dissipation and management of the arc produced when interrupting the circuit. Some relays have field-replaceable contacts, such as certain machine tool relays; these may be replaced when worn out, or changed between normally open and normally closed state, to allow for changes in the controlled circuit.
Terminology
Since relays are switches, the terminology applied to switches is also applied to relays; a relay switches one or more poles, each of whose contacts can be thrown by energizing the coil. Normally open (NO) contacts connect the circuit when the relay is activated; the circuit is disconnected when the relay is inactive. Normally closed (NC) contacts disconnect the circuit when the relay is activated; the circuit is connected when the relay is inactive. All of the contact forms involve combinations of NO and NC connections.
The National Association of Relay Manufacturers and its successor, the Relay and Switch Industry Association, define 23 distinct electrical contact forms found in relays and switches. Of these, the following are commonly encountered:
SPST-NO (Single-Pole Single-Throw, Normally-Open) relays have a single Form A or make contact. These have two terminals which can be connected or disconnected. Including two for the coil, such a relay has four terminals in total.
SPST-NC (Single-Pole Single-Throw, Normally-Closed) relays have a single Form B or break contact. As with an SPST-NO relay, such a relay has four terminals in total.
SPDT (Single-Pole Double-Throw) relays have a single set of Form C, break before make or transfer contacts. That is, a common terminal connects to either of two others, never connecting to both at the same time. Including two for the coil, such a relay has a total of five terminals.
DPST – Double-Pole Single-Throw relays are equivalent to a pair of SPST switches or relays actuated by a single coil. Including two for the coil, such a relay has a total of six terminals. The poles may be Form A or Form B (or one of each; the designations NO and NC should be used to resolve the ambiguity).
DPDT – Double-Pole Double-Throw relays have two sets of Form C contacts. These are equivalent to two SPDT switches or relays actuated by a single coil. Such a relay has eight terminals, including two for the coil.
Form D – make before break
Form E – combination of D and B
The S (single) or D (double) designator for the pole count may be replaced with a number, indicating multiple contacts connected to a single actuator. For example, 4PDT indicates a four-pole double-throw relay that has 12 switching terminals.
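The terminal counts quoted above follow a simple pattern: two coil terminals plus, for each pole, one common terminal and one terminal per throw. The following minimal sketch (an illustrative helper, not part of any standard or library) reproduces the figures given in the text:

def relay_terminals(poles: int, throws: int, include_coil: bool = True) -> int:
    # Each pole contributes one common terminal plus one terminal per throw;
    # a conventional single-coil relay adds two coil terminals.
    switching = poles * (1 + throws)
    return switching + (2 if include_coil else 0)

assert relay_terminals(1, 1) == 4                        # SPST-NO or SPST-NC
assert relay_terminals(1, 2) == 5                        # SPDT
assert relay_terminals(2, 1) == 6                        # DPST
assert relay_terminals(2, 2) == 8                        # DPDT
assert relay_terminals(4, 2, include_coil=False) == 12   # 4PDT switching terminals only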
EN 50005 are among applicable standards for relay terminal numbering; a typical EN 50005-compliant SPDT relay's terminals would be numbered 11, 12, 14, A1 and A2 for the C, NC, NO, and coil connections, respectively.
DIN 72552 defines contact numbers in relays for automotive use:
85 = relay coil -
86 = relay coil +
87 = to load (normally open)
87a = to load (normally closed)
30 = battery +
Types
Coaxial relay
Where radio transmitters and receivers share one antenna, often a coaxial relay is used as a TR (transmit-receive) relay, which switches the antenna from the receiver to the transmitter. This protects the receiver from the high power of the transmitter. Such relays are often used in transceivers which combine transmitter and receiver in one unit. The relay contacts are designed not to reflect any radio frequency power back toward the source, and to provide very high isolation between receiver and transmitter terminals. The characteristic impedance of the relay is matched to the transmission line impedance of the system, for example, 50 ohms.
Contactor
A contactor is a heavy-duty relay with higher current ratings, used for switching electric motors and lighting loads. Continuous current ratings for common contactors range from 10 amps to several hundred amps. High-current contacts are made with alloys containing silver. The unavoidable arcing causes the contacts to oxidize; however, silver oxide is still a good conductor. Contactors with overload protection devices are often used to start motors.
Force-guided contacts relay
A force-guided contacts relay has relay contacts that are mechanically linked together, so that when the relay coil is energized or de-energized, all of the linked contacts move together. If one set of contacts in the relay becomes immobilized, no other contact of the same relay will be able to move. The function of force-guided contacts is to enable the safety circuit to check the status of the relay. Force-guided contacts are also known as "positive-guided contacts", "captive contacts", "locked contacts", "mechanically linked contacts", or "safety relays".
These safety relays have to follow design and manufacturing rules that are defined in one main machinery standard, EN 50205: Relays with forcibly guided (mechanically linked) contacts. The rules for safe design are the ones defined in type B standards such as EN 13849-2, as basic safety principles and well-tried safety principles for machinery that apply to all machines.
Force-guided contacts by themselves can not guarantee that all contacts are in the same state, however, they do guarantee, subject to no gross mechanical fault, that no contacts are in opposite states. Otherwise, a relay with several normally open (NO) contacts may stick when energized, with some contacts closed and others still slightly open, due to mechanical tolerances. Similarly, a relay with several normally closed (NC) contacts may stick to the unenergized position, so that when energized, the circuit through one set of contacts is broken, with a marginal gap, while the other remains closed. By introducing both NO and NC contacts, or more commonly, changeover contacts, on the same relay, it then becomes possible to guarantee that if any NC contact is closed, all NO contacts are open, and conversely, if any NO contact is closed, all NC contacts are open. It is not possible to reliably ensure that any particular contact is closed, except by potentially intrusive and safety-degrading sensing of its circuit conditions, however in safety systems it is usually the NO state that is most important, and as explained above, this is reliably verifiable by detecting the closure of a contact of opposite sense.
Force-guided contact relays are made with different main contact sets, either NO, NC or changeover, and one or more auxiliary contact sets, often of reduced current or voltage rating, used for the monitoring system. Contacts may be all NO, all NC, changeover, or a mixture of these, for the monitoring contacts, so that the safety system designer can select the correct configuration for the particular application. Safety relays are used as part of an engineered safety system.
Latching relay
A latching relay, also called impulse, bistable, keep, or stay relay, or simply latch, maintains either contact position indefinitely without power applied to the coil. The advantage is that one coil consumes power only for an instant while the relay is being switched, and the relay contacts retain this setting across a power outage. A latching relay allows remote control of building lighting without the hum that may be produced from a continuously (AC) energized coil.
In one mechanism, two opposing coils with an over-center spring or permanent magnet hold the contacts in position after the coil is de-energized. A pulse to one coil turns the relay on, and a pulse to the opposite coil turns the relay off. This type is widely used where control is from simple switches or single-ended outputs of a control system, and such relays are found in avionics and numerous industrial applications.
Another latching type has a remanent core that retains the contacts in the operated position by the remanent magnetism in the core. This type requires a current pulse of opposite polarity to release the contacts. A variation uses a permanent magnet that produces part of the force required to close the contact; the coil supplies sufficient force to move the contact open or closed by aiding or opposing the field of the permanent magnet. A polarity controlled relay needs changeover switches or an H-bridge drive circuit to control it. The relay may be less expensive than other types, but this is partly offset by the increased costs in the external circuit.
In another type, a ratchet relay has a ratchet mechanism that holds the contacts closed after the coil is momentarily energized. A second impulse, in the same or a separate coil, releases the contacts. This type may be found in certain cars, for headlamp dipping and other functions where alternating operation on each switch actuation is needed.
A stepping relay is a specialized kind of multi-way latching relay designed for early automatic telephone exchanges.
An earth-leakage circuit breaker includes a specialized latching relay.
Very early computers often stored bits in a magnetically latching relay, such as ferreed or the later remreed in the 1ESS switch.
Some early computers used ordinary relays as a kind of latch—they store bits in ordinary wire-spring relays or reed relays by feeding an output wire back as an input, resulting in a feedback loop or sequential circuit. Such an electrically latching relay requires continuous power to maintain state, unlike magnetically latching relays or mechanically ratcheting relays. While (self-)holding circuits are often realized with relays they can also be implemented by other means.
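A minimal sketch of that feedback idea, assuming a hypothetical seal-in arrangement (the function and signal names below are illustrative, not drawn from any real controller): the coil is fed either by a momentary set input or by the relay's own normally open contact, and the whole path is broken by a stop input.

def holding_relay(set_pressed: bool, stop_pressed: bool, contact_closed: bool) -> bool:
    # One evaluation of the seal-in rung: the coil is energized if the set input
    # or the relay's own normally open contact conducts, and the stop input does not break the path.
    return (set_pressed or contact_closed) and not stop_pressed

# The relay stays energized after the set input is released, until stop is pressed.
state = False
for set_in, stop_in in [(True, False), (False, False), (False, False), (False, True), (False, False)]:
    state = holding_relay(set_in, stop_in, state)
    print(state)   # True, True, True, False, False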
In computer memories, latching relays and other relays were replaced by delay-line memory, which in turn was replaced by a series of ever faster and ever smaller memory technologies.
Machine tool relay
A machine tool relay is a type standardized for industrial control of machine tools, transfer machines, and other sequential control. They are characterized by a large number of contacts (sometimes extendable in the field) which are easily converted from normally open to normally closed status, easily replaceable coils, and a form factor that allows compactly installing many relays in a control panel. Although such relays once were the backbone of automation in such industries as automobile assembly, the programmable logic controller (PLC) mostly displaced the machine tool relay from sequential control applications.
A relay allows circuits to be switched by electrical equipment: for example, a timer circuit with a relay could switch power at a preset time. For many years relays were the standard method of controlling industrial electronic systems. A number of relays could be used together to carry out complex functions (relay logic). The principle of relay logic is based on relays which energize and de-energize associated contacts. Relay logic is the predecessor of ladder logic, which is commonly used in programmable logic controllers.
Mercury relay
A mercury relay is a relay that uses mercury as the switching element. They are used where contact erosion would be a problem for conventional relay contacts. Owing to environmental considerations about significant amount of mercury used and modern alternatives, they are now comparatively uncommon.
Mercury-wetted relay
A mercury-wetted reed relay is a form of reed relay that employs a mercury switch, in which the contacts are wetted with mercury. Mercury reduces the contact resistance and mitigates the associated voltage drop. Surface contamination may result in poor conductivity for low-current signals. For high-speed applications, the mercury eliminates contact bounce, and provides virtually instantaneous circuit closure. Mercury wetted relays are position-sensitive and must be mounted according to the manufacturer's specifications. Because of the toxicity and expense of liquid mercury, these relays have increasingly fallen into disuse.
The high speed of switching action of the mercury-wetted relay is a notable advantage. The mercury globules on each contact coalesce, and the current rise time through the contacts is generally considered to be a few picoseconds. However, in a practical circuit it may be limited by the inductance of the contacts and wiring. It was quite common, before restrictions on the use of mercury, to use a mercury-wetted relay in the laboratory as a convenient means of generating fast rise time pulses, however although the rise time may be picoseconds, the exact timing of the event is, like all other types of relay, subject to considerable jitter, possibly milliseconds, due to mechanical variations.
The same coalescence process causes another effect, which is a nuisance in some applications. The contact resistance is not stable immediately after contact closure, and drifts, mostly downwards, for several seconds after closure, the change perhaps being 0.5 ohm.
Multi-voltage relays
Multi-voltage relays are devices designed to work for wide voltage ranges such as 24 to 240 VAC and VDC and wide frequency ranges such as 0 to 300 Hz. They are indicated for use in installations that do not have stable supply voltages.
Overload protection relay
Electric motors need overcurrent protection to prevent damage from over-loading the motor, or to protect against short circuits in connecting cables or internal faults in the motor windings. The overload sensing devices are a form of heat operated relay where a coil heats a bimetallic strip, or where a solder pot melts, to operate auxiliary contacts. These auxiliary contacts are in series with the motor's contactor coil, so they turn off the motor when it overheats.
This thermal protection operates relatively slowly allowing the motor to draw higher starting currents before the protection relay will trip. Where the overload relay is exposed to the same ambient temperature as the motor, a useful though crude compensation for motor ambient temperature is provided.
The other common overload protection system uses an electromagnet coil in series with the motor circuit that directly operates contacts. This is similar to a control relay but requires a rather high fault current to operate the contacts. To prevent short overcurrent spikes from causing nuisance triggering, the armature movement is damped with a dashpot. The thermal and magnetic overload detections are typically used together in a motor protection relay.
Electronic overload protection relays measure motor current and can estimate motor winding temperature using a "thermal model" of the motor armature system that can be set to provide more accurate motor protection. Some motor protection relays include temperature detector inputs for direct measurement from a thermocouple or resistance thermometer sensor embedded in the winding.
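A highly simplified, illustrative version of such a thermal model (the constants, temperature limit, and function below are hypothetical, not taken from any particular protection relay) accumulates heating proportional to the square of the measured current and lets the estimate decay toward ambient with a thermal time constant, tripping when an assumed rated winding temperature is exceeded:

def winding_temperature(currents, ambient=40.0, k=0.002, tau=600.0, dt=1.0):
    # Yield a temperature estimate (deg C) after each one-second current sample (amps).
    # Heating is proportional to I^2; cooling follows a first-order decay with time constant tau.
    temp = ambient
    for i in currents:
        heating = k * i * i               # temperature rise per second at current i
        cooling = (temp - ambient) / tau  # first-order cooling toward ambient
        temp += (heating - cooling) * dt
        yield temp

samples = [12.0] * 900                    # fifteen minutes at a sustained overload current
tripped = any(t > 155.0 for t in winding_temperature(samples))
print(tripped)                            # True: the estimate exceeds the assumed 155 deg C limit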
Polarized relay
A polarized relay places the armature between the poles of a permanent magnet to increase sensitivity. Polarized relays were used in mid-20th-century telephone exchanges to detect faint pulses and correct telegraphic distortion.
Reed relay
A reed relay is a reed switch enclosed in a solenoid. The switch has a set of contacts inside an evacuated or inert gas-filled glass tube that protects the contacts against atmospheric corrosion; the contacts are made of magnetic material that makes them move under the influence of the field of the enclosing solenoid or an external magnet.
Reed relays can switch faster than larger relays and require very little power from the control circuit. However, they have relatively low switching current and voltage ratings. Though rare, the reeds can become magnetized over time, which makes them stick "on", even when no current is present; changing the orientation of the reeds or degaussing the switch with respect to the solenoid's magnetic field can resolve this problem.
Sealed relays with mercury-wetted contacts have longer operating lives and less contact chatter than any other kind of relay.
Safety relays
Safety relays are devices which generally implement protection functions. In the event of a hazard, the task of such a safety function is to use appropriate measures to reduce the existing risk to an acceptable level.
Solid-state contactor
A solid-state contactor is a heavy-duty solid state relay, including the necessary heat sink, used where frequent on-off cycles are required, such as with electric heaters, small electric motors, and lighting loads. There are no moving parts to wear out and there is no contact bounce due to vibration. They are activated by AC control signals or DC control signals from programmable logic controllers (PLCs), PCs, transistor-transistor logic (TTL) sources, or other microprocessor and microcontroller controls.
Solid-state relay
A solid-state relay (SSR) is a solid state electronic component that provides a function similar to an electromechanical relay but does not have any moving components, increasing long-term reliability. A solid-state relay uses a thyristor, TRIAC or other solid-state switching device, activated by the control signal, to switch the controlled load, instead of a solenoid. An optocoupler (a light-emitting diode (LED) coupled with a photo transistor) can be used to isolate control and controlled circuits.
Static relay
A static relay consists of electronic circuitry to emulate all those characteristics which are achieved by moving parts in an electro-magnetic relay.
Time-delay relay
Timing relays are arranged for an intentional delay in operating their contacts. A very short (a fraction of a second) delay would use a copper disk between the armature and moving blade assembly. Current flowing in the disk maintains a magnetic field for a short time, lengthening release time. For a slightly longer (up to a minute) delay, a dashpot is used. A dashpot is a piston filled with fluid that is allowed to escape slowly; both air-filled and oil-filled dashpots are used. The time period can be varied by increasing or decreasing the flow rate. For longer time periods, a mechanical clockwork timer is installed. Relays may be arranged for a fixed timing period, or may be field-adjustable, or remotely set from a control panel. Modern microprocessor-based timing relays provide precision timing over a great range.
Some relays are constructed with a kind of "shock absorber" mechanism attached to the armature, which prevents immediate, full motion when the coil is either energized or de-energized. This addition gives the relay the property of time-delay actuation. Time-delay relays can be constructed to delay armature motion on coil energization, de-energization, or both.
Time-delay relay contacts must be specified not only as either normally open or normally closed, but whether the delay operates in the direction of closing or in the direction of opening. The following is a description of the four basic types of time-delay relay contacts.
First, we have the normally open, timed-closed (NOTC) contact. This type of contact is normally open when the coil is unpowered (de-energized). The contact is closed by the application of power to the relay coil, but only after the coil has been continuously powered for the specified amount of time. In other words, the direction of the contact's motion (either to close or to open) is identical to a regular NO contact, but there is a delay in closing direction. Because the delay occurs in the direction of coil energization, this type of contact is alternatively known as a normally open, on-delay.
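A small sketch of the on-delay behaviour just described (a hypothetical class, not an API from any relay or PLC library): the contact reports closed only once the coil has been continuously energized for the full preset time, and it opens again as soon as the coil is de-energized.

class OnDelayContact:
    # Normally open, timed-closed (NOTC) contact, evaluated in fixed time steps.
    def __init__(self, delay_s: float):
        self.delay_s = delay_s
        self.energized_for = 0.0

    def update(self, coil_energized: bool, dt: float) -> bool:
        # Advance the timer by dt seconds and return True if the contact is closed.
        if coil_energized:
            self.energized_for += dt
        else:
            self.energized_for = 0.0      # de-energizing the coil reopens the contact at once
        return coil_energized and self.energized_for >= self.delay_s

contact = OnDelayContact(delay_s=2.0)
print([contact.update(True, 1.0) for _ in range(3)])   # [False, True, True]: closes after 2 s of coil power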
Vacuum relays
A vacuum relay is a sensitive relay having its contacts mounted in an evacuated glass housing, to permit handling radio-frequency voltages as high as 20,000 volts without flashover between contacts even though contact spacing is as low as a few hundredths of an inch when open.
Applications
Relays are used wherever it is necessary to control a high power or high voltage circuit with a low power circuit, especially when galvanic isolation is desirable. The first application of relays was in long telegraph lines, where the weak signal received at an intermediate station could control a contact, regenerating the signal for further transmission. High-voltage or high-current devices can be controlled with small, low voltage wiring and pilot switches. Operators can be isolated from the high voltage circuit. Low power devices such as microprocessors can drive relays to control electrical loads beyond their direct drive capability. In an automobile, a starter relay allows the high current of the cranking motor to be controlled with small wiring and contacts in the ignition key.
Electromechanical switching systems including Strowger and crossbar telephone exchanges made extensive use of relays in ancillary control circuits. The Relay Automatic Telephone Company also manufactured telephone exchanges based solely on relay switching techniques designed by Gotthilf Ansgarius Betulander. The first public relay based telephone exchange in the UK was installed in Fleetwood on 15 July 1922 and remained in service until 1959.
The use of relays for the logical control of complex switching systems like telephone exchanges was studied by Claude Shannon, who formalized the application of Boolean algebra to relay circuit design in A Symbolic Analysis of Relay and Switching Circuits. Relays can perform the basic operations of Boolean combinatorial logic. For example, the Boolean AND function is realised by connecting normally open relay contacts in series, the OR function by connecting normally open contacts in parallel. Inversion of a logical input can be done with a normally closed contact. Relays were used for control of automated systems for machine tools and production lines. The Ladder programming language is often used for designing relay logic networks.
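As a minimal Python sketch of this correspondence (function names are arbitrary and the example is purely illustrative), series and parallel normally open contacts and a normally closed contact behave as follows:

def series_no(a, b):       # two normally open contacts in series -> AND
    return a and b

def parallel_no(a, b):     # two normally open contacts in parallel -> OR
    return a or b

def nc_contact(a):         # one normally closed contact -> NOT
    return not a

# Truth-table check of the three relay-logic building blocks
for a in (False, True):
    for b in (False, True):
        print(a, b, series_no(a, b), parallel_no(a, b), nc_contact(a))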
Early electro-mechanical computers such as the ARRA, Harvard Mark II, Zuse Z2, and Zuse Z3 used relays for logic and working registers. However, electronic devices proved faster and easier to use.
Relays are much more resistant than semiconductors to nuclear radiation, so they are widely used in safety-critical logic, such as the control panels of radioactive waste-handling machinery. Electromechanical protective relays are used to detect overload and other faults on electrical lines by opening and closing circuit breakers.
Protective relays
For protection of electrical apparatus and transmission lines, electromechanical relays with accurate operating characteristics were used to detect overload, short-circuits, and other faults. While many such relays remain in use, digital protective relays now provide equivalent and more complex protective functions.
Railway signaling
Railway signalling relays are large considering the mostly small voltages (less than 120 V) and currents (perhaps 100 mA) that they switch. Contacts are widely spaced to prevent flashovers and short circuits over a lifetime that may exceed fifty years.
Since rail signal circuits must be highly reliable, special techniques are used to detect and prevent failures in the relay system. To protect against false feeds, double switching relay contacts are often used on both the positive and negative side of a circuit, so that two false feeds are needed to cause a false signal. Not all relay circuits can be proved, so there is reliance on construction features such as carbon-to-silver contacts to resist lightning-induced contact welding and to provide AC immunity.
Opto-isolators are also used in some instances with railway signalling, especially where only a single contact is to be switched.
Selection considerations
Selection of an appropriate relay for a particular application requires evaluation of many different factors:
Number and type of contacts — normally open, normally closed, (double-throw)
Contact sequence — "make before break" or "break before make". For example, the old style telephone exchanges required make-before-break so that the connection did not get dropped while dialing the number.
Contact current rating — small relays switch a few amperes, large contactors are rated for up to 3000 amperes, alternating or direct current
Contact voltage rating — typical control relays rated 300 VAC or 600 VAC, automotive types to 50 VDC, special high-voltage relays to about 15,000 V
Operating lifetime, useful life — the number of times the relay can be expected to operate reliably. There is both a mechanical life and a contact life. The contact life is affected by the type of load switched. Breaking load current causes undesired arcing between the contacts, eventually leading to contacts that weld shut or contacts that fail due to erosion by the arc.
Coil voltage — machine-tool relays usually 24 VDC, 120 or 250 VAC; relays for switchgear may have 125 V or 250 V DC coils
Coil current — Minimum current required for reliable operation and minimum holding current, as well as effects of power dissipation on coil temperature at various duty cycles. "Sensitive" relays operate on a few milliamperes.
Package/enclosure — open, touch-safe, double-voltage for isolation between circuits, explosion proof, outdoor, oil and splash resistant, washable for printed circuit board assembly
Operating environment — minimum and maximum operating temperature and other environmental considerations, such as effects of humidity and salt
Assembly — Some relays feature a sticker that keeps the enclosure sealed to allow PCB post soldering cleaning, which is removed once assembly is complete.
Mounting — sockets, plug board, rail mount, panel mount, through-panel mount, enclosure for mounting on walls or equipment
Switching time — where high speed is required
"Dry" contacts — when switching very low level signals, special contact materials may be needed such as gold-plated contacts
Contact protection — suppress arcing in very inductive circuits
Coil protection — suppress the surge voltage produced when switching the coil current
Isolation between coil and contacts
Aerospace or radiation-resistant testing, special quality assurance
Expected mechanical loads due to acceleration — some relays used in aerospace applications are designed to function in shock loads of 50 g, or more.
Size — smaller relays often resist mechanical vibration and shock better than larger relays, because of the lower inertia of the moving parts and the higher natural frequencies of smaller parts. Larger relays often handle higher voltage and current than smaller relays.
Accessories such as timers, auxiliary contacts, pilot lamps, and test buttons.
Regulatory approvals.
Stray magnetic linkage between coils of adjacent relays on a printed circuit board.
There are many considerations involved in the correct selection of a control relay for a particular application, including factors such as speed of operation, sensitivity, and hysteresis. Although typical control relays operate in the 5 ms to 20 ms range, relays with switching speeds as fast as 100 μs are available. Reed relays which are actuated by low currents and switch fast are suitable for controlling small currents.
As with any switch, the contact current (unrelated to the coil current) must not exceed a given value to avoid damage. In high-inductance circuits such as motors, other issues must be addressed. When an inductance is connected to a power source, an input surge current or electromotor starting current larger than the steady-state current exists. When the circuit is broken, the current cannot change instantaneously, which creates a potentially damaging arc across the separating contacts.
Consequently, for relays used to control inductive loads, three contact ratings must be specified: the make rating (the maximum current that may flow through the contacts when the relay actuates), the continuous rating, and the break rating. The make rating may be several times larger than the continuous rating, which in turn is larger than the break rating.
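A minimal sketch of how these three ratings might be checked against a load follows; every figure is invented for illustration and is not taken from any datasheet.

# Hypothetical relay contact ratings and load currents, in amperes
relay = {"make_A": 30.0, "continuous_A": 10.0, "break_A": 3.0}
load  = {"inrush_A": 25.0, "running_A": 8.0, "breaking_A": 8.0}

suitable = (load["inrush_A"]   <= relay["make_A"] and
            load["running_A"]  <= relay["continuous_A"] and
            load["breaking_A"] <= relay["break_A"])

print("relay suitable for this load:", suitable)   # False here: the break rating is exceeded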
Safety and reliability
Switching while "wet" (under load) causes undesired arcing between the contacts, eventually leading to contacts that weld shut or contacts that fail due to a buildup of surface damage caused by the destructive arc energy.
Inside the Number One Electronic Switching System (1ESS) crossbar switch and certain other high-reliability designs, the reed switches are always switched "dry" (without load) to avoid that problem, leading to much longer contact life.
Without adequate contact protection, the occurrence of electric current arcing causes significant degradation of the contacts, which suffer significant and visible damage. Every time the relay contacts open or close under load, an electrical arc can occur between the contacts of the relay, either a break arc (when opening), or a make / bounce arc (when closing). In many situations, the break arc is more energetic and thus more destructive, in particular with inductive loads, but this can be mitigated by bridging the contacts with a snubber circuit. The inrush current of tungsten filament incandescent lamps is typically ten times the normal operating current. Thus, relays intended for tungsten loads may use special contact composition, or the relay may have lower contact ratings for tungsten loads than for purely resistive loads.
An electrical arc across relay contacts can be very hot — thousands of degrees Fahrenheit — causing the metal on the contact surfaces to melt, pool, and migrate with the current. The extremely high temperature of the arc splits the surrounding gas molecules, creating ozone, carbon monoxide, and other compounds. Over time, the arc energy slowly destroys the contact metal, causing some material to escape into the air as fine particulate matter. This action gradually degrades the contact material, ultimately resulting in device failure. This contact degradation drastically limits the overall life of a relay to a range of about 10,000 to 100,000 operations, a level far below the mechanical life of the device, which can be in excess of 20 million operations.
| Technology | Components | null |
26685 | https://en.wikipedia.org/wiki/Statistics | Statistics | Statistics (from German: , "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation.
Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences made using mathematical statistics employ the framework of probability theory, which deals with the analysis of random phenomena.
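For example, the two descriptive summaries mentioned above can be computed directly with Python's standard library; the sample values below are invented for illustration.

import statistics

sample = [2.1, 2.4, 1.9, 2.6, 2.2, 2.8]

print("mean:", statistics.mean(sample))                         # central tendency
print("sample standard deviation:", statistics.stdev(sample))   # dispersion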
A standard statistical procedure involves the collection of data leading to a test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is rejected when it is in fact true, giving a "false positive") and Type II errors (null hypothesis fails to be rejected when it is in fact false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis.
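A minimal sketch of this framework, assuming SciPy is available, is shown below; the two groups and the significance level are illustrative, and the two-sample t-test is only one of many possible tests.

from scipy import stats

# Invented data for two groups; the null hypothesis is "both groups have the same mean"
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7]

result = stats.ttest_ind(group_a, group_b)
alpha = 0.05                      # tolerated Type I error rate
print("p-value:", result.pvalue)
if result.pvalue < alpha:
    print("reject the null hypothesis (a Type I error is possible if the null is in fact true)")
else:
    print("fail to reject the null hypothesis (a Type II error is possible if the null is in fact false)")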
Statistical measurement processes are also prone to error in regards to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.
Introduction
"Statistics is both the science of uncertainty and the technology of extracting information from data." - featured in the International Encyclopedia of Statistical Science.Statistics is the discipline that deals with data, facts and figures with which meaningful information is inferred. Data may represent a numerical value, in form of quantitative data, or a label, as with qualitative data. Data may be collected, presented and summarised, in one of two methods called descriptive statistics. Two elementary summaries of data, singularly called a statistic, are the mean and dispersion. Whereas inferential statistics interprets data from a population sample to induce statements and predictions about a population.
Statistics is regarded as a body of science or a branch of mathematics. It is based on probability, a branch of mathematics that studies random events. Statistics is considered the science of uncertainty. This arises from the ways it copes with measurement and sampling error, as well as with uncertainties in modelling. Although probability and statistics were once paired together as a single subject, they are conceptually distinct from one another. The former deduces answers to specific situations from a general theory of probability, while statistics induces statements about a population based on a data set. Statistics serves to bridge the gap between probability and applied mathematical fields.
Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is generally concerned with the use of data in the context of uncertainty and decision-making in the face of uncertainty. Statistics is indexed at 62, a subclass of probability theory and stochastic processes, in the Mathematics Subject Classification. Mathematical statistics is covered in the range 276-280 of subclass QA (science > mathematics) in the Library of Congress Classification.
The word statistics ultimately comes from the Latin word status, meaning "situation" or "condition" in society, which in late Latin adopted the meaning "state". Derived from this, the political scientist Gottfried Achenwall coined the German word Statistik (a summary of how things stand). In 1770, the term entered the English language through German and referred to the study of political arrangements. The term gained its modern meaning in the 1790s in John Sinclair's works. In modern German, the term Statistik is synonymous with mathematical statistics. The term statistic, in the singular, refers to a function computed from sample data; the value that the function returns is known by the same name.
Statistical data
Data collection
Sampling
When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models.
To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. There are also methods of experimental design that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population.
Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction—inductively inferring from samples to the parameters of a larger or total population.
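The following Python sketch illustrates a simple random sample drawn from a synthetic population; the population parameters and sample size are arbitrary.

import random
import statistics

random.seed(0)
population = [random.gauss(50, 10) for _ in range(100_000)]   # synthetic population
sample = random.sample(population, k=500)                      # simple random sample without replacement

print("population mean:", statistics.mean(population))
print("sample estimate :", statistics.mean(sample))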
Experimental and observational studies
A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements with different levels using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data—like natural experiments and observational studies—for which a statistician would use a modified, more structured estimation method (e.g., difference in differences estimation and instrumental variables, among many others) that produces consistent estimators.
Experiments
The basic steps of a statistical experiment are:
Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary. Statisticians recommend that experiments compare (at least) one new treatment with a standard treatment or control, to allow an unbiased estimate of the difference in treatment effects.
Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the performance of the experiment and which specifies the primary analysis of the experimental data.
Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol.
Further examining the data set in secondary analyses, to suggest new hypotheses for future study.
Documenting and presenting the results of the study.
Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to the finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed.
Observational study
An example of an observational study is one that explores the association between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a cohort study, and then look for the number of cases of lung cancer in each group. A case-control study is another type of observational study in which people with and without the outcome of interest (e.g. lung cancer) are invited to participate and their exposure histories are collected.
Types of data
Various attempts have been made to produce a taxonomy of levels of measurement. The psychophysicist Stanley Smith Stevens defined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation.
Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature. Such distinctions can often be loosely correlated with data type in computer science, in that dichotomous categorical variables may be represented with the Boolean data type, polytomous categorical variables with arbitrarily assigned integers in the integral data type, and continuous variables with the real data type involving floating-point arithmetic. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented.
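A small Python sketch of this loose correspondence follows; the record and its field names are invented for illustration.

record = {
    "smoker": True,          # dichotomous categorical  -> Boolean
    "blood_group": 2,        # polytomous categorical   -> arbitrarily assigned integer code
    "temperature_c": 36.6,   # interval measurement     -> floating point
    "height_m": 1.74,        # ratio measurement        -> floating point
}
for field, value in record.items():
    print(field, type(value).__name__)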
Other categorizations have been proposed. For example, Mosteller and Tukey (1977) distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. | Mathematics | Mathematics | null |
26691 | https://en.wikipedia.org/wiki/Set%20%28mathematics%29 | Set (mathematics) | In mathematics, a set is a collection of different things; these things are called elements or members of the set and are typically mathematical objects of any kind: numbers, symbols, points in space, lines, other geometrical shapes, variables, or even other sets. A set may have a finite number of elements or be an infinite set. There is a unique set with no elements, called the empty set; a set with a single element is a singleton.
Sets are uniquely characterized by their elements; this means that two sets that have precisely the same elements are equal (they are the same set). This property is called extensionality. In particular, this implies that there is only one empty set.
Sets are ubiquitous in modern mathematics. Indeed, set theory, more specifically Zermelo–Fraenkel set theory, has been the standard way to provide rigorous foundations for all branches of mathematics since the first half of the 20th century.
Definition and notation
Mathematical texts commonly denote sets by capital letters in italic, such as , , . A set may also be called a collection or family, especially when its elements are themselves sets.
Roster notation
Roster or enumeration notation defines a set by listing its elements between curly brackets, separated by commas:
This notation was introduced by Ernst Zermelo in 1908. In a set, all that matters is whether each element is in it or not, so the ordering of the elements in roster notation is irrelevant (in contrast, in a sequence, a tuple, or a permutation of a set, the ordering of the terms matters). For example, and represent the same set.
For sets with many elements, especially those following an implicit pattern, the list of members can be abbreviated using an ellipsis '…'. For instance, the set of the first thousand positive integers may be specified in roster notation as
Infinite sets in roster notation
An infinite set is a set with an infinite number of elements. If the pattern of its elements is obvious, an infinite set can be given in roster notation, with an ellipsis placed at the end of the list, or at both ends, to indicate that the list continues forever. For example, the set of nonnegative integers is
and the set of all integers is
Semantic definition
Another way to define a set is to use a rule to determine what the elements are:
Such a definition is called a semantic description.
Set-builder notation
Set-builder notation specifies a set as a selection from a larger set, determined by a condition on the elements. For example, a set can be defined as follows:
In this notation, the vertical bar "|" means "such that", and the description can be interpreted as " is the set of all numbers such that is an integer in the range from 0 to 19 inclusive". Some authors use a colon ":" instead of the vertical bar.
Classifying methods of definition
Philosophy uses specific terms to classify types of definitions:
An intensional definition uses a rule to determine membership. Semantic definitions and definitions using set-builder notation are examples.
An extensional definition describes a set by listing all its elements. Such definitions are also called enumerative.
An ostensive definition is one that describes a set by giving examples of elements; a roster involving an ellipsis would be an example.
Membership
If is a set and is an element of , this is written in shorthand as , which can also be read as "x belongs to B", or "x is in B". The statement "y is not an element of B" is written as , which can also be read as "y is not in B".
For example, with respect to the sets , , and ,
The empty set
The empty set (or null set) is the unique set that has no members. It is denoted , , , , or .
Singleton sets
A singleton set is a set with exactly one element; such a set may also be called a unit set. Any such set can be written as , where x is the element.
The set and the element x mean different things; Halmos draws the analogy that a box containing a hat is not the same as the hat.
Subsets
If every element of set A is also in B, then A is described as being a subset of B, or contained in B, written , or . The latter notation may be read B contains A, B includes A, or B is a superset of A. The relationship between sets established by ⊆ is called inclusion or containment. Two sets are equal if they contain each other: A ⊆ B and B ⊆ A is equivalent to A = B.
If A is a subset of B, but A is not equal to B, then A is called a proper subset of B. This can be written . Likewise, means B is a proper superset of A, i.e. B contains A, and is not equal to A.
A third pair of operators ⊂ and ⊃ are used differently by different authors: some authors use and to mean A is any subset of B (and not necessarily a proper subset), while others reserve and for cases where A is a proper subset of B.
Examples:
The set of all humans is a proper subset of the set of all mammals.
.
.
The empty set is a subset of every set, and every set is a subset of itself:
.
.
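These subset relations can be checked directly with Python's built-in set type; the example sets are arbitrary.

A = {1, 2}
B = {1, 2, 3}

print(A.issubset(B))        # True: A is a subset of B
print(A < B)                # True: A is a proper subset of B
print(set().issubset(A))    # True: the empty set is a subset of every set
print(B.issubset(B))        # True: every set is a subset of itself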
Euler and Venn diagrams
An Euler diagram is a graphical representation of a collection of sets; each set is depicted as a planar region enclosed by a loop, with its elements inside. If is a subset of , then the region representing is completely inside the region representing . If two sets have no elements in common, the regions do not overlap.
A Venn diagram, in contrast, is a graphical representation of sets in which the loops divide the plane into zones such that for each way of selecting some of the sets (possibly all or none), there is a zone for the elements that belong to all the selected sets and none of the others. For example, if the sets are , , and , there should be a zone for the elements that are inside and and outside (even if such elements do not exist).
Special sets of numbers in mathematics
There are sets of such mathematical importance, to which mathematicians refer so frequently, that they have acquired special names and notational conventions to identify them.
Many of these important sets are represented in mathematical texts using bold (e.g. ) or blackboard bold (e.g. ) typeface. These include
or , the set of all natural numbers: (often, authors exclude );
or , the set of all integers (whether positive, negative or zero): ;
or , the set of all rational numbers (that is, the set of all proper and improper fractions): . For example, and ;
or , the set of all real numbers, including all rational numbers and all irrational numbers (which include algebraic numbers such as that cannot be rewritten as fractions, as well as transcendental numbers such as and );
or , the set of all complex numbers: , for example, .
Each of the above sets of numbers has an infinite number of elements. Each is a subset of the sets listed below it.
Sets of positive or negative numbers are sometimes denoted by superscript plus and minus signs, respectively. For example, represents the set of positive rational numbers.
Functions
A function (or mapping) from a set to a set is a rule that assigns to each "input" element of an "output" that is an element of ; more formally, a function is a special kind of relation, one that relates each element of to exactly one element of . A function is called
injective (or one-to-one) if it maps any two different elements of to different elements of ,
surjective (or onto) if for every element of , there is at least one element of that maps to it, and
bijective (or a one-to-one correspondence) if the function is both injective and surjective — in this case, each element of is paired with a unique element of , and each element of is paired with a unique element of , so that there are no unpaired elements.
An injective function is called an injection, a surjective function is called a surjection, and a bijective function is called a bijection or one-to-one correspondence.
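For finite sets these three properties can be checked mechanically. The following Python sketch represents a function as a dictionary from domain elements to codomain elements; the example mapping is arbitrary.

def is_injective(f):
    # No two domain elements share an output
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    # Every codomain element is hit by some input
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    return is_injective(f) and is_surjective(f, codomain)

f = {"a": 1, "b": 2, "c": 3}
print(is_injective(f), is_surjective(f, {1, 2, 3}), is_bijective(f, {1, 2, 3}))   # True True True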
Cardinality
The cardinality of a set , denoted , is the number of members of . For example, if , then . Repeated members in roster notation are not counted, so , too.
More formally, two sets share the same cardinality if there exists a bijection between them.
The cardinality of the empty set is zero.
Infinite sets and infinite cardinality
The list of elements of some sets is endless, or infinite. For example, the set of natural numbers is infinite. In fact, all the special sets of numbers mentioned in the section above are infinite. Infinite sets have infinite cardinality.
Some infinite cardinalities are greater than others. Arguably one of the most significant results from set theory is that the set of real numbers has greater cardinality than the set of natural numbers. Sets with cardinality less than or equal to that of are called countable sets; these are either finite sets or countably infinite sets (sets of the same cardinality as ); some authors use "countable" to mean "countably infinite". Sets with cardinality strictly greater than that of are called uncountable sets.
However, it can be shown that the cardinality of a straight line (i.e., the number of points on a line) is the same as the cardinality of any segment of that line, of the entire plane, and indeed of any finite-dimensional Euclidean space.
The continuum hypothesis
The continuum hypothesis, formulated by Georg Cantor in 1878, is the statement that there is no set with cardinality strictly between the cardinality of the natural numbers and the cardinality of a straight line. In 1963, Paul Cohen proved that the continuum hypothesis is independent of the axiom system ZFC consisting of Zermelo–Fraenkel set theory with the axiom of choice. (ZFC is the most widely-studied version of axiomatic set theory.)
Power sets
The power set of a set is the set of all subsets of . The empty set and itself are elements of the power set of , because these are both subsets of . For example, the power set of is . The power set of a set is commonly written as or .
If has elements, then has elements. For example, has three elements, and its power set has elements, as shown above.
If is infinite (whether countable or uncountable), then is uncountable. Moreover, the power set is always strictly "bigger" than the original set, in the sense that any attempt to pair up the elements of with the elements of will leave some elements of unpaired. (There is never a bijection from onto .)
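For finite sets, the power set can be generated with Python's standard library, confirming that a set with n elements has 2**n subsets; the example set is arbitrary.

from itertools import chain, combinations

def power_set(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

S = {"a", "b", "c"}
P = power_set(S)
print(len(P), 2 ** len(S))   # 8 8
print(P)                     # includes set() and {'a', 'b', 'c'} themselves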
Partitions
A partition of a set S is a set of nonempty subsets of S, such that every element x in S is in exactly one of these subsets. That is, the subsets are pairwise disjoint (meaning any two sets of the partition contain no element in common), and the union of all the subsets of the partition is S.
Basic operations
Suppose that a universal set (a set containing all elements being discussed) has been fixed, and that is a subset of .
The complement of is the set of all elements (of ) that do not belong to . It may be denoted or . In set-builder notation, . The complement may also be called the absolute complement to distinguish it from the relative complement below. Example: If the universal set is taken to be the set of integers, then the complement of the set of even integers is the set of odd integers.
Given any two sets and ,
their union is the set of all things that are members of A or B or both.
their intersection is the set of all things that are members of both A and B. If , then and are said to be disjoint.
the set difference (also written ) is the set of all things that belong to but not . Especially when is a subset of , it is also called the relative complement of in . With as the absolute complement of B (in the universal set ), .
their symmetric difference is the set of all things that belong to or but not both. One has .
their cartesian product is the set of all ordered pairs such that is an element of and is an element of .
Examples:
}.
}.
}.
}.
}.
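The operations defined above correspond directly to operators on Python's built-in set type, with the cartesian product available through the standard library; the two example sets are arbitrary.

from itertools import product

A = {1, 2, 3}
B = {3, 4}

print(A | B)                  # union: {1, 2, 3, 4}
print(A & B)                  # intersection: {3}
print(A - B)                  # set difference: {1, 2}
print(A ^ B)                  # symmetric difference: {1, 2, 4}
print(set(product(A, B)))     # cartesian product: all ordered pairs (a, b)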
The operations above satisfy many identities. For example, one of De Morgan's laws states that (that is, the elements outside the union of and are the elements that are outside and outside ).
The cardinality of is the product of the cardinalities of and . This is an elementary fact when and are finite. When one or both are infinite, multiplication of cardinal numbers is defined to make this true.
The power set of any set becomes a Boolean ring with symmetric difference as the addition of the ring and intersection as the multiplication of the ring.
Applications
Sets are ubiquitous in modern mathematics. For example, structures in abstract algebra, such as groups, fields and rings, are sets closed under one or more operations.
One of the main applications of naive set theory is in the construction of relations. A relation from a domain to a codomain is a subset of the Cartesian product . For example, considering the set of shapes in the game of the same name, the relation "beats" from to is the set ; thus beats in the game if the pair is a member of . Another example is the set of all pairs , where is real. This relation is a subset of , because the set of all squares is subset of the set of all real numbers. Since for every in , one and only one pair is found in , it is called a function. In functional notation, this relation can be written as .
Principle of inclusion and exclusion
The inclusion–exclusion principle is a technique for counting the elements in a union of two finite sets in terms of the sizes of the two sets and their intersection. It can be expressed symbolically as
A more general form of the principle gives the cardinality of any finite union of finite sets:
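A quick numerical check of the two-set form of the principle, |A ∪ B| = |A| + |B| - |A ∩ B|, on arbitrary example sets:

A = {n for n in range(1, 31) if n % 2 == 0}   # even numbers up to 30
B = {n for n in range(1, 31) if n % 3 == 0}   # multiples of 3 up to 30

lhs = len(A | B)
rhs = len(A) + len(B) - len(A & B)
print(lhs, rhs, lhs == rhs)   # 20 20 True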
History
The concept of a set emerged in mathematics at the end of the 19th century. The German word for set, Menge, was coined by Bernard Bolzano in his work Paradoxes of the Infinite. Georg Cantor, one of the founders of set theory, gave the following definition at the beginning of his Beiträge zur Begründung der transfiniten Mengenlehre:
Bertrand Russell introduced the distinction between a set and a class (a set is a class, but some classes, such as the class of all sets, are not sets; see Russell's paradox):
Naive set theory
The foremost property of a set is that it can have elements, also called members. Two sets are equal when they have the same elements. More precisely, sets A and B are equal if every element of A is an element of B, and every element of B is an element of A; this property is called the extensionality of sets. As a consequence, e.g. and represent the same set. Unlike sets, multisets can be distinguished by the number of occurrences of an element; e.g. and represent different multisets, while and are equal. Tuples can even be distinguished by element order; e.g. and represent different tuples.
The simple concept of a set has proved enormously useful in mathematics, but paradoxes arise if no restrictions are placed on how sets can be constructed:
Russell's paradox shows that the "set of all sets that do not contain themselves", i.e., , cannot exist.
Cantor's paradox shows that "the set of all sets" cannot exist.
Naïve set theory defines a set as any well-defined collection of distinct elements, but problems arise from the vagueness of the term well-defined.
Axiomatic set theory
In subsequent efforts to resolve these paradoxes since the time of the original formulation of naïve set theory, the properties of sets have been defined by axioms. Axiomatic set theory takes the concept of a set as a primitive notion. The purpose of the axioms is to provide a basic framework from which to deduce the truth or falsity of particular mathematical propositions (statements) about sets, using first-order logic. According to Gödel's incompleteness theorems, however, it is not possible to use first-order logic to prove any such particular axiomatic set theory is free from paradox.
| Mathematics | Mathematics: General | null |