text | source |
|---|---|
The Reich Labour Service ( Reichsarbeitsdienst ; RAD) was a major paramilitary organization established in Nazi Germany as an agency to help mitigate the effects of unemployment on the German economy , militarise the workforce and indoctrinate it with Nazi ideology. It was the official state labour service, divided into separate sections for men and women.
From June 1935 onward, men aged between 18 and 25 served a compulsory six months in the RAD before their military service. During World War II , compulsory service also included young women, and the RAD developed into an auxiliary formation which provided support for the Wehrmacht .
In the course of the Great Depression , the German government of the Weimar Republic under Chancellor Heinrich Brüning established the Freiwilliger Arbeitsdienst ('Voluntary Labour Service', FAD) by emergency decree on 5 June 1931, two years before the Nazi Party (NSDAP) ascended to national power. The state-sponsored employment organisation provided services to civic and land improvement projects; from 16 July 1932 it was headed by Friedrich Syrup in the official rank of a Reichskommissar . As the name stated, participation was voluntary as long as the Weimar Republic existed.
The concept was adopted by Adolf Hitler , who upon the Nazi seizure of power in 1933 appointed Konstantin Hierl state secretary in the Reich Ministry of Labour, responsible for FAD matters. Hierl was already a high-ranking member of the NSDAP and head of the party's labour organisation, the Nationalsozialistischer Arbeitsdienst or NSAD. Hierl developed the concept of a state labour service organisation similar to the Reichswehr army, with a view to implementing compulsory service. Because such a scheme could be seen as an evasion of the restrictions set by the 1919 Treaty of Versailles , the service initially remained voluntary after protests at the Geneva World Disarmament Conference .
Hierl's rivalry with Labour Minister Franz Seldte led to the affiliation of his office as FAD Reichskommissar with the Interior Ministry under his fellow party member Wilhelm Frick . On 11 July 1934, the NSAD was renamed Reichsarbeitsdienst or RAD with Hierl as its director until the end of World War II. By law issued on 26 June 1935, the RAD was re-established as an amalgamation of the many prior labour organisations formed in Germany during the Weimar Republic, [ 2 ] with Hierl appointed as Reich Labour Leader ( Reichsarbeitsführer ) according to the Führerprinzip . With massive financial support from the German government, RAD members were to provide service for civic and agricultural construction projects. Per the Reich Labor Service Act of June 26, 1935: [ 3 ]
§ 1.
(1) The Reich Labor Service is honorary service to the German people.
(2) All young Germans of both sexes are obliged to serve their people in the Reich Labor Service.
(3) The Reich Labor Service is intended to educate German youth in the spirit of National Socialism in national community and in the true concept of work, above all in the due respect for manual work.
(4) The Reich Labor Service is intended to carry out charitable work.
§ 2.
(1) The Reich Labor Service is subordinate to the Reich Minister of the Interior. Under him, the Reich Labor Leader exercises command over the Reich Labor Service.
(2) The Reich Labor Leader stands at the head of the Reich leadership of the Labor Service; he determines the organization, regulates the work assignment and directs training and education.
The RAD was divided into two major sections, one for men ( Reichsarbeitsdienst Männer – RAD/M ) and one for young women ( Reichsarbeitsdienst der weiblichen Jugend – RAD/wJ ), which was voluntary until it was made compulsory in 1939.
The RAD was composed of 33 districts each called an Arbeitsgau ( lit. ' Work District ' ) similar to the Gaue subdivisions of the Nazi Party. Each of these districts was headed by an Arbeitsgauführer officer with headquarters staff and a Wachkompanie (Guard Company). Under each district were between six and eight Arbeitsgruppen (Work Groups), battalion-sized formations of 1200–1800 men. These groups were divided into six company -sized RAD-Abteilung units.
Conscripted personnel had to move into labour barracks. Each rank-and-file RAD man was supplied with a spade and a bicycle . A paramilitary uniform was introduced in 1934; in addition to the swastika brassard, the RAD symbol, an arm badge in the shape of an upward-pointing shovel blade, was displayed on the upper left shoulder of all uniforms and greatcoats. Men and women had to work up to 76 hours a week.
The Arbeits Dank program provided health and life insurance for NSAD members (from November 1933 to June 1935) and RAD workers (from June 1935 to 1945) in case they became ill or were injured or killed while on the job. The pre-war organization would also provide funding for education or training for poor members so they could learn a trade or obtain a university degree. Members had to carry a Mitgliedskarte ("membership card") that gave personal information (name, birthdate, and birthplace) and identified which Arbeitsgau and Mitgliedschaft ("membership group") they were assigned to, similar to a soldier's Soldbuch ("military identification booklet").
Workers who benefited from the Arbeits Dank program were encouraged to pay back into it with donations. Donors received an enameled Erinnerungsnadel ("commemorative pin") that used the oval NSAD or RAD symbol with the text Arbeits / Dank added in the colored border. Officials and employees of the organization wore a larger version of the pin to indicate their status.
The RAD was classed as Wehrmachtgefolge ( lit. ' Defence Force Followers ' ). Auxiliary forces with this status, while not a part of the Armed Forces themselves, provided such vital support that they were given protection by the Geneva Convention . Some, including the RAD, were militarized.
Just prior to the outbreak of World War II, nearly all the RAD/M's extant RAD-Abteilung units were either incorporated into the Heer 's Bautruppen (Construction troops) as an expedient to rapidly increase their numbers or else in a few cases transferred to the Luftwaffe to form the basis of new wartime construction units for that service. New units were quickly formed to replace them.
During the early war Norwegian and Western campaigns, hundreds of RAD units were engaged in supplying frontline troops with food and ammunition, repairing damaged roads and constructing and repairing airstrips. Throughout the course of the war, the RAD were involved in many projects. [ 4 ] The RAD units constructed coastal fortifications (many RAD men worked on the Atlantic Wall ), laid minefields, manned fortifications, and even helped guard vital locations and prisoners.
The role of the RAD was not limited to combat support functions. Hundreds of RAD units received training as anti-aircraft units and were deployed as RAD Flak Batteries. [ 4 ] Several RAD units also fought as infantry on the eastern front. As the German defences were devastated, more and more RAD men were committed to combat. During the final months of the war, RAD men formed six major frontline units, which were involved in serious fighting.
During Operation Market-Garden in September 1944, RAD troops were used as reinforcements. Losses for these troops were in the hundreds. Some RAD troops were assigned to the 9th SS Pionier Abteilung ("Engineer Battalion") under SS- Hauptsturmführer Hans Moeller as part of Kampfgruppe Moeller. The understrength unit was made up of 90 pioneers armed with flamethrowers and extra machine guns, which Moeller divided into two assault companies. On 17 September, SS- Kampfgruppe Moeller advanced from the railway station but was blocked just east of the Arnhem town square by the British 2nd and 3rd Parachute Battalions. They engaged in intense house-to-house fighting, which allowed their parent formation SS- Kampfgruppe Spindler to dig in and form a defensive line. The 2nd Parachute Battalion under Col. John Frost slipped past and took the Arnhem Bridge , but was then encircled by the German forces. [ citation needed ]
Moeller's Pioneers were then involved in the fighting on 18 September to reduce the British perimeter and retake the northern end of the Arnhem bridge. It was noted that the RAD troops had no combat experience. Captain Moeller's report concluded: "These men were rather skeptical and reluctant at the beginning, which was hardly surprising. But when they were put in the right place they helped us a lot; and in time they integrated completely, becoming good and reliable comrades." [ 5 ]
| https://en.wikipedia.org/wiki/Reich_Labour_Service |
The Reichert value (also Reichert-Meissl number , Reichert-Meissl-Wollny value or Reichert-Meissl-Wollny number [ citation needed ] ) is a value determined when examining fats and oils . The Reichert value is an indicator of how much volatile fatty acid can be extracted from a particular fat or oil through saponification . It is equal to the number of millilitres of 0.1 normal hydroxide solution necessary for the neutralization of the water-soluble volatile fatty acids distilled and filtered from 5 grams of a given saponified fat. (The hydroxide solution used in such a titration is typically made from sodium hydroxide , potassium hydroxide , or barium hydroxide .) [ 1 ]
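As a minimal arithmetic sketch of this definition (assuming a simple linear scaling of titrant equivalents to the standard 0.1 N / 5 g basis; the function name and example numbers are hypothetical):

```python
def reichert_value(titrant_ml: float, titrant_normality: float, sample_g: float) -> float:
    """Scale a titration result to the standard Reichert basis:
    millilitres of 0.1 N alkali per 5 g of saponified fat."""
    equivalents = titrant_ml * titrant_normality / 1000.0   # equivalents of alkali consumed
    ml_of_0_1_n = equivalents / 0.1 * 1000.0                # volume of 0.1 N alkali holding the same equivalents
    return ml_of_0_1_n * (5.0 / sample_g)                   # normalise to a 5 g fat sample

# Hypothetical titrations:
print(reichert_value(30.0, 0.1, 5.0))   # -> 30.0 (already on the standard basis)
print(reichert_value(15.0, 0.1, 2.5))   # -> 30.0 (half the sample, half the titrant, same value)
```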
This number is a useful indicator of non-fat compounds in edible fats, and is especially high in butter.
The value is named for the chemists who developed it, Emil Reichert and Emerich Meissl. [ 2 ]
The Polenske value and Kirschner value are related numbers based on similar tests.
The Reichert-Meissl value for milk ranges between 28.5 and 33.
| https://en.wikipedia.org/wiki/Reichert_value |
Reichle & De-Massari Holding AG ( R&M ), is a globally active corporate group in the information and communication technology sector, based in Wetzikon , Switzerland. The family company develops and produces connecting technology for communications networks , such as fiber-optic distributors, patch panels, data and communications network connection modules, as well as cables, housings, and software. [ 3 ] [ 4 ] [ 5 ]
When they were still working for a supplier of what was then Swiss Telecom PTT (currently Swiss Post and Swisscom ), Hans Reichle and Renato De-Massari had an idea for the development of a new, simpler to install telephone outlet . In 1964, they founded R&M as a two-man company to develop and produce what they called the Reichle connector . In subsequent years, the company extended its portfolio to include the entire connecting technology for data and voice networks based on copper cables for communications wiring. Fiber-optic communication technology was added in 1987. [ 6 ]
In Poland, R&M opened its first Eastern European subsidiary in 1993 by founding Reichle & De-Massari Polska Sp.z.o.o., Warsaw. [ 6 ]
Hans Reichle took over the company as sole owner in 1996; his partner Renato De-Massari died in 2000. In the years following, R&M drove international expansion by founding subsidiaries in Ukraine (Reichle & De-Massari Ukraine Ltd.), Malaysia (Reichle & De-Massari Malaysia) and Brazil (Reichle & De-Massari do Brasil Ltda.). [ 6 ]
In 1999, Reichle & De-Massari merged all shares of the company in Reichle Holding AG as a holding company owned by the Reichle family. In the same year, Hans Reichle retired and became a member of the board of directors, handing over operational responsibility to his two sons Martin (chairman of the management board) and Peter (operational management), who from then on managed the company together with other members of the management board. [ 6 ]
In 2006, R&M began to establish its own sales office and production in India . [ 7 ] [ 8 ] Two years later, the first management and logistics hub for the Asia-Pacific region was set up in Singapore . A subsidiary for product development and sales (Reichle & De-Massari Far East (Pte) Ltd.) had existed there since 1994. This was followed in 2009 by the establishment of a location in the United Arab Emirates. [ 9 ] [ 10 ] In 2010, the company moved into the R&M Kubus in Wetzikon, R&M's new corporate headquarters.
On September 1, 2012, Michel Riva was the first non-family member to be appointed CEO . [ 11 ] Martin and Peter Reichle continue to be members of the six-man board of directors and, in that function, represent the interests of the owner family.
In July 2012, R&M opened a branch in the Dubai Freezone , where products are assembled for the regions Middle East and Africa (MEA), and Asia. [ 11 ] [ 12 ] In October 2012, a further fiber-optic production plant was commissioned in Sofia, Bulgaria. [ 13 ] In the same year, the company opened a Saudi Arabian branch in Riyadh to pursue regional expansion plans and offer local customer service. In 2014, the company celebrated its 50th anniversary. [ 3 ] [ 14 ]
In February 2016, R&M acquired the fiber-optic specialist REALM Communications Group Inc. domiciled in Milpitas, California . The production and sales operations in North America have since been run as a subsidiary under the name R&M USA, Inc. [ 13 ] [ 15 ] In April 2017, R&M acquired the Brazilian corporate group Peltier Comércio e Industria LTDA (PETCOM), domiciled in Santa Rita do Sapucaí, the intention being to gain a foothold in the South American fiber-optic market. A production plant is part of the group. [ 16 ]
The company Transportkabel DIXI a.s., domiciled in Děčín , Czech Republic , acquired in May 2018, was renamed Reichle & De-Massari Czech Republic a.s. in August 2018. This takeover allowed R&M to produce its own fiber-optic cables for the very first time. [ 17 ] Furthermore, R&M opened a new production facility for fiber-optic products in Bangalore , India, in August 2018. With this investment, R&M is supporting the Make in India initiative. [ 18 ] [ 19 ] In May 2024, the Indian production facility relocated to a larger building in Bagaluru, Bangalore. With 400 workplaces, this is now the largest R&M production facility. [ 8 ]
With the takeover of Optimum Fiberoptics Inc. domiciled in Elkridge, Maryland , in March 2019, R&M further expanded its business activities in the US. [ 13 ] [ 15 ] In the same year, the company acquired Durack Intelligent Electric Co. Ltd., Jinshan District , Shanghai , a Chinese manufacturer of network cabinets and enclosures for data centers. [ 20 ] [ 21 ] In January 2022, R&M acquired a further manufacturer of network cabinets and enclosures for data centers, the Tecnosteel S.r.l. based in Brunello (VA), Italy . [ 22 ] [ 23 ] In the same year, Hans Hess, who had been chairman of the board of directors for 15 years, handed over his position to Thomas A. Ernst. [ 24 ]
R&M is an unlisted public limited company under the umbrella of Reichle Holding AG and is owned by the brothers Martin and Peter Reichle. [ 3 ] [ 4 ]
Reichle & De-Massari's sales revenue amounted to CHF 267.2 million in the 2023 financial year. The company generates around 80 percent of its sales outside of Switzerland. At the end of 2024, the company employed 1,700 people. Michel Riva is the chairman of the five-member executive board. The chairman of the five-member board of directors is Thomas A. Ernst. [ 25 ]
The international production network has its own plants (in Brazil, Bulgaria, China, Germany, India, Poland, Dubai, Saudi Arabia, Czech Republic) as well as competence centers and warehouse locations. [ 7 ] [ 12 ] [ 13 ] [ 8 ] R&M has around 40 subsidiaries in Australia , Brazil, Bulgaria, China, Germany , Dubai, France , United Kingdom , India, Italy, the Netherlands, Poland, Singapore, Spain , Ukraine, Hungary and the US. [ 26 ] [ 27 ] The company is also represented by sales partners in other countries. [ 9 ] [ 10 ]
R&M cooperates to some extent with other manufacturers in the development of new connectivity technologies. Among other activities, R&M participates in the Single Pair Ethernet System Alliance to promote the single pair ethernet (SPE) technology. [ 28 ] [ 29 ]
R&M develops, produces, and sells components and systems for communication and data networks. The company's cabling systems are used in office buildings and industry, for example. R&M also designs infrastructures and components for data centers . R&M develops fiber-optic cabling systems for nationwide broadband expansion to the subscriber ( fiber to the home , FTTH). For network levels 3 and 4, R&M supplies ready-to-install systems such as main distributors, points of presence (PoP), cross-connection cabinets, closures and building entry points (BEP).
Reichle & De-Massari's technology and product developments include:
The company was involved in technological developments, for example:
Since 2010, R&M has published an annual sustainability report. [ 43 ] [ 44 ] This provides information about the company’s measures in accordance with its own sustainability strategy, the UN’s 17 Sustainable Development Goals (SDGs), [ 45 ] and the ten principles of the UN Global Compact . In 2024, sustainability rating agency EcoVadis awarded R&M the gold medal for its progress in sustainability. | https://en.wikipedia.org/wiki/Reichle_&_De-Massari |
The Reichstein process in chemistry is a combined chemical and microbial method for the production of ascorbic acid from D-glucose in five steps. [ 1 ] This process was devised by Nobel Prize winner Tadeusz Reichstein and his colleagues in 1933 while working in the laboratory of the ETH in Zürich . [ chronology citation needed ]
The reaction steps are: (1) hydrogenation of D-glucose to D-sorbitol over a nickel catalyst; (2) microbial oxidation (fermentation) of D-sorbitol to L-sorbose with acetic acid bacteria ( Acetobacter ); (3) protection of the remaining hydroxyl groups of L-sorbose with acetone to give diacetone-L-sorbose; (4) oxidation of the protected sorbose to 2-keto-L-gulonic acid, followed by removal of the protecting groups by hydrolysis; and (5) ring closure (lactonization) to give L-ascorbic acid.
The microbial oxidation of sorbitol to sorbose is important because it provides the correct stereochemistry .
This process was patented and sold to Hoffmann-La Roche in 1934. [ chronology citation needed ] The first commercially sold vitamin C product was either Cebion from Merck or Redoxon from Hoffmann-La Roche. [ citation needed ]
Even today industrial methods for the production of ascorbic acid can be based on the Reichstein process. In modern methods, however, sorbose is directly oxidized with a platinum catalyst (developed by Kurt Heyns (1908–2005) in 1942). This method avoids the use of protective groups. With a particular modification of the process, 5-keto-D-gluconic acid is formed as a side product. [ 4 ]
A shorter biotechnological synthesis of ascorbic acid was announced in 1988 by Genencor International and Eastman Chemical . Glucose is converted to 2-keto-L-gulonic acid in two steps (via 2,4-diketo-L-gulonic acid intermediate) as compared to five steps in the traditional process. [ 5 ]
Though many organisms synthesize their own vitamin C, the steps can be different in plants and mammals. Smirnoff concluded that “..little is known about many of the enzymes involved in ascorbate biosynthesis or about the factors controlling flux through the pathways". [ 6 ] There is interest in finding alternatives to the Reichstein process. Experiments suggest that genetically modified bacteria might be commercially usable. [ 7 ] | https://en.wikipedia.org/wiki/Reichstein_process |
Reid's Paradox of Rapid Plant Migration or Reid's Paradox describes the observation from the paleoecological record that plant ranges shifted northward after the last glacial maximum at a faster rate than commonly observed seed dispersal rates would allow. [ 1 ] [ 2 ] Rare long-distance seed dispersal events have been hypothesized to explain these fast migration rates, but the dispersal vector(s) are still unknown. The plant species' geographic range expansion rates are compared to the actualistic rates of seed dispersal using mathematical models, and are graphically visualized using dispersal kernels. [ 2 ] [ 3 ] These observations made in the paleontological record, which inspired Reid's Paradox, are from fossilized remains of plant parts, including needles, leaves , pollen , and seeds , that can be used to identify past shifts in plant species' ranges.
Reid's Paradox is named after Clement Reid , a paleobotanist, who made the principal observations from the paleobotanical record in Europe in 1899. His comparison of oak tree seed dispersal rates with the observed range shifts of oak trees in the fossil record showed that the two did not agree. Reid hypothesized that diffusion alone could not explain the observed paradox, and supplemented his hypothesis by noting that birds were the likely cause of long-range seed dispersal. [ 1 ] Reid's Paradox has been subsequently documented across Europe and North America. [ 2 ] [ 3 ] [ 4 ]
Dispersal kernels are statistical models that represent the probability of seed dispersal from the source tree. Realistic biological data is required to complete the models. These data are used to accurately fill in variables such as seed number, seed size, and reproductive age. [ 3 ] Depending on the plant species, the variables in the equation will change. In the years since Reid hypothesized the methods for seed dispersal, the models have gained more complex elements which attempt to resolve Reid's Paradox. [ 2 ]
The dispersal of seeds from a parent tree initially follows a normal distribution , as predicted by a standard diffusion equation . However, biological phenomena complicate the diffusion equation by adding biotic vectors of dispersal such as blue jays and eastern grey squirrels , species which possess caching behaviors, and abiotic agents of dispersal such as high-velocity wind storms. [ 2 ] These additional vectors of seed dispersal give the dispersal kernels a "fat tail", that is, a large kurtosis . This means that the probability of a long-range dispersal event is higher than under the standard diffusion dispersal kernel. [ 2 ] [ 3 ] [ 5 ] In order to resolve Reid's Paradox, the vector(s) of seed dispersal which give the dispersal kernel a fat tail must be identified.
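The effect of such fat-tailed kernels on spread rate can be illustrated with a small simulation (a minimal sketch: the kernel shapes, parameter values and the simple "front advances by the farthest seed" rule below are illustrative assumptions, not fitted to any real species or to Reid's original data):

```python
import numpy as np

rng = np.random.default_rng(0)

def spread_rate(draw_dispersal, generations=100, seeds_per_gen=1000):
    """Crude front speed: each generation the range front advances by the
    largest dispersal distance drawn among the seeds produced at the front."""
    front = 0.0
    for _ in range(generations):
        front += np.max(draw_dispersal(seeds_per_gen))
    return front / generations

# Thin-tailed (Gaussian, diffusion-like) kernel, mean dispersal on the order of 50 m
def gaussian(n):
    return np.abs(rng.normal(0.0, 50.0, n))

# Fat-tailed kernel: mostly short distances, plus rare long-distance events
# (e.g. bird-mediated dispersal), modelled here as a simple mixture
def fat_tailed(n):
    local = rng.exponential(50.0, n)
    rare = rng.exponential(5000.0, n) * (rng.random(n) < 0.001)  # ~0.1% long-distance events
    return np.maximum(local, rare)

print("Gaussian front speed   (m/generation):", round(spread_rate(gaussian), 1))
print("Fat-tailed front speed (m/generation):", round(spread_rate(fat_tailed), 1))
# The fat-tailed kernel yields a far faster range expansion even though most
# seeds still land close to the parent tree: front speed is governed by the
# tail of the kernel, not by its mean.
```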
Long distance seed-dispersal events due to animal-seed interactions (such as caching or endozoochorous dispersal) would fatten the tail of the dispersal kernels. To fully explain Reid's Paradox, these rare animal induced seed-dispersal events must have been more important during migration events than recognized or recorded currently. [ 1 ] [ 3 ]
Small populations of plants may have grown closer to the ice sheets in microhabitats that possessed the habitat characteristics needed for growth and reproduction. This would minimize the actual post-glacial dispersal distance. Such hypothetical populations would not be abundant enough to leave fossil evidence, so have escaped detection. In North America, there is some genetic evidence of cryptic northern refugia for sugar maple and American beech . [ 6 ] [ 4 ] | https://en.wikipedia.org/wiki/Reid's_paradox_of_rapid_plant_migration |
Reid vapor pressure ( RVP ) is a common measure of the volatility of gasoline and other petroleum products. [ 1 ] It is defined as the absolute vapor pressure exerted by the vapor of the liquid and any dissolved gases/moisture at 37.8 °C (100 °F) as determined by the test method ASTM-D-323, which was first developed in 1930 [ 2 ] and has been revised several times (the latest version is ASTM D323-15a). [ 3 ] The test method measures the vapor pressure of gasoline, volatile crude oil, jet fuels, naphtha, and other volatile petroleum products but is not applicable to liquefied petroleum gases . [ 4 ] ASTM D323-15a requires that the sample be chilled to 0 to 1 degrees Celsius and then poured into the apparatus; [ 5 ] for any material that solidifies at this temperature, this step cannot be performed. RVP is commonly reported in kilopascals (kPa) or pounds per square inch (psi) [ 6 ] and represents volatilization at atmospheric pressure because ASTM-D-323 measures the gauge pressure of the sample in a non-evacuated chamber.
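Since RVP figures are quoted in either unit, a small conversion helper can be useful (a minimal sketch; the conversion factor 1 psi = 6.894757 kPa is standard, and the example specification value is hypothetical):

```python
PSI_TO_KPA = 6.894757  # 1 pound per square inch expressed in kilopascals

def rvp_psi_to_kpa(rvp_psi: float) -> float:
    """Convert a Reid vapor pressure reading from psi to kPa."""
    return rvp_psi * PSI_TO_KPA

def rvp_kpa_to_psi(rvp_kpa: float) -> float:
    """Convert a Reid vapor pressure reading from kPa to psi."""
    return rvp_kpa / PSI_TO_KPA

# Hypothetical example: a gasoline specified at 9.0 psi RVP
print(round(rvp_psi_to_kpa(9.0), 1))  # -> 62.1 kPa
```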
Vapor pressure is important for the function and operation of gasoline-powered, especially carbureted, vehicles, and is also important for many other reasons. High levels of vaporization are desirable for winter starting and operation, and lower levels are desirable for avoiding vapor lock during summer heat. Fuel cannot be pumped when there is vapor in the fuel line (summer), and winter starting will be more difficult when liquid gasoline in the combustion chambers has not vaporized. Thus, oil refineries adjust the Reid vapor pressure seasonally specifically to maintain gasoline engine reliability.
The Reid vapor pressure (RVP) can differ substantially from the true vapor pressure (TVP) of a liquid mixture, since (1) RVP is the vapor pressure measured at 37.8 °C (100 °F) and the TVP is a function of the temperature; (2) RVP is defined as being measured at a vapor-to-liquid ratio of 4:1, whereas the TVP of mixtures can depend on the actual vapor-to-liquid ratio; (3) RVP will include the pressure associated with the presence of dissolved water and air in the sample (which is excluded by some but not all definitions of TVP); and (4) the RVP method is applied to a sample which has had the opportunity to volatilize somewhat prior to measurement: i.e., the sample container is required to be only 70-80% full of liquid [ 7 ] (so that whatever volatilizes into the container headspace is lost prior to analysis); the sample then again volatilizes into the headspace of the D323 test chamber before it is heated to 37.8 degrees Celsius. [ 8 ] | https://en.wikipedia.org/wiki/Reid_vapor_pressure |
The Hofmann–Martius rearrangement in organic chemistry is a rearrangement reaction converting an N-alkylated aniline to the corresponding ortho and / or para aryl -alkylated aniline. The reaction requires heat, and the catalyst is an acid like hydrochloric acid . [ 1 ] [ 2 ]
When the catalyst is a metal halide the reaction is also called the Reilly–Hickinbottom rearrangement (named after Wilfred Hickinbottom and Joseph Reilly). [ 3 ]
The reaction is also known to work for aryl ethers and two conceptually related reactions are the Fries rearrangement and the Fischer–Hepp rearrangement . Its reaction mechanism centers around dissociation of the reactant with the positively charged organic residue R attacking the aniline ring in a Friedel–Crafts alkylation .
In one study this rearrangement was applied to a 3-N(CH 3 )(C 6 H 5 )-2-oxindole: [ 4 ] [ 5 ]
The reaction is named after German chemists August Wilhelm von Hofmann and Carl Alexander von Martius . | https://en.wikipedia.org/wiki/Reilly–Hickinbottom_rearrangement |
The Reimer–Tiemann reaction is a chemical reaction used for the ortho -formylation of phenols , [ 1 ] [ 2 ] [ 3 ] [ 4 ] with the simplest example being the conversion of phenol to salicylaldehyde . The reaction was first reported by Karl Reimer and Ferdinand Tiemann . [ 5 ]
Chloroform ( 1 ) is deprotonated by a strong base (normally hydroxide ) to form the chloroform carbanion ( 2 ) which will quickly alpha-eliminate to give dichlorocarbene ( 3 ); this is the principal reactive species. The hydroxide will also deprotonate the phenol ( 4 ) to give a negatively charged phenoxide ( 5 ). The negative charge is delocalised into the aromatic ring, making it far more nucleophilic. Nucleophilic attack on the dichlorocarbene gives an intermediate dichloromethyl substituted phenol ( 7 ). After basic hydrolysis, the desired product ( 9 ) is formed. [ 6 ]
By virtue of its two electron-withdrawing chlorine groups, the carbene ( 3 ) is highly electron deficient and is attracted to the electron rich phenoxide ( 5 ). This interaction favors selective ortho -formylation, consistent with other electrophilic aromatic substitution reactions.
Hydroxides are not readily soluble in chloroform, thus the reaction is generally carried out in a biphasic solvent system. In the simplest sense this consists of an aqueous hydroxide solution and an organic phase containing the chloroform. Therefore, the two reagents are separated and must be brought together for the reaction to take place. This can be achieved by rapid mixing, phase-transfer catalysts , or an emulsifying agent such as 1,4-dioxane as solvent.
The reaction typically needs to be heated to initiate the process; however, once started, the Reimer–Tiemann Reaction can be highly exothermic. This combination of properties makes it prone to thermal runaways .
The Reimer–Tiemann reaction is effective for other hydroxy-aromatic compounds, such as naphthols . [ 7 ] Electron rich heterocycles such as pyrroles and indoles are also known to react.
Dichlorocarbenes can react with alkenes and amines to form dichlorocyclopropanes and isocyanides respectively. As such the Reimer–Tiemann reaction may be unsuitable for substrates bearing these functional groups. In addition, many compounds can not withstand being heated with hydroxide.
The direct formylation of aromatic compounds can be accomplished by various methods such as the Gattermann reaction , Gattermann–Koch reaction , Vilsmeier–Haack reaction , or Duff reaction ; however, in terms of ease and safety of operations, the Reimer–Tiemann reaction is often the most advantageous route chosen in chemical synthesis. Of the reactions mentioned before, the Reimer–Tiemann reaction is the only route not requiring acidic and/or anhydrous conditions. [ 2 ] Additionally the Gattermann–Koch reaction is not applicable to phenol substrates .
Using carbon tetrachloride instead of chloroform gives a carboxylic acid product instead of an aldehyde. [ 8 ] For example, this reaction variant with phenol would yield salicylic acid .
Reimer and Tiemann published several papers on the subject. [ 9 ] [ 10 ] [ 5 ] [ 11 ] The early work has been reviewed. [ 12 ] | https://en.wikipedia.org/wiki/Reimer–Tiemann_reaction |
The reindeer (caribou in North America) is a widespread and numerous species in the northern Holarctic , being present in both tundra and taiga (boreal forest). [ 1 ] Originally, the reindeer was found in Scandinavia , eastern Europe , Russia , Mongolia , and northern China north of the 50th latitude . In North America, it was found in Canada , Alaska ( United States ), and the northern contiguous USA from Washington to Maine . In the 19th century, it was apparently still present in southern Idaho . [ 2 ] It also occurred naturally on Sakhalin , Greenland , and probably even in historical times in Ireland .
During the late Pleistocene era, reindeer were found further south, such as at Nevada , Tennessee , and Alabama [ 3 ] in North America and Spain in Europe. [ 1 ] [ 4 ] Today, wild reindeer have disappeared from many areas within this large historical range, especially from the southern parts, where it vanished almost everywhere. Populations of wild reindeer are still found in Norway , Finland , Siberia , Greenland , Alaska , and Canada .
The George River reindeer herd in the tundra of Quebec and Labrador in eastern Canada, once the world's largest at 800,000–900,000 animals, stood at 74,000 as of December 2011 – a drop of up to 92% – because of iron-ore mining , flooding for hydropower and road building. [ 5 ]
Domesticated reindeer are mostly found in northern Fennoscandia and Russia, with a herd of approximately 150–170 semi-domesticated reindeer living around the Cairngorms region in Scotland . Although formerly more widespread in Scandinavia, the last remaining wild mountain reindeer in Europe are found in portions of southern Norway . [ 6 ] Siberian tundra reindeer are widespread in Russia.
A few reindeer from Norway were introduced to the South Atlantic island of South Georgia in the beginning of the 20th century. The South Georgian reindeer totaled an estimated 2,600 animals in two distinct herds separated by glaciers . Although the flag and the coat of arms of the territory contain an image of a reindeer, they were eradicated from 2013 to 2017 because of the environmental damage they caused. [ 7 ] Around 4,000 reindeer have been introduced into the French sub-Antarctic archipelago of Kerguelen Islands . East Iceland has a small herd of about 2,500–3,000 animals. [ 8 ]
Caribou and reindeer numbers have fluctuated historically, but many herds are in decline across their range. [ 9 ] This global decline is linked to climate change for northern, migratory caribou and reindeer herds and industrial disturbance of caribou habitat for sedentary, non-migratory herds. [ 10 ]
In 2013, the Taimyr herd in Russia was the largest herd in the world. In 2000, the herd increased to 1,000,000 but by 2009, there were 700,000 animals. [ 11 ] [ 12 ] In the 1950s, there were 110,000. [ 13 ]
There are three large herds of migratory tundra wild reindeer in central Siberia's Yakutia region: the Lena-Olenek, Yana-Indigirka and Sundrun herds. While the population of the Lena-Olenek herd is stable, the others are declining. [ 13 ]
Further east again, the Chukotka herd is also in decline. In 1971, there were 587,000 animals. They recovered after a severe decline in 1986, to only 32,200 individuals, but their numbers fell again. [ 14 ] According to Kolpashikov, by 2009 there were less than 70,000. [ 13 ]
Until a recent revision, there were six living subspecies of the reindeer ( Rangifer tarandus ), known in North America as the caribou: [ 15 ] woodland (boreal) , R. t. caribou ; Labrador or Ungava caribou, R. t. caboti ; Newfoundland caribou, R. t. terranovae ; barren-ground caribou, R. t. groenlandicus (including the Porcupine , Dolphin-Union and other Alaskan and Canadian herds of barren-ground caribou ); Osborn's caribou, R. t. osborni ; and Peary caribou , R. t. pearyi .
In Canada, the Committee on Status of Endangered Wildlife in Canada (COSEWIC) defined 12 "designatable units", DU, which included the above named subspecies and several ecotypes: Peary caribou DU1, the Dolphin-Union herd of barren-ground caribou DU2, mainland barren-ground (including Alaskan) caribou DU3, Labrador ("eastern migratory") caribou DU4, Newfoundland caribou DU5, boreal woodland caribou DU6, Osborn's caribou ("northern mountain") DU7, Rocky Mountain caribou ("central mountain") DU8, Selkirk Mountain caribou DU9 ("southern mountain"), Torngat Mountain DU10 (an ecotype of Labrador caribou), Atlantic-Gaspésie DU11 (a montane ecotype of woodland caribou) and the extinct Dawson's caribou DU12. Genetic research has shown that Osborn's caribou and the other two western montane ecotypes are of Beringian-Eurasian ancestry (but distantly, having diverged > 60,000 years ago) and therefore not closely related to woodland caribou (see Reindeer : Evolution and Reindeer : Taxonomy). While useful for conservation and research, designatable units, an adaptation of "evolutionary significant units", are not phylogenetically based and cannot substitute for taxonomy. [ 16 ] [ 17 ]
In North America, because of its vast range in a wide diversity of ecosystems, the woodland caribou is further distinguished by a number of ecotypes. In the Ungava region of Quebec, several herds of Labrador caribou in the north, such as the large George River caribou herd, overlap in range with the boreal woodland caribou to the south.
A recent revision [ 18 ] returned Woodland caribou to species status, R. caribou , with subspecies Labrador or Ungava caribou, R. c. caboti , the migratory form; Newfoundland caribou, R. c. terranovae ; and Boreal woodland caribou, R. c. caribou . The revision returned the name of Arctic caribou to its original R. arcticus , with the nominate subspecies being barren-ground caribou, R. a. arcticus , and returned four western montane ecotypes to subspecies of Arctic caribou: Selkirk Mountain caribou, R. a. montanus , Rocky Mountain caribou, R. a. fortidens , Osborn's caribou, R. a. osborni , and Stone's caribou, R. a. stonei, in accordance with molecular data that showed these to be of Beringian-Eurasian ancestry (see Reindeer : Evolution and Taxonomy).
Some caribou populations are endangered in Canada in regions such as southeastern British Columbia at the Canadian-USA border, along the Columbia , Kootenay and Kootenai rivers and around Kootenay Lake . Selkirk Mountain caribou (formerly thought to be an ecotype of woodland caribou, Rangifer tarandus caribou ) was considered endangered in the United States in Idaho and Washington . R. t. pearyi is on the IUCN endangered list. The woodland caribou is highly endangered throughout its distribution. [ 19 ]
All U.S. caribou populations are in Alaska. There was also a remnant population of about a dozen caribou in the Selkirk Mountains of Idaho , which were the only remaining wild caribou in the contiguous United States. [ 20 ] In 2018 there were three left; [ 21 ] the last member, a female, was transported to a wildlife rehab center in Canada, thus marking the extirpation of the caribou from the Lower 48.
There are four migratory herds of barren-ground caribou, R. tarandus groenlandicus , in Alaska: the Western Arctic herd, the Teshekpuk Lake herd, the Central Arctic herd and the Porcupine caribou herd (named for a river that flows from Yukon into Alaska), the last of which is transnational as its migratory range extends far into Canada's north. The largest is the Western Arctic caribou herd, but the smaller Porcupine herd has the longest migration of any terrestrial mammal on Earth with a vast historical range. There are also about 20 montane herds in the south and east, recently returned to their former name, R. a. stonei , [ 22 ] [ 18 ] that move seasonally within their small ranges but do not migrate per se; and one nearly insular herd on the western end of the Alaska Peninsula and nearby islands, originally described as R. granti . [ 23 ] Phylogenetic analysis shows that Grant's caribou clusters separately from all other Alaskan caribou [ 24 ] and does not interbreed with nearby caribou ecotypes. [ 25 ]
The Porcupine caribou herd is transnational and migratory. Migratory herds are typically named after their birthing grounds, in this case the Porcupine River , which runs through a large part of the range of the Porcupine herd. Individual herds of migratory caribou once had over a million animals per herd and could take over ten days to cross the Yukon River, but these numbers dramatically declined with habitat disturbance and degradation. Though numbers fluctuate, the herd comprises approximately 169,000 animals (based on a July 2010 photocensus). [ 26 ] The Porcupine herd's annual migrations of 1,500 miles (2,400 km) are among the longest of any terrestrial mammal. [ 27 ] Its range spans approximately 260,000 km² (64,000,000 acres), from Aklavik, Northwest Territories to Dawson City, Yukon to Kaktovik, Alaska on the Beaufort Sea . The Porcupine caribou ( R. tarandus groenlandicus ; originally named Tarandus rangifer ogilviensis Millais 1915 after the Ogilvie Mountains, their Yukon winter range; [ 28 ] see Reindeer : Taxonomy) has a vast range that includes northeastern Alaska and the Yukon and is therefore cooperatively managed by government agencies and aboriginal peoples from both countries. [ 29 ] [ 30 ] The Gwich'in people followed the Porcupine herd, their primary source of food, tools, and clothing, for thousands of years; according to oral tradition, for as long as 20,000 years. They continued their nomadic lifestyle until the 1870s. [ 31 ] This herd is also traditional food for the Inupiat , the Inuvialuit , the Hän , and the Northern Tutchone . There is currently controversy over whether possible future oil drilling on the coastal plains of the Arctic National Wildlife Refuge , encompassing much of the Porcupine caribou calving grounds, will have a severe negative impact on the caribou population or whether the caribou population will grow.
Unlike many other barren-ground caribou, the Porcupine caribou is stable at relatively high numbers, but the 2013 photo census was not counted by January 2014. The peak population in 1989 of 178,000 animals was followed by a decline by 2001 to 123,000. By 2010, it recovered to 169,000. [ 32 ] [ 26 ]
Many Gwich'in people, who depend on the Porcupine herd, still follow traditional caribou management practices that include a 1981 prohibition against selling caribou meat and limits on the number of caribou to be taken per hunting trip. [ 33 ]
The Western Arctic caribou herd is the largest of the three Alaskan barren-ground caribou herds. The Western Arctic herd reached a low of 75,000 in the mid-1970s. In 1997 the WACH, then numbering about 90,000, changed its migration and wintered on the Seward Peninsula . Alaska's reindeer herding industry has been concentrated on the Seward Peninsula ever since the first shipment of reindeer was imported from eastern Siberia in 1892 as part of the Reindeer Project, an initiative to replace whale meat in the diet of the indigenous people of the region. [ 34 ] For many years it was believed that the geography of the peninsula would prevent migrating caribou from mingling with domesticated reindeer, which might otherwise join caribou herds when they left an area. [ 34 ] [ 35 ] In 1997, the domesticated reindeer joined the Western Arctic caribou herd on their summer migration and disappeared. [ 36 ] The WACH reached a peak of 490,000 in 2003 and then declined to 325,000 in 2011. [ 37 ] [ 38 ]
In 2008, the Teshekpuk Lake caribou herd had 64,107 animals and the Central Arctic caribou herd had 67,000. [ 39 ] [ 40 ]
By 2017, the numbers of the Teshekpuk herd, whose calving grounds are in the region of the shallow Teshekpuk Lake , [ 41 ] had declined to 41,000 animals. [ 41 ] Teshekpuk Lake in the North Slope is in the traditional lands of the Iñupiat , who depended on the Teshekpuk herd for millennia. Teshekpuk Lake is also in the National Petroleum Reserve-Alaska , where the U.S. Department of the Interior (DOI) had approved oil and gas drilling on 11 January 2006. [ 42 ] [ 43 ] The NPR-A is the "single largest parcel of public land in the United States", covering about 23 million acres. The reserve's eastern border sits about 100 miles to the west of the more famous Arctic National Wildlife Refuge . The leasing of Teshekpuk Lake land to industry was protested by the Iñupiat and others who sent 300,000 letters to the US Secretary of the Interior and the ConocoPhillips CEO over the summer of 2006. On 25 September 2006, the U.S. District Court for the District of Alaska protected the wildlife habitat around the lake from an oil and gas lease sale. [ 44 ]
In October 2017, U.S. Secretary of the Interior , Ryan Zinke , announced that as of 6 December 2017, lands under the administration of the U.S. Bureau of Land Management will be up for bid in the "largest offering of public lands for lease in the history of the [BLM] — 10.3 million acres". [ 41 ] The Prudhoe Bay Oil Field , near Prudhoe Bay , Alaska, is situated between the National Petroleum Reserve-Alaska to the west and the Arctic National Wildlife Refuge to the east. Industry will be allowed to run "roads, pipelines and drill rigs" in the very sensitive habitat areas, including the Teshekpuk caribou herd calving grounds. The Teshekpuk herd remains at the calving grounds for several weeks in spring before moving from Teshekpuk Lake for relief from mosquitoes and botflies before their annual migration. [ 41 ]
Reindeer were imported from Siberia in the late 19th century and from Norway in the early 1900s as semi-domesticated livestock in Alaska. [ 45 ] [ 46 ] Reindeer can interbreed with the native caribou subspecies, but they rarely do, and even then their offspring do not survive well in the wild. [ 47 ] [ 25 ]
The barren-ground caribou ( R. t. groenlandicus ), [ 48 ] a long-distance migrant, includes large herds in the Northwest Territories and in Nunavut, for example, the Beverly, the Ahiak and Qamanirjuaq herds. In 1996, the population of the Ahiak herd was approximately 250,000 animals.
The Ahiak, Beverly and Qamanirjuaq caribou herds are all barren-ground caribou.
"The Beverly herd’s crossing of the Thelon River to its traditional calving grounds near Beverly Lake was part of the lives of the Dene aboriginal people for 8,000 years, as revealed by an unbroken archaeological record of deep layers of caribou bones and stone tools in the banks of the Thelon River (Gordon 2005)." [ 50 ] [ 51 ] The Beverly herd (located primarily in Saskatchewan, Northwest Territories; with portions in Nunavut, Manitoba and Alberta) and the Qamanirjuaq Herd (located primarily in Manitoba, Nunavut; with portions in the southeastern NWT and northeastern Saskatchewan) fall under the auspices of the Beverly and Qamanirjuaq Caribou Management Board. [ 52 ] The Beverly herd, whose range spans the tundra from northern Manitoba and Saskatchewan and well into the Northwest Territories and Nunavut, had a peak population in 1994 of 276,000 [ 53 ] [ 54 ] or 294,000, [ 13 ] but by 2011 there were approximately 124,000 caribou in the Beverly herd and 83,300 in the Ahiak herd. The calving grounds of the Beverly herd are located around Queen Maud Gulf , but the herd shifted its traditional birthing area. [ 55 ] Caribou management agencies are concerned that deterioration and disturbance of habitat along with "parasites, predation and poor weather" [ 53 ] are contributing to a cycling down of most caribou populations. It was suggested the Ahiak and Beverly herds switched calving grounds and the Beverly may have moved "near the western Queen Maud Gulf coast to the north of the herd’s "traditional" calving ground in the Gary Lakes area north of Baker Lake ." [ 56 ] The "Beverly herd may have declined (similar to other Northwest Territories herds), and cows switched to the neighbouring Ahiak herd to maintain the advantages of gregarious calving." [ 57 ] By 2011 there were approximately 124,000 caribou in the combined Beverly/Ahiak herd which represents a "50% or a 75% decline from the 1994 population estimate for the Beverly Herd." [ 13 ]
The barren-ground caribou population on Southampton Island , Nunavut declined by almost 75%, from about 30,000 caribou in 1997 to 7,800 caribou in 2011. [ 13 ] [ 58 ]
The Peary caribou ( R. t. pearyi ), the smallest subspecies in North America, known as tuktu in Inuktitut, are found in the northern islands of Nunavut (except Baffin Island) and the Northwest Territories. They remain at low numbers after severe declines.
A population of barren-ground caribou ( R. t. groenlandicus ) summers on Victoria Island and crosses the ice of Dolphin and Union Strait to the lands around Coronation Gulf for winter. Once thought to be hybrids or intergrades with Peary caribou, they are now known to be a barren-ground caribou named after the strait that they migrate across: Dolphin-Union caribou. Further research showed that some R. t. pearyi x groenlandicus hybrids occur on Banks Island and the northwest corner of Victoria Island. [ 59 ]
On Baffin Island, the largest Arctic island, the population of barren-ground caribou ( R. t. groenlandicus ) peaked in the early 1990s to approximately 60,000 to 180,000. [ 60 ] By 2012, in northern Baffin Island caribou numbers were considered to be at a "low in the cycle after a high in the 1990s" and in southern Baffin Island, the population was estimated as between 1,065 and 2,067. [ 61 ] Baffin Island caribou are highly divergent from other barren-ground caribou, [ 62 ] have a different mating system, lack migratory and aggregation behaviors, and have morphological differences. [ 63 ]
There are four barren-ground caribou herds in the Northwest Territories—the Cape Bathurst, Bluenose West, Bluenose East and Bathurst herds. [ 13 ] The Bluenose East caribou herd began a recovery with a population of approximately 122,000 in 2010, [ 64 ] which is being credited to the establishment of Tuktut Nogait National Park . [ 65 ] According to T. Davison 2010, CARMA 2011, the three other herds "declined 84–93% from peak sizes in the mid-1980s and 1990s". [ 13 ]
The Committee on Status of Endangered Wildlife in Canada (COSEWIC) [ 66 ] divided woodland caribou ( R. tarandus caribou ) ecotypes into five "Designatable Units" (DU) as noted above. Caribou are classified by ecotype depending on several behavioral factors – predominant habitat use (northern, tundra, mountain, forest, boreal forest, forest-dwelling), spacing (dispersed or aggregated) and migration patterns (sedentary or migratory). [ 67 ] [ 68 ] [ 69 ]
In Canada, the national meta-population of the sedentary boreal woodland ecotype spans the boreal forest from the Northwest Territories to Labrador . They prefer lichen-rich mature forests [ 70 ] and mainly live in marshes, bogs, lakes and river regions. [ 71 ] [ 72 ] The historic range of the boreal woodland caribou covered over half of present-day Canada, [ 73 ] stretching from Alaska to Newfoundland and Labrador and as far south as New England , Idaho and Washington. Woodland caribou have disappeared from most of their original southern range and only about 34,000 remain. [ 74 ] The boreal woodland caribou was designated as threatened in 2002. [ 75 ]
The migratory George River caribou herd (GRCH), in the Ungava region of Quebec and Labrador in eastern Canada, was once the world's largest caribou herd with 800,000–900,000 animals. It is a herd of Labrador caribou, Rangifer tarandus caboti . [ 48 ] The GRCH is a migratory woodland caribou herd; like the barren-ground caribou, its ecotype may be described as tundra, Arctic, northern or migratory, rather than forest-dwelling and sedentary like most woodland caribou ecotypes. Since the mid-1990s, the herd declined sharply and by 2010 it was reduced to 74,131—a drop of up to 92%. [ 76 ] A 2011 survey confirmed a continuing decline of the George River caribou herd population. By 2018 it was estimated at fewer than 9,000 animals as reported by the Canadian Broadcasting Corporation , down from 385,000 in 2001 and 74,131 in 2010. [ 11 ] [ 76 ] [ 77 ]
The Leaf River caribou herd (LRCH), [ 78 ] another migratory herd of Labrador caribou, near the coast of Hudson Bay , increased from 270,000 individuals in 1991 to 628,000 in 2001. [ 79 ] By 2011 the herd had decreased to 430,000. [ 11 ] [ 76 ] [ 80 ] According to an international study on caribou populations, the George River and Leaf River herds and other herds that migrate from Nunavik, Quebec and insular Newfoundland, could be threatened with extinction by 2080. [ 77 ]
The Queen Charlotte Islands caribou (formerly R. t. dawsoni ) from Graham Island , the largest of the Queen Charlotte Islands , is a distinct subspecies. [ 15 ] It became extinct at the beginning of the 20th century. Recent analysis of mitochondrial DNA taken from the remains of these caribou suggests that the animals from the Queen Charlotte Islands were genetically close to the adjacent mainland caribou subspecies, [ 81 ] Osborn's caribou, now recognized as of Beringian-Eurasian lineage. [ 82 ]
Four main populations of Greenland reindeer and caribou (Originally Cervus [Rangifer] grönlandicus Borowski, 1780 ) occupied western Greenland in 2013. [ 83 ] The Kangerlussuaq-Sisimiut caribou herd, the largest, had a population of around 98,000 animals in 2007. [ 84 ] The second largest, the Akia-Maniitsoq caribou herd, decreased from an estimated 46,000 in 2001 to about 17,400 in 2010. According to Cuyler, "one possible cause might be the topography, which prevents hunter access in the former while permitting access in the latter." [ citation needed ]
Greenland reindeer, formerly recognized as a full species, [ 85 ] are the most genetically divergent of all caribou and reindeer, with an average genetic distance (FST) of 44%. [ 86 ] Unlike barren-ground caribou, they have a harem-defense mating system, migrate only short (< 60 km) distances if at all, and lack the rutting and post-calving aggregation behavior of barren-ground caribou. Genetic, behavioral and morphological differences from other caribou and reindeer are so great that a recent revision returned them to full species status. [ 18 ]
The last remaining wild tundra reindeer in Europe are found in portions of southern Norway. [ 87 ] In southern Norway in the mountain ranges, there are about 30,000–35,000 reindeer with 23 different populations. The largest herd, with about 10,000 individuals, is at Hardangervidda. By 2013 the greatest challenges to management were "loss of habitat and migration corridors to piecemeal infrastructure development and abandonment of reindeer habitat as a result of human activities and disturbance." [ 11 ]
Norway is now preparing to apply for nomination as a World Heritage Site for areas with traces and traditions of reindeer hunting in Dovrefjell-Sunndalsfjella National Park , Reinheimen National Park and Rondane National Park in Central Sør-Norge ( Southern Norway ). There is in these parts of Norway an unbroken tradition of reindeer hunting from the post-glacial Stone Age until today. [ citation needed ]
On 29 August 2016, the Norwegian Environment Agency announced the death of 323 reindeer by the effects of a lightning strike in Hardangervidda . [ 88 ]
On 3 December 2018 a hiker in Northern Norway reported a sighting, and posted photos, of a rare white reindeer calf. [ 89 ]
The Svalbard reindeer ( R. tarandus platyrhynchus ) from Svalbard Island is very small compared to other subspecies (a phenomenon known as insular dwarfism ) and is the smallest of all the subspecies, with females having a length of approximately 150 cm (59 in), and a weight around 53 kg (117 lb) in the spring and 70 kg (150 lb) in the autumn. [ 90 ] Males are approximately 160 cm (63 in) long, and weigh around 65 kg (143 lb) in the spring and 90 kg (200 lb) in the autumn. [ 90 ] The reindeer from Svalbard are also relatively short-legged and may have a shoulder height of as little as 80 cm (31 in), [ 90 ] thereby following Allen's rule .
The Svalbard reindeer seems to have evolved from large European reindeer, [ 91 ] and is special in several ways: it has peculiarities in its metabolism, and its skeleton shows a remarkable relative shortening of the legs, thus parallelling many extinct insular deer species. [ 92 ]
Reindeer inhabit mostly northern parts of Sweden and the central Swedish province of Dalarna . In northern Sweden and parts of Dalarna, reindeer herding activity is generally part of the lifestyle of the indigenous Sámi people .
The Finnish forest reindeer ( R. t. fennicus ) is found in the wild in only two areas of the Fennoscandia peninsula of Northern Europe : in Finnish/Russian Karelia , and as a small population in central south Finland . The Karelia population reaches far into Russia, and genetic research shows that the Altai-Sayan forest reindeer, R. t. valentinae , clusters together with Finnish forest reindeer and apart from tundra reindeer, R. t. sibiricus . [ 93 ] By 2007 reindeer experts were concerned about the collapse of the wild Finnish forest reindeer population in the eastern province of Kainuu . [ 94 ] During the peak year of 2001, the Finnish forest reindeer population in Kainuu was estimated at 1,700. In a March 2007 helicopter count, only 960 individuals were detected.
East Iceland has a small herd of about 2,500–3,000 animals. [ 95 ] Reindeer were introduced to Iceland in the late 1700s. [ 96 ] [ 11 ] The Icelandic reindeer population in July 2013 was estimated at approximately 6,000. With a hunting quota of 1,229 animals, the winter 2013–2014 population is expected to be around 4,800 reindeer. [ 11 ]
Semi-domesticated reindeer of domestic stock were brought to Scotland in 1952. In 2017, there were about 150 left to graze across 10,000 acres of land in the Cairngorms National Park , where the climate is classed as tundra . [ 97 ] [ 98 ]
A few reindeer from Norway were introduced to the South Atlantic island of South Georgia in the beginning of the 20th century. The South Georgian reindeer totaled an estimated 2,600 animals in two distinct herds separated by glaciers . Although both the flag and the coat of arms of the territory contain an image of a reindeer, a decision was taken in 2011 to completely eradicate the animals from the island because of the environmental damage they caused; [ 99 ] [ 100 ] this was carried out by a team of Norwegian Sami hunters from 2013 to 2017, which revealed the true count to be around 6,750. [ 7 ]
Around 4,000 reindeer are descendants of those introduced into the French sub-Antarctic archipelago of the Kerguelen Islands . | https://en.wikipedia.org/wiki/Reindeer_distribution |
Reinecke's salt is an inorganic compound with the formula NH4[Cr(NCS)4(NH3)2]·H2O . The dark-red crystalline compound is soluble in boiling water, acetone , and ethanol . [ 2 ] It can be classified as a metal isothiocyanate complex .
The chromium atom is surrounded by six nitrogen atoms in an octahedral geometry . The NH 3 ligands are mutually trans and the Cr-NCS groups are linear. The salt crystallizes with one molecule of water. [ 1 ]
It was first reported in 1863. [ 3 ] NH₄[Cr(NCS)₄(NH₃)₂] is prepared by treatment of molten NH₄SCN (melting point around 145–150 °C (293–302 °F)) with (NH₄)₂Cr₂O₇. [ 4 ]
This salt was once widely used to precipitate primary and secondary amines as their ammonium salts. Included in the amines that effectively form crystalline precipitates are those derived from the amino acids, including proline and hydroxyproline . It also reacts with Hg²⁺ compounds, giving a red color or a red precipitate. | https://en.wikipedia.org/wiki/Reinecke's_salt |
Reinforced concrete , also called ferroconcrete , is a composite material in which concrete 's relatively low tensile strength and ductility are compensated for by the inclusion of reinforcement having higher tensile strength or ductility. The reinforcement is usually, though not necessarily, steel reinforcing bars (known as rebar ) and is usually embedded passively in the concrete before the concrete sets. However, post-tensioning is also employed as a technique to reinforce the concrete. In terms of volume used annually, it is one of the most common engineering materials. [ 1 ] [ 2 ] In corrosion engineering terms, when designed correctly, the alkalinity of the concrete protects the steel rebar from corrosion . [ 3 ]
Reinforcing schemes are generally designed to resist tensile stresses in particular regions of the concrete that might cause unacceptable cracking and/or structural failure. Modern reinforced concrete can contain varied reinforcing materials made of steel, polymers or alternate composite material in conjunction with rebar or not. Reinforced concrete may also be permanently stressed (concrete in compression, reinforcement in tension), so as to improve the behavior of the final structure under working loads. In the United States , the most common methods of doing this are known as pre-tensioning and post-tensioning .
For a strong, ductile and durable construction, the reinforcement needs to have at least the following properties: high relative strength; high tolerance of tensile strain; a good bond to the concrete, irrespective of pH, moisture and similar factors; thermal compatibility, so that changing temperatures do not cause unacceptable differential stresses; and durability in the concrete environment, irrespective of corrosion or sustained stress.
The early development of reinforced concrete took place in parallel in England and France in the middle of the 19th century. [ 4 ]
French builder François Coignet [ fr ] was the first to use iron-reinforced concrete as a building technique. [ 5 ] In 1853-55, Coignet built for himself the first iron-reinforced concrete structure, a four-story house at 72 rue Charles Michels in the suburbs of Paris known as the François Coignet House [ fr ] . [ 6 ] Coignet's descriptions of reinforcing concrete suggest that he did not do it to add strength to the concrete but to keep walls in monolithic construction from overturning. [ 7 ] The 1872–73 Pippen Building in Brooklyn , although not designed by Coignet, stands as a testament to his technique.
In 1854, English builder William B. Wilkinson reinforced the concrete roof and floors in the two-story house he was constructing. His positioning of the reinforcement demonstrated that, unlike his predecessors, he had knowledge of tensile stresses. [ 8 ] [ 9 ] [ 10 ] Between 1869 and 1870, Henry Eton designed, and Messrs W & T Phillips of London constructed, the wrought-iron-reinforced Homersfield Bridge , with a 50-foot (15.25 m) span, over the river Waveney, between the English counties of Norfolk and Suffolk. [ 11 ]
Joseph Monier , a 19th-century French gardener, was a pioneer in the development of structural, prefabricated and reinforced concrete, having been dissatisfied with the existing materials available for making durable flowerpots. [ 12 ] He was granted a patent for reinforcing concrete flowerpots by means of mixing a wire mesh and a mortar shell in 1867. [ 13 ] In 1877, Monier was granted another patent for a more advanced technique of reinforcing concrete columns and girders, using iron rods placed in a grid pattern. Though Monier undoubtedly knew that reinforcing concrete would improve its inner cohesion, it is not clear whether he even knew how much the tensile strength of concrete was improved by the reinforcing. [ 14 ]
In 1877, Thaddeus Hyatt published a report entitled An Account of Some Experiments with Portland-Cement-Concrete Combined with Iron as a Building Material, with Reference to Economy of Metal in Construction and for Security against Fire in the Making of Roofs, Floors, and Walking Surfaces , [ 15 ] in which he reported his experiments on the behaviour of reinforced concrete. His work played a major role in the evolution of concrete construction as a proven and studied science. Without Hyatt's work, more dangerous trial and error methods might have been depended on for the advancement in the technology. [ 7 ] [ 16 ]
Before the 1870s, the use of concrete construction, though dating back to the Roman Empire , and having been reintroduced in the early 19th century, was not yet a scientifically proven technology.
Ernest L. Ransome , an English-born engineer, was an early innovator of reinforced concrete techniques at the end of the 19th century. Using the knowledge of reinforced concrete developed during the previous 50 years, Ransome improved nearly all the styles and techniques of the earlier inventors of reinforced concrete. Ransome's key innovation was to twist the reinforcing steel bar, thereby improving its bond with the concrete. [ 17 ] [ 18 ] Gaining increasing fame from his concrete-constructed buildings, Ransome built two of the first reinforced concrete bridges in North America in 1886–1889. [ 17 ] [ failed verification ] One of his bridges still stands on Shelter Island in New York's East End.
One of the first concrete buildings constructed in the United States was a private home designed by William Ward , completed in 1876. The home was particularly designed to be fireproof.
G. A. Wayss was a German civil engineer and a pioneer of iron and steel concrete construction. In 1879, Wayss bought the German rights to Monier's patents and, in 1884, his firm, Wayss & Freytag , made the first commercial use of reinforced concrete. Up until the 1890s, Wayss and his firm greatly contributed to the advancement of Monier's system of reinforcing, establishing it as a well-developed scientific technology. [ 14 ]
The Lamington Bridge was Australia's first large reinforced concrete road bridge. It was designed by Alfred Barton Brady , who was the Queensland Government Architect at the time of the bridge's construction in 1896. [ 19 ] It has eleven 15.2-metre (50 ft) spans and a total length of 187 metres (614 ft), larger than any known comparable bridge in the world at that time. [ 20 ]
One of the first skyscrapers made with reinforced concrete was the 16-story Ingalls Building in Cincinnati, constructed in 1904. [ 10 ]
The first reinforced concrete building in Southern California was the Laughlin Annex in downtown Los Angeles , constructed in 1905. [ 21 ] [ 22 ] In 1906, 16 building permits were reportedly issued for reinforced concrete buildings in the City of Los Angeles, including the Temple Auditorium and 8-story Hayward Hotel. [ 23 ] [ 24 ]
In 1906, a partial collapse of the Bixby Hotel in Long Beach killed 10 workers during construction when shoring was removed prematurely. The event spurred scrutiny of concrete erection practices and building inspections. The structure was constructed of reinforced concrete frames with hollow clay tile ribbed flooring and hollow clay tile infill walls. That practice was strongly questioned by experts, and recommendations for "pure" concrete construction were made, using reinforced concrete for the floors and walls as well as the frames. [ 25 ]
In April 1904, Julia Morgan , an American architect and engineer, who pioneered the aesthetic use of reinforced concrete, completed her first reinforced concrete structure, El Campanil, a 72-foot (22 m) bell tower at Mills College , [ 26 ] which is located across the bay from San Francisco . Two years later, El Campanil survived the 1906 San Francisco earthquake without any damage, [ 27 ] which helped build her reputation and launch her prolific career. [ 28 ] The 1906 earthquake also changed the public's initial resistance to reinforced concrete as a building material, which had been criticized for its perceived dullness. In 1908, the San Francisco Board of Supervisors changed the city's building codes to allow wider use of reinforced concrete. [ 29 ]
In 1906, the National Association of Cement Users (NACU) published Standard No. 1 [ 30 ] and, in 1910, the Standard Building Regulations for the Use of Reinforced Concrete . [ 31 ]
Many different types of structures and components of structures can be built using reinforced concrete elements including slabs , walls , beams , columns , foundations , frames and more.
Reinforced concrete can be classified as precast or cast-in-place concrete .
Designing and implementing the most efficient floor system is key to creating optimal building structures. Small changes in the design of a floor system can have significant impact on material costs, construction schedule, ultimate strength, operating costs, occupancy levels and end use of a building.
Without reinforcement, constructing modern structures with concrete would not be possible.
Reinforced concrete elements exhibit characteristic behaviour when subjected to external loads . They may be subject to tension , compression , bending , shear , and/or torsion . [ 33 ]
Concrete is a mixture of coarse (stone or brick chips) and fine (generally sand and/or crushed stone) aggregates with a paste of binder material (usually Portland cement ) and water. When cement is mixed with a small amount of water, it hydrates to form microscopic opaque crystal lattices encapsulating and locking the aggregate into a rigid shape. [ 34 ] [ 35 ] The aggregates used for making concrete should be free from harmful substances like organic impurities, silt, clay, lignite, etc. Typical concrete mixes have high resistance to compressive stresses (about 4,000 psi (28 MPa)); however, any appreciable tension ( e.g., due to bending ) will break the microscopic rigid lattice, resulting in cracking and separation of the concrete. For this reason, typical non-reinforced concrete must be well supported to prevent the development of tension.
If a material with high strength in tension, such as steel , is placed in concrete, then the composite material, reinforced concrete, resists not only compression but also bending and other direct tensile actions. A composite section where the concrete resists compression and reinforcement " rebar " resists tension can be made into almost any shape and size for the construction industry.
Three physical characteristics give reinforced concrete its special properties: (1) the coefficient of thermal expansion of concrete is similar to that of steel, eliminating large internal stresses due to differences in thermal expansion or contraction; (2) when the cement paste within the concrete hardens, it conforms to the surface details of the steel, permitting stress to be transmitted efficiently between the two materials; and (3) the alkaline chemical environment provided by the hardened cement paste causes a passivating film to form on the surface of the steel, making it much more resistant to corrosion than it would be in neutral or acidic conditions.
As a rough rule of thumb, steel is protected at a pH above about 11 but starts to corrode below about 10 when the concrete becomes carbonated, depending on the steel characteristics and the local physico-chemical conditions. Carbonation of concrete along with chloride ingress are among the chief reasons for the failure of reinforcement bars in concrete.
The relative cross-sectional area of steel required for typical reinforced concrete is usually quite small and varies from 1% for most beams and slabs to 6% for some columns. Reinforcing bars are normally round in cross-section and vary in diameter. Reinforced concrete structures sometimes have provisions such as ventilated hollow cores to control their moisture and humidity.
The distribution of concrete strength characteristics along the cross-section of vertical reinforced concrete elements is inhomogeneous, in spite of the reinforcement. [ 36 ]
The reinforcement in a RC structure, such as a steel bar, has to undergo the same strain or deformation as the surrounding concrete in order to prevent discontinuity, slip or separation of the two materials under load. Maintaining composite action requires transfer of load between the concrete and steel. The direct stress is transferred from the concrete to the bar interface so as to change the tensile stress in the reinforcing bar along its length. This load transfer is achieved by means of bond (anchorage) and is idealized as a continuous stress field that develops in the vicinity of the steel-concrete interface.
The reasons that the two different material components concrete and steel can work together are as follows:
(1) Reinforcement can be well bonded to the concrete, thus they can jointly resist external loads and deform.
(2) The thermal expansion coefficients of concrete and steel are so close (about 1.0 × 10⁻⁵ to 1.5 × 10⁻⁵ per °C for concrete and 1.2 × 10⁻⁵ per °C for steel) that thermal stress-induced damage to the bond between the two components can be prevented.
(3) Concrete can protect the embedded steel from corrosion and high-temperature induced softening.
Because the actual bond stress varies along the length of a bar anchored in a zone of tension, current international codes of specifications use the concept of development length rather than bond stress. The main requirement for safety against bond failure is to provide a sufficient extension of the length of the bar beyond the point where the steel is required to develop its yield stress; this length must be at least equal to its development length. However, if the actual available length is inadequate for full development, special anchorages must be provided, such as cogs, hooks or mechanical end plates. The same concept applies to the lap splice length [ 37 ] mentioned in the codes, where splices (overlaps) are provided between two adjacent bars in order to maintain the required continuity of stress in the splice zone.
In wet and cold climates, reinforced concrete for roads, bridges, parking structures and other structures that may be exposed to deicing salt may benefit from use of corrosion-resistant reinforcement such as uncoated, low carbon/chromium (micro composite), epoxy-coated, hot dip galvanized or stainless steel rebar. Good design and a well-chosen concrete mix will provide additional protection for many applications.
Uncoated, low carbon/chromium rebar looks similar to standard carbon steel rebar due to its lack of a coating; its highly corrosion-resistant features are inherent in the steel microstructure. It can be identified by the unique ASTM specified mill marking on its smooth, dark charcoal finish. Epoxy-coated rebar can easily be identified by the light green color of its epoxy coating. Hot dip galvanized rebar may be bright or dull gray depending on length of exposure, and stainless rebar exhibits a typical white metallic sheen that is readily distinguishable from carbon steel reinforcing bar. Reference ASTM standard specifications A1035/A1035M Standard Specification for Deformed and Plain Low-carbon, Chromium, Steel Bars for Concrete Reinforcement, A767 Standard Specification for Hot Dip Galvanized Reinforcing Bars, A775 Standard Specification for Epoxy Coated Steel Reinforcing Bars and A955 Standard Specification for Deformed and Plain Stainless Bars for Concrete Reinforcement.
Another, cheaper way of protecting rebars is coating them with zinc phosphate . [ 38 ] Zinc phosphate slowly reacts with calcium cations and the hydroxyl anions present in the cement pore water and forms a stable hydroxyapatite layer.
Penetrating sealants typically must be applied some time after curing. Sealants include paint, plastic foams, films and aluminum foil , felts or fabric mats sealed with tar, and layers of bentonite clay, sometimes used to seal roadbeds.
Corrosion inhibitors , such as calcium nitrite [Ca(NO₂)₂], can also be added to the water mix before pouring concrete. Generally, 1–2 wt.% of [Ca(NO₂)₂] with respect to cement weight is needed to prevent corrosion of the rebars. The nitrite anion is a mild oxidizer that oxidizes the soluble and mobile ferrous ions (Fe²⁺) present at the surface of the corroding steel and causes them to precipitate as an insoluble ferric hydroxide (Fe(OH)₃). This causes the passivation of steel at the anodic oxidation sites. Nitrite is a much more active corrosion inhibitor than nitrate , which is a less powerful oxidizer of the divalent iron.
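As a rough illustration of the dosage just quoted, the short sketch below converts the 1–2 wt.% range (relative to cement weight) into kilograms of inhibitor per cubic metre of concrete; the cement content used is a hypothetical value chosen only for the example.

```python
def nitrite_dosage_kg_per_m3(cement_kg_per_m3: float, wt_percent: float) -> float:
    """Calcium nitrite mass per cubic metre of concrete, expressed as a
    percentage of the cement weight."""
    return cement_kg_per_m3 * wt_percent / 100.0

# Hypothetical mix containing 350 kg of cement per cubic metre of concrete:
for pct in (1.0, 2.0):
    print(f"{pct:.0f} wt.% -> {nitrite_dosage_kg_per_m3(350.0, pct):.1f} kg/m^3")
```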
A beam bends under bending moment , resulting in a small curvature. At the outer face (tensile face) of the curvature the concrete experiences tensile stress, while at the inner face (compressive face) it experiences compressive stress.
A singly reinforced beam is one in which the concrete element is only reinforced near the tensile face and the reinforcement, called tension steel, is designed to resist the tension.
A doubly reinforced beam is the section in which besides the tensile reinforcement the concrete element is also reinforced near the compressive face to help the concrete resist compression and take stresses. The latter reinforcement is called compression steel. When the compression zone of a concrete is inadequate to resist the compressive moment (positive moment), extra reinforcement has to be provided if the architect limits the dimensions of the section.
An under-reinforced beam is one in which the tension capacity of the tensile reinforcement is smaller than the combined compression capacity of the concrete and the compression steel (under-reinforced at tensile face). When the reinforced concrete element is subject to increasing bending moment, the tension steel yields while the concrete does not reach its ultimate failure condition. As the tension steel yields and stretches, an "under-reinforced" concrete also yields in a ductile manner, exhibiting a large deformation and warning before its ultimate failure. In this case the yield stress of the steel governs the design.
An over-reinforced beam is one in which the tension capacity of the tension steel is greater than the combined compression capacity of the concrete and the compression steel (over-reinforced at tensile face). The "over-reinforced concrete" beam therefore fails by crushing of the compressive-zone concrete before the tension-zone steel yields, which does not provide any warning before failure, as the failure is instantaneous.
A balanced-reinforced beam is one in which both the compressive and tensile zones reach yielding at the same imposed load on the beam, so that the concrete crushes and the tensile steel yields at the same time. This design criterion is, however, as risky as over-reinforced concrete, because failure is sudden: the concrete crushes at the same time the tensile steel yields, giving very little warning of distress before tension failure. [ 39 ]
Steel-reinforced concrete moment-carrying elements should normally be designed to be under-reinforced so that users of the structure will receive warning of impending collapse.
The characteristic strength is the strength of a material where less than 5% of the specimen shows lower strength.
The design strength or nominal strength is the strength of a material, including a material-safety factor. The value of the safety factor generally ranges from 0.75 to 0.85 in Permissible stress design .
The ultimate limit state is the theoretical failure point with a certain probability. It is stated under factored loads and factored resistances.
Reinforced concrete structures are normally designed according to the rules and regulations or recommendations of a code such as ACI-318, CEB, Eurocode 2 or the like. WSD, USD or LRFD methods are used in the design of RC structural members. Analysis and design of RC members can be carried out using linear or non-linear approaches. When applying safety factors, building codes normally propose linear approaches, but non-linear approaches are used in some cases. For examples of non-linear numerical simulation and calculation, see the references: [ 40 ] [ 41 ]
Prestressing concrete is a technique that greatly increases the load-bearing strength of concrete beams. The reinforcing steel in the bottom part of the beam, which will be subjected to tensile forces when in service, is placed in tension before the concrete is poured around it. Once the concrete has hardened, the tension on the reinforcing steel is released, placing a built-in compressive force on the concrete. When loads are applied, the reinforcing steel takes on more stress and the compressive force in the concrete is reduced, but does not become a tensile force. Since the concrete is always under compression, it is less subject to cracking and failure. [ 42 ]
Reinforced concrete can fail due to inadequate strength, leading to mechanical failure, or due to a reduction in its durability. Corrosion and freeze/thaw cycles may damage poorly designed or constructed reinforced concrete. When rebar corrodes, the oxidation products ( rust ) expand and tend to flake, cracking the concrete and unbonding the rebar from the concrete. Typical mechanisms leading to durability problems are discussed below.
Cracking of the concrete section is nearly impossible to prevent; however, the size and location of cracks can be limited and controlled by appropriate reinforcement, control joints, curing methodology and concrete mix design. Cracking can allow moisture to penetrate and corrode the reinforcement. This is a serviceability failure in limit state design . Cracking is normally the result of an inadequate quantity of rebar, or rebar spaced at too great a distance. The concrete cracks either under excess loading, or due to internal effects such as early thermal shrinkage while it cures.
Ultimate failure leading to collapse can be caused by crushing the concrete, which occurs when compressive stresses exceed its strength, by yielding or failure of the rebar when bending or shear stresses exceed the strength of the reinforcement, or by bond failure between the concrete and the rebar. [ 43 ]
Carbonation, or neutralisation, is a chemical reaction between carbon dioxide in the air and calcium hydroxide and hydrated calcium silicate in the concrete.
When a concrete structure is designed, it is usual to specify the concrete cover for the rebar (the depth of the rebar within the object). The minimum concrete cover is normally regulated by design or building codes . If the reinforcement is too close to the surface, early failure due to corrosion may occur. The concrete cover depth can be measured with a cover meter . However, carbonated concrete incurs a durability problem only when there is also sufficient moisture and oxygen to cause electropotential corrosion of the reinforcing steel.
One method of testing a structure for carbonation is to drill a fresh hole in the surface and then treat the cut surface with phenolphthalein indicator solution. This solution turns pink when in contact with alkaline concrete, making it possible to see the depth of carbonation. Using an existing hole does not suffice because the exposed surface will already be carbonated.
Chlorides can promote the corrosion of embedded rebar if present in sufficiently high concentration. Chloride anions induce both localized corrosion ( pitting corrosion ) and generalized corrosion of steel reinforcements. For this reason, one should use only fresh raw water or potable water for mixing concrete, ensure that the coarse and fine aggregates do not contain chlorides, and avoid admixtures which might contain chlorides.
It was once common for calcium chloride to be used as an admixture to promote rapid set-up of the concrete. It was also mistakenly believed that it would prevent freezing. However, this practice fell into disfavor once the deleterious effects of chlorides became known. It should be avoided whenever possible.
The use of de-icing salts on roadways, used to lower the freezing point of water, is probably one of the primary causes of premature failure of reinforced or prestressed concrete bridge decks, roadways, and parking garages. The use of epoxy-coated reinforcing bars and the application of cathodic protection has mitigated this problem to some extent. Also FRP (fiber-reinforced polymer) rebars are known to be less susceptible to chlorides. Properly designed concrete mixtures that have been allowed to cure properly are effectively impervious to the effects of de-icers.
Another important source of chloride ions is sea water . Sea water contains by weight approximately 3.5% salts. These salts include sodium chloride , magnesium sulfate , calcium sulfate , and bicarbonates . In water these salts dissociate into free ions (Na⁺, Mg²⁺, Cl⁻, SO₄²⁻, HCO₃⁻) and migrate with the water into the capillaries of the concrete. Chloride ions, which make up about 50% of these ions, are particularly aggressive as a cause of corrosion of carbon steel reinforcement bars.
In the 1960s and 1970s it was also relatively common for magnesite , a chloride rich carbonate mineral , to be used as a floor-topping material. This was done principally as a levelling and sound attenuating layer. However it is now known that when these materials come into contact with moisture they produce a weak solution of hydrochloric acid due to the presence of chlorides in the magnesite. Over a period of time (typically decades), the solution causes corrosion of the embedded rebars . This was most commonly found in wet areas or areas repeatedly exposed to moisture.
This is a reaction of amorphous silica ( chalcedony , chert , siliceous limestone ) sometimes present in the aggregates with the hydroxyl ions (OH⁻) from the cement pore solution. Poorly crystallized silica (SiO₂) dissolves and dissociates at high pH (12.5–13.5) in alkaline water. The soluble dissociated silicic acid reacts in the porewater with the calcium hydroxide ( portlandite ) present in the cement paste to form an expansive calcium silicate hydrate (CSH). The alkali–silica reaction (ASR) causes localised swelling responsible for tensile stress and cracking . The conditions required for alkali–silica reaction are threefold:
(1) aggregate containing an alkali-reactive constituent (amorphous silica), (2) sufficient availability of hydroxyl ions (OH − ), and (3) sufficient moisture, above 75% relative humidity (RH) within the concrete. [ 44 ] [ 45 ] This phenomenon is sometimes popularly referred to as " concrete cancer ". This reaction occurs independently of the presence of rebars; massive concrete structures such as dams can be affected.
High alumina cement is resistant to weak acids and especially sulfates; it cures quickly and has very high durability and strength. It was frequently used after World War II to make precast concrete objects. However, it can lose strength with heat or time (conversion), especially when not properly cured. After the collapse of three roofs made of prestressed concrete beams using high alumina cement, this cement was banned in the UK in 1976. Subsequent inquiries into the matter showed that the beams were improperly manufactured, but the ban remained. [ 46 ]
Sulfates (SO 4 ) in the soil or in groundwater, in sufficient concentration, can react with the Portland cement in concrete causing the formation of expansive products, e.g., ettringite or thaumasite , which can lead to early failure of the structure. The most typical attack of this type is on concrete slabs and foundation walls at grades where the sulfate ion, via alternate wetting and drying, can increase in concentration. As the concentration increases, the attack on the Portland cement can begin. For buried structures such as pipe, this type of attack is much rarer, especially in the eastern United States. The sulfate ion concentration increases much slower in the soil mass and is especially dependent upon the initial amount of sulfates in the native soil. A chemical analysis of soil borings to check for the presence of sulfates should be undertaken during the design phase of any project involving concrete in contact with the native soil. If the concentrations are found to be aggressive, various protective coatings can be applied. Also, in the US ASTM C150 Type 5 Portland cement can be used in the mix. This type of cement is designed to be particularly resistant to a sulfate attack.
In steel plate construction, stringers join parallel steel plates. The plate assemblies are fabricated off site, and welded together on-site to form steel walls connected by stringers. The walls become the form into which concrete is poured. Steel plate construction speeds reinforced concrete construction by cutting out the time-consuming on-site manual steps of tying rebar and building forms. The method results in excellent strength because the steel is on the outside, where tensile forces are often greatest.
Fiber reinforcement is mainly used in shotcrete , but can also be used in normal concrete. Fiber-reinforced normal concrete is mostly used for on-ground floors and pavements, but can also be considered for a wide range of construction parts (beams, pillars, foundations, etc.), either alone or with hand-tied rebars.
Concrete reinforced with fibers (which are usually steel, glass , plastic fibers ) or cellulose polymer fiber is less expensive than hand-tied rebar. [ citation needed ] The shape, dimension, and length of the fiber are important. A thin and short fiber, for example short, hair-shaped glass fiber, is only effective during the first hours after pouring the concrete (its function is to reduce cracking while the concrete is stiffening), but it will not increase the concrete tensile strength. A normal-size fiber for European shotcrete (1 mm diameter, 45 mm length—steel or plastic) will increase the concrete's tensile strength. Fiber reinforcement is most often used to supplement or partially replace primary rebar, and in some cases it can be designed to fully replace rebar. [ 47 ]
Steel is the strongest commonly available fiber, [ citation needed ] and comes in different lengths (30 to 80 mm in Europe) and shapes (end-hooks). Steel fibers can only be used on surfaces that can tolerate or avoid corrosion and rust stains. In some cases, a steel-fiber surface is faced with other materials.
Glass fiber is inexpensive and corrosion-proof, but not as ductile as steel. Recently, spun basalt fiber , long available in Eastern Europe , has become available in the U.S. and Western Europe. Basalt fiber is stronger and less expensive than glass, but historically has not resisted the alkaline environment of Portland cement well enough to be used as direct reinforcement. New materials use plastic binders to isolate the basalt fiber from the cement.
The premium fibers are graphite -reinforced plastic fibers, which are nearly as strong as steel, lighter in weight, and corrosion-proof. [ citation needed ] Some experiments have had promising early results with carbon nanotubes , but the material is still far too expensive for any building. [ citation needed ]
There is considerable overlap between the subjects of non-steel reinforcement and fiber-reinforcement of concrete. The introduction of non-steel reinforcement of concrete is relatively recent; it takes two major forms: non-metallic rebar rods, and non-steel (usually also non-metallic) fibers incorporated into the cement matrix. For example, there is increasing interest in glass fiber reinforced concrete (GFRC) and in various applications of polymer fibers incorporated into concrete. Although currently there is not much suggestion that such materials will replace metal rebar, some of them have major advantages in specific applications, and there also are new applications in which metal rebar simply is not an option. However, the design and application of non-steel reinforcing is fraught with challenges. For one thing, concrete is a highly alkaline environment, in which many materials, including most kinds of glass, have a poor service life . Also, the behavior of such reinforcing materials differs from the behavior of metals, for instance in terms of shear strength, creep and elasticity. [ 48 ] [ 49 ]
Fiber-reinforced plastic/polymer (FRP) and glass-reinforced plastic (GRP) consist of fibers of polymer , glass, carbon, aramid or other polymers or high-strength fibers set in a resin matrix to form a rebar rod, or grid, or fiber. These rebars are installed in much the same manner as steel rebars. The cost is higher but, suitably applied, the structures have advantages, in particular a dramatic reduction in problems related to corrosion , either by intrinsic concrete alkalinity or by external corrosive fluids that might penetrate the concrete. These structures can be significantly lighter and usually have a longer service life . The cost of these materials has dropped dramatically since their widespread adoption in the aerospace industry and by the military.
In particular, FRP rods are useful for structures where the presence of steel would not be acceptable. For example, MRI machines have huge magnets, and accordingly require non-magnetic buildings. Again, toll booths that read radio tags need reinforced concrete that is transparent to radio waves . Also, where the design life of the concrete structure is more important than its initial costs, non-steel reinforcing often has its advantages where corrosion of reinforcing steel is a major cause of failure. In such situations corrosion-proof reinforcing can extend a structure's life substantially, for example in the intertidal zone . FRP rods may also be useful in situations where it is likely that the concrete structure may be compromised in future years, for example the edges of balconies when balustrades are replaced, and bathroom floors in multi-story construction where the service life of the floor structure is likely to be many times the service life of the waterproofing building membrane.
Plastic reinforcement often is stronger, or at least has a better strength-to-weight ratio, than reinforcing steels. Also, because it resists corrosion, it does not need a protective concrete cover as thick as steel reinforcement does (typically 30 to 50 mm or more). FRP-reinforced structures therefore can be lighter and last longer. Accordingly, for some applications the whole-life cost will be price-competitive with steel-reinforced concrete.
The material properties of FRP or GRP bars differ markedly from steel, so there are differences in the design considerations. FRP or GRP bars have relatively higher tensile strength but lower stiffness, so that deflections are likely to be higher than for equivalent steel-reinforced units. Structures with internal FRP reinforcement typically have an elastic deformability comparable to the plastic deformability (ductility) of steel reinforced structures. Failure in either case is more likely to occur by compression of the concrete than by rupture of the reinforcement. Deflection is always a major design consideration for reinforced concrete. Deflection limits are set to ensure that crack widths in steel-reinforced concrete are controlled to prevent water, air or other aggressive substances reaching the steel and causing corrosion. For FRP-reinforced concrete, aesthetics and possibly water-tightness will be the limiting criteria for crack width control. FRP rods also have relatively lower compressive strengths than steel rebar, and accordingly require different design approaches for reinforced concrete columns .
One drawback to the use of FRP reinforcement is their limited fire resistance. Where fire safety is a consideration, structures employing FRP have to maintain their strength and the anchoring of the forces at temperatures to be expected in the event of fire. For purposes of fireproofing , an adequate thickness of cement concrete cover or protective cladding is necessary. The addition of 1 kg/m 3 of polypropylene fibers to concrete has been shown to reduce spalling during a simulated fire. [ 50 ] (The improvement is thought to be due to the formation of pathways out of the bulk of the concrete, allowing steam pressure to dissipate. [ 50 ] )
Another problem is the effectiveness of shear reinforcement. FRP rebar stirrups formed by bending before hardening generally perform relatively poorly in comparison to steel stirrups or to structures with straight fibers. When strained, the zone between the straight and curved regions is subject to strong bending, shear, and longitudinal stresses. Special design techniques are necessary to deal with such problems.
There is growing interest in applying external reinforcement to existing structures using advanced materials such as composite (fiberglass, basalt, carbon) rebar, which can impart exceptional strength. Worldwide, there are a number of brands of composite rebar recognized by different countries, such as Aslan, DACOT, V-rod, and ComBar. The number of projects using composite rebar increases day by day around the world, in countries ranging from USA, Russia, and South Korea to Germany. | https://en.wikipedia.org/wiki/Reinforced_concrete |
A reinforced concrete column is a structural member designed to carry compressive loads , composed of concrete with an embedded steel frame to provide reinforcement. For design purposes, the columns are separated into two categories: short columns and slender columns.
The strength of short columns is controlled by the strength of the material and the geometry of the cross section. Reinforcing rebar is placed axially in the column to provide additional axial stiffness. Accounting for the additional stiffness of the steel, the nominal loading capacity $P_n$ of the column can be expressed in terms of the maximum compressive stress of the concrete $f'_c$, the yield stress of the steel $f_y$, the gross cross-sectional area of the column $A_g$, and the total cross-sectional area of the steel rebar $A_{st}$ as

$P_n = 0.85 f'_c (A_g - A_{st}) + f_y A_{st}$
where the first term represents the load carried by the concrete and the second term represents the load carried by the steel. Because the yield strength of steel is an order of magnitude larger than that of concrete, a small addition of steel will greatly increase the strength of the column. [ 1 ]
To give a conservative estimate and build redundancies into the final structural system, the ACI Building Code Requirements give a maximum reduced design load of $\phi P_n$, where $\phi$ is the strength reduction factor for the type of column used. For spiral columns

$\phi P_{n,\mathrm{max}} = 0.85\,\phi\,[0.85 f'_c (A_g - A_{st}) + f_y A_{st}]$

where $\phi = 0.75$. For tied columns

$\phi P_{n,\mathrm{max}} = 0.80\,\phi\,[0.85 f'_c (A_g - A_{st}) + f_y A_{st}]$

where $\phi = 0.65$.
The additional reduction past the strength reduction factor is to account for any eccentricities in the loading of the column. Distributing a load toward one end of the column will produce a moment in the column and prevent the entire cross section from carrying the load, thus producing high stress concentrations towards that end of the column.
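A minimal numerical sketch of the short-column capacity expressions above, assuming the standard ACI-style reduction factors quoted; all input values are hypothetical and used only for illustration.

```python
import math

def nominal_capacity(fc_prime: float, fy: float, Ag: float, Ast: float) -> float:
    """P_n = 0.85*f'c*(Ag - Ast) + fy*Ast, with stresses in MPa and areas in mm^2 (result in N)."""
    return 0.85 * fc_prime * (Ag - Ast) + fy * Ast

def max_design_load(Pn: float, column_type: str = "tied") -> float:
    """Reduced design load phi*P_n,max for spiral or tied columns."""
    if column_type == "spiral":
        phi, alpha = 0.75, 0.85
    else:  # tied
        phi, alpha = 0.65, 0.80
    return phi * alpha * Pn

# Hypothetical 400 mm x 400 mm tied column with eight 20 mm bars,
# f'c = 30 MPa and fy = 420 MPa:
Ag = 400.0 * 400.0                  # gross area, mm^2
Ast = 8 * math.pi * 20.0**2 / 4.0   # steel area, mm^2
Pn = nominal_capacity(30.0, 420.0, Ag, Ast)
print(f"Nominal capacity P_n ~ {Pn / 1e3:.0f} kN")
print(f"Design load phi*P_n,max ~ {max_design_load(Pn, 'tied') / 1e3:.0f} kN")
```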
Spiral columns are cylindrical columns with a continuous helical bar wrapping around the column. The spiral acts to provide support in the transverse direction and prevent the column from barreling . The spiral reinforcement is required to provide additional load-carrying capacity greater than or equal to that contributed by the shell, to compensate for the strength lost when the shell spalls off. With further thickening of the spiral rebar, the axially loaded concrete becomes the weakest link in the system and the strength contribution from the additional rebar does not take effect until the column has failed axially. At that point, the additional strength from spiral reinforcement engages and prevents catastrophic failure, instead giving rise to a much slower ductile failure. [ 2 ]
The ACI Building Code Requirements put the following restrictions on amount of spiral reinforcement.
ACI Code 7.10.4.2: For cast-in-place construction, size of spirals shall not be less than 3/8 in. diameter.
ACI Code 7.10.4.3: Clear spacing between spirals shall not exceed 3 in., nor be less than 1 in.
Section 10.9.3 adds an additional lower limit to the amount of spiral reinforcement via the volumetric spiral reinforcement ratio $\rho_s$:

$\rho_s \geq 0.45\left(\frac{A_g}{A_{ch}} - 1\right)\frac{f'_c}{f_{yt}}$

where $A_{ch}$ is the shell area, the cross-sectional area measured to the outside edges of transverse reinforcement, and $f_{yt}$ is the yield strength of the spiral reinforcement. [ 3 ]
Tied columns have closed lateral ties spaced approximately uniformly across the column. The spacing of the ties is limited in that they must be close enough to prevent barreling failure between them, and far enough apart that they do not interfere with the setting of the concrete. The ACI codebook puts an upward limit on the spacing between ties.
ACI Code 7.10.5: Vertical spacing of ties shall not exceed 16 longitudinal bar diameters, 48 tie bar or wire diameters, or least dimension of the compression member.
If the ties are spaced too far apart, the column will experience shear failure and barrel in between the ties. [ 4 ]
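As a small worked example of the tie-spacing rule quoted above, the sketch below simply takes the minimum of the three limits; the bar sizes and column dimension are hypothetical.

```python
def max_tie_spacing(long_bar_dia_mm: float, tie_dia_mm: float, least_dimension_mm: float) -> float:
    """Maximum vertical tie spacing: min(16 * d_bar, 48 * d_tie, least column dimension)."""
    return min(16.0 * long_bar_dia_mm, 48.0 * tie_dia_mm, least_dimension_mm)

# Hypothetical column: 20 mm longitudinal bars, 10 mm ties, 350 mm least dimension.
print(max_tie_spacing(20.0, 10.0, 350.0))  # -> 320.0 mm (governed by 16 bar diameters)
```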
Columns qualify as slender when their cross-sectional area is very small in proportion to their length. Unlike short columns, slender columns are limited by their geometry and will buckle before the concrete or steel reinforcement yields.
There are analytical stress-strain models and damage indices for confined and unconfined concrete that make it possible, without any experimental test, to evaluate the stress-strain relationship and damage of confined and unconfined concrete situated inside and outside of stirrups when simulating reinforced concrete columns. For such models and simulations of columns subjected to cyclic and monotonic loading, refer to the following links: [ 5 ] [ 6 ] [ 7 ]
Machine learning (ML) is a subfield of artificial intelligence (AI) and an advanced form of data analysis and computation that employs the high processing speed and pattern recognition techniques of computers to extract knowledge from data. In other words, it is a computer programming technique inspired by AI that allows computers to improve their learning abilities through data supplies or data access, resembling the way human beings improve their intelligence in real life. ML is generally divided into four categories: supervised learning, semi-supervised learning, unsupervised learning and reinforcement learning. In supervised learning, the desired output is known by the trainer, where the trainer is the human being who can ascribe physical meaning to the data and characterize it by adding a tag or correcting system errors. The machine is trained on inputs with tags that are connected to a corresponding output. Through this process, the machine develops a predictive model for the connection of an input to a certain output. This does not differ from the way that knowledge is learned in a classroom, with a teacher available to correct any errors.
The mode of failure of structural members, such as reinforced concrete columns, depends on several factors, such as their geometric characteristics, the longitudinal reinforcement, the efficiency of confinement through the transverse reinforcement and the loading history. Their behavior throughout the loading range is controlled by competing mechanisms of resistance such as flexure, shear, buckling of longitudinal bars when they are subjected to compressive loads and, in the case of lap splices, the lap splice mechanism of the development of reinforcing bars. Very often, a combination of such mechanisms characterizes the macroscopic behavior of the column, especially in cases of cyclic load reversals. Various predictive models have been developed in the past to determine both the strength as well as the deformation capacity of the columns, with the uncertainty being at least one order of magnitude greater in terms of deformation capacity rather than strength, as evidenced by comparisons with test results. System identification and damage detection is a twofold area that utilizes ML to imitate a structural system and predict its deterministic seismic response. Laboratory tests of reinforced concrete (RC) structures have provided one source of data that enables ML methods to identify their failure modes, strength, capacities and constitutive behaviors [ 8 ]
"Reinforcing Mesh For Concrete" . "Standard Size Of Column" . | https://en.wikipedia.org/wiki/Reinforced_concrete_column |
The durability design of reinforced concrete structures has recently been introduced in national and international regulations. Structures are required to be designed to preserve their characteristics during the service life, avoiding premature failure and the need for extraordinary maintenance and restoration works. Considerable efforts have therefore been made in recent decades [ when? ] to define useful models describing the degradation processes affecting reinforced concrete structures, to be used during the design stage to assess the material characteristics and the structural layout of the structure. [ 1 ]
Initially, the chemical reactions that normally occur in the cement paste generate an alkaline environment, bringing the solution in the cement paste pores to pH values around 13. In these conditions, passivation of the steel rebar occurs, due to the spontaneous formation of a thin film of oxides able to protect the steel from corrosion. Over time, the thin film can be damaged, and corrosion of the steel rebar starts. The corrosion of steel rebar is one of the main causes of premature failure of reinforced concrete structures worldwide, [ 4 ] mainly as a consequence of two degradation processes, carbonation and penetration of chlorides . [ 1 ] With regard to the corrosion degradation process, a simple and accredited model for the assessment of the service life is the one proposed by Tuutti in 1982. [ 5 ] According to this model, the service life of a reinforced concrete structure can be divided into two distinct phases.
The identification of initiation time and propagation time is useful to further identify the main variables and processes influencing the service life of the structure which are specific of each service life phase and of the degradation process considered.
The initiation time is related to the rate at which carbonation propagates through the concrete cover thickness . Once carbonation reaches the steel surface, altering the local pH value of the environment, the protective thin film of oxides on the steel surface becomes unstable, and corrosion initiates, involving an extended portion of the steel surface. One of the most simplified and accredited models describing the propagation of carbonation in time considers the penetration depth proportional to the square root of time, following the correlation
$x = K\sqrt{t}$
where $x$ is the carbonation depth, $t$ is time, and $K$ is the carbonation coefficient. The corrosion onset takes place when the carbonation depth reaches the concrete cover thickness, and therefore can be evaluated as
$t_i = \left(\frac{c}{K}\right)^2$
where $c$ is the concrete cover thickness .
$K$ is the key design parameter for assessing the initiation time in the case of carbonation-induced corrosion. It is expressed in mm/year^(1/2) and depends on the characteristics of the concrete and the exposure conditions. The penetration of gaseous CO₂ in a porous medium such as concrete occurs via diffusion . The humidity content of concrete is one of the main factors influencing CO₂ diffusion in concrete. If the concrete pores are completely and permanently saturated (for instance in submerged structures ), CO₂ diffusion is prevented. On the other hand, for completely dry concrete, the chemical reaction of carbonation cannot occur. Another influencing factor for the CO₂ diffusion rate is concrete porosity . Concrete obtained with a higher w/c ratio or with an incorrect curing process presents higher porosity in the hardened state, and is therefore subject to a higher carbonation rate. The influencing factors concerning the exposure conditions are the environmental temperature, humidity and concentration of CO₂. The carbonation rate is higher for environments with higher humidity and temperature, and increases in polluted environments such as urban centres and inside closed spaces such as tunnels. [ 1 ]
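As an illustration of how this simplified model can be applied, the sketch below evaluates the carbonation depth and the corresponding initiation time for an assumed carbonation coefficient and cover depth; the numerical values are hypothetical and chosen only for the example.

```python
import math

def carbonation_depth_mm(K_mm_per_sqrt_year: float, t_years: float) -> float:
    """Carbonation depth x = K * sqrt(t)."""
    return K_mm_per_sqrt_year * math.sqrt(t_years)

def carbonation_initiation_time_years(cover_mm: float, K_mm_per_sqrt_year: float) -> float:
    """Initiation time t_i = (c / K)^2."""
    return (cover_mm / K_mm_per_sqrt_year) ** 2

# Hypothetical values: K = 6 mm/year^0.5 and a 30 mm concrete cover.
K, cover = 6.0, 30.0
print(f"Carbonation depth after 20 years: {carbonation_depth_mm(K, 20):.1f} mm")
print(f"Estimated corrosion initiation time: {carbonation_initiation_time_years(cover, K):.0f} years")
```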
To evaluate the propagation time in the case of carbonation-induced corrosion , several models have been proposed. In a simplified but commonly accepted method, the propagation time is evaluated as a function of the corrosion propagation rate. If the corrosion rate is considered constant, $t_p$ can be estimated as:
$t_p = \frac{p_{lim}}{v_{corr}}$
where $p_{lim}$ is the limit corrosion penetration in the steel and $v_{corr}$ is the corrosion propagation rate. [ 1 ] $p_{lim}$ must be defined as a function of the limit state considered. Generally, for carbonation-induced corrosion, cracking of the concrete cover is considered as the limit state, and in this case a $p_{lim}$ equal to 100 μm is adopted. [ 6 ] $v_{corr}$ depends on the environmental factors in the proximity of the corrosion process, such as the availability of oxygen and water at the concrete cover depth. Oxygen is generally available at the steel surface, except for submerged structures. If the pores are constantly fully saturated, a very low amount of oxygen reaches the steel surface and the corrosion rate can be considered negligible. [ 7 ] For very dry concrete, $v_{corr}$ is negligible due to the absence of water, which prevents the chemical reaction of corrosion . For intermediate concrete humidity contents, the corrosion rate increases with increasing humidity content. Since the humidity content of a concrete can vary significantly over the year, it is generally not possible to define a constant $v_{corr}$. One possible approach is to consider a mean annual value of $v_{corr}$.
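Following the same simplified approach, the propagation time and a total service life (initiation plus propagation) can be estimated as in the sketch below; the corrosion rate is an illustrative assumption, and the 100 µm penetration limit is the value suggested above for cover cracking.

```python
def propagation_time_years(p_lim_um: float, v_corr_um_per_year: float) -> float:
    """t_p = p_lim / v_corr."""
    return p_lim_um / v_corr_um_per_year

t_i = 25.0                                # years, e.g. from the carbonation sketch above
t_p = propagation_time_years(100.0, 5.0)  # assumed mean corrosion rate of 5 um/year
print(f"Propagation time: {t_p:.0f} years; total service life: {t_i + t_p:.0f} years")
```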
The presence of chlorides at the steel surface, above a certain critical amount, can locally break the protective thin film of oxides on the steel surface, even if the concrete is still alkaline, causing a very localized and aggressive form of corrosion known as pitting . Current regulations forbid the use of chloride-contaminated raw materials, so one factor influencing the initiation time is the rate of chloride penetration from the environment. Assessing this rate is a complex task, because chloride solutions penetrate concrete through the combination of several transport phenomena, such as diffusion , the capillary effect and hydrostatic pressure . Chloride binding is another phenomenon affecting the kinetics of chloride penetration. Part of the total chloride ions can be absorbed or can react chemically with some constituents of the cement paste, leading to a reduction of chlorides in the pore solution (the free chlorides that are still able to penetrate into the concrete). The chloride-binding ability of a concrete is related to the cement type, being higher for blended cements containing silica fume, fly ash or furnace slag.
Since the modelling of chloride penetration in concrete is particularly complex, a simplified correlation is generally adopted, first proposed by Collepardi in 1972: [ 8 ]
$C(x,t) = C_s\left[1 - \operatorname{erf}\left(\frac{x}{2\sqrt{Dt}}\right)\right]$
where $C_s$ is the chloride concentration at the exposed surface, $x$ is the chloride penetration depth, $D$ is the chloride diffusion coefficient, and $t$ is time.
This equation is a solution of Fick's second law of diffusion under the hypotheses that the initial chloride content is zero, that $C_s$ is constant in time over the whole surface, and that $D$ is constant in time and through the concrete cover. With $C_s$ and $D$ known, the equation can be used to evaluate the temporal evolution of the chloride concentration profile in the concrete cover and to evaluate the initiation time as the moment at which the critical chloride threshold ($C_{cl}$) is reached at the depth of the steel rebar.
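A minimal sketch of how this solution can be used in practice: given assumed values of the surface concentration, the diffusion coefficient and the critical threshold, it evaluates the chloride profile and searches for the year in which the threshold is reached at the rebar depth. All numerical values are hypothetical.

```python
import math

def chloride_concentration(x_mm: float, t_years: float, Cs: float, D_mm2_per_year: float) -> float:
    """C(x, t) = Cs * [1 - erf(x / (2 * sqrt(D * t)))], Collepardi-type solution."""
    return Cs * (1.0 - math.erf(x_mm / (2.0 * math.sqrt(D_mm2_per_year * t_years))))

def chloride_initiation_time(cover_mm: float, Cs: float, D: float, C_cl: float, t_max: int = 200):
    """First year (integer) in which C(cover, t) reaches the critical threshold C_cl."""
    for t in range(1, t_max + 1):
        if chloride_concentration(cover_mm, t, Cs, D) >= C_cl:
            return t
    return None  # threshold not reached within t_max years

# Hypothetical inputs: Cs = 0.6 % by cement weight, D = 30 mm^2/year,
# 30 mm cover, critical chloride threshold 0.4 % by cement weight.
print(chloride_initiation_time(30.0, 0.6, 30.0, 0.4))
```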
However, there are many critical issues related to the practical use of this model. For existing reinforced concrete structures in a chloride-bearing environment, $C_s$ and $D$ can be identified by calculating the best-fit curve for measured chloride concentration profiles. From concrete samples retrieved in the field it is therefore possible to define the values of $C_s$ and $D$ for the evaluation of the residual service life. [ 9 ] On the other hand, for new structures it is more complicated to define $C_s$ and $D$. These parameters depend on the exposure conditions, the properties of the concrete such as its porosity (and therefore the w/c ratio and curing process) and the type of cement used. Furthermore, for the evaluation of the long-term behaviour of the structure, a critical issue is that $C_s$ and $D$ cannot be considered constant in time, and that the transport of chlorides can be considered pure diffusion only for submerged structures.
A further issue is the assessment of $C_{cl}$. There are various influencing factors, such as the potential of the steel rebar and the pH of the solution contained in the concrete pores. Moreover, pitting corrosion initiation is a phenomenon of stochastic nature, so $C_{cl}$ can also be defined only on a statistical basis. [ 1 ]
The durability assessment was introduced in European design codes at the beginning of the 1990s. Designers are required to include the effects of long-term corrosion of the steel rebar during the design stage, in order to avoid unacceptable damage during the service life of the structure. Different approaches are available for durability design.
This is the standardized method of dealing with durability, also known as the deemed-to-satisfy approach, provided by the current European regulation EN 206. The designer is required to identify the environmental exposure conditions and the expected degradation processes, assigning the correct exposure class. Once this is defined, the design code gives standard prescriptions for the w/c ratio, the cement content, and the thickness of the concrete cover.
This approach represents an improvement in the durability design of reinforced concrete structures and is suitable for the design of ordinary structures built with traditional materials (Portland cement, carbon steel rebar) and with an expected service life of 50 years. Nevertheless, it is not considered completely exhaustive in some cases. The simple prescriptions do not allow the design to be optimized for different parts of the structure with different local exposure conditions, nor do they make it possible to consider the effects on service life of special measures such as the use of additional protections. [ 6 ]
Performance-based approaches provide a true durability design, based on models describing the evolution in time of the degradation processes and on the definition of the times at which defined limit states will be reached. To account for the wide variety of factors influencing service life and their variability, performance-based approaches address the problem from a probabilistic or semi-probabilistic point of view.
The performance-based service life model proposed by the European project DuraCrete [ 10 ] and by the FIB Model Code for Service Life Design [ 11 ] is based on a probabilistic approach, similar to the one adopted for structural design. Environmental factors are considered as loads S(t), while material properties such as chloride penetration resistance are considered as resistances R(t). For each degradation process, design equations are set up to evaluate the probability of failure of predefined performances of the structure, where the acceptable probability is selected on the basis of the limit state considered. The degradation processes are still described with the models previously defined for carbonation-induced and chloride-induced corrosion, but to reflect the statistical nature of the problem, the variables are treated as probability distributions over time. [ 6 ] To assess some of the durability design parameters, the use of accelerated laboratory tests is suggested, such as the so-called Rapid Chloride Migration Test for evaluating the chloride penetration resistance of concrete. [ 11 ] Through the application of corrective parameters, the long-term behaviour of the structure in real exposure conditions may be evaluated.
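To make the probabilistic idea more tangible, the following sketch runs a crude Monte Carlo simulation in which the cover depth, surface chloride content and diffusion coefficient are sampled from assumed distributions, and the probability of corrosion initiation within the design service life is estimated. The distributions, parameter values and fixed threshold are purely illustrative assumptions, not values taken from any code or from the models cited above.

```python
import math
import random

def chloride_at_rebar(cover_mm: float, t_years: float, Cs: float, D: float) -> float:
    """Chloride content at the rebar depth from the error-function solution used above."""
    return Cs * (1.0 - math.erf(cover_mm / (2.0 * math.sqrt(D * t_years))))

def probability_of_initiation(service_life: float = 50.0, n_samples: int = 100_000, seed: int = 0) -> float:
    """Fraction of sampled cases in which the critical chloride content is exceeded
    at the rebar within the service life (illustrative model only)."""
    rng = random.Random(seed)
    C_cl = 0.4                                        # critical threshold, % by cement weight (assumed fixed)
    failures = 0
    for _ in range(n_samples):
        cover = rng.gauss(45.0, 8.0)                  # mm, assumed normal
        Cs = rng.gauss(0.6, 0.15)                     # % by cement weight, assumed normal
        D = rng.lognormvariate(math.log(60.0), 0.4)   # mm^2/year, assumed lognormal
        if cover <= 0 or chloride_at_rebar(cover, service_life, Cs, D) >= C_cl:
            failures += 1
    return failures / n_samples

print(f"Estimated probability of initiation within 50 years: {probability_of_initiation():.1%}")
```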
The use of probabilistic service life models makes it possible to carry out a true durability design in the design stage of structures. This approach is of particular interest when an extended service life is required (>50 years) or when the environmental exposure conditions are particularly aggressive. However, the applicability of this kind of model is still limited. The main critical issues concern, for instance, the identification of accelerated laboratory tests able to characterize concrete performance, reliable corrective factors to be used for the evaluation of long-term durability performance and the validation of these models against real long-term durability performance. [ 6 ] [ 9 ] | https://en.wikipedia.org/wiki/Reinforced_concrete_structures_durability |
Reinforced lipids are lipid molecules in which some of the fatty acids contain deuterium . They can be used for the protection of living cells by slowing the chain reaction of lipid peroxidation , owing to the isotope effect . [ 1 ] Polyunsaturated fatty acids (PUFAs) are key components of the lipid bilayers of cell and organelle membranes. Any process that either increases oxidation of PUFAs or hinders their ability to be replaced can lead to serious disease. Correspondingly, the use of reinforced lipids that stop the chain reaction of lipid peroxidation has preventive and therapeutic potential.
There are a number of polyunsaturated fatty acids that can be reinforced by deuteration. [ 2 ] They include (the names of the reinforced deuterated versions are separated by a slash):
Hydrogen is a chemical element with atomic number 1. It has just one proton and one electron. Deuterium is the heavier naturally occurring, stable isotope of hydrogen. Deuterium contains one proton, one electron, and a neutron, doubling the mass without changing its properties significantly. Substituting deuterium for hydrogen yields deuterated compounds that are similar in size and shape to normal hydrogen compounds.
One of the most pernicious and irreparable types of oxidative damage inflicted by reactive oxygen species (ROS) upon biomolecules involves the carbon-hydrogen bond cleavage (hydrogen abstraction). In theory, replacing hydrogen with deuterium "reinforces" the bond due to the kinetic isotope effect , and such reinforced biomolecules taken up by the body will be more resistant to ROS. [ 3 ]
Deuterium-reinforced lipids resist non-enzymatic lipid peroxidation (LPO) through the isotope effect — a non-antioxidant-based mechanism that protects mitochondrial, neuronal and other lipid membranes, thereby greatly reducing the levels of numerous LPO-derived toxic products such as reactive carbonyls . [ 4 ] [ 5 ]
Treating cells with deuterium-containing PUFAs (D-PUFAs) can prevent ferroptosis . This treatment stops the autoxidation process through the kinetic isotope effect (KIE). The efficacy of D-PUFAs in preventing ferroptosis has been demonstrated in models induced by erastin and RSL3, and has shown promising results in various disease models, especially those related to neurodegenerative disorders. [ 6 ]
The concept of using reinforced lipids to inhibit lipid peroxidation has been tested in numerous cell and animal models, including:
A double-blind comparator-controlled Phase I/II clinical trial of using D 2 -linoleic acid ethyl ester (RT001) for Friedreich's ataxia , sponsored by Retrotope and Friedreich's Ataxia Research Alliance , was conducted to determine the safety profile and appropriate dosing for subsequent trials. [ 11 ] RT001 was promptly absorbed and was found to be safe and tolerable over 28 days at the maximal dose of 9 g/day. It improved peak workload and peak oxygen consumption in the test group compared to the control group, who received equal doses of normal, non-deuterated linoleic acid ethyl ester. [ 12 ] Another randomised, double-blind, placebo-controlled clinical study began in 2019. [ 13 ]
An open-label clinical study for infantile neuroaxonal dystrophy , evaluating the long-term efficacy, safety, tolerability, and pharmacokinetics of RT001, which, when taken with food, can protect neuronal cells from degeneration, started in the summer of 2018. [ 14 ]
In 2017, the FDA granted RT001 orphan drug designation in the treatment of phospholipase 2G6 -associated neurodegeneration ( PLAN ). [ 15 ]
In 2018, RT001 was given to a patient with amyotrophic lateral sclerosis (ALS) under a "compassionate use scheme". [ 16 ]
In 2020, the FDA granted RT001 orphan drug designation for the treatment of patients with progressive supranuclear palsy (PSP). PSP is a disease involving modification and dysfunction of tau protein ; RT001's mechanism of action both lowers lipid peroxidation and prevents the mitochondrial cell death of neurons that is associated with disease onset and progression. [ 17 ] | https://en.wikipedia.org/wiki/Reinforced_lipids
Reinforced thermoplastic pipe ( RTP ) is a type of pipe reinforced with a high-strength synthetic fibre such as glass, aramid or carbon. It was initially developed in the early 1990s by Wavin Repox, Akzo Nobel and Tubes d'Aquitaine of France, which produced the first pipes reinforced with synthetic fibre to replace medium-pressure steel pipes in response to growing demand for non-corrosive conduits for application in the onshore oil and gas industry, particularly in the Middle East. [ 1 ] Typically, the materials used in the construction of the pipe might be Polyethylene (PE), Polyamide -11 or PVDF, reinforced with Aramid or Polyester fibre, although other combinations are used. [ 2 ] More recently the technology of producing such pipe, including the marketing, rests with a few key companies, and the pipe is available in coils up to 400 m (1,312 ft) in length. These pipes are available in pressure ratings from 30 to 90 bar (3 to 9 MPa; 435 to 1,305 psi). Over the last few years [ when? ] this type of pipe has been acknowledged as a standard alternative to steel for oilfield flowline applications by certain oil companies and operators. [ 3 ] A further advantage of this pipe is its very fast installation time compared to welded steel pipe: average speeds of up to 1,000 m (3,281 ft) per day have been reached installing RTP at ground surface. [ 4 ]
Primarily, the pipe provides benefit to applications where steel may rupture due to corrosion and installation time is an issue.
The idea of synthetic fibre reinforced pipe has its origins in the flexible hose and offshore industry, where it has frequently been used for applications such as control lines in umbilicals and production flowlines for over 30 years. However, the commercialisation and realisation of a competitive product for the onshore oil industry came from a partnership between Teijin Aramid (supplier of the aramid fibre Twaron ) and Wavin Repox (manufacturer of reinforced thermoset pipes), where Bert Dalmolen initiated a project to develop such a pipe. He was later employed by Pipelife, where a state-of-the-art production line was developed to produce RTP. Pipelife also developed a pipe reinforced with steel wire to achieve even higher pressure ratings of over 150 bar (15 MPa; 2,176 psi). Mr Chevrier (Tubes d'Aquitaine) also developed machinery that could produce such pipes, but was not successful in commercialising RTP. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Reinforced_thermoplastic_pipe
The reinforcement of 3D printed concrete is a mechanism whereby the ductility and tensile strength of printed concrete are improved using various reinforcing techniques, including reinforcing bars , meshes , fibers , or cables. The reinforcement of 3D printed concrete is important for the large-scale use of the new technology, as in the case of ordinary concrete . With a multitude of additive manufacturing applications in the concrete construction industry—specifically the use of additively constructed concrete in the manufacture of structural concrete elements [ 1 ] —the reinforcement and anchorage technologies vary significantly. Even for non-structural elements, the use of non-structural reinforcement such as fiber reinforcement is not uncommon. [ 2 ] The lack of formwork in most 3D printed concrete makes the installation of reinforcement complicated. Early phases of research in concrete 3D printing primarily focused on developing the material technologies of the cementitious/concrete mixes. These factors, combined with the absence of code provisions on reinforcement and anchorage for printed elements, explain the limited awareness and usage of the various reinforcement techniques in additive manufacturing. [ 3 ] [ 4 ] The material extrusion-based printing of concrete is currently favorable both in terms of availability of technology and cost-effectiveness. Therefore, most of the reinforcement techniques developed or currently under development are suited to the extrusion-based 3D printing technology. [ 5 ]
The reinforcement in concrete 3D printing, much like that in conventional concrete, can be classified based either on the method of placement or the method of action. The methods of placement of reinforcement are pre-installation, co-installation, and post-installation. [ 6 ] Examples of each are pre-installed meshes, fibers mixed with the concrete, and post-tensioning cables, respectively. The classification based on structural action is once again the same as in conventional concrete: examples of passive and active reinforcement in 3D printed concrete are reinforcement bars and post-tensioning cables used to prestress segmental elements, respectively. The majority of the reinforcement in concrete has conventionally been steel, and this continues to be the case in 3D printed concrete. Alternative composite materials, such as FRPs and fibers of glass , basalt etc. in the mix, have gained considerable prominence. [ 7 ]
The high availability and popularity of deformed bars or rebars as passive structural reinforcement in conventional concrete systems make them sought after in printed concrete. They can be welded together to form trusses laid between layers, a very effective co-installed reinforcement strategy that requires no formworks . [ 8 ] [ 9 ] They can also be erected into reinforcement cages around which concrete is printed to form wall and beam elements, making rebars an effective pre-installation strategy. [ 10 ]
The rebar-based formative skeletal structure can also act as a core on which printable concrete is shotcreted in a new method developed at TU Braunschweig . [ 11 ]
The rebar cages can also be installed inside printed concrete formworks in non-structural members, with the cavities then filled with grout . This method of post-installed reinforcement has proven to be cost-effective; however, it requires attention to the interface between the steel and the printed concrete. [ 12 ] The use of printed concrete as formwork requires a higher tensile hoop strength of the concrete, which can be provided by the use of fibers in the mix. [ 13 ]
Smart Dynamic Casting (SDC), a new printing technology being developed in ETH Zurich , combines slipforming and printing material technologies to produce varied cross-sections and complex geometries using very little formwork . [ 7 ] Reinforcement bars are pre-installed, just like in the case of conventionally cast concrete, and the rheology of the concrete is adapted to retain the shape of the slipforming formwork before concrete hydrates enough to sustain self-weight. [ 14 ] Concrete facade mullions of varying cross-sections are produced for a DFAB house [ 15 ] in Switzerland.
Similar to rebars, reinforcement meshes are also popular as a passive reinforcement technique. The welded wire meshes are laid in between printed layers of slabs without requiring any formwork. They can also be used to print wall elements that are fabricated laterally and erected in place. Unlike with rebars, spools of mesh can be unwound simultaneously ahead of the printer nozzle to provide both horizontal and vertical reinforcement to the printed elements. This method not only acts as reinforcement in the hardened state of the concrete but also compensates for the lack of formwork in the fresh state. [ 16 ]
High-strength galvanised steel cables provide effective reinforcement in printed concrete elements where sufficient cover concrete cannot be provided owing to the complexity of the shape. [ 3 ] The cables can either be laid in-between layers or extruded simultaneously like the meshes. The bond between high-strength steel cables and concrete needs special attention. [ 17 ]
Continuous yarns of glass, basalt, high-performance polymer or carbon can also be used effectively as reinforcement for 3D-printed concrete without the need for additional motors. [ 18 ] The technique takes advantage of the consistency of the extruded concrete to passively pultrude numerous continuous yarns. The obtained material is a unidirectional cementitious composite with an increase in strength and ductility in the extrusion direction that depends on the proportion of fiber. Thanks to the small diameter of the yarns used, their bond with the matrix is usually excellent. Furthermore, the process takes advantage of the small bending stiffness of the yarn to ensure the same geometric freedom, with extended buildability made possible by the early tensile strength provided by the yarn during printing. This comes at the cost of a more complex extrusion nozzle and a specific device for handling the numerous yarns.
The automated fabrication of elements realises its true potential when printed segmental elements are fitted in place using post-tensioning . The concrete segments are printed leaving holes for the post-tensioning cables, which not only act as active reinforcement but also help in connecting the segmental elements to form a load-bearing structure. The holes left for the cables are filled with grout after the cables are tensioned. [ 19 ] A bicycle bridge has been constructed at TU Eindhoven by printing segments that are post-tensioned using high-strength cables running perpendicular to the printing direction. [ 20 ] [ 21 ] The post-tensioning technology has considerable potential as a reinforcement strategy in additively manufactured concrete systems. [ 20 ]
The use of fibers in the mix has several advantages, as in the case of conventional concrete. The higher cement content and faster hydration rate requirements of printed concrete make it susceptible to shrinkage cracking and thermal stresses. The use of fibers (structural or non-structural) can counter these significantly. [ 22 ] Fiber reinforcement is also useful in printing shell structures, as the tensile membrane action required to convert bending moment into axial force is possible only with tough, high-stiffness concrete. [ 13 ] Fibers, when aligned, can provide this required higher toughness and stiffness. [ 23 ] The flexural tensile strength is also improved with the addition of structural steel or PVA fibers. [ 24 ] These properties make fiber-reinforced concrete a suitable material for printing formworks. The cohesiveness of concrete in the fresh state, which is crucial for printing, can be improved by using non-structural fibers such as polypropylene or basalt . The use of fiber reinforcement in 3D printing creates a much-needed segue into the field of ultra-high performance concretes with enhanced strength and durability , crucial in aesthetic slender elements. [ 22 ]
Anchor connectors are installed in truss elements with the aim of connecting them to similar units using exposed threaded bars. This reinforcing technique has the advantage of faster fabrication of lightweight units that can be arranged in a free-form manner on-site, depending on the requirement. [ 25 ] The exposed reinforcement might face corrosion issues when installed in outdoor environments. Topologically optimised truss shapes, in which force follows form, can be created and used to save material and, in turn, construction costs. The anchors can be connected by both in-plane and out-of-plane threaded rebars to create elements beyond simple beams and arches. [ 25 ]
Bamboo reinforcement, including bamboo wrapped in steel wires, has been proposed as reinforcement for traditional concrete elements as early as 2005, [ 26 ] with recent studies suggesting possible applications in 3D-printed concrete. This technique has the advantage of producing potentially 50 times less carbon emissions than traditional steel reinforcement techniques. One drawback of this method is potential durability issues, as the organic nature of bamboo makes it vulnerable to pests and decomposition. Proper treatment of the material can circumvent this issue and preserve the bamboo reinforcement for as long as 15 years. [ 27 ]
Interface ties and staples are sometimes used to improve the bonding between printed layers. [ 28 ] Ladder wire is used to reinforce printed elements to improve horizontal bending. Print stabilisers are used to prevent the elastic buckling of printed layers during the printing process. Welded/printed reinforcement is a technology being developed at TU Braunschweig where the steel reinforcements are simultaneously printed using gas metal arc welding. [ 29 ]
Each reinforcement technology is usually more effective when used in conjunction with another reinforcing technology, leaving a lot of scope for research and development. The mesh mould technology can be combined with SDC to produce highly automated elements faster. The printable Fiber Reinforced Concrete (FRC) technology can be combined with most other reinforcement techniques seamlessly to produce a highly durable concrete structure. Fiber-reinforced concrete, when used to print formwork, has a higher resistance to hoop stresses owing to higher filament strengths. Meshes and bar cages are almost always combined in large-scale construction projects. [ 3 ] | https://en.wikipedia.org/wiki/Reinforcement_in_concrete_3D_printing
Reinhart Ahlrichs (16 January 1940 – 12 October 2016) was a German theoretical chemist . [ 1 ]
Ahlrichs was born on 16 January 1940 in Göttingen . He studied Physics at the University of Göttingen (Diplom (M.Sc.) in 1965) and received his PhD in 1968 under W. A. Bingel . From 1968 to 1969 he was an assistant at Göttingen with Werner Kutzelnigg and from 1969 to 1970 a Postdoctoral Fellow with C. C. J. Roothaan at the University of Chicago .
After a period as an assistant in Karlsruhe from 1970 to 1975, he was Professor of Theoretical Chemistry at the University of Karlsruhe . He also headed a research group at the INT. [ 2 ]
His group developed the program TURBOMOLE . | https://en.wikipedia.org/wiki/Reinhart_Ahlrichs |
Reinhold Baer (22 July 1902 – 22 October 1979) was a German mathematician , known for his work in algebra . He introduced injective modules in 1940. He is the eponym of Baer rings , Baer groups , and Baer subplanes .
Baer studied mechanical engineering for a year at Leibniz University Hannover . He then went to study philosophy at Freiburg in 1921. While he was at Göttingen in 1922 he was influenced by Emmy Noether and Hellmuth Kneser . In 1924 he won a scholarship for specially gifted students. Baer wrote up his doctoral dissertation and it was published in Crelle's Journal in 1927.
Baer accepted a post at Halle in 1928. There, he published Ernst Steinitz 's "Algebraische Theorie der Körper" with Helmut Hasse , first published in Crelle's Journal in 1910. [ 1 ]
While Baer was with his wife in Austria , Adolf Hitler and the Nazis came into power. Both of Baer's parents were Jewish, and he was for this reason informed that his services at Halle were no longer required. Louis Mordell invited him to go to Manchester and Baer accepted.
Baer stayed at Princeton University and was a visiting scholar at the nearby Institute for Advanced Study from 1935 to 1937. [ 2 ] For a short while he lived in North Carolina . From 1938 to 1956 he worked at the University of Illinois at Urbana-Champaign . He returned to Germany in 1956.
According to biographer K. W. Gruenberg,
He died of heart failure on 22 October 1979.
In 2016 the Reinhold Baer Prize for the best Ph.D. thesis in group theory was set up in his honour. [ 4 ] | https://en.wikipedia.org/wiki/Reinhold_Baer |
Reinke's space is a potential space between the vocal ligament and the overlying mucosa. [ 1 ] It is not an empty space, but contains cells, special fibers and extracellular matrix . It plays an important role in the vibration of the vocal cords. Edema of this space is called Reinke's edema . | https://en.wikipedia.org/wiki/Reinke's_space |
Reinnervation is the restoration, either by spontaneous cellular regeneration or by surgical grafting , of nerve supply to a body part from which it has been lost or damaged. [ 1 ] [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Reinnervation
In the fields of Big Bang theory and cosmology , reionization is the process that caused electrically neutral atoms in the primordial universe to reionize after the lapse of the " dark ages ".
This reionization was driven by the formation of the first stars and galaxies.
Detecting and studying the reionization process is challenging, but multiple avenues have been pursued.
Reionization refers to a change in the intergalactic medium from neutral hydrogen to ions. The hydrogen had existed as ions at an earlier stage in the history of the universe, so the conversion back into ions is termed a re-ionization. The reionization was driven by energetic photons emitted by the first stars and galaxies. [ 1 ]
In the timeline of the universe, neutral hydrogen gas was originally formed when primordial hydrogen nuclei (protons) combined with electrons. Light with sufficient energy will ionize neutral hydrogen gas. At early times, light was so dense and energetic that hydrogen atoms would be immediately re-ionized. As the universe expanded and cooled, the rate of recombination of electrons and protons to form neutral hydrogen was higher than the ionization rate. At around 379,000 years after the Big Bang ( redshift z = 1089), this recombination left most normal matter in the form of neutral hydrogen. [ 2 ]
The universe was opaque before the recombination, due to the scattering of photons of all wavelengths off free electrons (and free protons, to a significantly lesser extent), but it became increasingly transparent as more electrons and protons combined to form neutral hydrogen atoms. While the electrons of neutral hydrogen can absorb photons of some wavelengths by rising to an excited state , a universe full of neutral hydrogen will be relatively opaque only at those few wavelengths. The remaining light could travel freely and become the cosmic microwave background radiation . The only other light at this point would be provided by those excited hydrogen atoms, marking the beginning of an era called the Dark Ages of the universe. [ 3 ]
The second phase change occurred once objects started to form in the early universe that emitted radiation energetic enough to re-ionize neutral hydrogen. As these objects formed and radiated energy, the universe reverted from being composed of neutral atoms to once again being an ionized plasma . This occurred between 150 million and one billion years after the Big Bang (at a redshift 20 > z > 6). [ 3 ] : 150 At that time, however, matter had been diluted by the expansion of the universe, and the scattering interactions of photons and electrons were much less frequent than before electron-proton recombination. Thus, the universe was full of low-density ionized hydrogen and remained transparent, as is the case today.
It is believed that the primordial helium also experienced a similar reionization phase change, but at a later epoch in the history of the universe. [ 4 ]
Theoretical models give a timeline of the reionization process.
In the first stage of reionization, each new star is surrounded by neutral hydrogen. Light emitted by the star ionizes the gas immediately around it, and the light can then reach further out to ionize more gas. The ions can recombine, competing with the ionization process. The ionized gas is hot and expands, clearing out the region around the star. The sphere of ionized gas expands until the amount of light from the star that can cause ionizations balances the recombination, a process that takes hundreds of millions of years. (The time is so long that stars die before the ionized region around them reaches its full extent.) At some point the shells of ionization from the stars in a galaxy begin to overlap, and the ionization frontier pushes out into the intergalactic medium. [ 3 ]
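The balance between ionization and recombination described above can be illustrated with the textbook Strömgren-sphere estimate, which gives the equilibrium radius of the ionized region around a single source. The sketch below uses assumed, order-of-magnitude values for the ionizing-photon rate and gas density; the real cosmological problem also involves cosmic expansion and gas clumping, which this estimate ignores.

```python
# Rough sketch of the ionization-recombination balance using the
# Stromgren-sphere estimate. Source rate and density are assumed values.
import math

Q_star = 1e50        # ionizing photons per second from the source (assumed, ~a small star cluster)
n_H = 1.0            # hydrogen number density in cm^-3 (assumed)
alpha_B = 2.6e-13    # case-B recombination coefficient in cm^3 s^-1 (~10^4 K gas)

# Radius at which ionizations balance recombinations
R_s_cm = (3.0 * Q_star / (4.0 * math.pi * n_H**2 * alpha_B)) ** (1.0 / 3.0)
R_s_pc = R_s_cm / 3.086e18   # convert cm to parsecs
print(f"Equilibrium ionized-sphere radius: {R_s_pc:.0f} pc")
```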
Looking back so far in the history of the universe presents some observational challenges. There are, however, a few observational methods for studying reionization.
One means of studying reionization uses the spectra of distant quasars . Quasars release an extraordinary amount of energy, being among the brightest objects in the universe. As a result, some quasars are detectable from as long ago as the epoch of reionization. Quasars also happen to have relatively uniform spectral features, regardless of their position in the sky or distance from the Earth . Thus it can be inferred that any major differences between quasar spectra will be caused by the interaction of their emission with atoms along the line of sight. For wavelengths of light at the energies of one of the Lyman transitions of hydrogen, the scattering cross-section is large, meaning that even for low levels of neutral hydrogen in the intergalactic medium (IGM), absorption at those wavelengths is highly likely.
For nearby objects in the universe, spectral absorption lines are very sharp, as only photons with energies just right to cause an atomic transition can cause that transition. However, the large distances between the quasars and the telescopes which detect them mean that the expansion of the universe causes light to undergo noticeable redshifting. This means that as light from the quasar travels through the IGM and is redshifted, wavelengths which had been below the Lyman-alpha wavelength are stretched, and will at some point be just equal to the wavelength needed for the Lyman-alpha transition. This means that instead of showing sharp spectral absorption lines, a quasar's light which has traveled through a large, spread-out region of neutral hydrogen will show a Gunn-Peterson trough . [ 5 ]
The redshifting for a particular quasar provides temporal information about reionization. Since an object's redshift corresponds to the time at which it emitted the light, it is possible to determine when reionization ended. Quasars below a certain redshift (closer in space and time) do not show the Gunn-Peterson trough (though they may show the Lyman-alpha forest ), while quasars emitting light prior to reionization will feature a Gunn-Peterson trough. In 2001, four quasars were detected by the Sloan Digital Sky Survey with redshifts ranging from z = 5.82 to z = 6.28. While the quasars above z = 6 showed a Gunn-Peterson trough, indicating that the IGM was still at least partly neutral, the ones below did not, meaning the hydrogen was ionized. As reionization is expected to occur over relatively short timescales, the results suggest that the universe was approaching the end of reionization at z = 6. [ 6 ] This, in turn, suggests that the universe must still have been almost entirely neutral at z > 10. On the other hand, long absorption troughs persisting down to z < 5.5 in the Lyman-alpha and Lyman-beta forests suggest that reionization potentially extends later than z = 6. [ 7 ] [ 8 ]
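A small illustrative calculation shows how cosmological redshift moves the Lyman-alpha resonance, which is the effect underlying the Gunn-Peterson trough. The redshift values are taken from the survey results quoted above; everything else is a simple kinematic sketch.

```python
# Sketch of how redshift moves the Lyman-alpha transition, as used in
# Gunn-Peterson trough arguments. Purely illustrative.
LYMAN_ALPHA_NM = 121.567   # rest-frame Lyman-alpha wavelength

def observed_wavelength(rest_nm, z):
    """Wavelength after cosmological redshift z."""
    return rest_nm * (1.0 + z)

for z in (5.82, 6.28):
    print(f"z = {z}: Lyman-alpha observed at {observed_wavelength(LYMAN_ALPHA_NM, z):.0f} nm")

# Quasar light emitted blueward of 121.6 nm is progressively redshifted into the
# Lyman-alpha resonance of intervening neutral hydrogen along the line of sight,
# producing an extended absorption trough rather than discrete lines.
```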
The anisotropy of the cosmic microwave background on different angular scales can also be used to study reionization. Photons undergo scattering when there are free electrons present, in a process known as Thomson scattering . However, as the universe expands, the density of free electrons will decrease, and scattering will occur less frequently. In the period during and after reionization, but before significant expansion had occurred to sufficiently lower the electron density, the light that composes the CMB will experience observable Thomson scattering. This scattering will leave its mark on the CMB anisotropy map, introducing secondary anisotropies (anisotropies introduced after recombination). [ 9 ] The overall effect is to erase anisotropies that occur on smaller scales. While anisotropies on small scales are erased, polarization anisotropies are actually introduced because of reionization. [ 10 ] By looking at the CMB anisotropies observed, and comparing with what they would look like had reionization not taken place, the electron column density at the time of reionization can be determined. With this, the age of the universe when reionization occurred can then be calculated.
The Wilkinson Microwave Anisotropy Probe allowed that comparison to be made. The initial observations, released in 2003, suggested that reionization took place from 30 > z > 11. [ 11 ] This redshift range was in clear disagreement with the results from studying quasar spectra. However, the three-year WMAP data returned a different result, with reionization beginning at z = 11 and the universe ionized by z = 7. [ 12 ] This is in much better agreement with the quasar data.
Results in 2018 from Planck mission, yield an instantaneous reionization redshift of z = 7.68 ± 0.79. [ 13 ]
The parameter usually quoted here is τ, the "optical depth to reionization," or alternatively, z_re , the redshift of reionization, assuming it was an instantaneous event. While this is unlikely to be physical, since reionization was very likely not instantaneous, z_re provides an estimate of the mean redshift of reionization.
Lyman alpha light from galaxies offers a complementary tool set to study reionization. The Lyman alpha line is the n=2 to n=1 transition of neutral hydrogen and can be produced copiously by galaxies with young stars. [ 14 ] Moreover, Lyman alpha photons interact strongly with neutral hydrogen in intergalactic gas through resonant scattering, wherein neutral atoms in the ground (n=1) state absorb Lyman alpha photons and almost immediately re-emit them in a random direction. This obscures Lyman alpha emission from galaxies that are embedded in neutral gas. [ 15 ] Thus, experiments to find galaxies by their Lyman alpha light can indicate the ionization state of the surrounding gas. An average density of galaxies with detectable Lyman alpha emission means the surrounding gas must be ionized, while an absence of detectable Lyman alpha sources may indicate neutral regions. A closely related class of experiments measures the Lyman alpha line strength in samples of galaxies identified by other methods (primarily Lyman break galaxy searches). [ 16 ] [ 17 ] [ 18 ]
The earliest application of this method was in 2004, when the tension between late neutral gas indicated by quasar spectra and early reionization suggested by CMB results was strong. The detection of Lyman alpha galaxies at redshift z=6.5 demonstrated that the intergalactic gas was already predominantly ionized [ 19 ] at an earlier time than the quasar spectra suggested. Subsequent applications of the method suggested some residual neutral gas as recently as z=6.5, [ 20 ] [ 21 ] [ 22 ] but still indicate that a majority of intergalactic gas was ionized prior to z=7. [ 23 ]
Lyman alpha emission can be used in other ways to probe reionization further. Theory suggests that reionization was patchy, meaning that the clustering of Lyman alpha selected samples should be strongly enhanced during the middle phases of reionization. [ 24 ] Moreover, specific ionized regions can be pinpointed by identifying groups of Lyman alpha emitters. [ 25 ] [ 26 ]
Even with the quasar data roughly in agreement with the CMB anisotropy data, there are still a number of questions, especially concerning the energy sources of reionization and the effects on, and role of, structure formation during reionization. The 21-cm line in hydrogen is potentially a means of studying this period, as well as the "dark ages" that preceded reionization. The 21-cm line occurs in neutral hydrogen, due to differences in energy between the spin triplet and spin singlet states of the electron and proton. This transition is forbidden , meaning it occurs extremely rarely. The transition is also highly temperature dependent, meaning that as objects form in the "dark ages" and emit Lyman-alpha photons that are absorbed and re-emitted by surrounding neutral hydrogen, it will produce a 21-cm line signal in that hydrogen through Wouthuysen-Field coupling . [ 27 ] [ 28 ] By studying 21-cm line emission, it will be possible to learn more about the early structures that formed. Observations from the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) points to a signal from this era, although follow-up observations will be needed to confirm it. [ 29 ] Several other projects hope to make headway in this area in the near future, such as the Precision Array for Probing the Epoch of Reionization (PAPER), Low Frequency Array (LOFAR), Murchison Widefield Array (MWA), Giant Metrewave Radio Telescope (GMRT), Mapper of the IGM Spin Temperature (MIST), the Dark Ages Radio Explorer (DARE) mission, and the Large-Aperture Experiment to Detect the Dark Ages (LEDA).
While observations have come in which narrow the window during which the epoch of reionization could have taken place, it is still uncertain which objects provided the photons that reionized the IGM. To ionize neutral hydrogen, an energy larger than 13.6 eV is required, which corresponds to photons with a wavelength of 91.2 nm or shorter. This is in the ultraviolet part of the electromagnetic spectrum , which means that the primary candidates are all sources which produce a significant amount of energy in the ultraviolet and above. How numerous the sources are must also be considered, as well as their longevity, since protons and electrons will recombine if energy is not continuously provided to keep them apart. Altogether, the critical parameter for any source considered can be summarized as its "emission rate of hydrogen-ionizing photons per unit cosmological volume." [ 31 ] With these constraints, it is expected that quasars and first-generation stars and galaxies were the main sources of energy. [ 32 ]
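The quoted threshold can be checked with a short calculation converting the hydrogen ionization energy into the corresponding photon wavelength (the Lyman limit).

```python
# Quick check of the ionization threshold quoted above: a 13.6 eV photon
# corresponds to a wavelength of about 91.2 nm (the Lyman limit).
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

E_ion = 13.6 * eV
wavelength_nm = h * c / E_ion * 1e9
print(f"Maximum ionizing wavelength: {wavelength_nm:.1f} nm")   # ~91.2 nm
```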
Dwarf galaxies are currently considered to be the primary source of ionizing photons during the epoch of reionization. [ 33 ] [ 34 ] For most scenarios, this would require the log-slope of the UV galaxy luminosity function , often denoted α, to be steeper than it is today, approaching α = -2. [ 33 ] With the advent of the James Webb Space Telescope (JWST), constraints on the UV luminosity function at the Epoch of Reionization have become commonplace, [ 35 ] [ 36 ] allowing for better constraints on the faint, low-mass population of galaxies.
In 2014, two separate studies identified two Green Pea galaxies (GPs) to be likely Lyman Continuum (LyC)-emitting candidates. [ 37 ] [ 38 ] Compact dwarf star-forming galaxies like the GPs are considered excellent low-redshift analogs of high-redshift Lyman-alpha and LyC emitters (LAEs and LCEs, respectively). [ 39 ] At that time, only two other LCEs were known: Haro 11 and Tololo-1247-232 . [ 37 ] [ 38 ] [ 40 ] Finding local LyC emitters has thus become crucial to the theories about the early universe and the epoch of reionization. [ 37 ] [ 38 ]
Motivated by these findings, a series of surveys has subsequently been conducted using the Hubble Space Telescope 's Cosmic Origins Spectrograph ( HST /COS) to measure the LyC directly. [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] These efforts culminated in the Low-redshift Lyman Continuum Survey, [ 47 ] a large HST /COS program which nearly tripled the number of direct measurements of the LyC from dwarf galaxies. To date, at least 50 LCEs have been confirmed using HST /COS, [ 47 ] with LyC escape fractions anywhere from ≈ 0 to 88%. The results from the Low-redshift Lyman Continuum Survey have provided the empirical foundation necessary to identify and understand LCEs at the Epoch of Reionization. [ 48 ] [ 49 ] [ 50 ] With new observations from JWST , populations of LCEs are now being studied at cosmological redshifts greater than 6, allowing for the first time a detailed and direct assessment of the origins of cosmic Reionization. [ 51 ] Combining these large samples of galaxies with new constraints on the UV luminosity function indicates that dwarf galaxies overwhelmingly contribute to Reionization. [ 52 ]
Quasars , a class of active galactic nuclei (AGN), were considered a good candidate source because they are highly efficient at converting mass to energy , and emit a great deal of light above the threshold for ionizing hydrogen. It is unknown, however, how many quasars existed prior to reionization. Only the brightest of quasars present during reionization can be detected, which means there is no direct information about dimmer quasars that existed. However, by looking at the more easily observed quasars in the nearby universe, and assuming that the luminosity function (number of quasars as a function of luminosity ) during reionization will be approximately the same as it is today, it is possible to make estimates of the quasar populations at earlier times. Such studies have found that quasars do not exist in high enough numbers to reionize the IGM alone, [ 31 ] [ 53 ] saying that "only if the ionizing background is dominated by low-luminosity AGNs can the quasar luminosity function provide enough ionizing photons." [ 54 ]
Population III stars were the earliest stars, which had no elements more massive than hydrogen or helium . During Big Bang nucleosynthesis , the only elements that formed aside from hydrogen and helium were trace amounts of lithium . Yet quasar spectra have revealed the presence of heavy elements in the intergalactic medium at an early era. Supernova explosions produce such heavy elements, so hot, large, Population III stars which will form supernovae are a possible mechanism for reionization. While they have not been directly observed, they are consistent according to models using numerical simulation [ 55 ] and current observations. [ 56 ] A gravitationally lensed galaxy also provides indirect evidence of Population III stars. [ 57 ] Even without direct observations of Population III stars, they are a compelling source. They are more efficient and effective ionizers than Population II stars, as they emit more ionizing photons, [ 58 ] and are capable of reionizing hydrogen on their own in some reionization models with reasonable initial mass functions . [ 59 ] As a consequence, Population III stars are currently considered the most likely energy source to initiate the reionization of the universe, [ 60 ] though other sources are likely to have taken over and driven reionization to completion.
In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at z = 6.60 . Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it. [ 61 ] [ 62 ] | https://en.wikipedia.org/wiki/Reionization |
The Reissert indole synthesis is a series of chemical reactions designed to synthesize indole or substituted-indoles ( 4 and 5 ) from ortho-nitrotoluene 1 and diethyl oxalate 2 . [ 1 ] [ 2 ]
Potassium ethoxide has been shown to give better results than sodium ethoxide . [ 3 ]
The first step of the synthesis is the condensation of o-nitrotoluene 1 with a diethyl oxalate 2 to give ethyl o-nitrophenylpyruvate 3 . The reductive cyclization of 3 with zinc in acetic acid gives indole-2-carboxylic acid 4 . If desired, 4 can be decarboxylated with heat to give indole 5 .
In an intramolecular version of the Reissert reaction, a furan ring-opening provides the carbonyl necessary for cyclization to form an indole. A ketone side chain is present in the final product, allowing further modifications. [ 4 ] | https://en.wikipedia.org/wiki/Reissert_indole_synthesis |
The Reissert reaction is a series of chemical reactions that transforms quinoline to quinaldic acid . [1] [2] Quinolines will react with acid chlorides and potassium cyanide to give 1-acyl-2-cyano-1,2-dihydroquinolines, also known as Reissert compounds. Hydrolysis gives the desired quinaldic acid.
The Reissert reaction is also successful with isoquinolines [3] [4] and most pyridines .
Several reviews have been published. [5] [6] | https://en.wikipedia.org/wiki/Reissert_reaction |
In physics and astronomy , the Reissner–Nordström metric is a static solution to the Einstein–Maxwell field equations , which corresponds to the gravitational field of a charged, non-rotating, spherically symmetric body of mass M . The analogous solution for a charged, rotating body is given by the Kerr–Newman metric .
The metric was discovered between 1916 and 1921 by Hans Reissner , [ 1 ] Hermann Weyl , [ 2 ] Gunnar Nordström [ 3 ] and George Barker Jeffery [ 4 ] independently. [ 5 ]
In spherical coordinates $(t, r, \theta, \varphi)$, the Reissner–Nordström metric (i.e. the line element ) is
where
The total mass of the central body and its irreducible mass are related by [ 6 ] [ 7 ]
The difference between $M$ and $M_{\rm irr}$ is due to the equivalence of mass and energy , which makes the electric field energy also contribute to the total mass.
In the limit that the charge $Q$ (or equivalently, the length scale $r_Q$) goes to zero, one recovers the Schwarzschild metric . The classical Newtonian theory of gravity may then be recovered in the limit as the ratio $r_{\rm s}/r$ goes to zero. In the limit that both $r_Q/r$ and $r_{\rm s}/r$ go to zero, the metric becomes the Minkowski metric for special relativity .
In practice, the ratio $r_{\rm s}/r$ is often extremely small. For example, the Schwarzschild radius of the Earth is roughly 9 mm (3/8 inch ), whereas a satellite in a geosynchronous orbit has an orbital radius $r$ that is roughly four billion times larger, at 42 164 km ( 26 200 miles ). Even at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The ratio only becomes large close to black holes and other ultra-dense objects such as neutron stars .
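The figures quoted above can be reproduced with a short order-of-magnitude calculation of the Earth's Schwarzschild radius and the ratio r_s / r at geosynchronous orbit.

```python
# Order-of-magnitude check of the figures quoted above.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # mass of the Earth, kg

r_s = 2 * G * M_earth / c**2    # Schwarzschild radius of the Earth
r_geo = 42_164e3                # geosynchronous orbital radius, m

print(f"Schwarzschild radius of Earth: {r_s * 1000:.1f} mm")   # ~8.9 mm
print(f"r_s / r at geosynchronous orbit: {r_s / r_geo:.1e}")   # ~2e-10
```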
Although charged black holes with $r_Q \ll r_{\rm s}$ are similar to the Schwarzschild black hole , they have two horizons: the event horizon and an internal Cauchy horizon . [ 8 ] As with the Schwarzschild metric, the event horizons for the spacetime are located where the metric component $g_{rr}$ diverges; that is, where $$1 - \frac{r_{\rm s}}{r} + \frac{r_{\rm Q}^2}{r^2} = -\frac{1}{g_{rr}} = 0.$$
This equation has two solutions: $$r_{\pm} = \frac{1}{2}\left(r_{\rm s} \pm \sqrt{r_{\rm s}^2 - 4 r_{\rm Q}^2}\right).$$
These concentric event horizons become degenerate for $2 r_Q = r_{\rm s}$, which corresponds to an extremal black hole . Black holes with $2 r_Q > r_{\rm s}$ cannot exist in nature because if the charge is greater than the mass there can be no physical event horizon (the term under the square root becomes negative). [ 9 ] Objects with a charge greater than their mass can exist in nature, but they can not collapse down to a black hole, and if they could, they would display a naked singularity . [ 10 ] Theories with supersymmetry usually guarantee that such "superextremal" black holes cannot exist.
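A minimal sketch of these horizon radii, written in geometrized units (G = c = 1) where r_s = 2M and r_Q = Q, is given below; the values of M and Q are arbitrary examples, and it also shows the extremal and super-extremal cases just discussed.

```python
# Horizon radii of a Reissner-Nordstrom black hole in geometrized units
# (G = c = 1), where r_s = 2M and r_Q = Q. M and Q are arbitrary examples.
import math

def horizons(M, Q):
    """Return (r_plus, r_minus), or None if the hole would be super-extremal."""
    disc = M**2 - Q**2
    if disc < 0:
        return None                 # no horizons: would be a naked singularity
    root = math.sqrt(disc)
    return M + root, M - root

print(horizons(1.0, 0.0))   # Schwarzschild limit: (2.0, 0.0)
print(horizons(1.0, 0.8))   # two distinct horizons: (1.6, 0.4)
print(horizons(1.0, 1.0))   # extremal case: both horizons coincide at r = M
print(horizons(1.0, 1.2))   # super-extremal: None
```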
The electromagnetic potential is $$A_{\alpha} = (Q/r,\, 0,\, 0,\, 0).$$
If magnetic monopoles are included in the theory, then a generalization to include magnetic charge $P$ is obtained by replacing $Q^2$ by $Q^2 + P^2$ in the metric and including the term $P\cos\theta\, d\varphi$ in the electromagnetic potential. [ clarification needed ]
The gravitational time dilation in the vicinity of the central body is given by $$\gamma = \sqrt{|g^{tt}|} = \sqrt{\frac{r^2}{Q^2 + (r - 2M)\,r}},$$ which relates to the local radial escape velocity of a neutral particle $$v_{\rm esc} = \frac{\sqrt{\gamma^2 - 1}}{\gamma}.$$
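The following sketch simply evaluates the time-dilation factor and local escape velocity quoted above for a few radii, again in geometrized units and with arbitrary example values of M and Q.

```python
# Evaluate the time-dilation factor and local escape velocity given above,
# in geometrized units (G = c = 1). M, Q, and r are arbitrary examples.
import math

def gamma(r, M, Q):
    return math.sqrt(r**2 / (Q**2 + (r - 2*M) * r))

def v_escape(r, M, Q):
    g = gamma(r, M, Q)
    return math.sqrt(g**2 - 1) / g       # as a fraction of the speed of light

M, Q = 1.0, 0.5
for r in (10.0, 4.0, 2.5):               # all radii outside the outer horizon
    print(f"r = {r}: gamma = {gamma(r, M, Q):.3f}, v_esc = {v_escape(r, M, Q):.3f} c")
```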
The Christoffel symbols $$\Gamma^{i}{}_{jk} = \sum_{s=0}^{3} \frac{g^{is}}{2}\left(\frac{\partial g_{js}}{\partial x^{k}} + \frac{\partial g_{sk}}{\partial x^{j}} - \frac{\partial g_{jk}}{\partial x^{s}}\right)$$ with the indices $\{0, 1, 2, 3\} \to \{t, r, \theta, \varphi\}$ give the nonvanishing expressions $$\begin{aligned}\Gamma^{t}{}_{tr} &= \frac{Mr - Q^2}{r(Q^2 + r^2 - 2Mr)}\\ \Gamma^{r}{}_{tt} &= \frac{(Mr - Q^2)\left(r^2 - 2Mr + Q^2\right)}{r^5}\\ \Gamma^{r}{}_{rr} &= \frac{Q^2 - Mr}{r(Q^2 - 2Mr + r^2)}\\ \Gamma^{r}{}_{\theta\theta} &= -\frac{r^2 - 2Mr + Q^2}{r}\\ \Gamma^{r}{}_{\varphi\varphi} &= -\frac{\sin^2\theta\left(r^2 - 2Mr + Q^2\right)}{r}\\ \Gamma^{\theta}{}_{\theta r} &= \frac{1}{r}\\ \Gamma^{\theta}{}_{\varphi\varphi} &= -\sin\theta\cos\theta\\ \Gamma^{\varphi}{}_{\varphi r} &= \frac{1}{r}\\ \Gamma^{\varphi}{}_{\varphi\theta} &= \cot\theta\end{aligned}$$
Given the Christoffel symbols, one can compute the geodesics of a test-particle. [ 11 ] [ 12 ]
Instead of working in the holonomic basis, one can perform efficient calculations with a tetrad . [ 13 ] Let ${\bf e}_I = e_{\mu I}$ be a set of one-forms with internal Minkowski index $I \in \{0, 1, 2, 3\}$, such that $\eta^{IJ} e_{\mu I} e_{\nu J} = g_{\mu\nu}$. The Reissner metric can be described by the tetrad
where $G(r) = 1 - r_{\rm s} r^{-1} + r_Q^2 r^{-2}$. The parallel transport of the tetrad is captured by the connection one-forms ${\boldsymbol\omega}_{IJ} = -{\boldsymbol\omega}_{JI} = \omega_{\mu IJ} = e_I^{\nu} \nabla_\mu e_{J\nu}$. These have only 24 independent components compared to the 40 components of $\Gamma^{\lambda}{}_{\mu\nu}$. The connections can be solved for by inspection from Cartan's equation $d{\bf e}_I = {\bf e}^J \wedge {\boldsymbol\omega}_{IJ}$, where the left hand side is the exterior derivative of the tetrad, and the right hand side is a wedge product .
The Riemann tensor ${\bf R}_{IJ} = R_{\mu\nu IJ}$ can be constructed as a collection of two-forms by the second Cartan equation $${\bf R}_{IJ} = d{\boldsymbol\omega}_{IJ} + {\boldsymbol\omega}_{IK} \wedge {\boldsymbol\omega}^{K}{}_{J},$$ which again makes use of the exterior derivative and wedge product. This approach is significantly faster than the traditional computation with $\Gamma^{\lambda}{}_{\mu\nu}$; note that there are only four nonzero ${\boldsymbol\omega}_{IJ}$ compared with nine nonzero components of $\Gamma^{\lambda}{}_{\mu\nu}$.
Because of the spherical symmetry of the metric, the coordinate system can always be aligned in a way that the motion of a test-particle is confined to a plane, so for brevity and without restriction of generality we use θ instead of φ. In dimensionless natural units of $G = M = c = K = 1$ the motion of an electrically charged particle with the charge $q$ is given by [ 14 ] $$\ddot{x}^i = -\sum_{j=0}^{3}\sum_{k=0}^{3} \Gamma^{i}_{jk}\,\dot{x}^j\,\dot{x}^k + q\, F^{ik}\,\dot{x}_k$$ which yields $$\ddot{t} = \frac{2(Q^2 - Mr)}{r(r^2 - 2Mr + Q^2)}\,\dot{r}\,\dot{t} + \frac{qQ}{r^2 - 2Mr + Q^2}\,\dot{r}$$ $$\ddot{r} = \frac{(r^2 - 2Mr + Q^2)(Q^2 - Mr)\,\dot{t}^2}{r^5} + \frac{(Mr - Q^2)\,\dot{r}^2}{r(r^2 - 2Mr + Q^2)} + \frac{(r^2 - 2Mr + Q^2)\,\dot{\theta}^2}{r} + \frac{qQ(r^2 - 2Mr + Q^2)}{r^4}\,\dot{t}$$ $$\ddot{\theta} = -\frac{2\,\dot{\theta}\,\dot{r}}{r}.$$
All total derivatives are with respect to proper time, $\dot{a} = \frac{da}{d\tau}$.
Constants of the motion are provided by solutions $S(t, \dot{t}, r, \dot{r}, \theta, \dot{\theta}, \varphi, \dot{\varphi})$ to the partial differential equation [ 15 ] $$0 = \dot{t}\frac{\partial S}{\partial t} + \dot{r}\frac{\partial S}{\partial r} + \dot{\theta}\frac{\partial S}{\partial \theta} + \ddot{t}\frac{\partial S}{\partial \dot{t}} + \ddot{r}\frac{\partial S}{\partial \dot{r}} + \ddot{\theta}\frac{\partial S}{\partial \dot{\theta}}$$ after substitution of the second derivatives given above. The metric itself is a solution when written as a differential equation $$S_1 = 1 = \left(1 - \frac{r_{\rm s}}{r} + \frac{r_Q^2}{r^2}\right) c^2\,\dot{t}^2 - \left(1 - \frac{r_{\rm s}}{r} + \frac{r_Q^2}{r^2}\right)^{-1}\dot{r}^2 - r^2\,\dot{\theta}^2.$$
The separable equation $$\frac{\partial S}{\partial r} - \frac{2}{r}\,\dot{\theta}\,\frac{\partial S}{\partial \dot{\theta}} = 0$$ immediately yields the constant relativistic specific angular momentum $$S_2 = L = r^2\dot{\theta};$$ a third constant obtained from $$\frac{\partial S}{\partial r} - \frac{2(Mr - Q^2)}{r(r^2 - 2Mr + Q^2)}\,\dot{t}\,\frac{\partial S}{\partial \dot{t}} = 0$$ is the specific energy (energy per unit rest mass) [ 16 ] $$S_3 = E = \frac{\dot{t}(r^2 - 2Mr + Q^2)}{r^2} + \frac{qQ}{r}.$$
Substituting $S_2$ and $S_3$ into $S_1$ yields the radial equation $$c\int d\tau = \int \frac{r^2\,dr}{\sqrt{r^4(E-1) + 2Mr^3 - (Q^2 + L^2)r^2 + 2ML^2 r - Q^2 L^2}}.$$
Multiplying under the integral sign by $S_2$ yields the orbital equation $$c\int L r^2\,d\theta = \int \frac{L\,dr}{\sqrt{r^4(E-1) + 2Mr^3 - (Q^2 + L^2)r^2 + 2ML^2 r - Q^2 L^2}}.$$
The total time dilation between the test-particle and an observer at infinity is $$\gamma = \frac{qQr^3 + Er^4}{r^2(r^2 - 2r + Q^2)}.$$
The first derivatives $\dot{x}^i$ and the contravariant components of the local 3-velocity $v^i$ are related by $$\dot{x}^i = \frac{v^i}{\sqrt{(1 - v^2)\,|g_{ii}|}},$$ which gives the initial conditions $$\dot{r} = \frac{v_{\parallel}\sqrt{r^2 - 2M + Q^2}}{r\sqrt{1 - v^2}}$$ $$\dot{\theta} = \frac{v_{\perp}}{r\sqrt{1 - v^2}}.$$
The specific orbital energy $$E = \frac{\sqrt{Q^2 - 2rM + r^2}}{r\sqrt{1 - v^2}} + \frac{qQ}{r}$$ and the specific relative angular momentum $$L = \frac{v_{\perp}\, r}{\sqrt{1 - v^2}}$$ of the test-particle are conserved quantities of motion. $v_{\parallel}$ and $v_{\perp}$ are the radial and transverse components of the local velocity-vector. The local velocity is therefore $$v = \sqrt{v_{\perp}^2 + v_{\parallel}^2} = \sqrt{\frac{(E^2 - 1)r^2 - Q^2 - r^2 + 2rM}{E^2 r^2}}.$$
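As a sketch, the conserved quantities E and L can be computed from the local velocity components using the expressions above; the parameter values are arbitrary examples in the same natural units.

```python
# Compute the conserved specific energy E and angular momentum L of a charged
# test particle from its local velocity components, using the formulas above
# (natural units G = c = K = 1; M, Q, q below are arbitrary example values).
import math

def conserved_quantities(r, v_par, v_perp, M=1.0, Q=0.4, q=0.0):
    v2 = v_par**2 + v_perp**2
    lorentz = 1.0 / math.sqrt(1.0 - v2)
    E = math.sqrt(Q**2 - 2*r*M + r**2) / r * lorentz + q * Q / r
    L = v_perp * r * lorentz
    return E, L

E, L = conserved_quantities(r=8.0, v_par=0.0, v_perp=0.35)
print(f"E = {E:.4f}, L = {L:.4f}")
```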
The metric can be expressed in Kerr–Schild form like this: $$\begin{aligned} g_{\mu\nu} &= \eta_{\mu\nu} + f k_{\mu} k_{\nu} \\ f &= \frac{G}{r^2}\left[2Mr - Q^2\right] \\ \mathbf{k} &= (k_x, k_y, k_z) = \left(\frac{x}{r}, \frac{y}{r}, \frac{z}{r}\right) \\ k_0 &= 1. \end{aligned}$$
Notice that k is a unit vector . Here M is the constant mass of the object, Q is the constant charge of the object, and η is the Minkowski tensor . | https://en.wikipedia.org/wiki/Reissner–Nordström_metric |
Rejuvenation is a medical discipline focused on the practical reversal of the aging process . [ 1 ]
Rejuvenation is distinct from life extension . Life extension strategies often study the causes of aging and try to oppose those causes to slow aging. Rejuvenation is the reversal of aging and thus requires a different strategy, namely repair of the damage that is associated with aging or replacement of damaged tissue with new tissue. Rejuvenation can be a means of life extension, but most life extension strategies do not involve rejuvenation.
Various myths tell stories about the quest for rejuvenation. It was believed that magic or the intervention of a supernatural power could bring back youth, and many mythical adventurers set out on a journey to achieve this, for themselves, their relatives, or some authority that sent them anonymously.
An ancient Chinese emperor sent out ships of young men and women to find a pearl that would rejuvenate him. This led to a myth among modern Chinese that Japan was founded by these people.
In some religions, people were to be rejuvenated after death prior to placing them in heaven .
The stories continued well into the 16th century. The Spanish explorer Juan Ponce de León led an expedition around the Caribbean islands and into Florida to find the Fountain of Youth . Led by the rumors, the expedition continued the search and many perished. The Fountain was nowhere to be found as locals were unaware of its exact location.
Since the emergence of philosophy , sages and self-proclaimed wizards always made enormous efforts to find the secret of youth, both for themselves and their noble patrons and sponsors . It was widely believed that some potions may restore the youth.
Another commonly cited approach was attempting to transfer the essence of youth from young people to old. Some examples of this approach were sleeping with virgins or children (sometimes literally sleeping, not necessarily having sex), [ 2 ] bathing in or drinking their blood.
The quest for rejuvenation reached its height with alchemy . All around Europe, and also beyond, alchemists were looking for the Philosopher's Stone , the mythical substance that, as it was believed, could not only turn lead into gold, but also prolong life and restore youth. Although the set goal was not achieved, alchemy paved the way to the scientific method and so to the medical advances of today. [ citation needed ]
Serge Abrahamovitch Voronoff was a French surgeon born in Russia who gained fame for his technique of grafting monkey testicle tissue on to the testicles of men while working in France in the 1920s and 1930s. This was one of the first medically accepted rejuvenation therapies (before he was proved to be wrong around 1930–1940). The technique brought him a great deal of money, although he was already independently wealthy. As his work fell out of favor, he went from being a highly respected surgeon to a subject of ridicule. By the early 1930s, over 500 men had been treated in France by his rejuvenation technique, and thousands more around the world, such as in a special clinic set up in Algiers . [ 3 ] Noteworthy people who had the surgery included Harold McCormick , chairman of the board of International Harvester Company , [ 4 ] and the aging premier of Turkey . [ 5 ]
Rejuvenation technology and its effects on individuals and society have long been a subject of science fiction. The Misspent Youth and Commonwealth Saga by Peter F. Hamilton are among the most well known examples of this, dealing with the short- and long-term effects of a near perfect 80-year-old to 20-year-old body change with mind intact. The less perfect rejuvenation featured in the Mars trilogy by Kim Stanley Robinson results in long-term memory loss and sheer boredom that comes with extreme age. The post-mortal characters in the Revelation Space series have long-term or essentially infinite lifespans, and sheer boredom induces them to undertake activities of extreme risk.
Aging is the accumulation of damage to macromolecules , cells , tissues and organs in and on the body which, when it can no longer be tolerated by an organism , ultimately leads to its death . If any of that damage can be repaired, the result is rejuvenation.
Many experiments have been shown to increase the maximum life span of laboratory animals, [ citation needed ] thereby achieving life extension . A few experimental methods, such as replacing hormones to youthful levels, have had considerable success in partially rejuvenating laboratory animals and humans. A 2011 experiment involved breeding genetically manipulated mice that lacked an enzyme called telomerase, causing the mice to age prematurely and suffer ailments. When the mice were given injections to reactivate the enzyme, it repaired the damaged tissues and reversed the signs of aging. [ 6 ]

There are at least eight important hormones that decline with age: 1. human growth hormone (HGH); 2. the sexual hormones: testosterone or oestrogen/progesterone; 3. erythropoietin (EPO); 4. insulin; 5. DHEA ; 6. melatonin; 7. thyroid; 8. pregnenolone. In theory, if all or some of these hormones are replaced, the body will respond to them as it did when it was younger, thus repairing and restoring many body functions.

In line with this, recent experiments show that heterochronic parabiosis , i.e. connecting the circulatory systems of a young and an old animal, leads to the rejuvenation of the old animal, including restoration of proper stem cell function. Similar experiments show that grafting old muscles into young hosts leads to their complete restoration, whereas grafting young muscles into old hosts does not. These experiments suggest that aging is mediated by the systemic environment rather than being an intrinsic cell property. [ citation needed ] Clinical trials based on transfusion of young blood were scheduled to begin in 2014. [ 7 ]

Another intervention that is gaining popularity is epigenetic reprogramming. [ 8 ] Through the use of Yamanaka factors , aged cells can revert to a younger state. It has been demonstrated that reprogramming induces a youthful epigenetic state and can restore vision after injury. [ 9 ] Only through reprogramming were stochastic epigenetic variations, which accumulate with age, successfully reversed, as demonstrated by a stochastic data-based clock. [ 10 ]
Most attempts at genetic repair have traditionally involved the use of a retrovirus to insert a new gene into a random position on a chromosome . But by attaching zinc fingers (which determine where transcription factors bind) to endonucleases (which break DNA strands), homologous recombination can be induced to correct and replace defective (or undesired) DNA sequences. The first applications of this technology are to isolate stem cells from the bone marrow of patients having blood disease mutations , to correct those mutations in laboratory dishes using zinc finger endonucleases and to transplant the stem cells back into the patients. [ 11 ] More recent efforts leverage CRISPR-Cas systems or adeno-associated viruses (AAVs).
Enhanced DNA repair has been proposed as a potential rejuvenation strategy. [ 12 ]
Stem cell regenerative medicine uses three different strategies:
A salamander can not only regenerate a limb , but can regenerate the lens or retina of an eye and can regenerate an intestine . For regeneration the salamander tissues form a blastema by de-differentiation of mesenchymal cells , and the blastema functions as a self-organizing system to regenerate the limb. [ 13 ]
Yet another option involves cosmetic changes to the individual to create the appearance of youth. These are generally superficial and do little to make the person healthier or live longer, but the real improvement in a person's appearance may elevate their mood and have positive side effects normally correlated with happiness . Cosmetic surgery is a large industry offering treatments such as removal of wrinkles ("face lift"), removal of extra fat (liposuction) and reshaping or augmentation of various body parts ( abdomen , breasts , face ).
There are also, as commonly found throughout history, many fake rejuvenation products that have been shown to be ineffective. Chief among these are powders, sprays, gels, and homeopathic substances that claim to contain growth hormones. Authentic growth hormones are only effective when injected, mainly due to the fact that the 191-amino acid protein is too large to be absorbed through the mucous membranes , and would be broken up in the stomach if swallowed.
The Mprize scientific competition is under way to deliver on the mission of extending healthy human life. It directly accelerates the development of revolutionary new life extension therapies by awarding two cash prizes: one to the research team that breaks the world record for the oldest-ever mouse, and one to the team that develops the most successful late-onset rejuvenation intervention. The current Mprize winner for rejuvenation is Steven Spindler. Caloric restriction (CR), the consumption of fewer calories while avoiding malnutrition, has been applied as a robust method of decelerating aging and the development of age-related diseases . [ 14 ]
In 2020, scientists reported the reversion of ageing in human cells through nuclear reprogramming to pluripotency . Such process included resetting of epigenetic clock , reduction of the inflammatory profile in chondrocytes and restoration of youthful regenerative response to aged, human muscle stem cells , without abolishing cellular identity. [ 15 ]
The biomedical gerontologist Aubrey de Grey has initiated a project, strategies for engineered negligible senescence (SENS), to study how to reverse the damage caused by aging. He has proposed seven strategies for what he calls the seven deadly sins of aging: [ 16 ]
In 2009, Aubrey de Grey co-founded the SENS Foundation to expedite progress in the above-listed areas. | https://en.wikipedia.org/wiki/Rejuvenation |
This article relates the Schrödinger equation with the path integral formulation of quantum mechanics using a simple nonrelativistic one-dimensional single-particle Hamiltonian composed of kinetic and potential energy.
Schrödinger's equation, in bra–ket notation , is
$$ i\hbar \frac{d}{dt}|\psi\rangle = \hat{H}|\psi\rangle $$
where $\hat{H}$ is the Hamiltonian operator .
The Hamiltonian operator can be written
$$ \hat{H} = \frac{\hat{p}^{2}}{2m} + V(\hat{q}) $$
where $V(\hat{q})$ is the potential energy , m is the mass and we have assumed for simplicity that there is only one spatial dimension q .
The formal solution of the equation is
$$ |\psi(t)\rangle = \exp\left(-\frac{i}{\hbar}\hat{H}t\right)|q_{0}\rangle \equiv \exp\left(-\frac{i}{\hbar}\hat{H}t\right)|0\rangle $$
where we have assumed the initial state is a free-particle spatial state $|q_{0}\rangle$. [ clarification needed ]
The transition probability amplitude for a transition from an initial state $|0\rangle$ to a final free-particle spatial state $|F\rangle$ at time T is
$$ \langle F|\psi(T)\rangle = \left\langle F\left|\exp\left(-\frac{i}{\hbar}\hat{H}T\right)\right|0\right\rangle. $$
The path integral formulation states that the transition amplitude is simply the integral of the quantity $\exp\left(\frac{i}{\hbar}S\right)$ over all possible paths from the initial state to the final state. Here S is the classical action .
The reformulation of this transition amplitude, originally due to Dirac [ 1 ] and conceptualized by Feynman, [ 2 ] forms the basis of the path integral formulation. [ 3 ]
The following derivation [ 4 ] makes use of the Trotter product formula , which states that for self-adjoint operators A and B (satisfying certain technical conditions), we have
$$ e^{i(A+B)}\psi = \lim_{N\to\infty}\left(e^{iA/N}e^{iB/N}\right)^{N}\psi, $$
even if A and B do not commute.
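Because the Trotter formula is the analytic workhorse of this derivation, a quick numerical sanity check can be helpful. The sketch below is plain Python with NumPy, using two random Hermitian matrices as stand-ins for the kinetic and potential terms; the names are illustrative and nothing here is part of the derivation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

def expi(h):
    """exp(iH) for a Hermitian matrix H, via its eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * w)) @ v.conj().T

A, B = random_hermitian(6), random_hermitian(6)   # stand-ins for p^2/2m and V
exact = expi(A + B)
for n in (1, 10, 100, 1000):
    trotter = np.linalg.matrix_power(expi(A / n) @ expi(B / n), n)
    print(n, np.linalg.norm(trotter - exact))     # error shrinks roughly like 1/n
```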
We can divide the time interval [0, T ] into N segments of length $\delta t = T/N$.
The transition amplitude can then be written
$$ \left\langle F\left|\exp\left(-\frac{i}{\hbar}\hat{H}T\right)\right|0\right\rangle = \left\langle F\left|\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\cdots\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\right|0\right\rangle. $$
Although the kinetic energy and potential energy operators do not commute, the Trotter product formula, cited above, says that over each small time-interval, we can ignore this noncommutativity and write
$$ \exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right) \approx \exp\left(-\frac{i}{\hbar}\frac{\hat{p}^{2}}{2m}\delta t\right)\exp\left(-\frac{i}{\hbar}V(q_{j})\,\delta t\right). $$
The equality of the above can be verified to hold up to first order in δt by expanding the exponential as power series.
For notational simplicity, we delay making this substitution for the moment.
We can insert the identity matrix
$$ I = \int dq\,|q\rangle\langle q| $$
N − 1 times between the exponentials to yield
$$ \left\langle F\left|\exp\left(-\frac{i}{\hbar}\hat{H}T\right)\right|0\right\rangle = \left(\prod_{j=1}^{N-1}\int dq_{j}\right)\left\langle F\left|\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\right|q_{N-1}\right\rangle\left\langle q_{N-1}\left|\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\right|q_{N-2}\right\rangle\cdots\left\langle q_{1}\left|\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\right|0\right\rangle. $$
We now implement the substitution associated to the Trotter product formula, so that we have, effectively
$$ \left\langle q_{j+1}\left|\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\right|q_{j}\right\rangle = \left\langle q_{j+1}\left|\exp\left(-\frac{i}{\hbar}\frac{\hat{p}^{2}}{2m}\delta t\right)\exp\left(-\frac{i}{\hbar}V(q_{j})\,\delta t\right)\right|q_{j}\right\rangle. $$
We can insert the identity
$$ I = \int \frac{dp}{2\pi}\,|p\rangle\langle p| $$
into the amplitude to yield
$$ \begin{aligned} \left\langle q_{j+1}\left|\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\right|q_{j}\right\rangle &= \exp\left(-\frac{i}{\hbar}V(q_{j})\,\delta t\right)\int \frac{dp}{2\pi}\,\left\langle q_{j+1}\left|\exp\left(-\frac{i}{\hbar}\frac{p^{2}}{2m}\delta t\right)\right|p\right\rangle \langle p|q_{j}\rangle \\ &= \exp\left(-\frac{i}{\hbar}V(q_{j})\,\delta t\right)\int \frac{dp}{2\pi}\,\exp\left(-\frac{i}{\hbar}\frac{p^{2}}{2m}\delta t\right)\langle q_{j+1}|p\rangle\langle p|q_{j}\rangle \\ &= \exp\left(-\frac{i}{\hbar}V(q_{j})\,\delta t\right)\int \frac{dp}{2\pi\hbar}\,\exp\left(-\frac{i}{\hbar}\frac{p^{2}}{2m}\delta t-\frac{i}{\hbar}p\left(q_{j+1}-q_{j}\right)\right) \end{aligned} $$
where we have used the fact that the free particle wave function is $\langle p|q_{j}\rangle = \frac{1}{\sqrt{\hbar}}\exp\left(\frac{i}{\hbar}p q_{j}\right)$.
The integral over p can be performed (see Common integrals in quantum field theory ) to obtain
$$ \left\langle q_{j+1}\left|\exp\left(-\frac{i}{\hbar}\hat{H}\,\delta t\right)\right|q_{j}\right\rangle = \sqrt{\frac{-im}{2\pi\,\delta t\,\hbar}}\,\exp\left[\frac{i}{\hbar}\delta t\left(\frac{1}{2}m\left(\frac{q_{j+1}-q_{j}}{\delta t}\right)^{2}-V(q_{j})\right)\right] $$
The transition amplitude for the entire time period is
$$ \left\langle F\left|\exp\left(-\frac{i}{\hbar}\hat{H}T\right)\right|0\right\rangle = \left(\frac{-im}{2\pi\,\delta t\,\hbar}\right)^{N/2}\left(\prod_{j=1}^{N-1}\int dq_{j}\right)\exp\left[\frac{i}{\hbar}\sum_{j=0}^{N-1}\delta t\left(\frac{1}{2}m\left(\frac{q_{j+1}-q_{j}}{\delta t}\right)^{2}-V(q_{j})\right)\right]. $$
If we take the limit of large N the transition amplitude reduces to
$$ \left\langle F\left|\exp\left(-\frac{i}{\hbar}\hat{H}T\right)\right|0\right\rangle = \int Dq(t)\,\exp\left(\frac{i}{\hbar}S\right) $$
where S is the classical action given by
$$ S = \int_{0}^{T} dt\, L\left(q(t),\dot{q}(t)\right) $$
and L is the classical Lagrangian given by
$$ L\left(q,\dot{q}\right) = \frac{1}{2}m\dot{q}^{2} - V(q) $$
Any possible path of the particle, going from the initial state to the final state, is approximated as a broken line and included in the measure of the integral
$$ \int Dq(t) = \lim_{N\to\infty}\left(\frac{-im}{2\pi\,\delta t\,\hbar}\right)^{N/2}\left(\prod_{j=1}^{N-1}\int dq_{j}\right) $$
This expression actually defines the manner in which the path integrals are to be taken. The coefficient in front is needed to ensure that the expression has the correct dimensions, but it has no actual relevance in any physical application.
This recovers the path integral formulation from Schrödinger's equation.
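A minimal numerical illustration of this time-slicing construction is sketched below. To keep the integrals real and well behaved it assumes a free particle with m = ħ = 1 and works in imaginary time (a Wick rotation, so the oscillatory kernel becomes a real Gaussian); these are simplifications of the sketch, not part of the derivation above. Composing many short-time kernels on a grid reproduces the kernel for the full interval:

```python
import numpy as np

def slice_kernel(x, dtau):
    """Single-slice Euclidean free-particle kernel K(x', x; dtau) with m = hbar = 1."""
    dx = x[:, None] - x[None, :]
    return np.sqrt(1.0 / (2.0 * np.pi * dtau)) * np.exp(-dx**2 / (2.0 * dtau))

def compose(x, tau, n_slices):
    """Compose n_slices short-time kernels, integrating over the intermediate q_j."""
    dtau = tau / n_slices
    h = x[1] - x[0]                 # grid spacing used for the q_j integrals
    K = slice_kernel(x, dtau)
    total = K
    for _ in range(n_slices - 1):
        total = total @ K * h       # each matrix product is one integral over q_j
    return total

x = np.linspace(-10.0, 10.0, 801)
tau = 1.0
K_sliced = compose(x, tau, n_slices=20)

dx = x[:, None] - x[None, :]
K_exact = np.sqrt(1.0 / (2.0 * np.pi * tau)) * np.exp(-dx**2 / (2.0 * tau))

mid = len(x) // 2                   # compare propagation starting from x = 0
print(np.max(np.abs(K_sliced[:, mid] - K_exact[:, mid])))   # prints a small number
```

The real-time integral above is the same construction with the complex kernel and the limit N → ∞ taken formally.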
The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path-integral over infinitesimally separated times.
Since the time separation is infinitesimal and the cancelling oscillations become severe for large values of ẋ , the path integral has most weight for y close to x . In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. (This separation of the kinetic and potential energy terms in the exponent is essentially the Trotter product formula .) The exponential of the action is
$$ e^{-i\varepsilon V(x)}\, e^{i\frac{\dot{x}^{2}}{2}\varepsilon} $$
The first term rotates the phase of ψ ( x ) locally by an amount proportional to the potential energy. The second term is the free particle propagator, corresponding to i times a diffusion process. To lowest order in ε they are additive; in any case one has with (1) :
$$ \psi(y;t+\varepsilon) \approx \int \psi(x;t)\, e^{-i\varepsilon V(x)}\, e^{\frac{i(x-y)^{2}}{2\varepsilon}}\, dx\,. $$
As mentioned, the spread in ψ is diffusive from the free particle propagation, with an extra infinitesimal rotation in phase which slowly varies from point to point from the potential:
$$ \frac{\partial\psi}{\partial t} = i\left(\tfrac{1}{2}\nabla^{2} - V(x)\right)\psi $$
and this is the Schrödinger equation. Note that the normalization of the path integral needs to be fixed in exactly the same way as in the free particle case. An arbitrary continuous potential does not affect the normalization, although singular potentials require careful treatment. | https://en.wikipedia.org/wiki/Relation_between_Schrödinger's_equation_and_the_path_integral_formulation_of_quantum_mechanics |
In computer science , a relational operator is a programming language construct or operator that tests or defines some kind of relation between two entities . These include numerical equality ( e.g. , 5 = 5 ) and inequalities ( e.g. , 4 ≥ 3 ).
In programming languages that include a distinct boolean data type in their type system , like Pascal , Ada , Python or Java , these operators usually evaluate to true or false, depending on whether the relationship between the two operands holds or not.
In languages such as C , relational operators return the integers 0 or 1, where 0 stands for false and any non-zero value stands for true.
An expression created using a relational operator forms what is termed a relational expression or a condition . Relational operators can be seen as special cases of logical predicates .
Equality is used in many programming language constructs and data types. It is used to test if an element already exists in a set , or to access a value through a key. It is used in switch statements to dispatch the control flow to the correct branch, and during the unification process in logic programming.
There can be multiple valid definitions of equality, and any particular language might adopt one or more of them, depending on various design aspects. One possible meaning of equality is that "if a equals b , then either a or b can be used interchangeably in any context without noticing any difference". But this statement does not necessarily hold, particularly when taking into account mutability together with content equality.
Sometimes, particularly in object-oriented programming , the comparison raises questions of data types and inheritance , equality , and identity . It is often necessary to distinguish between a test of identity (whether two expressions denote the very same object) and a test of equivalence (whether they denote objects with the same content).
In many modern programming languages, objects and data structures are accessed through references . In such languages, there arises a need to test for two different types of equality: reference equality, which asks whether two references point to the same object, and value (or structural) equality, which asks whether the objects referred to have the same content.
The first type of equality usually implies the second (except for things like not a number ( NaN ) which are unequal to themselves), but the converse is not necessarily true. For example, two string objects may be distinct objects (unequal in the first sense) but contain the same sequence of characters (equal in the second sense). See identity for more of this issue.
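In Python, which the code sketches in this article use, the distinction appears as the difference between the == operator (value equality) and the is operator (reference identity); the variable names below are purely illustrative.

```python
a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)   # True: same contents (value equality)
print(a is b)   # False: two distinct objects (no reference equality)
print(a is c)   # True: both names refer to the same object
```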
Real numbers, including many simple fractions , cannot be represented exactly in floating-point arithmetic , and it may be necessary to test for equality within a given tolerance. Such a tolerance, however, can easily break desired properties such as transitivity, and reflexivity fails as well: the IEEE floating-point standard requires that NaN ≠ NaN holds. In contrast, the (2022) private standard for posit arithmetic (whose proponents intend it to replace IEEE floats) has a similar concept, NaR (Not a Real), for which NaR = NaR holds. [ 1 ]
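Both points, tolerance-based comparison and the NaN rule, can be seen in a few lines of Python (the chosen numbers are only illustrative):

```python
import math

print(0.1 + 0.2 == 0.3)              # False: 0.3 has no exact binary representation
print(math.isclose(0.1 + 0.2, 0.3))  # True: equal within a relative tolerance

nan = float("nan")
print(nan == nan)                    # False: IEEE 754 requires NaN != NaN
```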
Other programming elements such as computable functions, may either have no sense of equality, or an equality that is uncomputable. For these reasons, some languages define an explicit notion of "comparable", in the form of a base class, an interface, a trait or a protocol, which is used either explicitly, by declaration in source code, or implicitly, via the structure of the type involved.
In JavaScript , PHP , VBScript and a few other dynamically typed languages, the standard equality operator follows so-called loose typing , that is it evaluates to true even if two values are not equal and are of incompatible types, but can be coerced to each other by some set of language-specific rules, making the number 4 compare equal to the text string "4", for instance. Although such behaviour is typically meant to make the language easier, it can lead to surprising and difficult to predict consequences that many programmers are unaware of. For example, JavaScript's loose equality rules can cause equality to be intransitive (i.e., a == b and b == c , but a != c ), or make certain values be equal to their own negation. [ 2 ]
A strict equality operator is also often available in those languages, returning true only for values with identical or equivalent types (in PHP, 4 === "4" is false although 4 == "4" is true). [ 3 ] [ 4 ] For languages where the number 0 may be interpreted as false , this operator may simplify things such as checking for zero (as x == 0 would be true for x being either 0 or "0" using the type agnostic equality operator).
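Python, used for the sketches in this article, does not coerce strings to numbers the way loosely typed languages do, but because its booleans are a kind of integer it shows a mild version of the zero-check issue mentioned above:

```python
print(4 == "4")      # False: no implicit string-to-number coercion
print(0 == False)    # True: bool is a subtype of int, so False compares equal to 0
print(0.0 == False)  # True for the same reason
```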
Greater than and less than comparison of non-numeric data is performed according to a sort convention (such as, for text strings, lexicographical order ) which may be built into the programming language and/or configurable by a programmer.
When it is desired to associate a numeric value with the result of a comparison between two data items, say a and b , the usual convention is to assign −1 if a < b, 0 if a = b and 1 if a > b. For example, the C function strcmp performs a three-way comparison and returns −1, 0, or 1 according to this convention, and qsort expects the comparison function to return values according to this convention. In sorting algorithms , the efficiency of comparison code is critical since it is one of the major factors contributing to sorting performance.
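A minimal Python sketch of the −1/0/1 convention follows; cmp here is a hypothetical helper (Python 3 has no built-in of that name), and functools.cmp_to_key adapts such a three-way comparison for sorting:

```python
from functools import cmp_to_key

def cmp(a, b):
    """Three-way comparison returning -1, 0 or 1, as strcmp and qsort expect."""
    return (a > b) - (a < b)

print(cmp(3, 7), cmp(5, 5), cmp(9, 2))                        # -1 0 1
print(sorted(["pear", "apple", "fig"], key=cmp_to_key(cmp)))  # ['apple', 'fig', 'pear']
```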
Comparison of programmer-defined data types (data types for which the programming language has no in-built understanding) may be carried out by custom-written or library functions (such as strcmp mentioned above), or, in some languages, by overloading a comparison operator – that is, assigning a programmer-defined meaning that depends on the data types being compared. Another alternative is using some convention such as member-wise comparison.
Though perhaps unobvious at first, like the boolean logical operators XOR, AND, OR, and NOT, relational operators can be designed to have logical equivalence , such that they can all be defined in terms of one another. The following four conditional statements all have the same logical equivalence E (either all true or all false) for any given x and y values: x < y, y > x, ¬(x ≥ y), and ¬(y ≤ x).
This relies on the domain being well ordered .
The most common numerical relational operators used in programming languages are shown below. Standard SQL uses the same operators as BASIC, and many databases additionally allow != alongside the standard <> . SQL follows strict boolean algebra , i.e. it does not use the short-circuit evaluation that is common to most of the languages below. PHP, for example, has short-circuit evaluation, but otherwise defines these same two inequality operators as aliases, as many SQL databases do.
Other conventions are less common: Common Lisp and Macsyma / Maxima use Basic-like operators for numerical values, except for inequality, which is /= in Common Lisp and # in Macsyma/Maxima. Common Lisp has multiple other sets of equality and relational operators serving different purposes, including eq , eql , equal , equalp , and string= . [ 6 ] Older Lisps used equal , greaterp , and lessp ; and negated them using not for the remaining operators.
Relational operators are also used in technical literature instead of words. Relational operators are usually written in infix notation , if supported by the programming language, which means that they appear between their operands (the two expressions being related). For example, the following expression in Python will print a message if x is less than y :
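A minimal version of such an expression (the variable values are illustrative):

```python
x, y = 3, 5
if x < y:
    print("x is less than y")
```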
Other programming languages, such as Lisp , use prefix notation instead, with the comparison operator written before its operands, as in (< x y) .
In mathematics, it is common practice to chain relational operators, such as in 3 < x < y < 20 (meaning 3 < x and x < y and y < 20). The syntax is clear since these relational operators in mathematics are transitive.
However, many recent programming languages would see an expression like 3 < x < y as consisting of two left- (or right-) associative operators, interpreting it as something like (3 < x) < y . If we say that x=4, we then get (3 < 4) < y , and evaluation will give true < y which generally does not make sense. However, it does compile in C/C++ and some other languages, yielding a surprising result (as true would be represented by the number 1 here).
It is possible to give the expression x < y < z its familiar mathematical meaning, and some programming languages such as Python and Raku do that. Others, such as C# and Java, do not, partly because it would differ from the way most other infix operators work in C-like languages. The D programming language does not do that since it maintains some compatibility with C, and "Allowing C expressions but with subtly different semantics (albeit arguably in the right direction) would add more confusion than convenience". [ 7 ]
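A short Python sketch contrasts the chained reading with the value a purely left-associative reading would produce:

```python
x, y = 4, 10

print(3 < x < y < 20)   # True: read as (3 < x) and (x < y) and (y < 20)
print((3 < x) < y)      # also True, but only because True == 1 and 1 < 10
print((3 < x) < 1)      # False: the boolean result of (3 < x) is compared as a number
```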
Some languages, like Common Lisp , use multiple argument predicates for this. In Lisp (<= 1 x 10) is true when x is between 1 and 10.
Early FORTRAN (1956–57) was constrained by heavily restricted character sets in which = was the only relational operator available. There were no < or > (and certainly no ≤ or ≥ ). This forced the designers to define symbols such as .GT. , .LT. , .GE. , .EQ. etc. and subsequently made it tempting to use the remaining = character for assignment, despite the obvious incoherence with mathematical usage ( X=X+1 should be impossible).
International Algebraic Language (IAL, ALGOL 58 ) and ALGOL (1958 and 1960) thus introduced := for assignment, leaving the standard = available for equality, a convention followed by CPL , ALGOL W , ALGOL 68 , Basic Combined Programming Language ( BCPL ), Simula , SET Language ( SETL ), Pascal , Smalltalk , Modula-2 , Ada , Standard ML , OCaml , Eiffel , Object Pascal ( Delphi ), Oberon , Dylan , VHSIC Hardware Description Language ( VHDL ), and several other languages.
This uniform de facto standard among most programming languages was eventually changed, indirectly, by a minimalist compiled language named B . Its sole intended application was as a vehicle for a first port of (a then very primitive) Unix , but it also evolved into the very influential C language.
B started off as a syntactically changed variant of the systems programming language BCPL , a simplified (and typeless) version of CPL . In what has been described as a "strip-down" process, the and and or operators of BCPL [ 8 ] were replaced with & and | (which would later become && and || , respectively [ 9 ] ). In the same process, the ALGOL-style := of BCPL was replaced by = in B. The reason for all this is unknown. [ 10 ] As variable updates had no special syntax in B (such as let or similar) and were allowed in expressions, this non-standard meaning of the equal sign meant that the traditional semantics of the equal sign now had to be associated with another symbol. Ken Thompson used the ad hoc == combination for this.
As a small type system was later introduced, B then became C. The popularity of this language along with its association with Unix, led to Java, C#, and many other languages following suit, syntactically, despite this needless conflict with the mathematical meaning of the equal sign.
Assignments in C have a value and since any non-zero scalar value is interpreted as true in conditional expressions , [ 11 ] the code if (x = y) is legal, but has a very different meaning from if (x == y) . The former code fragment means "assign y to x , and if the new value of x is not zero, execute the following statement". The latter fragment means " if and only if x is equal to y , execute the following statement". [ 12 ]
Though Java and C# have the same operators as C, this mistake usually causes a compile error in these languages instead, because the if-condition must be of type boolean , and there is no implicit way to convert from other types ( e.g. , numbers) into boolean s. So unless the variable that is assigned to has type boolean (or wrapper type Boolean ), there will be a compile error.
In ALGOL-like languages such as Pascal, Delphi, and Ada (in the sense that they allow nested function definitions ), and in Python , and many functional languages, among others, assignment operators cannot appear in an expression (including if clauses), thus precluding this class of error. Some compilers, such as GNU Compiler Collection (GCC), provide a warning when compiling code containing an assignment operator inside an if statement, though there are some legitimate uses of an assignment inside an if-condition. In such cases, the assignment must be wrapped in an extra pair of parentheses explicitly, to avoid the warning.
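In Python, for instance, a plain assignment inside an if-condition is a syntax error, and assignment-in-expression has to be requested explicitly with the separate walrus operator := (Python 3.8 and later); a small sketch:

```python
x, y = 0, 2

# if x = y:          # SyntaxError: assignment is a statement, not an expression
#     print(x)

if (x := y) != 0:    # explicit opt-in via the walrus operator
    print(x)         # prints 2
```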
Similarly, some languages, such as BASIC use just the = symbol for both assignment and equality, as they are syntactically separate (as with Pascal, Ada, Python, etc., assignment operators cannot appear in expressions).
Some programmers get in the habit of writing comparisons against a constant in the reverse of the usual order:
If = is used accidentally, the resulting code is invalid because 2 is not a variable. The compiler will generate an error message, on which the proper operator can be substituted. This coding style is termed left-hand comparison, or Yoda conditions .
This table lists the different mechanisms to test for these two types of equality in various languages:
Ruby uses a === b to mean "b is a member of the set a", though the details of what it means to be a member vary considerably depending on the data types involved. === is here known as the "case equality" or "case subsumption" operator. | https://en.wikipedia.org/wiki/Relational_operator |
The relational theory of space is a metaphysical theory according to which space is composed of relations between objects, with the implication that it cannot exist in the absence of matter. Its opposite is the container theory. A relativistic physical theory implies a relational metaphysics , but not the other way round: even if space is composed of nothing but relations between observers and events , it would be conceptually possible for all observers to agree on their measurements, whereas relativity implies they will disagree. Newtonian physics can be cast in relational terms , but Newton insisted, for philosophical reasons, on absolute (container) space. The subject was famously debated by Gottfried Wilhelm Leibniz and a supporter of Newton's in the Leibniz–Clarke correspondence .
An absolute approach can also be applied to time , with, for instance, the implication that there might have been vast epochs of time before the first event . [ 1 ] | https://en.wikipedia.org/wiki/Relational_space |
In thermodynamics , the heat capacity at constant volume, $C_{V}$, and the heat capacity at constant pressure, $C_{P}$, are extensive properties that have the dimensions of energy divided by temperature.
The laws of thermodynamics imply the following relations between these two heat capacities (Gaskell 2003:23):
$$ C_{P} - C_{V} = VT\frac{\alpha^{2}}{\beta_{T}}, \qquad \frac{C_{P}}{C_{V}} = \frac{\beta_{T}}{\beta_{S}}. $$
Here $\alpha$ is the thermal expansion coefficient :
$$ \alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P}, $$
$\beta_{T}$ is the isothermal compressibility (the inverse of the bulk modulus ):
$$ \beta_{T} = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T}, $$
and $\beta_{S}$ is the isentropic compressibility:
$$ \beta_{S} = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{S}. $$
A corresponding expression for the difference in specific heat capacities ( intensive properties ) at constant volume and constant pressure is:
$$ c_{P} - c_{V} = \frac{T\alpha^{2}}{\rho\,\beta_{T}}, $$
where ρ is the density of the substance under the applicable conditions.
The corresponding expression for the ratio of specific heat capacities remains the same since the thermodynamic system size-dependent quantities, whether on a per mass or per mole basis, cancel out in the ratio because specific heat capacities are intensive properties. Thus:
The difference relation allows one to obtain the heat capacity of solids at constant volume, which is not readily measured, in terms of quantities that are more easily measured. The ratio relation allows one to express the isentropic compressibility in terms of the heat capacity ratio.
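As a rough illustration of the difference relation in practice, the sketch below plugs approximate room-temperature values for liquid water into $C_{P} - C_{V} = VT\alpha^{2}/\beta_{T}$; the numbers are illustrative order-of-magnitude values, not authoritative data.

```python
R = 8.314          # J/(mol*K)
T = 298.0          # K
V_m = 1.81e-5      # molar volume of liquid water, m^3/mol (approximate)
alpha = 2.6e-4     # thermal expansion coefficient, 1/K (approximate)
beta_T = 4.6e-10   # isothermal compressibility, 1/Pa (approximate)

dC = V_m * T * alpha**2 / beta_T   # C_P - C_V per mole
print(dC)          # on the order of 1 J/(mol*K), far below the ideal-gas value R
```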
If an infinitesimally small amount of heat $\delta Q$ is supplied to a system in a reversible way then, according to the second law of thermodynamics , the entropy change of the system is given by:
$$ dS = \frac{\delta Q}{T}. $$
Since
where C is the heat capacity, it follows that:
The heat capacity depends on how the external variables of the system are changed when the heat is supplied. If the only external variable of the system is the volume, then we can write:
From this follows:
Expressing dS in terms of dT and dP similarly as above leads to the expression:
One can find the above expression for C P − C V {\displaystyle C_{P}-C_{V}} by expressing dV in terms of dP and dT in the above expression for dS.
results in
and it follows:
Therefore,
The partial derivative $\left(\frac{\partial S}{\partial V}\right)_{T}$ can be rewritten in terms of variables that do not involve the entropy using a suitable Maxwell relation . These relations follow from the fundamental thermodynamic relation :
$$ dE = T\,dS - P\,dV. $$
It follows from this that the differential of the Helmholtz free energy $F = E - TS$ is:
$$ dF = -S\,dT - P\,dV. $$
This means that
and
The symmetry of second derivatives of F with respect to T and V then implies
allowing one to write:
The r.h.s. contains a derivative at constant volume, which can be difficult to measure. It can be rewritten as follows. In general,
Since the partial derivative $\left(\frac{\partial P}{\partial T}\right)_{V}$ is just the ratio of dP and dT for dV = 0, one can obtain this by putting dV = 0 in the above equation and solving for this ratio:
which yields the expression:
The expression for the ratio of the heat capacities can be obtained as follows:
The partial derivative in the numerator can be expressed as a ratio of partial derivatives of the pressure w.r.t. temperature and entropy. If in the relation
we put $dP = 0$ and solve for the ratio $\frac{dS}{dT}$ we obtain $\left(\frac{\partial S}{\partial T}\right)_{P}$ . Doing so gives:
One can similarly rewrite the partial derivative $\left(\frac{\partial S}{\partial T}\right)_{V}$ by expressing dV in terms of dS and dT, putting dV equal to zero and solving for the ratio $\frac{dS}{dT}$ . When one substitutes that expression in the heat capacity ratio expressed as the ratio of the partial derivatives of the entropy above, it follows:
Taking together the two derivatives at constant S:
Taking together the two derivatives at constant T:
From this one can write:
This is a derivation to obtain an expression for $C_{P} - C_{V}$ for an ideal gas .
An ideal gas has the equation of state : $PV = nRT$,
where P is the pressure, V the volume, n the number of moles, R the universal gas constant and T the absolute temperature.
The ideal gas equation of state can be arranged to give:
The following partial derivatives are obtained from the above equation of state :
The following simple expressions are obtained for the thermal expansion coefficient $\alpha$ :
$$ \alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P} = \frac{nR}{PV} = \frac{1}{T}, $$
and for the isothermal compressibility $\beta_{T}$ :
$$ \beta_{T} = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T} = \frac{nRT}{P^{2}V} = \frac{1}{P}. $$
One can now calculate $C_{P} - C_{V}$ for ideal gases from the previously obtained general formula:
$$ C_{P} - C_{V} = VT\frac{\alpha^{2}}{\beta_{T}} = VT\frac{(1/T)^{2}}{1/P} = \frac{PV}{T}. $$
Substituting from the ideal gas equation gives finally:
$$ C_{P} - C_{V} = nR, $$
where n = number of moles of gas in the thermodynamic system under consideration and R = universal gas constant . On a per mole basis, the expression for the difference in molar heat capacities becomes simply R for ideal gases as follows:
$$ C_{P,m} - C_{V,m} = \frac{C_{P} - C_{V}}{n} = R. $$
This result would be consistent if the specific difference were derived directly from the general expression for $c_{p} - c_{v}$ . | https://en.wikipedia.org/wiki/Relations_between_heat_capacities |
The relationship between animal ethics and environmental ethics concerns the differing ethical consideration of individual nonhuman animals—particularly those living in spaces outside of direct human control—and conceptual entities such as species, populations and ecosystems . The intersection of these two fields is a prominent component of vegan discourse. [ 1 ]
Generally, animal ethicists place the well-being and interests of sentient individuals at the center of their concern, while environmental ethicists focus on the preservation of biodiversity, populations, ecosystems, species and nature itself. [ 2 ] [ 3 ] Animal ethicists may also give value to these entities, but only so far as they are instrumentally valuable to sentient individuals. [ 4 ]
Environmental ethicists consider it justifiable to remove or kill individual animals belonging to introduced species, which are considered to threaten the preservation of ecological entities, such as endangered and native species, which they consider to be more valuable than members of more common species. [ 5 ] These actions are frequently opposed by animal ethicists, [ 2 ] who may argue for a gradation of value of individual animals based on their level of sentience and would not consider whether an individual animal exists naturally as morally relevant; to them the individual's capacity to suffer is what matters. [ 5 ]
Environmental ethicists may support hunting , which harms individual animals, in cases when it is considered to be ecologically beneficial. [ 6 ] [ 7 ] Some animal ethicists argue that we have a moral obligation to take steps to reduce wild animal suffering ; this is something that environmental ethicists are normally against. [ 8 ]
These differences of opinion have led some ethicists to argue that animal ethics and environmental ethics are incompatible, [ 8 ] [ 9 ] while others assert that the positions are reconcilable, or that the disagreements are not as strong as they first appear. [ 10 ] [ 11 ]
Animal rights philosopher Tom Regan , in his 1981 paper, conceived of an environmental ethic in which "nonconscious natural objects can have value in their own right, independently of human interests". [ 12 ] In his 1982 book, The Case for Animal Rights , Regan argued that it is difficult to reconcile Aldo Leopold 's holistic land ethic , where the "individual may be sacrificed for the greater biotic good", with the concept of animal rights and that, as a result, Leopold's view could justly be labelled as "environmental fascism". [ 13 ]
The utilitarian philosopher Peter Singer , in Practical Ethics , argues for an environmental ethic which "fosters consideration for the interests of all sentient creatures, including subsequent generations stretching into the far future." [ 14 ]
Eze Paez and Catia Faria assert that animal and environmental ethics "have incompatible criteria of moral considerability" and "incompatible normative implications regarding the interests of sentient individuals"; they also claim that environmental ethics fails to give a proper account of the problem of wild animal suffering. [ 8 ] Oscar Horta has argued that contrary to first appearances, "biocentric views should strongly support intervention" to relieve the suffering of animals in the wild. [ 10 ]
J. Baird Callicott , in his 1980 paper "Animal Liberation: A Triangular Affair", was the first environmental philosopher to argue for "intractable practical differences" between the ethical foundations of Leopold's land ethic, taken as a paradigm for environmental ethics, with those of the animal liberation movement . [ 9 ] Mark Sagoff made a similar case in his 1984 paper "Animal Liberation and Environmental Ethics: Bad Marriage, Quick Divorce", stating "[e]nvironmentalists cannot be animal liberationists. Animal liberationists cannot be environmentalists". [ 15 ] In a follow-up paper, published in 1988, Callicott lamented the conflict that his earlier paper had sparked, stating "it would be far wiser to make common cause against a common enemy — the destructive forces at work ravaging the nonhuman world — than to continue squabbling among ourselves". [ 11 ]
Michael Hutchins and Christin Wemmer in their 1986 paper "Wildlife Conservation and Animal Rights: Are They Compatible?", labelled the position of animal liberationists as "biologically illiterate and thus ill-equipped to provide an intelligent basis for wildlife conservation"; however, they conceded that "ethical philosophy faces a severe test when it comes to the conservation problem." [ 16 ]
In a 1992 paper, Ned Hettinger raises the predation problem , in response to animal rights activists criticizing the environmental ethics of Holmes Rolston and his support of hunting, stating "[b]y arguing that humans should not join other predators and must not kill animals for basic needs, animal activists risk being committed to the view that all carnivorous predation is intrinsically evil". [ 17 ]
Dale Jamieson has argued that rather than being distinct positions, "animal liberation is an environmental ethic" and that it should be welcomed back by environmental ethicists. [ 18 ]
Ricardo Rozzi has criticized animal ethicists for "taxonomic chauvinism" and has urged them to "reevaluate the participation of invertebrates in the moral community". [ 19 ] | https://en.wikipedia.org/wiki/Relationship_between_animal_ethics_and_environmental_ethics |
The relationship between mathematics and physics has been a subject of study of philosophers , mathematicians and physicists since antiquity , and more recently also by historians and educators . [ 2 ] Generally considered a relationship of great intimacy, [ 3 ] mathematics has been described as "an essential tool for physics" [ 4 ] and physics has been described as "a rich source of inspiration and insight in mathematics". [ 5 ] Some of the oldest and most discussed themes are about the main differences between the two subjects, their mutual influence, the role of mathematical rigor in physics, and the problem of explaining the effectiveness of mathematics in physics.
In his work Physics , one of the topics treated by Aristotle is about how the study carried out by mathematicians differs from that carried out by physicists. [ 6 ] Considerations about mathematics being the language of nature can be found in the ideas of the Pythagoreans : the convictions that "Numbers rule the world" and "All is number", [ 7 ] [ 8 ] and two millennia later were also expressed by Galileo Galilei : "The book of nature is written in the language of mathematics". [ 9 ] [ 10 ]
Before giving a mathematical proof for the formula for the volume of a sphere , Archimedes used physical reasoning to discover the solution (imagining the balancing of bodies on a scale). [ 11 ] Aristotle classified physics and mathematics as theoretical sciences, in contrast to practical sciences (like ethics or politics ) and to productive sciences (like medicine or botany ). [ 12 ]
From the seventeenth century, many of the most important advances in mathematics appeared motivated by the study of physics, and this continued in the following centuries (although in the nineteenth century mathematics started to become increasingly independent from physics). [ 13 ] [ 14 ] The creation and development of calculus were strongly linked to the needs of physics: [ 15 ] There was a need for a new mathematical language to deal with the new dynamics that had arisen from the work of scholars such as Galileo Galilei and Isaac Newton . [ 16 ] The concept of derivative was needed, Newton did not have the modern concept of limits , and instead employed infinitesimals , which lacked a rigorous foundation at that time. [ 17 ] During this period there was little distinction between physics and mathematics; [ 18 ] as an example, Newton regarded geometry as a branch of mechanics . [ 19 ]
Non-Euclidean geometry , as formulated by Carl Friedrich Gauss , János Bolyai , Nikolai Lobachevsky , and Bernhard Riemann , freed physics from the limitation of a single Euclidean geometry. [ 20 ] A version of non-Euclidean geometry, called Riemannian geometry , enabled Albert Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. [ 21 ]
In the 19th century Auguste Comte in his hierarchy of the sciences , placed physics and astronomy as less general and more complex than mathematics, as both depend on it. [ 22 ] In 1900, David Hilbert in his 23 problems for the advancement of mathematical science, considered the axiomatization of physics as his sixth problem . The problem remains open. [ 23 ]
In 1930, Paul Dirac invented the Dirac delta function , which produces a single value when used in an integral.
The mathematical rigor of this function was in doubt until the mathematician Laurent Schwartz developed the theory of distributions . [ 24 ]
Connections between the two fields sometimes only require identifying similar concepts by different names, as shown in the 1975 Wu–Yang dictionary , [ 25 ] which related concepts of gauge theory to those of differential geometry . [ 26 ] : 332
Despite the close relationship between math and physics, they are not synonyms. In mathematics objects can be defined exactly and logically related, but the object need have no relationship to experimental measurements . In physics, definitions are abstractions or idealizations, approximations adequate when compared to the natural world. In 1960, Georg Rasch noted that no models are ever true , not even Newton's laws , emphasizing that models should not be evaluated based on truth but on their applicability for a given purpose. [ 27 ] For example, Newton built a physical model around definitions like his second law of motion $\mathbf{F} = m\mathbf{a}$ based on observations, leading to the development of calculus and highly accurate planetary mechanics, but later this definition was superseded by improved models of mechanics. [ 28 ] Mathematics deals with entities whose properties can be known with certainty . [ 29 ] According to David Hume , only statements that deal solely with ideas themselves, such as those encountered in mathematics, can be demonstrated to be true with certainty, while any conclusions pertaining to experiences of the real world can only be achieved via "probable reasoning". [ 30 ] This leads to a situation that was put by Albert Einstein as "No number of experiments can prove me right; a single experiment can prove me wrong." [ 31 ] The ultimate goal of research in pure mathematics is rigorous proof, while in physics heuristic arguments may sometimes suffice in leading-edge research. [ 32 ] In short, the methods and goals of physicists and mathematicians are different. [ 33 ] Nonetheless, according to Roland Omnès , the axioms of mathematics are not mere conventions, but have physical origins. [ 34 ]
Rigor is indispensable in pure mathematics. [ 35 ] But many definitions and arguments found in the physics literature involve concepts and ideas that are not up to the standards of rigor in mathematics . [ 32 ] [ 36 ] [ 37 ] [ 38 ]
For example, Freeman Dyson characterized quantum field theory as having two "faces". The outward face looked at nature and there the predictions of quantum field theory are exceptionally successful. The inward face looked at mathematical foundations and found inconsistency and mystery. The success of the physical theory comes despite its lack of rigorous mathematical backing. [ 39 ] : ix [ 40 ] : 2
Some of the problems considered in the philosophy of mathematics are the following:
In recent times the two disciplines have most often been taught separately, despite all the interrelations between physics and mathematics. [ 51 ] This led some professional mathematicians who were also interested in mathematics education , such as Felix Klein , Richard Courant , Vladimir Arnold and Morris Kline , to strongly advocate teaching mathematics in a way more closely related to the physical sciences. [ 52 ] [ 53 ] The initial courses of mathematics for college students of physics are often taught by mathematicians, despite the differences in "ways of thinking" of physicists and mathematicians about those traditional courses and how they are used in the physics classes thereafter. [ 54 ] | https://en.wikipedia.org/wiki/Relationship_between_mathematics_and_physics |
The relationship between telomeres and longevity and changing the length of telomeres is one of the new fields of research on increasing human lifespan and even human immortality . [ 1 ] [ 2 ] Telomeres are sequences at the ends of chromosomes that shorten with each cell division and determine the lifespan of cells . [ 3 ] The telomere was first discovered by biologist Hermann Joseph Muller in the early 20th century. [ 4 ] However, experiments by Elizabeth Blackburn , Carol Greider , and Jack Szostak in the 1980s led to the successful discovery of telomerase (the enzyme responsible for maintaining telomere length) and a better understanding of telomeres. [ 5 ] [ 6 ] [ 7 ]
Telomeres play essential roles in the stability and control of cell division. [ 8 ] Telomeres protect chromosomes from deterioration [ 9 ] and fusion with neighboring chromosomes and act as a buffer zone, preventing the loss of essential genetic information during cell division. [ 2 ]
It is predicted that knowledge of methods to increase the length of cell telomeres (stem cells and quasi-stem cells control the regeneration and rebuilding of the body's various tissues) will pave the way for increasing human lifespan. [ 10 ] [ 11 ] Examining telomeres is one of the most important fields of research related to aging. It is also very important to investigate the mechanisms of maintaining telomerase, the clearance of old cells that accumulate in tissues and sometimes cause cancer and inflammation, and the production of new cells in long-lived organisms. [ 1 ] [ 12 ] However, this idea faces major challenges such as increased cancer incidence, immune system problems, and unwanted long-term consequences. [ 1 ] [ 2 ] [ 13 ] [ 14 ]
In the early 1970s, Alexey Olovnikov first recognized that chromosomes cannot completely duplicate their ends during cell division . [ 15 ] This is known as the "end replication problem". [ 16 ] Olovnikov proposed that every time a cell divides, a part of the DNA sequence is lost, and if this loss reaches a certain level, cell division will stop at the end. [ 7 ] [ 9 ] [ 16 ] According to his "marginotomy" theory, there are sequences at the end of the DNA (telomeres) that are placed in tandem repeats and create a buffer zone that determines the number of divisions a particular cell can undergo. [ 16 ] [ 15 ]
Many organisms have a ribonucleoprotein enzyme called telomerase, which is responsible for adding repetitive nucleotide sequences to the ends of DNA. Telomerase replicates the telomere head and does not require ATP . [ 17 ] In most multicellular eukaryotic organisms , telomerase is active only in germ cells , some types of stem cells such as embryonic stem cells , and certain white blood cells . [ 9 ] Telomerase can be reactivated and telomeres restored to the embryonic state by somatic cell nuclear transfer. [ 18 ] The continuous shortening of telomeres with each replication in somatic (body) cells may play a role in aging [ 19 ] and in cancer prevention. [ 20 ] [ 21 ] This is because telomeres act as a kind of "delayed fuse" and eventually run out after a certain number of cell divisions. This action results in the loss of vital genetic information from the cell's chromosome after multiple divisions. [ 22 ] Research on telomerase is extremely important in understanding its role in maintaining telomere length and its potential implications for aging and cancer. [ 23 ]
While telomeres play an important role in cellular senescence , the intricate biological details of telomeres still require further investigation. [ 24 ] The complex interactions between telomeres, different proteins and the cellular environment must be fully understood in order to develop precise and safe interventions to change it. [ 25 ] Understanding the long-term effects of telomere extension on the body is complex and risky. Prediction of long-term consequences, including potential unanticipated side effects or interactions with other cellular processes, requires thorough and long-term investigation. [ 26 ]
Extending telomeres can allow cells to divide more and increase the risk of uncontrolled cell growth and cancer development. [ 24 ] A study conducted by Johns Hopkins University challenged the idea that long telomeres prevent aging. Rather than protecting cells from aging, long telomeres help cells with age-related mutations last longer. [ 13 ] This problem prepares the conditions for the occurrence of various types of cancer, and people with longer cell telomeres showed more signs of suffering from types of cancer such as Melanoma and Lymphoma . [ 13 ]
It is important to strike the right balance to avoid unintended consequences. [ 12 ]
Telomere dysfunction during cellular aging (a state in which cells do not divide but are metabolically active) affects the health of the body. [ 2 ] Preventing telomere shortening without clearing old cells may lead to the accumulation of these cells in the body and contribute to age-related diseases and tissue dysfunction. [ 29 ]
Different tissues of the human body may react differently to changes in telomeres. Telomere length differs among tissues and cell types of the body. [ 10 ] Developing a general telomere lengthening strategy that is effective in all tissues is a complex task; also, understanding how different types of cells, organs and systems react to telomere manipulation is very important for developing safe and effective interventions. [ 10 ]
The immune system plays an important role in monitoring and destroying abnormal or cancerous cells. [ 10 ] Telomere extension may affect the immune system's ability to recognize and eliminate cells with long telomeres, potentially compromising immune surveillance. It is very important to ensure the ability of the immune system to effectively identify and fight against pathogens and abnormal cells. [ 10 ] | https://en.wikipedia.org/wiki/Relationship_between_telomeres_and_longevity |
Relative accessible surface area or relative solvent accessibility (RSA) of a protein residue is a measure of residue solvent exposure . It can be calculated by formula:
$$ \text{RSA} = \text{ASA} / \text{MaxASA} $$ [ 1 ]
where ASA is the solvent accessible surface area and MaxASA is the maximum possible solvent accessible surface area for the residue. [ 1 ] Both ASA and MaxASA are commonly measured in Å².
To measure the relative solvent accessibility of the residue side-chain only, one usually takes MaxASA values that have been obtained from Gly-X-Gly tripeptides, where X is the residue of interest. Several MaxASA scales have been published [ 1 ] [ 2 ] [ 3 ] and are commonly used (see Table).
In this table, the more recently published MaxASA values (from Tien et al. 2013 [ 1 ] ) are systematically larger than the older values (from Miller et al. 1987 [ 2 ] or Rose et al. 1985 [ 3 ] ). This discrepancy can be traced back to the conformation in which the Gly-X-Gly tripeptides are evaluated to calculate MaxASA. The earlier works used the extended conformation, with backbone angles of $\phi = -120^{\circ}$ and $\psi = 140^{\circ}$. [ 2 ] [ 3 ] However, Tien et al. 2013 [ 1 ] demonstrated that tripeptides in extended conformation fall among the least-exposed conformations. The largest ASA values are consistently observed in alpha helices, with backbone angles around $\phi = -50^{\circ}$ and $\psi = -45^{\circ}$. Tien et al. 2013 recommend using their theoretical MaxASA values (2nd column in Table), as they were obtained from a systematic enumeration of all possible conformations and likely represent a true upper bound to observable ASA. [ 1 ]
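A minimal sketch of the RSA calculation itself is shown below. The ASA inputs are made-up numbers (in practice they would come from a structure via a tool such as DSSP), and the MaxASA values are the Tien et al. (2013) theoretical values as commonly quoted; both should be treated as assumptions of the sketch.

```python
# Hypothetical ASA values in Å^2 for three residues of some protein.
asa = {"ALA": 45.0, "GLY": 60.0, "TRP": 120.0}

# Theoretical MaxASA in Å^2 (Tien et al. 2013 values, as commonly quoted).
max_asa = {"ALA": 129.0, "GLY": 104.0, "TRP": 264.0}

for res, a in asa.items():
    rsa = a / max_asa[res]
    print(f"{res}: RSA = {rsa:.2f}")
```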
ASA and hence RSA values are generally calculated from a protein structure, for example with the software DSSP. [ 4 ] However, there is also an extensive literature attempting to predict RSA values from sequence data, using machine-learning approaches. [ 5 ] [ 6 ]
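As an illustration of the formula above, the following minimal Python sketch computes RSA from an ASA value and a MaxASA scale. The MaxASA numbers used here are approximate, illustrative values in the spirit of the published Gly-X-Gly scales, not an authoritative reproduction of any particular table.

```python
# Minimal sketch: RSA = ASA / MaxASA for a residue.
# MaxASA values below are approximate, illustrative numbers (in Å^2);
# consult a published scale (e.g. Tien et al. 2013) for actual work.
MAX_ASA_ILLUSTRATIVE = {
    "ALA": 129.0,
    "GLY": 104.0,
    "TRP": 264.0,
}

def relative_solvent_accessibility(asa: float, residue: str) -> float:
    """Return RSA given an ASA value (Å^2) and a three-letter residue code."""
    max_asa = MAX_ASA_ILLUSTRATIVE[residue]
    return asa / max_asa

# Example: a partially buried alanine with ASA = 43 Å^2
print(relative_solvent_accessibility(43.0, "ALA"))  # ≈ 0.33
```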
Experimentally determining RSA is an expensive and time-consuming task. In recent decades, several computational methods have been introduced for RSA prediction. [ 7 ] [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Relative_accessible_surface_area |
Relative apparent synapomorphy analysis , or RASA , is a method that aims to determine whether a given character is shared between taxa due to shared ancestry or due to convergence . A synapomorphy is a shared trait found among two or more taxa and their most recent common ancestor, whose ancestor in turn does not possess the trait. RASA assigns a score to the character based on its potential to be informative. [ 1 ]
The method performs poorly when used to select an outgroup taxon, to quantify the amount of phylogenetic signal present, or to identify taxa that may be prone to long branch attraction . [ 2 ] | https://en.wikipedia.org/wiki/Relative_apparent_synapomorphy_analysis |
Relative atomic mass (symbol: A r ; sometimes abbreviated RAM or r.a.m. ), also known by the deprecated synonym atomic weight , is a dimensionless physical quantity defined as the ratio of the average mass of atoms of a chemical element in a given sample to the atomic mass constant . The atomic mass constant (symbol: m u ) is defined as being 1 / 12 of the mass of a carbon-12 atom. [ 1 ] [ 2 ] Since both quantities in the ratio are masses, the resulting value is dimensionless. These definitions remain valid [ 3 ] : 134 even after the 2019 revision of the SI . [ a ] [ b ]
For a single given sample, the relative atomic mass of a given element is the weighted arithmetic mean of the masses of the individual atoms (including all its isotopes ) that are present in the sample. This quantity can vary significantly between samples because the sample's origin (and therefore its radioactive history or diffusion history) may have produced combinations of isotopic abundances in varying ratios. For example, due to a different mixture of stable carbon-12 and carbon-13 isotopes, a sample of elemental carbon from volcanic methane will have a different relative atomic mass than one collected from plant or animal tissues.
The more common, and more specific quantity known as standard atomic weight ( A r,standard ) is an application of the relative atomic mass values obtained from many different samples. It is sometimes interpreted as the expected range of the relative atomic mass values for the atoms of a given element from all terrestrial sources, with the various sources being taken from Earth . [ 8 ] "Atomic weight" is often loosely and incorrectly used as a synonym for standard atomic weight (incorrectly because standard atomic weights are not from a single sample). Standard atomic weight is nevertheless the most widely published variant of relative atomic mass.
Additionally, the continued use of the term "atomic weight" (for any element) as opposed to "relative atomic mass" has attracted considerable controversy since at least the 1960s, mainly due to the technical difference between weight and mass in physics. [ 9 ] Still, both terms are officially sanctioned by the IUPAC . The term "relative atomic mass" now seems to be replacing "atomic weight" as the preferred term, although the term " standard atomic weight" (as opposed to the more correct " standard relative atomic mass") continues to be used.
Relative atomic mass is determined by the average atomic mass, or the weighted mean of the atomic masses of all the atoms of a particular chemical element found in a particular sample, which is then compared to the atomic mass of carbon-12. [ 10 ] This comparison is the quotient of the two weights, which makes the value dimensionless (having no unit). This quotient also explains the word relative : the sample mass value is considered relative to that of carbon-12.
It is a synonym for atomic weight, though it is not to be confused with relative isotopic mass . Relative atomic mass is also frequently used as a synonym for standard atomic weight and these quantities may have overlapping values if the relative atomic mass used is that for an element from Earth under defined conditions. However, relative atomic mass (atomic weight) is still technically distinct from standard atomic weight because of its application only to the atoms obtained from a single sample; it is also not restricted to terrestrial samples, whereas standard atomic weight averages multiple samples but only from terrestrial sources. Relative atomic mass is therefore a more general term that can more broadly refer to samples taken from non-terrestrial environments or highly specific terrestrial environments which may differ substantially from Earth-average or reflect different degrees of certainty (e.g., in number of significant figures ) than those reflected in standard atomic weights.
The prevailing IUPAC definitions (as taken from the " Gold Book ") are:
and
Here the "unified atomic mass unit" refers to 1/12 of the mass of an atom of 12 C in its ground state . [ 13 ]
The IUPAC definition [ 1 ] of relative atomic mass is:
The definition deliberately specifies " An atomic weight ...", as an element will have different relative atomic masses depending on the source. For example, boron from Turkey has a lower relative atomic mass than boron from California , because of its different isotopic composition . [ 14 ] [ 15 ] Nevertheless, given the cost and difficulty of isotope analysis , it is common practice to instead substitute the tabulated values of standard atomic weights , which are ubiquitous in chemical laboratories and which are revised biennially by the IUPAC's Commission on Isotopic Abundances and Atomic Weights (CIAAW). [ 16 ]
Older (pre-1961) historical relative scales based on the atomic mass unit (symbol: a.m.u. or amu ) used either the oxygen-16 relative isotopic mass or else the oxygen relative atomic mass (i.e., atomic weight) for reference. See the article on the history of the modern unified atomic mass unit for the resolution of these problems.
The IUPAC commission CIAAW maintains an expectation-interval value for relative atomic mass (or atomic weight) on Earth named standard atomic weight. Standard atomic weight requires the sources be terrestrial, natural, and stable with regard to radioactivity. Also, there are requirements for the research process. For 84 stable elements, CIAAW has determined this standard atomic weight. These values are widely published and referred to loosely as 'the' atomic weight of elements for real-life substances like pharmaceuticals and commercial trade.
Also, CIAAW has published abridged (rounded) values and simplified values (for when the Earthly sources vary systematically).
Atomic mass ( m a ) is the mass of a single atom. It defines the mass of a specific isotope, which is an input value for the determination of the relative atomic mass. An example for three silicon isotopes is given below. A convenient unit of mass for atomic mass is the dalton (Da), which is also called the unified atomic mass unit (u).
The relative isotopic mass is the ratio of the mass of a single atom to the atomic mass constant ( m u = 1 Da ). This ratio is dimensionless.
Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide ) and isotopic composition of a sample. Highly accurate atomic masses are available [ 17 ] [ 18 ] for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples. [ 19 ] [ 20 ] For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy. For example, there is an uncertainty of only one part in 38 million for the relative atomic mass of fluorine , a precision which is greater than the current best value for the Avogadro constant (one part in 20 million).
The calculation is exemplified for silicon , whose relative atomic mass is especially important in metrology . Silicon exists in nature as a mixture of three isotopes: 28 Si, 29 Si and 30 Si. The atomic masses of these nuclides are known to a precision of one part in 14 billion for 28 Si and about one part in one billion for the others. However, the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table).
The calculation is as follows:
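A short Python sketch of this weighted-mean calculation, using approximate isotopic masses and abundances for silicon; the figures are illustrative round-offs of commonly tabulated values, not the exact data used by IUPAC.

```python
# Sketch of the weighted arithmetic mean behind a relative atomic mass.
# Masses (Da) and fractional abundances below are approximate, illustrative values.
silicon_isotopes = [
    (27.976927, 0.92223),   # 28Si
    (28.976495, 0.04685),   # 29Si
    (29.973770, 0.03092),   # 30Si
]

a_r = sum(mass * abundance for mass, abundance in silicon_isotopes)
total_abundance = sum(abundance for _, abundance in silicon_isotopes)
a_r /= total_abundance  # normalise in case the abundances do not sum exactly to 1

print(round(a_r, 4))  # ≈ 28.0855, consistent with the IUPAC value 28.0855(3)
```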
The estimation of the uncertainty is complicated, [ 21 ] especially as the sample distribution is not necessarily symmetrical: the IUPAC standard relative atomic masses are quoted with estimated symmetrical uncertainties, [ 22 ] and the value for silicon is 28.0855(3). The relative standard uncertainty in this value is 10 −5 or 10 ppm.
Apart from this uncertainty by measurement, some elements have variation over sources. That is, different sources (ocean water, rocks) have a different radioactive history and so different isotopic composition. To reflect this natural variability, the IUPAC made the decision in 2010 to list the standard relative atomic masses of 10 elements as an interval rather than a fixed number. [ 23 ] | https://en.wikipedia.org/wiki/Relative_atomic_mass |
In any quantitative science , the terms relative change and relative difference are used to compare two quantities while taking into account the "sizes" of the things being compared, i.e. dividing by a standard or reference or starting value. [ 1 ] The comparison is expressed as a ratio and is a unitless number . By multiplying these ratios by 100 they can be expressed as percentages so the terms percentage change , percent(age) difference , or relative percentage difference are also commonly used. The terms "change" and "difference" are used interchangeably. [ 2 ]
Relative change is often used as a quantitative indicator of quality assurance and quality control for repeated measurements where the outcomes are expected to be the same. A special case of percent change (relative change expressed as a percentage) called percent error occurs in measuring situations where the reference value is the accepted or actual value (perhaps theoretically determined) and the value being compared to it is experimentally determined (by measurement).
The relative change formula is not well-behaved under many conditions. Various alternative formulas, called indicators of relative change , have been proposed in the literature. Several authors have found log change and log points to be satisfactory indicators, but these have not seen widespread use. [ 3 ]
Given two numerical quantities, v ref and v with v ref some reference value, their actual change , actual difference , or absolute change is Δ v = v − v ref {\displaystyle \Delta v=v-v_{\text{ref}}} .
The term absolute difference is sometimes also used even though the absolute value is not taken; the sign of Δ typically is uniform, e.g. across an increasing data series. If the relationship of the value with respect to the reference value (that is, larger or smaller) does not matter in a particular application, the absolute value may be used in place of the actual change in the above formula to produce a value for the relative change which is always non-negative. The actual difference is not usually a good way to compare the numbers, in particular because it depends on the unit of measurement. For instance, 1 m is the same as 100 cm , but the absolute difference between 2 and 1 m is 1 while the absolute difference between 200 and 100 cm is 100, giving the impression of a larger difference. [ 4 ] But even with constant units, the relative change helps judge the importance of the respective change. For example, an increase in price of $100 of a valuable is considered big if changing from $50 to 150 but rather small when changing from $10,000 to 10,100 .
We can adjust the comparison to take into account the "size" of the quantities involved, by defining, for positive values of v ref :
relative change ( v ref , v ) = actual change reference value = Δ v v ref = v v ref − 1. {\displaystyle {\text{relative change}}(v_{\text{ref}},v)={\frac {\text{actual change}}{\text{reference value}}}={\frac {\Delta v}{v_{\text{ref}}}}={\frac {v}{v_{\text{ref}}}}-1.}
The relative change is independent of the unit of measurement employed; for example, the relative change from 2 to 1 m is −50% , the same as for 200 to 100 cm . The relative change is not defined if the reference value ( v ref ) is zero, and gives negative values for positive increases if v ref is negative, hence it is not usually defined for negative reference values either. For example, we might want to calculate the relative change of −10 to −6. The above formula gives ((−6) − (−10)) / (−10) = 4 / (−10) = −0.4 , indicating a decrease, yet in fact the reading increased.
Measures of relative change are unitless numbers expressed as a fraction . Corresponding values of percent change would be obtained by multiplying these values by 100 (and appending the % sign to indicate that the value is a percentage).
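A minimal Python sketch of the classical relative change and its percentage form, restricted to nonzero reference values as discussed above:

```python
def relative_change(v_ref: float, v: float) -> float:
    """Classical relative change (v - v_ref) / v_ref; v_ref must be nonzero."""
    if v_ref == 0:
        raise ValueError("relative change is undefined for a zero reference value")
    return (v - v_ref) / v_ref

def percent_change(v_ref: float, v: float) -> float:
    """Relative change expressed as a percentage."""
    return 100.0 * relative_change(v_ref, v)

print(relative_change(2.0, 1.0))       # -0.5, i.e. -50% (2 m -> 1 m)
print(percent_change(100000, 110000))  # 10.0 (e.g. 100000 -> 110000)
```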
The domain restriction of relative change to positive numbers often poses a constraint. To avoid this problem it is common to take the absolute value, so that the relative change formula works correctly for all nonzero values of v ref :
Relative change ( v ref , v ) = v − v ref | v ref | . {\displaystyle {\text{Relative change}}(v_{\text{ref}},v)={\frac {v-v_{\text{ref}}}{|v_{\text{ref}}|}}.}
This still does not solve the issue when the reference is zero. It is common to instead use an indicator of relative change, and take the absolute values of both v and v reference {\displaystyle v_{\text{reference}}} . Then the only problematic case is v = v reference = 0 {\displaystyle v=v_{\text{reference}}=0} , which can usually be addressed by appropriately extending the indicator. For example, for arithmetic mean this formula may be used: [ 5 ] d r ( x , y ) = | x − y | ( | x | + | y | ) / 2 , d r ( 0 , 0 ) = 0 {\displaystyle d_{r}(x,y)={\frac {|x-y|}{(|x|+|y|)/2}},\ d_{r}(0,0)=0}
A percentage change is a way to express a change in a variable. It represents the relative change between the old value and the new one. [ 6 ]
For example, if a house is worth $100,000 today and the year after its value goes up to $110,000, the percentage change of its value can be expressed as 110000 − 100000 100000 = 0.1 = 10 % . {\displaystyle {\frac {110000-100000}{100000}}=0.1=10\%.}
It can then be said that the worth of the house went up by 10%.
More generally, if V 1 represents the old value and V 2 the new one, Percentage change = Δ V V 1 = V 2 − V 1 V 1 × 100 % . {\displaystyle {\text{Percentage change}}={\frac {\Delta V}{V_{1}}}={\frac {V_{2}-V_{1}}{V_{1}}}\times 100\%.}
Some calculators directly support this via a %CH or Δ% function.
When the variable in question is a percentage itself, it is better to talk about its change by using percentage points , to avoid confusion between relative difference and absolute difference .
The percent error is a special case of the percentage form of relative change calculated from the absolute change between the experimental (measured) and theoretical (accepted) values, and dividing by the theoretical (accepted) value.
% Error = | Experimental − Theoretical | | Theoretical | × 100. {\displaystyle \%{\text{ Error}}={\frac {|{\text{Experimental}}-{\text{Theoretical}}|}{|{\text{Theoretical}}|}}\times 100.}
The terms "Experimental" and "Theoretical" used in the equation above are commonly replaced with similar terms. Other terms used for experimental could be "measured," "calculated," or "actual" and another term used for theoretical could be "accepted." Experimental value is what has been derived by use of calculation and/or measurement and is having its accuracy tested against the theoretical value, a value that is accepted by the scientific community or a value that could be seen as a goal for a successful result.
Although it is common practice to use the absolute value version of relative change when discussing percent error, in some situations, it can be beneficial to remove the absolute values to provide more information about the result. Thus, if an experimental value is less than the theoretical value, the percent error will be negative. This negative result provides additional information about the experimental result. For example, experimentally calculating the speed of light and coming up with a negative percent error says that the experimental value is a velocity that is less than the speed of light. This is a big difference from getting a positive percent error, which means the experimental value is a velocity that is greater than the speed of light (violating the theory of relativity ) and is a newsworthy result.
The percent error equation, when rewritten by removing the absolute values, becomes: % Error = Experimental − Theoretical | Theoretical | × 100. {\displaystyle \%{\text{ Error}}={\frac {{\text{Experimental}}-{\text{Theoretical}}}{|{\text{Theoretical}}|}}\times 100.}
It is important to note that the two values in the numerator do not commute . Therefore, it is vital to preserve the order as above: subtract the theoretical value from the experimental value and not vice versa.
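A short Python sketch of the percent error, preserving the order of subtraction noted above and offering both signed and absolute forms:

```python
def percent_error(experimental: float, theoretical: float, signed: bool = True) -> float:
    """Percent error of an experimental value relative to a theoretical value.

    With signed=True the sign shows whether the experimental value is below
    (negative) or above (positive) the theoretical value.
    """
    if theoretical == 0:
        raise ValueError("percent error is undefined for a zero theoretical value")
    error = (experimental - theoretical) / abs(theoretical) * 100.0
    return error if signed else abs(error)

# Example: a measured speed of light of 2.91e8 m/s against an accepted 2.998e8 m/s
print(round(percent_error(2.91e8, 2.998e8), 2))  # ≈ -2.94 (below the accepted value)
```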
Suppose that car M costs $50,000 and car L costs $40,000. We wish to compare these costs. [ 7 ] With respect to car L , the absolute difference is $10,000 = $50,000 − $40,000 . That is, car M costs $10,000 more than car L . The relative difference is, $ 10 , 000 $ 40 , 000 = 0.25 = 25 % , {\displaystyle {\frac {\$10,000}{\$40,000}}=0.25=25\%,} and we say that car M costs 25% more than car L . It is also common to express the comparison as a ratio, which in this example is, $ 50 , 000 $ 40 , 000 = 1.25 = 125 % , {\displaystyle {\frac {\$50,000}{\$40,000}}=1.25=125\%,} and we say that car M costs 125% of the cost of car L .
In this example the cost of car L was considered the reference value, but we could have made the choice the other way and considered the cost of car M as the reference value. The absolute difference is now −$10,000 = $40,000 − $50,000 since car L costs $10,000 less than car M . The relative difference, − $ 10 , 000 $ 50 , 000 = − 0.20 = − 20 % {\displaystyle {\frac {-\$10,000}{\$50,000}}=-0.20=-20\%} is also negative since car L costs 20% less than car M . The ratio form of the comparison, $ 40 , 000 $ 50 , 000 = 0.8 = 80 % {\displaystyle {\frac {\$40,000}{\$50,000}}=0.8=80\%} says that car L costs 80% of what car M costs.
It is the use of the words "of" and "less/more than" that distinguish between ratios and relative differences. [ 8 ]
If a bank were to raise the interest rate on a savings account from 3% to 4%, the statement that "the interest rate was increased by 1%" would be incorrect and misleading. The absolute change in this situation is 1 percentage point (4% − 3%), but the relative change in the interest rate is: 4 % − 3 % 3 % = 0.333 … = 33 1 3 % . {\displaystyle {\frac {4\%-3\%}{3\%}}=0.333\ldots =33{\frac {1}{3}}\%.}
In general, the term "percentage point(s)" indicates an absolute change or difference of percentages, while the percent sign or the word "percentage" refers to the relative change or difference. [ 9 ]
The (classical) relative change above is but one of the possible measures/indicators of relative change. An indicator of relative change from x (initial or reference value) to y (new value) R ( x , y ) {\displaystyle R(x,y)} is a binary real-valued function defined for the domain of interest which satisfies the following properties: [ 10 ]
The normalization condition is motivated by the observation that R scaled by a constant c > 0 {\displaystyle c>0} still satisfies the other conditions besides normalization. Furthermore, due to the independence condition, every R can be written as a single argument function H of the ratio y / x {\displaystyle y/x} . [ 11 ] The normalization condition is then that H ′ ( 1 ) = 1 {\displaystyle H'(1)=1} . This implies all indicators behave like the classical one when y / x {\displaystyle y/x} is close to 1 .
Usually the indicator of relative change is presented as the actual change Δ scaled by some function of the values x and y , say f ( x , y ) . [ 2 ]
Relative change ( x , y ) = Actual change Δ f ( x , y ) = y − x f ( x , y ) . {\displaystyle {\text{Relative change}}(x,y)={\frac {{\text{Actual change}}\,\Delta }{f(x,y)}}={\frac {y-x}{f(x,y)}}.}
As with classical relative change, the general relative change is undefined if f ( x , y ) is zero. Various choices for the function f ( x , y ) have been proposed: [ 12 ]
As can be seen in the table, all but the first two indicators have, as denominator a mean . One of the properties of a mean function m ( x , y ) {\displaystyle m(x,y)} is: [ 12 ] m ( x , y ) = m ( y , x ) {\displaystyle m(x,y)=m(y,x)} , which means that all such indicators have a "symmetry" property that the classical relative change lacks: R ( x , y ) = − R ( y , x ) {\displaystyle R(x,y)=-R(y,x)} . This agrees with intuition that a relative change from x to y should have the same magnitude as a relative change in the opposite direction, y to x , just like the relation y x = 1 x y {\displaystyle {\frac {y}{x}}={\frac {1}{\frac {x}{y}}}} suggests.
Maximum mean change has been recommended when comparing floating point values in programming languages for equality with a certain tolerance. [ 13 ] Another application is in the computation of approximation errors when the relative error of a measurement is required. [ citation needed ] Minimum mean change has been recommended for use in econometrics. [ 14 ] [ 15 ] Logarithmic change has been recommended as a general-purpose replacement for relative change and is discussed more below.
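For instance, using the maximum of the two magnitudes as the denominator gives a symmetric relative difference that is convenient for tolerance-based floating-point comparison. The sketch below is one common formulation under that choice, not a reproduction of any particular language's built-in comparison.

```python
def relative_difference_max(x: float, y: float) -> float:
    """|x - y| scaled by max(|x|, |y|); defined as 0 when both values are 0."""
    denom = max(abs(x), abs(y))
    return 0.0 if denom == 0.0 else abs(x - y) / denom

def approximately_equal(x: float, y: float, rel_tol: float = 1e-9) -> bool:
    """True when the maximum-scaled relative difference is within rel_tol."""
    return relative_difference_max(x, y) <= rel_tol

print(approximately_equal(1.0, 1.0 + 1e-12))  # True
print(approximately_equal(1.0, 1.1))          # False
```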
Tenhunen defines a general relative difference function from L (reference value) to K : [ 16 ] H ( K , L ) = { ∫ 1 K / L t c − 1 d t when K > L − ∫ K / L 1 t c − 1 d t when K < L {\displaystyle H(K,L)={\begin{cases}\int _{1}^{K/L}t^{c-1}dt&{\text{when }}K>L\\-\int _{K/L}^{1}t^{c-1}dt&{\text{when }}K<L\end{cases}}}
which leads to
H ( K , L ) = { 1 c ⋅ ( ( K / L ) c − 1 ) c ≠ 0 ln ( K / L ) c = 0 , K > 0 , L > 0 {\displaystyle H(K,L)={\begin{cases}{\frac {1}{c}}\cdot ((K/L)^{c}-1)&c\neq 0\\\ln(K/L)&c=0,K>0,L>0\end{cases}}}
In particular for the special cases c = ± 1 {\displaystyle c=\pm 1} ,
H ( K , L ) = { ( K − L ) / K c = − 1 ( K − L ) / L c = 1 {\displaystyle H(K,L)={\begin{cases}(K-L)/K&c=-1\\(K-L)/L&c=1\end{cases}}}
Of these indicators of relative change, the most natural arguably is the natural logarithm (ln) of the ratio of the two numbers (final and initial), called log change . [ 2 ] Indeed, when | V 1 − V 0 V 0 | ≪ 1 {\displaystyle \left|{\frac {V_{1}-V_{0}}{V_{0}}}\right|\ll 1} , the following approximation holds: ln V 1 V 0 = ∫ V 0 V 1 d V V ≈ ∫ V 0 V 1 d V V 0 = V 1 − V 0 V 0 = classical relative change {\displaystyle \ln {\frac {V_{1}}{V_{0}}}=\int _{V_{0}}^{V_{1}}{\frac {{\mathrm {d} }V}{V}}\approx \int _{V_{0}}^{V_{1}}{\frac {{\mathrm {d} }V}{V_{0}}}={\frac {V_{1}-V_{0}}{V_{0}}}={\text{classical relative change}}}
In the same way that relative change is scaled by 100 to get percentages, ln V 1 V 0 {\displaystyle \ln {\frac {V_{1}}{V_{0}}}} can be scaled by 100 to get what is commonly called log points . [ 17 ] Log points are equivalent to the unit centinepers (cNp) when measured for root-power quantities. [ 18 ] [ 19 ] This quantity has also been referred to as a log percentage and denoted L% . [ 2 ] Since the derivative of the natural log at 1 is 1, log points are approximately equal to percent change for small differences – for example an increase of 1% equals an increase of 0.995 cNp, and a 5% increase gives a 4.88 cNp increase. This approximation property does not hold for other choices of logarithm base, which introduce a scaling factor due to the derivative not being 1. Log points can thus be used as a replacement for percent change. [ 20 ] [ 18 ]
Using log change has the advantages of additivity compared to relative change. [ 2 ] [ 18 ] Specifically, when using log change, the total change after a series of changes equals the sum of the changes. With percent, summing the changes is only an approximation, with larger error for larger changes. [ 18 ] For example:
Note that in the above table, since relative change 0 (respectively relative change 1 ) has the same numerical value as log change 0 (respectively log change 1 ), it does not correspond to the same variation. The conversion between relative and log changes may be computed as log change = ln ( 1 + relative change ) {\displaystyle {\text{log change}}=\ln(1+{\text{relative change}})} .
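A small Python sketch of log change (scaled to log points/centinepers), its additivity, and the conversion from classical relative change:

```python
import math

def log_change(v0: float, v1: float) -> float:
    """Natural-log change ln(v1/v0); multiply by 100 for log points (cNp)."""
    return math.log(v1 / v0)

def log_change_from_relative(relative: float) -> float:
    """Convert a classical relative change to a log change."""
    return math.log(1.0 + relative)

# Additivity: the steps 100 -> 110 -> 99 sum to the same log change ...
step_up, step_down = log_change(100, 110), log_change(110, 99)
print(round(100 * (step_up + step_down), 3))  # ≈ -1.005 cNp
# ... as the single step 100 -> 99:
print(round(100 * log_change(100, 99), 3))    # ≈ -1.005 cNp
```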
By additivity, ln V 1 V 0 + ln V 0 V 1 = 0 {\displaystyle \ln {\frac {V_{1}}{V_{0}}}+\ln {\frac {V_{0}}{V_{1}}}=0} , and therefore additivity implies a sort of symmetry property, namely ln V 1 V 0 = − ln V 0 V 1 {\displaystyle \ln {\frac {V_{1}}{V_{0}}}=-\ln {\frac {V_{0}}{V_{1}}}} and thus the magnitude of a change expressed in log change is the same whether V 0 or V 1 is chosen as the reference. [ 18 ] In contrast, for relative change, V 1 − V 0 V 0 ≠ − V 0 − V 1 V 1 {\displaystyle {\frac {V_{1}-V_{0}}{V_{0}}}\neq -{\frac {V_{0}-V_{1}}{V_{1}}}} , with the difference ( V 1 − V 0 ) 2 V 0 V 1 {\displaystyle {\frac {(V_{1}-V_{0})^{2}}{V_{0}V_{1}}}} becoming larger as V 1 or V 0 approaches 0 while the other remains fixed. For example:
Here 0 + means taking the limit from above towards 0.
The log change is the unique two-variable function that is additive, and whose linearization matches relative change. There is a family of additive difference functions F λ ( x , y ) {\displaystyle F_{\lambda }(x,y)} for any λ ∈ R {\displaystyle \lambda \in \mathbb {R} } , such that absolute change is F 0 {\displaystyle F_{0}} and log change is F 1 {\displaystyle F_{1}} . [ 21 ] | https://en.wikipedia.org/wiki/Relative_change |
Relative density , also called specific gravity , [ 1 ] [ 2 ] is a dimensionless quantity defined as the ratio of the density (mass of a unit volume) of a substance to the density of a given reference material. Specific gravity for solids and liquids is nearly always measured with respect to water at its densest (at 4 °C or 39.2 °F); for gases, the reference is air at room temperature (20 °C or 68 °F). The term "relative density" (abbreviated r.d. or RD ) is preferred in SI , whereas the term "specific gravity" is gradually being abandoned. [ 3 ]
If a substance's relative density is less than 1 then it is less dense than the reference; if greater than 1 then it is denser than the reference. If the relative density is exactly 1 then the densities are equal; that is, equal volumes of the two substances have the same mass. If the reference material is water, then a substance with a relative density (or specific gravity) less than 1 will float in water. For example, an ice cube, with a relative density of about 0.91, will float. A substance with a relative density greater than 1 will sink.
Temperature and pressure must be specified for both the sample and the reference. Pressure is nearly always 1 atm (101.325 kPa ). Where it is not, it is more usual to specify the density directly. Temperatures for both sample and reference vary from industry to industry. In British brewing practice, the specific gravity, as specified above, is multiplied by 1000. [ 4 ] Specific gravity is commonly used in industry as a simple means of obtaining information about the concentration of solutions of various materials such as brines , must weight ( syrups , juices, honeys, brewers wort , must , etc.) and acids.
Relative density ( R D {\displaystyle RD} ) or specific gravity ( S G {\displaystyle SG} ) is a dimensionless quantity , as it is the ratio of either densities or weights R D = ρ s u b s t a n c e ρ r e f e r e n c e , {\displaystyle {\mathit {RD}}={\frac {\rho _{\mathrm {substance} }}{\rho _{\mathrm {reference} }}},} where R D {\displaystyle RD} is relative density, ρ s u b s t a n c e {\displaystyle \rho _{\mathrm {substance} }} is the density of the substance being measured, and ρ r e f e r e n c e {\displaystyle \rho _{\mathrm {reference} }} is the density of the reference. (By convention ρ {\displaystyle \rho } , the Greek letter rho , denotes density.)
The reference material can be indicated using subscripts: R D s u b s t a n c e / r e f e r e n c e {\displaystyle RD_{\mathrm {substance/reference} }} which means "the relative density of substance with respect to reference ". If the reference is not explicitly stated then it is normally assumed to be water at 4 ° C (or, more precisely, 3.98 °C, which is the temperature at which water reaches its maximum density). In SI units, the density of water is (approximately) 1000 kg / m 3 or 1 g / cm 3 , which makes relative density calculations particularly convenient: the density of the object only needs to be divided by 1000 or 1, depending on the units.
The relative density of gases is often measured with respect to dry air at a temperature of 20 °C and a pressure of 101.325 kPa absolute, which has a density of 1.205 kg/m 3 . Relative density with respect to air can be obtained by R D = ρ g a s ρ a i r ≈ M g a s M a i r , {\displaystyle {\mathit {RD}}={\frac {\rho _{\mathrm {gas} }}{\rho _{\mathrm {air} }}}\approx {\frac {M_{\mathrm {gas} }}{M_{\mathrm {air} }}},} where M {\displaystyle M} is the molar mass and the approximately equal sign is used because equality pertains only if 1 mol of the gas and 1 mol of air occupy the same volume at a given temperature and pressure, i.e., they are both ideal gases . Ideal behaviour is usually only seen at very low pressure. For example, one mol of an ideal gas occupies 22.414 L at 0 °C and 1 atmosphere whereas carbon dioxide has a molar volume of 22.259 L under those same conditions.
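As a rough illustration of the molar-mass approximation for gases, assuming near-ideal behaviour (the mean molar mass of dry air used below is an approximate, commonly quoted figure):

```python
# Approximate relative density of a gas with respect to dry air,
# using the molar-mass ratio (valid only for near-ideal gases).
M_AIR = 28.97   # g/mol, approximate mean molar mass of dry air

def gas_relative_density(molar_mass: float, molar_mass_ref: float = M_AIR) -> float:
    return molar_mass / molar_mass_ref

print(round(gas_relative_density(44.01), 3))  # CO2: ≈ 1.519, denser than air
print(round(gas_relative_density(4.003), 3))  # He:  ≈ 0.138, much less dense than air
```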
Those with SG greater than 1 are denser than water and will, disregarding surface tension effects, sink in it. Those with an SG less than 1 are less dense than water and will float on it. In scientific work, the relationship of mass to volume is usually expressed directly in terms of the density (mass per unit volume) of the substance under study. It is in industry where specific gravity finds wide application, often for historical reasons.
True specific gravity of a liquid can be expressed mathematically as: S G t r u e = ρ s a m p l e ρ H 2 O , {\displaystyle SG_{\mathrm {true} }={\frac {\rho _{\mathrm {sample} }}{\rho _{\mathrm {H_{2}O} }}},} where ρ s a m p l e {\displaystyle \rho _{\mathrm {sample} }} is the density of the sample and ρ H 2 O {\displaystyle \rho _{\mathrm {H_{2}O} }} is the density of water.
The apparent specific gravity is simply the ratio of the weights of equal volumes of sample and water in air: S G a p p a r e n t = W A , sample W A , H 2 O , {\displaystyle SG_{\mathrm {apparent} }={\frac {W_{\mathrm {A} ,{\text{sample}}}}{W_{\mathrm {A} ,\mathrm {H_{2}O} }}},} where W A , sample {\displaystyle W_{A,{\text{sample}}}} represents the weight of the sample measured in air and W A , H 2 O {\displaystyle {W_{\mathrm {A} ,\mathrm {H_{2}O} }}} the weight of an equal volume of water measured in air.
It can be shown that true specific gravity can be computed from different properties: S G t r u e = ρ s a m p l e ρ H 2 O = m s a m p l e V m H 2 O V = m s a m p l e m H 2 O g g = W V , sample W V , H 2 O , {\displaystyle SG_{\mathrm {true} }={\frac {\rho _{\mathrm {sample} }}{\rho _{\mathrm {H_{2}O} }}}={\frac {\frac {m_{\mathrm {sample} }}{V}}{\frac {m_{\mathrm {H_{2}O} }}{V}}}={\frac {m_{\mathrm {sample} }}{m_{\mathrm {H_{2}O} }}}{\frac {g}{g}}={\frac {W_{\mathrm {V} ,{\text{sample}}}}{W_{\mathrm {V} ,\mathrm {H_{2}O} }}},}
where g is the local acceleration due to gravity, V is the volume of the sample and of water (the same for both), ρ sample is the density of the sample, ρ H 2 O is the density of water, W V represents a weight obtained in vacuum, m s a m p l e {\displaystyle {\mathit {m}}_{\mathrm {sample} }} is the mass of the sample and m H 2 O {\displaystyle {\mathit {m}}_{\mathrm {H_{2}O} }} is the mass of an equal volume of water.
The density of water and of the sample varies with temperature and pressure, so it is necessary to specify the temperatures and pressures at which the densities or weights were determined. Measurements are nearly always made at 1 nominal atmosphere (101.325 kPa ± variations from changing weather patterns), but as specific gravity usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products), variations in density caused by pressure are usually neglected at least where apparent specific gravity is being measured. For true ( in vacuo ) specific gravity calculations, air pressure must be considered (see below). Temperatures are specified by the notation ( T s / T r ), with T s representing the temperature at which the sample's density was determined and T r the temperature at which the reference (water) density is specified. For example, SG (20 °C/4 °C) would be understood to mean that the density of the sample was determined at 20 °C and of the water at 4 °C. Taking into account different sample and reference temperatures, while SG H 2 O = 1.000 000 (20 °C/20 °C), it is also the case that SG H 2 O = 0.998 2008 ⁄ 0.999 9720 = 0.998 2288 (20 °C/4 °C). Here, temperature is being specified using the current ITS-90 scale and the densities [ 5 ] used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale, the densities at 20 °C and 4 °C are 0.998 2041 and 0.999 9720 respectively, [ 6 ] resulting in an SG (20 °C/4 °C) value for water of 0.998 232 .
As the principal use of specific gravity measurements in industry is determination of the concentrations of substances in aqueous solutions and as these are found in tables of SG versus concentration, it is extremely important that the analyst enter the table with the correct form of specific gravity. For example, in the brewing industry, the Plato table lists sucrose concentration by weight against true SG, and was originally (20 °C/4 °C) [ 7 ] i.e. based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C which is very close to the temperature at which water has its maximum density, ρ H 2 O equal to 999.972 kg/m 3 in SI units ( 0.999 972 g/cm 3 in cgs units or 62.43 lb/cu ft in United States customary units ). The ASBC table [ 8 ] in use today in North America for apparent specific gravity measurements at (20 °C/20 °C) is derived from the original Plato table using Plato et al.‘s value for SG(20 °C/4 °C) = 0.998 2343 . In the sugar, soft drink, honey, fruit juice and related industries, sucrose concentration by weight is taken from a table prepared by A. Brix , which uses SG (17.5 °C/17.5 °C). As a final example, the British SG units are based on reference and sample temperatures of 60 °F and are thus (15.56 °C/15.56 °C).
Given the specific gravity of a substance, its actual density can be calculated by rearranging the above formula: ρ s u b s t a n c e = S G × ρ H 2 O . {\displaystyle \rho _{\mathrm {substance} }=SG\times \rho _{\mathrm {H_{2}O} }.}
Occasionally a reference substance other than water is specified (for example, air), in which case specific gravity means density relative to that reference.
The density of substances varies with temperature and pressure so that it is necessary to specify the temperatures and pressures at which the densities or masses were determined. It is nearly always the case that measurements are made at nominally 1 atmosphere (101.325 kPa ignoring the variations caused by changing weather patterns) but as relative density usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products) variations in density caused by pressure are usually neglected at least where apparent relative density is being measured. For true ( in vacuo ) relative density calculations air pressure must be considered (see below). Temperatures are specified by the notation ( T s / T r ) with T s representing the temperature at which the sample's density was determined and T r the temperature at which the reference (water) density is specified. For example, SG (20 °C/4 °C) would be understood to mean that the density of the sample was determined at 20 °C and of the water at 4 °C. Taking into account different sample and reference temperatures, while SG H 2 O = 1.000000 (20 °C/20 °C) it is also the case that RD H 2 O = 0.9982008 / 0.9999720 = 0.9982288 (20 °C/4 °C). Here temperature is being specified using the current ITS-90 scale and the densities [ 5 ] used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale the densities [ 6 ] at 20 °C and 4 °C are, respectively, 0.9982041 and 0.9999720 resulting in an RD (20 °C/4 °C) value for water of 0.99823205.
The temperatures of the two materials may be explicitly stated in the density symbols; for example:
where the superscript indicates the temperature at which the density of the material is measured, and the subscript indicates the temperature of the reference substance to which it is compared.
Relative density can also help to quantify the buoyancy of a substance in a fluid or gas, or determine the density of an unknown substance from the known density of another. Relative density is often used by geologists and mineralogists to help determine the mineral content of a rock or other sample. Gemologists use it as an aid in the identification of gemstones . Water is preferred as the reference because measurements are then easy to carry out in the field (see below for examples of measurement methods).
As the principal use of relative density measurements in industry is determination of the concentrations of substances in aqueous solutions and these are found in tables of RD vs concentration it is extremely important that the analyst enter the table with the correct form of relative density. For example, in the brewing industry, the Plato table , which lists sucrose concentration by mass against true RD, were originally (20 °C/4 °C) [ 7 ] that is based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C which is very close to the temperature at which water has its maximum density of ρ ( H 2 O ) equal to 0.999972 g/cm 3 (or 62.43 lb·ft −3 ). The ASBC table [ 8 ] in use today in North America, while it is derived from the original Plato table is for apparent relative density measurements at (20 °C/20 °C) on the IPTS-68 scale where the density of water is 0.9982071 g/cm 3 . In the sugar, soft drink, honey, fruit juice and related industries sucrose concentration by mass is taken from this work [ 4 ] which uses SG (17.5 °C/17.5 °C). As a final example, the British RD units are based on reference and sample temperatures of 60 °F and are thus (15.56 °C/15.56 °C). [ 4 ]
Relative density is used in medicine, particularly in the pharmaceutical field. It is used in automated compounders in the preparation of multicomponent mixtures for parenteral nutrition . It is also an important factor in urinalysis , where relative density is an indicator of both the concentration of particles in the urine and a patient's degree of hydration. [ 9 ]
Relative density can be calculated directly by measuring the density of a sample and dividing it by the (known) density of the reference substance. The density of the sample is simply its mass divided by its volume. Although mass is easy to measure, the volume of an irregularly shaped sample can be more difficult to ascertain. One method is to put the sample in a water-filled graduated cylinder and read off how much water it displaces. Alternatively the container can be filled to the brim, the sample immersed, and the volume of overflow measured. The surface tension of the water may keep a significant amount of water from overflowing, which is especially problematic for small samples. For this reason it is desirable to use a water container with as small a mouth as possible.
For each substance, the density, ρ , is given by ρ = Mass Volume = Deflection × Spring Constant Gravity Displacement W a t e r L i n e × Area C y l i n d e r . {\displaystyle \rho ={\frac {\text{Mass}}{\text{Volume}}}={\frac {{\text{Deflection}}\times {\frac {\text{Spring Constant}}{\text{Gravity}}}}{{\text{Displacement}}_{\mathrm {WaterLine} }\times {\text{Area}}_{\mathrm {Cylinder} }}}.}
When these densities are divided, references to the spring constant, gravity and cross-sectional area simply cancel, leaving R D = ρ o b j e c t ρ r e f = Deflection O b j . Displacement O b j . Deflection R e f . Displacement R e f . = 3 i n 20 m m 5 i n 34 m m = 3 i n × 34 m m 5 i n × 20 m m = 1.02. {\displaystyle RD={\frac {\rho _{\mathrm {object} }}{\rho _{\mathrm {ref} }}}={\frac {\frac {{\text{Deflection}}_{\mathrm {Obj.} }}{{\text{Displacement}}_{\mathrm {Obj.} }}}{\frac {{\text{Deflection}}_{\mathrm {Ref.} }}{{\text{Displacement}}_{\mathrm {Ref.} }}}}={\frac {\frac {3\ \mathrm {in} }{20\ \mathrm {mm} }}{\frac {5\ \mathrm {in} }{34\ \mathrm {mm} }}}={\frac {3\ \mathrm {in} \times 34\ \mathrm {mm} }{5\ \mathrm {in} \times 20\ \mathrm {mm} }}=1.02.}
Relative density is more easily and perhaps more accurately measured without measuring volume. Using a spring scale, the sample is weighed first in air and then in water. Relative density (with respect to water) can then be calculated using the following formula: R D = W a i r W a i r − W w a t e r , {\displaystyle RD={\frac {W_{\mathrm {air} }}{W_{\mathrm {air} }-W_{\mathrm {water} }}},} where
This technique cannot easily be used to measure relative densities less than one, because the sample will then float. W water becomes a negative quantity, representing the force needed to keep the sample underwater.
Another practical method uses three measurements. The sample is weighed dry. Then a container filled to the brim with water is weighed, and weighed again with the sample immersed, after the displaced water has overflowed and been removed. Subtracting the last reading from the sum of the first two readings gives the weight of the displaced water. The relative density result is the dry sample weight divided by that of the displaced water. This method allows the use of scales which cannot handle a suspended sample. A sample less dense than water can also be handled, but it has to be held down, and the error introduced by the fixing material must be considered.
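A minimal Python sketch of the two weighing-based procedures just described: spring-scale weighing in air and water, and the three-measurement overflow method (the numeric values are illustrative only).

```python
def rd_from_air_and_water_weights(w_air: float, w_water: float) -> float:
    """Relative density from a sample weighed in air and then suspended in water."""
    return w_air / (w_air - w_water)

def rd_from_three_weighings(w_dry: float, w_container_full: float,
                            w_container_with_sample: float) -> float:
    """Relative density from the three-measurement (overflow) method.

    Weight of displaced water = (w_dry + w_container_full) - w_container_with_sample.
    """
    w_displaced_water = w_dry + w_container_full - w_container_with_sample
    return w_dry / w_displaced_water

print(round(rd_from_air_and_water_weights(10.0, 6.0), 2))    # 2.5
print(round(rd_from_three_weighings(10.0, 50.0, 56.0), 2))   # 2.5
```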
The relative density of a liquid can be measured using a hydrometer. This consists of a bulb attached to a stalk of constant cross-sectional area, as shown in the adjacent diagram.
First the hydrometer is floated in the reference liquid (shown in light blue), and the displacement (the level of the liquid on the stalk) is marked (blue line). The reference could be any liquid, but in practice it is usually water.
The hydrometer is then floated in a liquid of unknown density (shown in green). The change in displacement, Δ x , is noted. In the example depicted, the hydrometer has dropped slightly in the green liquid; hence its density is lower than that of the reference liquid. It is necessary that the hydrometer floats in both liquids.
The application of simple physical principles allows the relative density of the unknown liquid to be calculated from the change in displacement. (In practice the stalk of the hydrometer is pre-marked with graduations to facilitate this measurement.)
In the explanation that follows,
Since the floating hydrometer is in static equilibrium , the downward gravitational force acting upon it must exactly balance the upward buoyancy force. The gravitational force acting on the hydrometer is simply its weight, mg . From the Archimedes buoyancy principle, the buoyancy force acting on the hydrometer is equal to the weight of liquid displaced. This weight is equal to the mass of liquid displaced multiplied by g , which in the case of the reference liquid is ρ ref Vg . Setting these equal, we have m g = ρ r e f V g {\displaystyle mg=\rho _{\mathrm {ref} }Vg}
or just ρ ref V = m {\displaystyle \rho _{\mathrm {ref} }V=m} . ( 1 )
Exactly the same equation applies when the hydrometer is floating in the liquid being measured, except that the new volume is V − A Δ x (see note above about the sign of Δ x ). Thus, ρ new ( V − A Δ x ) = m {\displaystyle \rho _{\mathrm {new} }(V-A\Delta x)=m} . ( 2 )
Combining ( 1 ) and ( 2 ) yields ρ ref V = ρ new ( V − A Δ x ) {\displaystyle \rho _{\mathrm {ref} }V=\rho _{\mathrm {new} }(V-A\Delta x)} . ( 3 )
But from ( 1 ) we have V = m / ρ ref . Substituting into ( 3 ) gives R D new / ref = ρ new ρ ref = m m − ρ ref A Δ x {\displaystyle RD_{\mathrm {new/ref} }={\frac {\rho _{\mathrm {new} }}{\rho _{\mathrm {ref} }}}={\frac {m}{m-\rho _{\mathrm {ref} }A\Delta x}}} . ( 4 )
This equation allows the relative density to be calculated from the change in displacement, the known density of the reference liquid, and the known properties of the hydrometer. If Δ x is small then, as a first-order approximation of the geometric series equation ( 4 ) can be written as: R D n e w / r e f ≈ 1 + A Δ x m ρ r e f . {\displaystyle RD_{\mathrm {new/ref} }\approx 1+{\frac {A\Delta x}{m}}\rho _{\mathrm {ref} }.}
This shows that, for small Δ x , changes in displacement are approximately proportional to changes in relative density.
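Putting equation ( 4 ) and its first-order approximation into a short Python sketch, with Δx taken as positive when the hydrometer floats higher in the new liquid (consistent with the submerged volume V − AΔx); the numeric inputs are illustrative only:

```python
def hydrometer_rd(m: float, area: float, dx: float, rho_ref: float) -> float:
    """Exact relative density from equation (4): m / (m - rho_ref * A * dx)."""
    return m / (m - rho_ref * area * dx)

def hydrometer_rd_linear(m: float, area: float, dx: float, rho_ref: float) -> float:
    """First-order approximation 1 + A*dx*rho_ref/m, valid for small dx."""
    return 1.0 + area * dx * rho_ref / m

# Illustrative numbers: 0.050 kg hydrometer, 1 cm^2 stalk, water as reference,
# and the float rising by 2 mm in the unknown liquid.
m, area, dx, rho_ref = 0.050, 1.0e-4, 2.0e-3, 1000.0
print(round(hydrometer_rd(m, area, dx, rho_ref), 4))         # ≈ 1.0040
print(round(hydrometer_rd_linear(m, area, dx, rho_ref), 4))  # ≈ 1.0040
```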
A pycnometer (from Ancient Greek : πυκνός , romanized : puknos , lit. 'dense'), also called pyknometer or specific gravity bottle , is a device used to determine the density of a liquid. A pycnometer is usually made of glass , with a close-fitting ground glass stopper with a capillary tube through it, so that air bubbles may escape from the apparatus. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury , using an analytical balance . [ citation needed ]
If the flask is weighed empty, full of water, and full of a liquid whose relative density is desired, the relative density of the liquid can easily be calculated. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. The powder is added to the pycnometer, which is then weighed, giving the weight of the powder sample. The pycnometer is then filled with a liquid of known density, in which the powder is completely insoluble. The weight of the displaced liquid can then be determined, and hence the relative density of the powder.
A gas pycnometer , the gas-based manifestation of a pycnometer, compares the change in pressure caused by a measured change in a closed volume containing a reference (usually a steel sphere of known volume) with the change in pressure caused by the sample under the same conditions. The difference in change of pressure represents the volume of the sample as compared to the reference sphere, and is usually used for solid particulates that may dissolve in the liquid medium of the pycnometer design described above, or for porous materials into which the liquid would not fully penetrate.
When a pycnometer is filled to a specific, but not necessarily accurately known volume, V and is placed upon a balance, it will exert a force F b = g ( m b − ρ a m b ρ b ) , {\displaystyle F_{\mathrm {b} }=g\left(m_{\mathrm {b} }-\rho _{\mathrm {a} }{\frac {m_{\mathrm {b} }}{\rho _{\mathrm {b} }}}\right),} where m b is the mass of the bottle and g the gravitational acceleration at the location at which the measurements are being made. ρ a is the density of the air at the ambient pressure and ρ b is the density of the material of which the bottle is made (usually glass) so that the second term is the mass of air displaced by the glass of the bottle whose weight, by Archimedes Principle must be subtracted. The bottle is filled with air but as that air displaces an equal amount of air the weight of that air is canceled by the weight of the air displaced. Now we fill the bottle with the reference fluid e.g. pure water. The force exerted on the pan of the balance becomes: F w = g ( m b − ρ a m b ρ b + V ρ w − V ρ a ) . {\displaystyle F_{\mathrm {w} }=g\left(m_{\mathrm {b} }-\rho _{\mathrm {a} }{\frac {m_{\mathrm {b} }}{\rho _{\mathrm {b} }}}+V\rho _{\mathrm {w} }-V\rho _{\mathrm {a} }\right).}
If we subtract the force measured on the empty bottle from this (or tare the balance before making the water measurement) we obtain. F w , n = g V ( ρ w − ρ a ) , {\displaystyle F_{\mathrm {w,n} }=gV(\rho _{\mathrm {w} }-\rho _{\mathrm {a} }),} where the subscript n indicated that this force is net of the force of the empty bottle. The bottle is now emptied, thoroughly dried and refilled with the sample. The force, net of the empty bottle, is now: F s , n = g V ( ρ s − ρ a ) , {\displaystyle F_{\mathrm {s,n} }=gV(\rho _{\mathrm {s} }-\rho _{\mathrm {a} }),} where ρ s is the density of the sample. The ratio of the sample and water forces is: S G A = g V ( ρ s − ρ a ) g V ( ρ w − ρ a ) = ρ s − ρ a ρ w − ρ a . {\displaystyle SG_{\mathrm {A} }={\frac {gV(\rho _{\mathrm {s} }-\rho _{\mathrm {a} })}{gV(\rho _{\mathrm {w} }-\rho _{\mathrm {a} })}}={\frac {\rho _{\mathrm {s} }-\rho _{\mathrm {a} }}{\rho _{\mathrm {w} }-\rho _{\mathrm {a} }}}.}
This is called the apparent relative density , denoted by subscript A, because it is what we would obtain if we took the ratio of net weighings in air from an analytical balance or used a hydrometer (the stem displaces air). Note that the result does not depend on the calibration of the balance. The only requirement on it is that it read linearly with force. Nor does RD A depend on the actual volume of the pycnometer.
Further manipulation and finally substitution of RD V , the true relative density (the subscript V is used because this is often referred to as the relative density in vacuo ), for ρ s / ρ w gives the relationship between apparent and true relative density:
R D A = ρ s ρ w − ρ a ρ w 1 − ρ a ρ w = R D V − ρ a ρ w 1 − ρ a ρ w . {\displaystyle RD_{\mathrm {A} }={{\rho _{\mathrm {s} } \over \rho _{\mathrm {w} }}-{\rho _{\mathrm {a} } \over \rho _{\mathrm {w} }} \over 1-{\rho _{\mathrm {a} } \over \rho _{\mathrm {w} }}}={RD_{\mathrm {V} }-{\rho _{\mathrm {a} } \over \rho _{\mathrm {w} }} \over 1-{\rho _{\mathrm {a} } \over \rho _{\mathrm {w} }}}.}
In the usual case we will have measured weights and want the true relative density. This is found from R D V = R D A − ρ a ρ w ( R D A − 1 ) . {\displaystyle RD_{\mathrm {V} }=RD_{\mathrm {A} }-{\rho _{\mathrm {a} } \over \rho _{\mathrm {w} }}(RD_{\mathrm {A} }-1).}
Since the density of dry air at 101.325 kPa at 20 °C is [ 10 ] 0.001205 g/cm 3 and that of water is 0.998203 g/cm 3 we see that the difference between true and apparent relative densities for a substance with relative density (20 °C/20 °C) of about 1.100 would be 0.000120. Where the relative density of the sample is close to that of water (for example dilute ethanol solutions) the correction is even smaller.
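A short Python sketch of the apparent-to-true conversion above, using the air and water densities quoted in the preceding paragraph:

```python
RHO_AIR = 0.001205    # g/cm^3, dry air at 20 °C and 101.325 kPa
RHO_WATER = 0.998203  # g/cm^3, water at 20 °C

def true_rd_from_apparent(rd_apparent: float,
                          rho_air: float = RHO_AIR,
                          rho_water: float = RHO_WATER) -> float:
    """RD_V = RD_A - (rho_air / rho_water) * (RD_A - 1)."""
    return rd_apparent - (rho_air / rho_water) * (rd_apparent - 1.0)

rd_a = 1.100
print(round(abs(true_rd_from_apparent(rd_a) - rd_a), 6))  # ≈ 0.00012, as quoted above
```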
The pycnometer is used in ISO standard: ISO 1183-1:2004, ISO 1014–1985 and ASTM standard: ASTM D854.
Types
Hydrostatic Pressure-based Instruments : This technology relies upon Pascal's Principle which states that the pressure difference between two points within a vertical column of fluid is dependent upon the vertical distance between the two points, the density of the fluid and the gravitational force. This technology is often used for tank gauging applications as a convenient means of liquid level and density measure.
Vibrating Element Transducers : This type of instrument requires a vibrating element to be placed in contact with the fluid of interest. The resonant frequency of the element is measured and is related to the density of the fluid by a characterization that is dependent upon the design of the element. In modern laboratories precise measurements of relative density are made using oscillating U-tube meters. These are capable of measurement to 5 to 6 places beyond the decimal point and are used in the brewing, distilling, pharmaceutical, petroleum and other industries. The instruments measure the actual mass of fluid contained in a fixed volume at temperatures between 0 and 80 °C but as they are microprocessor based can calculate apparent or true relative density and contain tables relating these to the strengths of common acids, sugar solutions, etc.
Ultrasonic Transducer : Ultrasonic waves are passed from a source, through the fluid of interest, and into a detector which measures the acoustic spectroscopy of the waves. Fluid properties such as density and viscosity can be inferred from the spectrum.
Radiation-based Gauge : Radiation is passed from a source, through the fluid of interest, and into a scintillation detector, or counter. As the fluid density increases, the detected radiation "counts" will decrease. The source is typically the radioactive isotope caesium-137 , with a half-life of about 30 years. A key advantage for this technology is that the instrument is not required to be in contact with the fluid—typically the source and detector are mounted on the outside of tanks or piping. [ 11 ]
Buoyant Force Transducer : the buoyancy force produced by a float in a homogeneous liquid is equal to the weight of the liquid that is displaced by the float. Since buoyancy force is linear with respect to the density of the liquid within which the float is submerged, the measure of the buoyancy force yields a measure of the density of the liquid. One commercially available unit claims the instrument is capable of measuring relative density with an accuracy of ± 0.005 RD units. The submersible probe head contains a mathematically characterized spring-float system. When the head is immersed vertically in the liquid, the float moves vertically and the position of the float controls the position of a permanent magnet whose displacement is sensed by a concentric array of Hall-effect linear displacement sensors. The output signals of the sensors are mixed in a dedicated electronics module that provides a single output voltage whose magnitude is a direct linear measure of the quantity to be measured. [ 12 ]
Relative density D R {\displaystyle D_{\mathrm {R} }} , a measure of the current void ratio in relation to the maximum and minimum void ratios, and the applied effective stress together control the mechanical behavior of cohesionless soil. Relative density is defined by D R = e m a x − e e m a x − e m i n × 100 % {\displaystyle D_{\mathrm {R} }={\frac {e_{\mathrm {max} }-e}{e_{\mathrm {max} }-e_{\mathrm {min} }}}\times 100\%} in which e m a x , e m i n {\displaystyle e_{\mathrm {max} },e_{\mathrm {min} }} , and e {\displaystyle e} are the maximum, minimum and actual void ratios.
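A one-function Python sketch of the void-ratio definition, with illustrative numbers:

```python
def soil_relative_density(e: float, e_max: float, e_min: float) -> float:
    """Relative density D_R of a cohesionless soil, in percent."""
    if e_max <= e_min:
        raise ValueError("e_max must exceed e_min")
    return (e_max - e) / (e_max - e_min) * 100.0

# Illustrative values for a sand: e_max = 0.90, e_min = 0.45, current e = 0.60
print(round(soil_relative_density(0.60, 0.90, 0.45), 1))  # ≈ 66.7 %
```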
Specific gravity (SG) is a useful concept but has several limitations. One major issue is its sensitivity to temperature, since the densities of both the substance being measured and the reference change with temperature, affecting accuracy. [ 13 ] It also assumes materials are incompressible, which is not true for gases or for some liquids under varying pressures. [ 14 ] It does not provide detailed information about a material's composition or properties beyond density. [ 15 ] Errors can also occur due to impurities, incomplete mixing, or air bubbles in liquids, which can skew results. [ 16 ]
(Samples may vary, and these figures are approximate.)
Substances with a relative density of 1 are neutrally buoyant, those with RD greater than one are denser than water, and so (ignoring surface tension effects) will sink in it, and those with an RD of less than one are less dense than water, and so will float.
Example: R D H 2 O = ρ M a t e r i a l ρ H 2 O = R D , {\displaystyle RD_{\mathrm {H_{2}O} }={\frac {\rho _{\mathrm {Material} }}{\rho _{\mathrm {H_{2}O} }}}=RD,}
Helium gas has a density of 0.164 g/L; [ 17 ] it is 0.139 times as dense as air , which has a density of 1.18 g/L. [ 17 ] | https://en.wikipedia.org/wiki/Relative_density |
In mathematics , specifically linear algebra and geometry , relative dimension is the dual notion to codimension .
In linear algebra, given a quotient map V → Q {\displaystyle V\to Q} , the difference dim V − dim Q is the relative dimension; this equals the dimension of the kernel .
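A small worked example in LaTeX form, under the linear-algebra definition above:

```latex
% Worked example for the linear-algebra definition of relative dimension.
\[
  q \colon \mathbb{R}^{5} \to \mathbb{R}^{3},
  \qquad q(x_{1},\dots,x_{5}) = (x_{1},x_{2},x_{3}),
\]
\[
  \text{relative dimension of } q
    = \dim \mathbb{R}^{5} - \dim \mathbb{R}^{3} = 2
    = \dim \ker q,
  \qquad \ker q = \{(0,0,0,x_{4},x_{5})\}.
\]
```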
In fiber bundles , the relative dimension of the map is the dimension of the fiber.
More abstractly, the codimension of a map is the dimension of the cokernel , while the relative dimension of a map is the dimension of the kernel .
These are dual in that the inclusion of a subspace V → W {\displaystyle V\to W} of codimension k dualizes to yield a quotient map W ∗ → V ∗ {\displaystyle W^{*}\to V^{*}} of relative dimension k , and conversely.
The additivity of codimension under intersection corresponds to the additivity of relative dimension in a fiber product . Just as codimension is mostly used for injective maps, relative dimension is mostly used for surjective maps. | https://en.wikipedia.org/wiki/Relative_dimension |
Relative Fat Mass ( RFM ) is a simple formula for the estimation of overweight or obesity in humans that requires only a calculation based on a ratio of height and waist measurements. [ 1 ]
High body fat is associated with increased risks of poor health and early mortality. [ 2 ] RFM is a simple anthropometric procedure that is claimed to be more convenient than body fat percentage and more accurate than the traditional body mass index (BMI).
The ratio of the patient's height and waist measurement, both in meters, is multiplied by 20 before being subtracted from a constant that adjusts for differences between the sexes (reported as 64 for men and 76 for women):
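A minimal sketch of that calculation follows. The constants 64 and 76 are the values commonly quoted from the original RFM publication; they are reproduced here from memory and should be verified against the source before any real use.

```python
# Sketch of the Relative Fat Mass calculation. The constants 64 (men) and
# 76 (women) are assumed from the published formula; verify before use.

def rfm(height_m: float, waist_m: float, is_female: bool) -> float:
    """Estimated body fat percentage: constant - 20 * (height / waist)."""
    constant = 76.0 if is_female else 64.0
    return constant - 20.0 * (height_m / waist_m)

print(round(rfm(1.80, 0.95, is_female=False), 1))  # ~26.1 for this example man
```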
Although generally validated on a database of some 12,000 adults, RFM has not yet been evaluated in longitudinal studies of large populations to identify normal or abnormal RFM in relation to obesity-related health problems.
This medical sign article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Relative_fat_mass |
The terms " relative fluorescence units " (RFU) and " RFU peak " refer to measurements in electrophoresis methods, such as for DNA analysis. A "relative fluorescence unit" is a unit of measurement used in analysis which employs fluorescence detection. [ 1 ] Fluorescence is detected using a charge-coupled device (CCD) array, when the labeled fragments, which are separated within a capillary by using electrophoresis, are energized by laser light and travel across the detection window. A computer program measures the results, determining the quantity or size of the fragments, at each data point, from the level of fluorescence intensity. [ 1 ] Samples which contain higher quantities of amplified DNA will have higher corresponding RFU values. [ 2 ] [ 3 ]
An "RFU peak" is a relative maximum point along a graph of the analyzed data. The data can be normalized to DNA input or additional normalizing genes. The RFU heights can range from 0 to several thousands.
The RFU measurements are used, for DNA profiling , in a real-time polymerase chain reaction (PCR). Two common methods for detection of products in real-time PCR are: (1) non-specific fluorescent dyes that intercalate with any double-stranded DNA, and (2) sequence-specific DNA probes consisting of oligonucleotides that are labeled with a fluorescent reporter which permits detection only after hybridization of the probe with its complementary DNA target. Frequently, real-time PCR is combined with reverse transcription to quantify messenger RNA and Non-coding RNA in cells or tissues.
The RFU peak height depends on the amount of DNA being analyzed. When the amount of DNA is very low, it can be difficult to separate a true low-level RFU peak from signal noise or other technical artifacts . [ 4 ] As a result, many forensic DNA laboratories have set minimum RFU peak-height levels in "scoring" the analysis of alleles . [ 4 ]
There are no firm industry-wide rules for establishing minimum RFU threshold values. [ 4 ] Each laboratory, in general, has established its own threshold levels as one aspect of its particular validation procedure. Many laboratories have established both lower and upper thresholds for data interpretation, as a window of minimum and maximum readings. [ 4 ]
Some threshold levels can be derived experimentally based on the equipment's known signal-to-noise ratios , or a threshold can be defined to match published data or the manufacturer specifications. [ 4 ] The company which sells the most widely used equipment for STR typing, Applied Biosystems, Inc. (ABI), has recommended a peak-height minimum of 150 RFU, advising how peaks below that level should be judged with caution. However, many forensic laboratories which have ABI systems have defined lower thresholds, often only 50 to 100 RFU, as determined by their own studies. [ 4 ]
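The following hypothetical sketch shows how such laboratory-defined thresholds might be applied in practice; the allele names, peak heights and threshold values are all invented for illustration and do not reflect any particular laboratory's validation.

```python
# Hypothetical illustration of applying a laboratory's analytical thresholds:
# peaks below the minimum RFU are treated as unscorable, and peaks above an
# upper threshold are flagged as possibly oversaturated. Values are invented.

MIN_RFU = 50      # lower analytical threshold chosen by the (hypothetical) lab
MAX_RFU = 5000    # upper threshold guarding against detector saturation

peaks = {"D3S1358:15": 42, "D3S1358:17": 310, "vWA:16": 5600, "TH01:9.3": 120}

for allele, height in peaks.items():
    if height < MIN_RFU:
        status = "below threshold - not scored"
    elif height > MAX_RFU:
        status = "above upper threshold - possible saturation"
    else:
        status = "scored"
    print(f"{allele}: {height} RFU -> {status}")
```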
Many different factors can affect a laboratory's choice of thresholds. [ 4 ] For instance, there might be regulatory guidelines in specific jurisdictions. Also, different kinds of instruments vary in sensitivity (such as slab gel instruments being less sensitive than capillary electrophoresis (CE) instruments). Individual instruments of a particular model type have also been known to differ in performance (e.g. differences among various ABI 310 units, all of the same model). Capillary electrophoresis instruments generally provide better resolution compared to gel-based systems, as well as having better sensitivity. In addition, some laboratories have set different threshold standards depending on which instruments in the lab are used for an analysis. [ 4 ]
Setting an upper maximum threshold is critical when analyzing DNA data within high quantity samples. [ 4 ] Samples with large amounts of amplified DNA will report high RFU levels that might oversaturate an instrument's sensitivity to measure the results. In such cases, an accurate measurement of the relative peak heights and/or areas might be unattainable. Oversaturation can be a problem when analyzing mixed samples. [ 4 ] | https://en.wikipedia.org/wiki/Relative_fluorescence_units |
Relative hour ( Hebrew singular: shaʿah zǝmanit / שעה זמנית ; plural: shaʿot - zǝmaniyot / שעות זמניות ), sometimes called halachic hour , temporal hour , seasonal hour and variable hour , is a term used in rabbinic Jewish law that assigns 12 hours to each day and 12 hours to each night, all throughout the year. A relative hour has no fixed length in absolute time, but changes with the length of daylight each day - depending on summer (when the days are long and the nights are short), and in winter (when the days are short and the nights are long). Even so, in all seasons a day is always divided into 12 hours, and a night is always divided into 12 hours, which invariably makes for a longer hour or a shorter hour. [ 1 ] [ 2 ] [ 3 ] At Mediterranean latitude, one hour can be about 45 minutes at the winter solstice , and 75 minutes at summer solstice . [ 4 ] All of the hours mentioned by the Sages in either the Mishnah or Talmud , or in other rabbinic writings, refer strictly to relative hours. [ 5 ]
Another feature of this ancient practice is that, unlike the standard modern 12-hour clock that assigns 12 o'clock pm for noon time, in the ancient Jewish tradition noon time was always the sixth hour of the day, whereas the first hour began with the break of dawn according to many Halachic authorities, [ 6 ] and with sunrise according to others. [ 7 ] Midnight (12:00 am local official clock time) was also the sixth hour of the night, which, depending on summer or winter, can come before or after 12:00 am local official clock time, whereas the first hour of the night always begins after sunset, when the first three stars appeared in the night sky.
During the Spring ( באחד בתקופת ניסן ) and Autumnal ( באחד בתקופת תשרי ) equinox (around 20 March and 23 September ), the length of a day and night are equal. [ 8 ] However, even during the summer solstice and winter solstice when the length of the day and the length of the night are at their greatest disparity, both day and night are always divided into 12 hours.
Temporal hours were common in many cultures. A division of day and night into twelve hours each was first recorded in Ancient Egypt . A similar division of day and night was later made in the Mediterranean basin from about Classical Greek Antiquity into twelve temporal hours each ( Ancient Greek : ὥραι καιρικαί , romanized : horai kairikai ). [ 9 ]
In Western culture they were adopted from the Roman calendar and remained in use through the European Medieval era. They had particular relevance in the fixed daily schedule of the monastic orders . This division of time allowed the work of the day, such as eating, praying, or working, to always be performed at the same (temporal) hour, regardless of season ( Prayer of the Hours ). [ 9 ]
The prevailing opinion is that each day begins at the rise of dawn (Heb. עלות השחר ), [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] which is about 72 minutes before sunrise, [ 15 ] yet, for practical reasons in some biblically related commandments, some scholars begin counting the hours of the day from sunrise (Heb. הנץ החמה ), [ 16 ] [ 17 ] [ 18 ] such as for the recital of Shema' which, as a first resort, must be recited when a person rises from his sleep in the morning, a time that is traditionally linked with sunrise, and continuing thereafter until the beginning of the 4th hour of the day, [ 19 ] [ 20 ] [ 21 ] or, for example, when burning leaven on the 14th day of the lunar month Nisan , which must be burnt in the 6th hour of the day when counting from sunrise. At this time, the sun is nearly at its apex. [ 22 ]
The commencement of nightfall is not as divisive in Jewish law:
Rabbi Pinchas said in the name of Rabbi Abba bar Pappa : One star is certainly day; two [stars] is a doubtful case; three [stars] is certainly night. [ 23 ]
The precise, intermediate time between day and night, or what is termed in Hebrew bayn ha-sh'meshot , has been discussed by Talmudic scholars in great detail. Some describe the time as when the evening sky turns a silverish-grey color. The same time is described by Moses Alashkar as "from the moment that the entire circle of the sun sets [below the horizon] until there appear [in the sky] three medium-sized stars." [ 24 ] [ 25 ] The duration of this time is generally held to be about 12 minutes, but with respect to the Sabbath day it is given a more stringent application, namely, 13.5 minutes after sunset. [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] Rabbeinu Tam , disputing, held the time of bayn ha-sh'meshot to be 58.5 minutes. [ 31 ] A third opinion is that of Maimonides , who puts the time of bayn ha-sh'meshot at the time it takes to walk 3 ⁄ 4 of a biblical mile, estimated at about 18 minutes on the understanding that, according to Maimonides, a person traverses a biblical mile in 24 minutes. [ 32 ] This was the custom of the cities of Yemen . [ 32 ]
In old times, the hour was determined by observation of the position of the sun, [ 33 ] or when the first three stars appeared in the night sky. During the first six hours of the day, the sun is seen in the eastern sky. At the sixth hour , the sun is always at its zenith in the sky, meaning, it is either directly overhead, or parallel (depending on the hemisphere ). [ 34 ] For those living in the Northern Hemisphere , the sun at noon will appear overhead slightly towards the south, whereas for those living in the Southern Hemisphere , the sun at noon will appear overhead slightly towards the north (an exception being in the tropics , where the sun can sometimes be directly overhead). From the 6th and a half hour to the 12th hour, the sun inclines towards the west, until it sets. The conclusion of a day at the end of twilight may vary slightly, by a few minutes, from place to place, depending on the elevation and the terrain. [ 35 ] Typically, nightfall comes more quickly in the low-lying valleys than it does on a high mountaintop. [ 36 ]
There are two major opinions on how to calculate these times: one reckons the hours of the day from dawn until the emergence of the stars, and the other from sunrise until sunset.
In the modern age of precise astronomical calculation, it is possible to determine the length of the ever-changing hour by simple arithmetic. To determine the length of each relative hour, one needs to know only two variables: (a) the precise time of sunrise, and (b) the precise time of sunset. According to the first opinion, the day begins approximately 72 minutes before sunrise and ends approximately 72 minutes after sunset (or, in variant understandings of this opinion, approximately 13½ or 18 minutes after sunset); according to the second opinion, the day begins at sunrise and ends at sunset. By collecting the total number of minutes in any given day and dividing that total by 12, the quotient is the number of minutes in each hour. In summer months, when the days are long, each daytime hour can be quite long while each nighttime hour can be quite short, both depending on one's latitude. According to those opinions that compute the 72 minutes as the time for the sun to reach 16.1 degrees below the horizon, this period lengthens the further one moves from the equator , so that in northern latitudes it can reach 2 hours or more.
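The arithmetic described above can be sketched as follows; the sunrise and sunset times used are purely illustrative, and the sketch follows the second opinion (sunrise to sunset), since the first opinion would simply substitute dawn and nightfall times.

```python
# Sketch: length of one relative (temporal) daytime hour, given clock times
# for the start and end of the day. Times below are invented examples.
from datetime import datetime

def relative_hour_minutes(day_start: str, day_end: str) -> float:
    """Minutes per temporal hour: total daytime minutes divided by 12."""
    fmt = "%H:%M"
    start = datetime.strptime(day_start, fmt)
    end = datetime.strptime(day_end, fmt)
    total_minutes = (end - start).total_seconds() / 60.0
    return total_minutes / 12.0

# Example (illustrative times): sunrise 05:50, sunset 19:38
print(round(relative_hour_minutes("05:50", "19:38"), 1))  # 69.0 minutes per hour
```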
In Jewish Halacha , the practical bearing of this teaching is reflected in many halachic practices. For example, according to Jewish law, the morning recitation of Kriyat Shema must be made between slightly before sunrise and the end of the third hour of the day, a time that actually fluctuates on the standard 12-hour clock, depending on the time of year. [ 63 ] Its application is also used in determining the time of the Morning Prayer , which must be recited between sunrise until the end of the fourth hour , [ 64 ] but post facto can be said until noon time, [ 65 ] and which times will vary if one were to rely solely on the dials of the standard 12-hour clock , depending on the seasons.
On the eve of Passover , chametz can only be eaten until the end of the fourth-hour of the day, and must be disposed of by the end of the fifth hour. [ 66 ]
In Jewish tradition, prayers were usually offered at the time of the daily whole-burnt offerings . [ 67 ] The historian, Josephus , writing about the daily whole-burnt offering, says that it was offered twice each day, in the morning and about the ninth hour . [ 68 ] The Mishnah , a compendium of Jewish oral laws compiled in the late 2nd-century CE, says of the morning daily offering that it was offered in the fourth hour , [ 69 ] but says of the late afternoon offering: "The daily whole-burnt offering was slaughtered at a half after the eighth hour , and offered up at a half after the ninth hour ." [ 70 ] Elsewhere, when describing the slaughter of the Passover offerings on the eve of Passover (the 14th day of the lunar month Nisan ), Josephus writes: "...their feast which is called the Passover, when they slay their sacrifices, from the ninth hour to the eleventh , etc." (roughly corresponding to 3 o'clock pm to 5 o'clock pm). [ 71 ] Conversely, the Mishnah states that on the eve of Passover the daily whole-burnt offering was slaughtered at a half past the seventh hour , and offered up at a half past the eighth hour . [ 70 ] | https://en.wikipedia.org/wiki/Relative_hour |
In the context of the Microsoft Windows NT line of computer operating systems , the relative identifier (RID) is a variable length number that is assigned to objects at creation and becomes part of the object's Security Identifier (SID) that uniquely identifies an account or group within a domain. The Relative ID Master allocates security RIDs to Domain Controllers to assign to new Active Directory security principals (users, groups or computer objects). It also manages objects moving between domains.
The Relative ID Master is one role of the Flexible single master operation for assigning RID.
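As a purely illustrative sketch, a RID forms the final sub-authority of an account's SID, which is conventionally written in the form S-1-5-21-&lt;domain sub-authorities&gt;-&lt;RID&gt;. The domain identifier and RID value below are invented for the example.

```python
# Illustration only: composing and decomposing a SID string of the common form
# S-1-5-21-<domain sub-authorities>-<RID>. The domain SID below is invented.

DOMAIN_SID = "S-1-5-21-3623811015-3361044348-30300820"

def account_sid(domain_sid: str, rid: int) -> str:
    """Append a relative identifier to a domain SID to form an account SID."""
    return f"{domain_sid}-{rid}"

def rid_of(sid: str) -> int:
    """Recover the RID, i.e. the final sub-authority of the SID string."""
    return int(sid.rsplit("-", 1)[1])

sid = account_sid(DOMAIN_SID, 1104)
print(sid)          # S-1-5-21-3623811015-3361044348-30300820-1104
print(rid_of(sid))  # 1104
```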
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Relative_identifier |
In mathematics , the relative interior of a set is a refinement of the concept of the interior , which is often more useful when dealing with low-dimensional sets placed in higher-dimensional spaces.
Formally, the relative interior of a set S {\displaystyle S} (denoted relint ( S ) {\displaystyle \operatorname {relint} (S)} ) is defined as its interior within the affine hull of S . {\displaystyle S.} [ 1 ] In other words, relint ( S ) := { x ∈ S : there exists ϵ > 0 such that B ϵ ( x ) ∩ aff ( S ) ⊆ S } , {\displaystyle \operatorname {relint} (S):=\{x\in S:{\text{ there exists }}\epsilon >0{\text{ such that }}B_{\epsilon }(x)\cap \operatorname {aff} (S)\subseteq S\},} where aff ( S ) {\displaystyle \operatorname {aff} (S)} is the affine hull of S , {\displaystyle S,} and B ϵ ( x ) {\displaystyle B_{\epsilon }(x)} is a ball of radius ϵ {\displaystyle \epsilon } centered on x {\displaystyle x} . Any metric can be used for the construction of the ball; all metrics define the same set as the relative interior.
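A small worked example (added here for illustration, not taken from the article) shows why the relative interior is the more useful notion for low-dimensional sets:

```latex
% Illustrative example: the closed segment
%   S = \{(x,0) : 0 \le x \le 1\} \subset \mathbb{R}^2
% has empty interior in \mathbb{R}^2, since no planar ball fits inside it,
% but its affine hull is the x-axis, \operatorname{aff}(S) = \{(x,0) : x \in \mathbb{R}\}.
% Intersecting small balls with this line gives open subintervals, so
\[
  \operatorname{int}(S) = \varnothing,
  \qquad
  \operatorname{relint}(S) = \{(x,0) : 0 < x < 1\}.
\]
```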
A set is relatively open iff it is equal to its relative interior. Note that when aff ( S ) {\displaystyle \operatorname {aff} (S)} is a closed subspace of the full vector space (always the case when the full vector space is finite dimensional) then being relatively closed is equivalent to being closed.
For any convex set C ⊆ R n {\displaystyle C\subseteq \mathbb {R} ^{n}} the relative interior is equivalently defined as [ 2 ] [ 3 ] relint ( C ) := { x ∈ C : for all y ∈ C , there exists some λ > 1 such that λ x + ( 1 − λ ) y ∈ C } = { x ∈ C : for all y ≠ x ∈ C , there exists some z ∈ C such that x ∈ ( y , z ) } . {\displaystyle {\begin{aligned}\operatorname {relint} (C)&:=\{x\in C:{\text{ for all }}y\in C,{\text{ there exists some }}\lambda >1{\text{ such that }}\lambda x+(1-\lambda )y\in C\}\\&=\{x\in C:{\text{ for all }}y\neq x\in C,{\text{ there exists some }}z\in C{\text{ such that }}x\in (y,z)\}.\end{aligned}}} where x ∈ ( y , z ) {\displaystyle x\in (y,z)} means that there exists some 0 < λ < 1 {\displaystyle 0<\lambda <1} such that x = λ z + ( 1 − λ ) y {\displaystyle x=\lambda z+(1-\lambda )y} .
Theorem — If A ⊂ R n {\displaystyle A\subset \mathbb {R} ^{n}} is nonempty and convex, then its relative interior r e l i n t ( A ) {\displaystyle \mathrm {relint} (A)} is the union of a nested sequence of nonempty compact convex subsets K 1 ⊂ K 2 ⊂ K 3 ⊂ ⋯ ⊂ r e l i n t ( A ) {\displaystyle K_{1}\subset K_{2}\subset K_{3}\subset \cdots \subset \mathrm {relint} (A)} .
Since we can always pass to the affine span of A {\displaystyle A} , assume WLOG that the relative interior has dimension n {\displaystyle n} . Now let K j ≡ [ − j , j ] n ∩ { x ∈ int ( A ) : d i s t ( x , ( int ( A ) ) c ) ≥ 1 j } {\displaystyle K_{j}\equiv [-j,j]^{n}\cap \left\{x\in {\text{int}}(A):\mathrm {dist} (x,({\text{int}}(A))^{c})\geq {\frac {1}{j}}\right\}} .
Theorem [ 4 ] — Here "+" denotes Minkowski sum .
Theorem [ 5 ] — Here C o n e {\displaystyle \mathrm {Cone} } denotes positive cone . That is, C o n e ( S ) = { r x : x ∈ S , r > 0 } {\displaystyle \mathrm {Cone} (S)=\{rx:x\in S,r>0\}} . | https://en.wikipedia.org/wiki/Relative_interior |
Relative locality is a proposed physical phenomenon in which different observers would disagree on whether two space-time events are coincident. [ 1 ] This is in contrast to special relativity and general relativity in which different observers may disagree on whether two distant events occur at the same time but if an observer infers that two events are at the same spacetime position then all observers will agree.
When a light signal exchange procedure is used to infer spacetime coordinates of distant events from the travel time of photons, information about the photon's energy is discarded with the assumption that the frequency of light doesn't matter. It is also usually assumed that distant observers construct the same spacetime. This assumption of absolute locality implies that momentum space is flat. However research into quantum gravity has indicated that momentum space might be curved [ 2 ] which would imply relative locality. [ 3 ] To regain an absolute arena for invariance one would combine spacetime and momentum space into a phase space. | https://en.wikipedia.org/wiki/Relative_locality |
In multiphase flow in porous media , the relative permeability of a phase is a dimensionless measure of the effective permeability of that phase. It is the ratio of the effective permeability of that phase to the absolute permeability. It can be viewed as an adaptation of Darcy's law to multiphase flow.
For two-phase flow in porous media given steady-state conditions, we can write q i = − k i μ i ∇ P i {\displaystyle q_{i}=-{\frac {k_{i}}{\mu _{i}}}\nabla P_{i}}
where q i {\displaystyle q_{i}} is the flux , ∇ P i {\displaystyle \nabla P_{i}} is the pressure drop , μ i {\displaystyle \mu _{i}} is the viscosity . The subscript i {\displaystyle i} indicates that the parameters are for phase i {\displaystyle i} .
k i {\displaystyle k_{i}} is here the phase permeability (i.e., the effective permeability of phase i {\displaystyle i} ), as observed through the equation above.
Relative permeability , k r i {\displaystyle k_{\mathit {ri}}} , for phase i {\displaystyle i} is then defined from k i = k r i k {\displaystyle k_{i}=k_{\mathit {ri}}k} , as k r i = k i / k {\displaystyle k_{\mathit {ri}}=k_{i}/k}
where k {\displaystyle k} is the permeability of the porous medium in single-phase flow, i.e., the absolute permeability . Relative permeability must be between zero and one.
In applications, relative permeability is often represented as a function of water saturation ; however, owing to capillary hysteresis one often resorts to a function or curve measured under drainage and another measured under imbibition .
Under this approach, the flow of each phase is inhibited by the presence of the other phases. Thus the sum of relative permeabilities over all phases is less than 1. However, apparent relative permeabilities larger than 1 have been obtained since the Darcean approach disregards the viscous coupling effects derived from momentum transfer between the phases (see assumptions below). This coupling could enhance the flow instead of inhibit it. This has been observed in heavy oil petroleum reservoirs when the gas phase flows as bubbles or patches (disconnected). [ 1 ]
The above form for Darcy's law is sometimes also called Darcy's extended law, formulated for horizontal, one-dimensional, immiscible multiphase flow in homogeneous and isotropic porous media. The interactions between the fluids are neglected, so this model assumes that the solid porous media and the other fluids form a new porous matrix through which a phase can flow, implying that the fluid-fluid interfaces remain static in steady-state flow, which is not true, but this approximation has proven useful anyway.
Each of the phase saturations must be larger than the irreducible saturation, and each phase is assumed continuous within the porous medium.
Based on data from special core analysis laboratory (SCAL) experiments, [ 2 ] simplified models of relative permeability as a function of saturation (e.g. water saturation ) can be constructed. This article will focus on an oil-water system.
The water saturation S w {\displaystyle S_{\mathit {w}}} is the fraction of the pore volume that is filled with water, and similarly for the oil saturation S o {\displaystyle S_{\mathit {o}}} . Thus, saturations are themselves scaled properties or variables. This gives the constraint S w + S o = 1 {\displaystyle S_{\mathit {w}}+S_{\mathit {o}}=1}
The model functions or correlations for relative permeabilities in an oil-water system are therefore usually written as functions of only water saturation, and this makes it natural to select water saturation as the horizontal axis in graphical presentations. Let S w i r {\displaystyle S_{\mathit {wir}}} (also denoted S w c {\displaystyle S_{\mathit {wc}}} and sometimes S w r {\displaystyle S_{\mathit {wr}}} ) be the irreducible (or minimal or connate) water saturation, and let S o r w {\displaystyle S_{\mathit {orw}}} be the residual (minimal) oil saturation after water flooding (imbibition). The flowing water saturation window in a water invasion / injection / imbibition process is bounded by a minimum value S w i r {\displaystyle S_{\mathit {wir}}} and a maximum value S w o r = 1 − S o r w {\displaystyle S_{\mathit {wor}}=1-S_{\mathit {orw}}} . In mathematical terms the flowing saturation window is written as S w i r ≤ S w ≤ S w o r = 1 − S o r w {\displaystyle S_{\mathit {wir}}\leq S_{\mathit {w}}\leq S_{\mathit {wor}}=1-S_{\mathit {orw}}}
By scaling the water saturation to the flowing saturation window, we get a (new or another) normalized water saturation value S w n = S w − S w i r 1 − S w i r − S o r w {\displaystyle S_{\mathit {wn}}={\frac {S_{\mathit {w}}-S_{\mathit {wir}}}{1-S_{\mathit {wir}}-S_{\mathit {orw}}}}}
and a normalized oil saturation value S o n = S o − S o r w 1 − S w i r − S o r w = 1 − S w n {\displaystyle S_{\mathit {on}}={\frac {S_{\mathit {o}}-S_{\mathit {orw}}}{1-S_{\mathit {wir}}-S_{\mathit {orw}}}}=1-S_{\mathit {wn}}}
Let K r o w {\displaystyle K_{\mathit {row}}} be oil relative permeability, and let K r w {\displaystyle K_{\mathit {rw}}} be water relative permeability. There are two ways of scaling phase permeability (i.e. effective permeability of the phase). If we scale phase permeability w.r.t. absolute water permeability (i.e. S w = 1 {\displaystyle S_{\mathit {w}}=1} ), we get an endpoint parameter for both oil and water relative permeability. If we scale phase permeability w.r.t. oil permeability with irreducible water saturation present, K r o w {\displaystyle K_{\mathit {row}}} endpoint is one, and we are left with only the K r w {\displaystyle K_{\mathit {rw}}} endpoint parameter. In order to satisfy both options in the mathematical model, it is common to use two endpoint symbols in the model for two-phase relative permeability.
The endpoints / endpoint parameters of oil and water relative permeabilities are K r o t = K r o w ( S w i r ) {\displaystyle K_{\mathit {rot}}=K_{\mathit {row}}(S_{\mathit {wir}})} and K r w r = K r w ( S w o r ) {\displaystyle K_{\mathit {rwr}}=K_{\mathit {rw}}(S_{\mathit {wor}})}
These symbols have their merits and limits. The symbol K r o t {\displaystyle K_{\mathit {rot}}} emphasize that it represents the top point of K r o w {\displaystyle K_{\mathit {row}}} . It occurs at irreducible water saturation, and it is the largest value of K r o w {\displaystyle K_{\mathit {row}}} that can occur for initial water saturation. The competing endpoint symbol K r o r {\displaystyle K_{\mathit {ror}}} occurs in imbibition flow in oil-gas systems. If the permeability basis is oil with irreducible water present, then K r o t = 1 {\displaystyle K_{\mathit {rot}}=1} . The symbol K r w r {\displaystyle K_{\mathit {rwr}}} emphasizes that it is occurring at the residual oil saturation. An alternative symbol to K r w r {\displaystyle K_{\mathit {rwr}}} is K r w o {\displaystyle K_{\mathit {rw}}^{o}} which emphasizes that the reference permeability is oil permeability with irreducible water S w i r {\displaystyle S_{\mathit {wir}}} present.
The oil and water relative permeability models are then written as K r o w = K r o t K r o w n ( S w n ) {\displaystyle K_{\mathit {row}}=K_{\mathit {rot}}\,K_{\mathit {rown}}(S_{\mathit {wn}})} and K r w = K r w r K r w n ( S w n ) {\displaystyle K_{\mathit {rw}}=K_{\mathit {rwr}}\,K_{\mathit {rwn}}(S_{\mathit {wn}})}
The functions K r o w n {\displaystyle K_{\mathit {rown}}} and K r w n {\displaystyle K_{\mathit {rwn}}} are called normalised relative permeabilities or shape functions for oil and water, respectively. The endpoint parameters K r o t {\displaystyle K_{\mathit {rot}}} and K r w r {\displaystyle K_{\mathit {rwr}}} (which is a simplification of K r w o r {\displaystyle K_{\mathit {rwor}}} ) are physical properties that are obtained either before or together with the optimization of shape parameters present in the shape functions.
There are often many symbols in articles that discuss relative permeability models and modelling. A number of busy core analysts, reservoir engineers and scientists often skip using tedious and time-consuming subscripts, and write e.g. Krow instead of K r o w {\displaystyle K_{\mathit {row}}} or k r o w {\displaystyle k_{\mathit {row}}} or krow or oil relative permeability. A variety of symbols are therefore to be expected, and accepted as long as they are explained or defined.
The effects that slip or no-slip boundary conditions in pore flow have on endpoint parameters, are discussed by Berg et alios. [ 3 ] [ 4 ]
An often used approximation of relative permeability is the Corey correlation [ 5 ] [ 6 ] [ 7 ] which is a power law in saturation. The Corey correlations of the relative permeability for oil and water are then K r o w = K r o t ( 1 − S w n ) N o {\displaystyle K_{\mathit {row}}=K_{\mathit {rot}}\left(1-S_{\mathit {wn}}\right)^{N_{\mathit {o}}}} and K r w = K r w r S w n N w {\displaystyle K_{\mathit {rw}}=K_{\mathit {rwr}}S_{\mathit {wn}}^{N_{\mathit {w}}}}
If the permeability basis is normal oil with irreducible water present, then K r o t = 1 {\displaystyle K_{\mathit {rot}}=1} .
The empirical parameters N o {\displaystyle N_{\mathit {o}}} and N w {\displaystyle N_{\mathit {w}}} are called curve shape parameters or simply shape parameters, and they can be obtained from measured data either by analytical interpretation of measured data, or by optimization using a core flow numerical simulator to match the experiment (often called history matching). N o = N w = 2 {\displaystyle N_{\mathit {o}}=N_{\mathit {w}}=2} is sometimes appropriate. The physical properties K r o t {\displaystyle K_{\mathit {rot}}} and K r w r {\displaystyle K_{\mathit {rwr}}} are obtained either before or together with the optimizing of N o {\displaystyle N_{\mathit {o}}} and N w {\displaystyle N_{\mathit {w}}} .
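A minimal sketch of the Corey-type curves described above follows; the endpoint values, irreducible/residual saturations and shape parameters are invented for illustration and would in practice come from SCAL data or history matching.

```python
# Sketch of Corey-type relative permeability curves for an oil-water system.
# The power-law form and endpoint scaling follow the description above;
# the numerical parameters are invented for illustration.

def normalized_sw(sw: float, swir: float, sorw: float) -> float:
    """Scale water saturation from the flowing window [Swir, 1 - Sorw] to [0, 1]."""
    return (sw - swir) / (1.0 - swir - sorw)

def krw_corey(sw, swir=0.15, sorw=0.25, krwr=0.35, nw=2.0):
    """Water relative permeability: Krwr * Swn^Nw."""
    swn = normalized_sw(sw, swir, sorw)
    return krwr * swn ** nw

def krow_corey(sw, swir=0.15, sorw=0.25, krot=0.9, no=2.0):
    """Oil relative permeability: Krot * (1 - Swn)^No."""
    swn = normalized_sw(sw, swir, sorw)
    return krot * (1.0 - swn) ** no

for sw in (0.15, 0.30, 0.45, 0.60, 0.75):
    print(f"Sw={sw:.2f}  krw={krw_corey(sw):.3f}  krow={krow_corey(sw):.3f}")
```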
In case of gas-water system or gas-oil system there are Corey correlations similar to the oil-water relative permeabilities correlations shown above.
The Corey-correlation or Corey model has only one degree of freedom for the shape of each relative permeability curve, the shape parameter N.
The LET-correlation [ 8 ] [ 9 ] adds more degrees of freedom in order to accommodate the shape of relative permeability curves in SCAL experiments [ 2 ] and in 3D reservoir models that are adjusted to match historic production. These adjustments frequently includes relative permeability curves and endpoints.
The LET-type approximation is described by 3 parameters L, E, T. The correlation for water and oil relative permeability with water injection is thus K r w = K r w r S w n L w S w n L w + E w ( 1 − S w n ) T w {\displaystyle K_{\mathit {rw}}={\frac {K_{\mathit {rwr}}S_{\mathit {wn}}^{L_{w}}}{S_{\mathit {wn}}^{L_{w}}+E_{w}\left(1-S_{\mathit {wn}}\right)^{T_{w}}}}} and K r o w = K r o t ( 1 − S w n ) L o ( 1 − S w n ) L o + E o S w n T o {\displaystyle K_{\mathit {row}}={\frac {K_{\mathit {rot}}\left(1-S_{\mathit {wn}}\right)^{L_{o}}}{\left(1-S_{\mathit {wn}}\right)^{L_{o}}+E_{o}S_{\mathit {wn}}^{T_{o}}}}} written using the same S w {\displaystyle S_{w}} normalization as for Corey.
Only S w i r {\displaystyle S_{\mathit {wir}}} , S o r w {\displaystyle S_{\mathit {orw}}} , K r o t {\displaystyle K_{\mathit {rot}}} , and K r w r {\displaystyle K_{\mathit {rwr}}} have direct physical meaning, while the parameters L , E and T are empirical. The parameter L describes the lower part of the curve, and by similarity and experience the L -values are comparable to the appropriate Corey parameter. The parameter T describes the upper part (or the top part) of the curve in a similar way that the L -parameter describes the lower part of the curve. The parameter E describes the position of the slope (or the elevation) of the curve. A value of one is a neutral value, and the position of the slope is governed by the L - and T -parameters. Increasing the value of the E -parameter pushes the slope towards the high end of the curve. Decreasing the value of the E -parameter pushes the slope towards the lower end of the curve. Experience using the LET correlation indicates the following reasonable ranges for the parameters L , E , and T : L ≥ 0.1, E > 0 and T ≥ 0.1.
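The following sketch implements the commonly quoted LET functional form on the normalized saturation; the parameter values are invented, and the exact notation of the published correlation should be checked against the original LET references before use.

```python
# Sketch of LET-type relative permeability curves (water injection), using the
# commonly quoted form Krw = Krwr * Swn^L / (Swn^L + E*(1 - Swn)^T) and the
# mirrored expression for oil. Parameter values below are invented.

def krw_let(swn, krwr=0.35, L=2.5, E=1.5, T=1.2):
    """Water relative permeability on the normalized saturation Swn in [0, 1]."""
    return krwr * swn**L / (swn**L + E * (1.0 - swn) ** T)

def krow_let(swn, krot=0.9, L=2.0, E=2.0, T=1.5):
    """Oil relative permeability on the normalized saturation Swn in [0, 1]."""
    return krot * (1.0 - swn) ** L / ((1.0 - swn) ** L + E * swn**T)

for swn in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"Swn={swn:.2f}  krw={krw_let(swn):.3f}  krow={krow_let(swn):.3f}")
```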
In case of gas-water system or gas-oil system there are LET correlations similar to the oil-water relative permeabilities correlations shown above.
After Morris Muskat et alios established the concept of relative permeability in the late 1930s, the number of correlations, i.e. models, for relative permeability has steadily increased. This creates a need for evaluation of the most common correlations at the current time. Two of the latest (per 2019) and most thorough evaluations are done by Moghadasi et alios [ 10 ] and by Sakhaei et alios. [ 11 ] Moghadasi et alios [ 10 ] evaluated Corey, Chierici and LET correlations for oil/water relative permeability using a sophisticated method that takes into account the number of uncertain model parameters. They found that LET, with the largest number (three) of uncertain parameters, was clearly the best one for both oil and water relative permeability. Sakhaei et alios [ 11 ] evaluated 10 common and widely used relative permeability correlations for gas/oil and gas/condensate systems, and found that LET showed best agreement with experimental values for both gas and oil/condensate relative permeability.
The relative permittivity (in older texts, dielectric constant ) is the permittivity of a material expressed as a ratio with the electric permittivity of a vacuum . A dielectric is an insulating material, and the dielectric constant of an insulator measures the ability of the insulator to store electric energy in an electrical field.
Permittivity is a material's property that affects the Coulomb force between two point charges in the material. Relative permittivity is the factor by which the electric field between the charges is decreased relative to vacuum.
Likewise, relative permittivity is the ratio of the capacitance of a capacitor using that material as a dielectric , compared with a similar capacitor that has vacuum as its dielectric. Relative permittivity is also commonly known as the dielectric constant, a term still used but deprecated by standards organizations in engineering [ 15 ] as well as in chemistry. [ 16 ]
Relative permittivity is typically denoted as ε r ( ω ) (sometimes κ , lowercase kappa ) and is defined as ε r ( ω ) = ε ( ω ) ε 0 {\displaystyle \varepsilon _{\mathrm {r} }(\omega )={\frac {\varepsilon (\omega )}{\varepsilon _{0}}}}
where ε ( ω ) is the complex frequency-dependent permittivity of the material, and ε 0 is the vacuum permittivity .
Relative permittivity is a dimensionless number that is in general complex-valued ; its real and imaginary parts are denoted as ε ′ r ( ω ) and ε ″ r ( ω ) , respectively. [ 17 ]
The relative permittivity of a medium is related to its electric susceptibility , χ e , as ε r ( ω ) = 1 + χ e .
In anisotropic media (such as non cubic crystals) the relative permittivity is a second rank tensor .
The relative permittivity of a material for a frequency of zero is known as its static relative permittivity .
The historical term for the relative permittivity is dielectric constant . It is still commonly used, but has been deprecated by standards organizations, [ 15 ] [ 16 ] because of its ambiguity, as some older reports used it for the absolute permittivity ε . [ 15 ] [ 18 ] [ 19 ] The permittivity may be quoted either as a static property or as a frequency-dependent variant, in which case it is also known as the dielectric function . It has also been used to refer to only the real component ε ′ r of the complex-valued relative permittivity. [ citation needed ]
In the causal theory of waves, permittivity is a complex quantity. The imaginary part corresponds to a phase shift of the polarization P relative to E and leads to the attenuation of electromagnetic waves passing through the medium. By definition, the linear relative permittivity of vacuum is equal to 1, [ 19 ] that is ε = ε 0 , although there are theoretical nonlinear quantum effects in vacuum that become non-negligible at high field strengths. [ 20 ]
The relative low frequency permittivity of ice is ~96 at −10.8 °C, falling to 3.15 at high frequency, which is independent of temperature. [ 21 ] It remains in the range 3.12–3.19 for frequencies between about 1 MHz and the far infrared region. [ 22 ]
The relative static permittivity, ε r , can be measured for static electric fields as follows: first the capacitance of a test capacitor , C 0 , is measured with vacuum between its plates. Then, using the same capacitor and distance between its plates, the capacitance C with a dielectric between the plates is measured. The relative permittivity can then be calculated as ε r = C C 0 {\displaystyle \varepsilon _{\mathrm {r} }={\frac {C}{C_{0}}}}
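A minimal sketch of that static measurement follows; the geometry and the factor 2.1 (roughly that of a common polymer film) are illustrative assumptions, not measured values.

```python
# Sketch of the static measurement described above: relative permittivity as
# the ratio of capacitance with the dielectric to capacitance with vacuum.
# The numbers correspond to an illustrative parallel-plate capacitor.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def relative_permittivity(c_with_dielectric: float, c_vacuum: float) -> float:
    """epsilon_r = C / C0."""
    return c_with_dielectric / c_vacuum

def parallel_plate_c0(area_m2: float, gap_m: float) -> float:
    """Vacuum capacitance of a parallel-plate capacitor, C0 = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

c0 = parallel_plate_c0(area_m2=1e-2, gap_m=1e-3)   # ~88.5 pF
c = 2.1 * c0                                       # e.g. measured with a polymer film
print(round(relative_permittivity(c, c0), 2))      # 2.1
```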
For time-variant electromagnetic fields , this quantity becomes frequency -dependent. An indirect technique to calculate ε r is conversion of radio frequency S-parameter measurement results. A description of frequently used S-parameter conversions for determination of the frequency-dependent ε r of dielectrics can be found in this bibliographic source. [ 23 ] Alternatively, resonance based effects may be employed at fixed frequencies. [ 24 ]
The relative permittivity is an essential piece of information when designing capacitors , and in other circumstances where a material might be expected to introduce capacitance into a circuit. If a material with a high relative permittivity is placed in an electric field , the magnitude of that field will be measurably reduced within the volume of the dielectric. This fact is commonly used to increase the capacitance of a particular capacitor design. The layers beneath etched conductors in printed circuit boards ( PCBs ) also act as dielectrics.
Dielectrics are used in radio frequency (RF) transmission lines. In a coaxial cable, polyethylene can be used between the center conductor and outside shield. It can also be placed inside waveguides to form filters . Optical fibers are examples of dielectric waveguides . They consist of dielectric materials that are purposely doped with impurities so as to control the precise value of ε r within the cross-section. This controls the refractive index of the material and therefore also the optical modes of transmission. However, in these cases it is technically the relative permittivity that matters, as they are not operated in the electrostatic limit.
The relative permittivity of air changes with temperature, humidity, and barometric pressure. [ 25 ] Sensors can be constructed to detect changes in capacitance caused by changes in the relative permittivity. Most of this change is due to effects of temperature and humidity as the barometric pressure is fairly stable. Using the capacitance change, along with the measured temperature, the relative humidity can be obtained using engineering formulas.
The relative static permittivity of a solvent is a relative measure of its chemical polarity . For example, water is very polar, and has a relative static permittivity of 80.10 at 20 °C while n - hexane is non-polar, and has a relative static permittivity of 1.89 at 20 °C. [ 26 ] This information is important when designing separation, sample preparation and chromatography techniques in analytical chemistry .
The correlation should, however, be treated with caution. For instance, dichloromethane has a value of ε r of 9.08 (20 °C) and is rather poorly soluble in water (13 g/L or 9.8 mL/L at 20 °C); at the same time, tetrahydrofuran has its ε r = 7.52 at 22 °C, but it is completely miscible with water. In the case of tetrahydrofuran, the oxygen atom can act as a hydrogen bond acceptor; whereas dichloromethane cannot form hydrogen bonds with water.
This is even more remarkable when comparing the ε r values of acetic acid (6.2528) [ 27 ] and that of iodoethane (7.6177). [ 27 ] The large numerical value of ε r is not surprising in the second case, as the iodine atom is easily polarizable; nevertheless, this does not imply that it is polar, too (electronic polarizability prevails over the orientational one in this case).
Again, as for the absolute permittivity , relative permittivity for lossy materials can be formulated as ε r ( ω ) = ε ′ r ( ω ) − i σ ω ε 0 {\displaystyle \varepsilon _{\mathrm {r} }(\omega )=\varepsilon _{\mathrm {r} }'(\omega )-{\frac {i\sigma }{\omega \varepsilon _{0}}}}
in terms of a "dielectric conductivity" σ (units S/m, siemens per meter), which "sums over all the dissipative effects of the material; it may represent an actual [electrical] conductivity caused by migrating charge carriers and it may also refer to an energy loss associated with the dispersion of ε ′ [the real-valued permittivity]" ( [ 17 ] p. 8). Expanding the angular frequency ω = 2π c / λ and the electric constant ε 0 = 1 / μ 0 c 2 , this reduces to ε r ( ω ) = ε ′ r ( ω ) − i σ λ κ {\displaystyle \varepsilon _{\mathrm {r} }(\omega )=\varepsilon _{\mathrm {r} }'(\omega )-i\sigma \lambda \kappa }
where λ is the wavelength, c is the speed of light in vacuum and κ = μ 0 c / 2π = 59.95849 Ω ≈ 60.0 Ω is a newly introduced constant (units ohms , or reciprocal siemens , such that σλκ = ε ″ r remains unitless).
Permittivity is typically associated with dielectric materials , however metals are described as having an effective permittivity, with real relative permittivity equal to one. [ 28 ] In the high-frequency region, which extends from radio frequencies to the far infrared and terahertz region, the plasma frequency of the electron gas is much greater than the electromagnetic propagation frequency, so the refractive index n of a metal is very nearly a purely imaginary number. In the low frequency regime, the effective relative permittivity is also almost purely imaginary: It has a very large imaginary value related to the conductivity and a comparatively insignificant real-value. [ 29 ] | https://en.wikipedia.org/wiki/Relative_permittivity |
The relative rate test is a genetic comparative test between two ingroups (somewhat closely related species) and an outgroup or “reference species” to compare mutation and evolutionary rates between the species. [ 1 ] Each ingroup species is compared independently to the outgroup to determine how closely related the two species are without knowing the exact time of divergence from their closest common ancestor. [ 2 ] If more change has occurred on one lineage relative to another lineage since their shared common ancestor, then the outgroup species will be more different from the faster-evolving lineage's species than it is from the slower-evolving lineage's species. This is because the faster-evolving lineage will, by definition, have accumulated more differences since the common ancestor than the slower-evolving lineage. This method can be applied to averaged data (i.e., groups of molecules), or individual molecules. It is possible for individual molecules to show evidence of approximately constant rates of change in different lineages even while the rates differ between different molecules. The relative rate test is a direct internal test of the molecular clock, for a given molecule and a given set of species, and shows that the molecular clock does not need to be (and should never be) assumed: It can be directly assessed from the data itself. Note that the logic can also be applied to any kind of data for which a distance measure can be defined (e.g., even morphological features).
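The core comparison can be sketched in a few lines; the distance values below are invented, and a real analysis would also attach a statistical test to the difference.

```python
# Schematic illustration of the relative rate test: compare each ingroup's
# distance to the outgroup. Under a molecular clock the two distances should
# be (statistically) equal; their difference estimates the rate inequality.
# The distance values below are invented for the example.

def relative_rate(d_a_outgroup: float, d_b_outgroup: float) -> float:
    """Difference in accumulated change between lineages A and B since their
    common ancestor, inferred from their distances to the outgroup."""
    return d_a_outgroup - d_b_outgroup

d_human_monkey = 0.062   # e.g. substitutions per site (invented)
d_chimp_monkey = 0.060

delta = relative_rate(d_human_monkey, d_chimp_monkey)
print(f"rate difference = {delta:+.3f} (near zero -> consistent with a clock)")
```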
The initial use of this method was to assess whether or not there was evidence for different rates of molecular change in different lineages for particular molecules. If there was no evidence of significantly different rates, this would be direct evidence of a molecular clock , and (only) then would allow for a phylogeny to be constructed based on relative branch points (absolute dates for branch points in the phylogeny would require further calibration with the best-attested fossil evidence). Sarich and Wilson used the method to show that approximately the same amount of change had occurred in albumin in both the human ( Homo sapiens ) and chimpanzee ( Pan troglodytes ) lineages since their common ancestor. This was done by showing that both human and chimpanzee albumin were equally different from, e.g., monkey albumin. They found the same pattern for other Primate species (i.e., equidistant from an outgroup comparison), which allowed them to then create a relative phylogenetic tree (hypothesis of evolutionary branching order) of Primates. When calibrated with well-attested fossil evidence (for example, no Primates of modern aspect before the K-T boundary), this led them to argue that the human-chimp split had occurred only ~5 million years ago (which was much younger than previously supposed by paleontologists). [ 3 ]
Two other important uses for the relative rate test are to determine if and how generation time and metabolic processes affect mutational rate. The first is generation time. Sarich and Wilson first used the relative rate test to show that there was no evidence of a generation effect on lineage mutation rates for albumin within primates. [ 4 ] Using 4 carnivore species as outgroups, they showed that humans (with much longer generation times) had not accumulated significantly fewer (or more) molecular changes than had other primates in their sample (e.g., rhesus monkeys , spider monkeys , and various prosimians , all of which have much shorter generation times). However, a famous experiment comparing eleven genes between mice or rats to humans, with pig, cow, goat, dog, and rabbits acting as an outgroup reference, suggested that rodents had faster mutation rates. Rodents have a much shorter generation time than humans, and so it was suggested that they would be expected to have much faster mutation rates, and so evolve faster. This theory was supported through testing coding regions and untranslated regions with the relative rate test (which showed that rodents had a mutation rate much higher than humans) and backed up by comparing paralogous genes, because they are homologous via gene duplication and not speciation and so the comparison is independent of the time of divergence. [ 2 ]
The other use of the test is to determine the effect of metabolic processes. It had previously been believed that birds have a much slower molecular evolutionary rate than other animals, such as mammals, but that was based solely on the small genetic differences between birds, which relied on the fossil record. This was later confirmed with the relative rate test, however the theory was that this was because of metabolic rate and a lower body temperature in birds. Mindell’s paper explains that there was no direct correlation found between these and molecular evolution in the test taxa of birds based on mitochondrial evolution, but birds as a whole do have a lower mutation rate. There are still many hypotheses in this area of study that are being tested, but the relative rate test is proving crucial in order to overcome the fossil record bias. [ 5 ]
Although these are specific instances of the relative rate test, it may also be used to compare species for phylogenetic purposes. For example, Easteal wanted to compare nucleotide substitution rates in four genes of four eutherian mammals. He did this via the relative rate test and then, using this data, he was able to construct a phylogeny using various methods, including parsimony and maximum likelihood. [ 6 ] He took the same approach in another experiment to compare humans to other primates, and found no significant difference in evolutionary rates. [ 7 ]
It is generally agreed that the relative rate test has many strengths that make it invaluable for experimenters. For example, using this test, the date of divergence between two species is not needed. [ 2 ] Also, a generalized test minimizes sampling bias [ 8 ] and the bias of the fossil record.
However, the relative rate test is very poor in some areas, such as detecting major differences compared to rate constancy if it is being used as a test for the molecular clock . [ 9 ] Robinson claims that for this test, size does matter. The relative rate test may have a problem picking up significant variations if the tested sequences are less than one thousand nucleotides . This may be because variations are within the expected error of the test, and because there are so few nucleotides to compare, there is no way to be absolutely sure. [ 8 ] [ 9 ] So, the relative rate test is strong by itself, but it is usually not the only basis for a conclusion. It tends to be paired with other tests, such as branch length or two-cluster tests in order to make sure conclusions are accurate and not based on faulty results. | https://en.wikipedia.org/wiki/Relative_rate_test |
In mathematics, a relative scalar (of weight w ) is a scalar-valued function whose transform under a coordinate transform,
x ¯ j = x ¯ j ( x i ) {\displaystyle {\bar {x}}^{j}={\bar {x}}^{j}(x^{i})}
on an n -dimensional manifold obeys the following equation
f ¯ ( x ¯ j ) = J w f ( x i ) {\displaystyle {\bar {f}}({\bar {x}}^{j})=J^{w}f(x^{i})}
where
J = | ∂ ( x 1 , … , x n ) ∂ ( x ¯ 1 , … , x ¯ n ) | , {\displaystyle J=\left|{\dfrac {\partial (x_{1},\ldots ,x_{n})}{\partial ({\bar {x}}^{1},\ldots ,{\bar {x}}^{n})}}\right|,}
that is, the determinant of the Jacobian of the transformation. [ 1 ] A scalar density refers to the w = 1 {\displaystyle w=1} case.
Relative scalars are an important special case of the more general concept of a relative tensor .
An ordinary scalar or absolute scalar [ 2 ] refers to the w = 0 {\displaystyle w=0} case.
If x i {\displaystyle x^{i}} and x ¯ j {\displaystyle {\bar {x}}^{j}} refer to the same point P {\displaystyle P} on the manifold, then we desire f ¯ ( x ¯ j ) = f ( x i ) {\displaystyle {\bar {f}}({\bar {x}}^{j})=f(x^{i})} . This equation can be interpreted in two ways when x ¯ j {\displaystyle {\bar {x}}^{j}} are viewed as the "new coordinates" and x i {\displaystyle x^{i}} are viewed as the "original coordinates". The first is as f ¯ ( x ¯ j ) = f ( x i ( x ¯ j ) ) {\displaystyle {\bar {f}}({\bar {x}}^{j})=f(x^{i}({\bar {x}}^{j}))} , which "converts the function to the new coordinates". The second is as f ( x i ) = f ¯ ( x ¯ j ( x i ) ) {\displaystyle f(x^{i})={\bar {f}}({\bar {x}}^{j}(x^{i}))} , which "converts back to the original coordinates". Of course, "new" or "original" is a relative concept.
There are many physical quantities that are represented by ordinary scalars, such as temperature and pressure.
Suppose the temperature in a room is given in terms of the function f ( x , y , z ) = 2 x + y + 5 {\displaystyle f(x,y,z)=2x+y+5} in Cartesian coordinates ( x , y , z ) {\displaystyle (x,y,z)} and the function in cylindrical coordinates ( r , t , h ) {\displaystyle (r,t,h)} is desired. The two coordinate systems are related by the following sets of equations: r = x 2 + y 2 t = arctan ( y / x ) h = z {\displaystyle {\begin{aligned}r&={\sqrt {x^{2}+y^{2}}}\\t&=\arctan(y/x)\\h&=z\end{aligned}}} and x = r cos ( t ) y = r sin ( t ) z = h . {\displaystyle {\begin{aligned}x&=r\cos(t)\\y&=r\sin(t)\\z&=h.\end{aligned}}}
Using f ¯ ( x ¯ j ) = f ( x i ( x ¯ j ) ) {\displaystyle {\bar {f}}({\bar {x}}^{j})=f(x^{i}({\bar {x}}^{j}))} allows one to derive f ¯ ( r , t , h ) = 2 r cos ( t ) + r sin ( t ) + 5 {\displaystyle {\bar {f}}(r,t,h)=2r\cos(t)+r\sin(t)+5} as the transformed function.
Consider the point P {\displaystyle P} whose Cartesian coordinates are ( x , y , z ) = ( 2 , 3 , 4 ) {\displaystyle (x,y,z)=(2,3,4)} and whose corresponding value in the cylindrical system is ( r , t , h ) = ( 13 , arctan ( 3 / 2 ) , 4 ) {\displaystyle (r,t,h)=({\sqrt {13}},\arctan {(3/2)},4)} . A quick calculation shows that f ( 2 , 3 , 4 ) = 12 {\displaystyle f(2,3,4)=12} and f ¯ ( 13 , arctan ( 3 / 2 ) , 4 ) = 12 {\displaystyle {\bar {f}}({\sqrt {13}},\arctan {(3/2)},4)=12} also. This equality would have held for any chosen point P {\displaystyle P} . Thus, f ( x , y , z ) {\displaystyle f(x,y,z)} is the "temperature function in the Cartesian coordinate system" and f ¯ ( r , t , h ) {\displaystyle {\bar {f}}(r,t,h)} is the "temperature function in the cylindrical coordinate system".
One way to view these functions is as representations of the "parent" function that takes a point of the manifold as an argument and gives the temperature.
The problem could have been reversed. One could have been given f ¯ {\displaystyle {\bar {f}}} and wished to have derived the Cartesian temperature function f {\displaystyle f} . This just flips the notion of "new" vs the "original" coordinate system.
Suppose that one wishes to integrate these functions over "the room", which will be denoted by D {\displaystyle D} . (Yes, integrating temperature is strange but that's partly what's to be shown.) Suppose the region D {\displaystyle D} is given in cylindrical coordinates as r {\displaystyle r} from [ 0 , 2 ] {\displaystyle [0,2]} , t {\displaystyle t} from [ 0 , π / 2 ] {\displaystyle [0,\pi /2]} and h {\displaystyle h} from [ 0 , 2 ] {\displaystyle [0,2]} (that is, the "room" is a quarter slice of a cylinder of radius and height 2).
The integral of f {\displaystyle f} over the region D {\displaystyle D} is [ citation needed ] ∫ 0 2 ∫ 0 2 2 − x 2 ∫ 0 2 f ( x , y , z ) d z d y d x = 16 + 10 π . {\displaystyle \int _{0}^{2}\!\int _{0}^{\sqrt {2^{2}-x^{2}}}\!\int _{0}^{2}\!f(x,y,z)\,dz\,dy\,dx=16+10\pi .} The value of the integral of f ¯ {\displaystyle {\bar {f}}} over the same region is [ citation needed ] ∫ 0 2 ∫ 0 π / 2 ∫ 0 2 f ¯ ( r , t , h ) d h d t d r = 12 + 10 π . {\displaystyle \int _{0}^{2}\!\int _{0}^{\pi /2}\!\int _{0}^{2}\!{\bar {f}}(r,t,h)\,dh\,dt\,dr=12+10\pi .} They are not equal. The integral of temperature is not independent of the coordinate system used. It is non-physical in that sense, hence "strange". Note that if the integral of f ¯ {\displaystyle {\bar {f}}} included a factor of the Jacobian (which is just r {\displaystyle r} ), we get [ citation needed ] ∫ 0 2 ∫ 0 π / 2 ∫ 0 2 f ¯ ( r , t , h ) r d h d t d r = 16 + 10 π , {\displaystyle \int _{0}^{2}\!\int _{0}^{\pi /2}\!\int _{0}^{2}\!{\bar {f}}(r,t,h)r\,dh\,dt\,dr=16+10\pi ,} which is equal to the original integral but it is not however the integral of temperature because temperature is a relative scalar of weight 0, not a relative scalar of weight 1.
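As an illustrative cross-check (not part of the article), the three integrals above can be verified symbolically:

```python
# Symbolic cross-check of the three integrals above using sympy.
from sympy import symbols, integrate, sqrt, cos, sin, pi

x, y, z, r, t, h = symbols("x y z r t h", real=True)

f_cart = 2*x + y + 5                # temperature in Cartesian coordinates
f_cyl = 2*r*cos(t) + r*sin(t) + 5   # the same function in cylindrical coordinates

I_cart = integrate(f_cart, (z, 0, 2), (y, 0, sqrt(4 - x**2)), (x, 0, 2))
I_cyl_no_jac = integrate(f_cyl, (h, 0, 2), (t, 0, pi/2), (r, 0, 2))
I_cyl_jac = integrate(f_cyl * r, (h, 0, 2), (t, 0, pi/2), (r, 0, 2))

print(I_cart)        # 16 + 10*pi (up to term ordering)
print(I_cyl_no_jac)  # 12 + 10*pi
print(I_cyl_jac)     # 16 + 10*pi
```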
If we had said f ( x , y , z ) = 2 x + y + 5 {\displaystyle f(x,y,z)=2x+y+5} was representing mass density, however, then its transformed value should include the Jacobian factor that takes into account the geometric distortion of the coordinate system. The transformed function is now f ¯ ( r , t , h ) = ( 2 r cos ( t ) + r sin ( t ) + 5 ) r {\displaystyle {\bar {f}}(r,t,h)=(2r\cos(t)+r\sin(t)+5)r} . This time f ( 2 , 3 , 4 ) = 12 {\displaystyle f(2,3,4)=12} but f ¯ ( 13 , arctan ( 3 / 2 ) , 4 ) = 12 13 {\displaystyle {\bar {f}}({\sqrt {13}},\arctan {(3/2)},4)=12{\sqrt {13}}} . As before, its integral (the total mass) in Cartesian coordinates is ∫ 0 2 ∫ 0 2 2 − x 2 ∫ 0 2 f ( x , y , z ) d z d y d x = 16 + 10 π . {\displaystyle \int _{0}^{2}\!\int _{0}^{\sqrt {2^{2}-x^{2}}}\!\int _{0}^{2}\!f(x,y,z)\,dz\,dy\,dx=16+10\pi .} The value of the integral of f ¯ {\displaystyle {\bar {f}}} over the same region is ∫ 0 2 ∫ 0 π / 2 ∫ 0 2 f ¯ ( r , t , h ) d h d t d r = 16 + 10 π . {\displaystyle \int _{0}^{2}\!\int _{0}^{\pi /2}\!\int _{0}^{2}\!{\bar {f}}(r,t,h)\,dh\,dt\,dr=16+10\pi .} They are equal. The integral of mass density gives total mass, which is a coordinate-independent concept.
Note that if the integral of f ¯ {\displaystyle {\bar {f}}} also included a factor of the Jacobian like before, we get [ citation needed ] ∫ 0 2 ∫ 0 π / 2 ∫ 0 2 f ¯ ( r , t , h ) r d h d t d r = 24 + 40 π / 3 , {\displaystyle \int _{0}^{2}\!\int _{0}^{\pi /2}\!\int _{0}^{2}\!{\bar {f}}(r,t,h)r\,dh\,dt\,dr=24+40\pi /3,} which is not equal to the previous case.
Weights other than 0 and 1 do not arise as often. It can be shown that the determinant of a type (0,2) tensor is a relative scalar of weight 2.
Relative species abundance is a component of biodiversity and is a measure of how common or rare a species is relative to other species in a defined location or community. [ 1 ] Relative abundance is the percent composition of an organism of a particular kind relative to the total number of organisms in the area. [ citation needed ] Relative species abundances tend to conform to specific patterns that are among the best-known and most-studied patterns in macroecology . Different populations in a community exist in relative proportions; this idea is known as relative abundance.
Relative species abundance and species richness describe key elements of biodiversity . [ 1 ] Relative species abundance refers to how common or rare a species is relative to other species in a given location or community. [ 1 ] [ 4 ]
Usually relative species abundances are described for a single trophic level . Because such species occupy the same trophic level they will potentially or actually compete for similar resources. [ 1 ] For example, relative species abundances might describe all terrestrial birds in a forest community or all planktonic copepods in a particular marine environment.
Relative species abundances follow very similar patterns over a wide range of ecological communities. When plotted as a histogram of the number of species represented by 1, 2, 3, ..., n individuals, the abundances usually fit a hollow curve, such that most species are rare (represented by a single individual in a community sample) and relatively few species are abundant (represented by a large number of individuals in a community sample) (Figure 1). [ 4 ] This pattern has been long recognized and can be broadly summarized with the statement that "most species are rare". [ 5 ] For example, Charles Darwin noted in 1859 in The Origin of Species that "... rarity is the attribute of vast numbers of species in all classes...." [ 6 ]
Species abundance patterns can be best visualized in the form of relative abundance distribution plots. The consistency of relative species abundance patterns suggests that some common macroecological "rule" or process determines the distribution of individuals among species within a trophic level.
Relative species abundance distributions are usually graphed as frequency histograms ("Preston plots"; Figure 2) [ 7 ] or rank-abundance diagrams ("Whittaker Plots"; Figure 3). [ 8 ]
Frequency histogram (Preston plot) : the x -axis shows classes of abundance (often log 2 "octaves") and the y -axis shows the number of species falling in each class.
Rank-abundance diagram (Whittaker plot) : species are ranked from most to least abundant along the x -axis, and their abundances (usually log-scaled) are plotted on the y -axis.
When plotted in these ways, relative species abundances from wildly different data sets show similar patterns: frequency histograms tend to be right-skewed (e.g. Figure 2) and rank-abundance diagrams tend to conform to the curves illustrated in Figure 4.
Researchers attempting to understand relative species abundance patterns usually approach them in a descriptive or mechanistic way. Using a descriptive approach biologists attempt to fit a mathematical model to real data sets and infer the underlying biological principles at work from the model parameters. By contrast, mechanistic approaches create a mathematical model based on biological principles and then test how well these models fit real data sets. [ 9 ]
I. Motomura developed the geometric series model based on benthic community data in a lake. [ 12 ] Within the geometric series each species' level of abundance is a sequential, constant proportion ( k ) of the total number of individuals in the community. Thus if k is 0.5, the most common species would represent half of individuals in the community (50%), the second most common species would represent half of the remaining half (25%), the third, half of the remaining quarter (12.5%) and so forth.
Although Motomura originally developed the model as a statistical (descriptive) means to plot observed abundances, the "discovery" of his paper by Western researchers in 1965 led to the model being used as a niche apportionment model – the "niche-preemption model". [ 8 ] In a mechanistic model k represents the proportion of the resource base acquired by a given species.
The geometric series rank-abundance diagram is linear with a slope of – k , and reflects a rapid decrease in species abundances by rank (Figure 4). [ 12 ] The geometric series does not explicitly assume that species colonize an area sequentially, however, the model fits the concept of niche preemption, where species sequentially colonize a region and the first species to arrive receives the majority of resources. [ 13 ] The geometric series model fits observed species abundances in highly uneven communities with low diversity. [ 13 ] This is expected to occur in terrestrial plant communities (as these assemblages often show strong dominance) as well as communities at early successional stages and those in harsh or isolated environments (Figure 5). [ 8 ]
where :
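The displayed formula and the symbol list that belong with the preceding "where" did not survive extraction. As a hedged sketch of the geometric (niche-preemption) series just described — assuming the usual form in which species i receives a constant fraction k of the remaining resources, rescaled so that the S proportions sum to one — a minimal Python illustration might look like this; the function name and example values are illustrative only.

```python
# Hedged sketch of the geometric (niche-preemption) series: species i receives a
# constant fraction k of the resources left over by species 1..i-1.  The
# truncation constant C_k rescales the S proportions so they sum to 1.
def geometric_series_abundances(N, S, k):
    """Expected abundances of S species among N individuals for preemption fraction k."""
    C_k = 1.0 / (1.0 - (1.0 - k) ** S)          # renormalisation for a finite community
    props = [C_k * k * (1.0 - k) ** (i - 1) for i in range(1, S + 1)]
    return [N * p for p in props]

# With k = 0.5 this reproduces the 50%, 25%, 12.5%, ... sequence from the worked
# example above, rescaled slightly so the five proportions sum to one.
print(geometric_series_abundances(N=1000, S=5, k=0.5))
```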
The logseries was developed by Ronald Fisher to fit two different abundance data sets: British moth species (collected by Carrington Williams ) and Malaya butterflies (collected by Alexander Steven Corbet ). [ 14 ] The logic behind the derivation of the logseries is varied [ 15 ] however Fisher proposed that sampled species abundances would follow a negative binomial from which the zero abundance class (species too rare to be sampled) was eliminated. [ 1 ] He also assumed that the total number of species in a community was infinite. Together, this produced the logseries distribution (Figure 4). The logseries predicts the number of species at different levels of abundance ( n individuals) with the formula:
where:
The number of species with 1, 2, 3, ..., n individuals are therefore:
The constants α and x can be estimated through iteration from a given species data set using the values S and N . [ 2 ] Fisher's dimensionless α is often used as a measure of biodiversity, and indeed has recently been found to represent the fundamental biodiversity parameter θ from neutral theory ( see below ).
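As a hedged illustration of that iterative estimation, the sketch below assumes the standard relation S = α ln(1 + N/α) between species richness, total abundance and Fisher's α, and solves it by bisection; the numbers fed in are illustrative and are not data taken from the article.

```python
import math

# Hedged sketch: estimate Fisher's alpha from observed species richness S and
# total abundance N via the standard relation S = alpha * ln(1 + N / alpha),
# solved here by bisection; x then follows from x = N / (N + alpha).
def fishers_alpha(S, N, lo=1e-6, hi=1e6, tol=1e-9):
    f = lambda a: a * math.log(1.0 + N / a) - S   # increasing in a, so bisection works
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

alpha = fishers_alpha(S=240, N=15609)        # illustrative, moth-trap-sized sample
x = 15609 / (15609 + alpha)
print(alpha, x)   # expected number of species with n individuals: alpha * x**n / n
```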
Using several data sets (including breeding bird surveys from New York and Pennsylvania and moth collections from Maine, Alberta and Saskatchewan) Frank W. Preston (1948) argued that species abundances (when binned logarithmically in a Preston plot) follow a normal (Gaussian) distribution , partly as a result of the central limit theorem (Figure 4). [ 7 ] This means that the abundance distribution is lognormal . According to his argument, the right-skew observed in species abundance frequency histograms (including those described by Fisher et al. (1943) [ 14 ] ) was, in fact, a sampling artifact. Given that species toward the left side of the x -axis are increasingly rare, they may be missed in a random species sample. As the sample size increases however, the likelihood of collecting rare species in a way that accurately represents their abundance also increases, and more of the normal distribution becomes visible. [ 7 ] The point at which rare species cease to be sampled has been termed Preston's veil line . As the sample size increases Preston's veil is pushed farther to the left and more of the normal curve becomes visible [ 2 ] [ 10 ] (Figure 6). Williams' moth data, originally used by Fisher to develop the logseries distribution, became increasingly lognormal as more years of sampling were completed. [ 1 ] [ 3 ]
Preston's theory has an application: if a community is truly lognormal yet under-sampled, the lognormal distribution can be used to estimate the true species richness of a community. Assuming the shape of the total distribution can be confidently predicted from the collected data, the normal curve can be fit via statistical software or by completing the Gaussian formula : [ 7 ]
where:
It is then possible to predict how many species are in the community by calculating the total area under the curve ( N ):
The number of species missing from the data set (the missing area to the left of the veil line) is simply N minus the number of species sampled. [ 2 ] Preston did this for two lepidopteran data sets, predicting that, even after 22 years of collection, only 72% and 88% of the species present had been sampled. [ 7 ]
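A minimal sketch of that estimate, assuming Preston's usual Gaussian form S(R) = S0·exp(−(aR)²) for the number of species per octave R (measured from the modal octave) and the corresponding area S0·√π / a under the full curve, could look as follows; the octave counts are invented for illustration and scipy's curve_fit is only one possible fitting choice.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of Preston's approach: fit S(R) = S0 * exp(-(a*R)**2) to the
# observed number of species per octave R, then estimate total richness as the
# area under the full Gaussian, S0 * sqrt(pi) / a.
def preston(R, S0, a):
    return S0 * np.exp(-(a * R) ** 2)

# Illustrative octave counts only (not real data): here only the octaves to the
# right of the modal octave are visible; the left side sits behind the veil line.
R_obs = np.array([0, 1, 2, 3, 4, 5], dtype=float)
S_obs = np.array([33, 30, 24, 16, 8, 3], dtype=float)

(S0, a), _ = curve_fit(preston, R_obs, S_obs, p0=(30.0, 0.3))
S_total = S0 * np.sqrt(np.pi) / a          # area under the whole curve
S_observed = S_obs.sum()                   # species actually tallied in the sample
print(S_total, S_total - S_observed)       # fitted richness and species behind the veil
```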
The Yule model is based on the much earlier Galton–Watson model , which was used to describe the distribution of species among genera . [ 16 ] The Yule model assumes random branching of species trees, with each species (branch tip) having the same probability of giving rise to new species or becoming extinct. As the number of species within a genus, within a clade, has a similar distribution to the number of individuals within a species, within a community (i.e. the "hollow curve"), Sean Nee (2003) used the model to describe relative species abundances. [ 4 ] [ 17 ] In many ways this model is similar to niche apportionment models ; however, Nee intentionally did not propose a biological mechanism for the model behavior, arguing that any distribution can be produced by a variety of mechanisms. [ 17 ]
Note : This section provides a general summary of niche apportionment theory; more information can be found under niche apportionment models .
Most mechanistic approaches to species abundance distributions use niche-space, i.e. available resources, as the mechanism driving abundances. If species in the same trophic level consume the same resources (such as nutrients or sunlight in plant communities, prey in carnivore communities, nesting locations or food in bird communities) and these resources are limited, how the resource "pie" is divided among species determines how many individuals of each species can exist in the community. Species with access to abundant resources will have higher carrying capacities than those with little access. Mutsunori Tokeshi [ 18 ] later elaborated niche apportionment theory to include niche filling in unexploited resource space. [ 9 ] Thus, a species may survive in the community by carving out a portion of another species' niche (slicing up the pie into smaller pieces) or by moving into a vacant niche (essentially making the pie larger, for example, by being the first to arrive in a newly available location or through the development of a novel trait that allows access to previously unavailable resources). Numerous niche apportionment models have been developed. Each makes different assumptions about how species carve up niche-space.
The Unified Neutral Theory of Biodiversity and Biogeography (UNTB) is a special form of mechanistic model that takes an entirely different approach to community composition than the niche apportionment models. [ 1 ] Instead of species populations reaching equilibrium within a community, the UNTB model is dynamic, allowing for continuing changes in relative species abundances through drift.
A community in the UNTB model can be best visualized as a grid with a certain number of spaces, each occupied with individuals of different species. The model is zero-sum as there are a limited number of spaces that can be occupied: an increase in the number of individuals of one species in the grid must result in corresponding decrease in the number of individuals of other species in the grid. The model then uses birth, death, immigration, extinction and speciation to modify community composition over time.
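A minimal sketch of such a zero-sum local community, with death, local birth and immigration from a fixed metacommunity pool (speciation omitted for brevity), might look like this; all parameter names and values are illustrative assumptions rather than the published formulation of the model.

```python
import random

# Hedged sketch of zero-sum neutral drift on the local-community "grid" described
# above: at each step one individual dies and is replaced either by the offspring
# of a randomly chosen local individual or, with probability m, by an immigrant
# drawn from a fixed metacommunity pool.
def neutral_drift(J=200, m=0.1, steps=20000, pool=("A", "B", "C", "D", "E")):
    community = [random.choice(pool) for _ in range(J)]   # initial occupants of the grid
    for _ in range(steps):
        dead = random.randrange(J)                        # one death ...
        if random.random() < m:
            community[dead] = random.choice(pool)         # ... replaced by an immigrant
        else:
            community[dead] = random.choice(community)    # ... or by a local birth
    return community

final = neutral_drift()
counts = {sp: final.count(sp) for sp in set(final)}
print(sorted(counts.values(), reverse=True))              # rank-abundance of the sample
```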
The UNTB model produces a dimensionless "fundamental biodiversity" number, θ , which is derived using the formula:
where :
Relative species abundances in the UNTB model follow a zero-sum multinomial distribution. [ 19 ] The shape of this distribution is a function of the immigration rate, the size of the sampled community (grid), and θ . [ 19 ] When the value of θ is small, the relative species abundance distribution is similar to the geometric series (high dominance). As θ gets larger, the distribution becomes increasingly s-shaped (log-normal) and, as it approaches infinity, the curve becomes flat (the community has infinite diversity and species abundances of one). Finally, when θ = 0 the community described consists of only one species (extreme dominance). [ 1 ]
An unexpected result of the UNTB is that at very large sample sizes, predicted relative species abundance curves describe the metacommunity and become identical to Fisher's logseries. At this point θ also becomes identical to Fisher's α {\displaystyle \alpha \,\!} for the equivalent distribution and Fisher's constant x is equal to the ratio of birthrate : deathrate. Thus, the UNTB unintentionally offers a mechanistic explanation of the logseries 50 years after Fisher first developed his descriptive model. [ 1 ] | https://en.wikipedia.org/wiki/Relative_species_abundance |
The Relative thermal index (RTI) is a characteristic parameter related to the ability of plastic materials to resist thermal degradation .
The RTI is part of the long-term thermal aging program (LTTA) described in the UL 746B standard from UL . [ 1 ]
During the UL 746B program, the degradation in hot air of certain properties of the material, such as dielectric and mechanical strength , is investigated with regard to thermal aging. For a full study, the candidate material (B) is aged in the same ovens together with a reference material A (control) whose RTI value is already known. The RTI is the rounded temperature in degrees Celsius at which the properties of B have decreased to 50 percent of their initial value in about the same amount of time (correlation time) as it takes for A at its own RTI value. A maximum correlation time of 60,000 hours is considered acceptable for many electrical applications; however, it may also be as low as 5,000 hours according to UL 746B. If a material has not (yet) been investigated, the RTI shown is based on the generic class ( polymer type) of the material.
Though the RTI is an index, it is given in Celsius units. The UL 746B standard distinguishes between three sub-categories of the RTI:
There is also the RTI Elongation (by means of Elongation at break ) for films and other nonrigid materials. | https://en.wikipedia.org/wiki/Relative_thermal_index |
The relative velocity of an object B relative to an observer A , denoted v B ∣ A {\displaystyle \mathbf {v} _{B\mid A}} (also v B A {\displaystyle \mathbf {v} _{BA}} or v B rel A {\displaystyle \mathbf {v} _{B\operatorname {rel} A}} ), is the velocity vector of B measured in the rest frame of A .
The relative speed v B ∣ A = ‖ v B ∣ A ‖ {\displaystyle v_{B\mid A}=\|\mathbf {v} _{B\mid A}\|} is the vector norm of the relative velocity.
We begin with relative motion in the classical (or non-relativistic, or Newtonian) approximation, in which all speeds are much less than the speed of light. This limit is associated with the Galilean transformation . The figure shows a man on top of a train, at the back edge. At 1:00 pm he begins to walk forward at a walking speed of 10 km/h (kilometers per hour). The train is moving at 40 km/h. The figure depicts the man and train at two different times: first, when the journey began, and also one hour later at 2:00 pm. The figure suggests that the man is 50 km from the starting point after having traveled (by walking and by train) for one hour. This, by definition, is 50 km/h, which suggests that the prescription for calculating relative velocity in this fashion is to add the two velocities.
The diagram displays clocks and rulers to remind the reader that while the logic behind this calculation seems flawless, it makes false assumptions about how clocks and rulers behave. (See The train-and-platform thought experiment .) To recognize that this classical model of relative motion violates special relativity , we generalize the example into an equation:
where:
Fully legitimate expressions for "the velocity of A relative to B" include "the velocity of A with respect to B" and "the velocity of A in the coordinate system where B is always at rest". The violation of special relativity occurs because this equation for relative velocity falsely predicts that different observers will measure different speeds when observing the motion of light. [ note 1 ]
The figure shows two objects A and B moving at constant velocity. The equations of motion are:
where the subscript i refers to the initial displacement (at time t equal to zero). The difference between the two displacement vectors, r B − r A {\displaystyle \mathbf {r} _{B}-\mathbf {r} _{A}} , represents the location of B as seen from A.
Hence:
After making the substitutions v A | C = v A {\displaystyle \mathbf {v} _{A|C}=\mathbf {v} _{A}} and v B | C = v B {\displaystyle \mathbf {v} _{B|C}=\mathbf {v} _{B}} , we have:
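The displayed equations in this passage were lost in extraction, but the result they lead to is the Newtonian rule that the velocity of B relative to A is simply the vector difference of the two velocities. A short numeric illustration of that rule, reusing the train example from above, is given below.

```python
import numpy as np

# Newtonian limit: the velocity of B as seen from A is the vector difference
# of their velocities in the common (ground) frame.
v_train = np.array([40.0, 0.0, 0.0])     # km/h, ground frame
v_man   = np.array([50.0, 0.0, 0.0])     # km/h, ground frame (train + walking)

v_man_rel_train = v_man - v_train        # what an observer on the train measures
print(v_man_rel_train)                   # [10. 0. 0.] km/h
```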
To construct a theory of relative motion consistent with the theory of special relativity, we must adopt a different convention. Continuing to work in the (non-relativistic) Newtonian limit we begin with a Galilean transformation in one dimension: [ note 2 ]
where x' is the position as seen by a reference frame that is moving at speed, v, in the "unprimed" (x) reference frame. [ note 3 ] Taking the differential of the first of the two equations above, we have, d x ′ = d x − v d t {\displaystyle dx'=dx-v\,dt} , and what may seem like the obvious [ note 4 ] statement that d t ′ = d t {\displaystyle dt'=dt} , we have:
To recover the previous expressions for relative velocity, we assume that particle A is following the path defined by dx/dt in the unprimed reference (and hence dx ′/ dt ′ in the primed frame). Thus d x / d t = v A ∣ O {\displaystyle dx/dt=v_{A\mid O}} and d x ′ / d t = v A ∣ O ′ {\displaystyle dx'/dt=v_{A\mid O'}} , where O {\displaystyle O} and O ′ {\displaystyle O'} refer to motion of A as seen by an observer in the unprimed and primed frame, respectively. Recall that v is the motion of a stationary object in the primed frame, as seen from the unprimed frame. Thus we have v = v O ′ ∣ O {\displaystyle v=v_{O'\mid O}} , and:
where the latter form has the desired (easily learned) symmetry.
As in classical mechanics, in special relativity the relative velocity v B | A {\displaystyle \mathbf {v} _{\mathrm {B|A} }} is the velocity of an object or observer B in the rest frame of another object or observer A . However, unlike the case of classical mechanics, in Special Relativity, it is generally not the case that
This peculiar lack of symmetry is related to Thomas precession and the fact that two successive Lorentz transformations rotate the coordinate system. This rotation has no effect on the magnitude of a vector, and hence relative speed is symmetrical.
In the case where two objects are traveling in parallel directions, the relativistic formula for relative velocity is similar in form to the formula for addition of relativistic velocities.
The relative speed is given by the formula:
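The displayed formula was not reproduced above; the sketch below uses the standard special-relativistic result for collinear motion, v_B|A = (v_B − v_A)/(1 − v_A·v_B / c²), which is supplied here as a hedged reconstruction rather than text recovered from the article.

```python
C = 299_792_458.0   # speed of light, m/s

# Relative velocity of B in the rest frame of A when both move along the same
# line in some third frame (standard special-relativistic result, supplied here
# as a hedged reconstruction of the missing formula).
def parallel_relative_velocity(v_a, v_b, c=C):
    return (v_b - v_a) / (1.0 - v_a * v_b / c**2)

print(parallel_relative_velocity(0.5 * C, 0.9 * C) / C)   # ~0.727c, not the naive 0.4c
```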
In the case where two objects are traveling in perpendicular directions, the relativistic relative velocity v B | A {\displaystyle \mathbf {v} _{\mathrm {B|A} }} is given by the formula:
where
The relative speed is given by the formula
The general formula for the relative velocity v B | A {\displaystyle \mathbf {v} _{\mathrm {B|A} }} of an object or observer B in the rest frame of another object or observer A is given by the formula: [ 1 ]
where
The relative speed is given by the formula | https://en.wikipedia.org/wiki/Relative_velocity |
Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. [ 1 ] [ 2 ] [ 3 ] In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as α {\displaystyle \alpha } .
Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages .
Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide ).
For a liquid mixture of two components (called a binary mixture ) at a given temperature and pressure , the relative volatility is defined as
When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a K {\displaystyle K} value (= y / x {\displaystyle y/x} ) for a more volatile component is larger than a K {\displaystyle K} value for a less volatile component. That means that α {\displaystyle \alpha } ≥ 1 since the larger K {\displaystyle K} value of the more volatile component is in the numerator and the smaller K {\displaystyle K} of the less volatile component is in the denominator.
α {\displaystyle \alpha } is a unitless quantity. When the volatilities of both key components are equal, α {\displaystyle \alpha } = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same ( azeotrope ). As the value of α {\displaystyle \alpha } increases above 1, separation by distillation becomes progressively easier.
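The defining formula quoted earlier was lost in extraction; the sketch below assumes the usual binary definition α = K_light / K_heavy with K = y/x for each component, and uses invented equilibrium compositions purely for illustration.

```python
# Hedged sketch of the binary definition: alpha = K_light / K_heavy, with each
# K = y/x taken from vapour-liquid equilibrium data.  The numbers are illustrative.
def relative_volatility(y_light, x_light, y_heavy, x_heavy):
    K_light = y_light / x_light
    K_heavy = y_heavy / x_heavy
    return K_light / K_heavy

# Example: a mixture whose light component is enriched in the vapour phase.
alpha = relative_volatility(y_light=0.70, x_light=0.50, y_heavy=0.30, x_heavy=0.50)
print(alpha)   # about 2.33 > 1, so separation by distillation is feasible
```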
A liquid mixture containing two components is called a binary mixture. When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation column consists predominantly of the more volatile component and some small amount of the less volatile component and the bottoms fraction consists predominantly of the less volatile component and some small amount of the more volatile component.
A liquid mixture containing many components is called a multi-component mixture. When a multi-component mixture is distilled, the overhead fraction and the bottoms fraction typically contain much more than one or two components. For example, some intermediate products in an oil refinery are multi-component liquid mixtures that may contain alkane , alkene and alkyne hydrocarbons —ranging from methane , having one carbon atom, to decanes having ten carbon atoms. For distilling such a mixture, the distillation column may be designed (for example) to produce:
Such a distillation column is typically called a depropanizer.
The designer would designate the key components governing the separation design to be propane as the so-called light key (LK) and isobutane as the so-called heavy key (HK) . In that context, a lighter component means a component with a lower boiling point (or a higher vapor pressure) and a heavier component means a component with a higher boiling point (or a lower vapor pressure).
Thus, for the distillation of any multi-component mixture, the relative volatility is often defined as
Large-scale industrial distillation is rarely undertaken if the relative volatility is less than 1.05. [ 2 ]
The values of K {\displaystyle K} have been correlated empirically or theoretically in terms of temperature, pressure and phase compositions in the form of equations, tables or graph such as the well-known DePriester charts . [ 4 ]
K {\displaystyle K} values are widely used in the design of large-scale distillation columns for distilling multi-component mixtures in oil refineries, petrochemical and chemical plants , natural gas processing plants and other industries. | https://en.wikipedia.org/wiki/Relative_volatility |
Relative wind stress is a shear stress that is produced by wind blowing over the surface of the ocean, or another large body of water. Relative wind stress is related to wind stress but takes the difference between the surface ocean current velocity and wind velocity into account. The units are newtons per square metre [ N m − 2 ] {\displaystyle [Nm^{-2}]} or pascals [ P a ] {\displaystyle [Pa]} . Wind stress over the ocean is important as it is a major source of kinetic energy input to the ocean, which in turn drives the large-scale ocean circulation. [ 1 ] The use of relative wind stress instead of wind stress, where the ocean current is assumed to be stationary, reduces the stress felt over the ocean in models. This leads to a decrease in the calculation of power input into the ocean of 20–35% and thus results in a different simulation of the large-scale ocean circulation. [ 2 ]
The wind stress ( τ ) {\displaystyle (\tau )} acting on the ocean surface is usually parameterized using the turbulent drag formula
τ = C d ρ a | u → a | u → a {\displaystyle \quad \tau =C_{d}\rho _{a}|{\vec {u}}_{a}|{\vec {u}}_{a}} .
where C d {\displaystyle C_{d}} is the turbulent drag coefficient (usually determined empirically), ρ a {\displaystyle \rho _{a}} is the air density , and u → a {\displaystyle {\vec {u}}_{a}} is the wind velocity vector, usually taken at 10m above sea level. This parameterization is commonly referred to as resting ocean approximation. [ 3 ] From now on we will refer to wind stress in resting ocean approximation as simply resting ocean wind stress.
On the other hand, relative wind stress ( τ r e l ) {\displaystyle (\tau _{rel})} makes use of the velocity of the surface wind relative to the velocity at the ocean surface u → o {\displaystyle {\vec {u}}_{o}} , as follows,
τ r e l = C d ρ a | u → a − u → o | ( u → a − u → o ) {\displaystyle \quad \tau _{rel}=C_{d}\rho _{a}|{\vec {u}}_{a}-{\vec {u}}_{o}|({\vec {u}}_{a}-{\vec {u}}_{o})} .
where u → o {\displaystyle {\vec {u}}_{o}} is the surface ocean velocity and thus, the terms with ( u → a − u → o ) {\displaystyle ({\vec {u}}_{a}-{\vec {u}}_{o})} represent the wind velocity relative to the surface ocean velocity. [ 4 ] [ 5 ] Therefore, the difference between wind stress and relative wind stress is that relative wind stress takes into account the relative motion of the wind with respect to the surface ocean current.
The work wind does on the ocean can be computed by
P = τ ⋅ u → o {\displaystyle \qquad P=\tau \cdot {\vec {u}}_{o}}
where τ {\displaystyle \tau } is the chosen parameterization for the wind stress.
Thus, in resting ocean approximation, the work done on the ocean by the wind is
P 0 = τ ⋅ u → o = C d ρ a | u → a | u → a ⋅ u → o {\displaystyle {\begin{aligned}\qquad P_{0}&=\tau \cdot {\vec {u}}_{o}\\&=C_{d}\rho _{a}|{\vec {u}}_{a}|{\vec {u}}_{a}\cdot {\vec {u}}_{o}\end{aligned}}} .
Furthermore, if the relative wind stress parameterization is used, the work done on the ocean is given by
P 1 = τ r e l ⋅ u → o = C d ρ a | u → a − u → o | ( u → a − u → o ) ⋅ u → o = C d ρ a | u → a − u → o | u → a ⋅ u → o − C d ρ a | u → a − u → o | u → o ⋅ u → o {\displaystyle {\begin{aligned}\qquad P_{1}&=\tau _{rel}\cdot {\vec {u}}_{o}\\&=C_{d}\rho _{a}|{\vec {u}}_{a}-{\vec {u}}_{o}|({\vec {u}}_{a}-{\vec {u}}_{o})\cdot {\vec {u}}_{o}\\&=C_{d}\rho _{a}|{\vec {u}}_{a}-{\vec {u}}_{o}|{\vec {u}}_{a}\cdot {\vec {u}}_{o}-C_{d}\rho _{a}|{\vec {u}}_{a}-{\vec {u}}_{o}|{\vec {u}}_{o}\cdot {\vec {u}}_{o}\end{aligned}}}
Then, assuming u → o {\displaystyle {\vec {u}}_{o}} is the same in both situations, the difference between work done by resting ocean wind stress and relative wind stress is given by
P 0 − P 1 = C d ρ a | u → a − u → o | u → o ⋅ u → o − C d ρ a ( | u → a − u → o | − | u → a | ) u → a ⋅ u → o {\displaystyle \qquad P_{0}-P_{1}=C_{d}\rho _{a}|{\vec {u}}_{a}-{\vec {u}}_{o}|{\vec {u}}_{o}\cdot {\vec {u}}_{o}-C_{d}\rho _{a}(|{\vec {u}}_{a}-{\vec {u}}_{o}|-|{\vec {u}}_{a}|){\vec {u}}_{a}\cdot {\vec {u}}_{o}} .
Analysing this expression, we first see that the term C d ρ a | u → a − u → o | u → o ⋅ u → o {\displaystyle C_{d}\rho _{a}|{\vec {u}}_{a}-{\vec {u}}_{o}|{\vec {u}}_{o}\cdot {\vec {u}}_{o}} is always positive (since u → o ⋅ u → o = | u → o | 2 > 0 {\displaystyle {\vec {u}}_{o}\cdot {\vec {u}}_{o}=|{\vec {u}}_{o}|^{2}>0} and all the other terms are positive). Next, for the term − C d ρ a ( | u → a − u → o | − | u → a | ) u → a ⋅ u → o {\displaystyle -C_{d}\rho _{a}(|{\vec {u}}_{a}-{\vec {u}}_{o}|-|{\vec {u}}_{a}|){\vec {u}}_{a}\cdot {\vec {u}}_{o}} , we have:
Therefore, it is always the case that P 0 − P 1 > 0 {\displaystyle P_{0}-P_{1}>0} , meaning the calculation of the work done is always larger when using the resting ocean wind stress. [ 6 ] This overestimate is referred to in the literature as a "positive bias". [ 1 ] [ 3 ] Note that this may not be the case if the u → o {\displaystyle {\vec {u}}_{o}} used in the calculation of P 0 {\displaystyle P_{0}} is different from the u → o {\displaystyle {\vec {u}}_{o}} used in the calculations of P 1 {\displaystyle P_{1}} ( See section: Ocean currents as output of ocean models ). [ 6 ]
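A quick numerical check of this bias, using the two parameterizations exactly as written above and illustrative values for the drag coefficient, wind and current, is sketched below.

```python
import numpy as np

# Numerical check of the positive bias derived above: for the same surface
# current, the resting-ocean parameterization assigns more power to the ocean
# than the relative-wind parameterization.  All values are illustrative.
C_d, rho_a = 1.2e-3, 1.2                      # drag coefficient [-], air density [kg m^-3]
u_a = np.array([7.0, 0.0])                    # 10 m wind [m s^-1]
u_o = np.array([1.0, 0.3])                    # surface current [m s^-1]

tau_rest = C_d * rho_a * np.linalg.norm(u_a) * u_a
tau_rel  = C_d * rho_a * np.linalg.norm(u_a - u_o) * (u_a - u_o)

P0 = np.dot(tau_rest, u_o)                    # work per unit area, resting-ocean stress
P1 = np.dot(tau_rel, u_o)                     # work per unit area, relative wind stress
print(P0, P1, P0 - P1)                        # P0 - P1 > 0: the positive bias
```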
The mathematical explanation for the positive bias in the calculation of work using the resting ocean wind stress can also be interpreted physically through the mechanical damping effect. [ 3 ]
As seen in Figure 2, when the wind velocity and ocean current velocity are in the same direction, the relative wind stress is smaller than the resting ocean wind stress. In other words, less positive work is done when the relative wind stress is used. When the wind and the ocean velocities are in opposite directions, the relative wind stress does more negative work than the resting ocean wind stress. Consequently, in both scenarios less work is being done on the ocean when the relative wind stress is used for the calculation.
This physical interpretation can also be adapted to a scenario where there is an ocean eddy . As illustrated on the top part of Figure 3, in the eddy situation, the relative wind stress is smaller when the wind and ocean velocities are aligned, a similar situation to the top part of Figure 2. At the bottom part of Figure 3 we have a situation analogous to the bottom part of Figure 2, where more negative work is being performed on the system than in the resting ocean case. Therefore, at the top of the eddy less energy is being put in and at the bottom more energy is being taken out, which means the eddy is being dampened more in the relative wind case.
The two situations depicted in Figures 2 and 3 are the physical reason why there is a positive bias when estimating the power (work per unit time) input to the ocean when using the resting ocean stress rather than the relative wind stress. [ 3 ] [ 7 ]
For the computation of surface currents, a general circulation model is forced with surface winds. A study by Pacanowski (1987) [ 8 ] shows that including ocean current velocity through relative wind stress in an Atlantic circulation model reduces the surface currents by 30%. [ 9 ] This decrease in surface current can impact sea surface temperature and upwelling along the equator. However, the greatest impact of including ocean currents in the air-sea stress is in the calculation of power input to the general circulation, with the mechanism as described above. An additional effect of computing with relative wind stress instead of resting ocean wind stress is a lower Residual Meridional Overturning Circulation in models.
Figure 4 shows the difference between relative wind stress and resting ocean wind stress. Data for relative wind stress is obtained from scatterometers . These accurately represent the relative wind stress as they measure backscatter from small-scale structures on the ocean surface, which respond to the sea surface-air interface and not to wind speed.
Overestimations of power input into the ocean in models have been identified when using wind stress calculated from zonal mean wind instead of relative wind stress, ranging between 20 and 35%. [ 11 ] [ 12 ] [ 6 ] The effect is greatest in regions where wind speeds are relatively low and current speeds relatively high. An example is the tropical Pacific Ocean , where trade winds blow at 5–9 m/s and the ocean current velocities can exceed 1 m/s. [ 13 ] In this region, depending on whether it is an El Niño or La Niña state, the wind stress difference (resting ocean wind stress minus relative wind stress) can vary between negative and positive, respectively.
In the Southern Ocean , the use of relative wind stress is important because eddies are crucial in the Antarctic Circumpolar Circulation , and the damping of these eddies with relative wind stress will affect the overturning circulation. The Residual Meridional Overturning Circulation (RMOC), is a streamfunction that quantifies the transport of tracers across isopycnals . [ 14 ] Wind stress is taken into account through the formulation of the RMOC, which is the sum of the Eulerian mean MOC Ψ ¯ {\displaystyle {\bar {\Psi }}} and eddy-induced bolus overturning Ψ ∗ {\displaystyle \Psi ^{*}} . The Eulerian mean MOC is dependent on the meridional winds that drive Ekman transport in zonal direction. The eddy-induced bolus overturning acts to restore sloping isopycnals to the horizontal, which are induced by eddies. The formulation of the RMOC is given by:
Ψ r e s = Ψ ¯ + Ψ ∗ = − τ ¯ x ρ 0 f + K s {\displaystyle {\begin{aligned}\Psi _{res}={\bar {\Psi }}+\Psi ^{*}={\frac {-{\bar {\tau }}_{x}}{\rho _{0}f}}+Ks\end{aligned}}}
with τ ¯ x {\displaystyle {\bar {\tau }}_{x}} being the zonal mean wind stress, ρ 0 {\displaystyle \rho _{0}} the reference density, f {\displaystyle f} the Coriolis parameter (negative in the Southern Hemisphere), K {\displaystyle K} the quasi-Stokes eddy diffusivity field, equal to L e d d y ⋅ U e d d y {\displaystyle L_{eddy}\cdot U_{eddy}} , where L e d d y {\displaystyle L_{eddy}} and U e d d y {\displaystyle U_{eddy}} are the eddy length and velocity scales, respectively, and s {\displaystyle s} the slope of the isopycnals.
Inserting a lower wind stress, by using relative wind stress instead of resting ocean wind stress, directly leads to lower residual overturning, by reducing the Eulerian mean MOC ( Ψ ¯ {\displaystyle {\bar {\Psi }}} ). Furthermore, it affects the eddy-induced bolus overturning ( Ψ ∗ {\displaystyle \Psi ^{*}} ) by damping eddies which results in reduced length and velocity scale ( L e d d y {\displaystyle L_{eddy}} & U e d d y {\displaystyle U_{eddy}} ) of eddies. The sum of this thus leads to a lower Ψ r e s {\displaystyle \Psi _{res}} .
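As a minimal numerical illustration of this formulation, the sketch below evaluates Ψ_res for a stronger (resting-ocean) and a weaker (relative) zonal-mean stress while holding the eddy term fixed; all input values are invented, Southern-Ocean-like magnitudes, and in reality the eddy damping would also reduce K.

```python
# Minimal numeric illustration of the RMOC decomposition quoted above, comparing a
# resting-ocean zonal-mean stress with a (smaller) relative wind stress.
rho0, f = 1025.0, -1.2e-4          # reference density [kg m^-3], Coriolis parameter [s^-1]
K, s = 1000.0, -1e-3               # eddy diffusivity [m^2 s^-1], isopycnal slope [-]

def psi_res(tau_x):
    """Residual overturning (per unit length) = Eulerian mean part + eddy bolus part."""
    return -tau_x / (rho0 * f) + K * s

print(psi_res(0.20))               # resting-ocean zonal-mean stress [N m^-2]
print(psi_res(0.17))               # ~15% weaker relative wind stress -> weaker overturning
```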
As briefly mentioned in Section: Impact on Models for large-scale Ocean Circulation , the surface currents can be calculated by forcing surface winds into a general circulation model. The case of a model which is also forced by relative wind stress can be visualized in Figure 5. Firstly, the satellite data is used to input the 10m wind velocity into the calculation of the relative wind stress. However, if the parameterization for relative wind stress is used, this will result in a coupled problem. The ocean model requires the relative wind stress τ r e l {\displaystyle \tau _{rel}} to output the ocean current velocity, which in turn the calculation of τ r e l {\displaystyle \tau _{rel}} relies on. [ 15 ] This coupled system needs to be formulated as an inverse problem .
Another consequence is that, depending on the parameterization used for the wind stress, a different vector field will be inputted into the ocean model and, consequently, a different value of u → o {\displaystyle {\vec {u}}_{o}} will be outputted by the ocean model. Therefore, if a different surface current field u → o {\displaystyle {\vec {u}}_{o}} is used for the calculations of P 0 {\displaystyle P_{0}} and P 1 {\displaystyle P_{1}} then it could be that P 0 − P 1 < 0 {\displaystyle P_{0}-P_{1}<0} . In other words, there may be a negative bias when calculating the work done on the ocean using the resting ocean approximation. [ 7 ] On the global scale, however, the literature has found an over- rather than underestimation, as previously mentioned. [ 3 ] | https://en.wikipedia.org/wiki/Relative_wind_stress |
The relativistic Breit–Wigner distribution (after the 1936 nuclear resonance formula [ 1 ] of Gregory Breit and Eugene Wigner ) is a continuous probability distribution with the following probability density function , [ 2 ] f ( E ) = k ( E 2 − M 2 ) 2 + M 2 Γ 2 , {\displaystyle f(E)={\frac {k}{(E^{2}-M^{2})^{2}+M^{2}\Gamma ^{2}}},} where k is a constant of proportionality, equal to k = 2 2 M Γ γ π M 2 + γ , γ = M 2 ( M 2 + Γ 2 ) . {\displaystyle k={\frac {2{\sqrt {2}}\,M\Gamma \gamma }{\pi {\sqrt {M^{2}+\gamma }}}},\quad \gamma ={\sqrt {M^{2}(M^{2}+\Gamma ^{2})}}.} (This equation is written using natural units , ħ = c = 1 .)
It is most often used to model resonances (unstable particles) in high-energy physics . In this case, E is the center-of-mass energy that produces the resonance, M is the mass of the resonance, and Γ is the resonance width (or decay width ), related to its mean lifetime according to τ = 1/Γ . (With units included, the formula is τ = ħ /Γ .)
The probability of producing the resonance at a given energy E is proportional to f ( E ) , so that a plot of the production rate of the unstable particle as a function of energy traces out the shape of the relativistic Breit–Wigner distribution. Note that for values of E off the maximum at M such that | E 2 − M 2 | = M Γ , (hence | E − M | = Γ/2 for M ≫ Γ ), the distribution f has attenuated to half its maximum value, which justifies the name width at half-maximum for Γ .
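A direct transcription of the density and its normalisation constant into Python, useful for plotting the shape described above, might look like this; the mass and width values are illustrative (roughly Z-boson-like) and natural units ħ = c = 1 are assumed, as in the text.

```python
import numpy as np

# Direct transcription of f(E) and the constant k given above (natural units).
def rel_breit_wigner(E, M, Gamma):
    gamma = np.sqrt(M**2 * (M**2 + Gamma**2))
    k = 2.0 * np.sqrt(2.0) * M * Gamma * gamma / (np.pi * np.sqrt(M**2 + gamma))
    return k / ((E**2 - M**2) ** 2 + (M * Gamma) ** 2)

M, Gamma = 91.19, 2.50                   # illustrative, Z-boson-like values in GeV
E = np.linspace(80.0, 100.0, 5)
print(rel_breit_wigner(E, M, Gamma))

# Sanity check: the density should integrate to approximately 1 over positive energies.
grid = np.linspace(1.0, 500.0, 200001)
print(np.trapz(rel_breit_wigner(grid, M, Gamma), grid))
```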
In the limit of vanishing width, Γ → 0 , the particle becomes stable as the Lorentzian distribution f sharpens infinitely to 2 Mδ ( E 2 − M 2 ) , where δ is the Dirac delta function (point impulse).
In general, Γ can also be a function of E ; this dependence is typically only important when Γ is not small compared to M , and the phase space -dependence of the width needs to be taken into account. (For example, in the decay of the rho meson into a pair of pions .) The factor of M 2 that multiplies Γ 2 should also be replaced with E 2 (or E 4 / M 2 , etc.) when the resonance is wide. [ 3 ]
The form of the relativistic Breit–Wigner distribution arises from the propagator of an unstable particle, [ 4 ] which has a denominator of the form p 2 − M 2 + iM Γ . (Here, p 2 is the square of the four-momentum carried by that particle in the tree Feynman diagram involved.) The propagator in its rest frame then is proportional to the quantum-mechanical amplitude for the decay utilized to reconstruct that resonance, k ( E 2 − M 2 ) + i M Γ . {\displaystyle {\frac {\sqrt {k}}{(E^{2}-M^{2})+iM\Gamma }}.} The resulting probability distribution is proportional to the absolute square of the amplitude, so then the above relativistic Breit–Wigner distribution for the probability density function.
The form of this distribution is similar to the amplitude of the solution to the classical equation of motion for a driven harmonic oscillator damped and driven by a sinusoidal external force. It has the standard resonance form of the Lorentz, or Cauchy distribution , but involves relativistic variables s = p 2 , here = E 2 . The distribution is the solution of the differential equation for the amplitude squared with respect to the energy (frequency), in such a classical forced oscillator, f ′ ( E ) [ ( E 2 − M 2 ) 2 + Γ 2 M 2 ] − 4 E ( M 2 − E 2 ) f ( E ) = 0 , {\displaystyle f'(\mathrm {E} ){\big [}(\mathrm {E} ^{2}-M^{2})^{2}+\Gamma ^{2}M^{2}{\big ]}-4\mathrm {E} (M^{2}-\mathrm {E} ^{2})f(\mathrm {E} )=0,} or rather f ′ ( E ) f ( E ) = 4 ( M 2 − E 2 ) E ( E 2 − M 2 ) 2 + Γ 2 M 2 , {\displaystyle {\frac {f'(\mathrm {E} )}{f(\mathrm {E} )}}={\frac {4(M^{2}-\mathrm {E} ^{2})\mathrm {E} }{(\mathrm {E} ^{2}-M^{2})^{2}+\Gamma ^{2}M^{2}}},} with f ( M ) = k Γ 2 M 2 . {\displaystyle f(M)={\frac {k}{\Gamma ^{2}M^{2}}}.}
The cross-section for resonant production of a spin- J {\displaystyle J} particle of mass M {\displaystyle M} by the collision of two particles with spins S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} is generally described by the relativistic Breit–Wigner formula: [ 5 ] σ ( E cm ) = 2 J + 1 ( 2 S 1 + 1 ) ( 2 S 2 + 1 ) 4 π p cm 2 [ Γ 2 / 4 ( E cm − E 0 ) 2 + Γ 2 / 4 ] B in , {\displaystyle \sigma (E_{\text{cm}})={\frac {2J+1}{(2S_{1}+1)(2S_{2}+1)}}{\frac {4\pi }{p_{\text{cm}}^{2}}}\left[{\frac {\Gamma ^{2}/4}{(E_{\text{cm}}-E_{0})^{2}+\Gamma ^{2}/4}}\right]B_{\text{in}},} where E cm {\displaystyle E_{\text{cm}}} is the centre-of-mass energy of the collision, E 0 = M c 2 {\displaystyle E_{0}=Mc^{2}} , p cm {\displaystyle p_{\text{cm}}} is the centre-of-mass momentum of each of the two colliding particles, Γ {\displaystyle \Gamma } is the resonance's full width at half maximum , and B in {\displaystyle B_{\text{in}}} is the branching fraction for the resonance's decay into particles S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} .
If the resonance is only being detected in a specific output channel, then the observed cross-section will be reduced by the branching fraction ( B out {\displaystyle B_{\text{out}}} ) for that decay channel.
In experiment, the incident beam that produces resonance always has some spread of energy around a central value. Usually, that is a Gaussian/normal distribution . The resulting resonance shape in this case is given by the convolution of the Breit–Wigner and the Gaussian distribution: V 2 ( E ; M , Γ , k , σ ) = ∫ − ∞ ∞ k ( E ′ 2 − M 2 ) 2 + ( M Γ ) 2 1 σ 2 π e − ( E ′ − E ) 2 2 σ 2 d E ′ . {\displaystyle V_{2}(E;M,\Gamma ,k,\sigma )=\int _{-\infty }^{\infty }{\frac {k}{(E'^{2}-M^{2})^{2}+(M\Gamma )^{2}}}{\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {(E'-E)^{2}}{2\sigma ^{2}}}}\,dE'.}
This function can be simplified [ 6 ] by introducing new variables, t = E − E ′ 2 σ , u 1 = E − M 2 σ , u 2 = E + M 2 σ , a = k π 2 σ 2 , {\displaystyle t={\frac {E-E'}{{\sqrt {2}}\,\sigma }},\quad u_{1}={\frac {E-M}{{\sqrt {2}}\,\sigma }},\quad u_{2}={\frac {E+M}{{\sqrt {2}}\,\sigma }},\quad a={\frac {k\pi }{2\sigma ^{2}}},} to obtain V 2 ( E ; M , Γ , k , σ ) = H 2 ( a , u 1 , u 2 ) σ 2 2 π , {\displaystyle V_{2}(E;M,\Gamma ,k,\sigma )={\frac {H_{2}(a,u_{1},u_{2})}{\sigma ^{2}2{\sqrt {\pi }}}},} where the relativistic line broadening function [ 6 ] has the following definition: H 2 ( a , u 1 , u 2 ) = a π ∫ − ∞ ∞ e − t 2 ( u 1 − t ) 2 ( u 2 − t ) 2 + a 2 d t . {\displaystyle H_{2}(a,u_{1},u_{2})={\frac {a}{\pi }}\int _{-\infty }^{\infty }{\frac {e^{-t^{2}}}{(u_{1}-t)^{2}(u_{2}-t)^{2}+a^{2}}}\,dt.}
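A hedged numerical sketch of this convolution — evaluating V₂ by direct quadrature rather than through the line-broadening function H₂ — is given below; the resonance and beam-spread parameters are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Numerical version of the convolution V2 defined above: the Breit–Wigner density
# smeared by a Gaussian beam-energy spread of width sigma.
def rel_bw(E, M, Gamma, k):
    return k / ((E**2 - M**2) ** 2 + (M * Gamma) ** 2)

def v2(E, M, Gamma, k, sigma):
    gauss = lambda Ep: np.exp(-(Ep - E) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    integrand = lambda Ep: rel_bw(Ep, M, Gamma, k) * gauss(Ep)
    val, _ = quad(integrand, E - 10 * sigma, E + 10 * sigma)
    return val

M, Gamma, sigma = 91.19, 2.50, 1.0                        # illustrative values
gamma = np.sqrt(M**2 * (M**2 + Gamma**2))
k = 2 * np.sqrt(2) * M * Gamma * gamma / (np.pi * np.sqrt(M**2 + gamma))
print(v2(M, M, Gamma, k, sigma))          # smeared peak height ...
print(rel_bw(M, M, Gamma, k))             # ... is lower than the unsmeared one
```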
H 2 {\displaystyle H_{2}} is the relativistic counterpart of the similar line-broadening function [ 7 ] for the Voigt profile used in spectroscopy (see also § 7.19 of [ 8 ] ). | https://en.wikipedia.org/wiki/Relativistic_Breit–Wigner_distribution |
In theoretical physics , relativistic Lagrangian mechanics is Lagrangian mechanics applied in the context of special relativity and general relativity .
The relativistic Lagrangian can be derived in relativistic mechanics to be of the form:
Although, unlike non-relativistic mechanics, the relativistic Lagrangian is not expressed as difference of kinetic energy with potential energy , the relativistic Hamiltonian corresponds to total energy in a similar manner but without including rest energy . The form of the Lagrangian also makes the relativistic action functional proportional to the proper time of the path in spacetime .
In covariant form, the Lagrangian is taken to be: [ 1 ] [ 2 ]
where σ is an affine parameter which parametrizes the spacetime curve.
Lagrangian mechanics can be formulated in special relativity as follows. Consider one particle ( N particles are considered later).
If a system is described by a Lagrangian L , the Euler–Lagrange equations
retain their form in special relativity , provided the Lagrangian generates equations of motion consistent with special relativity. Here r = ( x , y , z ) is the position vector of the particle as measured in some lab frame where Cartesian coordinates are used for simplicity, and
is the coordinate velocity, the derivative of position r with respect to coordinate time t . (Throughout this article, overdots are with respect to coordinate time, not proper time). It is possible to transform the position coordinates to generalized coordinates exactly as in non-relativistic mechanics, r = r ( q , t ) . Taking the total differential of r obtains the transformation of velocity v to the generalized coordinates, generalized velocities, and coordinate time
remains the same. However, the energy of a moving particle is different from non-relativistic mechanics. It is instructive to look at the total relativistic energy of a free test particle. An observer in the lab frame defines events by coordinates r and coordinate time t , and measures the particle to have coordinate velocity v = d r / dt . By contrast, an observer moving with the particle will record a different time, this is the proper time , τ . Expanding in a power series , the first term is the particle's rest energy , plus its non-relativistic kinetic energy , followed by higher order relativistic corrections;
where c is the speed of light in vacuum. The differentials in t and τ are related by the Lorentz factor γ , [ nb 1 ]
where · is the dot product . The relativistic kinetic energy for an uncharged particle of rest mass m 0 is
and we may naïvely guess the relativistic Lagrangian for a particle to be this relativistic kinetic energy minus the potential energy. However, even for a free particle for which V = 0, this is wrong. Following the non-relativistic approach, we expect the derivative of this seemingly correct Lagrangian with respect to the velocity to be the relativistic momentum, which it is not.
The definition of a generalized momentum can be retained, and the advantageous connection between cyclic coordinates and conserved quantities will continue to apply. The momenta can be used to "reverse-engineer" the Lagrangian. For the case of the free massive particle, in Cartesian coordinates, the x component of relativistic momentum is
and similarly for the y and z components. Integrating this equation with respect to dx / dt gives
where X is an arbitrary function of dy / dt and dz / dt from the integration. Integrating p y and p z obtains similarly
where Y and Z are arbitrary functions of their indicated variables. Since the functions X , Y , Z are arbitrary, without loss of generality we can conclude the common solution to these integrals, a possible Lagrangian that will correctly generate all the components of relativistic momentum, is
where X = Y = Z = 0 .
Alternatively, since we wish to build a Lagrangian out of relativistically invariant quantities, take the action as proportional to the integral of the Lorentz invariant line element in spacetime , the length of the particle's world line between proper times τ 1 and τ 2 , [ nb 1 ]
where ε is a constant to be found, and after converting the proper time of the particle to the coordinate time as measured in the lab frame, the integrand is the Lagrangian by definition. The momentum must be the relativistic momentum,
which requires ε = − m 0 c 2 , in agreement with the previously obtained Lagrangian.
Either way, the position vector r is absent from the Lagrangian and therefore cyclic, so the Euler–Lagrange equations are consistent with the constancy of relativistic momentum,
which must be the case for a free particle. Also, expanding the relativistic free particle Lagrangian in a power series to first order in ( v / c ) 2 ,
in the non-relativistic limit when v is small, the higher order terms not shown are negligible, and the Lagrangian is the non-relativistic kinetic energy as it should be. The remaining term is the negative of the particle's rest energy, a constant term which can be ignored in the Lagrangian.
For the case of an interacting particle subject to a potential V , which may be non-conservative, it is possible for a number of interesting cases to simply subtract this potential from the free particle Lagrangian,
and the Euler–Lagrange equations lead to the relativistic version of Newton's second law . The derivative of relativistic momentum with respect to the time coordinate is equal to the force acting on the particle:
assuming the potential V can generate the corresponding force F in this way. If the potential cannot obtain the force as shown, then the Lagrangian would need modification to obtain the correct equations of motion.
Although this has been shown by taking Cartesian coordinates, it follows due to invariance of Euler Lagrange equations , that it is also satisfied in any arbitrary co-ordinate system as it physically corresponds to action minimization being independent of the co-ordinate system used to describe it. In a similar manner, several properties in Lagrangian mechanics are preserved whenever they are also independent of the specific form of the Lagrangian or the laws of motion governing the particles. For example, it is also true that if the Lagrangian is explicitly independent of time and the potential V ( r ) independent of velocities, then the total relativistic energy
is conserved, although the identification is less obvious since the first term is the relativistic energy of the particle which includes the rest mass of the particle, not merely the relativistic kinetic energy. Also, the argument for homogeneous functions does not apply to relativistic Lagrangians.
The extension to N particles is straightforward, the relativistic Lagrangian is just a sum of the "free particle" terms, minus the potential energy of their interaction;
where all the positions and velocities are measured in the same lab frame, including the time.
The advantage of this coordinate formulation is that it can be applied to a variety of systems, including multiparticle systems. The disadvantage is that some lab frame has been singled out as a preferred frame, and none of the equations are manifestly covariant (in other words, they do not take the same form in all frames of reference). For an observer moving relative to the lab frame, everything must be recalculated; the position r , the momentum p , total energy E , potential energy, etc. In particular, if this other observer moves with constant relative velocity then Lorentz transformations must be used. However, the action will remain the same since it is Lorentz invariant by construction.
A seemingly different but completely equivalent form of the Lagrangian for a free massive particle, which will readily extend to general relativity as shown below, can be obtained by inserting [ nb 1 ]
into the Lorentz invariant action so that
where ε = − m 0 c 2 is retained for simplicity. Although the line element and action are Lorentz invariant, the Lagrangian is not , because it has explicit dependence on the lab coordinate time. Still, the equations of motion follow from Hamilton's principle
Since the action is proportional to the length of the particle's worldline (in other words its trajectory in spacetime), this route illustrates that finding the stationary action is asking to find the trajectory of shortest or largest length in spacetime. Correspondingly, the equations of motion of the particle are akin to the equations describing the trajectories of shortest or largest length in spacetime, geodesics .
For the case of an interacting particle in a potential V , the Lagrangian is still
which can also extend to many particles as shown above, each particle has its own set of position coordinates to define its position.
In the covariant formulation, time is placed on equal footing with space, so the coordinate time as measured in some frame is part of the configuration space alongside the spatial coordinates (and other generalized coordinates). [ 3 ] For a particle, either massless or massive, the Lorentz invariant action is (abusing notation) [ 4 ]
where lower and upper indices are used according to covariance and contravariance of vectors , σ is an affine parameter , and u μ = dx μ / dσ is the four-velocity of the particle.
For massive particles, σ can be the arc length s , or proper time τ , along the particle's world line,
For massless particles, it cannot because the proper time of a massless particle is always zero;
For a free particle, the Lagrangian has the form [ 1 ] [ 2 ]
where the irrelevant factor of 1/2 is allowed to be scaled away by the scaling property of Lagrangians. No inclusion of mass is necessary since this also applies to massless particles. The Euler–Lagrange equations in the spacetime coordinates are
which is the geodesic equation for affinely parameterized geodesics in spacetime. In other words, the free particle follows geodesics. Geodesics for massless particles are called "null geodesics", since they lie in a " light cone " or "null cone" of spacetime (the null comes about because their inner product via the metric is equal to 0), massive particles follow "timelike geodesics", and hypothetical particles that travel faster than light known as tachyons follow "spacelike geodesics".
This manifestly covariant formulation does not extend to an N -particle system, since then the affine parameter of any one particle cannot be defined as a common parameter for all the other particles.
For a 1d relativistic free particle , the Lagrangian is [ 5 ]
This results in the following equation of motion:
For a 1d relativistic simple harmonic oscillator , the Lagrangian is [ 6 ] [ 7 ]
where k is the spring constant.
For a particle under a constant force, the Lagrangian is [ 8 ]
where g is the force per unit mass.
This results in the following equation of motion:
Which, given initial conditions of
results in the position of the particle as a function of time being
From the Euler–Lagrange equation we have
Integrating with respect to time:
Where A {\displaystyle A} is an undetermined constant.
Solving this equation for x ˙ {\displaystyle {\dot {x}}} :
Then, using x ˙ ( t = 0 ) = v 0 {\displaystyle {\dot {x}}(t=0)=v_{0}} ,
This implies that
Thus
Note that for a large value of g t {\displaystyle gt} , we have 1 << ( γ 0 v 0 − g t ) 2 ⇒ 1 + ( γ 0 v 0 − g t ) 2 c 2 ≈ γ 0 v 0 − g t c {\displaystyle 1<<(\gamma _{0}v_{0}-gt)^{2}\Rightarrow {\sqrt {1+{\frac {(\gamma _{0}v_{0}-gt)^{2}}{c^{2}}}}}\approx {\frac {\gamma _{0}v_{0}-gt}{c}}} and see that x ˙ ( t → ∞ ) = c {\displaystyle {\dot {x}}(t\rightarrow \infty )=c} .
Then, given that
we have
Picking u ≡ 1 + ( γ 0 v 0 − g t ) 2 c 2 ⇒ d u = − 2 g c 2 ( γ 0 v 0 − g t ) d t {\displaystyle u\equiv 1+{\frac {(\gamma _{0}v_{0}-gt)^{2}}{c^{2}}}\Rightarrow du=-{\frac {2g}{c^{2}}}(\gamma _{0}v_{0}-gt)dt} , we have
Then note that ∫ u − 1 2 d u = 2 u 1 2 + C {\displaystyle \int u^{-{\frac {1}{2}}}du=2u^{\frac {1}{2}}+C} for some undetermined constant C {\displaystyle C} so that
Using u = 1 + ( γ 0 v 0 − g t ) 2 c 2 {\displaystyle u=1+{\frac {(\gamma _{0}v_{0}-gt)^{2}}{c^{2}}}} :
Recalling that x ( t = 0 ) = x 0 {\displaystyle x(t=0)=x_{0}} :
Since γ 0 = 1 1 − v 0 2 c 2 {\displaystyle \gamma _{0}={\frac {1}{\sqrt {1-{\frac {v_{0}^{2}}{c^{2}}}}}}} , we have 1 + ( γ 0 v 0 ) 2 c 2 = γ 0 2 {\displaystyle 1+{\frac {(\gamma _{0}v_{0})^{2}}{c^{2}}}=\gamma _{0}^{2}} and come to
Therefore
Plugging in the definition of γ 0 = 1 1 − v 0 2 c 2 {\displaystyle \gamma _{0}={\frac {1}{\sqrt {1-{\frac {v_{0}^{2}}{c^{2}}}}}}} and using γ 0 2 = 1 + ( γ 0 v 0 ) 2 c 2 {\displaystyle \gamma _{0}^{2}=1+{\frac {(\gamma _{0}v_{0})^{2}}{c^{2}}}} brings the solution to
The Newtonian limit of this solution can be obtained by making the following approximations, which are equivalent to stating that x ˙ ( t ) << c {\displaystyle {\dot {x}}(t)<<c} :
This simplifies the solution to
Then using the approximation that α << 1 ⇒ 1 + α ≈ 1 + α 2 {\displaystyle \alpha <<1\Rightarrow {\sqrt {1+\alpha }}\approx 1+{\frac {\alpha }{2}}} :
Which simplifies to
This is the expected solution of the equation of motion for a Newtonian particle subject to a constant force: x ¨ = − g {\displaystyle {\ddot {x}}=-g}
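As a sanity check on the closed-form solution derived above, the sketch below integrates the equation of motion d(γẋ)/dt = −g numerically and compares the result with the analytic x(t); the step size, time span and initial conditions are illustrative.

```python
import numpy as np

# Numerical check of the constant-force solution derived above: integrate
# d(gamma * v)/dt = -g and compare x(t) with the closed form.
c, g = 1.0, 0.2            # units with c = 1; g is the force per unit mass (illustrative)
x0, v0 = 0.0, 0.5
gamma0 = 1.0 / np.sqrt(1.0 - v0**2 / c**2)

def x_exact(t):
    return x0 + (c**2 / g) * (gamma0 - np.sqrt(1.0 + (gamma0 * v0 - g * t) ** 2 / c**2))

dt, T = 1e-4, 10.0
p, x = gamma0 * v0, x0                     # p is momentum per unit mass, gamma * v
for _ in range(int(T / dt)):
    v = p / np.sqrt(1.0 + p**2 / c**2)     # recover v from gamma * v
    x += v * dt                            # forward-Euler step for the position
    p -= g * dt                            # dp/dt = -g
print(x, x_exact(T))                       # the two agree to within the step-size error
```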
In special relativity, the Lagrangian of a massive charged test particle in an electromagnetic field modifies to [ 9 ] [ 10 ]
The Lagrangian equations in r lead to the Lorentz force law, in terms of the relativistic momentum
In the language of four-vectors and tensor index notation , the Lagrangian takes the form
where u μ = dx μ / dτ is the four-velocity of the test particle, and A μ the electromagnetic four-potential .
The Euler–Lagrange equations are (notice the total derivative with respect to proper time instead of coordinate time )
obtains
Under the total derivative with respect to proper time, the first term is the relativistic momentum, the second term is
then rearranging, and using the definition of the antisymmetric electromagnetic tensor , gives the covariant form of the Lorentz force law in the more familiar form,
The Lagrangian is that of a single particle plus an interaction term L I
Varying this with respect to the position of the particle x α as a function of time t gives
This gives the equation of motion
where
is the non-gravitational force on the particle. (For m to be independent of time, we must have f α dx α / dt = 0 .)
Rearranging gets the force equation
where Γ is the Christoffel symbol , which describes the gravitational field.
If we let
be the (kinetic) linear momentum for a particle with mass, then
and
hold even for a massless particle.
In general relativity , the first term generalizes (includes) both the classical kinetic energy and the interaction with the gravitational field. For a charged particle in an electromagnetic field, the Lagrangian is given by
If the four spacetime coordinates x μ are given in arbitrary units (i.e. unitless), then g μν is the rank 2 symmetric metric tensor , which is also the gravitational potential. Also, A μ is the electromagnetic 4-vector potential.
There exists an equivalent formulation of the relativistic Lagrangian, which has two advantages:
In this alternative formulation, the Lagrangian is given by
where λ {\displaystyle \lambda } is an arbitrary affine parameter and e {\displaystyle e} is an auxiliary parameter that can be viewed as an einbein field along the worldline. In the original Lagrangian with the square root the energy-momentum relation appears as a primary constraint that is also a first class constraint . In this reformulation this is no longer the case. Instead, the energy-momentum relation appears as the equation of motion for the auxiliary field e {\displaystyle e} . Therefore, the constraint is now a secondary constraint that is still a first class constraint , reflecting the invariance of the action under reparameterization of the affine parameter λ {\displaystyle \lambda } . After the equation of motion has been derived, one must gauge fix the auxiliary field e {\displaystyle e} . The standard gauge choice is as follows: | https://en.wikipedia.org/wiki/Relativistic_Lagrangian_mechanics |
In physics , relativistic beaming (also known as Doppler beaming, Doppler boosting, or the headlight effect ) is the process by which relativistic effects modify the apparent luminosity of emitting matter that is moving at speeds close to the speed of light . In an astronomical context, relativistic beaming commonly occurs in two oppositely-directed relativistic jets of plasma that originate from a central compact object that is accreting matter. Accreting compact objects and relativistic jets are invoked to explain x-ray binaries , gamma-ray bursts , and, on a much larger scale, active galactic nuclei (AGN), of which quasars are a particular variety.
Beaming affects the apparent brightness of a moving object. Consider a cloud of gas moving relative to the observer and emitting electromagnetic radiation. If the gas is moving towards the observer, it will be brighter than if it were at rest, but if the gas is moving away, it will appear fainter. The magnitude of the effect is illustrated by the AGN jets of the galaxies M87 and 3C 31 (see images at right). M87 has twin jets aimed almost directly towards and away from Earth; the jet moving towards Earth is clearly visible (the long, thin blueish feature in the top image at right), while the other jet is so much fainter it is not visible. [ 1 ] In 3C 31, both jets (labeled in the lower figure at right) are at roughly right angles to our line of sight, and thus, both are visible. The upper jet points slightly more in Earth's direction and is therefore brighter. [ 2 ]
Relativistically, moving objects are beamed due to a variety of physical effects. Light aberration causes most of the photons to be emitted along the object's direction of motion. The Doppler effect changes the energy of the photons by red- or blue shifting them. Finally, time intervals as measured by clocks moving alongside the emitting object are different from those measured by an observer on Earth due to time dilation and photon arrival time effects. How all of these effects modify the brightness, or apparent luminosity, of a moving object is determined by the equation describing the relativistic Doppler effect (which is why relativistic beaming is also known as Doppler beaming).
The simplest model for a jet is one where a single, homogeneous sphere is travelling towards the Earth at nearly the speed of light. This simple model is also an unrealistic one, but it illustrates the physical process of beaming.
Relativistic jets emit most of their energy via synchrotron emission . In our simple model, the sphere contains highly relativistic electrons and a steady magnetic field . Electrons inside the blob travel at speeds a tiny fraction below the speed of light and are whipped around by the magnetic field. Each change in direction by an electron is accompanied by the release of energy in the form of a photon. With enough electrons and a powerful enough magnetic field, the relativistic sphere can emit a huge number of photons, ranging from those at relatively weak radio frequencies to powerful X-ray photons.
A simple synchrotron spectrum has several characteristic features. At low frequencies the jet sphere is opaque, and its luminosity increases with frequency until it peaks and begins to decline; in this example the peak occurs at log ν = 3 {\displaystyle \log \nu =3} . At frequencies higher than this, the jet sphere is transparent. The luminosity decreases with frequency until a break frequency is reached, after which it declines more rapidly; here the break occurs at log ν = 7 {\displaystyle \log \nu =7} . The sharp break occurs because at very high frequencies the electrons which emit the photons lose most of their energy rapidly. A sharp decrease in the number of high-energy electrons means a sharp decrease in the spectrum.
The changes in slope in the synchrotron spectrum are parameterized with a spectral index . The spectral index , α, over a given frequency range is simply the slope on a diagram of log S {\displaystyle \log S} vs. log ν {\displaystyle \log \nu } . (Of course for α to have real meaning the spectrum must be very nearly a straight line across the range in question.)
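As a concrete illustration (a minimal sketch with made-up flux values, not taken from any measured source), the spectral index between two frequencies is just the slope of log S against log ν:

```python
import numpy as np

def spectral_index(nu1, S1, nu2, S2):
    """Spectral index alpha = slope of log S versus log nu between two points."""
    return (np.log10(S2) - np.log10(S1)) / (np.log10(nu2) - np.log10(nu1))

# Hypothetical fluxes (arbitrary units) at two radio frequencies
alpha = spectral_index(1.4e9, 2.0, 5.0e9, 1.1)
print(f"alpha = {alpha:.2f}")   # negative: flux falls with increasing frequency
```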
In the simple jet model of a single homogeneous sphere the observed luminosity is related to the intrinsic luminosity as
S o = S e D p , {\displaystyle S_{o}=S_{e}D^{p}\,,}
where p = 3 − α . {\displaystyle p=3-\alpha \,.}
The observed luminosity therefore depends on the speed of the jet and the angle to the line of sight through the Doppler factor, D {\displaystyle D} , and also on the properties inside the jet, as shown by the exponent with the spectral index.
The beaming equation can be broken down into a series of three effects: relativistic aberration, time dilation, and blue- or redshifting.
Aberration is the change in an object's apparent direction caused by the relative transverse motion of the observer. In inertial systems it is equal and opposite to the light time correction .
In everyday life aberration is a well-known phenomenon. Consider a person standing in the rain on a day when there is no wind. If the person is standing still, then the rain drops will follow a path that is straight down to the ground. However, if the person is moving, for example in a car, the rain will appear to be approaching at an angle. This apparent change in the direction of the incoming raindrops is aberration.
The amount of aberration depends on the speed of the emitted object or wave relative to the observer. In the example above this would be the speed of a car compared to the speed of the falling rain. This dependence does not change when the object is moving at a speed close to c {\displaystyle c} . In both the classical and the relativistic case, aberration depends on: 1) the speed of the emitter at the time of emission, and 2) the speed of the observer at the time of absorption.
In the case of a relativistic jet, beaming (emission aberration) will make it appear as if more energy is sent forward, along the direction the jet is traveling. In the simple jet model a homogeneous sphere will emit energy equally in all directions in the rest frame of the sphere. In the rest frame of Earth the moving sphere will be observed to be emitting most of its energy along its direction of motion. The energy, therefore, is ‘beamed’ along that direction.
Quantitatively, aberration accounts for a change in luminosity of
D 2 . {\displaystyle D^{2}.}
Time dilation is a well-known consequence of special relativity and accounts for a change in observed luminosity of D 1 . {\displaystyle D^{1}.}
Blue- or redshifting can change the observed luminosity at a particular frequency, but this is not a beaming effect.
Blueshifting accounts for a change in observed luminosity of
1 D α . {\displaystyle {\frac {1}{D^{\alpha }}}.}
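These pieces can be checked numerically. The sketch below assumes the standard Doppler factor D = 1/[γ(1 − β cos θ)] (not derived in the text above) and purely illustrative jet parameters; it confirms that the aberration, time-dilation and shift factors multiply to the overall D^(3−α) of the beaming equation:

```python
import numpy as np

def doppler_factor(beta, theta):
    """Standard relativistic Doppler factor D = 1 / (gamma * (1 - beta*cos(theta)))."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

beta, theta, alpha = 0.99, np.radians(5.0), -0.7   # illustrative jet parameters
D = doppler_factor(beta, theta)

boost_by_parts = D**2 * D**1 * D**(-alpha)   # aberration, time dilation, blueshift
boost_full = D**(3 - alpha)                  # S_o / S_e from the beaming equation
print(D, boost_by_parts, boost_full)         # the two boosts agree
```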
A more-sophisticated method of deriving the beaming equations starts with the quantity S ν 3 {\displaystyle {\frac {S}{\nu ^{3}}}} . This quantity is a Lorentz invariant, so the value is the same in different reference frames. | https://en.wikipedia.org/wiki/Relativistic_beaming |
In general relativity , the relativistic disk expression refers to a class of axi-symmetric self-consistent solutions to Einstein's field equations corresponding to the gravitational field generated by axi-symmetric isolated sources. To find such solutions, one has to pose correctly and solve together the ‘outer’ problem, a boundary value problem for vacuum Einstein's field equations whose solution determines the external field, and the ‘inner’ problem, whose solution determines the structure and the dynamics of the matter source in its own gravitational field . Physically reasonable solutions must satisfy some additional conditions such as finiteness and positiveness of mass, physically reasonable kind of matter and finite geometrical size. [ 1 ] [ 2 ] Exact solutions describing relativistic static thin disks as their sources were first studied by Bonnor and Sackfield and Morgan and Morgan. Subsequently, several classes of exact solutions corresponding to static and stationary thin disks have been obtained by different authors.
| https://en.wikipedia.org/wiki/Relativistic_disk |
For classical dynamics at relativistic speeds, see relativistic mechanics .
Relativistic dynamics refers to a combination of relativistic and quantum concepts to describe the relationships between the motion and properties of a relativistic system and the forces acting on the system. What distinguishes relativistic dynamics from other physical theories is the use of an invariant scalar evolution parameter to monitor the historical evolution of space-time events. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved. [ 1 ] Twentieth century experiments showed that the physical description of microscopic and submicroscopic objects moving at or near the speed of light raised questions about such fundamental concepts as space, time, mass, and energy. The theoretical description of the physical phenomena required the integration of concepts from relativity and quantum theory .
Vladimir Fock [ 2 ] was the first to propose an evolution parameter theory for describing relativistic quantum phenomena, but the evolution parameter theory introduced by Ernst Stueckelberg [ 3 ] [ 4 ] is more closely aligned with recent work. [ 5 ] [ 6 ] Evolution parameter theories were used by Feynman , [ 7 ] Schwinger [ 8 ] [ 9 ] and others to formulate quantum field theory in the late 1940s and early 1950s. Silvan S. Schweber [ 10 ] wrote a historical exposition of Feynman's investigation of such a theory. A resurgence of interest in evolution parameter theories began in the 1970s with the work of Horwitz and Piron , [ 11 ] and Fanchi and Collins. [ 12 ]
Some researchers view the evolution parameter as a mathematical artifact while others view the parameter as a physically measurable quantity. To understand the role of an evolution parameter and the fundamental difference between the standard theory and evolution parameter theories, it is necessary to review the concept of time.
Time t played the role of a monotonically increasing evolution parameter in classical Newtonian mechanics, as in the force law F = dP/dt for a non-relativistic, classical object with momentum P. To Newton, time was an “arrow” that parameterized the direction of evolution of a system.
Albert Einstein rejected the Newtonian concept and identified t as the fourth coordinate of a space-time four- vector . Einstein's view of time requires a physical equivalence between coordinate time and coordinate space. In this view, time should be a reversible coordinate in the same manner as space. Particles moving backward in time are often used to represent antiparticles in Feynman diagrams , but this is usually regarded as a notational convenience rather than as literal motion backward in time; some physicists, however, take the picture literally and regard it as evidence for time reversibility.
The development of non-relativistic quantum mechanics in the early twentieth century preserved the Newtonian concept of time in the Schrödinger equation. The ability of non-relativistic quantum mechanics and special relativity to successfully describe observations motivated efforts to extend quantum concepts to the relativistic domain. Physicists had to decide what role time should play in relativistic quantum theory. The role of time was a key difference between Einsteinian and Newtonian views of classical theory. Two hypotheses that were consistent with special relativity were possible:
Hypothesis I: Assume t = Einsteinian time and reject Newtonian time.
Hypothesis II: Introduce two temporal variables: a coordinate time in the Einsteinian sense and an invariant evolution parameter in the Newtonian sense.
Hypothesis I led to a relativistic probability conservation equation that is essentially a re-statement of the non-relativistic continuity equation. Time in the relativistic probability conservation equation is Einstein's time and is a consequence of implicitly adopting Hypothesis I . By adopting Hypothesis I , the standard paradigm has at its foundation a temporal paradox: motion relative to a single temporal variable must be reversible even though the second law of thermodynamics establishes an “arrow of time” for evolving systems, including relativistic systems. Thus, even though Einstein's time is reversible in the standard theory, the evolution of a system is not time reversal invariant. From the perspective of Hypothesis I , time must be both an irreversible arrow tied to entropy and a reversible coordinate in the Einsteinian sense. [ 13 ] The development of relativistic dynamics is motivated in part by the concern that Hypothesis I was too restrictive.
The problems associated with the standard formulation of relativistic quantum mechanics provide a clue to the validity of Hypothesis I . These problems included negative probabilities, hole theory, the Klein paradox , non-covariant expectation values, and so forth. [ 14 ] [ 15 ] [ 16 ] Most of these problems were never solved; they were avoided when quantum field theory (QFT) was adopted as the standard paradigm. The QFT perspective, particularly its formulation by Schwinger, is a subset of the more general Relativistic Dynamics. [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ]
Relativistic Dynamics is based on Hypothesis II and employs two temporal variables: a coordinate time, and an evolution parameter. The evolution parameter, or parameterized time, may be viewed as a physically measurable quantity, and a procedure has been presented for designing evolution parameter clocks. [ 23 ] [ 24 ] By recognizing the existence of a distinct parameterized time and a distinct coordinate time, the conflict between a universal direction of time and a time that may proceed as readily from future to past as from past to future is resolved. The distinction between parameterized time and coordinate time removes ambiguities in the properties associated with the two temporal concepts in Relativistic Dynamics. | https://en.wikipedia.org/wiki/Relativistic_dynamics |
Relativistic electron beams are streams of electrons moving at relativistic speeds. They are the lasing medium in free electron lasers intended for atmospheric research at institutions such as the Pan-oceanic Environmental and Atmospheric Research Laboratory (PEARL) at the University of Hawaii and NASA . It has been suggested that relativistic electron beams could be used to heat and accelerate the reaction mass in electrical rocket engines that Dr. Robert W. Bussard called quiet electric-discharge engines (QEDs). [ 1 ]
| https://en.wikipedia.org/wiki/Relativistic_electron_beam |
Relativistic heat conduction refers to the modelling of heat conduction (and similar diffusion processes) in a way compatible with special relativity . In special (and general ) relativity, the usual heat equation for non-relativistic heat conduction must be modified, as it leads to faster-than-light signal propagation. [ 1 ] [ 2 ] Relativistic heat conduction, therefore, encompasses a set of models for heat propagation in continuous media (solids, fluids, gases) that are consistent with relativistic causality , namely the principle that an effect must be within the light-cone associated to its cause. Any reasonable relativistic model for heat conduction must also be stable , in the sense that differences in temperature propagate both slower than light and are damped over time (this stability property is intimately intertwined with relativistic causality [ 3 ] ).
Heat conduction in a Newtonian context is modelled by the Fourier equation , [ 4 ] namely a parabolic partial differential equation of the kind: ∂ θ ∂ t = α ∇ 2 θ , {\displaystyle {\frac {\partial \theta }{\partial t}}~=~\alpha ~\nabla ^{2}\theta ,} where θ is temperature , [ 5 ] t is time , α = k /( ρ c ) is thermal diffusivity , k is thermal conductivity , ρ is density , and c is specific heat capacity . The Laplace operator , ∇ 2 {\textstyle \nabla ^{2}} , is defined in Cartesian coordinates as ∇ 2 = ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 + ∂ 2 ∂ z 2 . {\displaystyle \nabla ^{2}~=~{\frac {\partial ^{2}}{\partial x^{2}}}~+~{\frac {\partial ^{2}}{\partial y^{2}}}~+~{\frac {\partial ^{2}}{\partial z^{2}}}.}
This Fourier equation can be derived by substituting Fourier’s linear approximation of the heat flux vector, q , as a function of temperature gradient , q = − k ∇ θ , {\displaystyle \mathbf {q} ~=~-k~\nabla \theta ,} into the first law of thermodynamics ρ c ∂ θ ∂ t + ∇ ⋅ q = 0 , {\displaystyle \rho ~c~{\frac {\partial \theta }{\partial t}}~+~\nabla \cdot \mathbf {q} ~=~0,} where the del operator, ∇, is defined in 3D as ∇ = i ∂ ∂ x + j ∂ ∂ y + k ∂ ∂ z . {\displaystyle \nabla ~=~\mathbf {i} ~{\frac {\partial }{\partial x}}~+~\mathbf {j} ~{\frac {\partial }{\partial y}}~+~\mathbf {k} ~{\frac {\partial }{\partial z}}.}
It can be shown that this definition of the heat flux vector also satisfies the second law of thermodynamics, [ 6 ] ∇ ⋅ ( q θ ) + ρ ∂ s ∂ t = σ , {\displaystyle \nabla \cdot \left({\frac {\mathbf {q} }{\theta }}\right)~+~\rho ~{\frac {\partial s}{\partial t}}~=~\sigma ,} where s is specific entropy and σ is entropy production . This mathematical model is inconsistent with special relativity: the Green function associated to the heat equation (also known as heat kernel ) has support that extends outside the light-cone , leading to faster-than-light propagation of information. For example, consider a pulse of heat at the origin; then according to Fourier equation, it is felt (i.e. temperature changes) at any distant point, instantaneously. The speed of propagation of heat is faster than the speed of light in vacuum, which is inadmissible within the framework of relativity.
The parabolic model for heat conduction discussed above shows that the Fourier equation (and the more general Fick's law of diffusion ) is incompatible with the theory of relativity [ 7 ] for at least one reason: it admits infinite speed of propagation of the continuum field (in this case: heat, or temperature gradients). To overcome this contradiction, workers such as Carlo Cattaneo , [ 2 ] Vernotte, [ 8 ] Chester, [ 9 ] and others [ 10 ] proposed that the Fourier equation should be upgraded from the parabolic to a hyperbolic form, in which the temperature field θ {\displaystyle \theta } is governed by: 1 C 2 ∂ 2 θ ∂ t 2 + 1 α ∂ θ ∂ t = ∇ 2 θ . {\displaystyle {\frac {1}{C^{2}}}~{\frac {\partial ^{2}\theta }{\partial t^{2}}}~+~{\frac {1}{\alpha }}~{\frac {\partial \theta }{\partial t}}~=~\nabla ^{2}\theta .}
In this equation, C is called the speed of second sound (that is related to excitations and quasiparticles , like phonons ). The equation is known as the " hyperbolic heat conduction" (HHC) equation. [ 11 ] Mathematically, the above equation is called "telegraph equation", as it is formally equivalent to the telegrapher's equations , which can be derived from Maxwell’s equations of electrodynamics.
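For illustration only, the following is a minimal explicit finite-difference sketch of the one-dimensional hyperbolic (telegraph) equation above, with arbitrary, non-physical parameter values; unlike the Fourier equation, an initial heat pulse spreads no faster than the second-sound speed C:

```python
import numpy as np

# Arbitrary illustrative values (not physical): diffusivity and second-sound speed
alpha_d = 1.0
C = 3.0

nx, L = 400, 10.0
dx = L / nx
dt = 0.5 * dx / C                     # CFL-limited time step for the wave part
x = np.linspace(0.0, L, nx)

theta = np.exp(-((x - L / 2) ** 2) / 0.05)   # initial heat pulse at the centre
theta_old = theta.copy()                      # start from rest (zero initial rate)

A = 1.0 / (C**2 * dt**2)              # weight of the second time derivative
B = 1.0 / (2.0 * alpha_d * dt)        # weight of the first time derivative

nsteps = 200
for _ in range(nsteps):
    lap = np.zeros_like(theta)
    lap[1:-1] = (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dx**2
    theta_new = (lap + 2.0 * A * theta - (A - B) * theta_old) / (A + B)
    theta_new[0] = theta_new[-1] = 0.0        # fixed-temperature boundaries
    theta_old, theta = theta, theta_new

t = nsteps * dt
print(f"after t = {t:.3f} the disturbance is confined to |x - L/2| <= C*t = {C*t:.2f}")
```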
For the HHC equation to remain compatible with the first law of thermodynamics, it is necessary to modify the definition of heat flux vector, q , to τ 0 ∂ q ∂ t + q = − k ∇ θ , {\displaystyle \tau _{_{0}}~{\frac {\partial \mathbf {q} }{\partial t}}~+~\mathbf {q} ~=~-k~\nabla \theta ,} where τ 0 {\textstyle \tau _{_{0}}} is a relaxation time , such that C 2 = α / τ 0 . {\textstyle C^{2}~=~\alpha /\tau _{_{0}}.} This equation for the heat flux is often referred to as "Maxwell-Cattaneo equation". The most important implication of the hyperbolic equation is that by switching from a parabolic ( dissipative ) to a hyperbolic (includes a conservative term) partial differential equation , there is the possibility of phenomena such as thermal resonance [ 12 ] [ 13 ] [ 14 ] and thermal shock waves . [ 15 ] | https://en.wikipedia.org/wiki/Relativistic_heat_conduction |
In physics , relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non- quantum mechanical description of a system of particles, or of a fluid , in cases where the velocities of moving objects are comparable to the speed of light c . As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where it would be permitted for particles and light to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics , while the corresponding unification of GR with quantum mechanics is the subject of quantum gravity , an unsolved problem in physics .
As with classical mechanics, the subject can be divided into " kinematics "; the description of motion by specifying positions , velocities and accelerations , and " dynamics "; a full description by considering energies , momenta , and angular momenta and their conservation laws , and forces acting on particles or exerted by particles. There is however a subtlety: what appears to be "moving" and what is "at rest" (the latter being the subject of " statics " in classical mechanics) depends on the relative motion of observers who measure in frames of reference .
Some definitions and concepts from classical mechanics do carry over to SR, such as force as the time derivative of momentum ( Newton's second law ), the work done by a particle as the line integral of force exerted on the particle along a path, and power as the time derivative of work done. However, there are a number of significant modifications to the remaining definitions and formulae. SR states that motion is relative and the laws of physics are the same for all experimenters irrespective of their inertial reference frames . In addition to modifying notions of space and time , SR forces one to reconsider the concepts of mass , momentum , and energy all of which are important constructs in Newtonian mechanics . SR shows that these concepts are all different aspects of the same physical quantity in much the same way that it shows space and time to be interrelated.
The equations become more complicated in the more familiar three-dimensional vector calculus formalism, due to the nonlinearity in the Lorentz factor , which accurately accounts for relativistic velocity dependence and the speed limit of all particles and fields. However, they have a simpler and elegant form in four -dimensional spacetime , which includes flat Minkowski space (SR) and curved spacetime (GR), because three-dimensional vectors derived from space and scalars derived from time can be collected into four vectors , or four-dimensional tensors . The six-component angular momentum tensor is sometimes called a bivector because in the 3D viewpoint it is two vectors (one of these, the conventional angular momentum, being an axial vector ).
The relativistic four-velocity, that is the four-vector representing velocity in relativity, is defined as follows:
In the above, τ {\displaystyle {\tau }} is the proper time of the path through spacetime , called the world-line, followed by the object whose velocity the above expression represents, and
is the four-position ; the coordinates of an event . Due to time dilation , the proper time is the time between two events in a frame of reference where they take place at the same location. The proper time is related to coordinate time t by:
where γ ( v ) {\displaystyle {\gamma }(\mathbf {v} )} is the Lorentz factor :
(either version may be quoted) so it follows:
The first three terms, excepting the factor of γ ( v ) {\displaystyle {\gamma (\mathbf {v} )}} , give the velocity as seen by the observer in their own reference frame. The factor γ ( v ) {\displaystyle {\gamma (\mathbf {v} )}} is determined by the velocity v {\displaystyle \mathbf {v} } between the observer's reference frame and the object's frame, which is the frame in which its proper time is measured. Because the proper time is a Lorentz invariant, to find what an observer in a different reference frame sees, one simply multiplies the velocity four-vector by the Lorentz transformation matrix between the two reference frames.
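A short numerical sketch of this construction (assuming SI units and the (+,−,−,−) sign convention, neither of which is fixed by the text above): the Minkowski magnitude of the four-velocity comes out as c² for any 3-velocity, reflecting the invariance of proper time.

```python
import numpy as np

c = 299_792_458.0  # speed of light in m/s

def four_velocity(v):
    """Four-velocity U = gamma * (c, vx, vy, vz) for a 3-velocity v in m/s."""
    v = np.asarray(v, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
    return gamma * np.concatenate(([c], v))

U = four_velocity([0.6 * c, 0.0, 0.0])
norm2 = U[0]**2 - np.dot(U[1:], U[1:])   # Minkowski norm, signature (+,-,-,-)
print(norm2, c**2)                       # equal for any 3-velocity below c
```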
The mass of an object as measured in its own frame of reference is called its rest mass or invariant mass and is sometimes written m 0 {\displaystyle m_{0}} . If an object moves with velocity v {\displaystyle \mathbf {v} } in some other reference frame, the quantity m = γ ( v ) m 0 {\displaystyle m=\gamma (\mathbf {v} )m_{0}} is often called the object's "relativistic mass" in that frame. [ 1 ] Some authors use m {\displaystyle m} to denote rest mass, but for the sake of clarity this article will follow the convention of using m {\displaystyle m} for relativistic mass and m 0 {\displaystyle m_{0}} for rest mass. [ 2 ]
Lev Okun has suggested that the concept of relativistic mass "has no rational justification today" and should no longer be taught. [ 3 ] Other physicists, including Wolfgang Rindler and T. R. Sandin, contend that the concept is useful. [ 4 ] See mass in special relativity for more information on this debate.
A particle whose rest mass is zero is called massless . Photons and gravitons are thought to be massless, and neutrinos are nearly so.
There are a couple of (equivalent) ways to define momentum and energy in SR. One method uses conservation laws . If these laws are to remain valid in SR they must be true in every possible reference frame. However, if one does some simple thought experiments using the Newtonian definitions of momentum and energy, one sees that these quantities are not conserved in SR. One can rescue the idea of conservation by making some small modifications to the definitions to account for relativistic velocities . It is these new definitions which are taken as the correct ones for momentum and energy in SR.
The four-momentum of an object is straightforward, identical in form to the classical momentum, but replacing 3-vectors with 4-vectors:
The energy and momentum of an object with invariant mass m 0 {\displaystyle m_{0}} , moving with velocity v {\displaystyle \mathbf {v} } with respect to a given frame of reference, are respectively given by
The factor γ {\displaystyle \gamma } comes from the definition of the four-velocity described above. The appearance of γ {\displaystyle \gamma } may be stated in an alternative way, which will be explained in the next section.
The kinetic energy, K {\displaystyle K} , is defined as
and the speed as a function of kinetic energy is given by
The spatial momentum may be written as p = m v {\displaystyle \mathbf {p} =m\mathbf {v} } , preserving the form from Newtonian mechanics with relativistic mass substituted for Newtonian mass. However, this substitution fails for some quantities, including force and kinetic energy. Moreover, the relativistic mass is not invariant under Lorentz transformations, while the rest mass is. For this reason, many people prefer to use the rest mass and account for γ {\displaystyle \gamma } explicitly through the 4-velocity or coordinate time.
A simple relation between energy, momentum, and velocity may be obtained from the definitions of energy and momentum by multiplying the energy by v {\displaystyle \mathbf {v} } , multiplying the momentum by c 2 {\displaystyle c^{2}} , and noting that the two expressions are equal. This yields
v {\displaystyle \mathbf {v} } may then be eliminated by dividing this equation by c {\displaystyle c} and squaring,
dividing the definition of energy by γ {\displaystyle \gamma } and squaring,
and substituting:
This is the relativistic energy–momentum relation .
While the energy E {\displaystyle E} and the momentum p {\displaystyle \mathbf {p} } depend on the frame of reference in which they are measured, the quantity E 2 − ( p c ) 2 {\displaystyle E^{2}-(pc)^{2}} is invariant. Its value is − c 2 {\displaystyle -c^{2}} times the squared magnitude of the 4-momentum vector.
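A quick numerical check of this invariance, in units where c = 1 and using the standard expressions E = γ(v)m₀c² and p = γ(v)m₀v together with a one-dimensional Lorentz boost (standard results assumed here rather than derived): the combination E² − (pc)² is unchanged by the boost and equals (m₀c²)².

```python
import numpy as np

c, m0 = 1.0, 1.0                     # natural units, unit rest mass
v = 0.6 * c
gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
E, p = gamma * m0 * c**2, gamma * m0 * v        # energy and momentum in one frame

u = 0.3 * c                                      # boost to a frame moving at u
gu = 1.0 / np.sqrt(1.0 - (u / c) ** 2)
E2 = gu * (E - u * p)                            # boosted energy
p2 = gu * (p - u * E / c**2)                     # boosted momentum

print(E**2 - (p * c)**2, E2**2 - (p2 * c)**2, (m0 * c**2)**2)   # all equal
```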
The invariant mass of a system may be written as
Due to kinetic energy and binding energy, this quantity is different from the sum of the rest masses of the particles of which the system is composed. Rest mass is not a conserved quantity in special relativity, unlike the situation in Newtonian physics. However, even if an object is changing internally, so long as it does not exchange energy or momentum with its surroundings, its rest mass will not change and can be calculated with the same result in any reference frame.
The relativistic energy–momentum equation holds for all particles, even for massless particles for which m 0 = 0. In this case:
When substituted into Ev = c 2 p , this gives v = c : massless particles (such as photons ) always travel at the speed of light.
Notice that the rest mass of a composite system will generally be slightly different from the sum of the rest masses of its parts since, in its rest frame, their kinetic energy will increase its mass and their (negative) binding energy will decrease its mass. In particular, a hypothetical "box of light" would have rest mass even though made of particles which do not since their momenta would cancel.
Looking at the above formula for invariant mass of a system, one sees that, when a single massive object is at rest ( v = 0 , p = 0 ), there is a non-zero mass remaining: m 0 = E / c 2 .
The corresponding energy, which is also the total energy when a single particle is at rest, is referred to as "rest energy". In systems of particles which are seen from a moving inertial frame, total energy increases and so does momentum. However, for single particles the rest mass remains constant, and for systems of particles the invariant mass remains constant, because in both cases the increases in energy and momentum subtract from each other and cancel. Thus, the invariant mass of systems of particles is a calculated constant for all observers, as is the rest mass of single particles.
For systems of particles, the energy–momentum equation requires summing the momentum vectors of the particles:
The inertial frame in which the momenta of all particles sums to zero is called the center of momentum frame . In this special frame, the relativistic energy–momentum equation has p = 0, and thus gives the invariant mass of the system as merely the total energy of all parts of the system, divided by c 2
This is the invariant mass of any system which is measured in a frame where it has zero total momentum, such as a bottle of hot gas on a scale. In such a system, the mass which the scale weighs is the invariant mass, and it depends on the total energy of the system. It is thus more than the sum of the rest masses of the molecules, since it also includes all the other energies in the system (such as the thermal kinetic energy of the molecules). Like energy and momentum, the invariant mass of isolated systems cannot be changed so long as the system remains totally closed (no mass or energy allowed in or out), because the total relativistic energy of the system remains constant so long as nothing can enter or leave it.
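The "box of light" remark above can be made concrete. In units where c = 1, two photons of equal energy moving in opposite directions have zero total momentum, so the invariant mass of the pair equals their total energy divided by c², even though each photon is individually massless (a minimal sketch, not a statement about any particular physical system):

```python
import numpy as np

c = 1.0                                   # natural units
E_ph = 1.0                                # energy of each photon
p1 = np.array([ E_ph / c, 0.0, 0.0])      # photon momenta: equal and opposite
p2 = np.array([-E_ph / c, 0.0, 0.0])

E_tot = 2 * E_ph
p_tot = p1 + p2                           # vector sum is zero

M = np.sqrt(E_tot**2 - c**2 * np.dot(p_tot, p_tot)) / c**2
print(M)    # 2*E_ph/c^2: the two-photon system has nonzero invariant mass
```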
An increase in the energy of such a system which is caused by translating the system to an inertial frame which is not the center of momentum frame , causes an increase in energy and momentum without an increase in invariant mass. E = m 0 c 2 , however, applies only to isolated systems in their center-of-momentum frame where momentum sums to zero.
Taking this formula at face value, we see that in relativity, mass is simply energy by another name (and measured in different units). In 1927 Einstein remarked about special relativity, "Under this theory mass is not an unalterable magnitude, but a magnitude dependent on (and, indeed, identical with) the amount of energy." [ 5 ]
In a "totally-closed" system (i.e., isolated system ) the total energy, the total momentum, and hence the total invariant mass are conserved. Einstein's formula for change in mass translates to its simplest Δ E = Δ mc 2 form, however, only in non-closed systems in which energy is allowed to escape (for example, as heat and light), and thus invariant mass is reduced. Einstein's equation shows that such systems must lose mass, in accordance with the above formula, in proportion to the energy they lose to the surroundings. Conversely, if one can measure the differences in mass between a system before it undergoes a reaction which releases heat and light, and the system after the reaction when heat and light have escaped, one can estimate the amount of energy which escapes the system.
In both nuclear and chemical reactions, such energy represents the difference in binding energies of electrons in atoms (for chemistry) or between nucleons in nuclei (in atomic reactions). In both cases, the mass difference between reactants and (cooled) products measures the mass of heat and light which will escape the reaction, and thus (using the equation) give the equivalent energy of heat and light which may be emitted if the reaction proceeds.
In chemistry, the mass differences associated with the emitted energy are around 10 −9 of the molecular mass. [ 6 ] However, in nuclear reactions the energies are so large that they are associated with mass differences, which can be estimated in advance, if the products and reactants have been weighed (atoms can be weighed indirectly by using atomic masses, which are always the same for each nuclide ). Thus, Einstein's formula becomes important when one has measured the masses of different atomic nuclei. By looking at the difference in masses, one can predict which nuclei have stored energy that can be released by certain nuclear reactions , providing important information which was useful in the development of nuclear energy and, consequently, the nuclear bomb . Historically, for example, Lise Meitner was able to use the mass differences in nuclei to estimate that there was enough energy available to make nuclear fission a favorable process. The implications of this special form of Einstein's formula have thus made it one of the most famous equations in all of science.
The equation E = m 0 c 2 applies only to isolated systems in their center of momentum frame . It has been popularly misunderstood to mean that mass may be converted to energy, after which the mass disappears. However, popular explanations of the equation as applied to systems include open (non-isolated) systems for which heat and light are allowed to escape, when they otherwise would have contributed to the mass ( invariant mass ) of the system.
Historically, confusion about mass being "converted" to energy has been aided by confusion between mass and " matter ", where matter is defined as fermion particles. In such a definition, electromagnetic radiation and kinetic energy (or heat) are not considered "matter". In some situations, matter may indeed be converted to non-matter forms of energy (see above), but in all these situations, the matter and non-matter forms of energy still retain their original mass.
For isolated systems (closed to all mass and energy exchange), mass never disappears in the center of momentum frame, because energy cannot disappear. Instead, this equation, in context, means only that when any energy is added to, or escapes from, a system in the center-of-momentum frame, the system will be measured as having gained or lost mass, in proportion to energy added or removed. Thus, in theory, if an atomic bomb were placed in a box strong enough to hold its blast, and detonated upon a scale, the mass of this closed system would not change, and the scale would not move. Only when a transparent "window" was opened in the super-strong plasma-filled box, and light and heat were allowed to escape in a beam, and the bomb components to cool, would the system lose the mass associated with the energy of the blast. In a 21 kiloton bomb, for example, about a gram of light and heat is created. If this heat and light were allowed to escape, the remains of the bomb would lose a gram of mass, as it cooled. In this thought-experiment, the light and heat carry away the gram of mass, and would therefore deposit this gram of mass in the objects that absorb them. [ 7 ]
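The gram figure quoted above is easy to check with E = mc², taking the conventional TNT equivalent of 4.184×10¹² J per kiloton (a back-of-the-envelope sketch, not a statement about any particular weapon design):

```python
c = 299_792_458.0          # speed of light in m/s
kt_TNT = 4.184e12          # joules per kiloton of TNT (conventional value)

delta_m = 1.0e-3           # one gram, expressed in kilograms
energy = delta_m * c**2    # E = m c^2
print(energy / kt_TNT)     # ~21.5 kilotons, consistent with the figure quoted above
```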
In relativistic mechanics, the time-varying mass moment
and orbital 3-angular momentum
of a point-like particle are combined into a four-dimensional bivector in terms of the 4-position X and the 4-momentum P of the particle: [ 8 ] [ 9 ]
where ∧ denotes the exterior product . This tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system. So, for an assembly of discrete particles one sums the angular momentum tensors over the particles, or integrates the density of angular momentum over the extent of a continuous mass distribution.
Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields.
In special relativity, Newton's second law does not hold in the form F = m a , but it does if it is expressed as
where p = γ( v ) m 0 v is the momentum as defined above and m 0 is the invariant mass . Thus, the force is given by
Starting from
Carrying out the derivatives gives
If the acceleration is separated into the part parallel to the velocity ( a ∥ ) and the part perpendicular to it ( a ⊥ ) , so that:
one gets
By construction a ∥ and v are parallel, so ( v · a ∥ ) v is a vector with magnitude v 2 a ∥ in the direction of v (and hence a ∥ ) which allows the replacement:
then
Consequently, in some old texts, γ( v ) 3 m 0 is referred to as the longitudinal mass , and γ( v ) m 0 is referred to as the transverse mass , which is numerically the same as the relativistic mass . See mass in special relativity .
If one inverts this to calculate acceleration from force, one gets
The force described in this section is the classical 3-D force which is not a four-vector . This 3-D force is the appropriate concept of force since it is the force which obeys Newton's third law of motion . It should not be confused with the so-called four-force which is merely the 3-D force in the comoving frame of the object transformed as if it were a four-vector. However, the density of 3-D force (linear momentum transferred per unit four-volume ) is a four-vector ( density of weight +1) when combined with the negative of the density of power transferred.
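As a numerical sketch of the inversion mentioned above, the code below uses the inverse relation a = (F − (v·F)v/c²) / (γ(v)m₀), which follows from the decomposition of the force into parallel and perpendicular parts, in units where c = m₀ = 1; it recovers the longitudinal mass γ³m₀ and the transverse mass γm₀:

```python
import numpy as np

c, m0 = 1.0, 1.0

def acceleration(F, v):
    """3-acceleration from a 3-force: a = (F - (v.F) v / c^2) / (gamma * m0)."""
    F, v = np.asarray(F, dtype=float), np.asarray(v, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
    return (F - np.dot(v, F) * v / c**2) / (gamma * m0)

v = np.array([0.8 * c, 0.0, 0.0])
gamma = 1.0 / np.sqrt(1.0 - 0.8**2)

a_par = acceleration([1.0, 0.0, 0.0], v)    # unit force parallel to v
a_perp = acceleration([0.0, 1.0, 0.0], v)   # unit force perpendicular to v

print(a_par[0] * gamma**3 * m0)   # 1.0: "longitudinal mass" gamma^3 m0
print(a_perp[1] * gamma * m0)     # 1.0: "transverse mass" gamma m0
```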
The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time: [ 10 ] [ 11 ]
or in tensor components:
where F is the 4d force acting on the particle at the event X . As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass.
The work-energy theorem says [ 12 ] the change in kinetic energy is equal to the work done on the body. In special relativity:
If in the initial state the body was at rest, so v 0 = 0 and γ 0 ( v 0 ) = 1, and in the final state it has speed v 1 = v , setting γ 1 ( v 1 ) = γ( v ), the kinetic energy is then:
a result that can be directly obtained by subtracting the rest energy m 0 c 2 from the total relativistic energy γ( v ) m 0 c 2 .
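A short symbolic check of this expression, and of the low-velocity expansion discussed just below, can be done with sympy; the leading term of the series is the Newtonian ½m₀v² (a sketch, with symbols chosen here for convenience):

```python
import sympy as sp

v, c, m0 = sp.symbols('v c m_0', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

K = (gamma - 1) * m0 * c**2      # kinetic energy: total energy minus rest energy
print(sp.series(K, v, 0, 7))     # m_0*v**2/2 + 3*m_0*v**4/(8*c**2) + ...
```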
The Lorentz factor γ( v ) can be expanded into a Taylor series or binomial series for ( v / c ) 2 < 1, obtaining:
and consequently
For velocities much smaller than that of light, one can neglect the terms with c 2 and higher in the denominator. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities. | https://en.wikipedia.org/wiki/Relativistic_mechanics |
In particle physics , a relativistic particle is an elementary particle with kinetic energy greater than or equal to its rest-mass energy, given by Einstein's relation E = m 0 c 2 {\displaystyle E=m_{0}c^{2}} , or, in practice, one whose velocity is comparable to the speed of light c {\displaystyle c} . [ 1 ]
Photons satisfy this condition trivially, in that special relativity is required to describe their behaviour. Several approaches exist as a means of describing the motion of single and multiple relativistic particles, a prominent example being the description of single-particle motion through the Dirac equation . [ 2 ]
Since the energy-momentum relation of a particle can be written as: [ 3 ]
E 2 = ( p c ) 2 + ( m 0 c 2 ) 2 {\displaystyle E^{2}=(p{\textrm {c}})^{2}+\left(m_{0}{\textrm {c}}^{2}\right)^{2}\,}
where E {\displaystyle E} is the energy, p {\displaystyle p} is the momentum, and m 0 {\displaystyle m_{0}} is the rest mass,
when the rest mass tends to zero, e.g. for a photon, or the momentum is very large, e.g. for a fast-moving proton, this relation collapses into a linear dispersion, i.e.
E = p c {\displaystyle E=p{\textrm {c}}}
This is different from the parabolic energy-momentum relation for classical particles. Thus, in practice, the linearity or the non-parabolicity of the energy-momentum relation is considered as a key feature for relativistic particles. These two types of relativistic particles are referred to as massless and massive, respectively.
In experiments, massive particles are relativistic when their kinetic energy is comparable to or greater than the energy E = m 0 c 2 {\displaystyle E=m_{0}c^{2}} corresponding to their rest mass. In other words, a massive particle is relativistic when its total mass-energy is at least twice its rest energy. This condition implies that the speed of the particle is close to the speed of light. According to the Lorentz factor formula, this requires the particle to move at roughly 87% of the speed of light. Such relativistic particles are generated in particle accelerators , [ a ] as well as naturally occurring in cosmic radiation . [ b ] In astrophysics , jets of relativistic plasma are produced by the centers of active galaxies and quasars . [ 4 ]
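A one-line check of the threshold just described (γ = 2, i.e. kinetic energy equal to rest energy):

```python
import numpy as np

gamma = 2.0                            # kinetic energy equals rest energy
beta = np.sqrt(1.0 - 1.0 / gamma**2)   # from the Lorentz factor formula
print(beta)                            # ~0.866, i.e. roughly 87% of the speed of light
```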
A charged relativistic particle crossing the interface of two media with different dielectric constants emits transition radiation . This is exploited in the transition radiation detectors of high-velocity particles. [ 5 ]
Relativistic electrons can also exist in some solid state materials, [ 6 ] [ 7 ] [ 8 ] [ 9 ] including semimetals such as graphene, [ 6 ] topological insulators, [ 10 ] bismuth antimony alloys, [ 11 ] and semiconductors such as transition metal dichalcogenides [ 12 ] and black phosphorene layers. [ 13 ] These lattice-confined electrons with relativistic effects that can be described using the Dirac equation are also called desktop relativistic electrons or Dirac electrons.
Relativistic plasmas in physics are plasmas for which relativistic corrections to a particle's mass and velocity are important. Such corrections typically become important when a significant number of electrons reach speeds greater than 0.86 c ( Lorentz factor γ {\displaystyle \gamma } =2).
Such plasmas may be created either by heating a gas to very high temperatures or by the impact of a high-energy particle beam. A relativistic plasma with a thermal distribution function has temperatures greater than around 260 keV, or 3.0 GK (5.5 billion degrees Fahrenheit), where approximately 10% of the electrons have γ > 2 {\displaystyle \gamma >2} . Since these temperatures are so high, most relativistic plasmas are small and brief, and are often the result of a relativistic beam impacting some target. (More mundanely, "relativistic plasma" might denote a normal, cold plasma moving at a significant fraction of the speed of light relative to the observer.)
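The keV-to-kelvin correspondence quoted above follows from dividing the thermal energy by the Boltzmann constant; a quick check:

```python
k_B = 8.617e-5           # Boltzmann constant in eV per kelvin

T_keV = 260.0            # characteristic thermal energy k_B * T in keV
T_K = T_keV * 1e3 / k_B
print(f"{T_K:.2e} K")    # ~3.0e9 K, i.e. about 3.0 GK, as stated above
```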
Relativistic plasmas may result when two particle beams collide at speeds comparable to the speed of light, and in the cores of supernovae. Plasmas hot enough for particles other than electrons to be relativistic are even more rare, since other particles are more massive and thus require more energy to accelerate to a significant fraction of the speed of light. (About 10% of protons would have γ > 2 {\displaystyle \gamma >2} at a temperature of 481 MeV, or 5.6 TK .) Still higher energies are necessary to achieve a quark–gluon plasma .
The primary changes in a plasma's behavior as it approaches the relativistic regime are slight modifications to the equations which describe a non-relativistic plasma and to collision and interaction cross sections . The equations may also need modifications to account for pair production of electron-positron pairs (or other particles at the highest temperatures).
A plasma double layer with a large potential drop and layer separation, may accelerate electrons to relativistic velocities, and produce synchrotron radiation .
| https://en.wikipedia.org/wiki/Relativistic_plasma |
Relativistic quantum chemistry combines relativistic mechanics with quantum chemistry to calculate elemental properties and structure, especially for the heavier elements of the periodic table . A prominent example is an explanation for the color of gold : due to relativistic effects, it is not silvery like most other metals. [ 1 ]
The term relativistic effects was developed in light of the history of quantum mechanics. Initially, quantum mechanics was developed without considering the theory of relativity . [ 2 ] Relativistic effects are those discrepancies between values calculated by models that consider relativity and those that do not. [ 3 ] Relativistic effects are important for heavier elements with high atomic numbers , such as lanthanides and actinides . [ 4 ]
Relativistic effects in chemistry can be considered to be perturbations , or small corrections, to the non-relativistic theory of chemistry, which is developed from the solutions of the Schrödinger equation . These corrections affect the electrons differently depending on the electron speed compared with the speed of light . Relativistic effects are more prominent in heavy elements because only in these elements do electrons attain sufficient speeds for the elements to have properties that differ from what non-relativistic chemistry predicts. [ 5 ]
Beginning in 1935, Bertha Swirles described a relativistic treatment of a many-electron system, [ 6 ] despite Paul Dirac 's 1929 assertion that the only imperfections remaining in quantum mechanics "give rise to difficulties only when high-speed particles are involved and are therefore of no importance in the consideration of the atomic and molecular structure and ordinary chemical reactions in which it is, indeed, usually sufficiently accurate if one neglects relativity variation of mass and velocity and assumes only Coulomb forces between the various electrons and atomic nuclei". [ 7 ]
Theoretical chemists by and large agreed with Dirac's sentiment until the 1970s, when relativistic effects were observed in heavy elements. [ 8 ] The Schrödinger equation had been developed without considering relativity in Schrödinger's 1926 article. [ 9 ] Relativistic corrections were made to the Schrödinger equation (see Klein–Gordon equation ) to describe the fine structure of atomic spectra, but this development and others did not immediately trickle into the chemical community. Since atomic spectral lines were largely in the realm of physics and not in that of chemistry, most chemists were unfamiliar with relativistic quantum mechanics, and their attention was on lighter elements typical for the organic chemistry focus of the time. [ 10 ]
Dirac's opinion on the role relativistic quantum mechanics would play for chemical systems has been largely dismissed for two main reasons. First, electrons in s and p atomic orbitals travel at a significant fraction of the speed of light. Second, relativistic effects give rise to indirect consequences that are especially evident for d and f atomic orbitals. [ 8 ]
One of the most important and familiar results of relativity is that the relativistic mass of the electron increases as
m rel = m e 1 − ( v e / c ) 2 , {\displaystyle m_{\text{rel}}={\frac {m_{\text{e}}}{\sqrt {1-(v_{\text{e}}/c)^{2}}}},}
where m e , v e , c {\displaystyle m_{e},v_{e},c} are the electron rest mass , velocity of the electron, and speed of light respectively. The figure at the right illustrates this relativistic effect as a function of velocity.
This has an immediate implication on the Bohr radius ( a 0 {\displaystyle a_{0}} ), which is given by
a 0 = ℏ m e c α , {\displaystyle a_{0}={\frac {\hbar }{m_{\text{e}}c\alpha }},}
where ℏ {\displaystyle \hbar } is the reduced Planck constant , and α is the fine-structure constant (a relativistic correction for the Bohr model ).
Bohr calculated that a 1s orbital electron of a hydrogen atom orbiting at the Bohr radius of 0.0529 nm travels at nearly 1/137 the speed of light. [ 11 ] One can extend this to a larger element with an atomic number Z by using the expression v ≈ Z c 137 {\displaystyle v\approx {\frac {Zc}{137}}} for a 1s electron, where v is its orbital speed, i.e., its instantaneous speed tangent to the radius of the atom. For gold with Z = 79, v ≈ 0.58 c , so the 1s electron will be moving at 58% of the speed of light. Substituting this in for v / c in the equation for the relativistic mass, one finds that m rel = 1.22 m e , and in turn putting this in for the Bohr radius above one finds that the radius contracts by the same factor, to about 82% of its nonrelativistic value.
If one substitutes the "relativistic mass" into the equation for the Bohr radius it can be written a rel = ℏ 1 − ( v e / c ) 2 m e c α . {\displaystyle a_{\text{rel}}={\frac {\hbar {\sqrt {1-(v_{\text{e}}/c)^{2}}}}{m_{\text{e}}c\alpha }}.}
It follows that a rel a 0 = 1 − ( v e / c ) 2 . {\displaystyle {\frac {a_{\text{rel}}}{a_{0}}}={\sqrt {1-(v_{\text{e}}/c)^{2}}}.}
At right, the above ratio of the relativistic and nonrelativistic Bohr radii has been plotted as a function of the electron velocity. Notice how the relativistic model shows the radius decreases with increasing velocity.
When the Bohr treatment is extended to hydrogenic atoms , the Bohr radius becomes r = n 2 Z a 0 = n 2 ℏ 2 4 π ε 0 m e Z e 2 , {\displaystyle r={\frac {n^{2}}{Z}}a_{0}={\frac {n^{2}\hbar ^{2}4\pi \varepsilon _{0}}{m_{\text{e}}Ze^{2}}},} where n {\displaystyle n} is the principal quantum number , and Z is an integer for the atomic number . In the Bohr model , the angular momentum is given as m v e r = n ℏ {\displaystyle mv_{\text{e}}r=n\hbar } . Substituting into the equation above and solving for v e {\displaystyle v_{\text{e}}} gives r = n 2 a 0 Z = n ℏ m v e , v e = Z n 2 a 0 n ℏ m , v e c = Z α n = Z e 2 4 π ε 0 ℏ c n . {\displaystyle {\begin{aligned}r&={\frac {n^{2}a_{0}}{Z}}={\frac {n\hbar }{mv_{\text{e}}}},\\v_{\text{e}}&={\frac {Z}{n^{2}a_{0}}}{\frac {n\hbar }{m}},\\{\frac {v_{\text{e}}}{c}}&={\frac {Z\alpha }{n}}={\frac {Ze^{2}}{4\pi \varepsilon _{0}\hbar cn}}.\end{aligned}}}
From this point, atomic units can be used to simplify the expression into; v e = Z n . {\displaystyle v_{\text{e}}={\frac {Z}{n}}.}
Substituting this into the expression for the Bohr ratio mentioned above gives a rel a 0 = 1 − ( Z n c ) 2 . {\displaystyle {\frac {a_{\text{rel}}}{a_{0}}}={\sqrt {1-\left({\frac {Z}{nc}}\right)^{2}}}.}
At this point one can see that a low value of n {\displaystyle n} and a high value of Z {\displaystyle Z} results in a rel a 0 < 1 {\displaystyle {\frac {a_{\text{rel}}}{a_{0}}}<1} . This fits with intuition: electrons with lower principal quantum numbers will have a higher probability density of being nearer to the nucleus. A nucleus with a large charge will cause an electron to have a high velocity. A higher electron velocity means an increased electron relativistic mass, and as a result the electrons will be near the nucleus more of the time and thereby contract the radius for small principal quantum numbers. [ 12 ]
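A quick numerical restatement of the gold example above, using the Bohr-model estimate v/c ≈ Zα for a 1s electron (illustrative only, since the Bohr model is itself an approximation):

```python
import numpy as np

alpha_fs = 1 / 137.036   # fine-structure constant
Z = 79                   # atomic number of gold

beta = Z * alpha_fs                     # v/c for a 1s electron in the Bohr picture
m_ratio = 1 / np.sqrt(1 - beta**2)      # relativistic mass ratio m_rel / m_e
a_ratio = np.sqrt(1 - beta**2)          # Bohr-radius ratio a_rel / a_0

print(f"v/c       = {beta:.3f}")        # ~0.58
print(f"m_rel/m_e = {m_ratio:.2f}")     # ~1.22
print(f"a_rel/a_0 = {a_ratio:.2f}")     # ~0.82: the 1s radius contracts by ~1.22x
```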
Mercury (Hg) is a liquid down to approximately −39 °C , its melting point . Bonding forces are weaker for Hg–Hg bonds than for their immediate neighbors such as cadmium (m.p. 321 °C) and gold (m.p. 1064 °C). The lanthanide contraction only partially accounts for this anomaly. [ 11 ] Because the 6s 2 orbital is contracted by relativistic effects and may therefore only weakly contribute to any chemical bonding, Hg–Hg bonding must be mostly the result of van der Waals forces . [ 11 ] [ 13 ] [ 14 ]
Mercury gas is mostly monatomic, Hg(g). Hg 2 (g) rarely forms and has a low dissociation energy, as expected due to the lack of strong bonds. [ 15 ]
Au 2 (g) and Hg(g) are analogous to H 2 (g) and He(g), respectively: the difference within each pair is of the same nature. The relativistic contraction of the 6s 2 orbital leads to gaseous mercury sometimes being referred to as a pseudo noble gas . [ 11 ]
The reflectivity of aluminium (Al), silver (Ag), and gold (Au) is shown in the graph to the right. The human eye sees electromagnetic radiation with a wavelength near 600 nm as yellow. Gold absorbs blue light more than it absorbs other visible wavelengths of light; the reflected light reaching the eye is therefore lacking in blue compared with the incident light. Since yellow is complementary to blue, this makes a piece of gold under white light appear yellow to human eyes.
The electronic transition from the 5d orbital to the 6s orbital is responsible for this absorption. An analogous transition occurs in silver, but the relativistic effects are smaller than in gold. While silver's 4d orbital experiences some relativistic expansion and its 5s orbital some contraction, the 4d–5s distance in silver is much greater than the 5d–6s distance in gold. The relativistic effects increase the 5d orbital's distance from the atom's nucleus and decrease the 6s orbital's distance. Due to the decreased 6s orbital distance, the electronic transition primarily absorbs in the violet/blue region of the visible spectrum, as opposed to the UV region. [ 16 ]
Caesium , the heaviest of the alkali metals that can be collected in quantities sufficient for viewing, has a golden hue, whereas the other alkali metals are silver-white. However, relativistic effects are not very significant at Z = 55 for caesium (not far from Z = 47 for silver). The golden color of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. For lithium through rubidium, this frequency is in the ultraviolet, but for caesium it reaches the blue-violet end of the visible spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially, while other colors (having lower frequency) are reflected; hence it appears yellowish. [ 17 ]
Without relativity, lead ( Z = 82) would be expected to behave much like tin ( Z = 50), so tin–acid batteries should work just as well as the lead–acid batteries commonly used in cars. However, calculations show that about 10 V of the 12 V produced by a 6-cell lead–acid battery arises purely from relativistic effects, explaining why tin–acid batteries do not work. [ 18 ]
In Tl(I) ( thallium ), Pb(II) ( lead ), and Bi(III) ( bismuth ) complexes a 6s 2 electron pair exists. The inert pair effect is the tendency of this pair of electrons to resist oxidation due to a relativistic contraction of the 6s orbital. [ 8 ]
Additional phenomena commonly caused by relativistic effects are the following: | https://en.wikipedia.org/wiki/Relativistic_quantum_chemistry |
In physics , relativistic quantum mechanics ( RQM ) is any Poincaré - covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light c , and can accommodate massless particles . The theory has application in high-energy physics , [ 1 ] particle physics and accelerator physics , [ 2 ] as well as atomic physics , chemistry [ 3 ] and condensed matter physics . [ 4 ] [ 5 ] Non-relativistic quantum mechanics refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity , more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators . Relativistic quantum mechanics (RQM) is quantum mechanics applied with special relativity . Although the earlier formulations, like the Schrödinger picture and Heisenberg picture were originally formulated in a non-relativistic background, a few of them (e.g. the Dirac or path-integral formalism) also work with special relativity.
Key features common to all RQMs include: the prediction of antimatter , spin magnetic moments of elementary spin-1/2 fermions , fine structure , and quantum dynamics of charged particles in electromagnetic fields . [ 6 ] The key result is the Dirac equation , from which these predictions emerge automatically. By contrast, in non-relativistic quantum mechanics, terms have to be introduced artificially into the Hamiltonian operator to achieve agreement with experimental observations.
The most successful (and most widely used) RQM is relativistic quantum field theory (QFT), in which elementary particles are interpreted as field quanta . A unique consequence of QFT that has been tested against other RQMs is the failure of conservation of particle number, for example, in matter creation and annihilation . [ 7 ]
Paul Dirac 's work between 1927 and 1933 shaped the synthesis of special relativity and quantum mechanics. [ 8 ] His work was instrumental, as he formulated the Dirac equation and also originated quantum electrodynamics , both of which were successful in combining the two theories. [ 9 ]
In this article, the equations are written in familiar 3D vector calculus notation and use hats for operators (not necessarily in the literature), and where space and time components can be collected, tensor index notation is shown also (frequently used in the literature), in addition the Einstein summation convention is used. SI units are used here; Gaussian units and natural units are common alternatives. All equations are in the position representation; for the momentum representation the equations have to be Fourier-transformed – see position and momentum space .
One approach is to modify the Schrödinger picture to be consistent with special relativity. [ 2 ]
A postulate of quantum mechanics is that the time evolution of any quantum system is given by the Schrödinger equation :
using a suitable Hamiltonian operator Ĥ corresponding to the system. The solution is a complex -valued wavefunction ψ ( r , t ) , a function of the 3D position vector r of the particle at time t , describing the behavior of the system.
Every particle has a non-negative spin quantum number s . The number 2 s is an integer, odd for fermions and even for bosons . Each s has 2 s + 1 z -projection quantum numbers; σ = s , s − 1, ... , − s + 1, − s . [ a ] This is an additional discrete variable the wavefunction requires; ψ ( r , t , σ ) .
Historically, in the early 1920s Pauli , Kronig , Uhlenbeck and Goudsmit were the first to propose the concept of spin. The inclusion of spin in the wavefunction incorporates the Pauli exclusion principle (1925) and the more general spin–statistics theorem (1939) due to Fierz , rederived by Pauli a year later. This is the explanation for a diverse range of subatomic particle behavior and phenomena: from the electronic configurations of atoms, nuclei (and therefore all elements on the periodic table and their chemistry ), to the quark configurations and colour charge (hence the properties of baryons and mesons ).
A fundamental prediction of special relativity is the relativistic energy–momentum relation ; for a particle of rest mass m , and in a particular frame of reference with energy E and 3- momentum p with magnitude in terms of the dot product p = p ⋅ p {\displaystyle p={\sqrt {\mathbf {p} \cdot \mathbf {p} }}} , it is: [ 10 ]
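E 2 = c 2 p ⋅ p + ( m c 2 ) 2 {\displaystyle E^{2}=c^{2}\,\mathbf {p} \cdot \mathbf {p} +(mc^{2})^{2}} (the standard relation).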
These equations are used together with the energy and momentum operators , which are respectively:
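E ^ = i ℏ ∂ ∂ t , p ^ = − i ℏ ∇ {\displaystyle {\hat {E}}=i\hbar {\frac {\partial }{\partial t}},\qquad {\hat {\mathbf {p} }}=-i\hbar \nabla } (standard forms),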
to construct a relativistic wave equation (RWE): a partial differential equation consistent with the energy–momentum relation, which is solved for ψ to predict the quantum dynamics of the particle. For space and time to be placed on equal footing, as in relativity, the orders of space and time partial derivatives should be equal, and ideally as low as possible, so that no initial values of the derivatives need to be specified. This is important for probability interpretations, exemplified below. The lowest possible order of any differential equation is the first (zeroth order derivatives would not form a differential equation).
The Heisenberg picture is another formulation of QM, in which case the wavefunction ψ is time-independent , and the operators A ( t ) contain the time dependence, governed by the equation of motion:
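d A ^ ( t ) / d t = ( i / ℏ ) [ H ^ , A ^ ( t ) ] + ∂ A ^ ( t ) / ∂ t {\displaystyle {\frac {d}{dt}}{\hat {A}}(t)={\frac {i}{\hbar }}[{\hat {H}},{\hat {A}}(t)]+{\frac {\partial {\hat {A}}(t)}{\partial t}}} (the standard Heisenberg equation of motion).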
This equation is also true in RQM, provided the Heisenberg operators are modified to be consistent with SR. [ 11 ] [ 12 ]
Historically, around 1926, Schrödinger and Heisenberg showed that wave mechanics and matrix mechanics are equivalent; this was later furthered by Dirac using transformation theory .
A more modern approach to RWEs, first introduced during the time RWEs were developing for particles of any spin, is to apply representations of the Lorentz group .
In classical mechanics and non-relativistic QM, time is an absolute quantity all observers and particles can always agree on, "ticking away" in the background independent of space. Thus in non-relativistic QM one has for a many particle system ψ ( r 1 , r 2 , r 3 , ..., t , σ 1 , σ 2 , σ 3 ...) .
In relativistic mechanics , the spatial coordinates and coordinate time are not absolute; any two observers moving relative to each other can measure different locations and times of events . The position and time coordinates combine naturally into a four-dimensional spacetime position X = ( ct , r ) corresponding to events, and the energy and 3-momentum combine naturally into the four-momentum P = ( E / c , p ) of a dynamic particle, as measured in some reference frame ; these change according to a Lorentz transformation as one measures in a different frame boosted and/or rotated relative to the original frame under consideration. The derivative operators, and hence the energy and 3-momentum operators, are also non-invariant and change under Lorentz transformations.
Under a proper orthochronous Lorentz transformation ( r , t ) → Λ( r , t ) in Minkowski space , all one-particle quantum states ψ σ locally transform under some representation D of the Lorentz group : [ 13 ] [ 14 ]
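ψ ( r , t ) → D ( Λ ) ψ ( Λ − 1 ( r , t ) ) {\displaystyle \psi (\mathbf {r} ,t)\rightarrow D(\Lambda )\psi (\Lambda ^{-1}(\mathbf {r} ,t))} (standard transformation law),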
where D (Λ) is a finite-dimensional representation, in other words a (2 s + 1)×(2 s + 1) square matrix . Again, ψ is thought of as a column vector containing components with the (2 s + 1) allowed values of σ . The quantum numbers s and σ as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of σ may occur more than once depending on the representation.
The classical Hamiltonian for a particle in a potential is the kinetic energy p · p /2 m plus the potential energy V ( r , t ) , with the corresponding quantum operator in the Schrödinger picture :
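H ^ = p ^ ⋅ p ^ 2 m + V ( r , t ) = − ℏ 2 2 m ∇ 2 + V ( r , t ) {\displaystyle {\hat {H}}={\frac {{\hat {\mathbf {p} }}\cdot {\hat {\mathbf {p} }}}{2m}}+V(\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} ,t)} ,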
and substituting this into the above Schrödinger equation gives a non-relativistic QM equation for the wavefunction: the procedure is a straightforward substitution of a simple expression. By contrast this is not as easy in RQM; the energy–momentum equation is quadratic in energy and momentum leading to difficulties. Naively setting:
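H ^ = c 2 p ^ ⋅ p ^ + ( m c 2 ) 2 {\displaystyle {\hat {H}}={\sqrt {c^{2}\,{\hat {\mathbf {p} }}\cdot {\hat {\mathbf {p} }}+(mc^{2})^{2}}}} (taking the positive square root of the relation above)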
is not helpful for several reasons. The square root of the operators cannot be used as it stands; it would have to be expanded in a power series before the momentum operator, raised to a power in each term, could act on ψ . As a result of the power series, the space and time derivatives are completely asymmetric : infinite-order in space derivatives but only first order in the time derivative, which is inelegant and unwieldy. Again, there is the problem of the non-invariance of the energy operator, equated to the square root which is also not invariant. Another problem, less obvious and more severe, is that it can be shown to be nonlocal and can even violate causality : if the particle is initially localized at a point r 0 so that ψ ( r 0 , t = 0) is finite and zero elsewhere, then at any later time the equation predicts delocalization ψ ( r , t ) ≠ 0 everywhere, even for | r | > ct which means the particle could arrive at a point before a pulse of light could. This would have to be remedied by the additional constraint ψ ( | r | > ct , t ) = 0 . [ 15 ]
There is also the problem of incorporating spin in the Hamiltonian, which isn't a prediction of the non-relativistic Schrödinger theory. Particles with spin have a corresponding spin magnetic moment quantized in units of μ B , the Bohr magneton : [ 16 ] [ 17 ]
where g is the (spin) g-factor for the particle, and S the spin operator , so they interact with electromagnetic fields. For a particle in an externally applied magnetic field B , the interaction term [ 18 ]
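− μ ^ S ⋅ B {\displaystyle -{\hat {\boldsymbol {\mu }}}_{S}\cdot \mathbf {B} } (the standard Zeeman coupling of the spin magnetic moment to the field; the sign convention follows the particle's charge)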
has to be added to the above non-relativistic Hamiltonian. By contrast, a relativistic Hamiltonian introduces spin automatically as a consequence of enforcing the relativistic energy–momentum relation. [ 19 ]
Relativistic Hamiltonians are analogous to those of non-relativistic QM in the following respect; there are terms including rest mass and interaction terms with externally applied fields, similar to the classical potential energy term, as well as momentum terms like the classical kinetic energy term. A key difference is that relativistic Hamiltonians contain spin operators in the form of matrices , in which the matrix multiplication runs over the spin index σ , so in general a relativistic Hamiltonian:
is a function of space, time, and the momentum and spin operators.
Substituting the energy and momentum operators directly into the energy–momentum relation may at first sight seem appealing, to obtain the Klein–Gordon equation : [ 20 ]
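1 c 2 ∂ 2 ψ ∂ t 2 − ∇ 2 ψ + ( m c ℏ ) 2 ψ = 0 {\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}\psi }{\partial t^{2}}}-\nabla ^{2}\psi +\left({\frac {mc}{\hbar }}\right)^{2}\psi =0} ,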
and was discovered by many people because of the straightforward way of obtaining it, notably by Schrödinger in 1925 before he found the non-relativistic equation named after him, and by Klein and Gordon in 1927, who included electromagnetic interactions in the equation. This is relativistically invariant , yet this equation alone isn't a sufficient foundation for RQM for at least two reasons: one is that negative-energy states are solutions, [ 2 ] [ 21 ] another is the density (given below); in addition, the equation as it stands is only applicable to spinless particles. This equation can be factored into the form: [ 22 ] [ 23 ]
where α = ( α 1 , α 2 , α 3 ) and β are not simply numbers or vectors, but 4 × 4 Hermitian matrices that are required to anticommute for i ≠ j :
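α i α j + α j α i = 0 , α i β + β α i = 0 {\displaystyle \alpha _{i}\alpha _{j}+\alpha _{j}\alpha _{i}=0,\qquad \alpha _{i}\beta +\beta \alpha _{i}=0} ,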
and square to the identity matrix :
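α i 2 = β 2 = I {\displaystyle \alpha _{i}^{2}=\beta ^{2}=I} ,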
so that terms with mixed second-order derivatives cancel while the second-order derivatives purely in space and time remain. The first factor:
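( E ^ − c α ⋅ p ^ − β m c 2 ) ψ = 0 {\displaystyle \left({\hat {E}}-c\,{\boldsymbol {\alpha }}\cdot {\hat {\mathbf {p} }}-\beta mc^{2}\right)\psi =0}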
is the Dirac equation. The other factor is also the Dirac equation, but for a particle of negative mass . [ 22 ] Each factor is relativistically invariant. The reasoning can be done the other way round: propose the Hamiltonian in the above form, as Dirac did in 1928, then pre-multiply the equation by the other factor of operators E + c α · p + βmc 2 , and comparison with the KG equation determines the constraints on α and β . The positive mass equation can continue to be used without loss of continuity. The matrices multiplying ψ suggest it isn't a scalar wavefunction as permitted in the KG equation, but must instead be a four-component entity. The Dirac equation still predicts negative energy solutions, [ 6 ] [ 24 ] so Dirac postulated that negative energy states are always occupied, because according to the Pauli principle , electronic transitions from positive to negative energy levels in atoms would be forbidden. See Dirac sea for details.
In non-relativistic quantum mechanics, the square modulus of the wavefunction ψ gives the probability density function ρ = | ψ | 2 . This is the Copenhagen interpretation , circa 1927. In RQM, while ψ ( r , t ) is a wavefunction, the probability interpretation is not the same as in non-relativistic QM. Some RWEs do not predict a probability density ρ or probability current j (really meaning probability current density ) because they are not positive-definite functions of space and time. The Dirac equation does: [ 25 ]
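ρ = ψ † ψ , J μ = ψ ¯ γ μ ψ = ψ † γ 0 γ μ ψ {\displaystyle \rho =\psi ^{\dagger }\psi ,\qquad J^{\mu }={\bar {\psi }}\gamma ^{\mu }\psi =\psi ^{\dagger }\gamma ^{0}\gamma ^{\mu }\psi } ,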
where the dagger denotes the Hermitian adjoint (authors usually write ψ = ψ † γ 0 for the Dirac adjoint ) and J μ is the probability four-current , while the Klein–Gordon equation does not: [ 26 ]
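J μ = i ℏ 2 m ( ψ ∗ ∂ μ ψ − ψ ∂ μ ψ ∗ ) {\displaystyle J^{\mu }={\frac {i\hbar }{2m}}\left(\psi ^{*}\partial ^{\mu }\psi -\psi \,\partial ^{\mu }\psi ^{*}\right)} (a common normalization),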
where ∂ μ is the four-gradient . Since the initial values of both ψ and ∂ ψ /∂ t may be freely chosen, the density can be negative.
Instead, what at first sight looks like a "probability density" and "probability current" has to be reinterpreted as charge density and current density when multiplied by the electric charge . Then, the wavefunction ψ is not a wavefunction at all, but is reinterpreted as a field . [ 15 ] The density and current of electric charge always satisfy a continuity equation :
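∂ ρ ∂ t + ∇ ⋅ J = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0} ,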
as charge is a conserved quantity . Probability density and current also satisfy a continuity equation because probability is conserved; however, this is only possible in the absence of interactions.
Including interactions in RWEs is generally difficult. Minimal coupling is a simple way to include the electromagnetic interaction. For one charged particle of electric charge q in an electromagnetic field, given by the magnetic vector potential A ( r , t ) defined by the magnetic field B = ∇ × A , and electric scalar potential ϕ ( r , t ) , this is: [ 27 ]
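E → E − q ϕ , p → p − q A , or covariantly P μ → P μ − q A μ {\displaystyle E\rightarrow E-q\phi ,\quad \mathbf {p} \rightarrow \mathbf {p} -q\mathbf {A} ,\quad P^{\mu }\rightarrow P^{\mu }-qA^{\mu }} ,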
where P μ is the four-momentum that has a corresponding 4-momentum operator , and A μ the four-potential . In the following, the non-relativistic limit refers to the limiting cases:
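E ≈ m c 2 , p ≈ m v {\displaystyle E\approx mc^{2},\qquad \mathbf {p} \approx m\mathbf {v} } (roughly, with q ϕ small compared to the rest energy),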
that is, the total energy of the particle is approximately the rest energy for small electric potentials, and the momentum is approximately the classical momentum.
In RQM, the KG equation admits the minimal coupling prescription:
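[ ( i ℏ ∂ t − q ϕ ) 2 c 2 − ( − i ℏ ∇ − q A ) 2 − ( m c ) 2 ] ψ = 0 {\displaystyle \left[{\frac {(i\hbar \partial _{t}-q\phi )^{2}}{c^{2}}}-(-i\hbar \nabla -q\mathbf {A} )^{2}-(mc)^{2}\right]\psi =0} (one standard way of writing it).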
In the case where the charge is zero, the equation reduces trivially to the free KG equation, so nonzero charge is assumed below. This is a scalar equation that is invariant under the irreducible one-dimensional scalar (0,0) representation of the Lorentz group. This means that all of its solutions will belong to a direct sum of (0,0) representations. Solutions that do not belong to the irreducible (0,0) representation will have two or more independent components. Such solutions cannot in general describe particles with nonzero spin since spin components are not independent. Other constraints will have to be imposed for that, e.g. the Dirac equation for spin 1 / 2 ; see below. Thus if a system satisfies the KG equation only , it can only be interpreted as a system with zero spin.
The electromagnetic field is treated classically according to Maxwell's equations and the particle is described by a wavefunction, the solution to the KG equation. The equation is, as it stands, not always very useful, because massive spinless particles, such as the π -mesons, experience the much stronger strong interaction in addition to the electromagnetic interaction. It does, however, correctly describe charged spinless bosons in the absence of other interactions.
The KG equation is applicable to spinless charged bosons in an external electromagnetic potential. [ 2 ] As such, the equation cannot be applied to the description of atoms, since the electron is a spin 1 / 2 particle. In the non-relativistic limit the equation reduces to the Schrödinger equation for a spinless charged particle in an electromagnetic field: [ 18 ]
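i ℏ ∂ ψ ∂ t = [ 1 2 m ( − i ℏ ∇ − q A ) 2 + q ϕ ] ψ {\displaystyle i\hbar {\frac {\partial \psi }{\partial t}}=\left[{\frac {1}{2m}}(-i\hbar \nabla -q\mathbf {A} )^{2}+q\phi \right]\psi } .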
Non-relativistically, spin was phenomenologically introduced in the Pauli equation by Pauli in 1927 for particles in an electromagnetic field :
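i ℏ ∂ ψ ∂ t = [ 1 2 m ( σ ⋅ ( − i ℏ ∇ − q A ) ) 2 + q ϕ ] ψ {\displaystyle i\hbar {\frac {\partial \psi }{\partial t}}=\left[{\frac {1}{2m}}\left({\boldsymbol {\sigma }}\cdot (-i\hbar \nabla -q\mathbf {A} )\right)^{2}+q\phi \right]\psi } ,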
by means of the 2 × 2 Pauli matrices , and ψ is not just a scalar wavefunction as in the non-relativistic Schrödinger equation, but a two-component spinor field :
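ψ = ( ψ ↑ ψ ↓ ) {\displaystyle \psi ={\begin{pmatrix}\psi _{\uparrow }\\\psi _{\downarrow }\end{pmatrix}}} ,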
where the subscripts ↑ and ↓ refer to the "spin up" ( σ = + 1 / 2 ) and "spin down" ( σ = − 1 / 2 ) states. [ b ]
In RQM, the Dirac equation can also incorporate minimal coupling, rewritten from above:
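γ μ ( i ℏ ∂ μ − q A μ ) ψ = m c ψ {\displaystyle \gamma ^{\mu }(i\hbar \partial _{\mu }-qA_{\mu })\psi =mc\,\psi } (one covariant form of the minimally coupled equation),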
and was the first equation to accurately predict spin, a consequence of the 4 × 4 gamma matrices γ 0 = β , γ = ( γ 1 , γ 2 , γ 3 ) = β α = ( βα 1 , βα 2 , βα 3 ) . There is a 4 × 4 identity matrix pre-multiplying the energy operator (including the potential energy term), conventionally not written for simplicity and clarity (i.e. treated like the number 1). Here ψ is a four-component spinor field, which is conventionally split into two two-component spinors in the form: [ c ]
The 2-spinor ψ + corresponds to a particle with 4-momentum ( E , p ) and charge q and two spin states ( σ = ± 1 / 2 , as before). The other 2-spinor ψ − corresponds to a similar particle with the same mass and spin states, but negative 4-momentum −( E , p ) and negative charge − q , that is, negative energy states, time-reversed momentum, and negated charge . This was the first interpretation and prediction of a particle and corresponding antiparticle . See Dirac spinor and bispinor for further description of these spinors. In the non-relativistic limit the Dirac equation reduces to the Pauli equation (see Dirac equation for how). When applied to a one-electron atom or ion, setting A = 0 and ϕ to the appropriate electrostatic potential, additional relativistic terms include the spin–orbit interaction , electron gyromagnetic ratio , and Darwin term . In ordinary QM these terms have to be put in by hand and treated using perturbation theory . The positive energies do account accurately for the fine structure.
Within RQM, for massless particles the Dirac equation reduces to:
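E ^ ψ + = + c σ ⋅ p ^ ψ + , E ^ ψ − = − c σ ⋅ p ^ ψ − {\displaystyle {\hat {E}}\psi _{+}=+c\,{\boldsymbol {\sigma }}\cdot {\hat {\mathbf {p} }}\,\psi _{+},\qquad {\hat {E}}\psi _{-}=-c\,{\boldsymbol {\sigma }}\cdot {\hat {\mathbf {p} }}\,\psi _{-}} (the two decoupled two-component equations, in one common sign convention),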
the first of which is the Weyl equation , a considerable simplification applicable for massless neutrinos . [ 28 ] This time there is a 2 × 2 identity matrix pre-multiplying the energy operator conventionally not written. In RQM it is useful to take this as the zeroth Pauli matrix σ 0 which couples to the energy operator (time derivative), just as the other three matrices couple to the momentum operator (spatial derivatives).
The Pauli and gamma matrices were introduced here, in theoretical physics, rather than pure mathematics itself. They have applications to quaternions and to the SU(2) and SO(3) Lie groups , because they satisfy the important commutator [ , ] and anticommutator [ , ] + relations respectively:
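[ σ a , σ b ] = 2 i ε a b c σ c , [ σ a , σ b ] + = 2 δ a b σ 0 {\displaystyle [\sigma _{a},\sigma _{b}]=2i\varepsilon _{abc}\sigma _{c},\qquad [\sigma _{a},\sigma _{b}]_{+}=2\delta _{ab}\sigma _{0}} ,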
where ε abc is the three-dimensional Levi-Civita symbol . The gamma matrices form bases in Clifford algebra , and have a connection to the components of the flat spacetime Minkowski metric η αβ in the anticommutation relation:
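[ γ α , γ β ] + = γ α γ β + γ β γ α = 2 η α β {\displaystyle [\gamma ^{\alpha },\gamma ^{\beta }]_{+}=\gamma ^{\alpha }\gamma ^{\beta }+\gamma ^{\beta }\gamma ^{\alpha }=2\eta ^{\alpha \beta }} (times the 4 × 4 identity matrix; the overall sign depends on the metric signature convention).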
(This can be extended to curved spacetime by introducing vierbeins , but is not the subject of special relativity).
In 1929, the Breit equation was found to describe two or more electromagnetically interacting massive spin 1 / 2 fermions to first-order relativistic corrections; one of the first attempts to describe such a relativistic quantum many-particle system . This is, however, still only an approximation, and the Hamiltonian includes numerous long and complicated sums.
The helicity operator is defined by:
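h ^ = S ^ ⋅ c p ^ E 2 − ( m 0 c 2 ) 2 {\displaystyle {\hat {h}}={\hat {\mathbf {S} }}\cdot {\frac {c\,{\hat {\mathbf {p} }}}{\sqrt {E^{2}-(m_{0}c^{2})^{2}}}}} (equivalently S ^ ⋅ p ^ / | p | {\displaystyle {\hat {\mathbf {S} }}\cdot {\hat {\mathbf {p} }}/|\mathbf {p} |} , since c | p | = E 2 − ( m 0 c 2 ) 2 {\displaystyle c|\mathbf {p} |={\sqrt {E^{2}-(m_{0}c^{2})^{2}}}} ),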
where p is the momentum operator, S the spin operator for a particle of spin s , E is the total energy of the particle, and m 0 its rest mass. Helicity indicates the orientations of the spin and translational momentum vectors. [ 29 ] Helicity is frame-dependent because of the 3-momentum in the definition, and is quantized due to spin quantization, which has discrete positive values for parallel alignment, and negative values for antiparallel alignment.
An automatic occurrence in the Dirac equation (and the Weyl equation) is the projection of the spin 1 / 2 operator on the 3-momentum (times c ), σ · c p , which is the helicity (for the spin 1 / 2 case) times E 2 − ( m 0 c 2 ) 2 {\displaystyle {\sqrt {E^{2}-(m_{0}c^{2})^{2}}}} .
For massless particles the helicity simplifies to:
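h ^ = S ^ ⋅ c p ^ E {\displaystyle {\hat {h}}={\hat {\mathbf {S} }}\cdot {\frac {c\,{\hat {\mathbf {p} }}}{E}}} .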
The Dirac equation can only describe particles of spin 1 / 2 . Beyond the Dirac equation, RWEs have been applied to free particles of various spins. In 1936, Dirac extended his equation to all fermions; three years later Fierz and Pauli rederived the same equation. [ 30 ] The Bargmann–Wigner equations were found in 1948 using Lorentz group theory, applicable for all free particles with any spin. [ 31 ] [ 32 ] Considering the factorization of the KG equation above, and more rigorously by Lorentz group theory, it becomes apparent that spin should be introduced in the form of matrices.
The wavefunctions are multicomponent spinor fields , which can be represented as column vectors of functions of space and time:
where the expression on the right is the Hermitian conjugate . For a massive particle of spin s , there are 2 s + 1 components for the particle, and another 2 s + 1 for the corresponding antiparticle (there are 2 s + 1 possible σ values in each case), altogether forming a 2(2 s + 1) -component spinor field:
with the + subscript indicating the particle and − subscript for the antiparticle. However, for massless particles of spin s , there are only ever two-component spinor fields; one is for the particle in one helicity state corresponding to + s and the other for the antiparticle in the opposite helicity state corresponding to − s :
According to the relativistic energy-momentum relation, all massless particles travel at the speed of light, so particles traveling at the speed of light are also described by two-component spinors. Historically, Élie Cartan found the most general form of spinors in 1913, prior to the spinors that appeared in the RWEs developed after 1927.
For equations describing higher-spin particles, the inclusion of interactions is nowhere near as simple as minimal coupling; such attempts lead to incorrect predictions and self-inconsistencies. [ 33 ] For spin greater than ħ / 2 , the RWE is not fixed by the particle's mass, spin, and electric charge; the electromagnetic moments ( electric dipole moments and magnetic dipole moments ) allowed by the spin quantum number are arbitrary. (Theoretically, magnetic charge would contribute also). For example, the spin 1 / 2 case only allows a magnetic dipole, but for spin 1 particles magnetic quadrupoles and electric dipoles are also possible. [ 28 ] For more on this topic, see multipole expansion and (for example) Cédric Lorcé (2009). [ 34 ] [ 35 ]
The Schrödinger/Pauli velocity operator can be defined for a massive particle using the classical definition p = m v , and substituting quantum operators in the usual way: [ 36 ]
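v ^ = p ^ m {\displaystyle {\hat {\mathbf {v} }}={\frac {\hat {\mathbf {p} }}{m}}} ,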
which has eigenvalues that take any value. In RQM, the Dirac theory, it is:
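v ^ = c α {\displaystyle {\hat {\mathbf {v} }}=c{\boldsymbol {\alpha }}} ,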
which must have eigenvalues between ± c . See Foldy–Wouthuysen transformation for more theoretical background.
The Hamiltonian operators in the Schrödinger picture are one approach to forming the differential equations for ψ . An equivalent alternative is to determine a Lagrangian (really meaning Lagrangian density ), then generate the differential equation by the field-theoretic Euler–Lagrange equation :
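∂ μ ( ∂ L ∂ ( ∂ μ ψ ) ) − ∂ L ∂ ψ = 0 {\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\psi )}}\right)-{\frac {\partial {\mathcal {L}}}{\partial \psi }}=0} .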
For some RWEs, a Lagrangian can be found by inspection. For example, the Dirac Lagrangian is: [ 37 ]
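L = ψ ¯ ( i ℏ c γ μ ∂ μ − m c 2 ) ψ {\displaystyle {\mathcal {L}}={\bar {\psi }}(i\hbar c\,\gamma ^{\mu }\partial _{\mu }-mc^{2})\psi } ,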
and Klein–Gordon Lagrangian is:
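L = ∂ μ ψ ∗ ∂ μ ψ − ( m c ℏ ) 2 ψ ∗ ψ {\displaystyle {\mathcal {L}}=\partial _{\mu }\psi ^{*}\,\partial ^{\mu }\psi -\left({\frac {mc}{\hbar }}\right)^{2}\psi ^{*}\psi } (up to an overall normalization convention).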
This is not possible for all RWEs, and is one reason the Lorentz group theoretic approach is important and appealing: fundamental invariance and symmetries in space and time can be used to derive RWEs using appropriate group representations. The Lagrangian approach with the field interpretation of ψ is the subject of QFT rather than RQM: Feynman's path integral formulation uses invariant Lagrangians rather than Hamiltonian operators, since the latter can become extremely complicated; see (for example) Weinberg (1995). [ 38 ]
In non-relativistic QM, the angular momentum operator is formed from the classical pseudovector definition L = r × p . In RQM, the position and momentum operators are inserted directly where they appear in the orbital relativistic angular momentum tensor defined from the four-dimensional position and momentum of the particle, equivalently a bivector in the exterior algebra formalism: [ 39 ] [ d ]
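M α β = X α P β − X β P α {\displaystyle M^{\alpha \beta }=X^{\alpha }P^{\beta }-X^{\beta }P^{\alpha }} ,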
which are six components altogether: three are the non-relativistic 3-orbital angular momenta; M 12 = L 3 , M 23 = L 1 , M 31 = L 2 , and the other three M 01 , M 02 , M 03 are boosts of the centre of mass of the rotating object. An additional relativistic-quantum term has to be added for particles with spin. For a particle of rest mass m , the total angular momentum tensor is:
where the star denotes the Hodge dual , and
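W μ = 1 2 ε μ ν ρ σ J ν ρ P σ {\displaystyle W_{\mu }={\tfrac {1}{2}}\varepsilon _{\mu \nu \rho \sigma }J^{\nu \rho }P^{\sigma }}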
is the Pauli–Lubanski pseudovector . [ 40 ] For more on relativistic spin, see (for example) Troshin & Tyurin (1994). [ 41 ]
In 1926, the Thomas precession was discovered: relativistic corrections to the spin of elementary particles, with application to the spin–orbit interaction of atoms and the rotation of macroscopic objects. [ 42 ] [ 43 ] In 1939 Wigner derived the Thomas precession.
In classical electromagnetism and special relativity , an electron moving with a velocity v through an electric field E but not a magnetic field B , will in its own frame of reference experience a Lorentz-transformed magnetic field B′ :
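B ′ = − γ v × E c 2 {\displaystyle \mathbf {B} '=-\gamma \,{\frac {\mathbf {v} \times \mathbf {E} }{c^{2}}}} (with γ the Lorentz factor).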
In the non-relativistic limit v << c :
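B ′ ≈ − v × E c 2 = E × v c 2 {\displaystyle \mathbf {B} '\approx -{\frac {\mathbf {v} \times \mathbf {E} }{c^{2}}}={\frac {\mathbf {E} \times \mathbf {v} }{c^{2}}}} ,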
so the non-relativistic spin interaction Hamiltonian becomes: [ 44 ]
where the first term is already the non-relativistic magnetic moment interaction, and the second term the relativistic correction of order ( v/c )² , but this disagrees with experimental atomic spectra by a factor of 1 ⁄ 2 . It was pointed out by L. Thomas that there is a second relativistic effect: An electric field component perpendicular to the electron velocity causes an additional acceleration of the electron perpendicular to its instantaneous velocity, so the electron moves in a curved path. The electron moves in a rotating frame of reference , and this additional precession of the electron is called the Thomas precession . It can be shown [ 45 ] that the net result of this effect is that the spin–orbit interaction is reduced by half, as if the magnetic field experienced by the electron has only one-half the value, and the relativistic correction in the Hamiltonian is:
In the case of RQM, the factor of 1 ⁄ 2 is predicted by the Dirac equation. [ 44 ]
The events which led to and established RQM, and the continuation beyond into quantum electrodynamics (QED), are summarized below [see, for example, R. Resnick and R. Eisberg (1985), [ 46 ] and P.W Atkins (1974) [ 47 ] ]. More than half a century of experimental and theoretical research, from the 1890s through to the 1950s, in the new and then-mysterious quantum theory revealed that a number of phenomena could not be explained by QM alone. SR, formulated at the turn of the 20th century, was found to be a necessary component, leading to their unification: RQM. Theoretical predictions and experiments mainly focused on the newly developing atomic physics , nuclear physics , and particle physics , by considering spectroscopy , diffraction and scattering of particles, and the electrons and nuclei within atoms and molecules. Numerous results are attributed to the effects of spin.
Albert Einstein in 1905 explained the photoelectric effect ; a particle description of light as photons . In 1916, Sommerfeld explained fine structure ; the splitting of the spectral lines of atoms due to first-order relativistic corrections. The Compton effect of 1923 provided more evidence that special relativity does apply; in this case to a particle description of photon–electron scattering. de Broglie extended wave–particle duality to matter : the de Broglie relations , which are consistent with special relativity and quantum mechanics. By 1927, Davisson and Germer and separately G. Thomson successfully diffracted electrons, providing experimental evidence of wave-particle duality.
In 1935, Einstein, Podolsky , and Rosen published a paper [ 50 ] concerning quantum entanglement of particles, questioning quantum nonlocality and the apparent violation of causality upheld in SR: particles can appear to interact instantaneously at arbitrary distances. This was a misconception since information is not and cannot be transferred in the entangled states; rather, the information transmission is in the process of measurement by two observers (one observer has to send a signal to the other, which cannot exceed c ). QM does not violate SR. [ 51 ] [ 52 ] In 1959, Bohm and Aharonov published a paper [ 53 ] on the Aharonov–Bohm effect , questioning the status of electromagnetic potentials in QM. The EM field tensor and EM 4-potential formulations are both applicable in SR, but in QM the potentials enter the Hamiltonian (see above) and influence the motion of charged particles even in regions where the fields are zero. In 1964, Bell's theorem was published in a paper on the EPR paradox, [ 54 ] showing that QM cannot be derived from local hidden-variable theories if locality is to be maintained.
In 1947, the Lamb shift was discovered: a small difference in the 2 S 1 ⁄ 2 and 2 P 1 ⁄ 2 levels of hydrogen, due to the interaction between the electron and the vacuum. Lamb and Retherford experimentally measured stimulated radio-frequency transitions between the 2 S 1 ⁄ 2 and 2 P 1 ⁄ 2 hydrogen levels by microwave radiation. [ 55 ] An explanation of the Lamb shift was presented by Bethe . Papers on the effect were published in the early 1950s. [ 56 ] | https://en.wikipedia.org/wiki/Relativistic_quantum_mechanics
A relativistic runaway electron avalanche ( RREA ) is an avalanche growth of a population of relativistic electrons driven through a material (typically air) by an electric field. RREA has been hypothesized to be related to lightning initiation, [ 1 ] terrestrial gamma-ray flashes , [ 2 ] sprite lightning , [ 3 ] and spark development. [ 4 ] RREA is unique as it can occur at electric fields an order of magnitude lower than the dielectric strength of the material.
When an electric field is applied to a material, free electrons will drift slowly through the material as described by the electron mobility . For low-energy electrons, faster drift velocities result in more interactions with surrounding particles. These interactions create a form of friction that slows the electrons down. Thus, for low-energy cases, the electron velocities tend to stabilize.
At higher energies, above about 100 keV , these collisional events become less common as the mean free path of the electron rises. These higher-energy electrons thus see less frictional force as their velocity increases. In the presence of the same electric field, these electrons will continue accelerating, "running away".
As runaway electrons gain energy from an electric field, they occasionally collide with atoms in the material, knocking off secondary electrons. If the secondary electrons also have high enough energy to run away, they too accelerate to high energies, produce further secondary electrons, etc. As such, the total number of energetic electrons grows exponentially in an avalanche.
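As a rough illustration (the symbols here are generic, not taken from a specific source), the runaway population after the avalanche has advanced a distance z along the field can be written N ( z ) ≈ N 0 e z / λ {\displaystyle N(z)\approx N_{0}\,e^{z/\lambda }} , where N 0 is the number of seed electrons and λ is a field-dependent avalanche (e-folding) length.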
The dynamic friction function, shown in the Figure, takes into account only energy losses due to inelastic collisions and has a minimum of ~216 keV/m at an electron energy of ~1.23 MeV. More useful thresholds, however, must also include the effects of electron momentum loss due to elastic collisions. In that case, an analytical estimate [ 5 ] gives a runaway threshold of ~282 keV/m, which occurs at an electron energy of ~7 MeV. This result approximately agrees with numbers obtained from Monte Carlo simulations, of ~284 keV/m [ 6 ] and 10 MeV, [ 7 ] respectively.
The RREA mechanism above only describes the growth of the avalanche. An initial energetic electron is needed to start the process. In ambient air, such energetic electrons typically come from cosmic rays . [ 8 ] In very strong electric fields, stronger than the maximum frictional force experienced by electrons, even low-energy ("cold" or "thermal") electrons can accelerate to relativistic energies, a process dubbed "thermal runaway." [ 9 ]
RREA avalanches generally move opposite the direction of the electric field. As such, after the avalanches leave the electric field region, frictional forces dominate, the electrons lose energy, and the process stops. There is the possibility, however, that photons or positrons produced by the avalanche will wander back to where the avalanche began and can produce new seeds for a second generation of avalanches. If the electric field region is large enough, the number of second-generation avalanches will exceed the number of first-generation avalanches and the number of avalanches itself grows exponentially. This avalanche of avalanches can produce extremely large populations of energetic electrons. This process eventually leads to the decay of the electric field below the level at which feedback is possible and therefore acts as a limit to the large-scale electric field strength. [ 6 ]
The large population of energetic electrons produced in RREA will produce a correspondingly large population of energetic photons by bremsstrahlung . These photons are proposed as the source of terrestrial gamma-ray flashes . Large RREA events in thunderstorms may also contribute rare but large radiation doses to commercial airline flights. [ 10 ] The American physicist Joseph Dwyer coined the term " dark lightning " for this phenomenon, [ 11 ] which is still the subject of research. [ 12 ] | https://en.wikipedia.org/wiki/Relativistic_runaway_electron_avalanche |
A relativistic star is a rotating star whose behavior is well described by general relativity , but not by classical mechanics . The first such objects to be identified were radio pulsars , which consist of rotating neutron stars . Rotating supermassive stars are a hypothetical form of relativistic star. [ 3 ] Relativistic stars are one possible source of gravitational waves for study.
Another definition of a relativistic star is one with the equation of state of a special relativistic gas. This can happen when the core of a massive main-sequence star becomes hot enough to generate electron - positron pairs . Stability analysis shows that such a star is only marginally bound, and is unstable to either collapse or explosion . This instability is believed to limit the mass of main-sequence stars to a couple of hundred solar masses or so. Stars of this size and larger are able to directly collapse into a black hole of either intermediate or supermassive size. [ 4 ]
| https://en.wikipedia.org/wiki/Relativistic_star
In mathematics, a non-autonomous system of ordinary differential equations is defined to be a dynamic equation on a smooth fiber bundle Q → R {\displaystyle Q\to \mathbb {R} } over R {\displaystyle \mathbb {R} } . For instance, this is the case of non-relativistic non-autonomous mechanics , but not relativistic mechanics . To describe relativistic mechanics , one should consider a system of ordinary differential equations on a smooth manifold Q {\displaystyle Q} whose fibration over R {\displaystyle \mathbb {R} } is not fixed. Such a system admits transformations of a coordinate t {\displaystyle t} on R {\displaystyle \mathbb {R} } depending on other coordinates on Q {\displaystyle Q} . Therefore, it is called the relativistic system . In particular, Special Relativity on the
Minkowski space Q = R 4 {\displaystyle Q=\mathbb {R} ^{4}} is of this type.
Since a configuration space Q {\displaystyle Q} of a relativistic system has no preferred fibration over R {\displaystyle \mathbb {R} } , the velocity space of a relativistic system is the first-order jet manifold J 1 1 Q {\displaystyle J_{1}^{1}Q} of one-dimensional submanifolds of Q {\displaystyle Q} . The notion of jets of submanifolds generalizes that of jets of sections of fiber bundles, which are utilized in covariant classical field theory and non-autonomous mechanics . A first-order jet bundle J 1 1 Q → Q {\displaystyle J_{1}^{1}Q\to Q} is projective and, following the terminology of Special Relativity , one can think of its fibers as being spaces of the absolute velocities of a relativistic system. Given coordinates ( q 0 , q i ) {\displaystyle (q^{0},q^{i})} on Q {\displaystyle Q} , a first-order jet manifold J 1 1 Q {\displaystyle J_{1}^{1}Q} is provided with the adapted coordinates ( q 0 , q i , q 0 i ) {\displaystyle (q^{0},q^{i},q_{0}^{i})} possessing transition functions
The relativistic velocities of a relativistic system are represented by elements of a fibre bundle R × T Q {\displaystyle \mathbb {R} \times TQ} , coordinated by ( τ , q λ , a τ λ ) {\displaystyle (\tau ,q^{\lambda },a_{\tau }^{\lambda })} , where T Q {\displaystyle TQ} is the tangent bundle of Q {\displaystyle Q} . Then a generic equation of motion of a relativistic system in terms of relativistic velocities reads
For instance, if Q {\displaystyle Q} is the Minkowski space with a Minkowski metric G μ ν {\displaystyle G_{\mu \nu }} , this is an equation of a relativistic charge in the presence of an electromagnetic field. | https://en.wikipedia.org/wiki/Relativistic_system_(mathematics) |
In physics , specifically relativistic quantum mechanics (RQM) and its applications to particle physics , relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light . In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields .
The solutions to the equations, universally denoted as ψ or Ψ ( Greek psi ), are referred to as " wave functions " in the context of RQM, and " fields " in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background).
In the Schrödinger picture , the wave function or field is the solution to the Schrödinger equation , i ℏ ∂ ∂ t ψ = H ^ ψ , {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi ={\hat {H}}\psi ,} one of the postulates of quantum mechanics . All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator Ĥ describing the quantum system . Alternatively, Feynman 's path integral formulation uses a Lagrangian rather than a Hamiltonian operator.
More generally – the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group . [ 1 ]
The failure of classical mechanics applied to molecular , atomic , and nuclear systems and smaller scales induced the need for a new mechanics: quantum mechanics . The mathematical formulation was led by De Broglie , Bohr , Schrödinger , Pauli , Heisenberg , and others, around the mid-1920s, and at that time was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant ħ , the quantum of action , tends to zero. This is the correspondence principle . At this point, special relativity was not fully combined with quantum mechanics, so the Schrödinger and Heisenberg formulations, as originally proposed, could not be used in situations where the particles travel near the speed of light , or when the number of each type of particle changes (this happens in real particle interactions ; the numerous forms of particle decays , annihilation , matter creation , pair production , and so on).
A description of quantum mechanical systems which could account for relativistic effects was sought by many theoretical physicists from the late 1920s to the mid-1940s. [ 2 ] The first basis for relativistic quantum mechanics , i.e. special relativity applied together with quantum mechanics, was found by all those who discovered what is frequently called the Klein–Gordon equation :
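( 1 c 2 ∂ 2 ∂ t 2 − ∇ 2 + m 2 c 2 ℏ 2 ) ψ = 0 ( 1 ) {\displaystyle \left({\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}+{\frac {m^{2}c^{2}}{\hbar ^{2}}}\right)\psi =0\qquad (1)}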
by inserting the energy operator and momentum operator into the relativistic energy–momentum relation :
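E 2 = ( p c ) 2 + ( m 0 c 2 ) 2 ( 2 ) {\displaystyle E^{2}=(pc)^{2}+(m_{0}c^{2})^{2}\qquad (2)}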
The solutions to ( 1 ) are scalar fields . The KG equation is undesirable due to its prediction of negative energies and probabilities , as a result of the quadratic nature of ( 2 ) – inevitable in a relativistic theory. This equation was initially proposed by Schrödinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schrödinger equation ) was still of importance. Nevertheless, ( 1 ) is applicable to spin-0 bosons . [ 3 ]
Neither the non-relativistic nor relativistic equations found by Schrödinger could predict the fine structure in the Hydrogen spectral series . The mysterious underlying property was spin . The first two-dimensional spin matrices (better known as the Pauli matrices ) were introduced by Pauli in the Pauli equation ; the Schrödinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields , but this was phenomenological . Weyl found a relativistic equation in terms of the Pauli matrices; the Weyl equation , for massless spin-1/2 fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation ( 2 ) to the electron – by various manipulations he factorized the equation into the form
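( E c − α ⋅ p − β m c ) ( E c + α ⋅ p + β m c ) = 0 ( 3 A ) {\displaystyle \left({\frac {E}{c}}-{\boldsymbol {\alpha }}\cdot \mathbf {p} -\beta mc\right)\left({\frac {E}{c}}+{\boldsymbol {\alpha }}\cdot \mathbf {p} +\beta mc\right)=0\qquad (3A)} (written here in one common convention),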
and one of these factors is the Dirac equation (see below), upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices α and β in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to ( 3A ) are multi-component spinor fields , and each component satisfies ( 1 ). A remarkable result of spinor solutions is that half of the components describe a particle while the other half describe an antiparticle ; in this case the electron and positron . The Dirac equation is now known to apply for all massive spin-1/2 fermions . In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation.
Although a landmark in quantum theory, the Dirac equation is only true for spin-1/2 fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular – not all physicists were comfortable with the " Dirac sea " of negative energy states).
The natural problem became clear: to generalize the Dirac equation to particles with any spin ; both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions. [ 2 ]
This was introduced and solved by Majorana in 1932, by an approach that deviated from Dirac's. Majorana considered one "root" of ( 3A ):
where ψ is a spinor field, now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices α and β are infinite-dimensional matrices, related to infinitesimal Lorentz transformations . He did not demand that each component of 3B satisfy equation ( 2 ); instead he regenerated the equation using a Lorentz-invariant action , via the principle of least action , and application of Lorentz group theory. [ 4 ] [ 5 ]
Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). These anticipated later (more involved) work by de Broglie (1934), and by Duffin, Kemmer, and Petiau (around 1938–1939); see Duffin–Kemmer–Petiau algebra . The Dirac–Fierz–Pauli formalism was more sophisticated than Majorana's, as spinors were new mathematical tools in the early twentieth century, although Majorana's paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940. [ 2 ]
Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors A and B , symmetric in all indices, for a massive particle of spin n + 1/2 for integer n (see Van der Waerden notation for the meaning of the dotted indices):
where p is the momentum as a covariant spinor operator. For n = 0 , the equations reduce to the coupled Dirac equations, and A and B together transform as the original Dirac spinor . Eliminating either A or B shows that A and B each fulfill ( 1 ). [ 2 ] The direct derivation of the Dirac–Pauli–Fierz equations using the Bargmann–Wigner operators is given by Isaev and Podoinitsyn. [ 6 ]
In 1941, Rarita and Schwinger focussed on spin-3/2 particles and derived the Rarita–Schwinger equation , including a Lagrangian to generate it, and later generalized the equations analogous to spin n + 1/2 for integer n . In 1945, Pauli suggested Majorana's 1932 paper to Bhabha , who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in ( 3A ) and ( 3B ) by an arbitrary constant, subject to a set of conditions which the wave functions must obey. [ 7 ]
Finally, in the year 1948 (the same year as Feynman 's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann–Wigner equations . [ 2 ] [ 8 ] In the early 1960s, a reformulation of the Bargmann–Wigner equations was made by H. Joos and Steven Weinberg , the Joos–Weinberg equation . Various theorists at this time did further research in relativistic Hamiltonians for higher spin particles. [ 1 ] [ 9 ] [ 10 ]
The relativistic description of particles with spin has been a difficult problem in quantum theory. It is still an area of present-day research because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present. [ 5 ]
The following equations have solutions which satisfy the superposition principle , that is, the wave functions are additive .
Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wave functions are denoted ψ , and ∂ μ are the components of the four-gradient operator.
In matrix equations, the Pauli matrices are denoted by σ μ in which μ = 0, 1, 2, 3 , where σ 0 is the 2 × 2 identity matrix : σ 0 = ( 1 0 0 1 ) {\displaystyle \sigma ^{0}={\begin{pmatrix}1&0\\0&1\\\end{pmatrix}}} and the other matrices have their usual representations. The expression σ μ ∂ μ ≡ σ 0 ∂ 0 + σ 1 ∂ 1 + σ 2 ∂ 2 + σ 3 ∂ 3 {\displaystyle \sigma ^{\mu }\partial _{\mu }\equiv \sigma ^{0}\partial _{0}+\sigma ^{1}\partial _{1}+\sigma ^{2}\partial _{2}+\sigma ^{3}\partial _{3}} is a 2 × 2 matrix operator which acts on 2-component spinor fields.
The gamma matrices are denoted by γ μ , in which again μ = 0, 1, 2, 3 , and there are a number of representations to select from. The matrix γ 0 is not necessarily the 4 × 4 identity matrix. The expression i ℏ γ μ ∂ μ + m c ≡ i ℏ ( γ 0 ∂ 0 + γ 1 ∂ 1 + γ 2 ∂ 2 + γ 3 ∂ 3 ) + m c ( 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle i\hbar \gamma ^{\mu }\partial _{\mu }+mc\equiv i\hbar (\gamma ^{0}\partial _{0}+\gamma ^{1}\partial _{1}+\gamma ^{2}\partial _{2}+\gamma ^{3}\partial _{3})+mc{\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}} is a 4 × 4 matrix operator which acts on 4-component spinor fields .
Note that terms such as " mc " scalar multiply an identity matrix of the relevant dimension , the common sizes are 2 × 2 or 4 × 4 , and are conventionally not written for simplicity.
[ ( γ 2 ) μ ( p 2 − A ~ 2 ) μ + m 2 + S ~ 2 ] Ψ = 0. {\displaystyle [(\gamma _{2})_{\mu }(p_{2}-{\tilde {A}}_{2})^{\mu }+m_{2}+{\tilde {S}}_{2}]\Psi =0.}
where ψ is a rank-2 s 4-component spinor .
The Duffin–Kemmer–Petiau equation is an alternative equation for spin-0 and spin-1 particles: ( i ℏ β a ∂ a − m c ) ψ = 0 {\displaystyle (i\hbar \beta ^{a}\partial _{a}-mc)\psi =0}
Start with the standard special relativity (SR) 4-vectors
Note that each 4-vector is related to another by a Lorentz scalar :
Now, just apply the standard Lorentz scalar product rule to each one:
The last equation is a fundamental quantum relation.
When applied to a Lorentz scalar field ψ {\displaystyle \psi } , one gets the Klein–Gordon equation, the most basic of the quantum relativistic wave equations.
The Schrödinger equation is the low-velocity limiting case ( v ≪ c ) of the Klein–Gordon equation.
When the relation is applied to a four-vector field A μ {\displaystyle A^{\mu }} instead of a Lorentz scalar field ψ {\displaystyle \psi } , then one gets the Proca equation (in Lorenz gauge ): [ ∂ ⋅ ∂ + ( m 0 c ℏ ) 2 ] A μ = 0 {\displaystyle \left[\mathbf {\partial } \cdot \mathbf {\partial } +\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]A^{\mu }=0}
If the rest mass term is set to zero (light-like particles), then this gives the free Maxwell equation (in Lorenz gauge ) [ ∂ ⋅ ∂ ] A μ = 0 {\displaystyle [\mathbf {\partial } \cdot \mathbf {\partial } ]A^{\mu }=0}
Under a proper orthochronous Lorentz transformation x → Λ x in Minkowski space , all one-particle quantum states ψ j σ of spin j with spin z-component σ locally transform under some representation D of the Lorentz group : [ 12 ] [ 13 ] ψ ( x ) → D ( Λ ) ψ ( Λ − 1 x ) {\displaystyle \psi (x)\rightarrow D(\Lambda )\psi (\Lambda ^{-1}x)} where D (Λ) is some finite-dimensional representation, i.e. a matrix. Here ψ is thought of as a column vector containing components with the allowed values of σ . The quantum numbers j and σ as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of σ may occur more than once depending on the representation. Representations with several possible values for j are considered below.
The irreducible representations are labeled by a pair of half-integers or integers ( A , B ) . From these all other representations can be built up using a variety of standard methods, like taking tensor products and direct sums . In particular, space-time itself constitutes a 4-vector representation ( 1 / 2 , 1 / 2 ) so that Λ ∈ D (1/2, 1/2) . To put this into context; Dirac spinors transform under the ( 1 / 2 , 0) ⊕ (0, 1 / 2 ) representation. In general, the ( A , B ) representation space has subspaces that under the subgroup of spatial rotations , SO(3) , transform irreducibly like objects of spin j , where each allowed value: j = A + B , A + B − 1 , … , | A − B | , {\displaystyle j=A+B,A+B-1,\dots ,|A-B|,} occurs exactly once. [ 14 ] In general, tensor products of irreducible representations are reducible; they decompose as direct sums of irreducible representations.
The representations D ( j , 0) and D (0, j ) can each separately represent particles of spin j . A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation.
There are equations which have solutions that do not satisfy the superposition principle. | https://en.wikipedia.org/wiki/Relativistic_wave_equations |
Albert Einstein presented the theories of special relativity and general relativity in publications that either contained no formal references to previous literature, or referred only to a small number of his predecessors for fundamental results on which he based his theories, most notably to the work of Henri Poincaré and Hendrik Lorentz for special relativity, and to the work of David Hilbert , Carl F. Gauss , Bernhard Riemann , and Ernst Mach for general relativity. Subsequently, claims have been put forward about both theories, asserting that they were formulated, either wholly or in part, by others before Einstein. At issue is the extent to which Einstein and various other individuals should be credited for the formulation of these theories, based on priority considerations.
Various scholars have questioned aspects of the work of Einstein, Poincaré, and Lorentz leading up to the theories’ publication in 1905. Questions raised by these scholars include asking to what degree Einstein was familiar with Poincaré's work, whether Einstein was familiar with Lorentz's 1904 paper or a review of it, and how closely Einstein followed other physicists at the time. It is known that Einstein was familiar with Poincaré's 1902 paper [Poi02], but it is not known to what extent he was familiar with other work of Poincaré in 1905. However, it is known that he knew [Poi00] in 1906, because he quoted it in [Ein06]. Lorentz's 1904 paper [Lor04] contained the transformations bearing his name that appeared in the Annalen der Physik . Some authors claim that Einstein worked in relative isolation and with restricted access to the physics literature in 1905. Others, however, disagree; a personal friend of Einstein, Maurice Solovine , acknowledged that he and Einstein pored over Poincaré's 1902 book, keeping them "breathless for weeks on end" [Rot06]. One television show raised the question of whether Einstein's wife Mileva Marić contributed to Einstein's work, but the network's ombudsman and historians on the topic say that there is no substantive evidence that she made significant contributions. [ 1 ]
In the history of special relativity , the most important names that are mentioned in discussions about the distribution of credit are Albert Einstein , Hendrik Lorentz , Henri Poincaré , and Hermann Minkowski . Consideration is also given to numerous other scientists for either anticipations of some aspects of the theory, or else for contributions to the development or elaboration of the theory. These include Woldemar Voigt , August Föppl , Joseph Larmor , Emil Cohn , Friedrich Hasenöhrl , Max Planck , Max von Laue , Gilbert Newton Lewis and Richard Chase Tolman , and others. In addition, polemics exist about alleged contributions of others such as Olinto De Pretto who according to some mathematical scholars did not create relativity but was the first to use the equation. [ 2 ] Einstein's first wife Mileva Marić was featured in a PBS biography which claimed she made uncredited contributions, but the network later wrote that the show was "factually flawed and ultimately misleading" and these claims have no foundation according to serious scholars. [ 1 ]
In his History of the theories of ether and electricity from 1953, E. T. Whittaker claimed that relativity is the creation of Poincaré and Lorentz and attributed to Einstein's papers only little importance. [ 3 ] However, most historians of science, like Gerald Holton , Arthur I. Miller , Abraham Pais , John Stachel , or Olivier Darrigol have other points of view. They admit that Lorentz and Poincaré developed the mathematics of special relativity, and many scientists originally spoke about the "Lorentz–Einstein theory". But they argue that it was Einstein who eliminated the classical ether and demonstrated the relativity of space and time. They also argue that Poincaré demonstrated the relativity of space and time only in his philosophical writings, but in his physical papers he maintained the ether as a privileged frame of reference that is perfectly undetectable, and continued (like Lorentz) to distinguish between "real" lengths and times measured by observers at rest within the aether, and "apparent" lengths and times measured by observers in motion within the aether. [ B 1 ] [ B 2 ] [ B 3 ] [ B 4 ] [ B 5 ] Darrigol summarizes:
Most of the components of Einstein's paper appeared in others' anterior works on the electrodynamics of moving bodies. Poincaré and Alfred Bucherer had the relativity principle. Lorentz and Larmor had most of the Lorentz transformations, Poincaré had them all. Cohn and Bucherer rejected the ether. Poincaré, Cohn, and Abraham had a physical interpretation of Lorentz's local time. Larmor and Cohn alluded to the dilation of time. Lorentz and Poincaré had the relativistic dynamics of the electron. None of these authors, however, dared to reform the concepts of space and time. None of them imagined a new kinematics based on two postulates. None of them derived the Lorentz transformations on this basis. None of them fully understood the physical implications of these transformations. It all was Einstein's unique feat. [ B 6 ]
The following facts are well established and referable:
In a paper that was written in 1914 and published in 1921, [ 9 ] Lorentz expressed appreciation for Poincaré's Palermo paper (1906) [ 10 ] on relativity. Lorentz stated:
I did not indicate the transformation which suits best. That was done by Poincaré and then by Mr. Einstein and Minkowski. [...] Because I had not thought of the direct way which led there, and because I had the idea that there is an essential difference between systems x, y, z, t and x′, y′, z′, t′. In one we use – such was my thought – coordinate axes which have a fixed position in the aether and which we can call "true" time; in the other system, on the contrary, we would deal with simple auxiliary quantities whose introduction is only a mathematical artifice. [...] I did not establish the principle of relativity as rigorously and universally true. Poincaré, on the contrary, obtained a perfect invariance of the equations of electrodynamics, and he formulated the "postulate of relativity", terms which he was the first to employ. [...] Let us add that by correcting the imperfections of my work he never reproached me for them.
However, a 1916 reprint of his main work "The theory of electrons" contains notes (written in 1909 and 1915) in which Lorentz sketched the differences between his results and that of Einstein as follows: [ 11 ]
[p. 230]: the chief difference [is] that Einstein simply postulates what we have deduced, with some difficulty and not altogether satisfactorily, from the fundamental equations of the electromagnetic field. [p. 321]: The chief cause of my failure was my clinging to the idea that the variable t only can be considered as the true time and that my local time t′ must be regarded as no more than an auxiliary mathematical quantity. In Einstein's theory, on the contrary, t′ plays the same part as t; if we want to describe phenomena in terms of x′, y′, z′, t′ we must work with these variables exactly as we could do with x, y, z, t.
Regarding the fact that in this book Lorentz only mentioned Einstein and not Poincaré in connection with a) the synchronisation by light signals, b) the reciprocity of the Lorentz transformation, and c) the relativistic transformation law for charge density, Janssen comments: [ B 7 ]
[p.90]: My guess is that it has to do with the fact that Einstein made the physical interpretation of the Lorentz transformation the basis for a remarkably clear and simple discussion of the electrodynamics of moving bodies, whereas Poincaré's remarks on the physical interpretation of Lorentz transformed quantities may have struck Lorentz as inconsequential philosophical asides in expositions that otherwise closely followed his own. I also have a sense that Lorentz found Einstein's physically very intuitive approach more appealing than Poincaré's rather abstract but mathematically more elegant approach.
And at a conference on the Michelson–Morley experiment in 1927 at which Lorentz and Michelson were present, Michelson suggested that Lorentz was the initiator of the theory of relativity. Lorentz then replied: [ 12 ]
I considered my time transformation only as a heuristic working hypothesis. So the theory of relativity is really solely Einstein's work. And there can be no doubt that he would have conceived it even if the work of all his predecessors in the theory of this field had not been done at all. His work is in this respect independent of the previous theories.
Poincaré attributed the development of the new mechanics almost entirely to Lorentz. He only mentioned Einstein in connection with the photoelectric effect , [ 13 ] but not in connection with special relativity. For example, in 1912 Poincaré raised the question whether "the mechanics of Lorentz" would still exist after the development of the quantum theory . He wrote: [ 13 ]
In all instances in which it differs from that of Newton, the mechanics of Lorentz endures. We continue to believe that no body in motion will ever be able to exceed the speed of light; that the mass of a body is not a constant, but depends on its speed and the angle formed by this speed with the force which acts upon the body; that no experiment will ever be able to determine whether a body is at rest or in absolute motion either in relation to absolute space or even in relation to the ether.
It is now known that Einstein was well aware of the scientific research of his time. The well-known historian of science, Jürgen Renn, Director of the Max Planck Institute for the History of Science , wrote on Einstein's contributions to the Annalen der Physik : [ 14 ]
The Annalen also served as a source of modest additional income for Einstein, who wrote more than twenty reports for its Beiblätter – mainly on the theory of heat – thus demonstrating an impressive mastery of the contemporary literature. This activity started in 1905 [ 15 ] and probably resulted from his earlier publications in the Annalen in this field. Going by his publications between 1900 and early 1905, one would conclude that Einstein's specialty was thermodynamics.
Einstein wrote in 1907 [ 16 ] that one needed only to realize that an auxiliary quantity that was introduced by Lorentz and that he called "local time" can simply be defined as "time". In 1909 [ 17 ] and 1912 [ 18 ] Einstein explained: [ B 8 ]
...it is impossible to base a theory of the transformation laws of space and time on the principle of relativity alone. As we know, this is connected with the relativity of the concepts of "simultaneity" and "shape of moving bodies." To fill this gap, I introduced the principle of the constancy of the velocity of light, which I borrowed from H. A. Lorentz's theory of the stationary luminiferous ether, and which, like the principle of relativity, contains a physical assumption that seemed to be justified only by the relevant experiments (experiments by Fizeau, Rowland, etc.) [ 18 ]
But Einstein and his supporters took the position that this "light postulate" together with the principle of relativity renders the ether superfluous and leads directly to Einstein's version of relativity. It is also known [ 19 ] that Einstein had been reading and studying Poincaré's 1902 book Science and hypothesis well before 1905, which included:
Einstein refers to Poincaré in connection with the inertia of energy in 1906 [ 20 ] and the non-Euclidean geometry in 1921, [ 21 ] but not in connection with the Lorentz transformation, the relativity principle or the synchronization procedure by light signals. However, in the last years before his death Einstein acknowledged some of Poincaré's contributions (according to Darrigol, maybe because his biographer Pais in 1950 sent Einstein a copy of Poincaré's Palermo paper, which Einstein said he had not read before). Einstein wrote in 1953: [ B 9 ]
There is no doubt, that the special theory of relativity, if we regard its development in retrospect, was ripe for discovery in 1905. Lorentz had already recognized that the transformations named after him are essential for the analysis of Maxwell's equations , and Poincaré deepened this insight still further. Concerning myself, I knew only Lorentz's important work of 1895 [...] but not Lorentz's later work, nor the consecutive investigations by Poincaré. In this sense my work of 1905 was independent. [...] The new feature of it was the realization of the fact that the bearing of the Lorentz transformation transcended its connection with Maxwell's equations and was concerned with the nature of space and time in general. A further new result was that the "Lorentz invariance" is a general condition for any physical theory.
This section cites notable publications where people have expressed a view on the issues outlined above.
In 1954, Sir Edmund Taylor Whittaker , an English mathematician and historian of science, credited Henri Poincaré with the equation E = mc² , and he included a chapter entitled The Relativity Theory of Poincaré and Lorentz in his book A History of the Theories of Aether and Electricity . [ B 10 ] He credited Poincaré and Lorentz, and especially alluded to Lorentz's 1904 paper (dated by Whittaker as 1903), Poincaré's St. Louis speech ( The Principles of Mathematical Physics ) of September 1904, and Poincaré's June 1905 paper. Whittaker attributed only little importance to Einstein's relativity paper, namely the formulation of the Doppler and aberration formulas. Max Born spent three years trying to dissuade Whittaker, but Whittaker insisted that everything of importance had already been said by Poincaré, and that Lorentz quite plainly had the physical interpretation. [ 22 ]
Whittaker's claims were criticized by Gerald Holton (1960, 1973). [ B 1 ] He argued that there are fundamental differences between the theories of Einstein on one hand, and Poincaré and Lorentz on the other hand. Einstein radically reformulated the concepts of space and time, and by that removed "absolute space" and thus the stationary luminiferous aether from physics. On the other hand, Holton argued that Poincaré and Lorentz still adhered to the stationary aether concept, and tried only to modify Newtonian dynamics, not to replace it. Holton argued that "Poincaré's silence" (i.e., why Poincaré never mentioned Einstein's contributions to relativity) was due to their fundamentally different conceptual viewpoints. Einstein's views on space and time and the abandonment of the aether were, according to Holton, not acceptable to Poincaré; therefore the latter only referred to Lorentz as the creator of the "new mechanics". Holton also pointed out that although Poincaré's 1904 St. Louis speech was "acute and penetrating" and contained a "principle of relativity" that is confirmed by experience and needs new development, it did not "enunciate a new relativity principle". He also alluded to mistakes of Whittaker, such as dating Lorentz's 1904 paper (published in April 1904) to 1903.
Views similar to Holton's were later (1967, 1970) expressed by his former student, Stanley Goldberg. [ B 11 ]
In a 1965 series of articles tracing the history of relativity, [ B 12 ] Keswani claimed that Poincaré and Lorentz should have the main credit for special relativity – arguing that Poincaré pointedly credited Lorentz multiple times, while Lorentz credited Poincaré and Einstein, refusing to take credit for himself. He also downplayed the theory of general relativity, saying "Einstein's general theory of relativity is only a theory of gravitation and of modifications in the laws of physics in gravitational fields". [ B 12 ] This would leave the special theory of relativity as the unique theory of relativity. Keswani also cited Vladimir Fock for this same opinion.
This series of articles prompted responses, among others from Herbert Dingle and Karl Popper .
Dingle said, among other things, ".. the 'principle of relativity' had various meanings, and the theories associated with it were quite distinct; they were not different forms of the same theory. Each of the three protagonists.... was very well aware of the others .... but each preferred his own views" [ B 13 ]
Karl Popper says "Though Einstein appears to have known Poincaré's Science and Hypothesis prior to 1905, there is no theory like Einstein's in this great book." [ B 14 ]
Keswani did not accept the criticism, and replied in two letters also published in the same journal ( [ B 15 ] and [ B 16 ] ). In his reply to Dingle, he argues that the three relativity theories were at heart the same: ".. they meant much that was common. And that much mattered the most." [ B 15 ]
The following year, Dingle commented on the history of crediting: "Until the first World War, Lorentz's and Einstein's theories were regarded as different forms of the same idea, but Lorentz, having priority and being a more established figure speaking a more familiar language, was credited with it." (Dingle 1967, Nature 216 p. 119–122).
Miller (1973, 1981) [ B 2 ] agreed with the analysis of Holton and Goldberg, and further argued that although the terminology (like the principle of relativity) used by Poincaré and Einstein was very similar, its content differed sharply. According to Miller, Poincaré used this principle to complete the aether-based "electromagnetic world view" of Lorentz and Abraham. He also argued that Poincaré distinguished (in his July 1905 paper) between "ideal" and "real" systems and electrons. That is, Lorentz's and Poincaré's usage of reference frames lacks an unambiguous physical interpretation, because in many cases they are only mathematical tools, while in Einstein's theory the processes in inertial frames are not only mathematically, but also physically equivalent. Miller wrote in 1981:
Miller (1996) [ B 2 ] argues that Poincaré was guided by empiricism, and was willing to admit that experiments might prove relativity wrong, and so Einstein is more deserving of credit, even though he might have been substantially influenced by Poincaré's papers. Miller also argues that "Emphasis on conventionalism ... led Poincaré and Lorentz to continue to believe in the mathematical and observational equivalence of special relativity and Lorentz's electron theory. This is incorrect." [p. 96] Instead, Miller claims that the theories are mathematically equivalent but not physically equivalent. [p. 91–92]
In his 1982 Einstein biography Subtle is the Lord , [ B 3 ] Abraham Pais argued that Poincaré "comes near" to discovering special relativity (in his St. Louis lecture of September 1904, and the June 1905 paper), but eventually he failed, because in 1904 and also later in 1909, Poincaré treated length contraction as a third independent hypothesis besides the relativity principle and the constancy of the speed of light. According to Pais, Poincaré thus never understood (or at least he never accepted) special relativity, in which the whole theory including length contraction can simply be derived from two postulates. Consequently, he sharply criticized Whittaker's chapter on the "Relativity theory of Poincaré and Lorentz", saying "how well the author's lack of physical insight matches his ignorance of the literature", although Pais admitted that both he and his colleagues regarded the original version of Whittaker's History as a masterpiece. Although he was apparently trying to make a point concerning Whittaker's treatment of the origin of special relativity, Pais' phrasing of that statement was rebuked by at least one notable reviewer of his 1982 book as being "scurrilous" and "lamentable". [ 23 ] Also in contrast to Pais' overgeneralized claim, notable scientists such as Max Born referred to parts of Whittaker's second volume, especially the history of quantum mechanics, as "the most amazing feats of learning, insight, and discriminations" [ 24 ] while Freeman Dyson says of the two volumes of Whittaker's second edition: "it is likely that this is the most scholarly and generally authoritative history of its period that we shall ever get." [ 25 ]
Pais goes on to argue that Lorentz never abandoned the stationary aether concept, either before or after 1905:
In several papers, Elie Zahar (1983, 2000) [ B 17 ] argued that both Einstein (in his June paper) and Poincaré (in his July paper) independently discovered special relativity. He said that " though Whittaker was unjust towards Einstein, his positive account of Poincaré's actual achievement contains much more than a simple grain of truth ". According to him, it was Poincaré's unsystematic and sometimes erroneous statements regarding his philosophical papers (often connected with conventionalism ) which hindered many from giving him due credit. In his opinion, Poincaré was rather a "structural realist", and from that he concludes that Poincaré actually adhered to the relativity of time and space, while his allusions to the aether are of secondary importance. He continues that, due to his treatment of gravitation and four-dimensional space, Poincaré's 1905/6 paper was superior to Einstein's 1905 paper. Yet Zahar also gives credit to Einstein, who introduced mass–energy equivalence, and also transcended special relativity by taking a path leading to the development of general relativity.
John Stachel (1995) [ B 18 ] argued that there is a debate over the respective contributions of Lorentz, Poincaré and Einstein to relativity. These questions depend on the definition of relativity, and Stachel argued that kinematics and the new view of space and time is the core of special relativity, and dynamical theories must be formulated in accordance with this scheme. Based on this definition, Einstein is the main originator of the modern understanding of special relativity. In his opinion, Lorentz interpreted the Lorentz transformation only as a mathematical device, while Poincaré's thinking was much nearer to the modern understanding of relativity. Yet Poincaré still believed in the dynamical effects of the aether and distinguished between observers being at rest or in motion with respect to it. Stachel wrote: " He never organized his many brilliant insights into a coherent theory that resolutely discarded the aether and the absolute time or transcended its electrodynamic origins to derive a new kinematics of space and time on a formulation of the relativity principle that makes no reference to the ether ".
In his book Einstein's clocks, Poincaré's maps (2002), [ B 5 ] [ B 19 ] Peter Galison compared the approaches of both Poincaré and Einstein to reformulate the concepts of space and time. He wrote: " Did Einstein really discover relativity? Did Poincaré already have it? These old questions have grown as tedious as they are fruitless ." This is because it depends on which parts of relativity one considers essential: the rejection of the aether, the Lorentz transformation, the connection with the nature of space and time, predictions of experimental results, or other parts. For Galison, it is more important to acknowledge that both thinkers were concerned with clock synchronization problems, and thus both developed the new operational meaning of simultaneity. However, while Poincaré followed a constructive approach and still adhered to the concepts of Lorentz's stationary aether and the distinction between "apparent" and "true" times, Einstein abandoned the aether and therefore all times in different inertial frames are equally valid. Galison argued that this does not mean that Poincaré was conservative, since Poincaré often alluded to the revolutionary character of the "new mechanics" of Lorentz.
In Anatoly Logunov 's book [ B 20 ] about Poincaré's relativity theory, there is an English translation (on p. 113, using modern notations) of the part of Poincaré's 1900 article containing E=mc 2 . Logunov states that Poincaré's two 1905 papers are superior to Einstein's 1905 paper. According to Logunov, Poincaré was the first scientist to recognize the importance of invariance under the Poincaré group as a guideline for developing new theories in physics. In chapter 9 of this book, Logunov points out that Poincaré's second paper was the first one to formulate a complete theory of relativistic dynamics, containing the correct relativistic analogue of Newton's F=ma .
On p. 142, Logunov points out that Einstein wrote reviews for the Beiblätter Annalen der Physik , writing 21 reviews in 1905. In his view, this contradicts the claims that Einstein worked in relative isolation and with limited access to the scientific literature. Among the papers reviewed in the Beiblätter in the fourth (of 24) issue of 1905, there is a review of Lorentz' 1904 paper by Richard Gans , which contains the Lorentz transformations. In Logunov's view, this supports the claim that Einstein was familiar with Lorentz's paper containing the correct relativistic transformation in early 1905, while his June 1905 paper does not mention Lorentz in connection with this result.
Harvey R. Brown (2005) [ B 21 ] (who favors a dynamical view of relativistic effects similar to Lorentz, but "without a hidden aether frame") wrote about the road to special relativity from Michelson to Einstein in section 4:
Regarding Lorentz's work before 1905, Brown wrote about the development of Lorentz's " theorem of corresponding states " and then continued:
Then, on Poincaré's contribution to relativity:
However, Brown continued with the reasons which speak against crediting Poincaré with co-discovery:
Brown denies the idea of other authors and historians that the major difference between Einstein and his predecessors is Einstein's rejection of the aether, because it is always possible to add for whatever reason the notion of a privileged frame to special relativity as long as one accepts that it will remain unobservable, and also Poincaré argued that " some day, no doubt, the aether will be thrown aside as useless ". However Brown gave some examples of what in his opinion were the new features in Einstein's work:
After that, Brown develops his own dynamical interpretation of special relativity as opposed to the kinematical approach of Einstein's 1905 paper (although he says that this dynamical view is already contained in Einstein's 1905 paper, "masqueraded in the language of kinematics", p. 82), and the modern understanding of spacetime.
Roger Cerf (2006) [ B 22 ] gave priority to Einstein for developing special relativity, and criticized the assertions of Leveugle and others concerning the priority of Poincaré. While Cerf agreed that Poincaré made important contributions to relativity, he argued (following Pais) that Poincaré " stopped short before the crucial step " because he handled length contraction as a "third hypothesis", and therefore Poincaré lacked a complete understanding of the basic principles of relativity. " Einstein's crucial step was that he abandoned the mechanistic ether in favor of a new kinematics. " He also denies the idea that Poincaré invented E=mc² in its modern relativistic sense, because he did not realize the implications of this relationship. Cerf considers Leveugle's Hilbert–Planck–Einstein connection an implausible conspiracy theory .
Katzir (2005) [ B 23 ] argued that " Poincaré's work should not be seen as an attempt to formulate special relativity, but as an independent attempt to resolve questions in electrodynamics. " Contrary to Miller and others, Katzir thinks that Poincaré's development of electrodynamics led him to the rejection of the pure electromagnetic world view (due to the non-electromagnetic Poincaré stresses introduced in 1905), and Poincaré's theory represents a " relativistic physics " which is guided by the relativity principle. In this physics, however, " Lorentz's theory and Newton's theory remained as the fundamental bases of electrodynamics and gravitation ."
Walter (2005) argues that both Poincaré and Einstein put forward the theory of relativity in 1905. In 2007 he wrote that, although Poincaré formally introduced four-dimensional spacetime in 1905/6, he was still clinging to the idea of "Galilei spacetime". That is, Poincaré preferred Lorentz covariance over Galilei covariance for phenomena accessible to experimental tests; yet in terms of space and time, Poincaré preferred Galilei spacetime over Minkowski spacetime, and length contraction and time dilation " are merely apparent phenomena due to motion with respect to the ether ". This is the fundamental difference between the two principal approaches to relativity theory, namely that of "Lorentz and Poincaré" on one side, and "Einstein and Minkowski" on the other side. [ B 24 ] | https://en.wikipedia.org/wiki/Relativity_priority_dispute |
A relaxase is a single-strand DNA transesterase enzyme produced by some prokaryotes and viruses . Relaxases are responsible for site- and strand-specific nicks in unwound double-stranded DNA . Known relaxases belong to the rolling circle replication (RCR) initiator superfamily of enzymes and fall into two broad classes: replicative (Rep) and mobilization (Mob). [ 1 ] The nicks produced by Rep relaxases initiate plasmid or virus RCR. Mob relaxases nick at the origin of transfer (oriT) to initiate the process of DNA mobilization and transfer known as bacterial conjugation . Relaxases are so named because the single-stranded DNA nick that they catalyze leads to relaxation of helical tension.
Known relaxases are metal ion dependent tyrosine transesterases. This means that they use a metal ion to aid the transfer of an ester bond from the DNA phosphodiester backbone to a catalytic tyrosine side chain , resulting in a long-lived covalent phosphotyrosine intermediate that essentially unifies the nicked DNA strand and the enzyme as one molecule. Preliminary reports of relaxase inhibition by small molecules that mimic intermediates of this reaction were first reported in 2007. [ 2 ] Such inhibition has implications related to preventing the propagation of antibiotic resistance in clinical settings.
The first relaxase x-ray crystal and NMR structures – of Rep relaxases from tomato yellow leaf curl virus (TYLCV) [ 3 ] and adeno associated virus serotype 5 (AAV-5) [ 4 ] – were solved in 2002. These revealed compact molecules composed of five-stranded, antiparallel beta sheet cores and peripheral alpha helices . A histidine -rich motif , previously identified by sequence conservation , was shown to be a metal ion binding site located on the beta sheet core, nearby the carboxy-terminal catalytic tyrosine residue. Later structures of the Mob relaxases TrwC from plasmid R388 [ 5 ] and TraI from the F-plasmid [ 6 ] confirmed that the Mob and Rep classes are evolutionarily related to one another through circular permutation . This means that they share a general fold , but the amino-terminal sequence of one is homologous to the C-terminus of the other, and vice versa. Thus the catalytic tyrosines of TraI and TrwC are amino-terminal rather than carboxy-terminal.
Relaxase nomenclature is varied. In conjugative bacterial plasmids, Mob-class relaxases go by names such as TraI (in plasmid RP4), VirD2 (pTi), TrwC (R388), TraI ( F-plasmid ), MobB (CloDF13), or TrsK (pGO1). | https://en.wikipedia.org/wiki/Relaxase |
In magnetic resonance imaging (MRI) and nuclear magnetic resonance spectroscopy (NMR), an observable nuclear spin polarization ( magnetization ) is created by a homogeneous magnetic field. This field makes the magnetic dipole moments of the sample precess at the resonance ( Larmor ) frequency of the nuclei. At thermal equilibrium, nuclear spins precess randomly about the direction of the applied field. They become abruptly phase coherent when they are hit by radiofrequency (RF) pulses at the resonant frequency, created orthogonal to the field. The RF pulses cause the population of spin-states to be perturbed from their thermal equilibrium value. The generated transverse magnetization can then induce a signal in an RF coil that can be detected and amplified by an RF receiver. The return of the longitudinal component of the magnetization to its equilibrium value is termed spin-lattice relaxation while the loss of phase-coherence of the spins is termed spin-spin relaxation, which is manifest as an observed free induction decay (FID). [ 1 ]
For spin- 1 / 2 nuclei (such as 1 H), the polarization due to spins oriented with the field N − relative to the spins oriented against the field N + is given by the Boltzmann distribution : N−/N+ = exp(ΔE / kT),
where ΔE is the energy level difference between the two populations of spins, k is the Boltzmann constant, and T is the sample temperature. At room temperature, the number of spins in the lower energy level, N−, slightly outnumbers the number in the upper level, N+. The energy gap between the spin-up and spin-down states in NMR is minute by atomic emission standards at magnetic fields conventionally used in MRI and NMR spectroscopy. Energy emission in NMR must be induced through a direct interaction of a nucleus with its external environment rather than by spontaneous emission . This interaction may be through the electrical or magnetic fields generated by other nuclei, electrons, or molecules. Spontaneous emission of energy is a radiative process involving the release of a photon and typified by phenomena such as fluorescence and phosphorescence. As stated by Abragam, the probability per unit time of the nuclear spin-1/2 transition from the + into the − state through spontaneous emission of a photon is a negligible phenomenon. [ 2 ] [ 3 ] Rather, the return to equilibrium is a much slower thermal process induced by the fluctuating local magnetic fields due to molecular or electron (free radical) rotational motions that return the excess energy in the form of heat to the surroundings.
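A minimal Python sketch of this Boltzmann polarization, assuming protons at 1.5 T and room temperature (values chosen for illustration, not taken from the text):

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K
gamma_1H = 267.522e6    # proton gyromagnetic ratio, rad/(s*T)

def spin_half_population_ratio(B0, T=298.0):
    """Return N-/N+ for spin-1/2 nuclei from the Boltzmann distribution."""
    hbar = h / (2 * math.pi)
    delta_E = hbar * gamma_1H * B0      # Zeeman energy gap between the two levels
    return math.exp(delta_E / (k_B * T))

ratio = spin_half_population_ratio(1.5)     # protons at 1.5 T
excess = (ratio - 1) / (ratio + 1)          # fractional population excess
print(f"N-/N+ = {ratio:.8f}, excess polarization ~ {excess * 1e6:.1f} ppm")
```

The tiny excess (a few parts per million at clinical field strengths) is what makes NMR an intrinsically insensitive technique.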
The decay of RF-induced NMR spin polarization is characterized in terms of two separate processes, each with their own time constants. One process, called T 1 , is responsible for the loss of resonance intensity following pulse excitation. The other process, called T 2 , characterizes the width or broadness of resonances. Stated more formally, T 1 is the time constant for the physical processes responsible for the relaxation of the components of the nuclear spin magnetization vector M parallel to the external magnetic field, B 0 (which is conventionally designated as the z -axis). T 2 relaxation affects the coherent components of M perpendicular to B 0 . In conventional NMR spectroscopy, T 1 limits the pulse repetition rate and affects the overall time an NMR spectrum can be acquired. Values of T 1 range from milliseconds to several seconds, depending on the size of the molecule, the viscosity of the solution, the temperature of the sample, and the possible presence of paramagnetic species (e.g., O 2 or metal ions).
The longitudinal (or spin-lattice) relaxation time T 1 is the decay constant for the recovery of the z component of the nuclear spin magnetization, M z , towards its thermal equilibrium value, M z,eq . In general, Mz(t) = Mz,eq − [Mz,eq − Mz(0)] e^(−t/T1).
In the specific case where the magnetization has been tipped fully into the xy plane, so that Mz(0) = 0, the recovery is Mz(t) = Mz,eq (1 − e^(−t/T1)),
i.e. the magnetization recovers to 63% of its equilibrium value after one time constant T 1 .
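A minimal sketch of this recovery, assuming the saturation-recovery form above with Mz(0) = 0 and an illustrative T 1 of one second:

```python
import numpy as np

def longitudinal_recovery(t, T1, Mz_eq=1.0, Mz0=0.0):
    """Mz(t) relaxing from Mz0 toward its thermal equilibrium value Mz_eq."""
    return Mz_eq - (Mz_eq - Mz0) * np.exp(-t / T1)

T1 = 1.0                                   # seconds, illustrative value
t = np.array([0.0, T1, 3 * T1, 5 * T1])
print(longitudinal_recovery(t, T1))        # ~[0.000, 0.632, 0.950, 0.993]
```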
T 1 relaxation involves redistributing the populations of the nuclear spin states in order to reach the thermal equilibrium distribution . By definition, this is not energy conserving. Moreover, spontaneous emission is negligibly slow at NMR frequencies. Hence truly isolated nuclear spins would show negligible rates of T 1 relaxation. However, a variety of relaxation mechanisms allow nuclear spins to exchange energy with their surroundings, the lattice , allowing the spin populations to equilibrate. The fact that T 1 relaxation involves an interaction with the surroundings is the origin of the alternative description, spin-lattice relaxation .
Note that the rates of T 1 relaxation (i.e., 1/ T 1 ) are generally strongly dependent on the NMR frequency and so vary considerably with magnetic field strength B . Small amounts of paramagnetic substances in a sample greatly speed up relaxation. By degassing, and thereby removing dissolved oxygen , the T 1 / T 2 of liquid samples can easily rise to the order of ten seconds.
Especially for molecules exhibiting slowly relaxing ( T 1 ) signals, the technique spin saturation transfer (SST) provides information on chemical exchange reactions. The method is widely applicable to fluxional molecules . This magnetization transfer technique provides rates, provided that they exceed 1/ T 1 . [ 4 ]
The transverse (or spin-spin) relaxation time T 2 is the decay constant for the component of M perpendicular to B 0 , designated M xy , M T , or M⊥ . For instance, initial xy magnetization at time zero will decay to zero (i.e. equilibrium) as follows: Mxy(t) = Mxy(0) e^(−t/T2),
i.e. the transverse magnetization vector drops to 37% of its original magnitude after one time constant T 2 .
T 2 relaxation is a complex phenomenon, but at its most fundamental level, it corresponds to a decoherence of the transverse nuclear spin magnetization. Random fluctuations of the local magnetic field lead to random variations in the instantaneous NMR precession frequency of different spins. As a result, the initial phase coherence of the nuclear spins is lost, until eventually the phases are disordered and there is no net xy magnetization. Because T 2 relaxation involves only the phases of other nuclear spins it is often called "spin-spin" relaxation.
T 2 values are generally much less dependent on field strength, B, than T 1 values.
A Hahn echo decay experiment can be used to measure the T 2 time. The size of the echo is recorded for different spacings of the two applied pulses. This reveals the decoherence which is not refocused by the 180° pulse. In simple cases, an exponential decay is measured which is described by the T 2 time.
In an idealized system, all nuclei in a given chemical environment, in a magnetic field, precess with the same frequency. However, in real systems, there are minor differences in chemical environment which can lead to a distribution of resonance frequencies around the ideal. Over time, this distribution can lead to a dispersion of the tight distribution of magnetic spin vectors, and loss of signal ( free induction decay ). In fact, for most magnetic resonance experiments, this "relaxation" dominates. This results in dephasing .
However, decoherence because of magnetic field inhomogeneity is not a true "relaxation" process; it is not random, but dependent on the location of the molecule in the magnet. For molecules that aren't moving, the deviation from ideal relaxation is consistent over time, and the signal can be recovered by performing a spin echo experiment.
The corresponding transverse relaxation time constant is thus T 2 * , which is usually much smaller than T 2 . The relation between them is: 1/T2* = 1/T2 + γ ΔB0,
where γ represents the gyromagnetic ratio , and ΔB 0 the difference in strength of the locally varying field. [ 5 ] [ 6 ]
Unlike T 2 , T 2 * is influenced by magnetic field gradient irregularities. The T 2 * relaxation time is always shorter than the T 2 relaxation time and is typically milliseconds for water samples in imaging magnets.
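A small sketch of this relation, assuming the form 1/T2* = 1/T2 + γΔB0 given above; the T 2 and ΔB 0 values are hypothetical and only meant to show how a modest field inhomogeneity shortens the observed decay:

```python
def t2_star(T2, delta_B0, gamma=267.522e6):
    """Effective transverse relaxation time including static field inhomogeneity.

    Assumes 1/T2* = 1/T2 + gamma * delta_B0, with gamma in rad/(s*T) and delta_B0 in tesla.
    """
    return 1.0 / (1.0 / T2 + gamma * delta_B0)

# Example: T2 = 100 ms and a 0.1 microtesla spread in the local field
print(t2_star(0.100, 1e-7))   # ~0.027 s, i.e. T2* is much shorter than T2
```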
In NMR systems, the following relation holds absolutely true: [ 7 ] T2 ≤ 2T1. In most situations (but not in principle) T1 is greater than T2. The cases in which 2T1 > T2 > T1 are rare, but not impossible. [ 8 ]
Bloch equations are used to calculate the nuclear magnetization M = ( M x , M y , M z ) as a function of time when relaxation times T 1 and T 2 are present. Bloch equations are phenomenological equations that were introduced by Felix Bloch in 1946: [ 9 ] dM/dt = γ M × B(t) − (Mx x̂ + My ŷ)/T2 − (Mz − Mz,eq) ẑ/T1,
where × is the cross-product, γ is the gyromagnetic ratio and B ( t ) = ( B x ( t ), B y ( t ), B 0 + B z (t)) is the magnetic flux density experienced by the nuclei.
The z component of the magnetic flux density B is typically composed of two terms: one, B 0 , is constant in time, the other one, B z (t), is time dependent. It is present in magnetic resonance imaging and helps with the spatial decoding of the NMR signal.
The equations listed above in the sections on T 1 and T 2 relaxation are those contained in the Bloch equations.
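A minimal numerical sketch of the Bloch equations in the form written above; the field, relaxation times and step size are illustrative, and simple forward-Euler integration is used only for brevity:

```python
import numpy as np

def bloch_rhs(M, B, T1, T2, Mz_eq=1.0, gamma=267.522e6):
    """Right-hand side dM/dt of the Bloch equations in the laboratory frame."""
    dM = gamma * np.cross(M, B)          # precession about the instantaneous field
    dM[0] -= M[0] / T2                   # transverse relaxation of Mx
    dM[1] -= M[1] / T2                   # transverse relaxation of My
    dM[2] -= (M[2] - Mz_eq) / T1         # longitudinal relaxation of Mz
    return dM

M = np.array([1.0, 0.0, 0.0])            # magnetization tipped into the xy plane
B = np.array([0.0, 0.0, 1e-6])           # weak static field along z (illustrative)
T1, T2, dt = 1.0, 0.1, 1e-5
for _ in range(10000):                   # integrate for 0.1 s
    M = M + dt * bloch_rhs(M, B, T1, T2)
print(M)                                 # |Mxy| has decayed by roughly 1/e; Mz has partly recovered
```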
Solomon equations are used to calculate the transfer of magnetization as a result of relaxation in a dipolar system. They can be employed to explain the nuclear Overhauser effect , which is an important tool in determining molecular structure.
Following is a table of the approximate values of the two relaxation time constants for hydrogen nuclear spins in nonpathological human tissues.
Following is a table of the approximate values of the two relaxation time constants for chemicals that commonly show up in human brain magnetic resonance spectroscopy (MRS) studies, physiologically or pathologically .
The discussion above describes relaxation of nuclear magnetization in the presence of a constant magnetic field B 0 . This is called relaxation in the laboratory frame .
Another technique, called relaxation in the rotating frame , is the relaxation of nuclear magnetization in the presence of the field B 0 together with a time-dependent magnetic field B 1 . The field B 1 rotates in the plane perpendicular to B 0 at the Larmor frequency of the nuclei in the B 0 . The magnitude of B 1 is typically much smaller than the magnitude of B 0 . Under these circumstances the relaxation of the magnetization is similar to laboratory frame relaxation in a field B 1 . The decay constant for the recovery of the magnetization component along B 1 is called the spin-lattice relaxation time in the rotating frame and is denoted T 1ρ .
Relaxation in the rotating frame is useful because it provides information on slow motions of nuclei.
Relaxation of nuclear spins requires a microscopic mechanism for a nucleus to change orientation with respect to the applied magnetic field and/or interchange energy with the surroundings (called the lattice). The most common mechanism is the magnetic dipole-dipole interaction between the magnetic moment of a nucleus and the magnetic moment of another nucleus or other entity (electron, atom, ion, molecule). This interaction depends on the distance between the pair of dipoles (spins) but also on their orientation relative to the external magnetic field. Several other relaxation mechanisms also exist. The chemical shift anisotropy (CSA) relaxation mechanism arises whenever the electronic environment around the nucleus is non-spherical; the magnitude of the electronic shielding of the nucleus will then depend on the molecular orientation relative to the (fixed) external magnetic field. The spin rotation (SR) relaxation mechanism arises from an interaction between the nuclear spin and a coupling to the overall molecular rotational angular momentum. Nuclei with spin I ≥ 1 will have not only a nuclear dipole moment but also a quadrupole moment. The nuclear quadrupole interacts with the electric field gradient at the nucleus, which is again orientation dependent as with the other mechanisms described above, leading to the so-called quadrupolar relaxation mechanism.
Molecular reorientation or tumbling can then modulate these orientation-dependent spin interaction energies.
According to quantum mechanics , time-dependent interaction energies cause transitions between the nuclear spin states which result in nuclear spin relaxation. The application of time-dependent perturbation theory in quantum mechanics shows that the relaxation rates (and times) depend on spectral density functions that are the Fourier transforms of the autocorrelation function of the fluctuating magnetic dipole interactions. [ 12 ] The form of the spectral density functions depend on the physical system, but a simple approximation called the BPP theory is widely used.
Another relaxation mechanism is the electrostatic interaction between a nucleus with an electric quadrupole moment and the electric field gradient that exists at the nuclear site due to surrounding charges. Thermal motion of a nucleus can result in fluctuating electrostatic interaction energies. These fluctuations produce transitions between the nuclear spin states in a similar manner to the magnetic dipole-dipole interaction.
In 1948, Nicolaas Bloembergen , Edward Mills Purcell , and Robert Pound proposed the so-called Bloembergen-Purcell-Pound theory (BPP theory) to explain the relaxation constant of a pure substance in correspondence with its state, taking into account the effect of tumbling motion of molecules on the local magnetic field disturbance. [ 13 ] The theory agrees well with experiments on pure substances, but not for complicated environments such as the human body.
This theory makes the assumption that the autocorrelation function of the microscopic fluctuations causing the relaxation is proportional to e^(−t/τc) , where τc is called the correlation time . From this theory, one can get T 1 > T 2 for magnetic dipolar relaxation: 1/T1 = K [ τc/(1 + ω0²τc²) + 4τc/(1 + 4ω0²τc²) ] and 1/T2 = (K/2) [ 3τc + 5τc/(1 + ω0²τc²) + 2τc/(1 + 4ω0²τc²) ],
where ω0 is the Larmor frequency corresponding to the strength of the main magnetic field B0 , τc is the correlation time of the molecular tumbling motion, and K = (3μ0²/160π²)(ℏ²γ⁴/r⁶) is a constant defined for spin-1/2 nuclei, with μ0 being the magnetic permeability of free space, ℏ = h/2π the reduced Planck constant , γ the gyromagnetic ratio of such species of nuclei, and r the distance between the two nuclei carrying magnetic dipole moment.
Taking for example the H 2 O molecules in liquid phase without the contamination of oxygen-17 , the value of K is 1.02×10 10 s −2 and the correlation time τc is on the order of picoseconds (10 −12 s), while hydrogen nuclei 1 H ( protons ) at 1.5 tesla precess at a Larmor frequency of approximately 64 MHz (strictly, BPP theory uses the angular frequency ω0 = 2πν). We can then estimate, using τc = 5×10 −12 s: T1 ≈ 3.92 s,
which is close to the experimental value, 3.6 s. Meanwhile, we can see that at this extreme case, T 1 equals T 2 .
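The estimate can be reproduced with a short calculation, assuming the standard BPP dipolar expressions quoted above (the prefactors are those commonly stated for like spin-1/2 pairs and should be checked against the cited literature):

```python
import math

K = 1.02e10                      # s^-2, value quoted above for water protons
tau_c = 5e-12                    # s, correlation time of molecular tumbling
omega0 = 2 * math.pi * 64e6      # rad/s, proton Larmor angular frequency at 1.5 T

x = (omega0 * tau_c) ** 2        # ~4e-6 here: the extreme narrowing limit
R1 = K * (tau_c / (1 + x) + 4 * tau_c / (1 + 4 * x))
R2 = (K / 2) * (3 * tau_c + 5 * tau_c / (1 + x) + 2 * tau_c / (1 + 4 * x))

print(f"T1 = {1 / R1:.2f} s, T2 = {1 / R2:.2f} s")   # ~3.9 s each, close to the measured 3.6 s
```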
As follows from the BPP theory, measuring the T 1 times leads to internuclear distances r. One of the examples is accurate determinations of the metal – hydride (M-H) bond lengths in solutions by measurements of 1 H selective and non-selective T 1 times in variable-temperature relaxation experiments via the equation: [ 14 ] [ 15 ]
where r, frequency and T 1 are measured in Å, MHz and s, respectively, and I M is the spin of M. | https://en.wikipedia.org/wiki/Relaxation_(NMR) |
In the physical sciences, relaxation usually means the return of a perturbed system into equilibrium .
Each relaxation process can be categorized by a relaxation time τ . The simplest theoretical description of relaxation as a function of time t is an exponential law exp(− t / τ ) ( exponential decay ).
Let the homogeneous differential equation m (d²y/dt²) + γ (dy/dt) + k y = 0
model damped unforced oscillations of a weight on a spring.
The displacement will then be of the form y(t) = A e^(−t/T) cos(μt − δ). The constant T (= 2m/γ) is called the relaxation time of the system and the constant μ is the quasi-frequency.
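A small sketch evaluating this solution with hypothetical mass and damping values; the envelope falls to 1/e of its starting amplitude after one relaxation time T:

```python
import numpy as np

def damped_displacement(t, A, m, gamma_damp, mu, delta=0.0):
    """y(t) = A * exp(-t/T) * cos(mu*t - delta), with relaxation time T = 2m / gamma_damp."""
    T = 2.0 * m / gamma_damp
    return A * np.exp(-t / T) * np.cos(mu * t - delta)

m, gamma_damp, mu = 0.5, 0.2, 2.0     # kg, kg/s, rad/s (hypothetical values)
T = 2 * m / gamma_damp                # relaxation time: 5 s
t = np.linspace(0.0, 3 * T, 7)
print(damped_displacement(t, A=1.0, m=m, gamma_damp=gamma_damp, mu=mu))
```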
In an RC circuit containing a charged capacitor and a resistor, the voltage decays exponentially: V(t) = V0 e^(−t/RC).
The constant τ = RC is called the relaxation time or RC time constant of the circuit. A nonlinear oscillator circuit which generates a repeating waveform by the repetitive discharge of a capacitor through a resistance is called a relaxation oscillator .
In condensed matter physics , relaxation is usually studied as a linear response to a small external perturbation. Since the underlying microscopic processes are active even in the absence of external perturbations, one can also study "relaxation in equilibrium" instead of the usual "relaxation into equilibrium" (see fluctuation-dissipation theorem ).
In continuum mechanics , stress relaxation is the gradual disappearance of stresses from a viscoelastic medium after it has been deformed.
In dielectric materials, the dielectric polarization P depends on the electric field E . If E changes, P ( t ) reacts: the polarization relaxes towards a new equilibrium, i.e., the surface charges equalize. It is important in dielectric spectroscopy . Very long relaxation times are responsible for dielectric absorption .
The dielectric relaxation time is closely related to the electrical conductivity . In a semiconductor it is a measure of how long it takes excess charge to be neutralized by the conduction process. This relaxation time is small in metals and can be large in semiconductors and insulators .
An amorphous solid such as amorphous indomethacin displays a temperature dependence of molecular motion, which can be quantified as the average relaxation time for the solid in a metastable supercooled liquid or glass to approach the molecular motion characteristic of a crystal . Differential scanning calorimetry can be used to quantify enthalpy change due to molecular structural relaxation.
The term "structural relaxation" was introduced in the scientific literature in 1947/48 without any explanation, applied to NMR, and meaning the same as "thermal relaxation". [ 1 ] [ 2 ] [ 3 ]
In nuclear magnetic resonance (NMR), various relaxations are the properties that it measures.
In chemical kinetics , relaxation methods are used for the measurement of very fast reaction rates . A system initially at equilibrium is perturbed by a rapid change in a parameter such as the temperature (most commonly), the pressure, the electric field or the pH of the solvent. The return to equilibrium is then observed, usually by spectroscopic means, and the relaxation time measured. In combination with the chemical equilibrium constant of the system, this enables the determination of the rate constants for the forward and reverse reactions. [ 4 ]
A monomolecular, first order reversible reaction which is close to equilibrium can be visualized by the following symbolic structure: A ⇌ B, where the forward reaction proceeds with rate constant k and the reverse reaction with rate constant k′.
In other words, reactant A and product B are forming into one another based on reaction rate constants k and k'.
To solve for the concentration of A, recognize that the forward reaction ( A → B, with rate constant k ) causes the concentration of A to decrease over time, whereas the reverse reaction ( B → A, with rate constant k′ ) causes the concentration of A to increase over time.
Therefore, d[A]/dt = −k[A] + k′[B], where brackets around A and B indicate concentrations.
If we say that at t = 0, [A](t) = [A]0 , and applying the law of conservation of mass, we can say that at any time the sum of the concentrations of A and B must be equal to [A]0 , assuming the volume into which A and B are dissolved does not change: [A] + [B] = [A]0 ⇒ [B] = [A]0 − [A].
Substituting this value for [B] in terms of [A]0 and [A]( t ) yields d[A]/dt = −k[A] + k′[B] = −k[A] + k′([A]0 − [A]) = −(k + k′)[A] + k′[A]0 , which becomes the separable differential equation d[A] / ( −(k + k′)[A] + k′[A]0 ) = dt.
This equation can be solved by substitution to yield [A] = ( (k′ + k e^(−(k + k′)t)) / (k + k′) ) [A]0 , which satisfies the initial condition [A](0) = [A]0 and relaxes to the equilibrium value k′[A]0 / (k + k′) with relaxation time 1/(k + k′).
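A short sketch that evaluates this solution; the rate constants are hypothetical, and the decay rate k + k′ (relaxation time 1/(k + k′)) is what a temperature-jump experiment would measure:

```python
import numpy as np

def concentration_A(t, A0, k, k_rev):
    """[A](t) for the reversible reaction A <=> B, starting from pure A at concentration A0."""
    lam = k + k_rev                          # observed relaxation rate; tau = 1 / (k + k')
    return A0 * (k_rev + k * np.exp(-lam * t)) / lam

k, k_rev, A0 = 3.0, 1.0, 1.0                 # hypothetical rate constants (s^-1) and initial conc.
tau = 1.0 / (k + k_rev)
t = np.array([0.0, tau, 5 * tau])
print(concentration_A(t, A0, k, k_rev))      # relaxes from 1.0 toward k'/(k+k') * A0 = 0.25
```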
Consider a supersaturated portion of a cloud. Then shut off the updrafts, entrainment, and any other vapor sources/sinks and things that would induce the growth of the particles (ice or water). Then wait for this supersaturation to reduce and become just saturation (relative humidity = 100%), which is the equilibrium state. The time it takes for the supersaturation to dissipate is called relaxation time. It will happen as ice crystals or liquid water content grow within the cloud and will thus consume the contained moisture. The dynamics of relaxation are very important in cloud physics for accurate mathematical modelling .
In water clouds where the concentrations are larger (hundreds per cm 3 ) and the temperatures are warmer (thus allowing for much lower supersaturation rates as compared to ice clouds), the relaxation times will be very low (seconds to minutes). [ 5 ]
In ice clouds the concentrations are lower (just a few per liter) and the temperatures are colder (very high supersaturation rates) and so the relaxation times can be as long as several hours. Relaxation time is given as
where:
In astronomy , relaxation time relates to clusters of gravitationally interacting bodies, for instance, stars in a galaxy . The relaxation time is a measure of the time it takes for one object in the system (the "test star") to be significantly perturbed by other objects in the system (the "field stars"). It is most commonly defined as the time for the test star's velocity to change by of order itself. [ 6 ]
Suppose that the test star has velocity v . As the star moves along its orbit, its motion will be randomly perturbed by the gravitational field of nearby stars. The relaxation time can be shown to be [ 7 ] approximately t_r ≈ 0.34 σ³ / (G² m ρ ln Λ),
where ρ is the mean density, m is the test-star mass, σ is the 1d velocity dispersion of the field stars, and ln Λ is the Coulomb logarithm .
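A rough order-of-magnitude sketch, assuming the Spitzer-type expression quoted above; the velocity dispersion, stellar mass, density and Coulomb logarithm are illustrative globular-cluster-like values, not figures taken from the text:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m
YEAR = 3.156e7         # s

def relaxation_time(sigma, m, rho, ln_Lambda):
    """Two-body relaxation time t_r ~ 0.34 sigma^3 / (G^2 m rho ln_Lambda), SI units."""
    return 0.34 * sigma ** 3 / (G ** 2 * m * rho * ln_Lambda)

sigma = 6.0e3                      # 1-d velocity dispersion: 6 km/s (assumed)
m = 1.0 * M_SUN                    # typical field-star mass (assumed)
rho = 1.0e3 * M_SUN / PC ** 3      # mean density: 1000 solar masses per cubic parsec (assumed)
ln_Lambda = 12.0                   # Coulomb logarithm (assumed)

t_r = relaxation_time(sigma, m, rho, ln_Lambda)
print(f"t_r ~ {t_r / YEAR:.1e} yr")    # a few times 1e8 yr for these values
```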
Various events occur on timescales relating to the relaxation time, including core collapse , energy equipartition , and formation of a Bahcall-Wolf cusp around a supermassive black hole . | https://en.wikipedia.org/wiki/Relaxation_(physics) |
Relaxometry refers to the study and/or measurement of relaxation variables in Nuclear Magnetic Resonance and Magnetic Resonance Imaging . It is often referred to as time-domain NMR. In NMR , nuclear magnetic moments are used to measure specific physical and chemical properties of materials.
Relaxation of the nuclear spin system is crucial for all NMR applications. The relaxation rate depends strongly on the mobility (fluctuations, diffusion) of the microscopic environment and the strength of the applied magnetic field. As a rule of thumb, strong magnetic fields lead to increased sensitivity on fast dynamics while low fields lead to increased sensitivity on slow dynamics. Thus, the relaxation rate as a function of the magnetic field strength is a fingerprint of the microscopic dynamics.
Key Materials science properties are often described in different fields using the terms mobility / dynamics / stiffness / viscosity / rigidity of the sample. These properties are usually dependent on atomic and molecular motion in the sample, which may be measured using time-domain NMR and fast field cycling relaxometry. [ 1 ] [ 2 ]
The apparatus and technological support for the method are constantly being developed. An NMR relaxometer is a device for measuring relaxation times. Laboratory NMR relaxometers for NMR signal registration are available in small sizes. [ 3 ] In NMR relaxometry (NMRR) only one specific NMRR parameter is measured, not the whole spectrum (which is not always needed). This helps to save time and resources and makes it possible to use an NMR relaxometer as a portable express analyzer in different branches of industry, science and technology, environmental protection, etc. [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Relaxometry |
Relaxor ferroelectrics are ferroelectric materials that exhibit high electrostriction . As of 2015, although they have been studied for over fifty years, [ 1 ] the mechanism for this effect is still not completely understood, and is the subject of continuing research. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Examples of relaxor ferroelectrics include:
Relaxor ferroelectric materials find application in high efficiency energy storage and conversion as they have high dielectric constants, orders of magnitude higher than those of conventional ferroelectric materials. Like conventional ferroelectrics, relaxor ferroelectrics show permanent dipole moments within domains. However, these domains are on the nano-length scale, unlike conventional ferroelectric domains that are generally on the micro-length scale, and take less energy to align. Consequently, relaxor ferroelectrics have very high specific capacitance and have thus generated interest in the field of energy storage. [ 10 ] Furthermore, due to their slim hysteresis curve with high saturated polarization and low remnant polarization, relaxor ferroelectrics have high discharge energy density and high discharge rates. BT-BZNT Multilayer Energy Storage Ceramic Capacitors (MLESCC) were experimentally determined to have very high efficiency (>80%) and stable thermal properties over a wide temperature range. [ 12 ]
| https://en.wikipedia.org/wiki/Relaxor_ferroelectric |
The relaxosome is the complex of proteins that facilitates plasmid transfer during bacterial conjugation . The proteins are encoded by the tra operon on a fertility plasmid in the region near the origin of transfer, oriT . The most important of these proteins is relaxase , which is responsible for beginning the conjugation process by cutting at the nic site via transesterification . This nicking results in a DNA–protein complex with the relaxosome bound to a single strand of the plasmid DNA and an exposed 3' hydroxyl group. Relaxase also unwinds the plasmid being conjugated using its helicase activity. The relaxosome interacts with integration host factors within the oriT.
Other genes that code for relaxosome components include TraH , which stabilizes the relaxosome's structural formation, TraI , which encodes the relaxase protein, TraJ , which recruits the complex to the oriT site, TraK , which increases the 'nicked' state of the target plasmid, and TraY , which imparts single-stranded DNA character on the oriT site. TraM plays a particularly important role in relaxase interaction by stimulating 'relaxed' DNA formation.
| https://en.wikipedia.org/wiki/Relaxosome |
A relay attack (also known as the two-thief attack) [ 1 ] in computer security is a type of hacking technique related to man-in-the-middle and replay attacks. In a classic man-in-the-middle attack, an attacker intercepts and manipulates communications between two parties initiated by one of the parties. In a classic relay attack, communication with both parties is initiated by the attacker who then merely relays messages between the two parties without manipulating them or even necessarily reading them.
Peggy works in a high-security building that she accesses using a smart card in her purse. When she approaches the door of the building, the building detects the presence of a smart card and initiates an exchange of messages that constitute a zero-knowledge password proof that the card is Peggy's. The building then allows Peggy to enter.
Mallory wants to break into the building. | https://en.wikipedia.org/wiki/Relay_attack |
In information theory , a relay channel is a probability model of the communication between a sender and a receiver aided by one or more intermediate relay nodes.
A discrete memoryless single-relay channel can be modelled as four finite sets, X1, X2, Y1, and Y, and a conditional probability distribution p(y, y1 | x1, x2) on these sets. The probability distribution of the choice of symbols selected by the encoder and the relay encoder is represented by p(x1, x2).
There exist three main relaying schemes: Decode-and-Forward, Compress-and-Forward and Amplify-and-Forward. The first two schemes were first proposed in the pioneer article by Cover and El-Gamal.
The first upper bound on the capacity of the relay channel is derived in the pioneer article by Cover and El-Gamal and is known as the Cut-set upper bound. This bound says C ≤ max over p(x1, x2) of min( I(X1; Y1, Y | X2), I(X1, X2; Y) ), where C is the capacity of the relay channel. The first term and second term in the minimization above are called the broadcast bound and the multi-access bound, respectively.
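A sketch of how the two terms of the cut-set bound can be evaluated numerically for a toy discrete relay channel; the binary channel law and the uniform input distribution below are invented for illustration, and a true capacity bound would still require maximizing over p(x1, x2):

```python
import numpy as np

def cut_set_terms(p_x1x2, p_yy1_given_x1x2):
    """Return (I(X1; Y1, Y | X2), I(X1, X2; Y)) in bits for a fixed input distribution.

    p_x1x2[x1, x2]                  : joint input distribution
    p_yy1_given_x1x2[x1, x2, y1, y] : channel law p(y, y1 | x1, x2)
    """
    joint = p_x1x2[:, :, None, None] * p_yy1_given_x1x2   # p(x1, x2, y1, y)

    def H(p):                                             # entropy in bits
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # I(X1; Y1, Y | X2) = H(Y1, Y | X2) - H(Y1, Y | X1, X2)
    broadcast = (H(joint.sum(axis=0)) - H(p_x1x2.sum(axis=0))) - (H(joint) - H(p_x1x2))
    # I(X1, X2; Y) = H(Y) - H(Y | X1, X2)
    multi_access = H(joint.sum(axis=(0, 1, 2))) - (H(joint.sum(axis=2)) - H(p_x1x2))
    return broadcast, multi_access

def bsc(p):                                               # binary symmetric channel matrix
    return np.array([[1 - p, p], [p, 1 - p]])

# Toy channel law: relay observes X1 through a BSC(0.1); destination observes X1 XOR X2 through a BSC(0.2)
p_rel, p_dst = bsc(0.1), bsc(0.2)
chan = np.zeros((2, 2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        for y1 in range(2):
            for y in range(2):
                chan[x1, x2, y1, y] = p_rel[x1, y1] * p_dst[x1 ^ x2, y]

p_in = np.full((2, 2), 0.25)                              # uniform, independent inputs
bc, ma = cut_set_terms(p_in, chan)
print(f"broadcast term: {bc:.3f} bit, multi-access term: {ma:.3f} bit, min: {min(bc, ma):.3f} bit")
```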
A relay channel is said to be degraded if y depends on x1 only through y1 and x2, i.e., p(y | x1, x2, y1) = p(y | x2, y1). In the article by Cover and El-Gamal it is shown that the capacity of the degraded relay channel can be achieved using the Decode-and-Forward scheme. It turns out that the capacity in this case is equal to the Cut-set upper bound.
A relay channel is said to be reversely degraded if p(y, y1 | x1, x2) = p(y | x1, x2) p(y1 | y, x2). Cover and El-Gamal proved that the Direct Transmission Lower Bound (wherein the relay is not used) is tight when the relay channel is reversely degraded.
In a relay-without-delay channel (RWD), each transmitted relay symbol can depend on relay's past as well as present received symbols. Relay Without Delay was shown to achieve rates that are outside the Cut-set upper bound. Recently, it was also shown that instantaneous relays (a special case of relay-without-delay) are capable of improving not only the capacity, but also Degrees of Freedom (DoF) of the 2-user interference channel. | https://en.wikipedia.org/wiki/Relay_channel |
A relay network is a broad class of network topology commonly used in wireless networks , where the source and destination are interconnected by means of some nodes . In such a network the source and destination cannot communicate to each other directly because the distance between the source and destination is greater than the transmission range of both of them, hence the need for intermediate node(s) to relay .
A relay network is a type of network used to send information between two devices, for example a server and a computer, that are too far away to send the information to each other directly. Thus the network must send or "relay" the information to different devices, referred to as nodes , that pass on the information to its destination. A well-known example of a relay network is the Internet . A user can view a web page from a server halfway around the world by sending and receiving the information through a series of connected nodes.
In many ways, a relay network resembles a chain of people standing together. One person has a note he needs to pass to the girl at the end of the line. He is the sender, she is the recipient, and the people in between them are the messengers, or the nodes. He passes the message to the first node, or person, who passes it to the second and so on until it reaches the girl and she reads it.
The people might stand in a circle, however, instead of a line. Each person is close enough to reach the person on either side of him and across from him. Together the people represent a network and several messages can now pass around or through the network in different directions at once, as opposed to the straight line that could only run messages in a specific direction. This concept, the way a network is laid out and how it shares data, is known as network topology . Relay networks can use many different topologies, from a line to a ring to a tree shape, to pass along information in the fastest and most efficient way possible.
Often the relay network is complex and branches off in multiple directions to connect many servers and computers. The points where lines from different computers or servers meet form the nodes of the relay network. Two computer lines might run into the same router , for example, making the router a node.
Wireless networks also take advantage of the relay network system. A laptop , for example, might connect to a wireless network which sends and receives information through another network and another until it reaches its destination. Even though not all parts of the network have physical wires, they still connect to other devices that function as the nodes.
This type of network holds several advantages. Information can travel long distances, even if the sender and receiver are far apart. It also speeds up data transmission by choosing the best path to travel between nodes to the receiver's computer. If one node is too busy, the information is simply routed to a different one. Without relay networks, sending an email from one computer to another would require the two computers be hooked directly together before it could work.
An array of adaptive units receives its input signals through a relaying network. [ 1 ]
The TOR network is an example of a relay network: data transferred over TOR passes through multiple relay nodes before it reaches the client node. | https://en.wikipedia.org/wiki/Relay_network |
A release agent (also mold release agent, release coating, or mold release coating) is a chemical used to prevent other materials from bonding to surfaces. Release agents aid in processes involving mold release, die-cast release, plastic release, adhesive release, and tire and web release. [ 1 ] Release agents are one of many additives used in the production of plastics. [ 2 ]
Release agents provide a barrier between a molding surface and the substrate, facilitating separation of the cured part from the mold. Without such a barrier, the substrate would become fused to the mold surface, resulting in difficult clean-up and a dramatic loss in production efficiency. Even when a release agent is used, factors such as irregular application or improper release agent choice may have a dramatic effect on the quality and consistency of the finished product. Many kinds of release agents are used, including waxes , fatty esters , silicones , and metallic soaps . [ 1 ]
Volatile organic compound (VOC) reduction along with the elimination of health and safety concerns surrounding solvent-based release agents were primary drivers in the development of cosolvent mold release. Cosolvent based release agents combine the benefits of a solvent based system and the safety of water-based release agents. [ 3 ]
One of the key attributes of a release agent is its degree of permanence: how long it will last before reapplication is necessary. A semi-permanent release agent does not need to be reapplied for every cycle of a molding operation and even works better when it is not over-applied to the mold surface.
How many releases can be achieved before reapplication is necessary varies by process, material, and application method. In order to achieve multiple releases per application, the semi-permanent release coating generally must be applied to a clean, dry surface free of dirt, rust, grime or previous coatings. This allows the release agent to properly bond to the mold and mold tooling, improving durability and longevity of the coating.
Sacrificial coatings must be applied before every cycle of a molding operation and are therefore considered more labor intensive. Most molders will prefer semi-permanent coatings to sacrificial coatings, especially when molding rubber and plastic parts. Sacrificial coatings contain fewer solid ingredients, and thus do not last as long as semi-permanent coatings.
Release agents may be water or solvent-based and use of either will depend on the personal preference of the molder, plant safety regulations, hazardous materials shipping costs, state, local, or federal regulations, and/or desired drying times of the release coating. Water-based die lubricant (WBD) has been used for about 40 years. All die casting machines have been designed with the use of WBD. [ 4 ] Water-based release coatings generally dry slower than solvent-based release agents but present fewer health and safety concerns. Water-based release agents will be less expensive to ship because of their inherently non-flammable nature and satisfy most plant-safety goals. Solvent-based release coatings dry almost instantly but present serious health and safety concerns. Fumes from solvent-based release agents may be hazardous without proper ventilation of the work area. Most solvents used in release agents are flammable.
Asphalt release agents are chemical products developed and manufactured as alternatives to diesel and solvents commonly used for cleaning equipment associated with hot mix asphaltic concrete (HMAC) production and placement on government and private facilities. The United States Oil Pollution Act of 1980 was used as the foundation to build the current program. The intent of asphalt release agents is to eliminate harmful stripping products that come into contact with bituminous products and strip the asphalt (binding agent) from the aggregates causing potholes, raveling, and other detrimental pavement failures.
In the concrete construction industry, form release agents prevent the adhesion of freshly placed concrete to the forming surface, usually plywood , overlaid plywood, steel or aluminum . In this application, there are two types of release agents available: barrier and reactive.
Barrier release agents prevent adhesion by the development of a physical film or barrier between the forming surface and the concrete.
Reactive release agents are chemically active and work by the process of a chemical reaction between the release agent and the free limes available in fresh concrete. A soapy film is created which prevents adhesion. Because it is a chemically reactive process, there is generally little to no residue or non-reacted product left on the forming surface or concrete which provides for a cleaner process.
Release agents are used to aid in the separation of food from a cooking container after baking or roasting . Traditionally fat or flour have been used, but in industrial food processing other chemicals might be used. The application is called bakery release.
In bakery paper or greaseproof paper release agents like catalyst -cured silicone release coatings may be used.
Mold release agent also can be used in die casting or metal forging process of metal, such as aluminum, aluminum alloy, zinc, zinc alloy, magnesium, etc. Graphite or talc is often used.
In industrial papermaking release agents are used to get slip effect of the paper from the processing equipment. A release agent may be applied on the process rolls (like the yankee dryer ) or in the paper coating .
Some paper types are made with low surface energy release coatings:
Release agents (e.g., magnesium stearate ) are added to powdered and granulated drug compositions, to serve as a lubricant for mold release purposes during tabletting.
Release agents are coated onto some plastic films to prevent adhesives from bonding to the plastic surface. Some release agents, also known as de-molding agent, form oil, parting agent or form releaser, are substances used in molding and casting that aid in the separation of a mould from the material being moulded and reduce imperfections in the moulded surface. Slip Additives are similarly used to prevent thin polyolefin films from adhering to metal surfaces (or each other) during processing, for instance in film blowing .
There are two types of release agents used in the molding of rubber products, both silicone based. The decision on which to use has to do with lubricity and release. Water-diluted silicone is used when rubber slides over a hot mold (sheets or slugs). The silicone keeps the rubber from sticking to the mold but, just as importantly, it lubricates the rubber so it will slide over the hot mold as it is loaded. Diluted silicone typically has to be applied every cycle. Semi-permanent mold release builds a silicone matrix on the mold that becomes a barrier between the rubber and the metal surface of the mold. The matrix is created by the other ingredients in the semi-permanent mold release. Application of semi-permanent mold release varies from every cycle to once daily depending on the compound being molded and the design and quality of the mold. Silicone-based rubber products, however, require a non-silicone-based releasing agent.
Related to release agents are blocking agents . These chemicals aid in keeping collections of polymeric materials from sticking together; they are typically used for stacked sheets or rolls of plastics. They inhibit cold flow . [ 2 ] [ 5 ] | https://en.wikipedia.org/wiki/Release_agent |
A relevant cost (also called avoidable cost or differential cost ) [ 1 ] is a cost that differs between alternatives being considered. [ 2 ] In order for a cost to be a relevant cost it must be:
It is often important for businesses to distinguish between relevant and irrelevant costs when analyzing alternatives because erroneously considering irrelevant costs can lead to unsound business decisions. [ 1 ] Also, ignoring irrelevant data in analysis can save time and effort.
Types of irrelevant costs are: [ 3 ]
A construction firm is in the middle of constructing an office building, having spent $1 million on it so far. It requires an additional $0.5 million to complete construction. Because of a downturn in the real estate market , the finished building will not fetch its original intended price, and is expected to sell for only $1.2 million. If, in deciding whether or not to continue construction, the $1 million sunk cost were incorrectly included in the analysis, the firm may conclude that it should abandon the project because it would be spending $1.5 million for a return of $1.2 million. However, the $1 million is an irrelevant cost, and should be excluded. Continuing the construction actually involves spending $0.5 million for a return of $1.2 million, which makes it the correct course of action.
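The arithmetic of this example can be laid out as a short script; the figures below simply restate the scenario above, and the snippet is illustrative rather than part of the cited material.

```python
# Illustrative sketch of the construction example above (figures from the text).
sunk_cost = 1_000_000        # already spent; irrelevant to the decision
completion_cost = 500_000    # additional spending required (relevant)
expected_sale_price = 1_200_000

# Incorrect analysis: including the sunk cost makes the project look unprofitable.
wrong_view = expected_sale_price - (sunk_cost + completion_cost)   # -300,000

# Correct analysis: only costs that differ between the alternatives are relevant.
correct_view = expected_sale_price - completion_cost               # +700,000

print(f"Including sunk cost: {wrong_view:+,}")
print(f"Relevant costs only: {correct_view:+,}")  # continuing construction is worthwhile
```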
"Relevant cost" is a managerial accounting term for costs that are specific to management's decisions. The concept of relevant costs eliminates unnecessary data that could complicate the decision-making process. | https://en.wikipedia.org/wiki/Relevant_cost |
In engineering , reliability, availability, maintainability and safety ( RAMS ) [ 1 ] [ 2 ] is used to characterize a product or system:
| https://en.wikipedia.org/wiki/Reliability,_availability,_maintainability_and_safety |
Reliability, availability and serviceability ( RAS ), also known as reliability, availability, and maintainability ( RAM ), is a computer hardware engineering term involving reliability engineering , high availability , and serviceability design. The phrase was originally used by IBM as a term to describe the robustness of their mainframe computers . [ 1 ] [ 2 ]
Computers designed with higher levels of RAS have many features that protect data integrity and help them stay available for long periods of time without failure . [ 3 ] This data integrity and uptime is a particular selling point for mainframes and fault-tolerant systems .
While RAS originated as a hardware-oriented [ citation needed ] term, systems thinking has extended the concept of reliability-availability-serviceability to systems in general, including software : [ 4 ]
Note the distinction between reliability and availability: reliability measures the ability of a system to function correctly, including avoiding data corruption, whereas availability measures how often the system is available for use, even though it may not be functioning correctly. For example, a server may run forever and so have ideal availability, but may be unreliable, with frequent data corruption. [ 6 ]
Physical faults can be temporary or permanent:
Transient and intermittent faults can typically be handled by detection and correction by e.g., ECC codes or instruction replay (see below). Permanent faults will lead to uncorrectable errors which can be handled by replacement by duplicate hardware, e.g., processor sparing, or by the passing of the uncorrectable error to high level recovery mechanisms. A successfully corrected intermittent fault can also be reported to the operating system (OS) to provide information for predictive failure analysis .
Example hardware features for improving RAS include the following, listed by subsystem:
Fault-tolerant designs extended the idea by making RAS the defining feature of their computers for applications like stock market exchanges or air traffic control , where system crashes would be catastrophic. Fault-tolerant computers (e.g., see Tandem Computers and Stratus Technologies ), which tend to have duplicate components running in lock-step for reliability, have become less popular due to their high cost. High availability systems , using distributed computing techniques like computer clusters , are often used as cheaper alternatives. [ citation needed ] | https://en.wikipedia.org/wiki/Reliability,_availability_and_serviceability |
Reliability-centered maintenance ( RCM ) is a concept of maintenance planning to ensure that systems continue to do what their users require in their present operating context. [ 1 ] Successful implementation of RCM will lead to an increase in cost-effectiveness, reliability, machine uptime, and a greater understanding of the level of risk that the organization is managing.
It is generally used to achieve improvements in fields such as the establishment of safe minimum levels of maintenance, changes to operating procedures and strategies, and the establishment of capital maintenance regimes and plans.
John Moubray characterized RCM as a process to establish the safe minimum levels of maintenance. [ 2 ] This description echoed statements in the Nowlan and Heap report from United Airlines.
It is defined by the technical standard SAE JA1011, Evaluation Criteria for RCM Processes, which sets out the minimum criteria that any process should meet before it can be called RCM. This starts with the seven questions below, worked through in the order that they are listed:
Reliability centered maintenance is an engineering framework that enables the definition of a complete maintenance regimen. It regards maintenance as the means to maintain the functions a user may require of machinery in a defined operating context. As a discipline it enables machinery stakeholders to monitor, assess, predict and generally understand the working of their physical assets. This is embodied in the initial part of the RCM process which is to identify the operating context of the machinery, and write a Failure Mode Effects and Criticality Analysis (FMECA) . The second part of the analysis is to apply the "RCM logic", which helps determine the appropriate maintenance tasks for the identified failure modes in the FMECA. Once the logic is complete for all elements in the FMECA, the resulting list of maintenance is "packaged", so that the periodicities of the tasks are rationalised to be called up in work packages; it is important not to destroy the applicability of maintenance in this phase. Lastly, RCM is kept live throughout the "in-service" life of machinery, where the effectiveness of the maintenance is kept under constant review and adjusted in light of the experience gained.
RCM can be used to create a cost-effective maintenance strategy to address dominant causes of equipment failure. It is a systematic approach to defining a routine maintenance program composed of cost-effective tasks that preserve important functions.
The important functions (of a piece of equipment) to preserve with routine maintenance are identified, their dominant failure modes and causes determined and the consequences of failure ascertained. Levels of criticality are assigned to the consequences of failure. Some functions are not critical and are left to "run to failure" while other functions must be preserved at all cost. Maintenance tasks are selected that address the dominant failure causes. This process directly addresses maintenance preventable failures. Failures caused by unlikely events, non-predictable acts of nature, etc. will usually receive no action provided their risk (combination of severity and frequency) is trivial (or at least tolerable). When the risk of such failures is very high, RCM encourages (and sometimes mandates) the user to consider changing something which will reduce the risk to a tolerable level.
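The screening idea described above, assigning criticality from severity and frequency and choosing between run-to-failure, a routine task, or a one-time change, can be sketched as follows. This is a deliberately simplified illustration with invented thresholds and categories, not the SAE JA1011 decision logic.

```python
# Simplified illustration of criticality-based task selection (invented thresholds).
def select_strategy(severity: int, frequency: int, detectable_onset: bool) -> str:
    """severity and frequency are scored 1 (negligible) to 5 (extreme)."""
    risk = severity * frequency
    if risk <= 4:                 # trivial or tolerable risk
        return "run to failure"
    if detectable_onset:          # failure gives warning that a routine task can detect
        return "on-condition (predictive) task"
    if risk >= 20:                # intolerable risk with no effective routine task
        return "redesign / one-time change"
    return "scheduled restoration or discard task"

print(select_strategy(severity=2, frequency=1, detectable_onset=False))  # run to failure
print(select_strategy(severity=4, frequency=3, detectable_onset=True))   # on-condition task
```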
The result is a maintenance program that focuses scarce economic resources on those items that would cause the most disruption if they were to fail.
RCM emphasizes the use of predictive maintenance (PdM) techniques in addition to traditional preventive measures.
The term "reliability-centered maintenance" authored by Tom Matteson, Stanley Nowlan and Howard Heap of United Airlines (UAL) to describe a process used to determine the optimum maintenance requirements for aircraft [ 3 ] [ disputed – discuss ] (having left United Airlines to pursue a consulting career a few months before the publication of the final Nowlan-Heap report, Matteson received no authorial credit for the work [ citation needed ] ). The US Department of Defense (DOD) sponsored the authoring of both a textbook (by UAL) and an evaluation report (by Rand Corporation ) on Reliability-Centered Maintenance, both published in 1978. They brought RCM concepts to the attention of a wider audience.
The first generation of jet aircraft had a crash rate that would be considered highly alarming today, and both the Federal Aviation Administration (FAA) and the airlines' senior management felt strong pressure to improve matters. In the early 1960s, with FAA approval the airlines began to conduct a series of intensive engineering studies on in-service aircraft. The studies proved that the fundamental assumption of design engineers and maintenance planners—that every aircraft and every major component thereof (such as its engines) had a specific "lifetime" of reliable service, after which it had to be replaced (or overhauled ) in order to prevent failures—was wrong in nearly every specific example in a complex modern jet airliner.
This was one of many astounding discoveries that have revolutionized the managerial discipline of physical asset management and have been at the base of many developments since this seminal work was published. Among some of the paradigm shifts inspired by RCM were:
Later RCM was defined in the standard SAE JA1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. This sets out the minimum criteria for what is, and for what is not, able to be defined as RCM. The standard is a watershed event in the ongoing evolution of the discipline of physical asset management. Prior to the development of the standard many processes were labeled as RCM even though they were not true to the intentions and the principles in the original report that defined the term publicly.
The RCM process described in the DOD/UAL report recognized three principal risks from equipment failures: threats
Modern RCM gives threats to the environment a separate classification, though most forms manage them in the same way as threats to safety.
RCM offers five principal options among the risk management strategies:
RCM also offers specific criteria to use when selecting a risk management strategy for a system that presents a specific risk when it fails. Some are technical in nature (can the proposed task detect the condition it needs to detect? does the equipment actually wear out, with use?). Others are goal-oriented (is it reasonably likely that the proposed task-and-task-frequency will reduce the risk to a tolerable level?). The criteria are often presented in the form of a decision-logic diagram, though this is not intrinsic to the nature of the process.
After being created by the commercial aviation industry, RCM was adopted by the U.S. military (beginning in the mid-1970s) and by the U.S. commercial nuclear power industry (in the 1980s).
Starting in the late 1980s, an independent initiative led by John Moubray corrected some early flaws in the process, and adapted it for use in the wider industry. [ 2 ] Moubray was also responsible for popularizing the method and for introducing it to much of the industrial community outside of the aviation industry. In the two decades since this approach (called by the author RCM2) was first released, industry has undergone massive change with advances in lean thinking and efficiency methods. At this point in time many methods sprung up that took an approach of reducing the rigour of the RCM approach. The result was the propagation of methods that called themselves RCM, yet had little in common with the original concepts. In some cases these were misleading and inefficient, while in other cases they were even dangerous. Since each initiative is sponsored by one or more consulting firms eager to help clients use it, there is still considerable disagreement about their relative dangers (or merits). [ citation needed ]
The RCM standard ( SAE JA1011 , available from http://www.sae.org ) provides the minimum criteria that processes must comply with if they are to be called RCM.
Although a voluntary standard, it provides a reference for companies looking to implement RCM to ensure they are getting a process, software package or service that is in line with the original report.
The Walt Disney Company introduced RCM to its parks in 1997, led by Paul Pressler and consultants McKinsey & Company , laying off a large number of maintenance workers and saving large amounts of money. Some people blamed the new cost-conscious maintenance culture for some of the Incidents at Disneyland Resort that occurred in the following years. [ 4 ] | https://en.wikipedia.org/wiki/Reliability-centered_maintenance |
In computer networking , a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance , which is the term used by the ITU and ATM Forum , and leads to fault-tolerant messaging .
Reliable protocols typically incur more overhead than unreliable protocols, and as a result, function more slowly and with less scalability. This often is not an issue for unicast protocols, but it may become a problem for reliable multicast protocols.
Transmission Control Protocol (TCP), the main protocol used on the Internet , is a reliable unicast protocol; it provides the abstraction of a reliable byte stream to applications. UDP is an unreliable protocol and is often used in computer games , streaming media or in other situations where speed is an issue and some data loss may be tolerated because of the transitory nature of the data.
Often, a reliable unicast protocol is also connection oriented . For example, TCP is connection oriented, with the virtual-circuit ID consisting of source and destination IP addresses and port numbers. However, some unreliable protocols are connection oriented, such as Asynchronous Transfer Mode and Frame Relay . In addition, some connectionless protocols, such as IEEE 802.11 , are reliable.
Building on the packet switching concepts proposed by Donald Davies , the first communication protocol on the ARPANET was a reliable packet delivery procedure to connect its hosts via the 1822 interface . [ 1 ] [ 2 ] A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected Interface Message Processor (IMP). Once the message was delivered to the destination host, an acknowledgment was delivered to the sending host. If the network could not deliver the message, the IMP would send an error message back to the sending host.
Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission. This lesson was later embraced by the designers of Ethernet .
If a network does not guarantee packet delivery, then it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET indicated that the network itself could not reliably detect all packet delivery failures, and this pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle , which is one of the Internet 's fundamental design principles.
A reliable service is one that notifies the user if delivery fails, while an unreliable one does not notify the user if delivery fails. [ citation needed ] For example, Internet Protocol (IP) provides an unreliable service. Together, Transmission Control Protocol (TCP) and IP provide a reliable service, whereas User Datagram Protocol (UDP) and IP provide an unreliable one.
In the context of distributed protocols, reliability properties specify the guarantees that the protocol provides with respect to the delivery of messages to the intended recipient(s).
An example of a reliability property for a unicast protocol is "at least once", i.e. at least one copy of the message is guaranteed to be delivered to the recipient.
Reliability properties for multicast protocols can be expressed on a per-recipient basis (simple reliability properties), or they may relate the fact of delivery or the order of delivery among the different recipients (strong reliability properties). In the context of multicast protocols, strong reliability properties express the guarantees that the protocol provides with respect to the delivery of messages to different recipients.
An example of a strong reliability property is last copy recall , meaning that as long as at least a single copy of a message remains available at any of the recipients, every other recipient that does not fail eventually also receives a copy. Strong reliability properties such as this one typically require that messages are retransmitted or forwarded among the recipients.
An example of a reliability property stronger than last copy recall is atomicity . The property states that if at least a single copy of a message has been delivered to a recipient, all other recipients will eventually receive a copy of the message. In other words, each message is always delivered to either all or none of the recipients.
One of the most complex strong reliability properties is virtual synchrony .
Reliable messaging is the concept of message passing across an unreliable infrastructure whilst being able to make certain guarantees about the successful transmission of the messages. [ 3 ] For example, that if the message is delivered, it is delivered at most once, or that all messages successfully delivered arrive in a particular order.
Reliable delivery can be contrasted with best-effort delivery , where there is no guarantee that messages will be delivered quickly, in order, or at all.
A reliable delivery protocol can be built on an unreliable protocol. An extremely common example is the layering of Transmission Control Protocol on the Internet Protocol , a combination known as TCP/IP .
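As an illustration of building reliable delivery on an unreliable protocol (a minimal sketch, not a description of TCP), the following stop-and-wait sender retransmits a datagram over UDP until an acknowledgment arrives; the address, timeout, and retry count are arbitrary example values.

```python
import socket

def send_reliably(data: bytes, addr=("127.0.0.1", 9000),
                  timeout=0.5, max_retries=5) -> bool:
    """Stop-and-wait sender: retransmit over unreliable UDP until an ACK arrives.

    This gives "at least once" delivery: the receiver may see duplicates,
    so a real protocol would also carry sequence numbers for deduplication.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(max_retries):
            sock.sendto(data, addr)           # transmit (may be lost)
            try:
                reply, _ = sock.recvfrom(64)  # wait for acknowledgment
                if reply == b"ACK":
                    return True               # delivery confirmed
            except socket.timeout:
                continue                      # no ACK in time: retransmit
        return False                          # notify the caller of failure
    finally:
        sock.close()
```

A matching receiver would simply reply with b"ACK" for every datagram it accepts; adding sequence numbers would let it discard the duplicates that at-least-once delivery can produce.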
Strong reliability properties are offered by group communication systems (GCSs) such as IS-IS , Appia framework , JGroups or QuickSilver Scalable Multicast . The QuickSilver Properties Framework is a flexible platform that allows strong reliability properties to be expressed in a purely declarative manner, using a simple rule-based language, and automatically translated into a hierarchical protocol.
One protocol that implements reliable messaging is WS-ReliableMessaging , which handles reliable delivery of SOAP messages. [ 4 ]
The ATM Service-Specific Coordination Function provides for transparent assured delivery with AAL5 . [ 5 ] [ 6 ] [ 7 ]
IEEE 802.11 attempts to provide reliable service for all traffic. The sending station will resend a frame if the sending station does not receive an ACK frame within a predetermined period of time.
There is, however, a problem with the definition of reliability as "delivery or notification of failure" in real-time computing . In such systems, failure to deliver the real-time data will adversely affect the performance of the systems, and some systems, e.g. safety-critical , safety-involved , and some secure mission-critical systems, must be proved to perform at some specified minimum level. This, in turn, requires that a specified minimum reliability for the delivery of the critical data be met. Therefore, in these cases, it is only the delivery that matters; notification of the failure to deliver does not ameliorate the failure. In hard real-time systems , all data must be delivered by the deadline or it is considered a system failure. In firm real-time systems , late data is still valueless but the system can tolerate some amount of late or missing data. [ 8 ] [ 9 ]
There are a number of protocols that are capable of addressing real-time requirements for reliable delivery and timeliness:
MIL-STD-1553B and STANAG 3910 are well-known examples of such timely and reliable protocols for avionic data buses . MIL-1553 uses a 1 Mbit/s shared media for the transmission of data and the control of these transmissions, and is widely used in federated military avionics systems. [ 10 ] It uses a bus controller (BC) to command the connected remote terminals (RTs) to receive or transmit this data. The BC can, therefore, ensure that there will be no congestion , and transfers are always timely. The MIL-1553 protocol also allows for automatic retries that can still ensure timely delivery and increase the reliability above that of the physical layer. STANAG 3910, also known as EFABus in its use on the Eurofighter Typhoon , is, in effect, a version of MIL-1553 augmented with a 20 Mbit/s shared media bus for data transfers, retaining the 1 Mbit/s shared media bus for control purposes.
The Asynchronous Transfer Mode (ATM), the Avionics Full-Duplex Switched Ethernet (AFDX), and Time Triggered Ethernet (TTEthernet) are examples of packet-switched networks protocols where the timeliness and reliability of data transfers can be assured by the network. AFDX and TTEthernet are also based on IEEE 802.3 Ethernet, though not entirely compatible with it.
ATM uses connection-oriented virtual channels (VCs) which have fully deterministic paths through the network, and usage and network parameter control (UPC/NPC), which are implemented within the network, to limit the traffic on each VC separately. This allows the usage of the shared resources (switch buffers) in the network to be calculated from the parameters of the traffic to be carried in advance, i.e. at system design time. That they are implemented by the network means that these calculations remain valid even when other users of the network behave in unexpected ways, i.e. transmit more data than they are expected to. The calculated usages can then be compared with the capacities of these resources to show that, given the constraints on the routes and the bandwidths of these connections, the resource used for these transfers will never be over-subscribed. These transfers will therefore never be affected by congestion and there will be no losses due to this effect. Then, from the predicted maximum usages of the switch buffers, the maximum delay through the network can also be predicted. However, for the reliability and timeliness to be proved, and for the proofs to be tolerant of faults in and malicious actions by the equipment connected to the network, the calculations of these resource usages cannot be based on any parameters that are not actively enforced by the network, i.e. they cannot be based on what the sources of the traffic are expected to do or on statistical analyses of the traffic characteristics (see network calculus ). [ 11 ]
AFDX uses frequency domain bandwidth allocation and traffic policing , that allows the traffic on each virtual link to be limited so that the requirements for shared resources can be predicted and congestion prevented so it can be proved not to affect the critical data. [ 12 ] However, the techniques for predicting the resource requirements and proving that congestion is prevented are not part of the AFDX standard.
TTEthernet provides the lowest possible latency in transferring data across the network by using time-domain control methods – each time triggered transfer is scheduled at a specific time so that contention for shared resources is controlled and thus the possibility of congestion is eliminated. The switches in the network enforce this timing to provide tolerance of faults in, and malicious actions on the part of, the other connected equipment. However, "synchronized local clocks are the fundamental prerequisite for time-triggered communication". [ 13 ] This is because the sources of critical data will have to have the same view of time as the switch, in order that they can transmit at the correct time and the switch will see this as correct. This also requires that the sequence with which a critical transfer is scheduled has to be predictable to both source and switch. This, in turn, will limit the transmission schedule to a highly deterministic one, e.g. the cyclic executive .
However, low latency in transferring data over the bus or network does not necessarily translate into low transport delays between the application processes that source and sink this data. This is especially true where the transfers over the bus or network are cyclically scheduled (as is commonly the case with MIL-STD-1553B and STANAG 3910, and necessarily so with AFDX and TTEthernet) but the application processes are not synchronized with this schedule.
With both AFDX and TTEthernet, there are additional functions required of the interfaces, e.g. AFDX's Bandwidth Allocation Gap control, and TTEthernet's requirement for very close synchronization of the sources of time-triggered data, that make it difficult to use standard Ethernet interfaces. Other methods for control of the traffic in the network that would allow the use of such standard IEEE 802.3 network interfaces is a subject of current research. [ 14 ] | https://en.wikipedia.org/wiki/Reliability_(computer_networking) |
A reliability block diagram (RBD) is a diagrammatic method for showing how component reliability contributes to the success or failure of a redundant system. RBD is also known as a dependence diagram (DD).
An RBD is drawn as a series of blocks connected in parallel or series configuration . Parallel blocks indicate redundant subsystems or components that contribute to a lower failure rate. Each block represents a component of the system with a failure rate . RBDs will indicate the type of redundancy in the parallel path. [ 1 ] For example, a group of parallel blocks could require two out of three components to succeed for the system to succeed. By contrast, any failure along a series path causes the entire series path to fail. [ 2 ] [ 3 ]
An RBD may be drawn using switches in place of blocks, where a closed switch represents a working component and an open switch represents a failed component. If a path may be found through the network of switches from beginning to end, the system still works.
An RBD may be converted to a success tree or a fault tree depending on how the RBD is defined. A success tree may then be converted to a fault tree or vice versa by applying de Morgan's theorem .
To evaluate an RBD, closed form solutions are available when blocks or components have statistical independence .
When statistical independence is not satisfied, specific formalisms and solution tools such as dynamic RBD have to be considered. [ 4 ]
The first thing one must determine when calculating an RBD is whether to use probabilities or rates. Failure rates are often used in RBDs to determine system failure rates. An RBD should use either probabilities or rates, but not both.
Series probabilities are calculated by multiplying the reliability (a probability) of the series components:
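In its standard form (supplied here for completeness), for n independent components in series the system succeeds only if every component succeeds, so the reliabilities multiply: {\displaystyle R_{\text{series}}=\prod _{i=1}^{n}R_{i}=R_{1}R_{2}\cdots R_{n}} .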
Parallel probabilities are calculated by multiplying the unreliability ( Q ) of the parallel components, where Q = 1 − R , if only one unit needs to function for system success:
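In its standard form (supplied here for completeness), such a parallel system fails only when all n independent units fail, so the unreliabilities multiply: {\displaystyle Q_{\text{parallel}}=\prod _{i=1}^{n}Q_{i}} , giving {\displaystyle R_{\text{parallel}}=1-\prod _{i=1}^{n}(1-R_{i})} .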
For constant failure rates, series rates are calculated by superimposing the Poisson point processes of the series components:
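In its standard form (supplied here for completeness), superposition of the component Poisson processes gives a series failure rate equal to the sum of the component rates: {\displaystyle \lambda _{\text{series}}=\sum _{i=1}^{n}\lambda _{i}} .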
Parallel rates can be evaluated using a number of formulas, including the formula in [ 5 ] for the case where all units are active with equal component failure rates, n − q out of n redundant units are required for success, and μ ≫ λ .
If the components in a parallel system have n different failure rates, a more general formula can be used as follows. For the repairable model, Q = λ / μ as long as μ ≫ λ . | https://en.wikipedia.org/wiki/Reliability_block_diagram |
The reliability theory of aging is an attempt to apply the principles of reliability theory to create a mathematical model of senescence . [ 1 ] The theory was published in Russian by Leonid A. Gavrilov and Natalia S. Gavrilova as Biologiia prodolzhitelʹnosti zhizni in 1986, and in English translation as The Biology of Life Span: A Quantitative Approach in 1991. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
One of the models suggested in the book is based on an analogy with reliability theory. The underlying hypothesis is based on the previously suggested premise that humans are born in a highly defective state. This is then made worse by environmental and mutational damage; exceptionally high redundancy due to the extremely large number of low-reliability components (e.g., cells ) allows the organism to survive for a while. [ 6 ] [ 7 ]
The theory suggests an explanation of two aging phenomena for higher organisms: the Gompertz law of exponential increase in mortality rates with age and the "late-life mortality plateau" (mortality deceleration compared to the Gompertz law at higher ages). [ 7 ]
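For context, the Gompertz law referred to here is usually written {\displaystyle \mu (x)=Ae^{\alpha x}} , where μ(x) is the mortality rate at age x and A and α are positive constants; the late-life mortality plateau is the observed flattening of μ(x) at advanced ages relative to this exponential growth.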
The book criticizes a number of hypotheses known at the time, discusses drawbacks of the hypotheses put forth by the authors themselves, and concludes that regardless of the suggested mathematical models, the underlying biological mechanisms remain unknown. [ 8 ] [ 9 ]
• DNA damage theory of aging | https://en.wikipedia.org/wiki/Reliability_theory_of_aging_and_longevity |
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure. [ 1 ] Reliability is closely related to availability , which is typically described as the ability of a component or system to function at a specified moment or interval of time.
The reliability function is theoretically defined as the probability of success. In practice, it is calculated using different techniques, and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets, or through reliability testing and reliability modeling. Availability , testability , maintainability , and maintenance are often defined as a part of "reliability engineering" in reliability programs. Reliability often plays a key role in the cost-effectiveness of systems.
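As a point of reference (a standard textbook formulation rather than one specific to the sources cited here), the reliability function can be written as {\displaystyle R(t)=\Pr(T>t)} , the probability that the time to failure T exceeds t ; for a constant failure rate λ this reduces to the exponential model {\displaystyle R(t)=e^{-\lambda t}} , often used as a first approximation.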
Reliability engineering deals with the prediction, prevention, and management of high levels of " lifetime " engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not only achieved by mathematics and statistics. [ 2 ] [ 3 ] "Nearly all teaching and literature on the subject emphasize these aspects and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement." [ 4 ] For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massively multivariate , so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to Quality Engineering, safety engineering , and system safety , in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims. [ 5 ]
The word reliability can be traced back to 1816 and is first attested to the poet Samuel Taylor Coleridge . [ 6 ] Before World War II the term was linked mostly to repeatability ; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use of statistical process control was promoted by Dr. Walter A. Shewhart at Bell Labs , [ 7 ] around the time that Waloddi Weibull was working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period.
In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published a seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. The IEEE formed the Reliability Society in 1948. In 1950, the United States Department of Defense formed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment. [ 8 ] This group recommended three main ways of working:
In the 1960s, more emphasis was given to reliability testing on component and system levels. The famous military standard MIL-STD-781 was created at that time. Around this period also the much-used predecessor to military handbook 217 was published by RCA and was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic approaches, as used in the consumer industries, were being used. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as did microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems. Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs).
Kam Wong published a paper questioning the bathtub curve [ 9 ] —see also reliability-centered maintenance . During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding the physics of failure . Failure rates for components kept dropping, but system-level issues became more prominent. Systems thinking has become more and more important. For software, the CMM model ( Capability Maturity Model ) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliable information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real-time using data. New technologies such as micro-electromechanical systems ( MEMS ), handheld GPS , and hand-held devices that combine cell phones and computers all represent challenges to maintaining reliability. Product development time continued to shorten through this decade and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability has become part of everyday life and consumer expectations.
Reliability is the probability of a product performing its intended function under specified operating conditions in a manner that meets or exceeds customer expectations. [ 10 ]
The objectives of reliability engineering, in decreasing order of priority, are: [ 11 ]
The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to know the methods that can be used for analyzing designs and data.
Reliability engineering for " complex systems " requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:
Effective reliability engineering requires understanding of the basics of failure mechanisms for which experience, broad engineering skills and good knowledge from many different special fields of engineering are required, [ 12 ] for example:
Reliability may be defined in the following ways:
Many engineering techniques are used in reliability risk assessments , such as reliability block diagrams, hazard analysis , failure mode and effects analysis (FMEA), [ 13 ] fault tree analysis (FTA), Reliability Centered Maintenance , (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. These analyses must be done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks ( statement of work (SoW) requirements) that will be performed for that specific system.
Consistent with the creation of safety cases , for example per ARP4761 , the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that the use of a component or system will not be associated with unacceptable risk. The basic steps to take [ 14 ] are to:
The risk here is the combination of probability and severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can force false alarms that impede the availability of the system.
In a de minimis definition, the severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable.
The complexity of the technical systems such as improvements of design and materials, planned inspections, fool-proof design, and backup redundancy decreases risk and increases the cost. The risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels.
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes, and executed by following proven standard work practices. [ 15 ]
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separate document . Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability, maintainability , and the resulting system availability , and is developed early during system development and refined over the system's life cycle. It specifies not only what the reliability engineer does, but also the tasks performed by other stakeholders . An effective reliability program plan must be approved by top program management, which is responsible for the allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability & maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers/customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retrofit and complex configuration management costs, and others. The problem of unreliability may be increased also due to the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. Reliability needs to be evaluated and improved related to both availability and the total cost of ownership (TCO) due to the cost of spare parts, maintenance man-hours, transport costs, storage costs, part obsolete risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. The testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. The maintenance strategy can influence the reliability of a system (e.g., by preventive and/or predictive maintenance ), although it can never bring it above the inherent reliability.
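As a point of reference for the availability discussion above (a standard steady-state approximation rather than a formula from the cited sources), availability is often estimated as {\displaystyle A={\frac {\text{MTBF}}{{\text{MTBF}}+{\text{MTTR}}}}} , which makes explicit why large uncertainties in the reliability estimate (MTBF) tend to dominate the availability prediction even when the repair estimate (MTTR) is accurate.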
The reliability plan should clearly provide a strategy for availability control. Whether only availability or also cost of ownership is more important depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs.
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overall availability needs and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to be designed to) should constrain the designers from designing particular unreliable items/constructions/interfaces/systems. Setting only availability, reliability, testability, or maintainability targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding about Reliability Requirements Engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. The creation of proper lower-level requirements is critical. [ 16 ] The provision of only quantitative minimum targets (e.g., Mean Time Between Failure (MTBF) values or failure rates) is not sufficient for different reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems can (often) not be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved for showing compliance with all these probabilistic requirements, and because (3) reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. Compare this problem with the continuous (re-)balancing of, for example, lower-level-system mass requirements in the development of an aircraft, which is already often a big undertaking. Notice that in this case, masses do only differ in terms of only some %, are not a function of time, and the data is non-probabilistic and available already in CAD models. In the case of reliability, the levels of unreliability (failure rates) may change with factors of decades (multiples of 10) as a result of very minor deviations in design, process, or anything else. [ 17 ] The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed—for example: the use of general levels/classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than any other type of requirement. (Quantitative) reliability parameters—in terms of MTBF—are by far the most uncertain design parameters in any design.
Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit their consequences in the first place. Besides aiding some predictions, this effort keeps the engineering effort from being distracted into a kind of accounting work. A design requirement should be precise enough that a designer can "design to" it and can also prove—through analysis or testing—that the requirement has been achieved, and, if possible, within a stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (Finite-Element Stress and Fatigue analysis, Reliability Hazard Analysis, FTA, FMEA, Human Factor Analysis, Functional Hazard Analysis, etc.) or any type of reliability testing. Also, requirements are needed for verification tests (e.g., required overload stresses) and the test time needed. To derive these requirements in an effective manner, a systems engineering -based risk assessment and mitigation logic should be used. Robust hazard log systems must be created that contain detailed information on why and how systems could fail or have failed. Requirements are to be derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are in this way derived from failure analysis or preliminary tests. Understanding this difference, compared with purely quantitative (logistic) requirement specification (e.g., a failure rate / MTBF target), is paramount in the development of successful (complex) systems. [ 18 ]
The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures).
As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting, analysis, and corrective action systems (FRACAS) are a common approach for product/process reliability monitoring.
In practice, most failures can be traced back to some type of human error , for example in:
However, humans are also very good at detecting such failures, correcting them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines. [ 19 ]
Furthermore, human errors in management; the organization of data and information; or the misuse or abuse of items, may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robust systems engineering process with proper planning and execution of the validation and verification tasks. This also includes the careful organization of data and information sharing and creating a "reliability culture", in the same way, that having a "safety culture" is paramount in the development of safety-critical systems.
Reliability prediction combines:
For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) of the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data, with those available often featuring inconsistent filtering of failure (feedback) data, and ignoring statistical errors (which are very high for rare events like reliability related failures). Very clear guidelines must be present to count and compare failures related to different type of root-causes (e.g. manufacturing-, maintenance-, transport-, system-induced or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement.
Performing a proper quantitative reliability prediction for systems may be difficult and very expensive if done by testing. At the individual part level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible with the available testing budget. Unfortunately, however, these tests may lack validity at a system level due to the assumptions made in part-level testing. Some authors have emphasized the importance of initial part- or system-level testing until failure, and of learning from such failures in order to improve the system or part. The general conclusion is drawn that an accurate and absolute prediction – by either field-data comparison or testing – of reliability is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. In the introduction of MIL-STD-785 it is written that reliability prediction should be used with great caution, if not used solely for comparison in trade-off studies.
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability. [ 21 ] DfR is often used as part of an overall Design for Excellence (DfX) strategy.
Reliability design begins with the development of a (system) model . Reliability and availability models use block diagrams and Fault Tree Analysis to provide a graphical means of evaluating the relationships between different parts of the system. These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for example Mean time to repair (MTTR), can also be used as inputs for such models.
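As a hedged illustration of how such model inputs combine (the MTBF and MTTR figures below are assumptions, not handbook data), the following Python sketch converts per-block MTBF/MTTR values into steady-state availabilities and evaluates a simple series arrangement; real block-diagram tools handle far richer structures.

```python
# Illustrative only: combine assumed MTBF/MTTR inputs into block availabilities
# and evaluate a simple series arrangement of blocks.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a repairable block: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(block_availabilities):
    """A series arrangement is up only if every block is up."""
    result = 1.0
    for a in block_availabilities:
        result *= a
    return result

# Hypothetical input data (not taken from any real handbook)
blocks = [
    ("pump",       8_000.0, 12.0),
    ("controller", 50_000.0, 4.0),
    ("valve",      20_000.0, 8.0),
]

avail = [availability(mtbf, mttr) for _, mtbf, mttr in blocks]
print("per-block availability:", [round(a, 5) for a in avail])
print("series system availability:", round(series_availability(avail), 5))
```

Even with made-up inputs, such a model is useful for comparing design alternatives (e.g., which block dominates downtime), which matches the relative, rather than absolute, use of predictions described above.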
The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing.
One of the most important design techniques is redundancy . This means that if one part of the system fails, there is an alternate success path, such as a backup system. The reason this is the ultimate design choice is that high-confidence reliability evidence for new parts or systems is often not available, or is extremely expensive to obtain. By combining redundancy with a high level of failure monitoring and the avoidance of common cause failures, even a system with relatively poor single-channel (part) reliability can be made highly reliable at a system level (up to mission-critical reliability). No reliability testing is required for this. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g. via different suppliers of similar parts) for single independent channels can provide less sensitivity to quality issues (e.g. early childhood failures at a single supplier), allowing very high levels of reliability to be achieved at all moments of the development cycle (from early life to long-term). Redundancy can also be applied in systems engineering by double checking requirements, data, designs, calculations, software, and tests to overcome systematic failures.
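A minimal sketch of why redundancy helps, and why common-cause failures limit the gain, is shown below. The per-channel reliability and the beta factor are assumed values, and the beta-factor treatment is a deliberate simplification used only for illustration.

```python
# Illustrative only: two redundant channels, each with modest reliability R,
# plus a crude beta-factor model for common-cause failures.

def parallel_independent(r_channel: float, n: int = 2) -> float:
    """System survives if at least one of n independent channels survives."""
    return 1.0 - (1.0 - r_channel) ** n

def parallel_with_common_cause(r_channel: float, beta: float) -> float:
    """Beta-factor approximation: a fraction `beta` of single-channel unreliability
    is assumed to strike both channels at once (common cause)."""
    q = 1.0 - r_channel          # single-channel unreliability
    q_ccf = beta * q             # common-cause part defeats the whole system
    q_ind = (1.0 - beta) * q     # independent part can be tolerated by redundancy
    return (1.0 - q_ccf) * (1.0 - q_ind ** 2)

r = 0.95  # assumed single-channel reliability over the mission
print("1 channel:              ", r)
print("2 channels, independent:", round(parallel_independent(r), 6))
print("2 channels, beta = 0.1: ", round(parallel_with_common_cause(r, 0.1), 6))
```

With these assumptions the independent two-channel figure improves markedly over a single channel, while even a modest common-cause fraction removes much of that improvement, which is why dissimilar designs and common-cause avoidance are stressed above.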
Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events / failures. RCM (Reliability Centered Maintenance) programs can be used for this.
For electronic assemblies, there has been an increasing shift towards a different approach called physics of failure . This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modern finite element method (FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design ( Monte Carlo Methods /DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is component derating : i.e. selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expected electric current .
Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include:
Results from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine the optimum balance between reliability requirements and other constraints.
Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surrounding as it relates to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000) [ 23 ] For part/system failures, reliability engineers should concentrate more on the "why and how", rather than predicting "when". Understanding "why" a failure has occurred (e.g. due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used [ 4 ] than quantifying "when" a failure is likely to occur (e.g. via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language and proposition logic, but also based on experience with similar items. This can for example be seen in descriptions of events in fault tree analysis , FMEA analysis, and hazard (tracking) logs. In this sense language and proper grammar (part of qualitative analysis) play an important role in reliability engineering, just as they do in safety engineering or in general within systems engineering .
Correct use of language can also be key to identifying or reducing the risks of human error , which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English or Simplified Technical English , where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g. the instruction "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out one, or to replacing a part with one of a more recent and hopefully improved design).
Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system's availability behavior including effects from logistics issues like spare part provisioning, transport and manpower are fault tree analysis and reliability block diagrams . At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources including testing; prior operational experience; field data; as well as data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives.
For part level predictions, two separate fields of investigation are common:
Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as,
$R(t) = \Pr\{T > t\} = \int_{t}^{\infty} f(x)\,dx$,
where $f(x)$ is the failure probability density function and $t$ is the length of the period of time (which is assumed to start from time zero).
There are a few key elements of this definition:
Quantitative requirements are specified using reliability parameters . The most common reliability parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (this is expressed as a frequency or conditional probability density function (PDF)) or the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (i.e. vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values on lower system levels can be very misleading, especially if they do not specify the associated Failure Modes and Mechanisms (the F in MTTF). [ 17 ]
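Under the common constant-failure-rate assumption (a modelling convenience that is frequently violated in practice), the failure rate is the reciprocal of the MTTF and reliability decays exponentially. The sketch below is illustrative only, and the MTTF value is assumed.

```python
import math

# Constant-failure-rate (exponential) assumption, for illustration only.
mttf_hours = 10_000.0            # assumed MTTF
lam = 1.0 / mttf_hours           # failure rate (failures per hour)

def reliability(t_hours: float) -> float:
    """R(t) = exp(-lambda * t) under the exponential model."""
    return math.exp(-lam * t_hours)

for t in (1_000, 10_000, 30_000):
    print(f"R({t} h) = {reliability(t):.3f}")
# Note: R(MTTF) = exp(-1) ≈ 0.37 — under this model a device has only about a
# 37% chance of surviving to its MTTF, a frequently misunderstood point.
```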
In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used in system safety engineering.
A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags , thermal batteries and missiles . Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, the probability of failure on demand (PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems.
For repairable systems, it is obtained from failure rate, mean-time-to-repair (MTTR), and test interval. This measure may not be unique for a given system as this measure depends on the kind of demand. In addition to system level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statistical confidence intervals .
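As a hedged illustration of how a PFD figure can be obtained from a failure rate and a test interval, the sketch below uses a frequently quoted first-order approximation for a single periodically proof-tested channel, PFDavg ≈ λ·(TI/2 + MTTR); the failure rate, test interval, and repair time are assumptions, not data for any real system.

```python
# Illustrative first-order average probability of failure on demand (PFDavg)
# for one periodically proof-tested channel. All values are made up.

lambda_du = 2.0e-6        # assumed dangerous-undetected failure rate, per hour
test_interval = 8760.0    # proof-test interval (one year), in hours
mttr = 8.0                # assumed mean time to repair, in hours

pfd_avg = lambda_du * (test_interval / 2.0 + mttr)
print(f"PFDavg ≈ {pfd_avg:.2e}")   # ≈ 8.8e-3 with these assumptions
```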
The purpose of reliability testing or reliability verification is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements. The reliability of the product in all environments, such as expected use, transportation, or storage during the specified lifespan, should be considered. [ 10 ] The product is exposed to natural or artificial environmental conditions in order to evaluate its performance under the environmental conditions of actual use, transportation, and storage, and to analyse the degree of influence of environmental factors and their mechanisms of action. [ 24 ] Various environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature changes in the climatic environment, accelerating the product's response to its use environment and verifying whether it reaches the expected quality in R&D , design, and manufacturing. [ 25 ]
Reliability verification is also called reliability testing, which refers to the use of modeling, statistics, and other methods to evaluate the reliability of the product based on the product's life span and expected performance. [ 26 ] Most products on the market require reliability testing, for example automotive products, integrated circuits , heavy machinery used to mine natural resources, and aircraft software. [ 27 ] [ 28 ]
Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels. [ 29 ] (The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk.
With each test both statistical type I and type II errors could be made, depending on sample size, test time, assumptions and the needed discrimination ratio. There is risk of incorrectly rejecting a good design (type I error) and the risk of incorrectly accepting a bad design (type II error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing, design of experiments , and simulations .
The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather and unexpected situations create differences between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented.
Reliability testing is common in the Photonics industry. Examples of reliability tests of lasers are life test and burn-in . These tests consist of the highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics. [ 30 ]
There are many criteria that may be tested, depending on the product or process in question, but five components are most common: [ 31 ] [ 32 ]
The product life span can be split into four different periods for analysis. Useful life is the estimated economic life of the product, defined as the time during which it can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function as specified. Design life is where, during the design of the product, the designer takes into consideration the lifetime of competitive products and customer expectations, and ensures that the product does not result in customer dissatisfaction. [ 34 ] [ 35 ]
Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a system's life-cycle.
Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible.
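One illustrative way to design such a test, assuming exponentially distributed times between failures, is the chi-squared demonstration-test relationship sketched below; the target MTBF, confidence level, and allowed failure counts are example values only.

```python
from scipy.stats import chi2

def required_test_hours(mtbf_target: float, confidence: float, allowed_failures: int) -> float:
    """Total unit-hours of testing needed so that, if no more than `allowed_failures`
    occur, the lower confidence bound on MTBF meets the target
    (exponential/chi-squared demonstration test)."""
    df = 2 * (allowed_failures + 1)
    return mtbf_target * chi2.ppf(confidence, df) / 2.0

# Example: demonstrate MTBF >= 1000 hours at 90% confidence
for r in (0, 1, 2):
    t = required_test_hours(1000.0, 0.90, r)
    print(f"allow {r} failure(s): test for about {t:,.0f} unit-hours")
```

The trade-off described above is visible directly: allowing more failures, or demanding higher confidence, lengthens the test and therefore increases cost for both producer and customer.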
The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements—e.g. cost-effectiveness. Reliability testing may be performed at various levels, such as component, subsystem and system . Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed.
A systematic approach to reliability testing is to first determine the reliability goal, and then perform tests that are linked to performance in order to determine the reliability of the product. [ 36 ] A reliability verification test in modern industries should clearly determine how the tests relate to the product's overall reliability performance and how individual tests impact the warranty cost and customer satisfaction. [ 37 ]
The purpose of accelerated life testing (ALT test) is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field—but in much less time.
The main objective of an accelerated test is either of the following:
An accelerated testing program can be broken down into the following steps:
Common ways to determine a life stress relationship are:
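One widely used life-stress relationship for temperature-driven failure mechanisms is the Arrhenius model; the sketch below computes its acceleration factor, with the activation energy and temperatures chosen purely as assumptions for illustration.

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV per kelvin

def arrhenius_acceleration_factor(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], temperatures in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Assumed activation energy of 0.7 eV, use at 40 °C, stress at 125 °C
af = arrhenius_acceleration_factor(0.7, 40.0, 125.0)
print(f"acceleration factor ≈ {af:.0f}x")   # each stress hour stands in for roughly this many use hours
```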
Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present day systems. Therefore, software reliability has gained prominence within the field of system reliability.
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state.
However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.
Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure ( Shooman 1987), (Musa 2005), (Denney 2005).
As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences . There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews , unit tests , configuration management , software metrics and software models to be used during software development.
A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have dramatic impact on overall defect rates.
Software testing is an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individual units , through integration and full-up system testing . In all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such as code coverage .
The Software Engineering Institute's capability maturity model is a common means of assessing the overall software development process for reliability and quality purposes.
Structural reliability or the reliability of structures is the application of reliability theory to the behavior of structures . It is used in both the design and maintenance of different types of structures including concrete and steel structures. [ 38 ] [ 39 ] In structural reliability studies both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated.
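A minimal sketch of this idea, under the simplifying assumption that load and resistance are independent normal variables (real studies use richer models and methods such as FORM/SORM or simulation), is shown below; the means and standard deviations are assumed values.

```python
import math

def failure_probability_normal(mu_r, sigma_r, mu_s, sigma_s):
    """P(R < S) for independent normal resistance R and load S,
    via the reliability index beta."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # standard normal tail, Phi(-beta)
    return beta, pf

# Assumed values: resistance 500 ± 50, load 300 ± 60 (consistent units)
beta, pf = failure_probability_normal(500.0, 50.0, 300.0, 60.0)
print(f"reliability index beta ≈ {beta:.2f}, probability of failure ≈ {pf:.1e}")
```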
Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system.
Reliability engineering is concerned with overall minimisation of failures that could lead to financial losses for the responsible entity, whereas safety engineering focuses on minimising a specific set of failure types that in general could lead to loss of life, injury or damage to equipment.
Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production. [ 40 ]
Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents including: loss of life; destruction of equipment; or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it, however, has less of a focus on direct costs, and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g. nuclear, aerospace, defense, rail and oil industries). [ 40 ]
Safety can be increased using a 2oo2 cross-checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If the two redundant elements disagree, the more permissive element will maximize availability. A 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g. 2oo3 voting logic ) where multiple redundant elements must agree on a potentially unsafe action before it is performed. This increases both availability and safety at a system level. This is common practice in aerospace systems that need continued availability and do not have a fail-safe mode. For example, aircraft may use triple modular redundancy for flight computers and control surfaces (including occasionally different modes of operation e.g. electrical/mechanical/hydraulic) as these need to always be operational, due to the fact that there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying.
The above example of a 2oo3 fault tolerant system increases both mission reliability as well as safety. However, the "basic" reliability of the system will in this case still be lower than a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure, but do result in additional cost due to: maintenance repair actions; logistics; spare parts etc. For example, replacement or repair of 1 faulty channel in a 2oo3 voting system, (the system is still operating, although with one failed channel it has actually become a 2oo2 system) is contributing to basic unreliability but not mission unreliability. As an example, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to the basic unreliability levels).
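The differences between these architectures can be illustrated with the textbook formulas for identical, independent channels of per-mission reliability R; the sketch below ignores common-cause failures, repair, and voter reliability, so it overstates what real systems achieve.

```python
# Illustrative comparison of voting architectures for independent channels
# with identical per-channel mission reliability R (no common cause, no repair).

def r_1oo1(r): return r
def r_2oo2(r): return r * r                      # both channels must work
def r_1oo2(r): return 1 - (1 - r) ** 2           # at least one channel works
def r_2oo3(r): return 3 * r**2 - 2 * r**3        # at least two of three channels work

r = 0.95  # assumed per-channel reliability
for name, fn in [("1oo1", r_1oo1), ("2oo2", r_2oo2), ("1oo2", r_1oo2), ("2oo3", r_2oo3)]:
    print(f"{name}: {fn(r):.4f}")
# 2oo2 trades availability for safety, 1oo2 the reverse; 2oo3 improves both
# relative to 1oo1, at the cost of more hardware (and thus lower "basic" reliability).
```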
When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability.
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning. Six Sigma has its roots in statistical control in quality of manufacturing. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high quality outputs with minimum cost and time. [ 41 ]
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications. [ 42 ] Manufactured goods quality often focuses on the number of warranty claims during the warranty period.
Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects i.e. where manufacturing mistakes escaped final Quality Control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. Theoretically, all items will fail over an infinite period of time. [ 43 ] Defects that appear over time are referred to as reliability fallout. To describe reliability fallout a probability model that describes the fraction fallout over time is needed. This is known as the life distribution model. [ 42 ] Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. due to human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production.
Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product.
In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (see Reliability engineering vs Safety engineering above).
Note: A "defect" in six-sigma/quality literature is not the same as a "failure" (a field failure, e.g. a fractured item) in reliability. A six-sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is.
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematic root cause analysis that identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization.
Some of the most common methods to apply to a reliability operational assessment are failure reporting, analysis, and corrective action systems (FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics.
It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself.
Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom.
The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing.
Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure . For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.
There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability , quality , safety, human factors , logistics , etc. In such case, the reliability engineer reports to the product assurance manager or specialty engineering manager.
In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that the system reliability, which is often expensive and time-consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day, but is actually employed and paid by a separate organization within the company.
Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team .
Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer must be registered as a professional engineer by the state or province by law, but not all reliability professionals are engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD), [ 44 ] the IEEE Reliability Society , the American Society for Quality (ASQ), [ 45 ] and the Society of Reliability Engineers (SRE). [ 46 ]
SAE JA1000/1 Reliability Program Standard Implementation Guide ( http://standards.sae.org/ja1000/1_199903/ )
In the UK, there are more up to date standards maintained under the sponsorship of UK MOD as Defence Standards. The relevant Standards include:
DEF STAN 00-40 Reliability and Maintainability (R&M)
DEF STAN 00-42 Reliability and Maintainability Assurance Guides
DEF STAN 00-43 Reliability and Maintainability Assurance Activity
DEF STAN 00-44 Reliability and Maintainability Data Collection and Classification
DEF STAN 00-45 Issue 1: Reliability Centered Maintenance
DEF STAN 00-49 Issue 1: Reliability and Maintainability MOD Guide to Terminology Definitions
These can be obtained from DSTAN . There are also many commercial standards, produced by many organisations including the SAE, MSG, ARP, and IEE. | https://en.wikipedia.org/wiki/Reliability_verification |
Reliable Data Transfer is a topic in computer networking concerning the transfer of data across unreliable channels. Unreliability is one of the drawbacks of packet switched networks such as the modern internet, as packet loss can occur for a variety of reasons, and delivery of packets is not guaranteed to happen in the order that the packets were sent. Therefore, in order to create long-term data streams over the internet, techniques have been developed to provide reliability, which are generally implemented in the Transport layer of the internet protocol suite.
In instructional materials, the topic is often presented in the form of theoretical example protocols which are themselves referred to as "RDT", in order to introduce students to the problems and solutions encountered in Transport layer protocols such as the Transmission Control Protocol . [ 1 ] [ 2 ] [ 3 ] [ 4 ] These sources often describe a pseudo- API and include Finite-state machine diagrams to illustrate how such a protocol might be implemented, as well as a version history. These details are generally consistent between sources, yet are often left uncited, so the origin of this theoretical RDT protocol is unclear.
Sources that describe an example RDT protocol often provide a "version history" to illustrate the development of modern Transport layer techniques, generally resembling the below:
With Reliable Data Transfer 1.0, data can only be transferred over a reliable data channel. It is the simplest of the Reliable Data Transfer protocols in terms of algorithmic processing.
Reliable Data Transfer 2.0 supports reliable data transfer over unreliable data channels. It uses a checksum to detect errors. The receiver sends an acknowledgement message if the received message is intact; if it is corrupted, the receiver sends a negative acknowledgement message and requests the data again.
Reliable Data Transfer 2.1 also supports reliable data transfer over unreliable data channels and uses a checksum to detect errors. However, to prevent duplicate messages, it adds a sequence number to each packet . The receiver sends an acknowledgement message with the corresponding sequence ID if the data is intact, and sends a negative acknowledgement message with the corresponding sequence ID, asking the sender to send again, if the message is corrupted.
Reliable Data Transfer 3.0, like earlier versions of the protocol, supports reliable data transfer over unreliable data channels, uses checksums to check for errors, and adds sequence numbers to data packets. Additionally, it includes a countdown timer to detect packet loss. If the sender does not receive an acknowledgement for specific data within a certain duration, it considers the packet lost and sends it again.
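The ideas above can be made concrete with a small, self-contained sketch of an RDT 3.0-style stop-and-wait exchange: a checksum, an alternating sequence bit, and retransmission on loss or corruption. It runs over an in-memory toy channel, so every name, constant, and data structure here is an illustrative assumption rather than part of any standardized protocol.

```python
import random
import zlib

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

def lossy_channel(packet, loss=0.2, corrupt=0.2):
    """Toy unreliable channel: may drop the packet or corrupt its payload."""
    if random.random() < loss:
        return None
    if random.random() < corrupt and packet["payload"]:
        packet = dict(packet, payload=b"\x00" + packet["payload"][1:])
    return packet

def receiver(packet, expected_seq):
    """Checks the checksum and sequence bit; returns the sequence number it ACKs."""
    if packet is None:
        return None                               # nothing arrived ("timeout")
    ok = packet["checksum"] == checksum(packet["payload"])
    if ok and packet["seq"] == expected_seq:
        print(f"receiver delivered seq={packet['seq']}: {packet['payload']!r}")
        return packet["seq"]                      # ACK the new data
    return expected_seq ^ 1                       # duplicate ACK for the last good packet

def sender(messages):
    seq = 0
    for payload in messages:
        packet = {"seq": seq, "payload": payload, "checksum": checksum(payload)}
        while True:                               # stop-and-wait with retransmission
            ack = receiver(lossy_channel(dict(packet)), expected_seq=seq)
            if ack == seq:
                break                             # correct ACK received: move on
            # None (loss/"timeout") or a duplicate ACK: retransmit the same packet
        seq ^= 1                                  # alternate the 0/1 sequence bit

sender([b"hello", b"world"])
```

Stop-and-wait delivers every message in order but keeps only one packet in flight at a time; real Transport layer protocols such as TCP pipeline many packets to use the available bandwidth.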
| https://en.wikipedia.org/wiki/Reliable_Data_Transfer
Reliable Internet Stream Transport ( RIST ) is an open-source , open-specification transport protocol designed for reliable transmission of video over lossy networks (including the Internet ) with low latency and high quality. It is currently under development in the Video Services Forum 's "RIST Activity Group." [ 1 ]
RIST is intended as a more reliable successor to Secure Reliable Transport , and as an open alternative to proprietary commercial options such as ActionStreamer, Zixi, VideoFlow, QVidium, and DVEO (Dozer).
Technically, RIST seeks to provide reliable, high performance media transport by using RTP / UDP at the transport layer to avoid the limitations of TCP . Reliability is achieved by using NACK-based retransmissions ( ARQ ). SMPTE-2022 Forward Error Correction can be combined with RIST but is known to be significantly less effective than ARQ. [ 2 ]
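To illustrate the NACK-based retransmission idea in general terms (this is not RIST's actual wire format, which is defined in the VSF specifications), the receiver-side sketch below watches RTP-style sequence numbers, records the gaps it detects, and reports which packets it would request again; all names and constants are assumptions for illustration.

```python
class NackReceiver:
    """Toy receiver that tracks 16-bit sequence numbers, detects gaps,
    and reports which packets it would request again (NACK). Illustrative only."""

    def __init__(self):
        self.expected = None     # next sequence number expected in order
        self.missing = set()     # gaps still waiting to be filled

    def on_packet(self, seq: int):
        if self.expected is None:                 # first packet seen
            self.expected = (seq + 1) % 65536
            return []
        nacks = []
        if seq == self.expected:                  # in-order arrival
            self.expected = (seq + 1) % 65536
        elif seq in self.missing:                 # a retransmission filled a gap
            self.missing.discard(seq)
        else:                                     # jump ahead: everything in between is missing
            s = self.expected
            while s != seq:
                self.missing.add(s)
                nacks.append(s)
                s = (s + 1) % 65536
            self.expected = (seq + 1) % 65536
        return nacks                              # sequence numbers to request again

rx = NackReceiver()
for seq in [10, 11, 14, 12, 15]:                  # packets 12 and 13 are late or lost
    nacks = rx.on_packet(seq)
    if nacks:
        print(f"received {seq}, would send NACK for {nacks}")
print("still missing:", sorted(rx.missing))
```

Because only the receiver knows which packets failed to arrive, receiver-driven NACKs add retransmission traffic only when losses actually occur, which is one reason this approach suits low-latency video transport.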
RIST Simple Profile [ 3 ] was published in October 2018 and includes the following features:
The RIST AG is working on an update to RIST Simple Profile that adds link probing to allow for dynamic ARQ protection.
RIST Main Profile [ 4 ] was published in March 2020 and adds the following features to Simple Profile:
The RIST AG has defined a number of Main Profile compliance levels. Approval of this document is expected soon.
RIST Advanced Profile was published in 2022 and updated in 2023.
VideoFlow has provided IPR that covers both Simple Profile and Main Profile under RAND-Z terms. | https://en.wikipedia.org/wiki/Reliable_Internet_Stream_Transport |
In computer networking , a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance , which is the term used by the ITU and ATM Forum , and leads to fault-tolerant messaging .
Reliable protocols typically incur more overhead than unreliable protocols, and as a result, function more slowly and with less scalability. This often is not an issue for unicast protocols, but it may become a problem for reliable multicast protocols.
Transmission Control Protocol (TCP), the main protocol used on the Internet , is a reliable unicast protocol; it provides the abstraction of a reliable byte stream to applications. UDP is an unreliable protocol and is often used in computer games , streaming media or in other situations where speed is an issue and some data loss may be tolerated because of the transitory nature of the data.
Often, a reliable unicast protocol is also connection oriented . For example, TCP is connection oriented, with the virtual-circuit ID consisting of source and destination IP addresses and port numbers. However, some unreliable protocols are connection oriented, such as Asynchronous Transfer Mode and Frame Relay . In addition, some connectionless protocols, such as IEEE 802.11 , are reliable.
Building on the packet switching concepts proposed by Donald Davies , the first communication protocol on the ARPANET was a reliable packet delivery procedure to connect its hosts via the 1822 interface . [ 1 ] [ 2 ] A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected Interface Message Processor (IMP). Once the message was delivered to the destination host, an acknowledgment was delivered to the sending host. If the network could not deliver the message, the IMP would send an error message back to the sending host.
Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission. This lesson was later embraced by the designers of Ethernet .
If a network does not guarantee packet delivery, then it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET indicated that the network itself could not reliably detect all packet delivery failures, and this pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle , which is one of the Internet 's fundamental design principles.
A reliable service is one that notifies the user if delivery fails, while an unreliable one does not notify the user if delivery fails. [ citation needed ] For example, Internet Protocol (IP) provides an unreliable service. Together, Transmission Control Protocol (TCP) and IP provide a reliable service, whereas User Datagram Protocol (UDP) and IP provide an unreliable one.
In the context of distributed protocols, reliability properties specify the guarantees that the protocol provides with respect to the delivery of messages to the intended recipient(s).
An example of a reliability property for a unicast protocol is "at least once", i.e. at least one copy of the message is guaranteed to be delivered to the recipient.
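A hedged sketch of what "at least once" means in practice is given below: the sender keeps retransmitting until a delivery is acknowledged, which guarantees delivery if the channel ever succeeds, but can produce duplicates that the recipient must tolerate or deduplicate. The channel model and names are illustrative assumptions.

```python
import random

def unreliable_deliver(message, inbox, loss=0.5):
    """Toy channel: delivers the message (and its acknowledgement) with probability 1 - loss."""
    if random.random() >= loss:
        inbox.append(message)
        return True          # acknowledgement reaches the sender
    return False

def send_at_least_once(message, inbox, max_attempts=20):
    for attempt in range(1, max_attempts + 1):
        if unreliable_deliver(message, inbox):
            return attempt   # number of attempts actually needed
    raise RuntimeError("channel never succeeded within max_attempts")

inbox = []
attempts = send_at_least_once({"id": 42, "body": "hello"}, inbox)
print(f"delivered after {attempts} attempt(s); inbox size = {len(inbox)}")
# Note: if an acknowledgement (rather than the message) were lost, the sender would
# resend and the recipient could receive duplicates — hence "at least once".
```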
Reliability properties for multicast protocols can be expressed on a per-recipient basis (simple reliability properties), or they may relate the fact of delivery or the order of delivery among the different recipients (strong reliability properties). In the context of multicast protocols, strong reliability properties express the guarantees that the protocol provides with respect to the delivery of messages to different recipients.
An example of a strong reliability property is last copy recall , meaning that as long as at least a single copy of a message remains available at any of the recipients, every other recipient that does not fail eventually also receives a copy. Strong reliability properties such as this one typically require that messages are retransmitted or forwarded among the recipients.
An example of a reliability property stronger than last copy recall is atomicity . The property states that if at least a single copy of a message has been delivered to a recipient, all other recipients will eventually receive a copy of the message. In other words, each message is always delivered to either all or none of the recipients.
One of the most complex strong reliability properties is virtual synchrony .
Reliable messaging is the concept of message passing across an unreliable infrastructure whilst being able to make certain guarantees about the successful transmission of the messages. [ 3 ] For example, that if the message is delivered, it is delivered at most once, or that all messages successfully delivered arrive in a particular order.
Reliable delivery can be contrasted with best-effort delivery , where there is no guarantee that messages will be delivered quickly, in order, or at all.
A reliable delivery protocol can be built on an unreliable protocol. An extremely common example is the layering of Transmission Control Protocol on the Internet Protocol , a combination known as TCP/IP .
Strong reliability properties are offered by group communication systems (GCSs) such as the Isis Toolkit , the Appia framework , JGroups or QuickSilver Scalable Multicast . The QuickSilver Properties Framework is a flexible platform that allows strong reliability properties to be expressed in a purely declarative manner, using a simple rule-based language, and automatically translated into a hierarchical protocol.
One protocol that implements reliable messaging is WS-ReliableMessaging , which handles reliable delivery of SOAP messages. [ 4 ]
The ATM Service-Specific Coordination Function provides for transparent assured delivery with AAL5 . [ 5 ] [ 6 ] [ 7 ]
IEEE 802.11 attempts to provide reliable service for all traffic. The sending station will resend a frame if the sending station does not receive an ACK frame within a predetermined period of time.
There is, however, a problem with the definition of reliability as "delivery or notification of failure" in real-time computing . In such systems, failure to deliver the real-time data will adversely affect the performance of the systems, and some systems, e.g. safety-critical , safety-involved , and some secure mission-critical systems, must be proved to perform at some specified minimum level. This, in turn, requires that a specified minimum reliability for the delivery of the critical data be met. Therefore, in these cases, it is only the delivery that matters; notification of the failure to deliver does not ameliorate the failure. In hard real-time systems , all data must be delivered by the deadline or it is considered a system failure. In firm real-time systems , late data is still valueless but the system can tolerate some amount of late or missing data. [ 8 ] [ 9 ]
There are a number of protocols that are capable of addressing real-time requirements for reliable delivery and timeliness:
MIL-STD-1553B and STANAG 3910 are well-known examples of such timely and reliable protocols for avionic data buses . MIL-1553 uses a 1 Mbit/s shared media for the transmission of data and the control of these transmissions, and is widely used in federated military avionics systems. [ 10 ] It uses a bus controller (BC) to command the connected remote terminals (RTs) to receive or transmit this data. The BC can, therefore, ensure that there will be no congestion , and transfers are always timely. The MIL-1553 protocol also allows for automatic retries that can still ensure timely delivery and increase the reliability above that of the physical layer. STANAG 3910, also known as EFABus in its use on the Eurofighter Typhoon , is, in effect, a version of MIL-1553 augmented with a 20 Mbit/s shared media bus for data transfers, retaining the 1 Mbit/s shared media bus for control purposes.
The Asynchronous Transfer Mode (ATM), the Avionics Full-Duplex Switched Ethernet (AFDX), and Time Triggered Ethernet (TTEthernet) are examples of packet-switched network protocols where the timeliness and reliability of data transfers can be assured by the network. AFDX and TTEthernet are also based on IEEE 802.3 Ethernet, though not entirely compatible with it.
ATM uses connection-oriented virtual channels (VCs) which have fully deterministic paths through the network, and usage and network parameter control (UPC/NPC), which are implemented within the network, to limit the traffic on each VC separately. This allows the usage of the shared resources (switch buffers) in the network to be calculated from the parameters of the traffic to be carried in advance, i.e. at system design time. That they are implemented by the network means that these calculations remain valid even when other users of the network behave in unexpected ways, i.e. transmit more data than they are expected to. The calculated usages can then be compared with the capacities of these resources to show that, given the constraints on the routes and the bandwidths of these connections, the resource used for these transfers will never be over-subscribed. These transfers will therefore never be affected by congestion and there will be no losses due to this effect. Then, from the predicted maximum usages of the switch buffers, the maximum delay through the network can also be predicted. However, for the reliability and timeliness to be proved, and for the proofs to be tolerant of faults in and malicious actions by the equipment connected to the network, the calculations of these resource usages cannot be based on any parameters that are not actively enforced by the network, i.e. they cannot be based on what the sources of the traffic are expected to do or on statistical analyses of the traffic characteristics (see network calculus ). [ 11 ]
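Usage parameter control of this kind can be pictured as a per-connection policer. The continuous-state leaky bucket below is only an illustrative simplification of ATM's actual Generic Cell Rate Algorithm, and its parameters are example values.

```python
# Simplified continuous-state leaky-bucket policer (an illustration in the
# spirit of ATM's GCRA, not a standards-conformant implementation).

class LeakyBucketPolicer:
    def __init__(self, cells_per_second, burst_tolerance_cells):
        self.increment = 1.0 / cells_per_second              # nominal cell spacing (s)
        self.limit = burst_tolerance_cells * self.increment  # allowed burstiness (s)
        self.bucket = 0.0               # current bucket fill, in seconds of credit used
        self.last_conforming = 0.0

    def conforms(self, arrival_time):
        # Drain the bucket for the time elapsed since the last conforming cell.
        drained = max(0.0, self.bucket - (arrival_time - self.last_conforming))
        if drained > self.limit:
            return False                # cell violates the traffic contract
        self.bucket = drained + self.increment
        self.last_conforming = arrival_time
        return True

policer = LeakyBucketPolicer(cells_per_second=10, burst_tolerance_cells=2)
print([policer.conforms(t) for t in (0.0, 0.0, 0.0, 0.0, 0.5)])
# [True, True, True, False, True] - a burst of three passes, the fourth does not
```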
AFDX uses frequency domain bandwidth allocation and traffic policing , which allow the traffic on each virtual link to be limited so that the requirements for shared resources can be predicted, congestion prevented, and the critical data proved to be unaffected. [ 12 ] However, the techniques for predicting the resource requirements and proving that congestion is prevented are not part of the AFDX standard.
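In the same spirit, the per-virtual-link limit can be pictured as a minimum-spacing check between successive frames. The sketch below is illustrative only; the gap and jitter values are invented and are not taken from the AFDX standard.

```python
# Illustrative minimum-spacing filter in the spirit of a Bandwidth Allocation
# Gap: frames on one virtual link that arrive closer together than the gap
# (minus an allowed jitter) are treated as non-conforming and dropped.

def conforming_frames(arrival_times_ms, gap_ms=2.0, jitter_ms=0.0):
    accepted = []
    last = None
    for t in sorted(arrival_times_ms):
        if last is None or t - last >= gap_ms - jitter_ms:
            accepted.append(t)
            last = t
    return accepted

print(conforming_frames([0.0, 0.5, 2.1, 3.0, 4.2]))   # [0.0, 2.1, 4.2]
```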
TTEthernet provides the lowest possible latency in transferring data across the network by using time-domain control methods – each time-triggered transfer is scheduled at a specific time so that contention for shared resources is controlled and thus the possibility of congestion is eliminated. The switches in the network enforce this timing to provide tolerance of faults in, and malicious actions on the part of, the other connected equipment. However, "synchronized local clocks are the fundamental prerequisite for time-triggered communication". [ 13 ] This is because the sources of critical data need to have the same view of time as the switch, so that they transmit at the correct time and the switch sees the transfer as correct. This also requires that the sequence in which a critical transfer is scheduled be predictable to both source and switch. This, in turn, limits the transmission schedule to a highly deterministic one, e.g. the cyclic executive .
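The time-triggered idea itself can be illustrated with a toy static schedule in which every transfer has a fixed offset within a repeating major cycle; the cycle length, offsets and message names below are invented for the example.

```python
# Toy time-triggered schedule: every message is sent at a fixed offset within a
# repeating major cycle, so senders sharing the schedule (and a synchronized
# clock) never contend for the same slot.

MAJOR_CYCLE_MS = 10.0
SCHEDULE = [                      # (offset within the cycle in ms, message name)
    (0.0, "flight_controls"),
    (2.5, "air_data"),
    (5.0, "engine_status"),
    (7.5, "maintenance"),
]

def transfers_between(start_ms, end_ms):
    """Return the (time, message) transfers scheduled in [start_ms, end_ms)."""
    out = []
    cycle = int(start_ms // MAJOR_CYCLE_MS)
    while cycle * MAJOR_CYCLE_MS < end_ms:
        for offset, name in SCHEDULE:
            t = cycle * MAJOR_CYCLE_MS + offset
            if start_ms <= t < end_ms:
                out.append((t, name))
        cycle += 1
    return out

print(transfers_between(0.0, 12.0))
# [(0.0, 'flight_controls'), (2.5, 'air_data'), (5.0, 'engine_status'),
#  (7.5, 'maintenance'), (10.0, 'flight_controls')]
```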
However, low latency in transferring data over the bus or network does not necessarily translate into low transport delays between the application processes that source and sink this data. This is especially true where the transfers over the bus or network are cyclically scheduled (as is commonly the case with MIL-STD-1553B and STANAG 3910, and necessarily so with AFDX and TTEthernet) but the application processes are not synchronized with this schedule.
With both AFDX and TTEthernet, there are additional functions required of the interfaces, e.g. AFDX's Bandwidth Allocation Gap control, and TTEthernet's requirement for very close synchronization of the sources of time-triggered data, that make it difficult to use standard Ethernet interfaces. Other methods for control of the traffic in the network that would allow the use of standard IEEE 802.3 network interfaces are a subject of current research. [ 14 ] | https://en.wikipedia.org/wiki/Reliable_messaging |
Reliance Institute of Life Sciences (RILS) , established by Dhirubhai Ambani Foundation, is an institution of higher education in various fields of life sciences and related technologies. [ 1 ] | https://en.wikipedia.org/wiki/Reliance_Institute_of_Life_Sciences |
A relict is a surviving remnant of a natural phenomenon.
A relict (or relic) is an organism that at an earlier time was abundant in a large area but now occurs at only one or a few small areas.
In geology , a relict is a structure or mineral from a parent rock that did not undergo metamorphosis when the surrounding rock did, or a rock that survived a destructive geologic process.
In geomorphology , a relict landform is a landform formed by either erosive or constructive surficial processes that are no longer active as they were in the past.
A glacial relict is a cold-adapted organism that is a remnant of a larger distribution that existed in the ice ages .
As revealed by DNA testing , a relict population is an ancient people in an area, who have been largely supplanted by a later group of migrants and their descendants.
In various places around the world, minority ethnic groups represent lineages of ancient human migrations in places now occupied by more populous ethnic groups, whose ancestors arrived later. For example, the first human groups to inhabit the Caribbean islands were hunter-gatherer tribes from South and Central America. Genetic testing of natives of Cuba show that, in late pre-Columbian times, the island was home to agriculturalists of Taino ethnicity. In addition, a relict population of the original hunter-gatherers remained in western Cuba as the Ciboney people . [ 1 ] | https://en.wikipedia.org/wiki/Relict |
In biogeography and paleontology , a relict is a population or taxon of organisms that was more widespread or more diverse in the past. A relictual population is a population currently inhabiting a restricted area whose range was far wider during a previous geologic epoch . Similarly, a relictual taxon is a taxon (e.g. species or other lineage) which is the sole surviving representative of a formerly diverse group. [ 1 ]
A relict (or relic) plant or animal is a taxon that persists as a remnant of what was once a diverse and widespread population. Relictualism occurs when a widespread habitat or range changes and a small area becomes cut off from the whole. A subset of the population is then confined to the available hospitable area, and survives there while the broader population either shrinks or evolves divergently . This phenomenon differs from endemism in that the range of the population was not always restricted to the local region. In other words, the species or group did not necessarily arise in that small area, but rather was stranded, or insularized, by changes over time. The agent of change could be anything from competition from other organisms to continental drift or climate change such as an ice age .
When a relict is representative of taxa found in the fossil record, and yet is still living, such an organism is sometimes referred to as a living fossil . However, a relict need not be currently living. An evolutionary relict is any organism that was characteristic of the flora or fauna of one age and that persisted into a later age, with the later age being characterized by newly evolved flora or fauna significantly different from those that came before.
A notable example is the thylacine of Tasmania, a relict marsupial carnivore that survived into modern times on an island, whereas the rest of its species on mainland Australia had gone extinct between 3000 and 2000 years ago. [ 3 ]
Another example is Omma , a genus of beetle with a fossil record extending back over 200 million years to the Late Triassic and found worldwide during the Jurassic and Cretaceous, now confined to a single living species in Australia. [ 4 ] Another relict from the Triassic is Pholadomya , a common clam genus during the Mesozoic, now confined to a single rare species in the Caribbean. [ 5 ]
The tuatara endemic to New Zealand is the only living member of the once-diverse reptile order Rhynchocephalia , which has a fossil record stretching back 240 million years and during the Mesozoic era was globally distributed and ecologically diverse. [ 6 ]
An example from the fossil record would be a specimen of Nimravidae , an extinct branch of carnivores in the mammalian evolutionary tree, if said specimen came from Europe in the Miocene epoch. If that was the case, the specimen would represent, not the main population, but a last surviving remnant of the nimravid lineage. These carnivores were common and widespread in the previous epoch, the Oligocene , and disappeared when the climate changed and woodlands were replaced by savanna . They persisted in Europe in the last remaining forests as a relict of the Oligocene: a relict species in a relict habitat. [ 7 ]
An example of divergent evolution creating relicts is found in the shrews of the islands off the coast of Alaska, namely the Pribilof Island shrew and the St. Lawrence Island shrew . These species are apparently relicts of a time when the islands were connected to the mainland, and these species were once conspecific with a more widespread species, now the cinereus shrew , the three populations having diverged through speciation . [ 8 ]
In botany , an example of an ice age relict plant population is the Snowdon lily , notable as being precariously rare in Wales . The Welsh population is confined to the north-facing slopes of Snowdonia , where climatic conditions are apparently similar to ice age Europe. Some have expressed concern that the warming climate will cause the lily to die out in Great Britain . [ 9 ] Other populations of the same plant can be found in the Arctic and in the mountains of Europe and North America, where it is known as the common alplily.
While the extirpation of a geographically disjunct population of a relict species may be of regional conservation concern, outright extinction at the species level may occur in this century of rapid climate change if the geographic range occupied by a relict species has already contracted to the degree that it is narrowly endemic . For this reason, the traditional conservation tool of translocation has recently been reframed as assisted migration of narrowly endemic, critically endangered species that are already experiencing (or are soon expected to experience) climate change beyond their levels of tolerance. [ 10 ] Two examples of critically endangered relict species for which assisted migration projects are already underway are the western swamp tortoise of Australia and a subcanopy conifer tree in the United States called Florida Torreya . [ 11 ]
A well-studied botanical example of a relictual taxon is Ginkgo biloba , the last living representative of Ginkgoales that is restricted to China in the wild. Ginkgo trees had a diverse and widespread northern distribution during the Mesozoic , but are not known from the fossil record after the Pliocene other than G. biloba . [ 12 ] [ 13 ]
The Saimaa ringed seal ( Phoca hispida saimensis ) is an endemic subspecies, a relict of the last ice age that lives only in Finland in the landlocked and fragmented Saimaa freshwater lake complex. [ 14 ] The population now numbers fewer than 400 individuals, which poses a threat to its survival. [ 15 ]
Another example is the relict leopard frog once found throughout Nevada , Arizona , Utah , and Colorado , but now only found at Lake Mead National Recreation Area in Nevada and Arizona.
The concept of relictualism is useful in understanding the ecology and conservation status of populations that have become insularized, meaning confined to one small area or multiple small areas with no chance of movement between populations. Insularization makes a population vulnerable to forces that can lead to extinction , such as disease, inbreeding , habitat destruction , competition from introduced species , and global warming . Consider the case of the white-eyed river martin , a very localized species of bird found only in Southeast Asia, and extremely rare, if not already extinct. Its closest and only surviving relative is the African river martin , also very localized in central Africa. These two species are the only known members of the subfamily Pseudochelidoninae, and their widely disjunct populations suggest they are relict populations of a more common and widespread ancestor. Known to science only since 1968, the white-eyed river martin seems to have already disappeared. [ 16 ]
Studies have been done on relict populations in isolated mountain and valley habitats in western North America, where the basin and range topography creates areas that are insular in nature, such as forested mountains surrounded by inhospitable desert, called sky islands . Such situations can serve as refuges for certain Pleistocene relicts, such as Townsend's pocket gopher , [ 8 ] while at the same time creating barriers for biological dispersal . Studies have shown that such insular habitats have a tendency toward decreasing species richness . This observation has significant implications for conservation biology, because habitat fragmentation can also lead to the insularization of stranded populations. [ 3 ] [ 17 ]
So-called "relics of cultivation" [ 18 ] are plant species that were grown in the past for various purposes (medicinal, food, dyes, etc.), but are no longer utilized. They are naturalized and can be found at archaeological sites. | https://en.wikipedia.org/wiki/Relict_(biology) |
Relief Therapeutics is a Swiss biopharmaceutical company based in Geneva . [ 1 ] The company focuses on developing drugs for serious diseases with few or no existing treatment options. Its lead compound, RLF-100 , is a synthetic form of a natural peptide that protects the lung. The company was incorporated as Relief Therapeutics Holdings AG (RFLB.S) and listed on the SIX Swiss Exchange in 2016. [ 2 ]
Relief Therapeutics was founded in 2013 by Gael Hédou with the aim of developing new treatments for diseases with high unmet needs. [ 3 ] The company today considers itself the successor to Mondobiotech , which was founded in 2000 by Fabio Cavalli and Dorian Bevec . [ 4 ] Mondobiotech began research into Vasoactive intestinal peptide (VIP), a naturally occurring substance in humans that was first identified in the 1970s. They were granted US and European patents for a synthetic version of VIP known as aviptadil in 2006. [ 5 ]
On June 23, 2013, Mondobiotech merged with Italian pharmaceutical company Pierrel Research International to form a new Contract research organization known as Therametrics. On July 14, 2016, Therametrics merged with Relief Therapeutics to form Relief Therapeutics Holdings AG, which inherited all patents related to aviptadil. [ 6 ]
In the wake of the COVID-19 pandemic , scientists at Relief conducted initial studies into the efficacy of RLF-100 in treating severe COVID-19 patients. In June 2020, the U.S. Food and Drug Administration granted fast-track designation to RLF-100 for treatment of respiratory distress in COVID-19. [ 7 ] [ 8 ] In September 2020, Relief partnered with US-Israeli firm NRX Pharmaceuticals (formerly NeuroRx Inc) for the co-development of the drug and the co-ordination of US trials. [ 9 ] In April 2021, a reformulated version of aviptadil, known as Zyesami, was included in a National Institutes of Health (NIH) sponsored Phase 3 trial with the aim of testing aviptadil against remdesivir . [ 10 ] In May 2021, NRX submitted a request for an Emergency Use Authorization (EUA) to the US FDA for aviptadil's use in patients in intensive care. [ 11 ] [ 12 ] On 7 October 2021 Relief Therapeutics filed a lawsuit against NRX Pharmaceuticals and its CEO Dr. Jonathan Javitt in the Supreme Court of the State of New York , [ 13 ] citing multiple alleged breaches of the collaboration agreement signed by the two companies for the co-development of aviptadil. [ 14 ]
On 4 November 2021 the FDA declined EUA for the drug, but committed to working with NRX to further develop it. [ 15 ] On 29 November 2021, NRX announced that data analysis from the NIH-sponsored Phase 3 trial showed a fourfold increase in survival at 60 days for patients administered with Zyesami (Aviptadil) vs those who received placebo. [ 16 ]
On 27 October 2021, Applied Pharma Research (APR), a wholly-owned subsidiary of Relief, announced positive interim data from its clinical trial of Sentinox, a nasal spray aimed at reducing the viral load of patients with COVID-19 and, in turn, the transmissibility of the virus. [ 17 ]
In October 2021, Relief announced that its collaboration partner, Texas -based Acer Therapeutics, had successfully filed a New Drug Application with the US FDA for their drug ACER-001, for the treatment of Urea Cycle Disorders (UCDs) and Maple syrup urine disease . [ 18 ]
In September 2021, APR launched a chewable tablet for the treatment of Phenylketonuria , called PKU GOLIKE KRUNCH, in Germany and Italy. [ 19 ] APR is also developing Nexodyn, a drug which aids in the management of hard-to-heal ulcers requiring long periods of treatment. [ 20 ]
Relief is actively developing RLF-100 for non-COVID-19 related acute and chronic lung diseases, such as Pulmonary sarcoidosis . [ 21 ] | https://en.wikipedia.org/wiki/Relief_Therapeutics |
A relief valve or pressure relief valve ( PRV ) is a type of safety valve used to control or limit the pressure in a system; excessive pressure might otherwise build up and create a process upset, instrument or equipment failure, explosion, or fire.
Excess pressure is relieved by allowing the pressurized fluid to flow from an auxiliary passage out of the system. The relief valve is designed or set to open at a predetermined set pressure to protect pressure vessels and other equipment from being subjected to pressures that exceed their design limits. When the set pressure is exceeded, the relief valve becomes the " path of least resistance " as the valve is forced open and a portion of the fluid is diverted through the auxiliary route.
In systems containing flammable fluids, the diverted fluid (liquid, gas or liquid-gas mixture) is either recaptured [ 1 ] by a low pressure, high-flow vapor recovery system or is routed through a piping system known as a flare header or relief header to a central, elevated gas flare where it is burned, releasing the combustion gases into the atmosphere. [ 2 ] In non-hazardous systems, the fluid is often discharged to the atmosphere through suitable discharge pipework designed to prevent rainwater ingress, which can affect the set lift pressure, and positioned so as not to cause a hazard to personnel.
As the fluid is diverted, the pressure inside the vessel will stop rising. Once it reaches the valve's reseating pressure, the valve will close. The blowdown is usually stated as a percentage of set pressure and refers to how much the pressure needs to drop before the valve reseats. The blowdown typically ranges from roughly 2% to 20%, and some valves have adjustable blowdowns.
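As a worked illustration of the blowdown figure (the numbers below are examples only, not values from any standard), a valve set at 10 bar with a 7% blowdown would reseat at about 9.3 bar:

```python
# Example only: reseating pressure for a given set pressure and blowdown.
def reseat_pressure(set_pressure_bar, blowdown_fraction):
    return set_pressure_bar * (1.0 - blowdown_fraction)

print(round(reseat_pressure(10.0, 0.07), 2))   # 9.3 bar for a 7% blowdown
```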
In high-pressure gas systems, it is recommended that the outlet of the relief valve discharge to the open air. Where the outlet is connected to piping, the opening of a relief valve causes a pressure build-up in the piping system downstream of the valve, which often means the valve will not re-seat once the pressure falls back to the set pressure. For these systems, so-called "differential" relief valves are often used, in which the pressure acts only on an area that is much smaller than the area of the valve opening. Once such a valve has opened, the pressure must therefore fall well below the set pressure before it closes, and the back-pressure at the outlet can also hold it open. Another consideration is that if other relief valves are connected to the outlet pipe system, they may open as the pressure in the exhaust pipe system increases. This may cause undesired operation.
In some cases, a so-called bypass valve acts as a relief valve by being used to return all or part of the fluid discharged by a pump or gas compressor back to either a storage reservoir or the inlet of the pump or gas compressor. This is done to protect the pump or gas compressor and any associated equipment from excessive pressure. The bypass valve and bypass path can be internal (an integral part of the pump or compressor) or external (installed as a component in the fluid path). Many fire engines have such relief valves to prevent the overpressurization of fire hoses .
In other cases, equipment must be protected against being subjected to an internal vacuum (i.e., low pressure) that is lower than the equipment can withstand. In such cases, vacuum relief valves are used to open at a predetermined low-pressure limit and to admit air or an inert gas into the equipment to control the amount of vacuum.
In the petroleum refining , petrochemical and chemical manufacturing , natural gas processing and power generation industries, the term relief valve is associated with the terms pressure relief valve ( PRV ), pressure safety valve ( PSV ) and safety valve :
In most countries, industries are legally required to protect pressure vessels and other equipment by using relief valves. Also in most countries, equipment design codes such as those provided by the American Society of Mechanical Engineers (ASME), American Petroleum Institute (API) and other organizations like ISO (ISO 4126) must be complied with and those codes include design standards for relief valves. [ 3 ] [ 4 ]
The main standards, laws, or directives are:
Formed in 1977, the Design Institute for Emergency Relief Systems [ 5 ] was a consortium of 29 companies under the auspices of the American Institute of Chemical Engineers (AIChE) that developed methods for the design of emergency relief systems to handle runaway reactions. Its purpose was to develop the technology and methods needed for sizing pressure relief systems for chemical reactors, particularly those in which exothermic reactions are carried out. Such reactions include many classes of industrially important processes including polymerizations, nitrations, diazotizations, sulphonations, epoxidations, aminations, esterifications, neutralizations, and many others. Pressure relief systems can be difficult to design, not least because what is expelled can be gas/vapor, liquid, or a mixture of the two – just as with a can of carbonated drink when it is suddenly opened. For chemical reactions, it requires extensive knowledge of both chemical reaction hazards and fluid flow.
DIERS has investigated the two-phase vapor-liquid onset/disengagement dynamics and the hydrodynamics of emergency relief systems with extensive experimental and analysis work. [ 6 ] Of particular interest to DIERS were the prediction of two-phase flow venting and the applicability of various sizing methods for two-phase vapor-liquid flashing flow. DIERS became a user's group in 1985.
European DIERS Users' Group (EDUG) [ 7 ] is a group of mainly European industrialists, consultants and academics who use the DIERS technology. The EDUG started in the late 1980s and has an annual meeting. A summary of many of the key aspects of the DIERS technology has been published in the UK by the HSE. [ 8 ] | https://en.wikipedia.org/wiki/Relief_valve |
The Religious Orders Study conducted at the Rush Alzheimer's Disease Center at Rush University in Chicago is a research project begun in 1994 exploring the effects of aging on the brain . [ 1 ] More than 1,500 nuns , priests, and other religious professionals are participating across the United States. [ 1 ] The study is finding that cognitive exercise including social activities and learning new skills has a protective effect on brain health and the onset of dementia , while negative psychological factors like anxiety and clinical depression are correlated with cognitive decline. [ 1 ] The Religious Orders Study follows the earlier Nun Study .
Initial funding was provided by the National Institute on Aging in 1993. [ 2 ] | https://en.wikipedia.org/wiki/Religious_Orders_Study |
In mathematics , the Rellich–Kondrachov theorem is a compact embedding theorem concerning Sobolev spaces . It is named after the Austrian-German mathematician Franz Rellich and the Russian mathematician Vladimir Iosifovich Kondrashov . Rellich proved the L 2 theorem and Kondrashov the L p theorem.
Let Ω ⊆ R^n be an open, bounded Lipschitz domain , and let 1 ≤ p < n . Set
p∗ := np / (n − p)
(the Sobolev conjugate exponent of p). Then the Sobolev space W^{1,p}(Ω; R) is continuously embedded in the L^p space L^{p∗}(Ω; R) and is compactly embedded in L^q(Ω; R) for every 1 ≤ q < p∗ . In symbols,
W^{1,p}(Ω; R) ↪ L^{p∗}(Ω; R)
and
W^{1,p}(Ω; R) ⊂⊂ L^q(Ω; R) for every 1 ≤ q < p∗ .
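For example, when n = 3 and p = 2 this gives p∗ = (3 · 2) / (3 − 2) = 6, so W^{1,2}(Ω; R) embeds compactly in L^q(Ω; R) for every 1 ≤ q < 6, and any bounded sequence in W^{1,2}(Ω; R) has a subsequence converging in each such L^q(Ω; R).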
On a compact manifold M with C^1 boundary, the Kondrachov embedding theorem states that if k > ℓ and k − n/p > ℓ − n/q , then the Sobolev embedding
W^{k,p}(M) ⊂ W^{ℓ,q}(M)
is completely continuous (compact). [ 1 ]
Since an embedding is compact if and only if the inclusion (identity) operator is a compact operator , the Rellich–Kondrachov theorem implies that any uniformly bounded sequence in W^{1,p}(Ω; R) has a subsequence that converges in L^q(Ω; R). Stated in this form, in the past the result was sometimes referred to as the Rellich–Kondrachov selection theorem , since one "selects" a convergent subsequence. (However, today the customary name is "compactness theorem", whereas "selection theorem" has a precise and quite different meaning, referring to set-valued functions .)
The Rellich–Kondrachov theorem may be used to prove the Poincaré inequality , [ 2 ] which states that for u ∈ W^{1,p}(Ω; R) (where Ω satisfies the same hypotheses as above),
‖u − u_Ω‖_{L^p(Ω)} ≤ C ‖∇u‖_{L^p(Ω)}
for some constant C depending only on p and the geometry of the domain Ω, where
u_Ω := (1 / |Ω|) ∫_Ω u(y) dy
denotes the mean value of u over Ω. | https://en.wikipedia.org/wiki/Rellich–Kondrachov_theorem |
A relocatable building is a partially or completely assembled building that was constructed in a building manufacturing facility using a modular construction process. They are designed to be reused or repurposed multiple times and transported to different locations.
Relocatable buildings can offer more flexibility and a much quicker time to occupancy than conventionally built structures. They are essential in cases where speed, temporary swing space, and the ability to relocate are necessary. These buildings are cost-effective, code-compliant solutions for many markets.
Modular buildings can also contribute to LEED requirements in any category site-built construction can, and can even provide an advantage in the areas of sustainable sites, energy and atmosphere, materials and resources, and indoor environmental quality. [ 2 ] Modular construction can also provide an advantage in similar categories in the International Green Construction Code.
Relocatable modular buildings are utilized in any application where a relocatable building can meet a temporary space need. The primary markets served are education, general office, retail, healthcare, construction-site and in-plant offices, security, telecommunications/data/equipment centers, and emergency housing/disaster relief. | https://en.wikipedia.org/wiki/Relocatable_building |
In computing, a relying party (RP) is a server providing access to a secured software application.
Claims-based applications, where a claim is a statement an entity makes about itself in order to establish access, are also called relying party (RP) applications. RPs can also be called “claims aware applications” and “claims-based applications”. Web applications and services can both be RPs. [ 1 ]
With a Security Token Service (STS) , the RP redirects clients to an STS which authenticates the client and issues it a security token containing a set of claims about the client's identity, which it can present to the RP. Instead of the application authenticating the user directly, the RP can extract these claims from the token and use them for identity related tasks. [ 2 ]
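A relying party's use of such claims can be sketched abstractly. The claim names, issuer URL and `token_claims` dictionary below are hypothetical, and a real RP would first validate the token's signature and expiry against the STS before trusting its contents.

```python
# Hypothetical sketch of a relying party authorizing a request from claims
# carried in an already-validated security token (all values invented).

token_claims = {
    "issuer": "https://sts.example.com",
    "sub": "alice@example.com",
    "roles": ["reader", "editor"],
}

TRUSTED_ISSUER = "https://sts.example.com"

def authorize(claims, required_role):
    """Decide access from the claims instead of authenticating the user directly."""
    if claims.get("issuer") != TRUSTED_ISSUER:
        return False                               # only trust tokens from our STS
    return required_role in claims.get("roles", [])

print(authorize(token_claims, "editor"))   # True
print(authorize(token_claims, "admin"))    # False
```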
The OpenID standard defines a situation whereby a cooperating site can act as an RP, allowing the user to log into multiple sites using one set of credentials. The user benefits from not having to share their login credentials with multiple sites, and the operators of the cooperating site avoid having to develop their own login mechanism. [ 3 ]
An application demonstrating the concept of relying party is software running on mobile devices, which can be used not only for granting user access to software applications, but also for secure building access, without the user having to enter their credentials each time. [ 4 ] | https://en.wikipedia.org/wiki/Relying_party |