The Research Designs & Standards Organisation (RDSO) is the research and development and railway technical specification development organisation under the Ministry of Railways of the Government of India, which functions as a technical adviser and consultant to the Railway Board, the Zonal Railways, the Railway Production Units, RITES, RailTel and Ircon International in respect of design and standardization of railway equipment and problems related to railway construction, operations and maintenance. [ 1 ] [ 2 ] [ 3 ] To enforce standardization and co-ordination between various railway systems in British India, the Indian Railway Conference Association (IRCA) was set up in 1902. It was followed by the establishment of the Central Standards Office (CSO) in 1930, for preparation of designs, standards and specifications. However, until India's independence in 1947, most of the design and manufacture of railway equipment was entrusted to foreign consultants. After independence, a new organisation called the Railway Testing and Research Centre (RTRC) was set up in 1952 at Lucknow, for undertaking intensive investigation of railway problems, providing basic criteria and new concepts for design purposes, testing prototypes and generally assisting in finding solutions for specific problems. In 1957, the Central Standards Office (CSO) and the Railway Testing and Research Centre (RTRC) were integrated into a single unit named the Research Designs and Standards Organisation (RDSO) under the Ministry of Railways, with its headquarters at Manak Nagar, Lucknow. [ 1 ] The status of RDSO was changed from an "Attached Office" to a "Zonal Railway" on 1 January 2003, to give it greater flexibility and a boost to its research and development activities. [ 4 ] [ 5 ] RDSO is headed by the Director-General, who ranks equivalent to the general manager of a Zonal Railway. The present Director General is Sanjeev Bhutani. [ 6 ] The Director-General is assisted by an Additional Director General and 23 Sr. Executive Directors and Executive Directors, who are in charge of the 27 directorates: Bridges and Structures, the Centre for Advanced Maintenance Technology (CAMTECH), Carriage, Geotechnical Engineering, Testing, Track Design, Medical, EMU & Power Supply, Engine Development, Finance & Accounts, Telecommunication, Quality Assurance, Personnel, Works, Psycho-Technical, Research, Signal, Wagon Design, Electric Locomotive, Stores, Track Machines & Monitoring, Traction Installation, Energy Management, Traffic, Metallurgical & Chemical, Motive Power and Library & Publications. All the directorates except Defence Research are located in Lucknow. RDSO's notable projects and achievements include:
• Trial run of the world's first double-decker cargo liner train, which can haul passengers as well as cargo at 180 km/h. [ 7 ] [ 8 ]
• Trial run of the world's first triple-stack container train. [ 9 ]
• Design and specification of the 12,000 hp WAG-11 electric locomotive. [ 10 ]
• Design and specification of Vande Bharat. [ 11 ]
• Design and development of the dual-purpose Double Decker Express for cargo as well as passengers. [ 12 ] [ 13 ]
• Design and specification of WDAP-5.
• Design and specification of Utkrisht Double Decker.
• Development of design & specification of WAG-12.
• Development of Double Decker Express.
• Design of WAGC3 locomotive.
• Development of a new crashworthy design of 4500 HP WDG4 locomotive incorporating new technology to improve dynamic braking and attain significant fuel savings. [ citation needed ]
• Development of the Drivers' Vigilance Telemetric Control System, which directly measures and analyses variations in biometric parameters to determine the state of alertness of the driver.
• Development of Kavach.
• Development of Computer Aided Drivers Aptitude test equipment for screening high-speed train drivers for Rajdhani/Shatabdi Express trains to evaluate their reaction time, form perception, vigilance and speed anticipation.
• Assessment of residual fatigue life of critical railway components like rail, rail welds, wheels, cylinder heads, OHE masts, catenary wire, contact wire, wagon components, loco components, etc., to formulate remedial actions.
• Modification of the specification of the Electric Lifting Barrier to improve its strength and reliability. [ 14 ]
• Design and development of a modern fault-tolerant, fail-safe, maintainer-friendly Electronic Interlocking system.
• Development of a 4500 HP Hotel Load Locomotive to provide clean and noise-free power supply to coaches from the locomotive, eliminating the existing generator car of Garib Rath express trains.
• Field trials conducted for an electric locomotive hauling Rajdhani/Shatabdi express trains with a Head On Generation (HOG) system to provide clean and noise-free power supply to the coaches.
• Development of WiMAX technology to provide internet access to passengers in running trains.
• Design and development of Ballastless Track with an indigenous fastening system (BLT-IFS).
• Design and development of Rail Free Fastening (RFF) for girder bridges.
• Reduction in de-stressing temperature in LWR with the use of wider and heavier sleepers.
• Carrying Long Welded Rails through points and crossings.
• Laying of Long Welded Rails in sharp curves of less than 440 m radius.
• Design and development of a 25 t axle-load bogie for different wagons.
https://en.wikipedia.org/wiki/Research_Design_and_Standards_Organisation
The Research Institute for Fragrance Materials (RIFM) is a global non-profit scientific organization dedicated to the systematic assessment of fragrance ingredients to ensure their safe use in consumer products. Founded in 1966, RIFM conducts and evaluates research in toxicology, dermatology, environmental science, and other fields related to fragrance safety. It provides the scientific foundation for the standards of the International Fragrance Association (IFRA). [ 1 ] RIFM was established in 1966 by Thomas Parks in response to growing scientific and public interest in the safety of fragrance materials used in consumer products. [ 2 ] The Expert Panel for Fragrance Safety, RIFM's independent scientific review board, was formed in 1967. [ 3 ] In 1973, RIFM began publishing safety monographs on fragrance ingredients. [ 4 ] RIFM supports the fragrance industry by developing safety assessments for materials used in personal care products, household products, and cosmetics. These assessments are published in peer-reviewed journals and form the basis for IFRA Standards. [ 5 ] As of 2025, over 2,000 fragrance ingredient assessments are publicly accessible via the Fragrance Material Safety Resource Center (FMSRC), an Elsevier-managed platform. [ 6 ] RIFM is a 501(c)(3) organization headquartered in New Jersey, governed by a Board of Directors drawn from fragrance industry member companies. The Board does not influence scientific assessments, which are conducted independently and reviewed by an Expert Panel. [ 7 ] [ 8 ] Since 1984, RIFM has maintained a proprietary database of over 80,000 references and approximately 200,000 studies on fragrance safety. [ 9 ] The database includes toxicology, clinical, regulatory, and environmental data, and is accessible to regulators, researchers, and industry scientists. [ 10 ] RIFM's methodology includes data gathering, quality evaluation, data gap analysis, exposure and risk assessment, peer review, and publication. [ 11 ] RIFM uses NAMs—non-animal testing strategies—including in vitro, in silico, PBPK modeling, chemical grouping, and high-throughput screening. [ 12 ] [ 13 ] Seven safety endpoints are evaluated in each assessment. Established in 1967, the Expert Panel for Fragrance Safety includes scientists in toxicology, dermatology, and related fields. Members rotate regularly and are independent of the fragrance industry. [ 15 ] [ 16 ] [ 17 ] Studies focus on aquatic ecosystems, bioaccumulation, biodegradation, and wastewater monitoring. [ 18 ] [ 19 ] Clinical studies address skin sensitization and dermal absorption. RIFM pioneered the QRA (Quantitative Risk Assessment) model for allergen safety. [ 20 ] [ 21 ] RIFM develops realistic exposure models based on consumer habits across product categories. [ 22 ] [ 23 ] RIFM collaborates with the European Chemicals Agency, FDA, and EPA. It participates in global forums such as the OECD and ICCR. [ 24 ] [ 25 ] RIFM science underpins IFRA Standards and is used by regulators such as the EU SCCS and the U.S. EPA and FDA. [ 26 ] [ 27 ] [ 28 ] RIFM has faced scrutiny over industry funding. In response, it promotes transparency, peer-reviewed publications, and independent reviews. [ 31 ] [ 32 ] Environmental groups have also raised concerns about persistent synthetic musks. RIFM has expanded its research in response. [ 33 ] RIFM is advancing computational toxicology, AI-based hazard prediction, and aggregate exposure assessment. [ 34 ] [ 35 ] [ 36 ]
https://en.wikipedia.org/wiki/Research_Institute_of_Fragrance_Materials
In computing, a Research Object is a method for the identification, aggregation and exchange of scholarly information on the Web. The primary goal of the research object approach is to provide a mechanism to associate related resources about a scientific investigation so that they can be shared using a single identifier. As such, research objects are an advanced form of enhanced publication. [ 1 ] Current implementations build upon existing Web technologies and methods, including Linked Data, HTTP, Uniform Resource Identifiers (URIs), the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) and the Open Annotation model, as well as existing approaches for identification and knowledge representation in the scientific domain, including Digital Object Identifiers for documents, ORCID identifiers for people, and the Investigation, Study, and Assay (ISA) data model. The research object approach is primarily motivated by a desire to improve the reproducibility of scientific investigations. Central to the proposal is the need to share research artifacts commonly distributed across specialist repositories on the Web, including supporting data, software executables, source code, presentation slides, and presentation videos. Research Objects are not one specific technology but are instead guided by a set of principles; specifically, research objects are guided by the three principles of identity, aggregation and annotation. [ 2 ] A number of communities are developing the research object concept. A W3C community group entitled the Research Objects for Scholarly Communication (ROSC) Community Group was started in April 2013. The community charter states that the goal of the ROSC activity is [ 3 ] "to exchange requirements and expectations for supporting a new form of scholarly communication". The Community Group aims to produce several types of deliverables. The FAIR digital object forum is a community that brings together experts from the FAIR data movement, the semantic web, and digital publishing of scholarly work. The first conference on FAIR digital objects led the coalition to ratify the Leiden Declaration [ 4 ] on FAIR digital objects. The principles contained in the Leiden Declaration provide a prescriptive framework for infrastructure development around digital research objects. This framework draws from the FAIR data principles and ideas around distributed infrastructure that relies on open protocols to prevent vendor lock-in and ensure access that is "as open as possible, as restricted as necessary". The Mozilla Science Lab has initiated an activity in collaboration with GitHub and Figshare to develop "Code as research object". The initial proposal of the activity is to allow users to transfer code from a GitHub repository to figshare, and provide that code with a Digital Object Identifier (DOI), providing a permanent record of the code that can be cited in future publications.
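As a rough illustration of the three principles just named (a sketch with hypothetical identifiers, not a normative research-object format):

```python
# Illustrative sketch only: a research object as an identified aggregation of
# resources with annotations. All identifiers below are hypothetical examples.
import json

research_object = {
    "@id": "https://doi.org/10.0000/example-ro",      # identity: one citable URI/DOI
    "aggregates": [                                    # aggregation: related resources
        {"@id": "https://doi.org/10.0000/dataset", "type": "Dataset"},
        {"@id": "https://github.com/example/analysis", "type": "Software"},
        {"@id": "https://orcid.org/0000-0000-0000-0000", "type": "Person"},
    ],
    "annotations": [                                   # annotation: typed relations
        {"about": "https://github.com/example/analysis",
         "content": "Script that produced Figure 2 from the dataset."},
    ],
}

print(json.dumps(research_object, indent=2))
```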
https://en.wikipedia.org/wiki/Research_Object
The Research Parasite Award is an honor given annually at the Pacific Symposium on Biocomputing to recognize scientists who study previously published data in ways not anticipated by the researchers who first generated it. The tongue-in-cheek name of the award refers to a New England Journal of Medicine editorial [ 1 ] that coined the term "research parasite" to disparage such work. [ 2 ] [ 3 ] The idea was first suggested on Twitter by Iowa State University researcher Iddo Friedberg shortly after the editorial was published, [ 4 ] and was then initiated by Casey Greene, a pharmacologist at the University of Pennsylvania. [ 5 ] Two Research Parasite Awards are given to recognize scientists who have made outstanding and rigorous contributions to the analysis of secondary data in biology. Recipients must reuse data generated by someone else to extend, replicate, or disprove a research study in a reproducible manner. The junior parasite award recognizes an outstanding contribution from an early-career scientist, such as a postdoctoral, graduate, or undergraduate trainee. The senior parasitism award recognizes an individual who has engaged in exemplary research parasitism for a sustained period of time. Since the launch of the award in 2017, [ 6 ] a travel grant to attend the Pacific Symposium on Biocomputing has been provided to the junior parasite award winner by GigaScience. [ 7 ] Starting with the 2019 award year, the awards have been supported in part by an endowment housed at the University of Pennsylvania. The Research Symbiont Awards, inspired by the Research Parasite Award, were founded by J. Brian Byrd, a physician-scientist at the University of Michigan. [ 8 ] Recognizing exemplars in the practice of data sharing, they are given to scientists working in any area of study who have shared data beyond the expectations of their field. [ 9 ] Naming the data-sharing award after symbionts rather than parasites stresses that the process can be mutually beneficial to the data-producing "host", because it increases the scientific impact of their investigations. From 2021 the award has been sponsored by the Wellcome Trust and the Dragon Master Foundation. The 2021 winners of the General Symbionts prize were Zhang Yongzhen and Edward C. Holmes for their sharing of the sequence of the first SARS-CoV-2 genome. [ 10 ] Recipients self-nominate using a letter that references their published manuscripts exemplifying data reuse in a manner that enhances reproducibility. These manuscripts should describe original scientific research involving data re-use or the secondary analysis of shared data, and should extend, replicate, or disprove the results of the original manuscript describing the data. The nomination materials are reviewed by the Selection Committee, which is made up of at least three four-year-term positions as well as the past two recipients of the Sustained Parasitism award.
https://en.wikipedia.org/wiki/Research_Parasite_Award
Research Unix refers to the early versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Sciences Research Center (CSRC). The term Research Unix first appeared in the Bell System Technical Journal (Vol. 57, No. 6, Part 2, July/August 1978) to distinguish it from other versions internal to Bell Labs (such as PWB/UNIX and MERT) whose code base had diverged from the primary CSRC version. The term was little used until Version 8 Unix (1985), but it has since been retroactively applied to earlier versions as well. Prior to V8, the operating system was most commonly called simply UNIX (in caps) or the UNIX Time-Sharing System. AT&T licensed Version 5 to educational institutions, and Version 6 also to commercial sites. Schools paid $200 and others $20,000, discouraging most commercial use, but Version 6 was the most widely used version into the 1980s. Research Unix versions are often referred to by the edition of the manual that describes them, [ 1 ] because early versions and the last few were never officially released outside of Bell Labs and grew organically. So the first Research Unix would be the First Edition, and the last the Tenth Edition. Another common way of referring to them is as "Version x Unix" or "V x Unix", where x is the manual edition. All modern editions of Unix—excepting Unix-like implementations such as Coherent, Minix, and Linux—derive from the 7th Edition. [ citation needed ] Starting with the 8th Edition, versions of Research Unix had a close relationship to BSD. This began by using 4.1cBSD as the basis for the 8th Edition. In a Usenet post from 2000, Dennis Ritchie described these later versions of Research Unix as being closer to BSD than to UNIX System V, [ 2 ] which also included some BSD code: [ 1 ] Research Unix 8th Edition started from (I think) BSD 4.1c, but with enormous amounts scooped out and replaced by our own stuff. This continued with 9th and 10th. The ordinary user command-set was, I guess, a bit more BSD-flavored than SysVish, but it was pretty eclectic. In 2002, Caldera International released [ 12 ] Unix V1, V2, V3, V4, V5, V6, V7 on PDP-11 and Unix 32V on VAX as FOSS under a permissive BSD-like software license. [ 13 ] [ 14 ] [ 15 ] In 2017, The Unix Heritage Society and Alcatel-Lucent USA Inc., on behalf of itself and Nokia Bell Laboratories, released V8, V9, and V10 under the condition that only non-commercial use was allowed, and that they would not assert copyright claims against such use. [ 16 ]
https://en.wikipedia.org/wiki/Research_Unix
Research chemicals are chemical substances which scientists use for medical and scientific research purposes. One characteristic of a research chemical is that it is for laboratory research use only; a research chemical is not intended for human or veterinary use. In the United States, this distinction is required on the labels of research chemicals and exempts them from regulation under parts 100-740 in Title 21 of the Code of Federal Regulations (21CFR). [ 1 ] Research agrochemicals are created and evaluated to select effective substances for commercial off-the-shelf end-user products. Many research agrochemicals are never publicly marketed. Agricultural research chemicals often use sequential code names. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Research_chemical
In patent law, the research exemption or safe harbor exemption is an exemption to the rights conferred by patents, which is especially relevant to drugs. According to this exemption, despite the patent rights, performing research and tests for preparing regulatory approval, for instance by the FDA in the United States, does not constitute infringement for a limited term before the end of the patent term. [ 1 ] This exemption allows generic manufacturers to prepare generic drugs in advance of the patent expiration. In the United States, this exemption is also technically called the § 271(e)(1) exemption or Hatch-Waxman exemption. In 2005, the U.S. Supreme Court considered the scope of the Hatch-Waxman exemption in Merck v. Integra. The Supreme Court held that the statute exempts from infringement all uses of compounds that are reasonably related to submission of information to the government under any law regulating the manufacture, use or distribution of drugs. In Canada, this exemption is known as the Bolar provision or Roche-Bolar provision, named after the case Roche Products v. Bolar Pharmaceutical. In the European Union, equivalent exemptions are allowed under the terms of EC Directives 2001/82/EC (as amended by Directive 2004/28/EC) and 2001/83/EC (as amended by Directives 2002/98/EC, 2003/63/EC, 2004/24/EC and 2004/27/EC). The common law research exemption is an affirmative defense to infringement where the alleged infringer is using a patented invention for research purposes. The doctrine originated in Justice Joseph Story's 1813 appellate decision Whittemore v. Cutter, 29 Fed. Cas. 1120 (C.C.D. Mass. 1813). Story famously wrote that the intent of the legislature could not have been to punish someone who infringes "merely for [scientific] experiments, or for the purpose of ascertaining the sufficiency of the machine to produce its described effects." Subsequent decisions later distinguished between commercial and non-commercial research. In 2002, the Court of Appeals for the Federal Circuit dramatically limited the scope of the research exemption in Madey v. Duke University, 307 F.3d 1351, 1362 (Fed. Cir. 2002). The court did not reject the defense, but left only a "very narrow and strictly limited experimental use defense" for "amusement, to satisfy idle curiosity, or for strictly philosophical inquiry." The court also precluded the defense where, regardless of profit motive, the research was done "in furtherance of the alleged infringer's legitimate business." In the case of a research university like Duke University, the court held that the alleged use was in furtherance of its legitimate business - namely "increas[ing] the status of the institution and lur[ing] lucrative research grants", and thus the defense was inapplicable. [ 2 ] In Merck KGaA v. Integra Lifesciences I, Ltd., 545 U.S. 193 (2005), the United States Supreme Court held that the use of patented compounds in preclinical studies is protected under 35 U.S.C. § 271(e)(1) if there is a reasonable basis to believe that the compound tested could be the subject of an FDA submission and if the experiments will produce the types of information relevant to an Investigational New Drug or New Drug Application. In cases where the Supreme Court has ruled narrowly (e.g., pharmaceutical drugs only) and a lower court has ruled more broadly, further litigation in the lower courts will often be necessary before a subsequent case resolves the issue more generally as a matter of settled case law.
[ 3 ] This type of exception is permitted by Article 30 of the WTO 's TRIPs Agreement : Members may provide limited exceptions to the exclusive rights conferred by a patent, provided that such exceptions do not unreasonably conflict with a normal exploitation of the patent and do not unreasonably prejudice the legitimate interests of the patent owner, taking account of the legitimate interests of third parties.
https://en.wikipedia.org/wiki/Research_exemption
Research in Computational Molecular Biology (RECOMB) is an annual academic conference on the subjects of bioinformatics and computational biology. The conference has been held every year since 1997 and is widely considered one of the two best international conferences in computational biology publishing rigorously peer-reviewed papers, alongside the ISMB conference. The conference is affiliated with the International Society for Computational Biology. Since the first conference, authors of accepted proceedings papers have been invited to submit a revised version to a special issue of the Journal of Computational Biology. [ 1 ] RECOMB was established in 1997 by Sorin Istrail, Pavel Pevzner and Michael Waterman. The first conference was held at the Sandia National Laboratories in Santa Fe, New Mexico. [ 2 ] A series of RECOMB Satellite meetings was established by Pavel Pevzner in 2001. These meetings cover specialist aspects of bioinformatics, including massively parallel sequencing, comparative genomics, regulatory genomics and bioinformatics education. [ 3 ] Today, it consists of focused meetings covering various specialized aspects of bioinformatics. As of RECOMB 2010, the conference has included a highlights track, modelled on the success of a similar track at the ISMB conference. The highlights track contains presentations of computational biology papers published in the previous 18 months. [ 2 ] [ 4 ] In 2016 the conference started a partnership with Cell Systems. Each year, a subset of the work accepted at RECOMB is also considered for publication in a special issue of Cell Systems devoted to RECOMB. Other RECOMB papers are invited for a short synopsis (Cell Systems Calls) in the same issue. More recently, RECOMB has also partnered with Genome Research to publish revised versions of a subset of RECOMB-accepted papers. [ 5 ] The RECOMB Steering Committee [ 6 ] currently includes Bonnie Berger (chair), Vineet Bafna, Eleazar Eskin, Jian Ma, Teresa Przytycka, Cenk Sahinalp, Roded Sharan, and Martin Vingron.
https://en.wikipedia.org/wiki/Research_in_Computational_Molecular_Biology
A centenarian is a person who has attained the age of 100 years or more. Research on centenarians has become more common with clinical and general population studies now having been conducted in France , Hungary , Japan , Italy , Finland , Denmark , the United States , and China . [ 1 ] Centenarians are the second fastest-growing demographic in much of the developed world. [ 2 ] By 2030, it is expected that there will be around a million centenarians worldwide. [ 3 ] In the United States, a 2010 Census Bureau report found that more than 80 percent of centenarians are women. [ 4 ] Research carried out in Italy suggests that healthy centenarians have high levels of vitamin A and vitamin E and that this seems to be important in guaranteeing their extreme longevity. [ 5 ] Other research contradicts this and has found that these findings do not apply to centenarians from Sardinia , for whom other factors probably play a more important role. [ 6 ] A preliminary study carried out in Poland showed that, in comparison with young healthy female adults, centenarians living in Upper Silesia had significantly higher red blood cell glutathione reductase and catalase activities and higher, although insignificantly, serum levels of vitamin E. [ 7 ] Researchers in Denmark have also found that centenarians exhibit a high activity of glutathione reductase in red blood cells. In this study, those centenarians having the best cognitive and physical functional capacity tended to have the highest activity of this enzyme . [ 8 ] Some research suggests that high levels of vitamin D may be associated with longevity. [ 9 ] Other research has found that people having parents who became centenarians have an increased number of naïve B cells . [ 10 ] It is believed that centenarians possess a different adiponectin isoform pattern and have a favorable metabolic phenotype in comparison with elderly individuals. [ 11 ] Research carried out in the United States has found that people are much more likely to celebrate their 100th birthday if their brother or sister has reached the age. [ 12 ] These findings, from the New England Centenarian Study in Boston, suggest that the sibling of a centenarian is four times more likely to live past 90 than the general population. [ 13 ] Other research carried out by the New England Centenarian Study has identified 150 genetic variations that appeared to be associated with longevity which could be used to predict with 77 percent accuracy whether someone would live to be at least 100. [ 14 ] Research also suggests that there is a clear link between living to 100 and inheriting a hyperactive version of telomerase, an enzyme that prevents cells from ageing. Scientists from the Albert Einstein College of Medicine in the US say centenarian Ashkenazi Jews have this mutant gene. [ 15 ] Many centenarians manage to avoid chronic diseases even after indulging in a lifetime of serious health risks. For example, many people in the New England Centenarian Study experienced a century free of cancer or heart disease despite smoking as many as 60 cigarettes a day for 50 years. The same applies to people from Okinawa in Japan, where around half of supercentenarians had a history of smoking and one-third were regular alcohol drinkers. It is possible that these people may have had genes that protected them from the dangers of carcinogens or the random mutations that crop up naturally when cells divide. 
[ 16 ] Similarly, centenarian research carried out at the Albert Einstein College of Medicine found that the individuals studied had less than sterling health habits. As a group, for example, they were more obese, more sedentary and exercised less than other, younger cohorts. The researchers also discovered three uncommon genotype similarities among the centenarians: one gene that causes HDL cholesterol to be at levels two- to three-fold higher than average; another gene that results in a mildly underactive thyroid; and a functional mutation in the human growth hormone axis that may be a safeguard from aging-associated diseases. [ 17 ] It is well known that the children of parents who have a long life are also likely to reach a healthy age, but it is not known why, although the inherited genes are probably important. [ 18 ] A variation in the gene FOXO3 is known to have a positive effect on the life expectancy of humans, and is found much more often in people living to 100 and beyond – moreover, this appears to be true worldwide. [ 19 ] Some research suggests that centenarian offspring are more likely to age in better cardiovascular health than their peers. [ 20 ] A 2011 study found people with exceptional longevity (aged 95 and older) not to be distinct from the general population in terms of lifestyle factors such as regular physical activity, diet or alcohol consumption. [ 21 ] A study indicates that gut microbiomes with large amounts of microbes capable of generating unique secondary bile acids are a key element of centenarians' longevity. [ 22 ] [ 23 ] Several studies have shown that centenarians have better cardiovascular risk profiles compared to younger old people. The contribution of drug treatments to promoting extreme longevity is not confirmed, and centenarians in general have needed fewer drugs at younger ages due to a healthy lifestyle. [ 24 ] A study by the International Longevity Centre-UK, published in 2011, suggested that today's centenarians may be healthier than the next generation of centenarians. [ 25 ] Ninety percent of the centenarians studied in the New England Centenarian Study were functionally independent for the vast majority of their lives up until the average age of 92 years, and 75% were the same at an average age of 95 years. [ 26 ] Similarly, a study of US supercentenarians (age 110 to 119 years) showed that, even at these advanced ages, 40% needed little assistance or were independent. [ 27 ] A study supported by the US National Institute on Aging found significant associations between month of birth and longevity, with individuals born in September–November having a higher likelihood of becoming centenarians compared to March-born individuals. [ 28 ] In 2024, Saul Justin Newman published a pre-print paper finding that supercentenarians and extreme age records tend to come from areas with no birth certificates, rampant clerical errors, pension fraud, and short life spans. The study argues that document validation, the only method demographers use to verify old age, is susceptible to errors that have often been ignored due to confirmation bias and other factors, causing an inflated number of valid cases. This suggests that many figures for supercentenarian populations, and studies that rely on those populations, especially in the so-called Blue zones, may contain significant errors that have yet to be reassessed critically. [ 30 ] The study was awarded the Ig Nobel Prize in 2024. [ 31 ]
https://en.wikipedia.org/wiki/Research_into_centenarians
Research software engineering is not, as the name might suggest, merely the application of software engineering practices, methods and techniques to research software, i.e. software that was made for and is mainly used within research projects; it also includes aspects of other (varying) research fields as well as open science. [ 1 ] [ 2 ] The term was proposed in a research paper in 2010 in response to an empirical survey on tools used for software development in research projects. [ 3 ] It started to be used in the United Kingdom in 2012, [ 4 ] [ 5 ] when it was needed to define the type of software development needed in research. This focuses on reproducibility, reusability, and accuracy of data analysis and applications created for research. [ 6 ] Various types of associations and organisations have been created around this role to support the creation of posts in universities and research institutes. In 2014 a Research Software Engineer Association was created in the UK, [ 7 ] which attracted 160 members in the first three months and which led to the creation of the Society of Research Software Engineering in 2019. Other countries like the Netherlands, Germany, and the USA followed by creating similar communities, and similar efforts are being pursued in Asia, Australia, Canada, New Zealand, the Nordic countries, and Belgium. In January 2021 the International Council of RSE Associations was introduced. [ 8 ] The UK has over 40 universities and institutes [ 9 ] with groups that provide access to software expertise for different areas of research. Additionally, the Engineering and Physical Sciences Research Council created a Research Software Engineer fellowship to promote this role and help the creation of RSE groups across the UK, with calls in 2015, 2017, and 2020. The world's first RSE conference took place in the UK in September 2016 [ 7 ] and has been held annually since, except for a gap in 2020. In 2019 the first national RSE conferences in Germany [ 10 ] and the Netherlands [ 11 ] were held; the next editions, planned for 2020, were cancelled. The SORSE (A Series of Online Research Software Events) community was established in 2020 in response to the COVID-19 pandemic and ran its first online event in September 2020. The annual Research Software Engineering Conference organised by the Society of Research Software Engineering recognises outstanding contributions to the field of research software engineering through awards presented at the conference. The RSE Society Award was first presented in 2019, at the Fourth Conference of Research Software Engineering held at the University of Birmingham, to recognise outstanding contributions to the research software engineering community over a sustained period of time. In 2022, three community awards were created to recognise contributions to the RSE community over the past 12 months: Rising Star, Training & Education, and Impact. [ 12 ] From 2023, these were renamed the Claire Wyatt Community Awards, "to recognise the incredible contribution that Claire [Wyatt] made to the Society over the last decade". [ 13 ]
https://en.wikipedia.org/wiki/Research_software_engineering
Reservoir engineering is a branch of petroleum engineering that applies scientific principles to the flow of fluids through porous media during the development and production of oil and gas reservoirs, so as to obtain a high economic recovery. The working tools of the reservoir engineer are subsurface geology, applied mathematics, and the basic laws of physics and chemistry governing the behavior of the liquid and vapor phases of crude oil, natural gas, and water in reservoir rock. Of particular interest to reservoir engineers is generating accurate reserves estimates for use in financial reporting to the SEC and other regulatory bodies. Other job responsibilities include numerical reservoir modeling, production forecasting, well testing, well drilling and workover planning, economic modeling, and PVT analysis of reservoir fluids. Reservoir engineers also play a central role in field development planning, recommending appropriate and cost-effective reservoir depletion schemes, such as waterflooding or gas injection, to maximize hydrocarbon recovery. Due to legislative changes in many hydrocarbon-producing countries, they are also involved in the design and implementation of carbon sequestration projects in order to minimise the emission of greenhouse gases. Reservoir engineers often specialize in one of two areas: working with the static (geological) model or with the dynamic model. The dynamic model combines the static model, pressure- and saturation-dependent properties, well locations and geometries, and the facilities layout to calculate the pressure and saturation distribution in the reservoir, and the production profiles versus time.
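As a toy illustration of the production forecasting mentioned above (exponential Arps decline is one classical technique; the source does not prescribe a method, and every parameter below is an assumption):

```python
# Hedged sketch: exponential (Arps) decline-curve forecast, one classical way
# to produce production profiles and reserves estimates.
import math

qi = 1000.0      # assumed initial oil rate, STB/day
D = 0.15 / 365   # assumed nominal decline rate, 1/day (15% per year)
q_limit = 50.0   # assumed economic limit rate, STB/day

# Rate declines as q(t) = qi * exp(-D*t); cumulative production Np = (qi - q)/D.
t_limit = math.log(qi / q_limit) / D     # days until the economic limit is reached
eur = (qi - q_limit) / D                 # estimated ultimate recovery, STB

print(f"economic limit reached after {t_limit/365:.1f} years, EUR ≈ {eur:,.0f} STB")
```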
https://en.wikipedia.org/wiki/Reservoir_engineering
In a computer or data transmission system, a reset clears any pending errors or events and brings the system to its normal condition or an initial state, usually in a controlled manner. It is usually done in response to an error condition, when it is impossible or undesirable for a processing activity to proceed and all error recovery mechanisms fail. A computer storage program would normally perform a "reset" if a command times out and error recovery schemes like retry or abort also fail. [ 1 ] A software reset (or soft reset) is initiated by software, for example when the Control-Alt-Delete key combination is pressed or a restart is executed in Microsoft Windows. Most computers have a reset line that brings the device into the startup state and is active for a short time after powering on. For example, in the x86 architecture, asserting the RESET line halts the CPU; this is done after the system is switched on and before the power supply has asserted "power good" to indicate that it is ready to supply stable voltages at sufficient power levels. [ 2 ] Reset places less stress on the hardware than power cycling, as the power is not removed. Many computers, especially older models, have user-accessible "reset" buttons that assert the reset line to facilitate a system reboot in a way that cannot be trapped (i.e. prevented) by the operating system; on some mobile devices, holding a combination of buttons serves the same purpose. [ 3 ] [ 4 ] Devices without a dedicated reset button may instead have the user hold the power button to cut power, after which the user can turn the computer back on. [ 5 ] Out-of-band management also frequently provides the possibility to reset the remote system in this way. Many memory-capable digital circuits (flip-flops, registers, counters and so on) accept a reset signal that sets them to a pre-determined state. This signal is often applied after powering on but may also be applied under other circumstances. After a hard reset, the registers of many hardware components are cleared. The ability of an electronic device to reset itself in case of error or abnormal power loss is an important aspect of embedded system design and programming. This ability can be observed with everyday electronics such as a television, audio equipment or the electronics of a car, which are able to function as intended again even after having lost power suddenly. A sudden and strange error with a device might sometimes be fixed by removing and restoring power, making the device reset. Some devices, such as portable media players, very often have a dedicated reset button as they are prone to freezing or locking up. The lack of a proper reset ability could otherwise render the device useless after a power loss or malfunction. User-initiated hard resets can be used to reset the device if the software hangs, crashes, or is otherwise unresponsive. However, data may become corrupted if this occurs. [ 6 ] Generally, a hard reset is initiated by pressing a dedicated reset button. On some systems (e.g., the PlayStation 2 video game console), pressing and releasing the power button initiates a hard reset, and holding the button turns the system off. The 8086 microprocessor provides a RESET pin that is used to perform a hardware reset. When a HIGH is applied to the pin, the CPU immediately stops and sets the major registers to these values: CS = 0xFFFF and IP = 0x0000, with the other segment registers and the flags cleared to zero. The CPU uses the values of the CS and IP registers to find the location of the next instruction to execute.
The location of the next instruction is calculated as: physical address = (CS << 4) + IP. This implies that after a hardware reset, the CPU will start execution at the physical address 0xFFFF0. In IBM PC compatible computers, this address maps to BIOS ROM. The memory word at 0xFFFF0 usually contains a JMP instruction that redirects the CPU to execute the initialization code of the BIOS. This JMP instruction is thus the very first instruction executed after a reset. [ 7 ] Later x86 processors reset the CS and IP registers similarly; see Reset vector. Apple Mac computers allow various levels of resetting, [ 8 ] including Control-Command-Eject, analogous to the three-finger salute (Control-Alt-Delete) on Windows computers.
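The segment-offset arithmetic above is easy to check directly; a minimal sketch (illustrative only, names are hypothetical):

```python
# Computing the 8086 reset address from the segment:offset pair described above.

def real_mode_address(cs: int, ip: int) -> int:
    """Physical address = (CS << 4) + IP, masked to the 8086's 20-bit bus."""
    return ((cs << 4) + ip) & 0xFFFFF

# Reset values on the 8086: CS = 0xFFFF, IP = 0x0000
assert real_mode_address(0xFFFF, 0x0000) == 0xFFFF0
print(hex(real_mode_address(0xFFFF, 0x0000)))  # -> 0xffff0
```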
https://en.wikipedia.org/wiki/Reset_(computing)
In metric geometry, the Reshetnyak gluing theorem gives information on the structure of a geometric object built by using other geometric objects, belonging to a well defined class, as building blocks. Intuitively, it states that a space obtained by joining (i.e. "gluing") together, in a precisely defined way, other spaces having a given property inherits that very same property. The theorem was first stated and proved by Yurii Reshetnyak in 1968. [ 1 ] Theorem: Let $X_i$ be complete locally compact geodesic metric spaces of CAT curvature $\leq \kappa$, and $C_i \subset X_i$ convex subsets which are isometric. Then the space $X$, obtained by gluing all $X_i$ along all $C_i$, is also of CAT curvature $\leq \kappa$. For an exposition and a proof of the Reshetnyak gluing theorem, see (Burago, Burago & Ivanov 2001, Theorem 9.1.21).
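As a concrete instance (added for illustration; not in the source text), doubling a CAT(0) space across a convex subset satisfies the hypotheses with two glued pieces:

```latex
% Two copies of the closed Euclidean half-plane, glued along their common
% boundary line (a convex subset of each):
\[
  X_1 = X_2 = \{(x,y) \in \mathbb{R}^2 : y \ge 0\}, \qquad
  C_1 = C_2 = \{(x,0) : x \in \mathbb{R}\}.
\]
% The glued space is isometric to the Euclidean plane,
\[
  X = X_1 \sqcup_{C} X_2 \cong \mathbb{R}^2,
\]
% which indeed has CAT curvature \le 0, as the theorem with \kappa = 0 predicts.
```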
https://en.wikipedia.org/wiki/Reshetnyak_gluing_theorem
The residence time of a fluid parcel is the total time that the parcel has spent inside a control volume (e.g. a chemical reactor, a lake, a human body). The residence time of a set of parcels is quantified in terms of the frequency distribution of the residence time in the set, which is known as the residence time distribution (RTD), or in terms of its average, known as the mean residence time. Residence time plays an important role in chemistry and especially in environmental science and pharmacology. Under the name lead time or waiting time it plays a central role respectively in supply chain management and queueing theory, where the material that flows is usually discrete instead of continuous. The concept of residence time originated in models of chemical reactors. The first such model was an axial dispersion model by Irving Langmuir in 1908. This received little attention for 45 years; other models were developed, such as the plug flow reactor model and the continuous stirred-tank reactor, and the concept of a washout function (representing the response to a sudden change in the input) was introduced. Then, in 1953, Peter Danckwerts resurrected the axial dispersion model and formulated the modern concept of residence time. [ 1 ] The time that a particle of fluid has been in a control volume (e.g. a reservoir) is known as its age. In general, each particle has a different age. The frequency of occurrence of the age $\tau$ in the set of all the particles that are located inside the control volume at time $t$ is quantified by means of the (internal) age distribution $I$. [ 2 ] At the moment a particle leaves the control volume, its age is the total time that the particle has spent inside the control volume, which is known as its residence time. The frequency of occurrence of the age $\tau$ in the set of all the particles that are leaving the control volume at time $t$ is quantified by means of the residence time distribution, also known as the exit age distribution $E$. [ 2 ] Both distributions are positive and have by definition unitary integrals along the age: [ 2 ] $\int_0^\infty I(\tau)\,d\tau = \int_0^\infty E(\tau)\,d\tau = 1$. In the case of steady flow, the distributions are assumed to be independent of time, that is $\partial_t E = \partial_t I = 0 \;\forall t$, which allows the distributions to be redefined as simple functions of the age only. If the flow is steady (but a generalization to non-steady flow is possible [ 3 ] ) and is conservative, then the exit age distribution and the internal age distribution can be related one to the other: [ 2 ] $\bar\tau\, I(\tau) = 1 - \int_0^\tau E(\tau')\,d\tau'$, where $\bar\tau$ is the mean residence time defined below. Distributions other than $E$ and $I$ can usually be traced back to them.
For example, the fraction of particles leaving the control volume at time $t$ with an age greater than or equal to $\tau$ is quantified by means of the washout function $W$, which is the complement to one of the cumulative exit age distribution: $W(\tau) = 1 - \int_0^\tau E(\tau')\,d\tau'$. The mean age of all the particles inside the control volume at time t is the first moment of the age distribution: [ 2 ] [ 3 ] $\bar\tau_I = \int_0^\infty \tau\, I(\tau)\,d\tau$. The mean residence time or mean transit time, that is, the mean age of all the particles leaving the control volume at time t, is the first moment of the residence time distribution: [ 2 ] [ 3 ] $\bar\tau = \int_0^\infty \tau\, E(\tau)\,d\tau$. The mean age and the mean transit time generally have different values, even in stationary conditions. [ 2 ] If the flow is steady and conservative, the mean residence time equals the ratio between the amount of fluid contained in the control volume and the flow rate through it: [ 2 ] $\bar\tau = m/f$, where $m$ is the amount of fluid in the control volume and $f$ is the flow rate through it. This ratio is commonly known as the turnover time or flushing time. [ 4 ] When applied to liquids, it is also known as the hydraulic retention time (HRT), hydraulic residence time or hydraulic detention time. [ 5 ] In the field of chemical engineering this is also known as space time. [ 6 ] The residence time of a specific compound in a mixture equals the turnover time (that of the compound, as well as that of the mixture) only if the compound does not take part in any chemical reaction (otherwise its flow is not conservative) and its concentration is uniform. [ 3 ] Although the equivalence between the residence time and the ratio $m/f$ does not hold if the flow is not stationary or not conservative, it does hold on average if the flow is steady and conservative on average, and not necessarily at any instant. Under such conditions, which are common in queueing theory and supply chain management, the relation is known as Little's Law. Design equations are equations relating the space time to the fractional conversion and other properties of the reactor. Different design equations have been derived for different types of reactor, and depending on the reactor the equation more or less resembles that describing the average residence time. Often design equations are used to minimize the reactor volume or volumetric flow rate required to operate a reactor. [ 7 ] In an ideal plug flow reactor (PFR) the fluid particles leave in the same order they arrived, not mixing with those in front and behind. Therefore, the particles entering at time t will exit at time t + T, all spending a time T inside the reactor. The residence time distribution is then a Dirac delta function delayed by T: $E(t) = \delta(t - T)$. The mean is T and the variance is zero. [ 1 ] The RTD of a real reactor deviates from that of an ideal reactor, depending on the hydrodynamics within the vessel. A non-zero variance indicates that there is some dispersion along the path of the fluid, which may be attributed to turbulence, a non-uniform velocity profile, or diffusion. If the mean of the distribution is earlier than the expected time T, it indicates that there is stagnant fluid within the vessel. If the RTD curve shows more than one main peak, it may indicate channeling, parallel paths to the exit, or strong internal circulation. In PFRs, reactants enter the reactor at one end and react as they move down the reactor. Consequently, the reaction rate depends on the concentrations, which vary along the reactor, requiring the inverse of the reaction rate to be integrated over the fractional conversion.
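Since the mean and variance of an RTD are the diagnostics used above (an early mean suggests stagnant fluid; a non-zero variance indicates dispersion), a small numerical sketch may help. The exponential test RTD anticipates the ideal CSTR discussed below; all values are assumed:

```python
# Illustrative sketch (synthetic data, assumed values): estimating the first two
# moments of a residence time distribution E(t) from sampled values.
import numpy as np

def trapz(y, x):
    """Trapezoidal integration, kept explicit for clarity."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

T = 5.0                               # nominal residence time V/Q, in minutes
t = np.linspace(0.0, 10 * T, 5001)    # time grid
E = np.exp(-t / T) / T                # exponential RTD of an ideal CSTR (see below)

area = trapz(E, t)                    # should be ~1: unitary integral of E
mean = trapz(t * E, t)                # first moment: mean residence time
var = trapz((t - mean) ** 2 * E, t)   # second central moment

print(f"area={area:.4f}  mean={mean:.3f}  dimensionless variance={var / T**2:.3f}")
# Ideal CSTR: mean ~ T and variance/T^2 ~ 1. In measured data, a mean noticeably
# earlier than the expected T would instead point to stagnant fluid.
```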
Batch reactors are reactors in which the reactants are put in the reactor at time 0 and react until the reaction is stopped. Consequently, the space time is the same as the average residence time in a batch reactor. In an ideal continuous stirred-tank reactor (CSTR), the flow at the inlet is completely and instantly mixed into the bulk of the reactor. The reactor and the outlet fluid have identical, homogeneous compositions at all times. The residence time distribution is exponential: $E(t) = \frac{1}{T}\,e^{-t/T}$. The mean is T and the dimensionless variance is 1. [ 1 ] A notable difference from the plug flow reactor is that material introduced into the system will never completely leave it. [ 4 ] In reality, it is impossible to obtain such rapid mixing, as there is necessarily a delay between any molecule passing through the inlet and making its way to the outlet, and hence the RTD of a real reactor will deviate from the ideal exponential decay, especially in the case of large reactors. For example, there will be some finite delay before E reaches its maximum value, and the length of the delay will reflect the rate of mass transfer within the reactor. Just as was noted for a plug-flow reactor, an early mean will indicate some stagnant fluid within the vessel, while the presence of multiple peaks could indicate channeling, parallel paths to the exit, or strong internal circulation. Short-circuiting fluid within the reactor would appear in an RTD curve as a small pulse of concentrated tracer that reaches the outlet shortly after injection. Reactants continuously enter and leave a tank where they are mixed. Consequently, the reaction proceeds at a rate dependent on the outlet concentration: $\tau = \frac{C_{A0}\,X_A}{-r_A}$. In a laminar flow reactor, the fluid flows through a long tube or parallel plate reactor and the flow is in layers parallel to the walls of the tube. The velocity of the flow is a parabolic function of radius. In the absence of molecular diffusion, the RTD is [ 8 ] $E(t) = \frac{T^2}{2t^3}$ for $t \geq T/2$ (and zero for $t < T/2$). The variance is infinite. In a real reactor, diffusion will eventually mix the layers so that the tail of the RTD becomes exponential and the variance finite; but laminar flow reactors can have a variance greater than 1, the maximum for CSTR reactors. [ 1 ] Recycle reactors are PFRs with a recycle loop. Consequently, they behave like a hybrid between PFRs and CSTRs. In all of these equations, $-r_A$ is the consumption rate of A, a reactant. This is equal to the rate expression in which A is involved. The rate expression is often related to the fractional conversion both through the consumption of A and through any changes in the rate constant k arising from conversion-dependent temperature changes. [ 7 ] In some reactions the reactants and the products have significantly different densities. Consequently, as the reaction proceeds, the volume of the reaction mixture changes. This variable volume adds terms to the design equations. Taking this volume change into consideration, the volume of the reaction mixture becomes $V = V_0\,(1 + \varepsilon_A X_A)$, where $V_0$ is the initial volume and $\varepsilon_A$ is the fractional change in volume at complete conversion; plugging this into the design equations modifies them accordingly. Generally, when reactions take place in the liquid and solid phases, the change in volume due to reaction is not significant enough that it needs to be taken into account. Reactions in the gas phase often have significant changes in volume, and in these cases one should use these modified equations. [ 7 ] Residence time distributions are measured by introducing a non-reactive tracer into the system at the inlet.
Its input concentration is changed according to a known function and the output concentration measured. The tracer should not modify the physical characteristics of the fluid (equal density, equal viscosity) or the hydrodynamic conditions, and it should be easily detectable. [ 9 ] In general, the change in tracer concentration will be either a pulse or a step. Other functions are possible, but they require more calculations to deconvolute the RTD curve. The pulse method requires the introduction of a very small volume of concentrated tracer at the inlet of the reactor, such that it approaches the Dirac delta function. [ 10 ] [ 8 ] Although an infinitely short injection cannot be produced, it can be made much shorter than the mean residence time of the vessel. If a mass of tracer, $M$, is introduced into a vessel of volume $V$ with an expected residence time of $\tau$, the resulting curve of $C(t)$ can be transformed into a dimensionless residence time distribution curve by the following relation: $E(t) = \frac{V\,C(t)}{\tau\,M}$. The concentration of tracer in a step experiment at the reactor inlet changes abruptly from 0 to $C_0$. The concentration of tracer at the outlet is measured and normalized to the concentration $C_0$ to obtain the non-dimensional curve $F(t)$, which goes from 0 to 1: $F(t) = C(t)/C_0$. The step and pulse responses of a reactor are related by the following: $F(t) = \int_0^t E(t')\,dt'$ and $E(t) = \frac{dF}{dt}$. A step experiment is often easier to perform than a pulse experiment, but it tends to smooth over some of the details that a pulse response could show. It is easy to numerically integrate an experimental pulse response to obtain a very high-quality estimate of the step response, but the reverse is not the case, because any noise in the concentration measurement will be amplified by numeric differentiation. In chemical reactors, the goal is to make components react with a high yield. In a homogeneous, first-order reaction, the probability that an atom or molecule will react depends only on its residence time: $P_R(\tau) = e^{-k\tau}$, for a rate constant $k$. Given an RTD, the average probability is equal to the ratio of the concentration $a$ of the component before and after: [ 1 ] $\frac{a_{\rm out}}{a_{\rm in}} = \int_0^\infty e^{-k\tau}\,E(\tau)\,d\tau$. If the reaction is more complicated, then the output is not uniquely determined by the RTD. It also depends on the degree of micromixing, the mixing between molecules that entered at different times. If there is no mixing, the system is said to be completely segregated, and the output can be given in the form $a_{\rm out} = \int_0^\infty a_{\rm batch}(\tau)\,E(\tau)\,d\tau$, where $a_{\rm batch}(\tau)$ is the concentration after a batch reaction time $\tau$. For a given RTD, there is an upper limit on the amount of mixing that can occur, called the maximum mixedness, and this determines the achievable yield. A continuous stirred-tank reactor can be anywhere in the spectrum between completely segregated and perfectly mixed. [ 1 ] The RTD of chemical reactors can be obtained by CFD simulations. The very same procedure that is performed in experiments can be followed. A pulse of inert tracer particles (during a very short time) is injected into the reactor. The linear motion of tracer particles is governed by Newton's second law of motion, and a one-way coupling is established between fluid and tracers. In one-way coupling, the fluid affects tracer motion through drag force while the tracers do not affect the fluid. The size and density of tracers are chosen to be so small that the time constant of the tracers becomes very small. In this way, tracer particles follow exactly the same path as the fluid does. [ 11 ]
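A small numerical sketch of the pulse method (the synthetic tracer curve and units below are assumed, not from the source): normalizing the measured $C(t)$ to unit area yields $E(t)$, and integrating $E$ recovers the step response $F$:

```python
# Hedged sketch (synthetic tracer data): normalizing a pulse response C(t) into
# the RTD E(t), then recovering the step response F(t) by integration.
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

t = np.linspace(0.0, 60.0, 601)     # sampling times, s
# Assumed outlet tracer curve after a near-instantaneous pulse at the inlet:
C = np.where(t > 2.0, (t - 2.0) * np.exp(-(t - 2.0) / 8.0), 0.0)

E = C / trapz(C, t)                 # E(t) = C(t)/∫C dt, equivalent to V·C/(τ·M)
tau_mean = trapz(t * E, t)          # mean residence time from the first moment

# Step response F(t) = ∫₀ᵗ E(t') dt', as a running trapezoidal integral:
F = np.concatenate(([0.0], np.cumsum((E[1:] + E[:-1]) * np.diff(t) / 2.0)))
print(f"mean residence time ≈ {tau_mean:.1f} s, F(t_end) ≈ {F[-1]:.3f}")
```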
Hydraulic residence time (HRT) is an important factor in the transport of environmental toxins or other chemicals through groundwater. The amount of time that a pollutant spends traveling through a delineated subsurface space is related to the saturation and the hydraulic conductivity of the soil or rock. [ 12 ] Porosity is another significant contributing factor to the mobility of water through the ground (e.g. toward the water table). The intersection between pore density and size determines the degree or magnitude of the flow rate through the media. This idea can be illustrated by a comparison of the ways water moves through clay versus gravel. The retention time through a specified vertical distance in clay will be longer than through the same distance in gravel, even though they are both characterized as high-porosity materials. This is because the pore sizes are much larger in gravel media than in clay, and so there is less hydrostatic tension working against the subsurface pressure gradient and gravity. Groundwater flow is an important parameter for consideration in the design of waste rock basins for mining operations. Waste rock is a heterogeneous material with particles varying from boulders to clay-sized particles, and it contains sulfidic pollutants which must be controlled such that they do not compromise the quality of the water table and such that the runoff does not create environmental problems in the surrounding areas. [ 12 ] Aquitards are clay zones that can have such a degree of impermeability that they partially or completely retard water flow. [ 5 ] [ 13 ] These clay lenses can slow or stop seepage into the water table, although if an aquitard is fractured and contaminated then it can become a long-term source of groundwater contamination due to its low permeability and high HRT. [ 13 ] Primary treatment for wastewater or drinking water includes settling in a sedimentation chamber to remove as much of the solid matter as possible before applying additional treatments. [ 5 ] The amount removed is controlled by the hydraulic residence time (HRT). [ 5 ] When water flows through a volume at a slower rate, less energy is available to keep solid particles entrained in the stream and there is more time for them to settle to the bottom. Typical HRTs for sedimentation basins are around two hours, [ 5 ] although some groups recommend longer times to remove micropollutants such as pharmaceuticals and hormones. [ 14 ] Disinfection is the last step in the tertiary treatment of wastewater or drinking water. The types of pathogens that occur in untreated water include those that are easily killed, like bacteria and viruses, and those that are more robust, such as protozoa and cysts. [ 5 ] The disinfection chamber must have a long enough HRT to kill or deactivate all of them. Atoms and molecules of gas or liquid can be trapped on a solid surface in a process called adsorption. This is an exothermic process involving a release of heat, and heating the surface increases the probability that an atom will escape within a given time. At a given temperature $T$, the residence time of an adsorbed atom is given by $\tau = \tau_0\, e^{E_a/(RT)}$, where $R$ is the gas constant, $E_a$ is an activation energy, and $\tau_0$ is a prefactor that is correlated with the vibration times of the surface atoms (generally of the order of $10^{-12}$ seconds). [ 15 ] : 27 [ 16 ] : 196
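Plugging assumed numbers into that expression shows why heating shortens the surface residence time dramatically (all values below are illustrative assumptions):

```python
# Illustrative sketch: residence time of an adsorbed atom, τ = τ0·exp(Ea/(R·T)),
# showing why "baking out" a vacuum chamber releases adsorbed gas faster.
import math

R = 8.314            # gas constant, J/(mol·K)
tau0 = 1e-12         # vibrational prefactor, s (typical order of magnitude)
Ea = 80e3            # assumed activation energy for desorption, J/mol

for T in (300.0, 500.0):             # room temperature vs. a modest bake-out
    tau = tau0 * math.exp(Ea / (R * T))
    print(f"T = {T:.0f} K -> residence time ≈ {tau:.2e} s")
# Heating from 300 K to 500 K cuts the residence time by several orders of
# magnitude, which is the basis of bake-out in vacuum technology (see below).
```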
In vacuum technology , the residence time of gases on the surfaces of a vacuum chamber can determine the pressure due to outgassing . If the chamber can be heated, the above equation shows that the gases can be "baked out"; but if not, then surfaces with a low residence time are needed to achieve ultra-high vacuums . [ 16 ] : 195 In environmental terms, the residence time definition is adapted to fit with ground water, the atmosphere, glaciers , lakes, streams, and oceans. More specifically, it is the time during which water remains within an aquifer, lake, river, or other water body before continuing around the hydrological cycle . The time involved may vary from days for shallow gravel aquifers to millions of years for deep aquifers with very low values of hydraulic conductivity . Residence times of water in rivers are a few days, while in large lakes residence time ranges up to several decades. Residence times of continental ice sheets are hundreds of thousands of years; of small glaciers, a few decades. Ground water residence time applications are useful for determining the amount of time it will take for a pollutant to reach and contaminate a ground water drinking water source, and at what concentration it will arrive. The same reasoning can work in the opposite direction, to determine how long until a ground water source becomes uncontaminated via inflow, outflow, and volume. The residence time of lakes and streams is important as well for determining the concentration of pollutants in a lake and how this may affect the local population and marine life. Hydrology, the study of water, discusses the water budget in terms of residence time; the amount of time that water spends in each stage (glacier, atmosphere, ocean, lake, stream, river) is used to show the relation of all of the water on the earth and how it relates in its different forms. A large class of drugs are enzyme inhibitors that bind to enzymes in the body and inhibit their activity. In this case it is the drug-target residence time (the length of time the drug stays bound to the target) that is of interest. The residence time is defined as the reciprocal of the k_off rate constant (residence time = 1/k_off). Drugs with long residence times are desirable because they remain effective for longer and therefore can be used in lower doses. [ 17 ] : 88 This residence time is determined by the kinetics of the interaction, [ 18 ] such as how complementary the shape and charges of the target and drug are and whether outside solvent molecules are kept out of the binding site (thereby preventing them from breaking any bonds formed), [ 19 ] and is proportional to the half-life of the chemical dissociation . [ 18 ] One way to measure the residence time is in a preincubation-dilution experiment where a target enzyme is incubated with the inhibitor, allowed to approach equilibrium, then rapidly diluted. The amount of product is measured and compared to a control in which no inhibitor is added. [ 17 ] : 87–88 Residence time can also refer to the amount of time that a drug spends in the part of the body where it needs to be absorbed. The longer the residence time, the more of it can be absorbed. If the drug is delivered in an oral form and destined for the upper intestines , it usually moves with food and its residence time is roughly that of the food. This generally allows 3 to 8 hours for absorption. [ 20 ] : 196
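Since both the residence time and the dissociation half-life follow directly from k_off, the relationship is easy to tabulate. A minimal sketch, with off-rate values assumed purely for illustration:

```python
import numpy as np

def target_residence_time(k_off):
    """Drug-target residence time, tau = 1 / k_off (k_off in 1/s)."""
    return 1.0 / k_off

def dissociation_half_life(k_off):
    """Half-life of the drug-target complex, t_1/2 = ln(2) / k_off,
    which is why residence time is proportional to the half-life."""
    return np.log(2) / k_off

# Illustrative (assumed) off-rates for a fast- and a slow-dissociating inhibitor.
for k_off in (1e-1, 1e-4):
    print(f"k_off = {k_off:.0e} 1/s -> tau = {target_residence_time(k_off):8.0f} s, "
          f"t_1/2 = {dissociation_half_life(k_off):8.0f} s")
```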
If the drug is delivered through a mucous membrane in the mouth, the residence time is short because saliva washes it away. Strategies to increase this residence time include bioadhesive polymers , gums, lozenges and dry powders. [ 20 ] : 274 In size-exclusion chromatography , the residence time of a molecule is related to its volume, which is roughly proportional to its molecular weight. Residence times also affect the performance of continuous fermentors . [ 1 ] Biofuel cells utilize the metabolic processes of anodophiles (electronegative bacteria) to convert chemical energy from organic matter into electricity. [ 21 ] [ 22 ] [ 23 ] A biofuel cell mechanism consists of an anode and a cathode that are separated by an internal proton exchange membrane (PEM) and connected in an external circuit with an external load. Anodophiles grow on the anode and consume biodegradable organic molecules to produce electrons, protons, and carbon dioxide gas, and as the electrons travel through the circuit they feed the external load. [ 22 ] [ 23 ] The HRT for this application reflects the rate at which the feed molecules are passed through the anodic chamber. [ 23 ] It can be quantified by dividing the volume of the anodic chamber by the rate at which the feed solution is passed into the chamber. [ 22 ] The hydraulic residence time (HRT) affects the substrate loading rate of the microorganisms that the anodophiles consume, which affects the electrical output. [ 23 ] [ 24 ] Longer HRTs reduce substrate loading in the anodic chamber, which can lead to a reduced anodophile population and reduced performance when there is a deficiency of nutrients. [ 23 ] Shorter HRTs support the development of non-exoelectrogenous bacteria, which can reduce the Coulombic efficiency and electrochemical performance of the fuel cell if the anodophiles must compete for resources or if they do not have ample time to effectively degrade nutrients. [ 23 ]
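In other words, HRT is simply the chamber volume divided by the volumetric feed rate. A minimal sketch with assumed numbers (not from the source):

```python
def hydraulic_residence_time(volume_m3, flow_m3_per_h):
    """HRT = chamber volume / volumetric feed rate (here in hours)."""
    return volume_m3 / flow_m3_per_h

# Illustrative (assumed) numbers: a 0.5 L anodic chamber fed at 0.1 L/h
# gives an HRT of 5 h; doubling the feed rate halves the HRT.
volume = 0.5e-3   # m^3
for flow in (0.1e-3, 0.2e-3):  # m^3/h
    print(f"flow = {flow * 1e3:.1f} L/h -> HRT = "
          f"{hydraulic_residence_time(volume, flow):.1f} h")
```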
https://en.wikipedia.org/wiki/Residence_time
In computing , a resident monitor is a type of system software program that was used in many early computers from the 1950s to 1970s. It can be considered a precursor to the operating system . [ 1 ] The name is derived from a program which is always present in the computer's memory, thus being resident . [ 2 ] Because memory was very limited on those systems, the resident monitor was often little more than a stub that would gain control at the end of a job and load a non-resident portion to perform required job cleanup and setup tasks. On a general-use computer using punched card input, the resident monitor governed the machine before and after each job control card was executed, loaded and interpreted each control card, and acted as a job sequencer for batch processing operations. [ 3 ] The resident monitor could clear memory from the last used program (with the exception of itself), load programs, search for program data and maintain standard input-output routines in memory. [ 2 ] Similar system software layers were typically in use in the early days of the later minicomputers and microcomputers before they gained the power to support full operating systems. [ 2 ] Resident monitor functionality is present in many embedded systems, boot loaders, and various embedded command lines. The original functions present in all resident monitors are augmented with present-day functions dealing with boot time hardware, disks, Ethernet, wireless controllers, etc. Typically, these functions are accessed using a serial terminal or a physical keyboard and display, if attached. Such a resident monitor is frequently called a debugger, boot loader, command-line interface (CLI), etc. The term in its original meaning of a serial- or terminal-accessed resident monitor is not frequently used, although the functionality has remained the same and has been augmented. Typical functions of a resident monitor include examining and editing RAM and/or ROM (including flash EEPROM) and sometimes special function registers, the ability to jump into code at a specified address, the ability to call code at a given address, the ability to fill an address range with a constant such as 0x00, and several others. More advanced functions include local disassembly to processor assembly language instructions, and even assembly and writing into flash memory from code typed by the operator. Also, code can be downloaded and uploaded from various sources, and some advanced monitors support a range of network protocols to do so, as well as formatting and reading FAT and other filesystems, typically from flash memory on USB or CF card buses. For embedded processors, many in-circuit debuggers with software-only mode use resident monitor concepts and functions that are frequently accessed by a GUI IDE. They are not different from the traditional serial-line-accessed resident monitor command lines, but users are not aware of this. Sooner or later, developers and advanced users will discover these low-level embedded resident monitor functions when writing low-level API code on a host to communicate with an embedded target for debugging and code test case running. Several current microcontrollers have resident serial monitors or extended boot loaders available as options to be used by developers. Many are open source. Some examples are PAULMON2, [ 4 ] AVR DebugMonitor [ 5 ] and the Bamo128 Arduino boot loader and monitor. [ 6 ]
In general, most current resident monitors for embedded computing can be compiled according to various memory constraints, from small and minimalistic to large, filling up to 25% of the code space available on an AVR ATmega328 processor with 32 kilobytes of flash memory, for example. In many cases resident monitors can be a step up from printf debugging and are very helpful when developing on a budget that does not allow a proper hardware in-circuit debugger (ICD) to be used.
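The command set described above is easy to picture as a small read-eval loop over memory. The sketch below is a toy in Python, purely for illustration: real monitors are written in assembly or C and operate on physical memory, while here a bytearray stands in for RAM, and the command names and hexdump format are my own assumptions.

```python
# Toy resident-monitor-style command set over a simulated 256-byte RAM.
ram = bytearray(256)

def dump(start, length):
    """Examine memory: print a classic hex dump, 16 bytes per row."""
    for row in range(start, start + length, 16):
        chunk = ram[row:row + 16]
        print(f"{row:04X}: " + " ".join(f"{b:02X}" for b in chunk))

def fill(start, length, value):
    """Fill an address range with a constant, e.g. 0x00."""
    ram[start:start + length] = bytes([value]) * length

def poke(addr, value):
    """Edit a single memory location."""
    ram[addr] = value

# Example session, mimicking typical monitor commands:
fill(0x00, 0x20, 0xFF)   # F 0000 0020 FF
poke(0x10, 0xA5)         # P 0010 A5
dump(0x00, 0x20)         # D 0000 0020
```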
https://en.wikipedia.org/wiki/Resident_monitor
A resident space object ( RSO ) is a natural or artificial object that orbits another celestial body. For example, it may orbit the Sun, Earth, or Mars. The term RSO is most often applied to Earth-orbiting objects, in which case the possible orbit classifications for an object are low Earth orbit (LEO), medium Earth orbit (MEO), high Earth orbit (HEO) or geosynchronous Earth orbit (GEO). [ 1 ] RSO acquisition, tracking, and data collection can be extremely challenging. [ 2 ] The primary method for gathering this information is to make a direct observation of an RSO via space surveillance sensors. However, the system is not foolproof, and RSOs can become lost to the tracking system. Additionally, not all new objects are acquired in a timely fashion, which means that these new objects, in addition to the lost RSOs, result in uncorrelated detections when they are finally observed. Since space missions have been increasing over the years, the number of uncorrelated targets is at an all-time high. [ 3 ] A number of international agencies endeavor to maintain catalogs of the man-made RSOs currently orbiting Earth. [ 3 ] One example is the two-line element set public catalog. [ 4 ]
https://en.wikipedia.org/wiki/Resident_space_object
Series of residential buildings are residential structures built according to a standardized group of typical designs, which within a given series may vary in the number of floors, number of sections, orientation, and minor architectural finishing details. As a rule, a residential building series features a limited range of apartment layouts, a unified architectural style, and a consistent construction technology. The use of standardized designs is aimed at industrializing construction, allowing for the lowest possible cost per square meter of housing while ensuring high construction speed. However, this often results in architectural uniformity and a lack of diversity in residential neighborhoods . [ 1 ] Such buildings were most extensively constructed during urbanization periods in many countries, shaping the architectural appearance of residential districts in numerous cities. Series-based apartment building design saw its greatest development in the USSR during the era of mass post-war housing construction, was widely adopted in socialist and developing countries , and continues to be used today. [ 2 ] [ 1 ] Based on the materials used for load-bearing and exterior enclosing structures, series-built houses can be classified as reinforced concrete, cinder block, or brick. In standardized construction of individual houses, wood and various wood-based panels were also used. Reinforced concrete structures, depending on construction technology, can be block-based, panel-based , monolithic, or precast-monolithic. [ 1 ] Standardized housing design has evolved from typical projects that historically developed in different countries and among various peoples, optimally accounting for traditions, lifestyles, climate conditions, availability of building materials, family wealth, and other factors. In France, for the " Autumn Salon " exhibition of 1922, Le Corbusier and Pierre Jeanneret presented the project " A Contemporary City for 3 Million Inhabitants ," which proposed a new vision of the city of the future. This project was later transformed into the " Plan Voisin " (1925), an advanced proposal for the radical redevelopment of Paris . The Plan Voisin envisioned the construction of a new business center for Paris on a completely cleared site, requiring the demolition of 240 hectares of old buildings. According to the plan, eighteen identical 50-story office skyscrapers were to be freely spaced at sufficient distances from each other. Despite the scale of the project, only 5% of the land was to be built upon, with the remaining 95% allocated for roads, parks, and pedestrian areas. The Plan Voisin was widely discussed in the French press and became something of a sensation. In 1924, at the request of industrialist Henry Frugès, Le Corbusier designed and built the " Quartiers Modernes Frugès " in Pessac , near Bordeaux . This residential complex, consisting of 50 two- and three-story houses, was one of the first attempts at mass-produced housing construction in France . The project featured four types of buildings differing in configuration and layout, including ribbon houses, row houses , and freestanding homes. With this project, Le Corbusier sought to create a formula for a modern, affordable home—characterized by simple forms, ease of construction, and a contemporary level of comfort. At the 1925 International Exhibition of Modern Decorative and Industrial Arts in Paris, Le Corbusier designed the " Esprit Nouveau " pavilion.
The pavilion included a full-scale residential unit of a multi-story apartment building—an experimental two-level apartment. [ 3 ] Le Corbusier later used a similar unit in the late 1940s for his Unité d'Habitation in Marseille . [ 4 ] [ 5 ] Also called the Marseille Block, the Unité d'Habitation (1947–1952) is a large multi-unit residential building situated on a spacious green plot. For this project, Le Corbusier used standardized duplex apartments with balconies facing both sides of the building. Inside, at its mid-height, was a communal service complex: a cafeteria, library, post office, grocery stores, and more. The balconies' enclosing walls were painted in bright primary colors— polychromy —on an unprecedented scale. Similar Unités d'Habitation (with some modifications) were later built in cities such as Nantes-Rezé (1955), Meaux (1960), Briey-en-Forêt (1961), Firminy (1968) in France, and in West Berlin (1957). These structures embodied Le Corbusier's concept of the " Radiant City "—a city designed for human well-being. [ 6 ] In 1950, at the invitation of the Indian authorities in the state of Punjab , Le Corbusier began the most ambitious project of his career—the design of the new state capital, Chandigarh . [ 7 ] As in the Marseille Block, the exterior finishing utilized a special technique for treating concrete surfaces known as "béton brut" (French for "raw concrete"). This technique, which became a hallmark of Le Corbusier's style, was later adopted by many architects across Europe and beyond, leading to the emergence of the architectural movement known as " Brutalism ." Brutalism became particularly widespread in the United Kingdom (especially in the 1960s) and the USSR (especially in the 1980s). By the early 1980s, Western Europe was swept by a wave of protests against this type of architecture. Over time, Brutalism came to be seen as embodying the worst aspects of modern architecture—alienation from human needs, soullessness, claustrophobia , etc.—and its popularity declined. The planned city of Brasília , the capital of Brazil , was built as a realization of Le Corbusier's vision and includes some of the most famous examples of standardized residential buildings designed by him in the 1920s–1940s. [ 8 ] [ 9 ] [ 10 ] Pre-war period: After the 1917 revolutions in Russia, a housing redistribution began. Rich apartments, which had fewer residents than rooms, were requisitioned and redistributed to the poor, as industrialization progressed and people moved from rural areas to cities. In Moscow, the number of working-class families within the Garden Ring increased dramatically between 1917 and 1920. Due to the housing stock being inadequate for the new social conditions, communal apartments became common. [ 11 ] To address the housing shortage, various new housing types were proposed, including communal houses, though they were unsuccessful. Standard designs for two-story block houses and manor-type houses were developed. From 1924, sectional construction was revived, and in 1925, Moscow saw its first standard residential section for multi-story buildings. However, housing policies were inconsistent, and many new apartments were either inconvenient or too large, resulting in communal living. During the first five-year plans , the population grew rapidly, and there was a need for mass housing construction. New design organizations emerged, and prefabricated housing technologies were developed. In 1936, the government issued a resolution to streamline construction, leading to standardized buildings.
By 1939-1940, national projects for low-rise buildings were created, with a focus on multi-apartment sections. Construction volumes increased significantly during this period. By 1940, all housing construction was focused on standard designs for industrial construction. The focus shifted from individual houses to large residential blocks, districts, and villages with supporting infrastructure. However, multi-story buildings began to be replaced by low-rise buildings due to a new government resolution promoting the use of local materials for construction. Post-war period: During the Great Patriotic War , there was a sharp increase in the scale and volume of standardized housing design and construction, as housing was needed to accommodate evacuated industrial enterprises in the east. It was during this time that architectural design studios developed simple, cost-effective housing projects with minimal use of scarce materials for Siberia, the Far East, and Central Asia in a short period of time. [ 12 ] In the USSR, the forerunners of future mass construction based on industrial blocks and panels were the cinder-block " Stalinkas ". The architecture of these buildings is utilitarian: there is no ornamentation, unplastered silicate brick is used for the external walls, and the facades are almost flat, with standard stucco decoration. The first four-story frame-panel house in the USSR was built in 1948 in Moscow at 43 Budyonny Avenue (architects G. Kuznetsov, B. Smirnov). [ 13 ] At that time, the country's leadership set the task for builders to create the cheapest possible design for a residential building that could house individual families. [ 14 ] [ 15 ] The first stage of fulfilling this task was the implementation of the idea of industrial panel house construction with a load-bearing frame. In 1948-1951, Mikhail Posokhin , Ashot Mndoyants and Vitaly Lagutenko built up a quarter in Moscow (Kuusinen and Zorge Streets) with 10-story frame-panel houses. At the same time, a project for a frameless panel house was developed (such houses were built from 1950 in Magnitogorsk ). In 1954, a 7-story frameless panel house was built in Moscow on the 6th Street of October Field (G. Kuznetsov, B. Smirnov, L. Wrangel, Z. Nesterova, N. A. Osterman). Khrushchevkas , which had been designed since the late 1940s, went into production after the 1955 State Committee for Construction decree "On elimination of excesses in design and construction": "the outwardly ostentatious side of architecture, replete with great excesses," characteristic of the Stalinist period , now "does not correspond to the line of the Party and the Leadership in architectural and construction matters. … Soviet architecture should be characterized by simplicity, rigor of forms, and economy of solutions". The ideological and scientific justification for the new course was reduced to a few key points. The turning point was the resolutions "On measures for further industrialization, improving the quality and reducing the cost of construction" of 1956 and "On the development of housing construction in the USSR " of 1957. The party's task for builders was to develop projects by the fall of 1956 that would drastically reduce the cost of housing construction and make it accessible to workers. This is how the famous "Khrushchevkas" appeared. The goal of the project was that in 1980 every Soviet family would meet communism in a separate apartment. However, in the mid-1980s, only 85% of families had separate apartments.
In 1986, Mikhail Gorbachev postponed the deadline by 15 years, putting forward the slogan "Every Soviet family - a separate apartment by the year 2000." [ 17 ] In 1959, the 21st Congress noted the existence of a housing problem and called the development of housing construction "one of the most important tasks." It was planned that in 1959-1965, 2.3 times more apartments would be commissioned than in the previous seven-year period. Moreover, the emphasis was placed on individual, not communal, apartments. [ 18 ] The prototype for the first "Khrushchevka" was the German Plattenbau block buildings, which were built in Berlin and Dresden from the 1920s onwards. The construction of "Khrushchevkas" continued from 1959 to 1985. In 1956-1965, more than 13 thousand residential buildings were built in the USSR, and almost all of them were five-story buildings. This made it possible to commission 110 million square meters of housing each year. An appropriate production base and infrastructure were created: house-building plants, reinforced concrete plants, etc. The first house-building plants were created in 1959 in the Glavleningradbuda system, and in 1962 they were organized in Moscow and other cities. In particular, during the period 1966-1970, 942 thousand people in Leningrad received living space, with 809 thousand moving into new houses and 133 thousand receiving space in old houses. Construction of 9-story panel residential houses began in 1960, and of 12-story ones in 1963. The Soviet model of prefabricated panel buildings influenced housing projects in other socialist and developing countries. For instance, in East Germany, similar structures known as " Plattenbau " were constructed, " Panelház " in Hungary, " Panelák " in the Czech Republic, while in Poland, they were referred to as " Wielka Płyta ." [ 19 ]
https://en.wikipedia.org/wiki/Residential_building_series
Residual-resistivity ratio (also known as residual-resistance ratio or just RRR ) is usually defined as the ratio of the resistivity of a material at room temperature and at 0 K . Of course, 0 K can never be reached in practice, so some estimation is usually made. Since the RRR can vary quite strongly for a single material depending on the amount of impurities and other crystallographic defects , it serves as a rough index of the purity and overall quality of a sample. Since resistivity usually increases as defect prevalence increases, a large RRR is associated with a pure sample. RRR is also important for characterizing certain unusual low temperature states such as the Kondo effect and superconductivity . Note that since it is a unitless ratio there is no difference between a residual-resistivity ratio and a residual-resistance ratio. Usually at "warm" temperatures the resistivity of a metal varies linearly with temperature; that is, a plot of the resistivity as a function of temperature is a straight line. If this straight line were extrapolated all the way down to absolute zero, a theoretical RRR could be calculated as RRR = ρ(300 K)/ρ(0 K). In the simplest case of a good metal that is free of scattering mechanisms one would expect ρ(0 K) = 0, which would cause RRR to diverge. However, usually this is not the case, because defects such as grain boundaries , impurities, etc. act as scattering sources that contribute a temperature-independent ρ 0 value. This shifts the intercept of the curve to a higher number, giving a smaller RRR. In practice the resistivity of a given sample is measured down to as cold as possible, which on typical laboratory instruments is in the range of 2 K, though much lower is possible. By this point the linear resistive behavior is usually no longer applicable, and the low-temperature ρ is taken as a good approximation of the 0 K value.
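A short sketch of this extrapolation procedure on synthetic data follows; the resistivity values and the Matthiessen-type model behind them are assumptions for illustration, not measurements from the source:

```python
import numpy as np

# Synthetic resistivity data (arbitrary units) following Matthiessen's rule:
# rho(T) = rho_0 + a*T at "warm" temperatures, where the residual term rho_0
# comes from temperature-independent defect scattering.
rho_0, a = 0.05, 0.01           # assumed, illustrative values
T_warm = np.linspace(150.0, 300.0, 30)
rho_warm = rho_0 + a * T_warm

# Fit the linear high-temperature region and extrapolate to absolute zero.
slope, intercept = np.polyfit(T_warm, rho_warm, 1)

rho_300 = slope * 300.0 + intercept
rrr = rho_300 / intercept       # RRR = rho(300 K) / rho(0 K)
print(f"extrapolated rho(0 K) = {intercept:.3f}, RRR = {rrr:.1f}")
```

A purer sample would have a smaller rho_0 and hence a larger RRR, which is exactly why the ratio serves as a purity index.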
https://en.wikipedia.org/wiki/Residual-resistance_ratio
The residual bit error rate ( RBER ) is a receive quality metric in digital transmission , one of several used to quantify the accuracy of the received data. [ 1 ] In digital transmission schemes, including cellular telephony systems such as GSM , a certain percentage of received data will be detected as containing errors, and will be discarded. The likelihood that a particular bit will be detected as erroneous is the bit error rate . The RBER characterizes the likelihood that a given bit will be erroneous but will not be detected as such. [ 2 ] When digital communication systems are being designed, the maximum acceptable residual bit error rate can be used, along with other quality metrics, to calculate the minimum acceptable signal-to-noise ratio in the system. This in turn provides minimum requirements for the physical and electronic design of the transmitter and receiver. [ 3 ]
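A toy way to see the distinction between BER and RBER is to simulate a detection scheme and count the erroneous bits that slip past it. The sketch below assumes, purely for illustration, frames protected by a single even-parity bit sent over a binary symmetric channel; real systems such as GSM use far stronger error-detecting codes, so the numbers here are not representative of any real standard:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: frames of k data bits plus one even-parity bit, sent
# over a binary symmetric channel with raw bit error probability p. A frame
# with an even number of flips passes the parity check, so its erroneous
# data bits go undetected and count toward the RBER.
k, p, n_frames = 32, 1e-3, 200_000
flips = rng.random((n_frames, k + 1)) < p       # errors incl. parity bit
detected = flips.sum(axis=1) % 2 == 1           # odd flip count -> check fails

undetected_bad_bits = flips[~detected, :k].sum()
total_data_bits = n_frames * k
print(f"raw BER = {p:.1e}")
print(f"RBER    = {undetected_bad_bits / total_data_bits:.2e}")  # << raw BER
```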
https://en.wikipedia.org/wiki/Residual_bit_error_rate
In lysosomal digestion, residual bodies are vesicles containing indigestible materials. [ 1 ] Residual bodies are either secreted by the cell via exocytosis (this generally only occurs in macrophages ), or they become lipofuscin granules that remain in the cytosol indefinitely. Longer-living cells like neurons and muscle cells usually have a higher concentration of lipofuscin than other more rapidly proliferating cells. Karp, Gerald (2005). Cell and Molecular Biology: Concepts and Experiments . Hoboken, NJ: John Wiley & Sons. pp. 311–313 . ISBN 0-471-46580-1 .
https://en.wikipedia.org/wiki/Residual_body
Residual chemical shift anisotropy ( RCSA ) is the difference between the chemical shift anisotropy (CSA) of aligned and non-aligned molecules. It is normally three orders of magnitude smaller than the static CSA, with values on the order of parts-per-billion (ppb). RCSA is useful for structural determination and it is among the new developments in NMR spectroscopy . [ citation needed ]
https://en.wikipedia.org/wiki/Residual_chemical_shift_anisotropy
The residual dipolar coupling between two spins in a molecule occurs if the molecules in solution exhibit a partial alignment leading to an incomplete averaging of spatially anisotropic dipolar couplings . [ 1 ] Partial molecular alignment leads to an incomplete averaging of anisotropic magnetic interactions such as the magnetic dipole-dipole interaction (also called dipolar coupling), the chemical shift anisotropy, or the electric quadrupole interaction. The resulting so-called residual anisotropic magnetic interactions are useful in biomolecular NMR spectroscopy . [ 2 ] NMR spectroscopy in partially oriented media was reported by Alfred Saupe . [ 3 ] [ 4 ] After this initiation, several NMR spectra in various liquid crystalline phases were reported (see e.g. [ 5 ] [ 6 ] [ 7 ] [ 8 ] ). A second technique for partial alignment that is not limited by a minimum anisotropy is strain-induced alignment in a gel (SAG). [ 9 ] The technique was extensively used to study the properties of polymer gels by means of high-resolution deuterium NMR, [ 10 ] but only more recently has gel alignment been used to induce RDCs in molecules dissolved in the gel. [ 11 ] [ 12 ] SAG allows the unrestricted scaling of alignment over a wide range and can be used for aqueous as well as organic solvents, depending on the polymer used. As a first example in organic solvents, RDC measurements in stretched polystyrene (PS) gels swollen in CDCl 3 were reported as a promising alignment method. [ 13 ] In 1995, NMR spectra were reported for cyanometmyoglobin, which has a very highly anisotropic paramagnetic susceptibility. When taken at very high field, these spectra may contain data that can usefully complement NOEs in determining a tertiary fold. [ 14 ] In 1996 and 1997, the RDCs of the diamagnetic protein ubiquitin were reported. The results were in good agreement with the crystal structures. [ 15 ] [ 16 ] The secular dipolar coupling Hamiltonian of two spins, $I$ and $S$, is proportional to $(3\cos^2\theta_{IS}-1)/(2\,r_{IS}^3)$, where $r_{IS}$ is the distance between the two nuclei and $\theta_{IS}$ is the angle between the internuclear vector and the external magnetic field; collecting the physical constants and these geometric factors into a single coupling constant $D_{IS}$, the Hamiltonian can be written in the form $H_D = D_{IS}\,I_z S_z$. In isotropic solution molecular tumbling reduces the average value of $D_{IS}$ to zero. We thus observe no dipolar coupling. If the solution is not isotropic, then the average value of $D_{IS}$ may be different from zero, and one may observe residual couplings. RDC can be positive or negative, depending on the range of angles that are sampled. [ 17 ] In addition to static distance and angular information, RDCs may contain information about a molecule's internal motion. To each atom in a molecule one can associate a motion tensor B, which may be computed from RDCs given the molecular alignment tensor A. [ 18 ] The rows of B contain the motion tensors for each atom. The motion tensors have five degrees of freedom , and from each motion tensor, 5 parameters of interest can be computed. The variables $S_i^2$, $\eta_i$, $\alpha_i$, $\beta_i$ and $\gamma_i$ are used to denote these 5 parameters for atom i. $S_i^2$ is the magnitude of atom i's motion; $\eta_i$ is a measure of the anisotropy of atom i's motion; $\alpha_i$ and $\beta_i$ are related to the polar coordinates of the bond vector expressed in the initial arbitrary reference frame (i.e., the PDB frame). If the motion of the atom is anisotropic (i.e., $\eta_i \neq 0$), the final parameter, $\gamma_i$, measures the principal orientation of the motion. Note that the RDC-derived motion parameters are local measurements.
Any RDC measurement in solution consists of two steps: aligning the molecules, and the NMR studies themselves. For diamagnetic molecules at moderate field strengths, molecules have little preference in orientation; the tumbling samples a nearly isotropic distribution, and the average dipolar couplings go to zero. Actually, most molecules have preferred orientations in the presence of a magnetic field, because most have anisotropic magnetic susceptibility tensors , Χ. [ 14 ] The method is most suitable for systems with large values of the magnetic susceptibility tensor. This includes protein-nucleic acid complexes, nucleic acids , proteins with a large number of aromatic residues, porphyrin-containing proteins and metal-binding proteins (the metal may be replaced by lanthanides ). For a fully oriented molecule, the dipolar coupling for a 1 H- 15 N amide group would be over 20 kHz , and a pair of protons separated by 5 Å would have up to ~1 kHz coupling. However, the degree of alignment achieved by applying a magnetic field is so low that the largest 1 H- 15 N or 1 H- 13 C dipolar couplings are <5 Hz. [ 19 ] Therefore, many different alignment media have been designed. There are numerous methods that have been designed to accurately measure coupling constants between nuclei. [ 24 ] They have been classified into two groups: frequency-based methods, where the separation of peak centers (splitting) is measured in a frequency domain, and intensity-based methods, where the coupling is extracted from the resonance intensity instead of the splitting. The two methods complement each other, as each of them is subject to a different kind of systematic error. RDC measurement provides information on the global folding of the protein or protein complex. As opposed to traditional NOE-based NMR structure determinations , RDCs provide long-distance structural information. They also provide information about dynamics in molecules on time scales slower than nanoseconds. Most NMR studies of protein structure are based on analysis of the Nuclear Overhauser effect , NOE, between different protons in the protein. Because the NOE depends on the inverse sixth power of the distance between the nuclei, $r^{-6}$, NOEs can be converted into distance restraints that can be used in molecular dynamics -type structure calculations. RDCs provide orientational restraints rather than distance restraints, and have several advantages over NOEs. Provided that a very complete set of RDCs is available, it has been demonstrated for several model systems that molecular structures can be calculated exclusively based on these anisotropic interactions, without recourse to NOE restraints. However, in practice this is not achievable, and RDCs are used mainly to refine a structure determined by NOE data and J-coupling . One problem with using dipolar couplings in structure determination is that a dipolar coupling does not uniquely describe an internuclear vector orientation. Moreover, if only a very small set of dipolar couplings is available, the refinement may lead to a structure worse than the original one. For a protein with N amino acids, 2N RDC constraints for the backbone are the minimum needed for an accurate refinement. [ 25 ]
The information content of an individual RDC measurement for a specific bond vector (such as a specific backbone NH bond in a protein molecule) can be understood by showing the target curve that traces out directions of perfect agreement between the observed RDC value and the value calculated from the model. Such a curve has two symmetrical branches that lie on a sphere with its polar axis along the magnetic field direction. Their height from the sphere's equator depends on the magnitude of the RDC value, and their shape depends on the "rhombicity" (asymmetry) of the molecular alignment tensor. If the molecular alignment were completely symmetrical around the magnetic field direction, the target curve would just consist of two circles at the same angle from the poles as the angle $\theta$ that the specific bond vector makes to the applied magnetic field. [ 25 ] In the case of elongated molecules such as RNA , where local torsional information and short distances are not enough to constrain the structures, RDC measurements can provide information about the orientations of specific chemical bonds throughout a nucleic acid with respect to a single coordinate frame. Particularly, RNA molecules are proton -poor, and overlap of ribose resonances makes it very difficult to use J-coupling and NOE data to determine the structure. Moreover, RDCs between nuclei with a distance larger than 5-6 Å can be detected. This distance is too large for the generation of an NOE signal, because the RDC is proportional to $r^{-3}$ whereas the NOE is proportional to $r^{-6}$. RDC measurements have also proven to be extremely useful for a rapid determination of the relative orientations of units of known structures in proteins. [ 26 ] [ 27 ] In principle, the orientation of a structural subunit, which may be as small as a turn of a helix or as large as an entire domain, can be established from as few as five RDCs per subunit. [ 25 ] As an RDC provides spatially and temporally averaged information about an angle between the external magnetic field and a bond vector in a molecule, it may provide rich geometrical information about dynamics on a slow timescale (>10^-9 s) in proteins. In particular, due to its angular dependence, the RDC is particularly sensitive to large-amplitude angular processes. [ 28 ] An early example by Tolman et al. found previously published structures of myoglobin insufficient to explain measured RDC data, and devised a simple model of slow dynamics to remedy this. [ 29 ] However, for many classes of proteins, including intrinsically disordered proteins , analysis of RDCs becomes more involved, as defining an alignment frame is not trivial. [ 30 ] The problem can be addressed by circumventing the necessity of explicitly defining the alignment frame. [ 30 ] [ 31 ]
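The mapping from bond-vector orientation to RDC can be written compactly with a Saupe order matrix. The sketch below is an illustrative calculation, not code from the source: the axially symmetric alignment tensor, the order parameter s, and the ~21.7 kHz maximal NH coupling are assumed values, with d_max absorbing the physical constants:

```python
import numpy as np

def rdc(v, saupe, d_max=1.0):
    """Residual dipolar coupling for a bond vector v in the molecular frame:
    D = d_max * v^T S v, where S is the symmetric, traceless Saupe order
    matrix describing the partial alignment. d_max absorbs the physical
    constants (gyromagnetic ratios and the inverse cube of the bond length).
    """
    v = np.asarray(v, dtype=float)
    v /= np.linalg.norm(v)
    return d_max * v @ saupe @ v

# Assumed axially symmetric alignment: S = diag(-s/2, -s/2, s), so that
# v^T S v = s * (3 cos^2(theta) - 1) / 2 for a bond at angle theta.
s = 1e-3
S = np.diag([-s / 2, -s / 2, s])

# A bond along the alignment axis gives the full (scaled) coupling; at the
# magic angle (3 cos^2(theta) - 1 = 0) the RDC vanishes.
theta_magic = np.arccos(1 / np.sqrt(3))
for name, v in [("parallel", [0, 0, 1]),
                ("magic angle", [np.sin(theta_magic), 0, np.cos(theta_magic)]),
                ("perpendicular", [1, 0, 0])]:
    print(f"{name:13s} D = {rdc(v, S, d_max=21_700.0):10.3f} Hz")
```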
https://en.wikipedia.org/wiki/Residual_dipolar_coupling
In thermodynamics a residual property is defined as the difference between a real fluid property and an ideal gas property, both considered at the same density , temperature , and composition , typically expressed as $X(T,V,n) = X^{id}(T,V,n) + X^{res}(T,V,n)$, where $X$ is some thermodynamic property at given temperature, volume and mole numbers, $X^{id}$ is the value of the property for an ideal gas , and $X^{res}$ is the residual property. The reference state is typically incorporated into the ideal gas contribution to the value, as $X^{id}(T,V,n) = X^{\circ,id}(T,n) + \Delta_{id}X(T,V,n)$, where $X^{\circ,id}$ is the value of $X$ at the reference state (commonly pure, ideal gas species at 1 bar), and $\Delta_{id}X$ is the departure of the property for an ideal gas at $(T,V,n)$ from this reference state. Residual properties should not be confused with excess properties , which are defined as the deviation of a thermodynamic property from some reference system that is typically not an ideal gas system. Whereas excess properties and excess models (also known as activity coefficient models ) typically concern themselves with strictly liquid-phase systems, such as melts, polymer blends or electrolytes , residual properties are intimately linked to equations of state , which are commonly used to model systems in which vapour-liquid equilibria are prevalent, or systems where both gases and liquids are of interest. For some applications, activity coefficient models and equations of state are combined in what are known as "$\gamma$-$\phi$ models" (read: Gamma-Phi), referring to the symbols commonly used to denote activity coefficients and fugacities . In the development and implementation of equations of state, the concept of residual properties is valuable, as it allows one to separate the behaviour of a fluid that stems from non-ideality from that stemming from the properties of an ideal gas. For example, the isochoric heat capacity is given by $$C_V = \left(\frac{\partial U}{\partial T}\right)_{V,n} = T\left(\frac{\partial S}{\partial T}\right)_{V,n} = T\left[\left(\frac{\partial S^{id}}{\partial T}\right)_{V,n} + \left(\frac{\partial S^{res}}{\partial T}\right)_{V,n}\right] = C_V^{id} + C_V^{res},$$ where the ideal gas heat capacity, $C_V^{id}$, can be measured experimentally by measuring the heat capacity at very low pressure. After measurement it is typically represented using a polynomial fit such as the Shomate equation . The residual heat capacity is given by $C_V^{res} = T\left(\frac{\partial S^{res}}{\partial T}\right)_{V,n} = -T\left(\frac{\partial^2 A^{res}}{\partial T^2}\right)_{V,n}$, and the accuracy of a given equation of state in predicting or correlating the heat capacity can be assessed by regarding only the residual contribution, as the ideal contribution is independent of the equation of state. In fluid phase equilibria (i.e.
liquid-vapour or liquid-liquid equilibria), the notion of the fugacity coefficient is crucial, as it can be shown that for a system consisting of phases $\alpha$, $\beta$, $\gamma$, ..., the condition for chemical equilibrium is $x_i^\alpha \Phi_i^\alpha = x_i^\beta \Phi_i^\beta = x_i^\gamma \Phi_i^\gamma = \dots$ for all species $i$, where $x_i^j$ denotes the mole fraction of species $i$ in phase $j$, and $\Phi_i^j$ is the fugacity coefficient of species $i$ in phase $j$. The fugacity coefficient, being defined by $\mu_i = \mu_i^\circ + RT\ln\frac{\Phi_i x_i p}{p^\circ}$, is directly related to the residual chemical potential, as $\mu_i = \mu_i^{id} + \mu_i^{res} = \mu_i^\circ + RT\ln\frac{x_i p}{p^\circ} + \mu_i^{res} \implies \mu_i^{res} = RT\ln\Phi_i$; thus, because $\mu_i^{res} = \left(\frac{\partial A^{res}}{\partial n_i}\right)_{T,V}$, we can see that an accurate description of the residual Helmholtz energy , rather than the total Helmholtz energy, is the key to accurately computing the equilibrium state of a system. The residual entropy of a fluid has some special significance. In 1976, Yasha Rosenfeld published a landmark paper, showing that the transport coefficients of pure liquids, when expressed as functions of the residual entropy, can be treated as monovariate functions, rather than as functions of two variables (i.e. temperature and pressure, or temperature and density). [ 1 ] This discovery led to the concept of residual entropy scaling , which has spurred a large amount of research, up until the modern day, in which various approaches for modelling transport coefficients as functions of the residual entropy have been explored. [ 2 ] Residual entropy scaling is still very much an area of active research. While any real state variable $X$, in a real state $(T,V,p,n)$, is independent of whether one evaluates $X(T,p,n)$ or $X(T,V,n)$, one should be aware that the residual property is in general dependent on the variable set, i.e. $X^{res}(T,p,n) \neq X^{res}(T,V,n)$. This arises from the fact that the real state $(T,V,p,n)$ is in general not a valid ideal gas state, such that the ideal part of the property will be different depending on the variable set. Take for example the chemical potential of a pure fluid in a state $(T,V,p,n)$ that does not satisfy the ideal gas law, but may be a real state for some real fluid.
The ideal gas chemical potential computed as a function of temperature, pressure and mole number is $\mu^{id}(T,p,n) = \mu^\circ + RT\ln\frac{p}{p^\circ}$, while computing it as a function of concentration ($c = n/V$) gives $\mu^{id}(T,V,n) = \mu^\circ + RT\ln\frac{c}{c^\circ}$, such that $\mu^{id}(T,p,n) - \mu^{id}(T,V,n) = RT\ln\frac{p}{p^\circ} - RT\ln\frac{c}{c^\circ} = RT\ln\frac{pV}{nRT} = RT\ln Z$, where we have used $p^\circ = c^\circ RT$, and $Z$ denotes the compressibility factor. This leads to the result $\mu_i(T,p,n) - \mu_i(T,V,n) = 0 \implies \mu_i^{res}(T,V,n) - \mu_i^{res}(T,p,n) = RT\ln Z$. In practice, the most significant residual property is the residual Helmholtz energy . The reason for this is that other residual properties can be computed from the residual Helmholtz energy as various derivatives (see: Maxwell relations ). We note that $\left(\frac{\partial A^{res}}{\partial V}\right)_{T,n} = \left(\frac{\partial A}{\partial V}\right)_{T,n} - \left(\frac{\partial A^{id}}{\partial V}\right)_{T,n} = -p(T,V,n) - \left(-p^{id}(T,V,n)\right),$ such that $A^{res}(T,V',n) - A^{res}(T,V=\infty,n) = \int_\infty^{V'} \left(\frac{\partial A^{res}}{\partial V}\right) dV = \int_\infty^{V'} \left[p^{id}(T,V,n) - p(T,V,n)\right] dV.$ Further, because any fluid reduces to an ideal gas in the limit of infinite volume, $A(T,V=\infty,n) = A^{id}(T,V=\infty,n) \iff A^{res}(T,V=\infty,n) = 0$. Thus, for any equation of state that is explicit in pressure , such as the van der Waals equation of state , we may compute $A^{res}(T,V,n) = \int_\infty^V \left[\frac{nRT}{V'} - p(T,V',n)\right] dV'.$ However, in modern approaches to developing equations of state, such as SAFT , it is found that it can be simpler to develop the equation of state by directly developing an equation for $A^{res}$, rather than developing an equation that is explicit in pressure.
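As a concrete check of the last integral, the sketch below evaluates $A^{res}$ numerically for the van der Waals equation of state and compares it with the closed form $A^{res} = nRT\ln\frac{V}{V-nb} - \frac{an^2}{V}$ obtained by carrying out the same integral analytically. This is an illustration only: the parameter values (roughly of CO2 magnitude) are assumed, not taken from the text.

```python
import numpy as np
from scipy.integrate import quad

R = 8.314  # gas constant, J/(mol K)

# van der Waals parameters of roughly CO2 magnitude (assumed for illustration)
a, b = 0.3640, 4.267e-5       # Pa m^6 mol^-2, m^3 mol^-1
n, T, V = 1.0, 300.0, 1.0e-3  # mol, K, m^3

def p_vdw(Vp):
    """van der Waals pressure, p = nRT/(V - nb) - a n^2 / V^2."""
    return n * R * T / (Vp - n * b) - a * n**2 / Vp**2

# A_res(T,V,n) = integral from infinity to V of [nRT/V' - p(V')] dV',
# evaluated here with swapped limits and sign so quad sees a finite lower bound.
integrand = lambda Vp: p_vdw(Vp) - n * R * T / Vp
A_res_num, _ = quad(integrand, V, np.inf)

# Closed form of the same integral:
A_res_exact = n * R * T * np.log(V / (V - n * b)) - a * n**2 / V

print(f"numerical : {A_res_num:.2f} J")   # ~ -255 J for these inputs
print(f"analytical: {A_res_exact:.2f} J")
```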
https://en.wikipedia.org/wiki/Residual_property_(physics)
The residual sodium carbonate (RSC) index of irrigation water or soil water is used to indicate the alkalinity hazard for soil. The RSC index is used to find the suitability of the water for irrigation in clay soils which have a high cation exchange capacity . When dissolved sodium in comparison with dissolved calcium and magnesium is high in water, clay soil swells or undergoes dispersion, which drastically reduces its infiltration capacity . [ 1 ] In the dispersed [ clarification needed ] soil structure , the plant roots are unable to spread deeper into the soil due to lack of moisture. However, unlike high-salinity water, high-RSC-index water does not enhance the osmotic pressure that impedes the uptake of water by the plant roots. Irrigation of clay soils with high-RSC-index water leads to the formation of fallow alkali soils . [ 2 ] [ 3 ] [ 4 ] RSC is expressed in meq/L units. RSC should not be higher than 1, and preferably less than +0.5, for the water to be considered usable for irrigation. [ 5 ] The formula for calculating the RSC index is: RSC = (CO3^2- + HCO3^-) - (Ca^2+ + Mg^2+), with all concentrations expressed in meq/L. While calculating the RSC index, the water quality present at the root zone of the crop should be considered, which would take into account the leaching factor in the field. [ 6 ] Calcium present in dissolved form is also influenced by the partial pressure of dissolved CO 2 at the plants' root zone in the field water. [ 7 ] Soda ash [Na 2 CO 3 ] can be present in natural water from the weathering of basalt , which is an igneous rock. Lime [Ca(OH) 2 ] can be present in natural water when rain water comes in contact with calcined minerals such as ash produced from the burning of calcareous coal or lignite in boilers. Anthropogenic use of soda ash also finally adds to the RSC of the river water. Where the river water and ground water are repeatedly used in extensively irrigated river basins, the river water available in the lower reaches is often rendered unusable for agriculture due to a high RSC index or alkalinity. [ 8 ] The salinity of such water need not be high. In industrial water treatment terminology, water quality with a high RSC index is synonymous with soft water , but it is chemically very different from naturally soft water, which has a very low ionic concentration. [ 9 ] When calcium and magnesium salts are present in dissolved form in water, these salts precipitate on heat transfer surfaces, forming an insulating hard scaling/coating which reduces the heat transfer efficiency of the heat exchangers. To avoid scaling in water-cooled heat exchangers, water is treated with lime and/or soda ash to remove the water hardness . In the lime-soda softening process, the calcium and magnesium salts are precipitated as calcium carbonate and magnesium hydroxide, which have very low solubility in water. The excess soda ash remaining after precipitating the calcium and magnesium salts is present as carbonates and bicarbonates of sodium, which impart a high pH or alkalinity to soil water. The endorheic basin lakes are called soda or alkaline lakes when the water inflows contain high concentrations of Na 2 CO 3 . The pH of soda lake water is generally above 9, and sometimes the salinity is close to brackish water due to depletion of pure water by solar evaporation. Soda lakes are rich with algal growth due to enhanced availability of dissolved CO 2 in the lake water compared to fresh water or saline water lakes.
Sodium carbonate and sodium hydroxide are in equilibrium, depending on the availability of dissolved carbon dioxide, as given by the chemical reactions 2 NaOH + CO2 ⇌ Na2CO3 + H2O and Na2CO3 + CO2 + H2O ⇌ 2 NaHCO3. During the daytime, when sunlight is available, algae undergo photosynthesis , which absorbs CO 2 and shifts the reactions towards NaOH formation; the reverse takes place during the night, when CO 2 released by the respiration of the algae shifts the reactions towards Na 2 CO 3 and NaHCO 3 formation. In soda lake waters, the carbonates of sodium act as a catalyst for algal growth by providing a favourable, higher concentration of dissolved CO 2 during the daytime. Due to the fluctuation in dissolved CO 2 , the pH and alkalinity of the water also keep varying. [ 10 ]
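The meq/L bookkeeping behind the RSC formula is mechanical and easy to script. The sketch below converts ion concentrations from mg/L using standard equivalent weights; the sample values are assumed for illustration, not data from the source:

```python
# Residual sodium carbonate index, RSC = (CO3 + HCO3) - (Ca + Mg), in meq/L.
# Equivalent weights (g/eq): molar mass divided by ionic charge.
EQ_WEIGHT = {"HCO3": 61.0, "CO3": 30.0, "Ca": 20.0, "Mg": 12.2}

def rsc_index(mg_per_l):
    """Compute RSC (meq/L) from ion concentrations given in mg/L."""
    meq = {ion: mg_per_l[ion] / EQ_WEIGHT[ion] for ion in EQ_WEIGHT}
    return (meq["CO3"] + meq["HCO3"]) - (meq["Ca"] + meq["Mg"])

# Assumed sample: 366 mg/L bicarbonate, no carbonate, 40 mg/L Ca, 12.2 mg/L Mg.
sample = {"HCO3": 366.0, "CO3": 0.0, "Ca": 40.0, "Mg": 12.2}
rsc = rsc_index(sample)
print(f"RSC = {rsc:.2f} meq/L")   # 6.0 - (2.0 + 1.0) = 3.0
print("suitable for irrigation" if rsc <= 1.0 else "alkalinity hazard (RSC > 1)")
```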
https://en.wikipedia.org/wiki/Residual_sodium_carbonate_index
Residual strength is the load or force (usually mechanical ) that a damaged object or material can still carry without failing . Material toughness , fracture size, geometry, and orientation all contribute to residual strength. [ 1 ]
https://en.wikipedia.org/wiki/Residual_strength
In materials science and solid mechanics , residual stresses are stresses that remain in a solid material after the original cause of the stresses has been removed. Residual stress may be desirable or undesirable. For example, laser peening imparts deep beneficial compressive residual stresses into metal components such as turbine engine fan blades, and it is used in toughened glass to allow for large, thin, crack- and scratch-resistant glass displays on smartphones . However, unintended residual stress in a designed structure may cause it to fail prematurely. Residual stresses can result from a variety of mechanisms including inelastic ( plastic ) deformations , temperature gradients (during thermal cycle) or structural changes ( phase transformation ). Heat from welding may cause localized expansion, which is taken up during welding by either the molten metal or the placement of parts being welded. When the finished weldment cools, some areas cool and contract more than others, leaving residual stresses. Another example occurs during semiconductor fabrication and microsystem fabrication [ 1 ] when thin film materials with different thermal and crystalline properties are deposited sequentially under different process conditions. The stress variation through a stack of thin film materials can be very complex and can vary between compressive and tensile stresses from layer to layer. While uncontrolled residual stresses are undesirable, some designs rely on them. In particular, brittle materials can be toughened by including compressive residual stress, as in the case of toughened glass and pre-stressed concrete . The predominant mechanism for failure in brittle materials is brittle fracture , which begins with initial crack formation. When an external tensile stress is applied to the material, the crack tips concentrate stress , increasing the local tensile stresses experienced at the crack tips to a greater extent than the average stress on the bulk material. This causes the initial crack to enlarge quickly (propagate) as the surrounding material is overwhelmed by the stress concentration, leading to fracture. A material having compressive residual stress helps to prevent brittle fracture because the initial crack is formed under compressive (negative tensile) stress. To cause brittle fracture by crack propagation of the initial crack, the external tensile stress must overcome the compressive residual stress before the crack tips experience sufficient tensile stress to propagate. The manufacture of some swords utilises a gradient in martensite formation to produce particularly hard edges (notably the katana ). The difference in residual stress between the harder cutting edge and the softer back of the sword gives such swords their characteristic curve [ citation needed ] . In toughened glass, compressive stresses are induced on the surface of the glass, balanced by tensile stresses in the body of the glass. Due to the residual compressive stress on the surface, toughened glass is more resistant to cracks, but it shatters into small shards when the outer surface is broken. A demonstration of the effect is shown by Prince Rupert's Drop , a material-science novelty in which a molten glass globule is quenched in water: because the outer surface cools and solidifies first, when the volume cools and solidifies, it "wants" to take up a smaller volume than the outer "skin" has already defined; this puts much of the volume in tension, pulling the "skin" in, putting the "skin" in compression.
As a result, the solid globule is extremely tough, able to be hit with a hammer, but if its long tail is broken, the balance of forces is upset, causing the entire piece to shatter violently. In certain types of gun barrels made with two tubes forced together, the inner tube is compressed while the outer tube stretches, preventing cracks from opening in the rifling when the gun is fired. Common methods to induce compressive residual stress are shot peening for surfaces and high-frequency impact treatment for weld toes. The depth of the compressive residual stress varies depending on the method. Both methods can increase the lifetime of structures significantly. There are some techniques which are used to create a uniform residual stress in a beam. For example, the four-point bend method allows residual stress to be inserted by applying a load to a beam using two cylinders. [ 2 ] [ 3 ] There are many techniques used to measure residual stresses, which are broadly categorised into destructive, semi-destructive and non-destructive techniques. The selection of the technique depends on the information required and the nature of the measurement specimen. Factors include the depth/penetration of the measurement (surface or through-thickness), the length scale to be measured over ( macroscopic , mesoscopic or microscopic ), the resolution of the information required, and also the composition, geometry and location of the specimen. Additionally, some of the techniques need to be performed in specialised laboratory facilities, meaning that "on-site" measurements are not possible for all of the techniques. Destructive techniques result in large and irreparable structural change to the specimen, meaning that either the specimen cannot be returned to service or a mock-up or spare must be used. These techniques function using a "strain release" principle: cutting the measurement specimen to relax the residual stresses and then measuring the deformed shape. As these deformations are usually elastic, there is an exploitable linear relationship between the magnitude of the deformation and the magnitude of the released residual stress. [ 4 ] Semi-destructive techniques function similarly to the destructive techniques, also using the "strain release" principle; however, they remove only a small amount of material, leaving the overall integrity of the structure intact. The non-destructive techniques measure the effects of relationships between the residual stresses and their action on the crystallographic properties of the measured material. Some of these work by measuring the diffraction of high-frequency electromagnetic radiation from the atomic lattice (whose spacing has been deformed due to the stress) relative to a stress-free sample. The ultrasonic and magnetic techniques exploit the acoustic and ferromagnetic properties of materials to perform relative measurements of residual stress. When undesired residual stress is present from prior metalworking operations, the amount of residual stress may be reduced using several methods. These methods may be classified into thermal and mechanical (or nonthermal) methods. [ 12 ] All the methods involve processing the part to be stress relieved as a whole. The thermal method involves changing the temperature of the entire part uniformly, either through heating or cooling. When parts are heated for stress relief, the process may also be known as stress relief bake. [ 13 ]
Cooling parts for stress relief is known as cryogenic stress relief and is relatively uncommon. [ citation needed ]

Most metals, when heated, experience a reduction in yield strength . If the material's yield strength is sufficiently lowered by heating, locations within the material that experienced residual stresses greater than the yield strength (in the heated state) would yield or deform. This leaves the material with residual stresses that are at most as high as the yield strength of the material in its heated state.

Stress relief bake should not be confused with annealing or tempering , which are heat treatments to increase the ductility of a metal. Although those processes also involve heating the material to high temperatures and reduce residual stresses, they also involve a change in metallurgical properties, which may be undesired. For certain materials such as low-alloy steel, care must be taken during stress relief bake so as not to exceed the temperature at which the material achieves maximum hardness (see Tempering in alloy steels ).

Cryogenic stress relief involves placing the material (usually steel) into a cryogenic environment such as liquid nitrogen. In this process, the material to be stress relieved is cooled to a cryogenic temperature for a long period, then slowly brought back to room temperature.

Mechanical methods to relieve undesirable surface tensile stresses and replace them with beneficial compressive residual stresses include shot peening and laser peening. Each works the surface of the material with a medium: shot peening typically uses metal or glass shot; laser peening uses high-intensity beams of light to induce a shock wave that propagates deep into the material.
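The "strain release" principle used by the destructive and semi-destructive techniques above reduces, in the simplest idealization, to Hooke's law. The following Python sketch is illustrative only: it assumes a uniaxial specimen whose residual stress is fully relieved by the cut, whereas real methods (hole drilling, slitting, the contour method) rely on calibration coefficients and inverse analyses rather than this one-line relation.

```python
def residual_stress_from_relieved_strain(youngs_modulus_pa: float,
                                         relieved_strain: float) -> float:
    """Estimate the original residual stress from the strain measured
    after a stress-relieving cut (idealized uniaxial case).

    A region carrying tensile residual stress contracts when cut free,
    so the measured relaxation strain is negative and the original
    stress is sigma = -E * eps_relieved.
    """
    return -youngs_modulus_pa * relieved_strain


# Example: a steel specimen (E ~ 200 GPa, an assumed textbook value)
# contracts by 500 microstrain when sectioned.
E_STEEL = 200e9   # Pa
eps = -500e-6     # measured relaxation strain (dimensionless)

sigma = residual_stress_from_relieved_strain(E_STEEL, eps)
print(f"Estimated residual stress: {sigma / 1e6:.0f} MPa (tensile)")
# -> Estimated residual stress: 100 MPa (tensile)
```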
https://en.wikipedia.org/wiki/Residual_stress
In chemistry , mechanically interlocked molecular architectures ( MIMAs ) are molecules that are connected as a consequence of their topology . This connection of molecules is analogous to keys on a keychain loop. The keys are not directly connected to the keychain loop but they cannot be separated without breaking the loop. On the molecular level, the interlocked molecules cannot be separated without the breaking of the covalent bonds that comprise the conjoined molecules; this is referred to as a mechanical bond . Examples of mechanically interlocked molecular architectures include catenanes , rotaxanes , molecular knots , and molecular Borromean rings . Work in this area was recognized with the 2016 Nobel Prize in Chemistry to Bernard L. Feringa , Jean-Pierre Sauvage , and J. Fraser Stoddart . [ 1 ] [ 2 ] [ 3 ] [ 4 ]

The synthesis of such entangled architectures has been made efficient by combining supramolecular chemistry with traditional covalent synthesis; however, mechanically interlocked molecular architectures have properties that differ from both " supramolecular assemblies " and "covalently bonded molecules". The terminology "mechanical bond" has been coined to describe the connection between the components of mechanically interlocked molecular architectures. Although research into mechanically interlocked molecular architectures is primarily focused on artificial compounds, many examples have been found in biological systems, including cystine knots , cyclotides and lasso-peptides such as microcin J25 , which are proteins , and a variety of peptides .

Residual topology [ 5 ] is a descriptive stereochemical term to classify a number of intertwined and interlocked molecules which cannot be disentangled in an experiment without breaking of covalent bonds , while the strict rules of mathematical topology allow such a disentanglement. Examples of such molecules are rotaxanes , catenanes with covalently linked rings (so-called pretzelanes ), and open knots (pseudoknots), which are abundant in proteins . The term "residual topology" was suggested on account of a striking similarity of these compounds to the well-established topologically nontrivial species, such as catenanes and knotanes (molecular knots). The idea of residual topological isomerism introduces a handy scheme of modifying the molecular graphs and generalizes former efforts at the systemization of mechanically bound and bridged molecules.

Experimentally, the first examples of mechanically interlocked molecular architectures appeared in the 1960s, with catenanes being synthesized by Wasserman and Schill and rotaxanes by Harrison and Harrison. The chemistry of MIMAs came of age when Sauvage pioneered their synthesis using templating methods. [ 6 ] In the early 1990s the usefulness and even the existence of MIMAs were challenged. The latter concern was addressed by X-ray crystallographer and structural chemist David Williams. Two postdoctoral researchers who took on the challenge of producing [5]catenane (olympiadane) pushed the boundaries of the complexity of MIMAs that could be synthesized; their success was confirmed in 1996 by a solid-state structure analysis conducted by David Williams. [ 7 ]

The introduction of a mechanical bond alters the chemistry of the sub-components of rotaxanes and catenanes. Steric hindrance of reactive functionalities is increased and the strength of non-covalent interactions between the components is altered. [ 8 ]
The strength of non-covalent interactions in a mechanically interlocked molecular architecture increases as compared to the non-mechanically bonded analogues. This increased strength is demonstrated by the necessity of harsher conditions to remove a metal template ion from catenanes as opposed to their non-mechanically bonded analogues. This effect is referred to as the "catenand effect". [ 9 ] [ 10 ] The augmented non-covalent interactions in interlocked systems compared to non-interlocked systems have found utility in the strong and selective binding of a range of charged species, enabling the development of interlocked systems for the extraction of a range of salts. [ 11 ]

This increase in strength of non-covalent interactions is attributed to the loss of degrees of freedom upon the formation of a mechanical bond. The increase in strength of non-covalent interactions is more pronounced in smaller interlocked systems, where more degrees of freedom are lost, as compared to larger mechanically interlocked systems, where the change in degrees of freedom is lower. Therefore, if the ring in a rotaxane is made smaller, the strength of non-covalent interactions increases; the same effect is observed if the thread is made smaller. [ 12 ]

The mechanical bond can reduce the kinetic reactivity of the products; this is ascribed to the increased steric hindrance. Because of this effect, hydrogenation of an alkene on the thread of a rotaxane is significantly slower as compared to the equivalent non-interlocked thread. [ 13 ] This effect has allowed for the isolation of otherwise reactive intermediates. The ability to alter reactivity without altering covalent structure has led to MIMAs being investigated for a number of technological applications. The ability of a mechanical bond to reduce reactivity, and hence prevent unwanted reactions, has been exploited in a number of areas. One of the earliest applications was in the protection of organic dyes from environmental degradation .
https://en.wikipedia.org/wiki/Residual_topology
In abstract algebra , a residuated lattice is an algebraic structure that is simultaneously a lattice x ≤ y and a monoid x • y which admits operations x \ z and z / y , loosely analogous to division or implication, when x • y is viewed as multiplication or conjunction, respectively. Called respectively right and left residuals, these operations coincide when the monoid is commutative. The general concept was introduced by Morgan Ward and Robert P. Dilworth in 1939. Examples, some of which existed prior to the general concept, include Boolean algebras , Heyting algebras , residuated Boolean algebras , relation algebras , and MV-algebras . Residuated semilattices omit the meet operation ∧, for example Kleene algebras and action algebras .

In mathematics , a residuated lattice is an algebraic structure L = ( L , ≤, •, I ) such that

(i) ( L , ≤) is a lattice;

(ii) ( L , •, I ) is a monoid;

(iii) for all x and z there exists a greatest y , and for all y and z a greatest x , such that x • y ≤ z (the residuation properties).

In (iii), the "greatest y ", being a function of z and x , is denoted x \ z and called the right residual of z by x . Think of it as what remains of z on the right after "dividing" z on the left by x . Dually, the "greatest x " is denoted z / y and called the left residual of z by y . An equivalent, more formal statement of (iii) that uses these operations to name these greatest values is

(iii)' for all x , y , z in L , y ≤ x \ z ⇔ x • y ≤ z ⇔ x ≤ z / y .

As suggested by the notation, the residuals are a form of quotient. More precisely, for a given x in L , the unary operations x • and x \ are respectively the lower and upper adjoints of a Galois connection on L , and dually for the two functions • y and / y . By the same reasoning that applies to any Galois connection, we have yet another definition of the residuals, namely

x \ z = max{ y : x • y ≤ z } and z / y = max{ x : x • y ≤ z },

together with the requirement that x • y be monotone in x and y . (When axiomatized using (iii) or (iii)', monotonicity becomes a theorem and hence is not required in the axiomatization.) These give a sense in which the functions x • and x \ are pseudoinverses or adjoints of each other, and likewise for • x and / x .

This last definition is purely in terms of inequalities, noting that monotonicity can be axiomatized as x • y ≤ ( x ∨ z ) • y and similarly for the other operations and their arguments. Moreover, any inequality x ≤ y can be expressed equivalently as an equation, either x ∧ y = x or x ∨ y = y . This, along with the equations axiomatizing lattices and monoids, then yields a purely equational definition of residuated lattices, provided the requisite operations are adjoined to the signature ( L , ≤, •, I ), thereby expanding it to ( L , ∧, ∨, •, I , /, \). When thus organized, residuated lattices form an equational class or variety , whose homomorphisms respect the residuals as well as the lattice and monoid operations. Note that distributivity x • ( y ∨ z ) = ( x • y ) ∨ ( x • z ) and x • 0 = 0 are consequences of these axioms and so do not need to be made part of the definition. This necessary distributivity of • over ∨ does not in general entail distributivity of ∧ over ∨; that is, a residuated lattice need not be a distributive lattice. However, distributivity of ∧ over ∨ is entailed when • and ∧ are the same operation, a special case of residuated lattices called a Heyting algebra .

Alternative notations for x • y include x ◦ y , x ; y ( relation algebra ), and x ⊗ y ( linear logic ). Alternatives for I include e and 1'.
Alternative notations for the residuals are x → y for x \ y and y ← x for y / x , suggested by the similarity between residuation and implication in logic, with the multiplication of the monoid understood as a form of conjunction that need not be commutative. When the monoid is commutative the two residuals coincide. When it is not commutative, the intuitive meaning of the monoid as conjunction and the residuals as implications can be understood as having a temporal quality: x • y means x and then y , x → y means had x (in the past) then y (now), and y ← x means if-ever x (in the future) then y (at that time), as illustrated by the natural language example at the end of the examples.

One of the original motivations for the study of residuated lattices was the lattice of (two-sided) ideals of a ring . Given a ring R , the ideals of R , denoted Id( R ), form a complete lattice , with set intersection acting as the meet operation and "ideal addition" acting as the join operation. The monoid operation • is given by "ideal multiplication", and the element R of Id( R ) acts as the identity for this operation. Given two ideals A and B in Id( R ), the residuals are given by

A / B = { x ∈ R : xB ⊆ A } and B \ A = { x ∈ R : Bx ⊆ A }.

It is worth noting that {0}/ B and B \{0} are respectively the left and right annihilators of B . This residuation is related to the conductor (or transporter ) in commutative algebra, written as ( A : B ) = A / B . One difference in usage is that B need not be an ideal of R : it may just be a subset.

Boolean algebras and Heyting algebras are commutative residuated lattices in which x • y = x ∧ y (whence the unit I is the top element 1 of the algebra) and both residuals x \ y and y / x are the same operation, namely implication x → y . The second example is quite general, since Heyting algebras include all finite distributive lattices , as well as all chains or total orders , for example the unit interval [0,1] in the real line, or the integers and ±∞.

The structure ( Z , min, max, +, 0, −, −) (the integers with subtraction for both residuals) is a commutative residuated lattice such that the unit of the monoid is not the greatest element (indeed there is no least or greatest integer), and the multiplication of the monoid is not the meet operation of the lattice. In this example the inequalities are equalities because − (subtraction) is not merely the adjoint or pseudoinverse of + but its true inverse. Any totally ordered group under addition, such as the rationals or the reals, can be substituted for the integers in this example. The nonnegative portion of any of these examples is an example provided min and max are interchanged and − is replaced by monus , defined (in this case) so that x − y = 0 when x ≤ y and otherwise is ordinary subtraction.

A more general class of examples is given by the Boolean algebra of all binary relations on a set X , namely the power set of X ², made a residuated lattice by taking the monoid multiplication • to be composition of relations and the monoid unit to be the identity relation I on X consisting of all pairs ( x , x ) for x in X . Given two relations R and S on X , the right residual R \ S of S by R is the binary relation such that x ( R \ S ) y holds just when for all z in X , zRx implies zSy (notice the connection with implication). The left residual is the mirror image of this: y ( S / R ) x holds just when for all z in X , xRz implies ySz .
This can be illustrated with the binary relations < and > on {0,1}, in which 0 < 1 and 1 > 0 are the only relationships that hold. Then x (>\<) y holds just when x = 1, while x (</>) y holds just when y = 0, showing that residuation of < by > is different depending on whether we residuate on the right or the left. This difference is a consequence of the difference between <•> and >•<, where the only relationships that hold are 0(<•>)0 (since 0<1>0) and 1(>•<)1 (since 1>0<1). Had we chosen ≤ and ≥ instead of < and >, ≥\≤ and ≤/≥ would have been the same because ≤•≥ = ≥•≤, both of which always hold between all x and y (since x ≤1≥ y and x ≥0≤ y ).

The Boolean algebra 2^Σ* of all formal languages over an alphabet (set) Σ forms a residuated lattice whose monoid multiplication is language concatenation LM and whose monoid unit I is the language {ε} consisting of just the empty string ε. The right residual M \ L consists of all words w over Σ such that Mw ⊆ L . The left residual L / M is the same with wM in place of Mw .

The residuated lattice of all binary relations on X is finite just when X is finite, and commutative just when X has at most one element. When X is empty the algebra is the degenerate Boolean algebra in which 0 = 1 = I . The residuated lattice of all languages on Σ is commutative just when Σ has at most one letter. It is finite just when Σ is empty, consisting of the two languages 0 (the empty language {}) and the monoid unit I = {ε} = 1. The examples forming a Boolean algebra have special properties treated in the article on residuated Boolean algebras .

In natural language residuated lattices formalize the logic of "and" when used with its noncommutative meaning of "and then". Setting x = bet , y = win , z = rich , we can read x • y ≤ z as "bet and then win entails rich". By the axioms this is equivalent to y ≤ x → z , meaning "win entails had bet then rich", and also to x ≤ z ← y , meaning "bet entails if-ever win then rich". Humans readily detect such non-sequiturs as "bet entails had win then rich" and "win entails if-ever bet then rich" as both being equivalent to the wishful thinking "win and then bet entails rich". [ citation needed ] Humans do not so readily detect that Peirce's law (( P → Q )→ P )→ P is a classical tautology , an interesting situation where humans exhibit more proficiency with non-classical reasoning than classical (for example, in relevance logic , Peirce's law is not a tautology). [ relevant? ]

A residuated semilattice is defined almost identically to a residuated lattice, omitting just the meet operation ∧. Thus it is an algebraic structure L = ( L , ∨, •, 1, /, \) satisfying all the residuated lattice equations as specified above except those containing an occurrence of the symbol ∧. The option of defining x ≤ y as x ∧ y = x is then not available, leaving only the other option x ∨ y = y (or any equivalent thereof). Any residuated lattice can be made a residuated semilattice simply by omitting ∧. Residuated semilattices arise in connection with action algebras , which are residuated semilattices that are also Kleene algebras , for which ∧ is ordinarily not required.
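The relational example above is small enough to check exhaustively. The following Python sketch (function names are ours, not from any library) implements composition and the two residuals for binary relations on a finite set, reproduces the >\< and </> computations from the text, and brute-force verifies the adjunction y ≤ x \ z ⇔ x • y ≤ z for this instance.

```python
from itertools import combinations, product

X = {0, 1}
LT = {(0, 1)}   # the relation <  on {0,1}
GT = {(1, 0)}   # the relation >  on {0,1}

def compose(R, S):
    """R • S: x (R•S) y iff there is a z with x R z and z S y."""
    return {(x, y) for x, y in product(X, X)
            if any((x, z) in R and (z, y) in S for z in X)}

def right_residual(R, S):
    """R \\ S: x (R\\S) y iff for all z, z R x implies z S y."""
    return {(x, y) for x, y in product(X, X)
            if all((z, x) not in R or (z, y) in S for z in X)}

def left_residual(S, R):
    """S / R: y (S/R) x iff for all z, x R z implies y S z."""
    return {(y, x) for x, y in product(X, X)
            if all((x, z) not in R or (y, z) in S for z in X)}

# Reproduces the text: x (>\<) y holds just when x = 1,
# and x (</>) y holds just when y = 0.
assert right_residual(GT, LT) == {(1, 0), (1, 1)}
assert left_residual(LT, GT) == {(0, 0), (1, 0)}

# Brute-force check of the adjunction T <= R\S  iff  R•T <= S,
# over all 16 relations T on a 2-element set.
pairs = list(product(X, X))
for n in range(len(pairs) + 1):
    for c in combinations(pairs, n):
        T = set(c)
        assert (T <= right_residual(GT, LT)) == (compose(GT, T) <= LT)
print("adjunction verified for R = '>', S = '<'")
```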
https://en.wikipedia.org/wiki/Residuated_lattice
In mathematics, the concept of a residuated mapping arises in the theory of partially ordered sets . It refines the concept of a monotone function .

If A , B are posets , a function f : A → B is defined to be monotone if it is order-preserving: that is, if x ≤ y implies f ( x ) ≤ f ( y ). This is equivalent to the condition that the preimage under f of every down-set of B is a down-set of A . We define a principal down-set to be one of the form ↓{ b } = { b′ ∈ B : b′ ≤ b }. In general the preimage under f of a principal down-set need not be a principal down-set. If all of them are, f is called residuated .

The notion of residuated map can be generalized to a binary operator (or any higher arity ) via component-wise residuation. This approach gives rise to notions of left and right division in a partially ordered magma , additionally endowing it with a quasigroup structure. (One speaks of a residuated algebra only for higher arities.) A binary (or higher arity) residuated map is usually not residuated as a unary map. [ 1 ]

If A , B are posets, a function f : A → B is residuated if and only if the preimage under f of every principal down-set of B is a principal down-set of A . If B is a poset, the set of functions A → B can be ordered by the pointwise order f ≤ g ↔ (∀ x ∈ A) f ( x ) ≤ g ( x ). It can be shown that a monotone function f is residuated if and only if there exists a (necessarily unique) monotone function f + : B → A such that f ∘ f + ≤ id_B and f + ∘ f ≥ id_A , where id is the identity function . The function f + is the residual of f . A residuated function and its residual form a Galois connection under the (more recent) monotone definition of that concept, and for every (monotone) Galois connection the lower adjoint is residuated, with the residual being the upper adjoint. [ 2 ] Therefore, the notions of monotone Galois connection and residuated mapping essentially coincide. Additionally, we have f ⁻¹(↓{ b }) = ↓{ f + ( b )}.

If B ° denotes the dual order (opposite poset) to B , then f : A → B is a residuated mapping if and only if there exists an f * such that f : A → B ° and f * : B ° → A form a Galois connection under the original antitone definition of this notion.

If f : A → B and g : B → C are residuated mappings, then so is the function composition gf : A → C , with residual ( gf ) + = f + g + . The antitone Galois connections do not share this property. The set of monotone transformations (functions) over a poset is an ordered monoid with the pointwise order, and so is the set of residuated transformations. [ 3 ]

If • : P × Q → R is a binary map and P , Q , and R are posets, then one may define residuation component-wise for the left and right translations, i.e. multiplication by a fixed element. For an element x in P define the left translation L_x ( y ) = x • y , and for x in Q define the right translation R_x ( y ) = y • x . Then • is said to be residuated if and only if L_x and R_x are residuated for all x (in P and Q respectively). Left and right division are defined by taking the residuals of these translations: x \ y = ( L_x ) + ( y ) and y / x = ( R_x ) + ( y ).

For example, every ordered group is residuated, and the division defined by the above coincides with the notion of division in a group . A less trivial example is the set Mat_n ( B ) of square matrices over a Boolean algebra B , where the matrices are ordered pointwise . The pointwise order endows Mat_n ( B ) with pointwise meets, joins and complements.
Matrix multiplication is defined in the usual manner, with the "product" being a meet and the "sum" a join. It can be shown [ 4 ] that X \ Y = ( X t Y ′)′ and X / Y = ( X ′ Y t )′, where X ′ is the complement of X and Y t is the transpose of Y .
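These matrix formulas admit a quick computational check. The sketch below is an illustrative Python/NumPy snippet (not from any particular library): it implements the Boolean matrix product, builds X \ Y = (XᵗY′)′ and X / Y = (X′Yᵗ)′ over B = {0, 1}, and brute-force verifies the residuation law Z ≤ X \ Y ⇔ X·Z ≤ Y over all 512 Boolean 3×3 matrices Z for random X and Y.

```python
import numpy as np
from itertools import product

n = 3
rng = np.random.default_rng(0)

def bool_matmul(A, B):
    """Boolean matrix product: meet for 'product', join for 'sum'."""
    return (A.astype(int) @ B.astype(int)) > 0

def under(X, Y):   # X \ Y = (X^t Y')'
    return ~bool_matmul(X.T, ~Y)

def over(X, Y):    # X / Y = (X' Y^t)'
    return ~bool_matmul(~X, Y.T)

X = rng.integers(0, 2, (n, n)).astype(bool)
Y = rng.integers(0, 2, (n, n)).astype(bool)

# Z <= X\Y  iff  X.Z <= Y, checked over every Boolean 3x3 matrix Z.
for bits in product([False, True], repeat=n * n):
    Z = np.array(bits).reshape(n, n)
    lhs = bool(np.all(Z <= under(X, Y)))
    rhs = bool(np.all(bool_matmul(X, Z) <= Y))
    assert lhs == rhs
print("residuation law verified")
```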
https://en.wikipedia.org/wiki/Residuated_mapping
In mathematics , specifically in group theory , residue-class-wise affine groups are certain permutation groups acting on Z (the integers ), whose elements are bijective residue-class-wise affine mappings .

A mapping f : Z → Z is called residue-class-wise affine if there is a nonzero integer m such that the restrictions of f to the residue classes (mod m ) are all affine . This means that for any residue class r ( m ) ∈ Z / m Z there are coefficients a_r(m) , b_r(m) , c_r(m) ∈ Z such that the restriction of the mapping f to the set r ( m ) = { r + km ∣ k ∈ Z } is given by n ↦ ( a_r(m) · n + b_r(m) ) / c_r(m) .

Residue-class-wise affine groups are countable , and they are accessible to computational investigations . Many of them act multiply transitively on Z or on subsets thereof.

A particularly basic type of residue-class-wise affine permutation is the class transposition : given disjoint residue classes r1 ( m1 ) and r2 ( m2 ), the corresponding class transposition is the permutation of Z which interchanges r1 + k m1 and r2 + k m2 for every k ∈ Z and which fixes everything else. Here it is assumed that 0 ≤ r1 < m1 and that 0 ≤ r2 < m2 . The set of all class transpositions of Z generates a countable simple group with a number of notable properties.

It is straightforward to generalize the notion of a residue-class-wise affine group to groups acting on suitable rings other than Z , though little work in this direction has been done so far. See also the Collatz conjecture , which is an assertion about a surjective , but not injective , residue-class-wise affine mapping.
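A class transposition is easy to implement directly from the definition. The sketch below is a small illustrative Python function (the name, example classes and checks are ours, not from any library): it applies the class transposition interchanging r1 + k·m1 with r2 + k·m2 and verifies that it is an involution on a sample range.

```python
def class_transposition(r1: int, m1: int, r2: int, m2: int):
    """Return the permutation of the integers that swaps r1 + k*m1
    with r2 + k*m2 for every integer k and fixes everything else.

    Assumes 0 <= r1 < m1, 0 <= r2 < m2 and that the residue classes
    r1(m1) and r2(m2) are disjoint.
    """
    def sigma(n: int) -> int:
        if (n - r1) % m1 == 0:    # n lies in the class r1(m1)
            return r2 + ((n - r1) // m1) * m2
        if (n - r2) % m2 == 0:    # n lies in the class r2(m2)
            return r1 + ((n - r2) // m2) * m1
        return n                  # fixed otherwise
    return sigma

# Example: swap the classes 0(2) and 1(4); e.g. 0 <-> 1, 2 <-> 5, 4 <-> 9.
tau = class_transposition(0, 2, 1, 4)
assert [tau(n) for n in range(8)] == [1, 0, 5, 3, 9, 2, 13, 7]
# A class transposition is an involution: applying it twice is the identity.
assert all(tau(tau(n)) == n for n in range(-50, 50))
```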
https://en.wikipedia.org/wiki/Residue-class-wise_affine_group
In climate engineering , the residue-to-product ratio (RPR) is used to calculate how much unused crop residue might be left after harvesting a particular crop. Also called the residue yield or straw/grain ratio , the equation takes the mass of residue divided by the mass of crop produced, and the result is dimensionless. [ 1 ] The RPR can be used to project costs and benefits of bio-energy projects, and is crucial in determining financial sustainability. The RPR is particularly important for estimating the production of biochar , a beneficial farm input obtained from crop residues through pyrolysis . However, it is important to note that RPR values are rough estimates taken from broad production statistics, and can vary greatly depending on crop variety, climate, processing, and residual moisture content. [ 2 ]
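The ratio itself is a one-line calculation. The following Python sketch is purely illustrative (the crop figure and the RPR value of 1.5 are assumed placeholder numbers, not measured data), showing how an RPR is applied to estimate the residue available from a harvest.

```python
def residue_mass(crop_mass_t: float, rpr: float) -> float:
    """Estimated residue mass (tonnes) from crop mass and a
    residue-to-product ratio (both on the same moisture basis)."""
    return crop_mass_t * rpr

# Hypothetical example: 10 t of grain with an assumed straw/grain
# RPR of 1.5 suggests roughly 15 t of straw.
print(residue_mass(10.0, 1.5))  # -> 15.0
```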
https://en.wikipedia.org/wiki/Residue-to-product_ratio
In chemistry , residue is whatever remains or acts as a contaminant after a given class of events. Residue may be the material remaining after a process of preparation, separation, or purification, such as distillation , evaporation , or filtration . It may also denote the undesired by-products of a chemical reaction .

Residues as undesired by-products are a concern in the agricultural and food industries. Toxic chemical residues (wastes or contamination from other processes) are a concern in food safety. The most common food residues originate from pesticides, veterinary drugs, and industrial chemicals. [ 1 ] For example, the U.S. Food and Drug Administration (FDA) and the Canadian Food Inspection Agency (CFIA) have guidelines for detecting chemical residues that are possibly dangerous to consume. [ 2 ] [ 3 ] In the U.S., the FDA is responsible for setting guidelines while other organizations enforce them. Similar to the food industry, in the environmental sciences residue also refers to chemical contaminants. Residues in the environment are often the result of industrial processes, such as escaped chemicals from mining processing, fuel leaks during industrial transportation, trace amounts of radioactive material, and excess pesticides that enter the soil. [ 4 ]

Residue may refer to an atom or a group of atoms that forms part of a molecule , such as a methyl group . In biochemistry and molecular biology , the term residue refers to a specific monomer within the polymeric chain of a polysaccharide , protein or nucleic acid . In proteins, the carboxyl group of one amino acid links with the amino group of another amino acid to form a peptide . This results in the removal of water, and what remains is called the residue. In naming residues, the word acid is replaced with residue . [ 5 ] A residue's properties will influence interactions with other residues and the overall chemical properties of the protein it resides in. One might say, "This protein consists of 118 amino acid residues" or "The histidine residue is considered to be basic because it contains an imidazole ring." Note that a residue is different from a moiety , which, in the above example, would be constituted by the imidazole ring or the imidazole moiety . A DNA or RNA residue is a single nucleotide in a nucleic acid . Examples of residues in DNA are the bases "A", "T", "G", and "C".
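The "loss of water" description has a simple quantitative counterpart: a residue's mass is the free amino acid's mass minus one water (about 18.02 Da), and a peptide's mass is the sum of its residue masses plus one water for the free termini. The Python sketch below is illustrative; the masses used are standard average values, rounded.

```python
WATER = 18.02  # Da, mass lost per peptide bond formed

# Average masses (Da) of a few free amino acids, rounded.
AMINO_ACID_MASS = {"G": 75.07, "A": 89.09, "H": 155.15}

def residue_mass(aa: str) -> float:
    """Mass of the amino acid residue = free amino acid minus water."""
    return AMINO_ACID_MASS[aa] - WATER

def peptide_mass(sequence: str) -> float:
    """Sum of residue masses plus one water for the free termini."""
    return sum(residue_mass(aa) for aa in sequence) + WATER

print(f"{residue_mass('G'):.2f}")    # glycine residue, about 57.05 Da
print(f"{peptide_mass('GAG'):.2f}")  # tripeptide Gly-Ala-Gly
```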
https://en.wikipedia.org/wiki/Residue_(chemistry)
A residue curve describes the change in the composition of the liquid phase of a chemical mixture during continuous evaporation at the condition of vapor–liquid equilibrium (open distillation). Multiple residue curves for a single system are called a residue curve map . Residue curves allow testing the feasibility of a separation of mixtures and therefore are a valuable tool in designing distillation processes. Residue curve maps are typically used for examining ternary mixtures which cannot easily be separated by distillation because of azeotropic points or too-small relative volatilities .

Pure components and azeotropic points are called nodes . Three different types are possible: stable nodes, unstable nodes and saddles. The distillation regions and the nodes constitute the topology of the mixture.

The calculation of residue curves is done by solving the mass balance over time by numerical integration with methods like Runge–Kutta :

dx/dξ = x − y

with x: vector of liquid compositions in mole fractions [mol/mol]; y: vector of vapor compositions in mole fractions [mol/mol]; ξ: dimensionless time. The integration of this equation can be done forward and backward in time, allowing the calculation from any feed composition to the beginning and end of the residue curve.

The ternary mixture of chloroform, methanol and acetone has three binary azeotropes and one ternary azeotrope. Together with the three pure components, the system has seven nodes, which altogether form four distillation regions. Two nodes are stable: pure methanol and the binary azeotrope of chloroform and acetone, which each have the lowest vapor pressure (isothermal calculation) in the regions they belong to. The other two binary azeotropes are unstable nodes; they have the highest vapor pressure in their regions. The other nodes are saddles (the ternary azeotrope, pure acetone and pure chloroform). The border lines in this system connect the ternary azeotrope (saddle) with the two stable nodes and the two unstable nodes. Residue curves always move away from an unstable node toward a saddle, but never reach it, turning instead toward a stable node.
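The integration described above can be sketched in a few lines. The following Python example is illustrative only: instead of a real activity-coefficient model (which a strongly non-ideal system like chloroform-methanol-acetone would require), it assumes ideal vapor-liquid equilibrium with constant relative volatilities, and integrates dx/dξ = x − y with a fixed-step fourth-order Runge-Kutta scheme.

```python
import numpy as np

alpha = np.array([4.0, 2.0, 1.0])   # assumed constant relative volatilities

def vapor_composition(x):
    """Ideal VLE with constant relative volatility:
    y_i = alpha_i * x_i / sum_j(alpha_j * x_j)."""
    return alpha * x / np.sum(alpha * x)

def rhs(x):
    """Residue curve equation dx/dxi = x - y."""
    return x - vapor_composition(x)

def integrate_residue_curve(x0, step=0.05, n_steps=200):
    """Classical 4th-order Runge-Kutta; a negative step traces backward."""
    x = np.array(x0, dtype=float)
    curve = [x.copy()]
    for _ in range(n_steps):
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * step * k1)
        k3 = rhs(x + 0.5 * step * k2)
        k4 = rhs(x + step * k3)
        x = x + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        x = np.clip(x, 1e-12, None)
        x /= x.sum()                 # keep mole fractions normalized
        curve.append(x.copy())
    return np.array(curve)

# Forward integration: the liquid is depleted in the light components
# and the curve approaches the heaviest (least volatile) component.
curve = integrate_residue_curve([0.3, 0.3, 0.4])
print(curve[0], "->", curve[-1])     # ends near [0, 0, 1]
```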
https://en.wikipedia.org/wiki/Residue_curve
In the fields of engineering and construction, resilience is the ability to absorb or avoid damage without suffering complete failure, and is an objective of design, maintenance and restoration for buildings and infrastructure , as well as communities. [ 1 ] [ 2 ] [ 3 ] A more comprehensive definition is the ability to respond to, absorb, and adapt to, as well as recover from, a disruptive event. [ 4 ] A resilient structure/system/community is expected to be able to resist an extreme event with minimal damage and functionality disruptions during the event; after the event, it should be able to rapidly recover its functionality to a level similar to or even better than the pre-event level. [ 5 ]

The concept of resilience originated in engineering and was then gradually applied to other fields. It is related to that of vulnerability. Both terms are specific to the event perturbation, meaning that a system/infrastructure/community may be more vulnerable or less resilient to one event than to another. However, they are not the same. One obvious difference is that vulnerability focuses on the evaluation of system susceptibility in the pre-event phase; resilience emphasizes the dynamic features in the pre-event, during-event, and post-event phases. [ 6 ]

Resilience is a multi-faceted property, covering four dimensions: technical, organizational, social and economic. [ 7 ] Therefore, a single metric may not be sufficient to describe and quantify resilience. In engineering, resilience is characterized by four Rs: robustness, redundancy, resourcefulness, and rapidity. Current research studies have developed various ways to quantify resilience from multiple aspects, such as functionality- and socioeconomic-related aspects. [ 6 ] The built environment needs resilience to existing and emerging threats, such as severe wind storms or earthquakes, achieved by creating robustness and redundancy in building design; the implications of changing conditions for the efficiency of different approaches to design and planning can be addressed in these terms. [ 8 ] Engineering resilience has inspired other fields and influenced the way they interpret resilience, e.g. supply chain resilience .

According to the dictionary, resilience means "the ability to recover from difficulties or disturbance." The root of the term resilience is found in the Latin term 'resilio', which means to go back to a state or to spring back. [ 9 ] In the 1640s the term provided, in the field of the mechanics of materials, the meaning of "the ability of a material to absorb energy when it is elastically deformed and to release that energy upon unloading". By 1824, the term had developed to encompass the meaning of 'elasticity'. [ 10 ]

Thomas Tredgold was the first to introduce the concept of resilience, in 1818 in England. [ 11 ] The term was used to describe a property of the strength of timber, as beams were bent and deformed to support heavy loads. Tredgold found the timber durable, and it did not burn readily, despite being planted in bad soil conditions and exposed climates. [ 12 ] Resilience was then refined by Mallett in 1856 in relation to the capacity of specific materials to withstand specific disturbances. These definitions can be used in engineering resilience due to the application of a single material that has a stable equilibrium regime, rather than the complex adaptive stability of larger systems. [ 13 ] [ 14 ]

In the 1970s, researchers studied resilience in relation to child psychology and the exposure to certain risks.
Resilience was used to describe people who have "the ability to recover from adversity." One of the many researchers was Professor Sir Michael Rutter, who was concerned with a combination of risk experiences and their relative outcomes. [ 15 ]

In his paper Resilience and Stability of Ecological Systems (1973), C.S. Holling first explored the topic of resilience through its application to the field of ecology. Ecological resilience was defined as a "measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between state variables." [ 16 ] Holling found that such a framework can be applied to other forms of resilience. The application to ecosystems was later used to draw into other manners of human, cultural and social applications. The random events described by Holling are not only climatic; instability in natural systems can also occur through the impact of fires, changes in the forest community or the process of fishing. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. Multiple-state systems rather than objects should be studied, as the world is a heterogeneous space with various biological, physical and chemical characteristics. [ 17 ] Unlike material and engineering resilience, ecological and social resilience focus on the redundancy and persistence of multi-equilibrium states to maintain the existence of function.

Engineering resilience refers to the functionality of a system in relation to hazard mitigation. Within this framework, resilience is calculated based on the time it takes a system to return to a single-state equilibrium. [ 18 ] Researchers at MCEER (the Multidisciplinary Center for Earthquake Engineering Research) have identified four properties of resilience: robustness, resourcefulness, redundancy and rapidity. [ 19 ]

Social-ecological resilience, also known as adaptive resilience, [ 20 ] is a newer concept that shifts the focus to combining the social, ecological and technical domains of resilience. The adaptive model focuses on the transformable quality of the stable state of a system. In adaptive buildings, both short-term and long-term resilience are addressed to ensure that the system can withstand disturbances with social and physical capacities. Buildings operate at multiple scales and under varying conditions; therefore it is important to recognize that constant changes in architecture are expected. Laboy and Fannon recognize that the resilience model is shifting, and have applied the MCEER four properties of resilience to the planning, designing and operating phases of architecture. [ 18 ] Rather than using four properties to describe resilience, Laboy and Fannon suggest a 6R model that adds Recovery for the operation phase of a building and Risk Avoidance for the planning phase.

In the planning phase of a building, site selection, building placement and site conditions are crucial for risk avoidance. Early planning can help prepare for and design the built environment based on forces that we understand and perceive. In the operation phase of the building, a disturbance does not mark the end of resilience but should prompt a recovery plan for future adaptations. Disturbances should be used as a learning opportunity to assess mistakes and outcomes, and to reconfigure for future needs.

The International Building Code provides minimum requirements for buildings using performance-based standards.
The most recent International Building Code (IBC) was released in 2018 by the International Code Council (ICC), focusing on standards that protect public health, safety and welfare without restricting the use of certain building methods. The code addresses several categories, which are updated every three years to incorporate new technologies and changes. Building codes are fundamental to the resilience of communities and their buildings, as "Resilience in the built environment starts with strong, regularly adopted and properly administered building codes". [ 21 ] Benefits accrue from the adoption of codes: the National Institute of Building Sciences (NIBS) found that the adoption of the International Building Code provides an $11 benefit for every $1 invested. [ 22 ]

The International Code Council is focused on ensuring that a community's buildings support the resilience of the community ahead of disasters. The process presented by the ICC includes understanding the risks, identifying strategies for the risks, and implementing those strategies. Risks vary based on communities, geographies and other factors. The American Institute of Architects created a list of shocks and stresses that are related to certain community characteristics. Shocks are natural forms of hazards (floods, earthquakes), while stresses are more chronic events that can develop over a longer period of time (affordability, drought). It is important to understand the application of resilient design to both shocks and stresses, as buildings can play a part in contributing to their resolution.

Even though the IBC is a model code, it is adopted by various state and local governments to regulate specific building areas. Most of the approaches to minimizing risks are organized around building use and occupancy. In addition, the safety of a structure is determined by material usage and framing, and structural requirements can provide a high level of protection for occupants. Specific requirements and strategies are provided for each shock or stress, such as tsunamis, fires and earthquakes. [ 23 ]

The U.S. Resiliency Council (USRC), a non-profit organization, created the USRC Rating System, which describes the expected impacts of a natural disaster on new and existing buildings. The rating considers the building prior to its use through its structure, mechanical-electrical systems and material usage. Currently, the program is in its pilot stage, focusing primarily on earthquake preparedness and resilience. For earthquake hazards, the rating relies heavily on the design requirements set by the building codes. Buildings can obtain one of two types of USRC ratings, described below.

The Verified Rating is used for marketing and publicity purposes using badges. The rating is easy to understand, credible and transparent, and is awarded by professionals. The USRC building rating system rates buildings with stars ranging from one to five based on the dimensions used in the system. The three dimensions that the USRC uses are Safety, Damage and Recovery. Safety describes the prevention of potential harm to people after an event. Damage describes the estimated repair required due to replacements and losses. Recovery is calculated based on the time it takes for the building to regain function after a shock. [ 24 ] An Earthquake Building Rating can be obtained through hazard evaluation and seismic testing.
In addition to the technical review provided by the USRC, a CRP seismic analysis applies for a USRC rating with the required documentation. [ 24 ] The USRC is planning to create similar standards for other natural hazards such as floods, storms and winds. The Transaction Rating provides a building with a report covering risk exposure, possible investments and benefits. This rating remains confidential with the USRC and is not used to publicize or market the building.

Due to the current focus on seismic interventions, the USRC does not take into consideration several parts of a building. The USRC building rating system does not take into consideration any changes to the design of the building that might occur after the rating is awarded. Therefore, changes that might impede the resilience of a building would not affect the rating that the building was awarded. In addition, changes in the use of the building after certification, which might include the use of hazardous materials, would not affect the rating certification of the building. The damage rating does not include damage caused by pipe breakage, building upgrades or damage to furnishings. The recovery rating does not cover fully restoring all building function and repairing all damage, but only a certain amount.

In 2013, the 100 Resilient Cities program was initiated by the Rockefeller Foundation , with the goal of helping cities become more resilient to physical, social and economic shocks and stresses. The program helps facilitate resilience plans in cities around the world through access to tools, funding and global network partners such as ARUP and the AIA. Of the 1,000 cities that applied to join the program, only 100 were selected, with challenges ranging from aging populations, cyber attacks and severe storms to drug abuse. There are many cities that are members of the program, but in the article "Building up resilience in cities worldwide", Spaans and Waterhout focus on the city of Rotterdam to compare the city's resilience before and after participation in the program. The authors found that the program broadened the scope of, and improved, the resilience plan of Rotterdam by including access to water, data, clean air, cyber robustness, and safe water. The program addresses other social stresses that can weaken the resilience of cities, such as violence and unemployment. Therefore, cities are able to reflect on their current situation and plan to adapt to new shocks and stresses. [ 25 ] The findings of the article support the understanding of resilience at a larger urban scale, which requires an integrated approach with coordination across multiple government scales, time scales and fields.

In addition to integrating resilience into building codes and building certification programs, the 100 Resilient Cities program provides other support opportunities that can help increase awareness through non-profit organizations. [ 25 ] After more than six years of growth and change, the 100 Resilient Cities organization concluded on July 31, 2019. [ 26 ]

RELi is a set of design criteria used to develop resilience at multiple scales of the built environment, such as buildings, neighborhoods and infrastructure. It was developed by the Institute for Market Transformation to Sustainability (MTS) to help designers plan for hazards. [ 27 ] RELi is very similar to LEED but with a focus on resilience. RELi is now owned by the U.S. Green Building Council (USGBC) and available to projects seeking LEED certification.
The first version of RELi was released in 2014; it is currently still in the pilot phase, with no points allocated for specific credits. RELi accreditation is not required, and the use of the credit information is voluntary. Therefore, the current point system is still to be determined and does not have a tangible value. RELi provides a credit catalog, organized into several categories, that is used as a reference guide for building design and expands on the RELi definition of resilience as follows: Resilient Design pursues Buildings + Communities that are shock-resistant, healthy, adaptable and regenerative through a combination of diversity, foresight and the capacity for self-organization and learning. A Resilient Society can withstand shocks and rebuild itself when necessary. It requires humans to embrace their capacity to anticipate, plan and adapt for the future. [ 28 ]

The RELi catalog considers multiple scales of intervention, with requirements for a panoramic approach, risk adaptation and mitigation for acute events, and comprehensive adaptation and mitigation for the present and future. RELi's framework focuses strongly on social issues for community resilience, such as providing community spaces and organizations. RELi also combines specific hazard designs, such as flood preparedness, with general strategies for energy and water efficiency.

The RELi program complements and expands on other popular rating systems such as LEED, Envision and the Living Building Challenge. The menu format of the catalog allows users to easily navigate the credits and recognize the goals achieved by RELi. References to other rating systems can help increase awareness of RELi and the credibility of its use. The reference for each credit is listed in the catalog for ease of access. [ 28 ] In 2018, three new LEED pilot credits were released to increase awareness of specific natural and man-made disasters. The pilot credits are found in the Integrative Process category and are applicable to all Building Design and Construction rating systems. [ 29 ] As LEED credits overlap with RELi rating system credits, the USGBC has been refining RELi to better synthesize with the LEED resilient design pilot credits.

It is important to assess current climate data and design in preparation for changes or threats to the environment; resilience plans and passive design strategies differ across climatic conditions. [ 31 ] Determining and assessing vulnerabilities of the built environment in specific locations is crucial for creating a resilience plan. Disasters lead to a wide range of consequences, such as damaged buildings and ecosystems and human losses. For example, the earthquake that took place in Wenchuan County in 2008 led to major landslides which relocated an entire city district, Old Beichuan. [ 33 ] Natural hazards call for hazard-specific strategies in resilience assessment: there are multiple strategies for protecting structures against hurricanes, based on wind and rain loads, and earthquakes can result in structural damage and the collapse of buildings due to high stresses on building frames.

It is difficult to compare the concepts of resilience and sustainability due to the various scholarly definitions that have been used in the field over the years.
Many policies and academic publications on both topics either provide their own definitions of both concepts or lack a clear definition of the type of resilience they seek. Even though sustainability is a well-established term, there are generic interpretations of the concept and its focus. Sanchez et al. proposed a new characterization, 'sustainable resilience', which expands social-ecological resilience to include more sustained and long-term approaches. Sustainable resilience focuses not only on the outcomes, but also on the processes and policy structures of the implementation. [ 34 ]

Both concepts share essential assumptions and goals, such as passive survivability and persistence of a system's operation over time and in response to disturbances. There is also a shared focus on climate change mitigation, as they both appear in larger frameworks such as building codes and building certification programs. Holling and Walker argue that "a resilient social-ecological system is synonymous with a region that is ecologically, economically and socially sustainable." [ 35 ] Other scholars, such as Perrings, state that "a development strategy is not sustainable if it is not resilient." [ 36 ] [ 37 ] Therefore, the two concepts are intertwined and cannot be successful individually, as they are dependent on one another. For example, in RELi, LEED and other building certifications, providing access to safe water and an energy source is crucial before, during and after a disturbance. [ 35 ]

Some scholars argue that resilience and sustainability tactics target different goals. Paula Melton argues that resilience focuses on design for the unpredictable, while sustainability focuses on climate-responsive design. Some forms of resilience, such as adaptive resilience, focus on designs that can adapt and change based on a shock event; sustainable design, on the other hand, focuses on systems that are efficient and optimized. [ 38 ]

The first influential quantitative resilience metric based on the functionality recovery curve was proposed by Bruneau et al., [ 7 ] where resilience is quantified as the resilience loss, as follows:

R_L = ∫ from t_0 to t_f of [ 100% − Q(t) ] dt

where Q(t) is the functionality at time t; t_0 is the time when the event strikes; t_f is the time when the functionality fully recovers. The resilience loss takes only positive values. It has the advantage of being easily generalized to different structures, infrastructures, and communities. This definition assumes that the functionality is 100% pre-event and will eventually be recovered to a full functionality of 100%. This may not be true in practice: a system may be partially functional when a hurricane strikes, and may not be fully recovered due to an uneconomic cost-benefit ratio.

The resilience index is a normalized metric between 0 and 1, computed from the functionality recovery curve: [ 39 ]

R = ( ∫ from t_0 to t_h of Q(t) dt ) / ( t_h − t_0 )

where Q(t) is the functionality at time t; t_0 is the time when the event strikes; t_h is the time horizon of interest.
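Both metrics are straightforward to compute from a sampled functionality curve. The sketch below is an illustrative Python example (the function names and the linear recovery curve are ours, for demonstration) that evaluates the resilience loss and the resilience index by trapezoidal integration.

```python
import numpy as np

def resilience_loss(t, q):
    """Bruneau et al. resilience loss: integral of (1 - Q(t)) dt,
    with functionality Q expressed as a fraction in [0, 1]."""
    return float(np.sum((1.0 - 0.5 * (q[1:] + q[:-1])) * np.diff(t)))

def resilience_index(t, q):
    """Normalized resilience index: mean functionality over [t0, th]."""
    area = float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t)))
    return area / (t[-1] - t[0])

# Demonstration: functionality drops from 1.0 to 0.4 at t = 0 and
# recovers linearly over 50 days; the time horizon is 100 days.
t = np.linspace(0.0, 100.0, 1001)
q = np.where(t <= 50.0, 0.4 + 0.6 * t / 50.0, 1.0)

print(f"resilience loss : {resilience_loss(t, q):.1f} (functionality-days)")
print(f"resilience index: {resilience_index(t, q):.3f}")
# Expected: loss = 0.5 * 0.6 * 50 = 15 functionality-days; index = 0.85.
```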
https://en.wikipedia.org/wiki/Resilience_(engineering_and_construction)
In mathematical modeling , resilience refers to the ability of a dynamical system to recover from perturbations and return to its original stable steady state . [ 1 ] It is a measure of the stability and robustness of a system in the face of changes or disturbances. If a system is not resilient enough, it is more susceptible to perturbations and can more easily undergo a critical transition .

A common analogy used to explain the concept of the resilience of an equilibrium is a ball in a valley. A resilient steady state corresponds to a ball in a deep valley, so any push or perturbation will very quickly lead the ball to return to the resting point where it started. On the other hand, a less resilient steady state corresponds to a ball in a shallow valley, so the ball will take a much longer time to return to the equilibrium after a perturbation.

The concept of resilience is particularly useful in systems that exhibit tipping points , whose study has a long history that can be traced back to catastrophe theory . While this theory was initially overhyped and fell out of favor, its mathematical foundation remains strong and is now recognized as relevant to many different systems. [ 2 ] [ 3 ]

In 1973, Canadian ecologist C. S. Holling proposed a definition of resilience in the context of ecological systems. According to Holling, resilience is "a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables". Holling distinguished two types of resilience: engineering resilience and ecological resilience . [ 4 ] Engineering resilience refers to the ability of a system to return to its original state after a disturbance, such as a bridge that can be repaired after an earthquake. Ecological resilience, on the other hand, refers to the ability of a system to maintain its identity and function despite a disturbance, such as a forest that can regenerate after a wildfire while maintaining its biodiversity and ecosystem services . With time, the once well-defined and unambiguous concept of resilience has experienced a gradual erosion of its clarity, becoming more vague and closer to an umbrella term than a specific concrete measure. [ 5 ]

Mathematically, resilience can be approximated by the inverse of the return time to an equilibrium, [ 6 ] [ 7 ] [ 8 ] given by

resilience ≡ −Re( λ_1 ( A ) )

where λ_1 is the eigenvalue with the largest real part of the matrix A describing the dynamics linearized around the steady state. The larger this value is, the faster a system returns to the original stable steady state, or in other words, the faster the perturbations decay. [ 9 ]

In ecology , resilience might refer to the ability of the ecosystem to recover from disturbances such as fires, droughts, or the introduction of invasive species. A resilient ecosystem would be one that is able to adapt to these changes and continue functioning, while a less resilient ecosystem might experience irreversible damage or collapse. [ 10 ] The exact definition of resilience has remained vague for practical matters, which has slowed the application of its insights to the management of ecosystems. [ 11 ]

In epidemiology , resilience may refer to the ability of a healthy community to recover from the introduction of infected individuals.
That is, a resilient system is more likely to remain at the disease-free equilibrium after the invasion of a new infection. Some stable systems exhibit critical slowing down where, as they approach a basic reproduction number of 1, their resilience decreases, hence taking a longer time to return to the disease-free steady state. [ 12 ] Resilience is an important concept in the study of complex systems , where there are many interacting components that can affect each other in unpredictable ways. [ 13 ] Mathematical models can be used to explore the resilience of such systems and to identify strategies for improving their resilience in the face of environmental or other changes. For example, when modelling networks it is often important to be able to quantify network resilience, or network robustness , to the loss of nodes. Scale-free networks are particularly resilient [ 14 ] since most of their nodes have few links. This means that if some nodes are randomly removed, it is more likely that the nodes with fewer connections are taken out, thus preserving the key properties of the network. [ 15 ]
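The eigenvalue formula above can be sketched in a few lines of Python. The example is illustrative (the two Jacobian matrices are made-up stable systems, a "deep valley" and a "shallow valley"): resilience is read off as minus the largest real part over the eigenvalues of the linearization.

```python
import numpy as np

def resilience(jacobian: np.ndarray) -> float:
    """Resilience of a stable steady state: -Re(lambda_1), where
    lambda_1 is the eigenvalue with the largest real part of the
    Jacobian of the dynamics linearized at the equilibrium."""
    eigenvalues = np.linalg.eigvals(jacobian)
    return -float(np.max(eigenvalues.real))

# Two made-up stable linearizations: a deep and a shallow valley.
deep    = np.array([[-3.0,  0.5],
                    [ 0.2, -2.0]])
shallow = np.array([[-0.3,  0.1],
                    [ 0.1, -0.2]])

print(f"deep valley   : {resilience(deep):.3f}")     # larger -> fast return
print(f"shallow valley: {resilience(shallow):.3f}")  # smaller -> slow return
# Perturbations decay roughly like exp(-resilience * t) near the equilibrium.
```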
https://en.wikipedia.org/wiki/Resilience_(mathematics)
Definitions of power system resilience vary, but a commonly cited definition was proposed by the Federal Energy Regulatory Commission (FERC), which defines it as "the ability to withstand and reduce the magnitude and/or duration of disruptive events, which includes the capacity to anticipate, absorb, adapt to, and/or rapidly recover from such an event". [ 1 ] Attributes that define a resilient system are, for example, predictive capacity, robustness, adaptability, rate of recovery and adaptive capacity. [ 2 ] All systems have different vulnerabilities, and resilience is therefore studied in both a system- and event-specific manner. [ 3 ]

The event scope for resilience is often defined as high-impact low-probability events, or extreme events. [ 4 ] However, as extreme climate-related events and cybersecurity events have grown more common, a more universal definition is that resilience deals with N-k events with k > 5. [ 5 ] High-impact low-probability events can be caused by, for example, extreme weather, human error, ageing infrastructure or cyberattacks. These events cause significant devastation over vast areas for an extended period. For example, a severe storm can knock out power to a large geographical area, while a cyberattack on the communication systems can disrupt the entire power grid . As these events are rare, there is limited data available on their effects, which makes calculating the risks of the events challenging. Additionally, the interdependence of different infrastructures, such as energy, transportation, and communication, can exacerbate the impact of a disruptive event. Therefore, strategic resilience planning for mitigating the negative effects of future events requires a comprehensive approach that considers a range of potential disruptive events and their potential impact across the power system infrastructure. [ 6 ]

Regardless of the causes, one growing concern is that power outages result in economic losses and hardship for people who have become increasingly reliant on electricity for even basic comforts. It is therefore essential that electrical power systems (EPSs) around the world are resilient. A resilient EPS should ensure an uninterrupted power supply, even in the face of minor faults and major disruptive events. It should be robust enough to be reliable and have the ability to predict and prepare for potential outages. Additionally, a resilient EPS should have a mechanism to quickly recover and restore power to critical establishments. However, while power system reliability is well-defined and has established metrics in the electricity sector, resiliency is often confused with reliability, although the two concepts, despite some similarities, are distinct. [ 7 ]

According to the findings of a National Academies report, the smooth operation of the electric grid, which is organized in a hierarchical structure and tightly interconnected on a large scale, will remain crucial for ensuring dependable electric service to the majority of consumers over the next two decades. [ 8 ] Power disruptions are problematic for both consumers and the electric system itself. These disruptions are typically caused by physical damage to local parts of the system, such as lightning strikes, falling trees, or equipment failure. The majority of outages affecting customers in the United States are caused by events that occur in the distribution system, while larger storms, natural phenomena, and operator errors can cause outages across the high-voltage system.
A variety of events, such as hurricanes, ice storms, droughts, earthquakes, wildfires, and vandalism, can lead to outages. When power goes out, life becomes more challenging, especially in terms of communication, business operations, and traffic control. Brief outages are usually manageable, but longer and wider outages result in greater costs and inconveniences. Critical services like medical care, emergency services, and communications can be disrupted, leading to potential loss of life. This report focuses on building a resilient electric system that minimizes the adverse impacts of large outages, particularly blackouts that last several days or longer and extend over multiple areas or states, which are particularly problematic for a modern economy that depends on reliable electric supply. [ 9 ] Despite the efforts of utilities to prevent and mitigate large-scale power outages, they still occur and cannot be eliminated due to the numerous potential sources of disruption to the power system. It is somewhat surprising that such outages are not more frequent, considering the magnitude of the system and the potential for problems. However, the planners and operators of the system have made great efforts over many years to ensure that the electric system is engineered and operated with a high level of reliability. In recent times, there has been an increased emphasis on resilience as well. The North American Electric Reliability Corporation (NERC), which is responsible for developing reliability standards for the bulk power system, defines reliability in terms of two fundamental concepts: adequacy and operating reliability. [ 10 ] The system's reliability standards vary in practice, and while the bulk power system maintains a relatively high level of reliability throughout the United States, it cannot be made completely faultless due to its complexity as a " cyber-physical system ." To ensure adequacy of electricity generation capability, a one-day-in-ten-years loss-of-load standard is commonly used, which means that the generation reserves must be sufficient to prevent involuntary load shedding due to inadequate supply from occurring more than once every ten years. However, with millions of intricate physical, communications, computational, and networked components and systems, the system is inherently complex and cannot attain perfect reliability. Resilience and reliability are two different concepts. Resilience, as defined by the Random House Dictionary of the English Language , refers to the ability to return to the original state after being stretched, compressed, or bent. Moreover, resilience involves recovering from adversity, illness, depression, or other similar situations. It also encompasses the ability to rebound and cope with outages effectively by reducing their impacts, regrouping quickly and efficiently after the event ends, and learning to handle future events better. [ 11 ] Climate-related issues have intensified the attention on energy sustainability and resilience. In the United States, electric utility firms have registered over 2500 significant power outages since 2002, with almost half of them (specifically 1172) attributed to weather events, including storms, hurricanes, and other unspecified severe weather occurrences. [ 12 ] These incidents often lead to significant economic losses. [ 7 ] The Committee on Enhancing the Resilience of the Nation's Electric Power Transmission and Distribution System has developed strategies that seek to reduce the impact of large-scale, long-duration outages.
Resilience is not just about preventing these outages from happening, but also limiting their scope and impact, restoring power quickly, and preparing for future events. [ 8 ] Some parts of the United States still rely on regulated, vertically integrated utilities, while others have adopted competitive markets. Efforts to improve resilience must take into account this institutional and policy heterogeneity. [ 8 ] The use of automation at the high-voltage level can improve grid reliability, but also introduces cybersecurity vulnerabilities. These " smart grids " use improved sensing, communication, automation technologies, and advanced metering infrastructure . [ 8 ] Distributed energy resources are rapidly growing in some states, but most U.S. customers will continue to depend on the large-scale, interconnected, and hierarchically structured electric grid. Therefore, strategies to enhance electric power resilience must consider a diverse set of technical and institutional arrangements and a wide variety of hazards. There is no single solution that fits all situations when it comes to avoiding, planning for, coping with, and recovering from major outages. [ 8 ]
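The one-day-in-ten-years loss-of-load standard mentioned above lends itself to a simple Monte Carlo estimate. The sketch below is a rough illustration only: the generating fleet, forced-outage rates, and daily peak loads are invented numbers, and a real adequacy study would model load shapes, maintenance schedules, and transmission in far more detail.

import random

# Invented fleet: (capacity in MW, forced-outage rate) per unit; 7,000 MW installed.
UNITS = [(500, 0.05)] * 10 + [(200, 0.08)] * 10

def available_capacity():
    """Capacity surviving independent random forced outages on one day."""
    return sum(cap for cap, outage_rate in UNITS if random.random() > outage_rate)

def lole_days_per_year(daily_peaks, years=1_000):
    """Monte Carlo loss-of-load expectation, in days per simulated year."""
    shortfall_days = sum(
        1
        for _ in range(years)
        for load in daily_peaks
        if available_capacity() < load
    )
    return shortfall_days / years

# 365 invented daily peaks between 4,000 and 6,200 MW.
peaks = [4_000 + 2_200 * random.random() for _ in range(365)]
print(f"LOLE ~ {lole_days_per_year(peaks):.3f} days/year "
      f"(one-day-in-ten-years standard: 0.1)")

If the estimated LOLE exceeds 0.1 days per year, the modeled fleet would fail the adequacy standard and more reserve capacity would be needed.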
https://en.wikipedia.org/wiki/Resilience_(power_system)
Resilience engineering is a subfield of safety science research that focuses on understanding how complex adaptive systems cope when encountering a surprise. The term resilience in this context refers to the capabilities that a system must possess in order to deal effectively with unanticipated events. [ 1 ] Resilience engineering examines how systems build, sustain, degrade, and lose these capabilities. [ 2 ] Resilience engineering researchers have studied multiple safety-critical domains , including aviation , anesthesia , fire safety , space mission control, military operations , power plants, air traffic control, rail engineering , health care, and emergency response to both natural and industrial disasters. [ 2 ] [ 3 ] [ 4 ] Resilience engineering researchers have also studied the non-safety-critical domain of software operations. [ 5 ] Whereas other approaches to safety (e.g., behavior-based safety , probabilistic risk assessment ) focus on designing controls to prevent or mitigate specific known hazards (e.g., hazard analysis ), or on assuring that a particular system is safe (e.g., safety cases ), resilience engineering looks at a more general capability of systems to deal with hazards that were not previously known before they were encountered. In particular, resilience engineering researchers study how people are able to cope effectively with complexity to ensure safe system operation, especially when they are experiencing time pressure. [ 6 ] Under the resilience engineering paradigm, accidents are not attributable to human error . Instead, the assumption is that humans working in a system are always faced with goal conflicts and limited resources, requiring them to constantly make trade-offs while under time pressure. When failures happen, they are understood as being due to the system temporarily being unable to cope with complexity. [ 7 ] Hence, resilience engineering is related to other perspectives in safety that have reassessed the nature of human error, such as the "new look", [ 8 ] the "new view", [ 9 ] "safety differently", [ 10 ] and Safety-II. [ 11 ] Because incidents often involve unforeseen challenges, resilience engineering researchers often use incident analysis as a research method. [ 4 ] [ 3 ] The first symposium on resilience engineering was held in October 2004 in Söderköping , Sweden. [ 6 ] It brought together fourteen safety science researchers with an interest in complex systems . [ 12 ] A second symposium on resilience engineering was held in November 2006 in Sophia Antipolis, France. [ 13 ] The symposium had eighty participants. [ 14 ] The Resilience Engineering Association , an association of researchers and practitioners with an interest in resilience engineering, continues to hold biennial symposia. [ 15 ] These symposia led to a series of books being published (see Books section below). This section discusses aspects of the resilience engineering perspective that are different from traditional approaches to safety. The resilience engineering perspective assumes that the work people do within a system that contributes to an accident is fundamentally the same as the work that contributes to successful outcomes. As a consequence, if work practices are only examined after an accident and are only interpreted in the context of the accident, the result of this analysis is subject to selection bias .
[ 12 ] The resilience engineering perspective posits that a significant number of failure modes are literally inconceivable in advance of them happening, because the environments that systems operate in are very dynamic and the perspectives of the people within the system are always inherently limited. [ 12 ] These sorts of events are sometimes referred to as fundamental surprise . Contrast this with the approach of probabilistic risk assessment , which focuses on evaluating conceivable risks. The resilience engineering perspective holds that human performance variability has positive effects as well as negative ones, and that safety is increased by amplifying the positive effects of human variability as well as adding controls to mitigate the negative effects. For example, the ability of humans to adapt their behavior based on novel circumstances is a positive effect that creates safety. [ 12 ] As a consequence, adding controls to mitigate the effects of human variability can reduce safety in certain circumstances. [ 16 ] Expert operators are an important source of resilience inside of systems. These operators become experts through previous experience at dealing with failures. [ 12 ] [ 17 ] Under the resilience engineering perspective, operators are always required to trade off risks. As a consequence, in order to create safety, it is sometimes necessary for a system to take on some risk. [ 12 ] The researcher Richard Cook distinguishes two separate kinds of work that tend to be conflated under the heading resilience engineering : [ 18 ] The first type of resilience engineering work is determining how to best take advantage of the resilience that is already present in the system. Cook uses the example of setting a broken bone as this type of work: the resilience is already present in the physiology of bone, and setting the bone uses this resilience to achieve better healing outcomes. Cook notes that this first type of resilience work does not require a deep understanding of the underlying mechanisms of resilience: humans had been setting bones long before the mechanism by which bone heals was understood. The second type of resilience engineering work involves altering mechanisms in the system in order to increase the amount of resilience. Cook uses the example of new drugs such as Abaloparatide and Teriparatide , which mimic Parathyroid hormone-related protein and are used to treat osteoporosis. Cook notes that this second type of resilience work requires a much deeper understanding of the underlying existing resilience mechanisms in order to create interventions that can effectively increase resilience. The safety researcher Erik Hollnagel views resilient performance as requiring four systemic potentials: the potentials to respond, to monitor, to learn, and to anticipate. [ 19 ] This has been described in a white paper from Eurocontrol on Systemic Potentials Management ( https://skybrary.aero/bookshelf/systemic-potentials-management-building-basis-resilient-performance ). The safety researcher David Woods considers two concepts in his definition of resilience: graceful extensibility and sustained adaptability. [ 20 ] These two concepts are elaborated in Woods's theory of graceful extensibility . Woods contrasts resilience with robustness , which is the ability of a system to deal effectively with potential challenges that were anticipated in advance. The safety researcher Richard Cook argued that bone should serve as the archetype for understanding what resilience is in the Woods perspective.
[ 18 ] Cook notes that bone has both graceful extensibility (it has a soft boundary at which it can extend function) and sustained adaptability (bone is constantly adapting through a dynamic balance between creation and destruction that is directed by mechanical strain). In Woods's view, there are three common patterns in the failure of complex adaptive systems . [ 21 ] In 2012 the growing interest in resilience engineering gave rise to the sub-field of Resilient Health Care. This led to a series of annual conferences on the topic that are still ongoing, as well as a series of books on Resilient Health Care, and in 2022 to the establishment of the Resilient Health Care Society (registered in Sweden; https://rhcs.se/ ).
https://en.wikipedia.org/wiki/Resilience_engineering
Resilient asphalt is a type of asphalt concrete designed to reduce aching of feet and joints from walking. [ 1 ] It has been used at the 1939 New York World's Fair [ 1 ] and on Main Street, USA in Walt Disney World . [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Resilient_asphalt
A resilient control system is one that maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature. [ 1 ] Computerized or digital control systems are used to reliably automate many industrial operations such as power plants or automobiles. The complexity of these systems and how the designers integrate them, the roles and responsibilities of the humans that interact with the systems, and the cyber security of these highly networked systems have led to a new paradigm in research philosophy for next-generation control systems. Resilient control systems consider all of these elements and those disciplines that contribute to a more effective design, such as cognitive psychology , computer science , and control engineering , to develop interdisciplinary solutions. These solutions consider things such as how to tailor the control system operating displays to best enable the user to make an accurate and reproducible response, how to design in cybersecurity protections such that the system defends itself from attack by changing its behaviors, and how to better integrate widely distributed computer control systems to prevent cascading failures that result in disruptions to critical industrial operations. In the context of cyber-physical systems, resilient control systems are an aspect that focuses on the unique interdependencies of a control system, as compared to information technology computer systems and networks, due to its importance in operating our critical industrial operations. Originally intended to provide a more efficient mechanism for controlling industrial operations, the development of digital control systems allowed for flexibility in integrating distributed sensors and operating logic while maintaining a centralized interface for human monitoring and interaction. [ 2 ] This ease of readily adding sensors and logic through software, which was once done with relays and isolated analog instruments, has led to wide acceptance and integration of these systems in all industries. However, these digital control systems have often been integrated in phases to cover different aspects of an industrial operation, connected over a network, leading to a complex interconnected and interdependent system. [ 3 ] While the control theory applied is often nothing more than a digital version of its analog counterpart, the dependence of digital control systems upon communications networks has precipitated the need for cybersecurity due to potential effects on the confidentiality, integrity and availability of information. [ 4 ] To achieve resilience in the next generation of control systems , therefore, addressing the complex control system interdependencies, including the human systems interaction and cybersecurity, will be a recognized challenge. [ 1 ] From a philosophical standpoint, advancing the area of resilient control systems requires a definition, metrics, and consideration of the challenges and the associated disciplinary fusion needed to address them. From these will follow the value proposition for investment and adoption. Each of these topics will be discussed in what follows, but for perspective consider Fig. 1. [ 5 ] Research in resilience engineering over the last decade has focused on two areas, organizational and information technology .
Organizational resilience considers the ability of an organization to adapt and survive in the face of threats, including the prevention or mitigation of unsafe, hazardous or compromising conditions that threaten its very existence. [ 6 ] Information technology resilience has been considered from a number of standpoints. [ 7 ] Networking resilience has been considered as quality of service . [ 8 ] Computing has considered such issues as dependability and performance in the face of unanticipated changes. [ 9 ] However, based upon the application of control dynamics to industrial processes, functionality and determinism are primary considerations that are not captured by the traditional objectives of information technology. [ 10 ] Considering the paradigm of control systems, one suggested definition is that "Resilient control systems are those that tolerate fluctuations via their structure, design parameters, control structure and control parameters". [ 11 ] However, this definition is taken from the perspective of control theory application to a control system; it does not directly consider the malicious actor and cyber security. That gap might suggest the definition "an effective reconstitution of control under attack from intelligent adversaries", which has also been proposed. [ 12 ] However, this definition focuses only on resilience in response to a malicious actor. To consider the cyber-physical aspects of a control system, a definition for resilience must consider both benign and malicious human interaction, in addition to the complex interdependencies of the control system application. [ 13 ] The term "recovery" has been used in the context of resilience , paralleling the response of a rubber ball that stays intact when a force is exerted on it and recovers its original dimensions after the force is removed. [ 14 ] Considering the rubber ball in terms of a system, resilience could then be defined as its ability to maintain a desired level of performance or normalcy without irrecoverable consequences. While resilience in this context is based upon the yield strength of the ball, control systems require an interaction with the environment, namely the sensors, valves, and pumps that make up the industrial operation. To be reactive to this environment, control systems require an awareness of their state to make corrective changes to the industrial process to maintain normalcy. [ 15 ] With this in mind, in consideration of the discussed cyber-physical aspects of human systems integration and cyber security, as well as other definitions for resilience at a broader critical infrastructure level, [ 16 ] [ 17 ] the following can be deduced as a definition of a resilient control system: A resilient control system is one that maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature. [ 1 ] Considering the flow of a digital control system as a basis, a resilient control system framework can be designed. Referring to the left side of Fig. 2, a resilient control system holistically considers the measures of performance or normalcy for the state space . At the center, an understanding of performance and priority provides the basis for an appropriate response by a combination of human and automation, embedded within a multi-agent , semi-autonomous framework. Finally, to the right, information must be tailored to the consumer to address the need and position a desirable response.
Several examples or scenarios of how resilience differs from and provides benefit to traditional control system design are available in the literature. [ 18 ] [ 13 ] Some primary tenets of resilience , as contrasted with traditional reliability, have presented themselves in considering an integrated approach to resilient control systems. [ 19 ] [ 20 ] [ 21 ] These cyber-physical tenets complement the fundamental concept of dependable or reliable computing by characterizing resilience in regard to control system concerns, including design considerations that provide a level of understanding and assurance in the safe and secure operation of an industrial facility. These tenets are discussed individually below to summarize some of the challenges to address in order to achieve resilience . The benign human has an ability to quickly understand novel solutions and to adapt to unexpected conditions. This behavior can provide additional resilience to a control system, [ 22 ] but reproducibly predicting human behavior is a continuing challenge. The ability to capture historic human preferences can be applied through Bayesian inference and Bayesian belief networks , but ideally a solution would consider direct understanding of human state using sensors such as an EEG . [ 23 ] [ 24 ] Considering control system design and interaction, the goal would be to tailor the amount of automation necessary to achieve some level of optimal resilience for this mixed-initiative response. [ 25 ] The human would be presented with the actionable information that provides the basis for a targeted, reproducible response. [ 26 ] In contrast to the challenges of prediction and integration of the benign human with control systems, the abilities of the malicious actor (or hacker) to undermine desired control system behavior also create a significant challenge to control system resilience . [ 27 ] Application of dynamic probabilistic risk analysis used in human reliability can provide some basis for the benign actor. [ 28 ] However, the decidedly malicious intentions of an adversarial individual, organization or nation make the modeling of the human difficult, as both objectives and motives vary. [ 29 ] However, in defining a control system response to such intentions, the malicious actor depends on some level of recognizable system behavior to gain an advantage and find a pathway to undermining the system. Whether performed separately in preparation for a cyber attack , or on the system itself, these behaviors can provide opportunity for a successful attack without detection. Therefore, in considering resilient control system architecture, atypical designs that embed actively and passively implemented randomization of attributes would be suggested to reduce this advantage. [ 30 ] [ 31 ] While much of the current critical infrastructure is controlled by a web of interconnected control systems, with architectures termed either distributed control systems ( DCS ) or supervisory control and data acquisition ( SCADA ), the application of control is moving toward a more decentralized state. In moving to a smart grid, the complex interconnected nature of individual homes, commercial facilities and diverse power generation and storage creates an opportunity and a challenge to ensuring that the resulting system is more resilient to threats.
[ 32 ] [ 33 ] The ability to operate these systems to achieve a global optimum for multiple considerations, such as overall efficiency, stability and security, will require mechanisms to holistically design complex networked control systems . [ 34 ] [ 35 ] Multi-agent methods suggest a mechanism to tie a global objective to distributed assets, allowing for management and coordination of assets for optimal benefit, and for semi-autonomous but constrained controllers that can react rapidly to maintain resilience under rapidly changing conditions. [ 36 ] [ 37 ] Establishing a metric that can capture the resilience attributes can be complex, at least if considered based upon differences between the interactions or interdependencies. Evaluating the control, cyber and cognitive disturbances, especially if considered from a disciplinary standpoint, leads to measures that have already been established. However, if the metric were instead based upon a normalizing dynamic attribute, such as a performance characteristic that can be impacted by degradation, an alternative is suggested. Specifically, base metrics can be applied to resilience characteristics for each type of disturbance. Such performance characteristics exist with both time and data integrity. Time, both in terms of delay of mission and communications latency, and data, in terms of corruption or modification, are normalizing factors. In general, the idea is to base the metric on "what is expected" and not necessarily the actual initiator of the degradation. Considering time as a metrics basis, resilient and un-resilient systems can be observed in Fig. 3. [ 38 ] Dependent upon the abscissa metrics chosen, Fig. 3 reflects a generalization of the resiliency of a system. Several common terms are represented on this graphic, including robustness, agility, adaptive capacity, adaptive insufficiency, resiliency and brittleness. On the abscissa of Fig. 3, it can be recognized that cyber and cognitive influences can affect both the data and the time, which underscores the relative importance of recognizing these forms of degradation in resilient control designs. For cybersecurity, a single cyberattack can degrade a control system in multiple ways. Additionally, control impacts can be characterized as indicated. While these terms are fundamental and seem of little value for those correlating impact in terms like cost, the development of use cases provides a means by which this relevance can be codified. For example, given the impact to system dynamics or data, the performance of the control loop can be directly ascertained and show the approach to instability and operational impact. The very nature of control systems implies a starting point for the development of resilience metrics. That is, the control of a physical process is based upon quantifiable performance and measures, including first-principles and stochastic measures. The ability to provide this measurement, which is the basis for correlating operational performance and adaptation, then also becomes the starting point for correlation of the data and time variations that can come from cognitive and cyber-physical sources. Effective understanding is based upon developing a manifold of adaptive capacity that correlates the design (and operational) buffer.
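One common way to reduce the time-based view of Fig. 3 to a single number is to integrate the performance deficit over the disturbance window, an approach sometimes called a resilience "trapezoid" in the broader resilience literature. The sketch below illustrates that idea with invented performance traces; it is not a metric proposed by the cited authors.

import numpy as np

def resilience_loss(t, performance, nominal=1.0):
    """Integrated performance deficit over the event window (smaller is better)."""
    deficit = np.clip(nominal - np.asarray(performance), 0.0, None)
    return np.trapz(deficit, t)

t = np.linspace(0, 10, 101)  # hours since the disturbance began

# Invented traces: a shallow dip with fast recovery vs. a deep drop that never recovers.
resilient = 1 - 0.3 * np.exp(-((t - 2) / 1.0) ** 2)
brittle = np.where(t < 2, 1.0, np.maximum(0.2, 1 - 0.4 * (t - 2)))

print(f"resilient system loss: {resilience_loss(t, resilient):.2f}")
print(f"brittle system loss:   {resilience_loss(t, brittle):.2f}")

A system with strong adaptive capacity degrades less and recovers faster, so its integrated deficit is small; a brittle system accumulates a large deficit even from the same initiating event.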
For a power system, this manifold is based upon the real and reactive power assets, the controllable assets having the latitude to maneuver, and the impact of disturbances over time. For a modern distribution system (MDS), these assets can be aggregated from the individual contributions as shown in Fig. 4. [ 39 ] For this figure, these assets include: a) a battery, b) an alternate tie line source, c) an asymmetric P/Q-conjectured source, d) a distribution static synchronous compensator (DSTATCOM), and e) a low-latency, four-quadrant source with no energy limit.

1) When considering current digital control system designs, the cyber security of these systems depends upon what are considered border protections, i.e., firewalls, passwords, etc. If a malicious actor compromised the digital control system for an industrial operation by a man-in-the-middle attack , data within the control system can be corrupted. The industrial facility operator would have no way of knowing the data had been compromised until someone such as a security engineer recognized that an attack was occurring. As operators are trained to provide a prompt, appropriate response to stabilize the industrial facility, there is a likelihood that the corrupt data would lead the operator to react to the situation and cause a plant upset. In a resilient control system, as per Fig. 2, cyber and physical data are fused to recognize anomalous situations and warn the operator. [ 40 ]

2) As our society becomes more automated for a variety of drivers, including energy efficiency, the need to implement ever more effective control algorithms naturally follows. However, advanced control algorithms are dependent upon data from multiple sensors to predict the behaviors of the industrial operation and make corrective responses. This type of system can become very brittle, insofar as any unrecognized degradation in the sensor itself can lead to incorrect responses by the control algorithm and potentially a worsened condition relative to the desired operation for the industrial facility. Therefore, implementation of advanced control algorithms in a resilient control system also requires the implementation of diagnostic and prognostic architectures to recognize sensor degradation, as well as failures in the industrial process equipment associated with the control algorithms. [ 41 ] [ 42 ] [ 43 ]

In our world of advancing automation, our dependence upon these advancing technologies will require educated skill sets from multiple disciplines. The challenges may appear simply rooted in better design of control systems for greater safety and efficiency. However, the evolution of the technologies in the current design of automation has created a complex environment in which a cyber-attack, human error (whether in design or operation), or a damaging storm can wreak havoc on the basic infrastructure. The next generation of systems will need to consider the broader picture to ensure a path forward where failures do not lead to ever greater catastrophic events. One critical resource is students, who are expected to develop the skills necessary to advance these designs and who require both a perspective on the challenges and the contributions of others to fulfill the need. Addressing this need, a semester course in resilient control systems was established over a decade ago at Idaho and other universities as a catalogue or special-topics offering for undergraduate and graduate students.
The lessons in this course were codified in a text that provides the basis for the interdisciplinary studies. [ 44 ] In addition, other courses have been developed to provide the perspectives and relevant examples to overview the critical infrastructure issues and provide opportunity to create resilient solutions at such universities as George Mason University and Northeastern . Through the development of technologies designed to set the stage for next-generation automation, it has become evident that effective teams are composed of several disciplines. [ 45 ] However, developing a level of effectiveness can be time-consuming, and when done in a professional environment can expend a lot of energy and time that provides little obvious benefit to the desired outcome. It is clear that the earlier these STEM disciplines can be successfully integrated, the more effective they are at recognizing each other's contributions and working together to achieve a common set of goals in the professional world. Team competition at venues such as Resilience Week will be a natural outcome of developing such an environment, allowing interdisciplinary participation and providing an exciting challenge to motivate students to pursue a STEM education. Standards and policy that define resilience nomenclature and metrics are needed to establish a value proposition for investment, which includes government, academia and industry. The IEEE Industrial Electronics Society has taken the lead in forming a technical committee toward this end. The purpose of this committee will be to establish metrics and standards associated with codifying promising technologies that promote resilience in automation. This effort is distinct from the supply-chain community's focus on resilience and security, such as the efforts of ISO and NIST.
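Returning to the man-in-the-middle scenario in example 1) above, the fusion of cyber and physical data can be illustrated with a residual check against a physics-based prediction: a replayed or falsified sensor reading that disagrees with what the process model expects is flagged for the operator. The tank model, numbers, and threshold below are invented for illustration; a real design would use richer models and statistical tests.

def predict_level(prev_level, inflow, outflow, dt=1.0, area=2.0):
    """Mass-balance prediction of tank level from the commanded flows."""
    return prev_level + dt * (inflow - outflow) / area

def reading_is_anomalous(prev_level, inflow, outflow, reported_level, threshold=0.05):
    """Flag a sensor reading that disagrees with the physics-based prediction."""
    residual = abs(reported_level - predict_level(prev_level, inflow, outflow))
    return residual > threshold

# An attacker replays a stale level of 1.00 m while the tank is actually filling:
print(reading_is_anomalous(prev_level=1.00, inflow=0.4, outflow=0.1,
                           reported_level=1.00))  # True: the reading is suspect

The design choice here is that the detector trusts the physics (mass balance) over any single data channel, so corrupting the sensor stream alone is not enough to mislead the operator.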
https://en.wikipedia.org/wiki/Resilient_control_systems
Resin acid refers to any of several related carboxylic acids found in tree resins . Nearly all resin acids have the same basic skeleton: three fused rings having the empirical formula C19H29COOH. Resin acids occur in nature as tacky, yellowish gums consisting of several compounds. They are water-insoluble. A common resin acid is abietic acid . [ 1 ] Resin acids are used to produce soaps for diverse applications, but their use is being displaced increasingly by synthetic acids such as 2-ethylhexanoic acid or petroleum-derived naphthenic acids . Resin acids are protectants and wood preservatives that are produced by parenchymatous epithelial cells that surround the resin ducts in trees from temperate coniferous forests . The resin acids are formed when two-carbon and three-carbon molecules couple with isoprene building units to form monoterpene (volatile), sesquiterpene (volatile), and diterpene (nonvolatile) structures. Pines contain numerous vertical and radial resin ducts scattered throughout the entire wood. The accumulation of resin in the heartwood and resin ducts causes a maximum concentration in the base of the older trees. Resin in the sapwood , however, is lower at the base of the tree and increases with height. In 2005, as an infestation of the Mountain pine beetle ( Dendroctonus ponderosae ) and blue stain fungus devastated the Lodgepole Pine forests of northern interior British Columbia , Canada, resin acid levels three to four times greater than normal were detected in infected trees prior to death. These increased levels show that a tree uses the resins as a defense. Resins are toxic to both the beetle and the fungus, and they can also entomb the beetle in diterpene deposits from secretions. Increasing resin production has been proposed as a way to slow the spread of the beetle in the "Red Zone" or the wildlife-urban interface. Resin acids originate from geranylgeranyl pyrophosphate , which is acted on by (i.e., is the substrate for) copalyl diphosphate synthase . The initial conversion gives copalyl diphosphate , the progenitor of the diterpene diphosphates (nomenclature warning: pyrophosphate and diphosphate are often used interchangeably). Under enzymatic control, this pyrophosphate compound rearranges into the following diterpenes: levopimaradiene, abietadiene , and neoabietadiene. Traces of three other diterpenes are also generated: palustradiene, sandaracopimaradiene, and pimara-8(14),15-diene. These hydrocarbons are substrates for cytochrome P450 , which introduces oxygen functionalities, i.e., converts C-H bonds to C-OH bonds through reactions involving oxygen from air. This conversion turns terpenes into terpenoids. [ 1 ] Several important resin acids can be identified in rosin, as listed below. [ 2 ] The two classes, abietic acids and pimaric acids, are isomers with the formula C19H29CO2H. The commercial manufacture of wood-pulp-grade chemical cellulose using the kraft chemical pulping process releases resin acids. The Kraft process is conducted under strongly alkaline conditions of sodium hydroxide , sodium sulfide , and sodium hydrosulfide . These bases neutralize resin acids, converting them to their respective sodium salts, sodium abietate ((CH3)4C15H17COONa), sodium pimarate ((CH3)3(CH2)C15H23COONa), and so on.
In this form, the sodium salts are poorly soluble and, being of lower density than the spent pulping process liquor, float to the surface of storage vessels during the process of concentration, as a somewhat gelatinous, pasty, yellow fluid called kraft soap (also called resin soap ). [ 3 ] This soap is used in bleaching and cleaning and as a compound in some varnishes . It also finds use in the rubber industry as an emulsifier. Often the soap is pretreated with formaldehyde and maleic anhydride . [ 4 ] Pine soap is refined from resin soap via tall oil by acidification, refining and resaponification. [ 5 ] Kraft soap can be acidified with sulfuric acid to restore the acidic forms abietic acid , palmitic acid , and related resin acid components. This refined mixture is called tall oil . Other major components include fatty acids and unsaponifiable sterols . Resin acids, because of the same protectant nature they provide in the trees where they originate, also pose toxicity problems for the effluent treatment facilities in pulp manufacturing plants. Furthermore, any residual resin acids that pass the treatment facilities add toxicity to the stream discharged to the receiving waters. The chemical composition of tall oil varies with the species of trees used in pulping, and in turn with geographical location. For example, the coastal areas of the southeastern United States have a high proportion of Slash Pine ( Pinus elliottii ); inland areas of the same region have a preponderance of Loblolly Pine ( Pinus taeda ). Slash Pine generally contains a higher concentration of resin acids than Loblolly Pine. In general, the tall oil produced in coastal areas of the southeastern United States contains over 40% resin acids and sometimes as much as 50% or more. The fatty acid fraction is usually lower than the resin acids, and unsaponifiables amount to 6-8%. Farther north in Virginia , where Pitch Pine ( Pinus rigida ) and Shortleaf Pine ( Pinus echinata ) are more dominant, the resin acid content decreases to as low as 30-35% with a corresponding increase in the fatty acids present. In Canada , where mills process Lodgepole Pine ( Pinus contorta ) in interior British Columbia and Alberta , Jack Pine ( Pinus banksiana ) from Alberta to Quebec, and Eastern White Pine ( Pinus strobus ) and Red Pine ( Pinus resinosa ) from Ontario to New Brunswick , resin acid levels of 25% are common with unsaponifiable contents of 12-25%. Similar variations may be found in other parts of the United States and in other countries. For example, in Finland , Sweden and Russia , resin acid values from Scots Pine ( Pinus sylvestris ) may vary from 20 to 50%, fatty acids from 35 to 70%, and unsaponifiables from 6 to 30%. Resin acids are converted into ester gum by reaction with controlled amounts of glycerol or other polyhydric alcohols . Some ester gums have drying properties , and ester gum is used in paints , varnishes , and lacquers . [ 6 ] Resin acids are converted to resin soaps . Resin acids are very poorly soluble in water (milligrams per liter) and have low acute toxicity. [ 7 ]
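As a quick arithmetic check on the empirical formula given above, C19H29COOH is equivalent to C20H30O2, and its molar mass from standard atomic weights works out to roughly 302.5 g/mol, the value commonly quoted for abietic acid. A minimal sketch:

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, standard values
composition = {"C": 20, "H": 30, "O": 2}                # C19H29COOH == C20H30O2

molar_mass = sum(ATOMIC_WEIGHT[element] * count for element, count in composition.items())
print(f"C20H30O2: {molar_mass:.2f} g/mol")  # ~302.46 g/mol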
https://en.wikipedia.org/wiki/Resin_acid
Resin canals or resin ducts are elongated, tube-shaped intercellular spaces surrounded by epithelial cells which secrete resin into the canal. These canals are orientated longitudinally and radially in between fusiform rays. [ 1 ] They are usually found in latewood, the denser wood grown later in the season. [ 2 ] Resin is antiseptic and aromatic, prevents the development of fungi, and deters insects. Resin canal characteristics (such as number, size and density) in pine species can determine a tree's resistance to pests. In one study, biologists were able to categorize 84% of lodgepole pines , and 92% of limber pines , as being either susceptible or resistant to bark beetles based only on their resin canals and growth rate over 20 years. [ 3 ] In another study, scientists found that ponderosa pine trees that survived drought and bark beetle attacks had resin ducts that were >10% larger in diameter, >25% denser (resin canals per mm²), and composed >50% more area per ring. [ 4 ] John G. Haygreen, Jim L. Bowyer: Forest products and wood sciences. Iowa State University Press, Ames, Iowa, 1996 (3rd ed.), ISBN 0-8138-2256-4
https://en.wikipedia.org/wiki/Resin_canal
The Resin Identification Code ( RIC ) is a technical standard with a set of symbols appearing on plastic products that identify the plastic resin out of which the product is made. [ 1 ] It was developed in 1988 by the Society of the Plastics Industry (now the Plastics Industry Association ) in the United States, but since 2008 it has been administered by ASTM International , an international standards organization . [ 1 ] Due to their resemblance to the recycling symbol , RIC symbols are often mistaken for it. [ 2 ] Subsequent revisions to the RIC have replaced the arrows with a solid triangle, but the old symbols are still in common use. The US Society of the Plastics Industry (SPI) first introduced the system in 1988 as the "Voluntary Plastic Container Coding System". The SPI stated that one purpose of the original SPI code was to "Provide a consistent national system to facilitate recycling of post-consumer plastics." [ 3 ] The system has been adopted by a growing number of communities implementing recycling programs, as a tool to assist in sorting plastics. In order to deal with the concerns of recyclers across the U.S., the RIC system was designed to make it easier for workers in materials recovery and recycling facilities to sort and separate items according to their resin type. [ citation needed ] Plastics must be recycled separately, with other like materials, in order to preserve the value of the recycled material and enable its reuse in other products after being recycled. When a number is omitted, the arrows arranged in a triangle resemble the universal recycling symbol , a generic indicator of recyclability. Subsequent revisions to the RIC have replaced the arrows with a solid triangle, in order to address consumer confusion about the meaning of the RIC, and the fact that the presence of a RIC symbol on an item does not necessarily indicate that it is recyclable any more than its absence means the plastic object is necessarily unrecyclable. In 2008, ASTM International took over the administration of the RIC system and eventually issued ASTM D7611—Standard Practice for Coding Plastic Manufactured Articles for Resin Identification. [ 4 ] In 2013 this standard was revised to change the graphic marking symbol of the RIC from the "chasing arrows" of the recycling symbol to a solid triangle. Modifications to the RIC are currently being discussed and developed by ASTM's D20.95 subcommittee on recycled plastics. [ 5 ] In the U.S. the Sustainable Packaging Coalition has also created a " How2Recycle " label [ 6 ] in an effort to replace the RIC with a label that aligns more closely with how the public currently uses the RIC. Rather than indicating what type of plastic resin a product is made out of, the four "How2Recycle" labels indicate whether a plastic product is widely recyclable, recyclable in limited areas, not yet recyclable, or recyclable through store drop-off. The "How2Recycle" labels also encourage consumers to check with local facilities to see what plastics each municipal recycling facility can accept. The different resin identification codes are part of the Unicode block called Miscellaneous Symbols and have the following code points: ♳ (U+2673), ♴ (U+2674), ♵ (U+2675), ♶ (U+2676), ♷ (U+2677), ♸ (U+2678), and ♹ (U+2679); the universal recycling symbol is ♺ (U+267A). The RIC symbols as revised by ASTM in 2013 use the solid-triangle marking. [ 10 ] [ 11 ] In the United States, use of the RIC in the coding of plastics has led to ongoing consumer confusion about which plastic products are recyclable.
When many plastics recycling programs were first being implemented in communities across the United States, only plastics with RICs "1" and "2" (polyethylene terephthalate and high-density polyethylene, respectively) were accepted to be recycled. The list of acceptable plastic items has grown since then, [ 1 ] and in some areas municipal recycling programs can collect and successfully recycle most plastic products regardless of their RIC. This has led some communities to instruct residents to refer to the form of packaging (i.e. "bottles", "tubs", "lids", etc.) when determining what to include in a curbside recycling bin, rather than instructing them to rely on the RIC. [ 12 ] To further alleviate consumer confusion, the American Chemistry Council launched the "Recycling Terms & Tools" program to promote standardized language that can be used to educate consumers about how to recycle plastic products. However, even when it is technically possible to recycle a particular plastic, it is often economically unfeasible to recycle it, and this can mislead consumers into thinking that more plastic is recycled than really is. [ 13 ] In the U.S. in 2018, only 8.5% of plastic waste was recycled. [ 14 ]
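The Unicode assignments listed above can be exercised directly. The short sketch below simply maps each RIC number to its code point and to the standard resin assignment; it is illustrative and not part of the ASTM standard's text.

RIC = {
    1: ("\u2673", "PET",   "polyethylene terephthalate"),
    2: ("\u2674", "HDPE",  "high-density polyethylene"),
    3: ("\u2675", "PVC",   "polyvinyl chloride"),
    4: ("\u2676", "LDPE",  "low-density polyethylene"),
    5: ("\u2677", "PP",    "polypropylene"),
    6: ("\u2678", "PS",    "polystyrene"),
    7: ("\u2679", "OTHER", "other resins"),
}

for code, (symbol, abbreviation, resin) in RIC.items():
    print(f"{symbol}  RIC {code}: {abbreviation} ({resin})")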
https://en.wikipedia.org/wiki/Resin_identification_code
Resiniferatoxin ( RTX ) is a naturally occurring chemical found in resin spurge ( Euphorbia resinifera ), a cactus-like plant commonly found in Morocco , and in Euphorbia poissonii found in northern Nigeria . [ 1 ] It is a potent functional analog of capsaicin , the active ingredient in chili peppers . [ 2 ] Resiniferatoxin has a score of 16 billion Scoville heat units , making pure resiniferatoxin about 500 to 1000 times hotter than pure capsaicin . [ 3 ] [ 4 ] Resiniferatoxin activates transient receptor potential vanilloid 1 (TRPV1) in a subpopulation of primary afferent sensory neurons involved in nociception , the transmission of physiological pain. [ 5 ] [ 6 ] TRPV1 is an ion channel in the plasma membrane of sensory neurons, and stimulation by resiniferatoxin causes this ion channel to become permeable to cations , especially calcium . The influx of cations causes the neuron to depolarize, transmitting signals similar to those that would be transmitted if the innervated tissue were being burned or damaged. This stimulation is followed by desensitization and analgesia , in part because the nerve endings die from calcium overload. [ 7 ] [ 8 ] A total synthesis of (+)-resiniferatoxin was completed by the Paul Wender group at Stanford University in 1997. [ 9 ] The process begins with the starting material 1,4-pentadien-3-ol and consists of more than 25 significant steps. As of 2007, this represented the only complete total synthesis of any member of the daphnane family of molecules. [ 10 ] One of the main challenges in synthesizing a molecule such as resiniferatoxin is forming the three-ring backbone of the structure. The Wender group was able to form the first ring of the structure by first synthesizing Structure 1 in Figure 1. By reducing the ketone of Structure 1, then oxidizing the furan nucleus with m-CPBA and converting the resulting hydroxy group to an oxyacetate, Structure 2 can be obtained. Structure 2 contains the first ring of the three-ring structure of RTX. It reacts through an oxidopyrylium cycloaddition when heated with DBU in acetonitrile to form Structure 4 by way of Intermediate 3. Several steps of synthesis are required to form Structure 5 from Structure 4, with the main goal of positioning the allylic branch of the seven-membered ring in a trans conformation. Once this conformation is achieved, zirconocene-mediated cyclization of Structure 5 can occur, and oxidizing the resulting hydroxy group with TPAP yields Structure 6. Structure 6 contains all three rings of the RTX backbone and can then be converted to resiniferatoxin through additional synthesis steps attaching the required functional groups. [ 9 ] An alternative approach to synthesizing the three-ring backbone, proposed by the Masayuki Inoue group of the University of Tokyo , makes use of radical reactions to create the first and third rings in a single step, followed by the creation of the remaining ring. [ 11 ] [ 12 ] At 16 billion Scoville units, resiniferatoxin is rather toxic and can inflict chemical burns in minute quantities. The primary action of resiniferatoxin is to activate sensory neurons responsible for the perception of pain. It is currently the most potent TRPV1 agonist known, [ 13 ] with ~500x higher binding affinity for TRPV1 than pure capsaicin , the active ingredient in hot chili peppers such as those produced by Capsicum annuum . It is 3 to 4 orders of magnitude more potent than capsaicin for effects on thermoregulation and neurogenic inflammation.
[ 14 ] For rats, LD50 through oral ingestion is 148.1 mg/kg. [ 15 ] It causes severe burning pain in sub-microgram (less than 1/1,000,000th of a gram) quantities when ingested orally. Sorrento Therapeutics has been developing RTX as a means to provide pain relief for forms of advanced cancer . [ 16 ] [ 17 ] The nerve desensitizing properties of RTX were once thought to be useful to treat overactive bladder (OAB) by preventing the bladder from transmitting "sensations of urgency" to the brain, similar to how they can prevent nerves from transmitting signals of pain; RTX has never received FDA approval for this use. [ 4 ] RTX has also previously been investigated as a treatment for interstitial cystitis , rhinitis , and lifelong premature ejaculation (PE). [ 17 ] [ 18 ]
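The Scoville comparison above can be sanity-checked with a single division, taking the commonly cited figure of about 16 million Scoville heat units for pure capsaicin (an assumption here, not stated in the article):

rtx_shu = 16_000_000_000        # resiniferatoxin, per the article
capsaicin_shu = 16_000_000      # pure capsaicin, commonly cited figure (assumed)
print(rtx_shu / capsaicin_shu)  # 1000.0, the upper end of the quoted 500-1000x range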
https://en.wikipedia.org/wiki/Resiniferatoxin
Resinol is a skin protectant and topical analgesic made by ResiCal Inc. of Orchard Park, New York . It is an over-the-counter drug that can currently be purchased in 1.25 or 3.3 ounce (35 or 94 g) jars by contacting a local pharmacy's drug wholesaler to order the item, or on the Internet . It is an ointment that is beige in color and has a distinctive rubbery scent. It has a tacky consistency and is somewhat difficult to remove from unintended areas it comes into contact with (such as one's fingers after applying it). Resinol was developed by Dr. Merville Hamilton Carter (1857-1939) in his private practice as a treatment for his patients in Baltimore, Maryland during the late 19th century. [ 1 ] In 1895, Carter, along with his brother Allan L. Carter and his cousin Henry Stier Dulaney, founded the Resinol Chemical Company and began to mass-produce the ointment and other medical products. After over forty years of selling Resinol, the company had John H. Buffham & Co. as an outlet in Great Britain and was a successful global distributor. Henry LeRoy Carter Sr., the son of Dr. Carter, began running the company after the deaths of his father and other staff members. The company's sales began to decline in the 1940s, and after the death of Henry LeRoy Carter Sr. in 1951, [ 2 ] his son Henry LeRoy Carter Jr. took the place of his father and grandfather as president of the Resinol Chemical Company. At that time, the company focused more on soap manufacturing, but continued to sell Resinol. For the rest of the 20th century, Resinol's popularity continued to dwindle. It was purchased in 2002 by ResiCal Inc., [ 3 ] which at the time was headed by D. Brooks Cole. [ 4 ] Resinol is used to treat several different types of skin ailments. It is used to prevent and temporarily protect chafed, chapped, or cracked skin ; to temporarily relieve pain and itching caused by minor burns, minor cuts and scrapes, minor skin irritations and sunburn ; and to dry the oozing and weeping of irritation caused by contact with poison ivy , poison oak , and poison sumac . Adults and children who are two years of age or older should apply Resinol to the affected area of skin no more than three to four times a day. A physician should be asked whether an application would be appropriate for a child younger than two years. Resinol is for external use only. When using it, avoid contact with the eyes and do not apply it over large areas of the body. Discontinue use and ask a physician if any condition worsens where applied, if symptoms last more than seven days, or if symptoms clear up and reappear within a few days. Keep out of reach of children. If swallowed, get medical help or contact a poison control center immediately. The active ingredients in Resinol are a 55% solution of petroleum jelly and a 2% solution of resorcinol . Calamine , corn starch , lanolin , and zinc oxide comprise the inactive ingredients . In the 1980s, Resinol ointment was manufactured by the Mentholatum Company of Buffalo , New York 14213, a maker of liniment . Its ingredients statement then read: Zinc Oxide 12% (an antibacterial agent and sunscreen); Calamine 6%; Resorcinol 2% (also an antibacterial). In the 1960s, Resinol came in an opaque, white glass jar with a metal lid.
It was made by the Resinol Chemical Company of Baltimore , Maryland 21201, and the label listed the following ingredients: Resorcinol ; Oil of Cade (Cade is a species of prickly juniper native to the regions surrounding the Mediterranean ; the oil gave the unguent a medicinal odor); Prepared Calamine ; Zinc Oxide ; Bismuth Subnitrate (now used mostly in veterinary medicine); Boric Acid (antibacterial); Lanolin ; Petrolatum .
https://en.wikipedia.org/wiki/Resinol
In the context of ecological stability , resistance is the property of communities or populations to remain "essentially unchanged" [ 1 ] when subject to disturbance . [ 2 ] [ 3 ] : 789 [ 4 ] [ 5 ] The inverse of resistance is sensitivity. [ 1 ] Resistance is one of the major aspects of ecological stability . Volker Grimm and Christian Wissel identified 70 terms and 163 distinct definitions of the various aspects of ecological stability, but found that they could be reduced to three fundamental properties: "staying essentially unchanged", "returning to the reference state...after a temporary disturbance" and "persistence through time of an ecological system." Resistant communities are able to remain "essentially unchanged" despite disturbance. [ 1 ] Although commonly seen as distinct from resilience , Brian Walker and colleagues considered resistance to be a component of resilience in their expanded definition of resilience, [ 6 ] while Fridolin Brand used a definition of resilience that he described as "close to the stability concept 'resistance', as identified by Grimm and Wissel (1997)". [ 7 ] The inverse of resistance is sensitivity - sensitive species or communities show large changes when subject to environmental stress or disturbance. [ 1 ] The anthropologist Munira Khayyat offers a new perspective on resistance in ecology beyond natural ecosystems. In her study of South Lebanon, she examines how plants and landscapes persist and adapt through cycles of war and occupation, introducing the concept of ‘resistant ecologies’. Unlike traditional definitions of ecological resistance, which often frame it as the ability to remain unchanged, Khayyat’s concept treats resistance as an adaptation to an ongoing state of instability, underscoring its distinction from resilience. In contrast to resilience, which implies recovery after disturbance, resistance in this context refers to a continuous survival strategy in deadly environments such as war. [ 8 ] In 1988, Hurricane Joan hit the rainforests along Nicaragua 's Caribbean coast. Douglas Boucher and colleagues contrasted the resistant response of Qualea paraensis with the resilient response of Vochysia ferruginea ; the mortality rate was low for Q. paraensis (despite extensive damage to the trees), but the growth rates of surviving trees were also low and few seedlings established. Despite the disturbance, populations were essentially unchanged. In contrast, V. ferruginea experienced very high rates of mortality in the hurricane but showed very high rates of seedling recruitment. As a result, population densities of the species increased. [ 9 ] In their study of Jamaican montane forests affected by Hurricane Gilbert in 1988, Peter Bellingham and colleagues used the degree of hurricane damage and the magnitude of the post-hurricane response to categorise tree species into four groups – resistant species (those with limited storm damage and low response), susceptible species (greater damage but low response), usurpers (limited damage but high response) and resilient species (greater damage and high response). [ 10 ] English ecologist Charles Elton applied the term resistance to the ecosystem properties which limit the ability of introduced species to successfully invade communities. These properties include both abiotic factors like temperature and drought, and biotic factors including competition , parasitism , predation and the lack of necessary mutualists .
Higher species diversity and lower resource availability can also contribute to resistance. [ 11 ]
https://en.wikipedia.org/wiki/Resistance_(ecology)
Read the Wiktionary entry "resistance distance" .
https://en.wikipedia.org/wiki/Resistance_distance_(mechanics)
Resistance paper , [ 1 ] [ 2 ] also known as conductive paper and by the trade name Teledeltos paper, is paper impregnated or coated with a conductive substance such that the paper exhibits a uniform and known surface resistivity . Resistance paper and conductive ink were commonly used as an analog two-dimensional [ 3 ] electromagnetic field solver . Teledeltos paper is a particular type of resistance paper.
https://en.wikipedia.org/wiki/Resistance_paper
Resistbot is a service that people in the United States can use to compose and send letters to elected officials from the messaging apps on their mobile phones, with the goal being that the task can be completed in "under two minutes". [ 1 ] It identifies a user's federal, state, and city [ 2 ] elected officials, then provides an electronic service to deliver to those officials, as well as to local newspapers, and to publish online. As the platform has developed, Resistbot has added functionality such as confirming voter registrations, locating town halls, finding volunteer opportunities, and locating polling places. [ 1 ] Resistbot has been funded by over 24,000 small-dollar donations as of September 12, 2017, [ 3 ] and is built and maintained by volunteers. [ 4 ] Resistbot was established by Eric Ries and Jason Putorti in January 2017. [ 5 ] Jason Putorti attended the University of Pittsburgh where he graduated with a BS in computer science. [ 6 ] Before launching Resistbot, he served as the designer at AngelList and previously co-founded Causes and Votizen . [ 7 ] He expresses that one of his goals in creating Resistbot was to create a universal way to increase civic engagement and civic education. [ 8 ] Though the program was founded to oppose the actions of the Trump administration , [ 9 ] it functions as an unbiased channel, allowing users to compose their own messages. Unlike many other advocacy efforts, it provides no scripts to users. [ 10 ] Donations from users pay for postage for letters and voter registration forms, faxes and calls to officials, and texts between the users and the service. [ 1 ] When Resistbot began, letters were faxed [ 11 ] to officials' offices. However, as the program received more heavy usage, and officials started to unplug their fax machines, it switched to electronic delivery as a primary channel, with faxes, postal letters, and hand deliveries as secondary methods. [ 12 ] The first states that had access to Resistbot's feature of texting one's state legislature were Arizona, California, Florida, Maryland, New Jersey, Ohio, Oklahoma, South Carolina, Texas, Utah, and Washington. [ 13 ] Between June 21 and 22, 2018 alone, Resistbot volunteers delivered 12,781 letters to the U.S. Senate, largely about family separation. [ 14 ] Those letters represented only a small sample of deliveries overall. [ 15 ] Within five months of launch Resistbot had 730,000 users, [ 16 ] by six months 1 million, [ 17 ] and after fifteen months 4.5 million. [ 18 ] As of 2023, nearly 10 million people have used the service to send 35 million letters, and Resistbot has handled 450 million text messages. [ 19 ] Resistbot has been featured on many news and magazine sites including Recode , Teen Vogue , Fast Company , Engadget , GOOD Magazine , The Guardian , The Miami Herald , and Huffington Post . In an interview with Recode , Putorti acknowledged that though the product's main purpose was to voice those in opposition to the Trump Presidency, the system delivers all messages without regard to political views. [ 20 ] Resistbot's Twitter feed features many responses by members of Congress to users who have sent messages through the software. [ 21 ] It was called "The Most Genius Thing Of 2017" by GOOD magazine. [ 22 ] In April 2017 Resistbot added a feature called "Letters to the Editor". This feature allows users to choose to send their message both to their elected official, and directly to a local newspaper or media source in their area. 
This lets the message be seen by the writer's community and can help build support for the cause, potentially leading more people to text Resistbot about it. [ 16 ] During the congressional recess in August 2017, Resistbot helped facilitate what it called flash mobs: when members of Congress refused to attend town hall meetings, Resistbot encouraged users to organize or protest to build support for their causes. [ 16 ] In November 2017, Resistbot was used as a channel by Medium to push net neutrality letters to Congress. The article published seven letter templates for readers to send to their representatives in favor of net neutrality. [ 23 ] Individuals could not send a message to the FCC or its commissioners, only to the elected officials representing the address that the user enters into the prompts. In January 2018, The Peace Report published an article urging its readers to send letters to government officials through Resistbot to oppose the construction of two new military bases in Okinawa. [ 24 ] The article contained a letter template for readers to copy and paste to members of Congress. In February 2018, WUSA TV fact-checked and verified that texting "NRA" to Resistbot would tell users how their officials had benefited, or been hurt by, NRA contributions. [ 25 ] In September 2018, InStyle listed it as a way to "make your voice heard" regarding the nomination of Brett Kavanaugh . [ 26 ] In 2020, Resistbot was cited as a way to help save the United States Postal Service by Mashable , [ 27 ] New York , [ 28 ] and Vogue . [ 29 ] The "widespread claims" were fact-checked by Snopes , which tested the service and wrote, "the process took around five minutes with the only significant delay coming when the user awaits a verification code sent to their email address. Snopes could confirm that the letter was sent to the representatives in question because the office of one of them, U.S. Rep. Matt Cartwright , D-Pennsylvania, happened to respond later on with an acknowledgement that explicitly addressed the topic of the letter." [ 30 ] During the 2018 midterms, Analyst Institute ran an academic control-group study on Resistbot. The authors wrote that the voter turnout program "was highly cost-effective and able to generate an impressive number of net voters, surpassing the performance of many other programs in midterm election cycles." [ 31 ] In another study, political scientist Christopher Mann wrote in the Journal of Experimental Political Science that Resistbot "increased turnout by 1.8 percentage points in a 2019 election". [ 32 ] Resistbot offers ordinary citizens a way to contact a representative and have one-on-one conversations about government concerns and issues. For many, this allays fears: they can go to someone informed and in power for a clearer view of an issue, rather than relying on misinformation spread across various media sources. [ 33 ] The system does, however, raise a new fear: being scammed, a risk tied to its technological nature. [ 34 ] Older users, and even some younger ones, find the switch from in-person to online interaction hard, which is why simplicity in technology platforms matters.
[ 35 ] As an example of such simplicity in practice, shortly after Ruth Bader Ginsburg died, Resistbot circulated a message saying that texting RBG to its number would be counted as a vote for delaying the naming of her replacement on the Supreme Court. [ 36 ] Much of the public suspected a scam: it seemed too easy and unreliable a way to weigh in on such a major issue. The message was later verified as legitimate, but by the time Snopes had fully verified it, [ 37 ] the window to register a vote had already closed. Uncertainty about what is and is not a scam matters for a service that relies entirely on human action and trust. Part of what makes Resistbot easy to use is the ease of sending a text. Texting has become ever more common, and Jason Putorti, Resistbot's executive director, has said the system uses texting because of how quickly messages can be read and sent. [ 38 ] Not everyone trusts texting, however, given the volume of scams in the country, especially political ones; even local news outlets warn against replying to political texts. [ 36 ] Texting is a major channel of communication, and it can spin out of control when personal numbers leak to directories and businesses. One Arizona woman interviewed, for example, expressed concern about how a service like Resistbot had obtained her number and into what databases it might leak. [ 36 ] Putorti has noted that in-person interaction will remain much more limited as a result of the COVID-19 pandemic. [ 36 ] The pandemic highlighted the importance of civic technologies and of their continued growth, of which Resistbot's own growth is an example. Resistbot offers an efficient way to get a voter's ideas to a representative, rather than an email left in the unread pile and eventually moved to the trash. Problems arise when scam businesses try to imitate Resistbot; the Federal Trade Commission has said that any text asking for personal information, such as bank account details, should be avoided. [ 36 ] Resistbot is working on ways to improve its users' trust. Despite the legitimacy concerns, Resistbot has proven useful for modern-day issues. With White House offices inundated by calls and letters after the 2020 election, many people were unable to schedule calls and in-person visits. [ 38 ] Resistbot saves time while still conveying the needed message to representatives; a quick text is all that is necessary. One way Resistbot tries to stay personal, despite the lack of face-to-face conversation, is by offering no scripts or pre-written letters, [ 36 ] which gives both voter and representative more faith that the process is legitimate. The service plans additions such as an event map for town hall meetings and scheduled phone calls with Congress. [ 38 ] It aims to keep growing and to build a platform that is both trusted and useful to everyday citizens. [ 39 ] In March 2017, Micah L. Sifry wrote, "making it easier to digitally contact your Member of Congress paradoxically makes it more likely that they will discount the value of your opinion," in a criticism of the service.
[ 40 ] Lee Drutman similarly wrote, "these services and technologies are cheapening the meaning of civic engagement by turning it into a commodity..." [ 41 ] In September 2017, during the political fight over health care, Eric Ries told Business Insider, "if I didn't read the news, I would know when there's a new bill from the server meltdown problems alone." [ 4 ]
https://en.wikipedia.org/wiki/Resistbot
Resisting AI: An Anti-fascist Approach to Artificial Intelligence is a book on artificial intelligence (AI) by Dan McQuillan, published in 2022 by Bristol University Press . Resisting AI takes the form of an extended essay, [ 1 ] which counters optimistic visions of AI's potential by arguing that AI may best be seen as a continuation and reinforcement of bureaucratic forms of discrimination and violence, ultimately fostering authoritarian outcomes. [ 2 ] For McQuillan, AI's promise of objective calculability is antithetical to an egalitarian and just society. [ 3 ] [ 4 ] McQuillan uses the expression "AI violence" to describe how – based on opaque algorithms – various actors can discriminate against categories of people in accessing jobs, loans, medical care, and other benefits. [ 2 ] The book suggests that AI has a political resonance with soft eugenic approaches to the valuation of life by modern welfare states, [ 5 ] and that AI exhibits eugenic features in its underlying logic, as well as in its technical operations. [ 5 ] The parallel is with historical eugenicists, who achieved savings for the state by sterilizing those deemed "defective" so that the state would not have to care for their offspring. [ 5 ] McQuillan's analysis goes beyond the familiar critique that AI systems foster precarious labour markets, addressing "necropolitics", the politics of who is entitled to live and who to die. [ 2 ] [ 6 ] Although McQuillan offers a brief history of machine learning at the beginning of the book – with its need for "hidden and undercompensated labour" [ 6 ] – he is concerned more with the social impacts of AI than with its technical aspects. [ 7 ] [ 6 ] McQuillan sees AI as the continuation of existing bureaucratic systems that already marginalize vulnerable groups – aggravated by the fact that AI systems trained on existing data are likely to reinforce existing discrimination, e.g. in attempting to optimize welfare distribution based on existing data patterns, [ 7 ] ultimately creating a system of "self-reinforcing social profiling". [ 8 ] In elaborating on the continuity between existing bureaucratic violence and AI, McQuillan connects to Hannah Arendt 's concept of the thoughtless bureaucrat in Eichmann in Jerusalem: A Report on the Banality of Evil , which now becomes the algorithm that, lacking intent, cannot be held accountable, and is thus endowed with an "algorithmic thoughtlessness". [ 9 ] McQuillan defends the "fascist" in the title of the work by arguing that while not all AI is fascist, this emerging technology of control may end up being deployed by fascist or authoritarian regimes. [ 10 ] For McQuillan, AI can support the diffusion of states of exception , being a technology that is impossible to properly regulate and a mechanism for multiplying exceptions more widely. An example of a scenario where AI systems of surveillance could bring discrimination to a new level is the initiative to create LGBT -free zones in Poland. [ 11 ] [ 7 ] Skeptical of the ability of ethical regulations to control the technology, McQuillan suggests people's councils, workers' councils, and other forms of citizens' agency to resist AI. [ 7 ] A chapter titled "Post-Machine Learning" makes an appeal for resistance via currents of thought from feminist science ( standpoint theory ), post-normal science ( extended peer communities ), and new materialism ; McQuillan encourages the reader to question the meaning of "objectivity" and calls for alternative ways of knowing.
[ 12 ] Among the virtuous examples of resistance – possibly to be adopted by AI workers themselves – McQuillan notes [ 13 ] the Lucas Plan of the workers of Lucas Aerospace Corporation , [ 14 ] in which a workforce declared redundant took control, reorienting the enterprise toward useful products. [ 10 ] McQuillan's work [ 15 ] warns against "watered-down forms of engagement" with AI, such as citizen juries, which superficially look like democratic deliberation but may actually obscure important decisions about AI that are outside the purview of the engagement situation (McQuillan 2022, 128). In an interview about the book, McQuillan defines himself as an "AI abolitionist". [ 16 ] The book has been praised for "masterfully" disassembling AI as an epistemological, social, and political paradigm, [ 17 ] and for its examination of how most of the data fed into privatized AI infrastructure is "amputated" [ 18 ] from context or embodied experience and ultimately processed through crowdsourcing. On the critical side, a review in the academic journal Justice, Power and Resistance took exception to the "nightmarish visions of Big Brother" offered by McQuillan, and argued that while many elements of AI may pose concern, a critique should not be based on a caricature of what AI is, concluding that McQuillan's work is "less of a theory and more of a Manifesto". [ 3 ] Another review notes "a disconnect between the technical aspects of AI and the socio-political analysis McQuillan provides." [ 7 ] Although the book was published before the debate over ChatGPT and large language models heated up, it has not lost relevance to the AI discussion. [ 19 ] It is noted [ 20 ] for suggesting a link between beliefs in artificial intelligence and racialised and gendered visions of intelligence overall, whereby a certain type of rational, measurable intelligence is privileged, leading to "historical notions of hierarchies of being". [ 21 ] The blog Reboot praised McQuillan for offering a theory of harm of AI (why AI could end up hurting people and society) that goes beyond tackling in isolation specific predicted problems with AI-centric systems: bias, non-inclusiveness, exploitativeness, environmental destructiveness, opacity, and non-contestability. [ 12 ] One commentator [ 22 ] suggests that educational policy could also look at AI through McQuillan's reading: In his book Resisting AI, Dan McQuillan argues that "When we're thinking about the actuality of AI, we can't separate the calculations in the code from the social context of its application" .... McQuillan's particular concern is how many contemporary applications of AI are amplifying existing inequalities and injustices as well as deepening social divisions and instabilities. His book makes a powerful case for anticipating these effects and actively resisting them for the good of societies. Videos [ 19 ] [ 23 ] and podcasts [ 1 ] [ 24 ] [ 25 ] with an interest in AI and emerging technology have discussed the book.
https://en.wikipedia.org/wiki/Resisting_AI
In chromatography , resolution is a measure of the separation of two peaks of different retention time t in a chromatogram . [ 1 ] [ 2 ] [ 3 ] [ 4 ] Chromatographic peak resolution is given by {\displaystyle R_{s}={\frac {2(t_{R2}-t_{R1})}{w_{b1}+w_{b2}}},} where t R is the retention time and w b is the peak width at baseline. The bigger the time difference and/or the smaller the bandwidths, the better the resolution of the compounds. Here compound 1 elutes before compound 2. If the peaks have the same width w b , this reduces to {\displaystyle R_{s}={\frac {t_{R2}-t_{R1}}{w_{b}}}.} The theoretical plate height is given by {\displaystyle H={\frac {L}{N}},} where L is the column length and N the number of theoretical plates. [ 5 ] The relation between plate number and peak width at the base is given by {\displaystyle N=16\left({\frac {t_{R}}{w_{b}}}\right)^{2}.}
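These relations are straightforward to compute; the following is a minimal Python sketch (the function and variable names are illustrative, not from any standard library):

def resolution(t_r1, t_r2, w_b1, w_b2):
    # Peak resolution from retention times and baseline peak widths.
    return 2.0 * (t_r2 - t_r1) / (w_b1 + w_b2)

def plate_height(column_length, n_plates):
    # Theoretical plate height H = L / N.
    return column_length / n_plates

def plate_number(t_r, w_b):
    # Plate number from retention time and baseline width: N = 16 (t_R / w_b)^2.
    return 16.0 * (t_r / w_b) ** 2

# Two peaks eluting at 5.0 and 5.6 min, each 0.4 min wide at baseline:
print(resolution(5.0, 5.6, 0.4, 0.4))  # 1.5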
https://en.wikipedia.org/wiki/Resolution_(chromatography)
In mathematical logic and automated theorem proving , resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic . For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem . For first-order logic , resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic , providing a more practical method than one following from Gödel's completeness theorem . The resolution rule can be traced back to Davis and Putnam (1960); [ 1 ] however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson 's syntactical unification algorithm , which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness . [ 2 ] The clause produced by a resolution rule is sometimes called a resolvent . The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, ¬ c {\displaystyle \lnot c} is taken to be the complement to c {\displaystyle c} ). The resulting clause contains all the literals that do not have complements. Formally: {\displaystyle {\frac {a_{1}\lor \cdots \lor a_{i}\lor c,\quad b_{1}\lor \cdots \lor b_{j}\lor \lnot c}{a_{1}\lor \cdots \lor a_{i}\lor b_{1}\lor \cdots \lor b_{j}}}} where all the a 's, b 's, and c are literals. The clause produced by the resolution rule is called the resolvent of the two input clauses. It is the principle of consensus applied to clauses rather than terms. [ 3 ] When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied (independently) for each such pair; however, the result is always a tautology . Modus ponens can be seen as a special case of resolution (of a one-literal clause and a two-literal clause): the inference from p and p → q to q is equivalent to resolving the clauses p and ¬ p ∨ q , which yields the resolvent q . When coupled with a complete search algorithm , the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms. This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form . [ 4 ] The steps are as follows: the sentences (including the negation of the conjecture to be proved) are converted to conjunctive normal form; the resolution rule is applied to all pairs of clauses containing complementary literals, and each resolvent not already present is added to the set; this is repeated until either the empty clause is derived, in which case the original set is unsatisfiable, or no new clause can be derived, in which case it is satisfiable. One instance of this algorithm is the original Davis–Putnam algorithm that was later refined into the DPLL algorithm that removed the need for explicit representation of the resolvents. This description of the resolution technique uses a set S as the underlying data structure to represent resolution derivations. Lists , Trees and Directed Acyclic Graphs are other possible and common alternatives. Tree representations are more faithful to the fact that the resolution rule is binary. Together with a sequent notation for clauses, a tree representation also makes it clear how the resolution rule is related to a special case of the cut-rule , restricted to atomic cut-formulas.
However, tree representations are not as compact as set or list representations, because they explicitly show redundant subderivations of clauses that are used more than once in the derivation of the empty clause. Graph representations can be as compact in the number of clauses as list representations and they also store structural information regarding which clauses were resolved to derive each resolvent. a ∨ b , ¬ a ∨ c b ∨ c {\displaystyle {\frac {a\vee b,\quad \neg a\vee c}{b\vee c}}} In plain language: Suppose a {\displaystyle a} is false. In order for the premise a ∨ b {\displaystyle a\vee b} to be true, b {\displaystyle b} must be true. Alternatively, suppose a {\displaystyle a} is true. In order for the premise ¬ a ∨ c {\displaystyle \neg a\vee c} to be true, c {\displaystyle c} must be true. Therefore, regardless of falsehood or veracity of a {\displaystyle a} , if both premises hold, then the conclusion b ∨ c {\displaystyle b\vee c} is true. The resolution rule can be generalized to first-order logic as: [ 5 ] {\displaystyle {\frac {\Gamma _{1}\cup \{L_{1}\}\quad \Gamma _{2}\cup \{L_{2}\}}{(\Gamma _{1}\cup \Gamma _{2})\phi }}} where ϕ {\displaystyle \phi } is a most general unifier of L 1 {\displaystyle L_{1}} and L 2 ¯ {\displaystyle {\overline {L_{2}}}} , and Γ 1 {\displaystyle \Gamma _{1}} and Γ 2 {\displaystyle \Gamma _{2}} have no common variables. For example, the rule can be applied to the clause P ( x ) ∨ Q ( x ) {\displaystyle P(x)\lor Q(x)} and the clause ¬ P ( b ) {\displaystyle \neg P(b)} with [ b / x ] {\displaystyle [b/x]} as unifier, producing the resolvent Q ( b ) {\displaystyle Q(b)} . Here x is a variable and b is a constant. In first-order logic, resolution condenses the traditional syllogisms of logical inference down to a single rule. To understand how resolution works, consider the following example syllogism of term logic : All Greeks are Europeans; Homer is a Greek; therefore, Homer is a European. Or, more generally: from ∀ X . P ( X ) → Q ( X ) and P ( a ) , conclude Q ( a ) . To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form (CNF). In this form, all quantification becomes implicit: universal quantifiers on variables ( X , Y , ...) are simply omitted as understood, while existentially-quantified variables are replaced by Skolem functions . Here the premises become the clauses ¬ P ( X ) ∨ Q ( X ) and P ( a ) . So the question is, how does the resolution technique derive the last clause from the first two? The rule is simple: find two clauses containing the same predicate, where it is negated in one clause but not in the other; perform a unification on the two predicates; if the unification succeeds, discard the unified predicates, apply the unifying substitution to the remaining predicates of both clauses, and combine them into a new clause. To apply this rule to the above example, we find the predicate P occurs in negated form in the first clause, and in non-negated form in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the two produces the substitution X ↦ a . Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q ( X ), in this case), produces the conclusion: Q ( a ). For another example, consider the syllogistic form All Cretans are islanders; all islanders are liars; therefore all Cretans are liars. Or more generally: from ∀ X . P ( X ) → Q ( X ) and ∀ X . Q ( X ) → R ( X ) , conclude ∀ X . P ( X ) → R ( X ) . In CNF, the antecedents become ¬ P ( X ) ∨ Q ( X ) and ¬ Q ( Y ) ∨ R ( Y ) . (The variable in the second clause was renamed to make it clear that variables in different clauses are distinct.) Now, unifying Q ( X ) in the first clause with ¬ Q ( Y ) in the second clause means that X and Y become the same variable anyway. Substituting this into the remaining clauses and combining them gives the conclusion: ¬ P ( X ) ∨ R ( X ) . The resolution rule, as defined by Robinson, also incorporated factoring, which unifies two literals in the same clause, before or during the application of resolution as defined above. The resulting inference rule is refutation-complete, [ 6 ] in that a set of clauses is unsatisfiable if and only if there exists a derivation of the empty clause using only resolution, enhanced by factoring.
There are unsatisfiable clause sets in which every clause consists of exactly two literals, so that each possible resolvent again consists of two literals; for such a set, resolution without factoring can never produce the empty clause, while factoring allows it to be derived. [ 7 ] Generalizations of the above resolution rule have been devised that do not require the originating formulas to be in clausal normal form . [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] These techniques are useful mainly in interactive theorem proving where it is important to preserve human readability of intermediate result formulas. Besides, they avoid combinatorial explosion during transformation to clause-form, [ 10 ] : 98 and sometimes save resolution steps. [ 13 ] : 425 For propositional logic, Murray [ 9 ] : 18 and Manna and Waldinger [ 10 ] : 98 use the rule {\displaystyle {\frac {F[p]\quad G[p]}{F[{\textit {true}}]\lor G[{\textit {false}}]}},} where p {\displaystyle p} denotes an arbitrary formula, F [ p ] {\displaystyle F[p]} denotes a formula containing p {\displaystyle p} as a subformula, and F [ true ] {\displaystyle F[{\textit {true}}]} is built by replacing in F [ p ] {\displaystyle F[p]} every occurrence of p {\displaystyle p} by true {\displaystyle {\textit {true}}} ; likewise for G {\displaystyle G} . The resolvent F [ true ] ∨ G [ false ] {\displaystyle F[{\textit {true}}]\lor G[{\textit {false}}]} is intended to be simplified using rules like q ∧ true ⟹ q {\displaystyle q\land {\textit {true}}\implies q} , etc. In order to prevent generating useless trivial resolvents, the rule shall be applied only when p {\displaystyle p} has at least one "negative" and "positive" [ 14 ] occurrence in F {\displaystyle F} and G {\displaystyle G} , respectively. Murray has shown that this rule is complete if augmented by appropriate logical transformation rules. [ 10 ] : 103 Traugott uses the rule {\displaystyle {\frac {F[p^{+},p^{-}]\quad G[p]}{F[G[{\textit {true}}],\lnot G[{\textit {false}}]]}},} where the exponents of p {\displaystyle p} indicate the polarity of its occurrences. While G [ true ] {\displaystyle G[{\textit {true}}]} and G [ false ] {\displaystyle G[{\textit {false}}]} are built as before, the formula F [ G [ true ] , ¬ G [ false ] ] {\displaystyle F[G[{\textit {true}}],\lnot G[{\textit {false}}]]} is obtained by replacing each positive and each negative occurrence of p {\displaystyle p} in F {\displaystyle F} with G [ true ] {\displaystyle G[{\textit {true}}]} and G [ false ] {\displaystyle G[{\textit {false}}]} , respectively. Similar to Murray's approach, appropriate simplifying transformations are to be applied to the resolvent. Traugott proved his rule to be complete, provided ∧ , ∨ , → , ¬ {\displaystyle \land ,\lor ,\rightarrow ,\lnot } are the only connectives used in formulas. [ 12 ] : 398–400 Traugott's resolvent is stronger than Murray's. [ 12 ] : 395 Moreover, it does not introduce new binary junctors, thus avoiding a tendency towards clausal form in repeated resolution. However, formulas may grow longer when a small p {\displaystyle p} is replaced multiple times with a larger G [ true ] {\displaystyle G[{\textit {true}}]} and/or G [ false ] {\displaystyle G[{\textit {false}}]} .
[ 12 ] : 398 As an example, starting from user-given assumptions, the Murray rule can be used to infer a contradiction, [ 15 ] and the Traugott rule can be used for the same purpose. [ 12 ] : 397 For first-order predicate logic, Murray's rule is generalized to allow distinct, but unifiable, subformulas p 1 {\displaystyle p_{1}} and p 2 {\displaystyle p_{2}} of F {\displaystyle F} and G {\displaystyle G} , respectively. If ϕ {\displaystyle \phi } is the most general unifier of p 1 {\displaystyle p_{1}} and p 2 {\displaystyle p_{2}} , then the generalized resolvent is F ϕ [ true ] ∨ G ϕ [ false ] {\displaystyle F\phi [{\textit {true}}]\lor G\phi [{\textit {false}}]} . While the rule remains sound if a more special substitution ϕ {\displaystyle \phi } is used, no such rule applications are needed to achieve completeness. [ citation needed ] Traugott's rule is generalized to allow several pairwise distinct subformulas p 1 , … , p m {\displaystyle p_{1},\ldots ,p_{m}} of F {\displaystyle F} and p m + 1 , … , p n {\displaystyle p_{m+1},\ldots ,p_{n}} of G {\displaystyle G} , as long as p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} have a common most general unifier, say ϕ {\displaystyle \phi } . The generalized resolvent is obtained after applying ϕ {\displaystyle \phi } to the parent formulas, thus making the propositional version applicable. Traugott's completeness proof relies on the assumption that this fully general rule is used; [ 12 ] : 401 it is not clear whether his rule would remain complete if restricted to p 1 = ⋯ = p m {\displaystyle p_{1}=\cdots =p_{m}} and p m + 1 = ⋯ = p n {\displaystyle p_{m+1}=\cdots =p_{n}} . [ 16 ] Paramodulation is a related technique for reasoning on sets of clauses where the predicate symbol is equality. It generates all "equal" versions of clauses, except reflexive identities. The paramodulation operation takes a positive "from" clause, which must contain an equality literal. It then searches for an "into" clause with a subterm that unifies with one side of the equality. The subterm is then replaced by the other side of the equality. The general aim of paramodulation is to reduce the system to atoms, reducing the size of the terms when substituting. [ 17 ]
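As a concrete illustration of the propositional technique described above, here is a minimal Python sketch of resolution saturation; representing literals as signed integers and clauses as frozensets is an assumption of this example, not notation from the article:

from itertools import combinations

def resolve(c1, c2):
    # All resolvents of two clauses; a literal is an int, with -x the
    # complement of x.
    out = set()
    for lit in c1:
        if -lit in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def refutes(clauses):
    # Saturate the clause set under resolution; True iff the empty clause
    # is derived, i.e. the set is unsatisfiable.
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:           # empty clause: contradiction found
                    return True
                if r not in clauses:
                    new.add(r)
        if not new:                 # saturated without the empty clause
            return False
        clauses |= new

# {a}, {-a or b}, {-b} is unsatisfiable:
print(refutes([{1}, {-1, 2}, {-2}]))  # True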
https://en.wikipedia.org/wiki/Resolution_(logic)
In mathematical logic , proof compression by splitting is an algorithm that operates as a post-process on resolution proofs. It was proposed by Scott Cotton in his paper "Two Techniques for Minimizing Resolution Proof". [ 1 ] The Splitting algorithm is based on the following observation: Given a proof of unsatisfiability π {\displaystyle \pi } and a variable x {\displaystyle x} , it is easy to re-arrange (split) the proof into a proof of x {\displaystyle x} and a proof of ¬ x {\displaystyle \neg x} , and the recombination of these two proofs (by an additional resolution step) may result in a proof smaller than the original. Note that applying Splitting to a proof π {\displaystyle \pi } using a variable x {\displaystyle x} does not invalidate a later application of the algorithm using a different variable y {\displaystyle y} . In fact, the method proposed by Cotton [ 1 ] generates a sequence of proofs π 1 π 2 … {\displaystyle \pi _{1}\pi _{2}\ldots } , where each proof π i + 1 {\displaystyle \pi _{i+1}} is the result of applying Splitting to π i {\displaystyle \pi _{i}} . During the construction of the sequence, if a proof π j {\displaystyle \pi _{j}} happens to be too large, π j + 1 {\displaystyle \pi _{j+1}} is set to be the smallest proof in { π 1 , π 2 , … , π j } {\displaystyle \{\pi _{1},\pi _{2},\ldots ,\pi _{j}\}} . To achieve a better compression/time ratio, a heuristic for variable selection is desirable. For this purpose, Cotton [ 1 ] defines the "additivity" of a resolution step with antecedents p {\displaystyle p} and n {\displaystyle n} and resolvent r {\displaystyle r} . Then, for each variable v {\displaystyle v} , a score is calculated by summing the additivity of all the resolution steps in π {\displaystyle \pi } with pivot v {\displaystyle v} together with the number of these resolution steps. Denoting each score calculated this way by a d d ( v , π ) {\displaystyle add(v,\pi )} , each variable is selected with a probability proportional to its score. To split a proof of unsatisfiability π {\displaystyle \pi } into a proof π x {\displaystyle \pi _{x}} of x {\displaystyle x} and a proof π ¬ x {\displaystyle \pi _{\neg x}} of ¬ x {\displaystyle \neg x} , Cotton [ 1 ] proposes the following: Let l {\displaystyle l} denote a literal and p ⊕ x n {\displaystyle p\oplus _{x}n} denote the resolvent of clauses p {\displaystyle p} and n {\displaystyle n} where x ∈ p {\displaystyle x\in p} and ¬ x ∈ n {\displaystyle \neg x\in n} . Then, define the map π l {\displaystyle \pi _{l}} on nodes in the resolution dag of π {\displaystyle \pi } . Also, let o {\displaystyle o} be the empty clause in π {\displaystyle \pi } . Then, π x {\displaystyle \pi _{x}} and π ¬ x {\displaystyle \pi _{\neg x}} are obtained by computing π x ( o ) {\displaystyle \pi _{x}(o)} and π ¬ x ( o ) {\displaystyle \pi _{\neg x}(o)} , respectively.
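A small Python sketch of the variable-selection heuristic follows; the additivity formula used here, max(|r| − max(|p|, |n|), 0), should be treated as an assumption of this sketch, and all names are illustrative:

import random

def additivity(p, n, r):
    # Assumed definition: how many literals the resolvent r adds over its
    # larger antecedent (p, n, r are sets of literals).
    return max(len(r) - max(len(p), len(n)), 0)

def pick_split_variable(proof_steps, rng=random):
    # proof_steps: iterable of (pivot, p, n, r) tuples, one per resolution
    # step.  Score each pivot by summed additivity plus its number of
    # steps, then sample a variable with probability proportional to score.
    score = {}
    for pivot, p, n, r in proof_steps:
        score[pivot] = score.get(pivot, 0) + additivity(p, n, r) + 1
    variables = list(score)
    weights = [score[v] for v in variables]
    return rng.choices(variables, weights=weights)[0]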
https://en.wikipedia.org/wiki/Resolution_proof_compression_by_splitting
In proof theory , an area of mathematical logic , resolution proof reduction via local context rewriting is a technique for compressing resolution proofs by rewriting their local contexts. [ 1 ] This proof compression method was presented as an algorithm named ReduceAndReconstruct , which operates as a post-process on resolution proofs. ReduceAndReconstruct is based on a set of local proof rewriting rules that transform a subproof into an equivalent or stronger one. [ 1 ] Each rule is defined to match a specific context. A context [ 1 ] involves two pivots ( p {\displaystyle p} and q {\displaystyle q} ) and five clauses ( α {\displaystyle \alpha } , β {\displaystyle \beta } , γ {\displaystyle \gamma } , δ {\displaystyle \delta } and η {\displaystyle \eta } ): p {\displaystyle p} is contained in β {\displaystyle \beta } and γ {\displaystyle \gamma } (with opposite polarity) and q {\displaystyle q} is contained in δ {\displaystyle \delta } and α {\displaystyle \alpha } (also with opposite polarity). The rewriting rules proposed by Simone et al. [ 1 ] are listed below; the idea of the algorithm is to reduce proof size by opportunistically applying these rules.
{\displaystyle {\cfrac {{\cfrac {stC\qquad {\overline {s}}tD}{tCD}}\,\operatorname {var} (s)\qquad {\overline {t}}E}{CDE}}\,\operatorname {var} (t)\Rightarrow {\cfrac {{\cfrac {stC\qquad {\overline {t}}E}{sCE}}\,\operatorname {var} (t)\qquad {\cfrac {{\overline {t}}E\qquad {\overline {s}}tD}{{\overline {s}}DE}}\,\operatorname {var} (t)}{CDE}}\,\operatorname {var} (s)}
{\displaystyle {\cfrac {{\cfrac {stC\qquad {\overline {s}}D}{tCD}}\,\operatorname {var} (s)\qquad {\overline {t}}E}{CDE}}\,\operatorname {var} (t)\Rightarrow {\cfrac {{\cfrac {stC\qquad {\overline {t}}E}{sCE}}\,\operatorname {var} (t)\qquad {\overline {s}}D}{CDE}}\,\operatorname {var} (s)}
{\displaystyle {\cfrac {{\cfrac {stC\qquad {\overline {s}}tD}{tCD}}\,\operatorname {var} (s)\qquad s{\overline {t}}E}{sCDE}}\,\operatorname {var} (t)\Rightarrow {\cfrac {stC\qquad s{\overline {t}}E}{sCE}}\,\operatorname {var} (t)}
{\displaystyle {\cfrac {{\cfrac {stC\qquad {\overline {s}}D}{tDC}}\,\operatorname {var} (s)\qquad s{\overline {t}}E}{sCDE}}\,\operatorname {var} (t)\Rightarrow {\cfrac {{\cfrac {stC\qquad s{\overline {t}}E}{sCE}}\,\operatorname {var} (t)\qquad {\overline {s}}D}{CDE}}\,\operatorname {var} (s)}
{\displaystyle {\cfrac {{\cfrac {stC\qquad {\overline {s}}D}{tDC}}\,\operatorname {var} (s)\qquad {\overline {s}}{\overline {t}}E}{{\overline {s}}CDE}}\,\operatorname {var} (t)\Rightarrow {\overline {s}}D}
{\displaystyle {\cfrac {{\cfrac {stC\qquad {\overline {s}}tD}{tCD}}\,\operatorname {var} (s)\qquad {\overline {t}}E}{CDE}}\,\operatorname {var} (t)\Leftarrow {\cfrac {{\cfrac {stC\qquad {\overline {t}}E}{sCE}}\,\operatorname {var} (t)\qquad {\cfrac {{\overline {t}}E\qquad {\overline {s}}tD}{{\overline {s}}DE}}\,\operatorname {var} (t)}{CDE}}\,\operatorname {var} (s)}
{\displaystyle {\cfrac {{\cfrac {stC\qquad {\overline {s}}D}{tCD}}\,\operatorname {var} (s)\qquad s{\overline {t}}E}{sCDE}}\,\operatorname {var} (t)\Rightarrow {\cfrac {stC\qquad s{\overline {t}}E}{sCE}}\,\operatorname {var} (t)}
The first five rules were introduced in an earlier paper. [ 2 ] As an example, [ 1 ] applying rule B2' to a context can make the proof illegal, because the literal o {\displaystyle o} can go missing from the transformed root clause. To reconstruct the proof, one can remove o {\displaystyle o} together with the last resolution step (which is now redundant); the final result is a legal (and stronger) proof. Such a proof can then be reduced further by applying rule A2 to create a new opportunity to apply rule B2'. [ 1 ] There are usually a huge number of contexts where rule A2 may be applied, so an exhaustive approach is not feasible in general. One proposal [ 1 ] is to execute ReduceAndReconstruct as a loop with two termination criteria: a number of iterations and a timeout (whichever is reached first). ReduceAndReconstruct uses a helper function, ReduceAndReconstructLoop . The first part of the algorithm performs a topological ordering of the resolution graph (taking edges as going from antecedents to resolvents). This is done to ensure that each node is visited after its antecedents (this way, broken resolution steps are always found and fixed). [ 1 ] If the input proof is not a tree (in general, resolution graphs are directed acyclic graphs ), then the clause δ {\displaystyle \delta } of a context may be involved in more than one resolution step. In this case, to ensure that an application of a rewriting rule does not interfere with other resolution steps, a safe solution is to create a copy of the node represented by clause δ {\displaystyle \delta } . [ 1 ] This solution increases proof size, so some caution is needed. The heuristic for rule selection is important for achieving good compression performance. Simone et al. [ 1 ] use the following order of preference for the rules (if applicable to the given context): B2 > B3 > { B2', B1 } > A1' > A2 (X > Y means that X is preferred over Y). Experiments have shown that ReduceAndReconstruct alone has a worse compression/time ratio than the algorithm RecyclePivots . [ 3 ] However, while RecyclePivots can be applied only once to a proof, ReduceAndReconstruct may be applied multiple times to produce better compression. Combining the ReduceAndReconstruct and RecyclePivots algorithms has led to good results. [ 1 ]
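A minimal Python sketch of this outer loop, written from the description above (the names and the rewrite_pass helper are illustrative assumptions, not the pseudocode from the paper):

import time

def reduce_and_reconstruct(proof, max_iterations, timeout_seconds, rewrite_pass):
    # Run rewrite passes until the iteration budget or the timeout is
    # exhausted, whichever comes first.  rewrite_pass is assumed to visit
    # the proof nodes in topological order (antecedents before resolvents),
    # apply the preferred applicable rule at each context, fix any broken
    # steps, and return the transformed proof.
    deadline = time.monotonic() + timeout_seconds
    iterations = 0
    while iterations < max_iterations and time.monotonic() < deadline:
        proof = rewrite_pass(proof)
        iterations += 1
    return proof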
https://en.wikipedia.org/wiki/Resolution_proof_reduction_via_local_context_rewriting
In Galois theory , a discipline within the field of abstract algebra , a resolvent for a permutation group G is a polynomial whose coefficients depend polynomially on the coefficients of a given polynomial p and has, roughly speaking, a rational root if and only if the Galois group of p is included in G . More exactly, if the Galois group is included in G , then the resolvent has a rational root, and the converse is true if the rational root is a simple root . Resolvents were introduced by Joseph Louis Lagrange and systematically used by Évariste Galois . Nowadays they are still a fundamental tool to compute Galois groups . The simplest examples of resolvents are the discriminant , which is a resolvent for the alternating group ; the resolvent cubic of a quartic polynomial ; and the Cayley resolvent , a resolvent for the maximal solvable transitive subgroup in degree five. These three resolvents have the property of being always separable , which means that, if they have a multiple root , then the polynomial p is not irreducible . It is not known if there is an always separable resolvent for every group of permutations. For every equation the roots may be expressed in terms of radicals and of a root of a resolvent for a solvable group, because the Galois group of the equation over the field generated by this root is solvable. Let n be a positive integer , which will be the degree of the equation that we will consider, and ( X 1 , ..., X n ) an ordered list of indeterminates . According to Vieta's formulas this defines the generic monic polynomial of degree n F ( X ) = X n + ∑ i = 1 n ( − 1 ) i E i X n − i = ∏ i = 1 n ( X − X i ) , {\displaystyle F(X)=X^{n}+\sum _{i=1}^{n}(-1)^{i}E_{i}X^{n-i}=\prod _{i=1}^{n}(X-X_{i}),} where E i is the i th elementary symmetric polynomial . The symmetric group S n acts on the X i by permuting them, and this induces an action on the polynomials in the X i . The stabilizer of a given polynomial under this action is generally trivial, but some polynomials have a bigger stabilizer. For example, the stabilizer of an elementary symmetric polynomial is the whole group S n . If the stabilizer is non-trivial, the polynomial is fixed by some non-trivial subgroup G ; it is said to be an invariant of G . Conversely, given a subgroup G of S n , an invariant of G is a resolvent invariant for G if it is not an invariant of any bigger subgroup of S n . [ 1 ] Finding invariants for a given subgroup G of S n is relatively easy; one can sum the images of a monomial under the action of G . However, it may occur that the resulting polynomial is an invariant for a larger group. For example, consider the case of the subgroup G of S 4 of order 4, consisting of (12)(34) , (13)(24) , (14)(23) and the identity (for the notation, see Permutation group ). The monomial X 1 X 2 gives the invariant 2( X 1 X 2 + X 3 X 4 ) . It is not a resolvent invariant for G , because being invariant by (12) , it is in fact a resolvent invariant for the larger dihedral subgroup D 4 : ⟨(12), (1324)⟩ , and is used to define the resolvent cubic of the quartic equation . If P is a resolvent invariant for a group G of index m inside S n , then its orbit under S n has order m . Let P 1 , ..., P m be the elements of this orbit. Then the polynomial R G ( Y ) = ∏ i = 1 m ( Y − P i ) {\displaystyle R_{G}(Y)=\prod _{i=1}^{m}(Y-P_{i})} is invariant under S n . Thus, when expanded, its coefficients are polynomials in the X i that are invariant under the action of the symmetry group and thus may be expressed as polynomials in the elementary symmetric polynomials. In other words, R G is an irreducible polynomial in Y whose coefficients are polynomial in the coefficients of F . Since it has the resolvent invariant as a root, it is called a resolvent (sometimes resolvent equation ).
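The orbit-and-product construction can be checked symbolically; below is a short sympy sketch for the invariant X1X2 + X3X4 discussed above (the final invariance check is only a sanity test, not a proof):

from itertools import permutations
from sympy import symbols, expand, prod

X = symbols('X1:5')          # the indeterminates X1, X2, X3, X4
Y = symbols('Y')
invariant = X[0]*X[1] + X[2]*X[3]

# Orbit of the invariant under S4; it has 3 elements, so the stabilizer
# (the dihedral group of order 8) has index 3 in S4.
orbit = set()
for sigma in permutations(X):
    orbit.add(expand(invariant.subs(list(zip(X, sigma)), simultaneous=True)))

R = expand(prod(Y - p for p in orbit))   # the resolvent, cubic in Y
print(len(orbit))                        # 3

# Sanity test: R is unchanged by a transposition of two indeterminates,
# as expected of a polynomial symmetric in X1..X4.
swapped = R.subs([(X[0], X[1]), (X[1], X[0])], simultaneous=True)
print(expand(R - swapped) == 0)          # True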
Consider now an irreducible polynomial f with coefficients in a given field K (typically the field of rationals ) and roots x i in an algebraically closed field extension . Substituting the X i by the x i and the coefficients of F by those of f in the above, we get a polynomial R G ( f ) ( Y ) {\displaystyle R_{G}^{(f)}(Y)} , also called a resolvent (or specialized resolvent in case of ambiguity). If the Galois group of f is contained in G , the specialization of the resolvent invariant is invariant by G and is thus a root of R G ( f ) ( Y ) {\displaystyle R_{G}^{(f)}(Y)} that belongs to K (is rational over K ). Conversely, if R G ( f ) ( Y ) {\displaystyle R_{G}^{(f)}(Y)} has a rational root which is not a multiple root, the Galois group of f is contained in G . There are some variants in the terminology. The Galois group of a polynomial of degree n {\displaystyle n} is S n {\displaystyle S_{n}} or a proper subgroup of it. If a polynomial is separable and irreducible, then the corresponding Galois group is a transitive subgroup. Transitive subgroups of S n {\displaystyle S_{n}} form a directed graph: one group can be a subgroup of several groups. One resolvent can tell whether the Galois group of a polynomial is a (not necessarily proper) subgroup of a given group. The resolvent method is just a systematic way to check groups one by one until only one group is possible. This does not mean that every group must be checked: every resolvent can rule out many possible groups. For example, for degree five polynomials there is never a need for a resolvent of D 5 {\displaystyle D_{5}} : resolvents for A 5 {\displaystyle A_{5}} and M 20 {\displaystyle M_{20}} give the desired information. One way is to begin from maximal (transitive) subgroups until the right one is found and then continue with maximal subgroups of that.
https://en.wikipedia.org/wiki/Resolvent_(Galois_theory)
In algebra , a resolvent cubic is one of several distinct, although related, cubic polynomials defined from a monic polynomial of degree four: {\displaystyle P(x)=x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}.} In each case, the coefficients of the resolvent cubic can be obtained from the coefficients of P ( x ) using only sums, subtractions and multiplications, and knowing the roots of the resolvent cubic of P ( x ) is useful for finding the roots of P ( x ) itself; hence the name "resolvent cubic". Suppose that the coefficients of P ( x ) belong to a field k whose characteristic is different from 2 . In other words, we are working in a field in which 1 + 1 ≠ 0 . Whenever roots of P ( x ) are mentioned, they belong to some extension K of k such that P ( x ) factors into linear factors in K [ x ] . If k is the field Q of rational numbers, then K can be the field C of complex numbers or the field of algebraic numbers . In some cases, the concept of resolvent cubic is defined only when P ( x ) is a quartic in depressed form—that is, when a 3 = 0 . Note that the fourth and fifth definitions below also make sense and that the relationship between these resolvent cubics and P ( x ) is still valid if the characteristic of k is equal to 2 . Suppose that P ( x ) is a depressed quartic—that is, that a 3 = 0 . A possible definition of the resolvent cubic of P ( x ) is: [ 1 ] {\displaystyle R_{1}(y)=8y^{3}+8a_{2}y^{2}+(2a_{2}^{2}-8a_{0})y-a_{1}^{2}.} The origin of this definition lies in applying Ferrari's method to find the roots of P ( x ) . To be more precise: Add a new unknown, y , to x 2 + a 2 /2 . Now you have: {\displaystyle P(x)=\left(x^{2}+{\frac {a_{2}}{2}}+y\right)^{2}-\left(2yx^{2}-a_{1}x+\left({\frac {a_{2}}{2}}+y\right)^{2}-a_{0}\right).} If the subtracted expression (a quadratic in x ) is a square, it can only be the square of {\displaystyle {\sqrt {2y}}\,x-{\frac {a_{1}}{2{\sqrt {2y}}}}.} But the equality {\displaystyle \left({\frac {a_{2}}{2}}+y\right)^{2}-a_{0}={\frac {a_{1}^{2}}{8y}}} is equivalent to {\displaystyle a_{1}^{2}=8y\left(\left({\frac {a_{2}}{2}}+y\right)^{2}-a_{0}\right),} and this is the same thing as the assertion that R 1 ( y ) = 0. If y 0 is a root of R 1 ( y ) , then it is a consequence of the computations made above that the roots of P ( x ) are the roots of the polynomial {\displaystyle x^{2}-{\sqrt {2y_{0}}}\,x+{\frac {a_{2}}{2}}+y_{0}+{\frac {a_{1}}{2{\sqrt {2y_{0}}}}}} together with the roots of the polynomial {\displaystyle x^{2}+{\sqrt {2y_{0}}}\,x+{\frac {a_{2}}{2}}+y_{0}-{\frac {a_{1}}{2{\sqrt {2y_{0}}}}}.} Of course, this makes no sense if y 0 = 0 , but since the constant term of R 1 ( y ) is – a 1 2 , 0 is a root of R 1 ( y ) if and only if a 1 = 0 , and in this case the roots of P ( x ) can be found using the quadratic formula . Another possible definition [ 1 ] (still supposing that P ( x ) is a depressed quartic) is {\displaystyle R_{2}(y)=8y^{3}-8a_{2}y^{2}+(2a_{2}^{2}-8a_{0})y+a_{1}^{2}.} The origin of this definition is similar to the previous one. This time, we start from x 2 + a 2 /2 − y , and a computation similar to the previous one shows that the corresponding expression is a square if and only if R 2 ( y ) = 0 . A simple computation shows that {\displaystyle R_{2}(y)=-R_{1}(-y).} Another possible definition [ 2 ] [ 3 ] (again, supposing that P ( x ) is a depressed quartic) is {\displaystyle R_{3}(y)=y^{3}+2a_{2}y^{2}+(a_{2}^{2}-4a_{0})y-a_{1}^{2}.} The origin of this definition lies in another method of solving quartic equations, namely Descartes' method . If you try to find the roots of P ( x ) by expressing it as a product of two monic quadratic polynomials x 2 + αx + β and x 2 – αx + γ , then {\displaystyle \beta +\gamma -\alpha ^{2}=a_{2},\quad \alpha (\gamma -\beta )=a_{1},\quad \beta \gamma =a_{0}.} If there is a solution of this system with α ≠ 0 (note that if a 1 ≠ 0 , then this is automatically true for any solution), the previous system is equivalent to {\displaystyle \beta +\gamma =\alpha ^{2}+a_{2},\quad \gamma -\beta ={\frac {a_{1}}{\alpha }},\quad \beta \gamma =a_{0}.} It is a consequence of the first two equations that then {\displaystyle 2\beta =\alpha ^{2}+a_{2}-{\frac {a_{1}}{\alpha }}} and {\displaystyle 2\gamma =\alpha ^{2}+a_{2}+{\frac {a_{1}}{\alpha }}.} After replacing, in the third equation, β and γ by these values one gets that {\displaystyle \left(\alpha ^{2}+a_{2}\right)^{2}-{\frac {a_{1}^{2}}{\alpha ^{2}}}=4a_{0},} and this is equivalent to the assertion that α 2 is a root of R 3 ( y ) . So, again, knowing the roots of R 3 ( y ) helps to determine the roots of P ( x ) . Note that {\displaystyle R_{3}(y)=R_{1}\left({\frac {y}{2}}\right).} Still another possible definition is [ 4 ] {\displaystyle R_{4}(y)=y^{3}-a_{2}y^{2}+(a_{1}a_{3}-4a_{0})y-(a_{1}^{2}+a_{0}a_{3}^{2}-4a_{0}a_{2}).} In fact, if the roots of P ( x ) are α 1 , α 2 , α 3 , and α 4 , then {\displaystyle R_{4}(y)={\bigl (}y-(\alpha _{1}\alpha _{2}+\alpha _{3}\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}\alpha _{3}+\alpha _{2}\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}\alpha _{4}+\alpha _{2}\alpha _{3}){\bigr )},} a fact that follows from Vieta's formulas . In other words, R 4 ( y ) is the monic polynomial whose roots are α 1 α 2 + α 3 α 4 , α 1 α 3 + α 2 α 4 , and α 1 α 4 + α 2 α 3 . It is easy to see that, for instance, {\displaystyle (\alpha _{1}\alpha _{2}+\alpha _{3}\alpha _{4})-(\alpha _{1}\alpha _{3}+\alpha _{2}\alpha _{4})=(\alpha _{1}-\alpha _{4})(\alpha _{2}-\alpha _{3}).} Therefore, P ( x ) has a multiple root if and only if R 4 ( y ) has a multiple root. More precisely, P ( x ) and R 4 ( y ) have the same discriminant .
One should note that if P ( x ) is a depressed polynomial, then {\displaystyle R_{4}(y)=R_{3}(y-a_{2}).} Yet another definition is [ 5 ] [ 6 ] {\displaystyle R_{5}(y)=y^{3}-2a_{2}y^{2}+(a_{2}^{2}+a_{3}a_{1}-4a_{0})y+a_{1}^{2}-a_{3}a_{2}a_{1}+a_{3}^{2}a_{0}.} If, as above, the roots of P ( x ) are α 1 , α 2 , α 3 , and α 4 , then {\displaystyle R_{5}(y)={\bigl (}y-(\alpha _{1}+\alpha _{2})(\alpha _{3}+\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}+\alpha _{3})(\alpha _{2}+\alpha _{4}){\bigr )}{\bigl (}y-(\alpha _{1}+\alpha _{4})(\alpha _{2}+\alpha _{3}){\bigr )},} again as a consequence of Vieta's formulas . In other words, R 5 ( y ) is the monic polynomial whose roots are ( α 1 + α 2 )( α 3 + α 4 ) , ( α 1 + α 3 )( α 2 + α 4 ) , and ( α 1 + α 4 )( α 2 + α 3 ) . It is easy to see that, for instance, {\displaystyle (\alpha _{1}+\alpha _{2})(\alpha _{3}+\alpha _{4})-(\alpha _{1}+\alpha _{3})(\alpha _{2}+\alpha _{4})=-(\alpha _{1}-\alpha _{4})(\alpha _{2}-\alpha _{3}).} Therefore, as it happens with R 4 ( y ) , P ( x ) has a multiple root if and only if R 5 ( y ) has a multiple root. More precisely, P ( x ) and R 5 ( y ) have the same discriminant. This is also a consequence of the fact that R 5 ( y + a 2 ) = − R 4 (− y ) . Note that if P ( x ) is a depressed polynomial, then {\displaystyle R_{5}(y)=R_{2}\left({\frac {y}{2}}\right).} It was explained above how R 1 ( y ) , R 2 ( y ) , and R 3 ( y ) can be used to find the roots of P ( x ) if this polynomial is depressed. In the general case, one simply has to find the roots of the depressed polynomial P ( x − a 3 /4) . For each root x 0 of this polynomial, x 0 − a 3 /4 is a root of P ( x ) . If a quartic polynomial P ( x ) is reducible in k [ x ] , then it is the product of two quadratic polynomials or the product of a linear polynomial by a cubic polynomial. This second possibility occurs if and only if P ( x ) has a root in k . In order to determine whether or not P ( x ) can be expressed as the product of two quadratic polynomials, let us assume, for simplicity, that P ( x ) is a depressed polynomial. Then it was seen above that if the resolvent cubic R 3 ( y ) has a nonzero root of the form α 2 , for some α ∈ k , then such a decomposition exists. This can be used to prove that, in R [ x ] , every quartic polynomial without real roots can be expressed as the product of two quadratic polynomials. Let P ( x ) be such a polynomial. We can assume without loss of generality that P ( x ) is monic. We can also assume without loss of generality that it is a depressed polynomial, because P ( x ) can be expressed as the product of two quadratic polynomials if and only if P ( x − a 3 /4) can, and this polynomial is a depressed one. Then R 3 ( y ) = y 3 + 2 a 2 y 2 + ( a 2 2 − 4 a 0 ) y − a 1 2 . There are two cases: if a 1 ≠ 0 , then R 3 (0) = − a 1 2 < 0 and, since R 3 ( y ) tends to +∞ as y grows, R 3 ( y ) has a positive real root α 2 , which yields the decomposition; if a 1 = 0 , then P ( x ) is a quadratic polynomial in x 2 and can be factored accordingly. More generally, if k is a real closed field , then every quartic polynomial without roots in k can be expressed as the product of two quadratic polynomials in k [ x ] . Indeed, this statement can be expressed in first-order logic and any such statement that holds for R also holds for any real closed field. A similar approach can be used to get an algorithm [ 2 ] to determine whether or not a quartic polynomial P ( x ) ∈ Q [ x ] is reducible and, if it is, how to express it as a product of polynomials of smaller degree. Again, we will suppose that P ( x ) is monic and depressed. Then P ( x ) is reducible if and only if at least one of the following conditions holds: P ( x ) has a root in Q (which can be detected with the rational root theorem ); R 3 ( y ) has a nonzero root of the form α 2 for some nonzero α ∈ Q ; or a 1 = 0 and a 2 2 − 4 a 0 is the square of a rational number. The resolvent cubic of an irreducible quartic polynomial P ( x ) can be used to determine its Galois group G ; that is, the Galois group of the splitting field of P ( x ) . Let m be the degree over k of the splitting field of the resolvent cubic (it can be either R 4 ( y ) or R 5 ( y ) ; they have the same splitting field). Then the group G is a subgroup of the symmetric group S 4 . More precisely, the value of m (which can be 1, 2, 3, or 6), together with information such as whether the discriminant of P ( x ) is a square in k , determines G . [ 4 ]
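The Vieta description of R4(y) can be checked numerically; the following Python sketch uses an arbitrary quartic and numpy's root finder (the coefficients and names are illustrative):

import numpy as np

# An arbitrary quartic: x^4 - 5x^2 + 2x + 3 (coefficients, highest power first).
quartic = [1, 0, -5, 2, 3]
a1, a2, a3, a4 = np.roots(quartic)

# The three root combinations that define R4(y):
r4_roots = [a1*a2 + a3*a4, a1*a3 + a2*a4, a1*a4 + a2*a3]

# Monic cubic with those roots; any imaginary parts are numerical noise here,
# since this particular quartic has four real roots.
R4 = np.poly(r4_roots)
print(np.round(R4.real, 6))   # [1, 5, -12, -64], i.e. y^3 + 5y^2 - 12y - 64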
https://en.wikipedia.org/wiki/Resolvent_cubic
In linear algebra and operator theory , the resolvent set of a linear operator is a set of complex numbers for which the operator is in some sense " well-behaved ". The resolvent set plays an important role in the resolvent formalism . Let X be a Banach space and let L : D ( L ) → X {\displaystyle L\colon D(L)\rightarrow X} be a linear operator with domain D ( L ) ⊆ X {\displaystyle D(L)\subseteq X} . Let id denote the identity operator on X . For any λ ∈ C {\displaystyle \lambda \in \mathbb {C} } , let {\displaystyle L_{\lambda }=L-\lambda \,\operatorname {id} .} A complex number λ {\displaystyle \lambda } is said to be a regular value if the following three statements are true: 1. L λ {\displaystyle L_{\lambda }} is injective, that is, it has a set-theoretic inverse L λ − 1 {\displaystyle L_{\lambda }^{-1}} ; 2. L λ − 1 {\displaystyle L_{\lambda }^{-1}} is a bounded linear operator; 3. L λ − 1 {\displaystyle L_{\lambda }^{-1}} is defined on a dense subspace of X . The resolvent set of L is the set of all regular values of L : {\displaystyle \rho (L)=\{\lambda \in \mathbb {C} \mid \lambda {\text{ is a regular value of }}L\}.} The spectrum is the complement of the resolvent set and subject to a mutually singular spectral decomposition into the point spectrum (when condition 1 fails), the continuous spectrum (when condition 2 fails) and the residual spectrum (when condition 3 fails). If L {\displaystyle L} is a closed operator , then so is each L λ {\displaystyle L_{\lambda }} , and condition 3 may be replaced by requiring that L λ {\displaystyle L_{\lambda }} be surjective .
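In finite dimensions, the three conditions reduce to invertibility of L − λ id, so the resolvent set is simply the complement of the set of eigenvalues; a small numpy sketch (an illustration of the finite-dimensional case, not a general-operator test):

import numpy as np

L = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # eigenvalues 2 and 3

def is_regular_value(L, lam, tol=1e-12):
    # In finite dimensions, lam is a regular value iff L - lam*id is invertible.
    L_lam = L - lam * np.eye(L.shape[0])
    return abs(np.linalg.det(L_lam)) > tol

print(is_regular_value(L, 1.0))   # True: 1 is in the resolvent set
print(is_regular_value(L, 2.0))   # False: 2 is in the spectrum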
https://en.wikipedia.org/wiki/Resolvent_set
Resonance-enhanced multiphoton ionization ( REMPI ) is a technique applied to the spectroscopy of atoms and small molecules . In practice, a tunable laser can be used to access an excited intermediate state . The selection rules associated with a two- photon or other multiphoton photoabsorption are different from the selection rules for a single photon transition. The REMPI technique typically involves a resonant single or multiple photon absorption to an electronically excited intermediate state followed by another photon which ionizes the atom or molecule. The light intensity needed to achieve a typical multiphoton transition is generally significantly larger than that needed to achieve a single photon photoabsorption. Because of this, subsequent photoabsorption is often very likely. An ion and a free electron will result if the photons have imparted enough energy to exceed the ionization threshold energy of the system. In many cases, REMPI provides spectroscopic information that can be unavailable to single photon spectroscopic methods ; for example, rotational structure in molecules is easily seen with this technique. REMPI is usually generated by a focused frequency-tunable laser beam that forms a small-volume plasma. In REMPI, first m photons are simultaneously absorbed by an atom or molecule in the sample to bring it to an excited state. Another n photons are absorbed afterwards to generate an electron and ion pair. The so-called m+n REMPI is a nonlinear optical process, which can only occur within the focus of the laser beam, where a small-volume plasma is formed near the laser focal region. If the energy of the m photons does not match any state, an off-resonant transition can occur with an energy defect ΔE; however, the electron is very unlikely to remain in that state. For large detuning, it resides there only during the time Δt ≈ ħ/ΔE, in accordance with the uncertainty principle, where ħ = h/2π and h is the Planck constant (6.6261×10⁻³⁴ J·s). Such transitions and states are called virtual, unlike real transitions to states with long lifetimes. The probability of a real transition is many orders of magnitude higher than that of a virtual one; this is called the resonance enhancement effect. High photon intensity experiments can involve multiphoton processes with the absorption of integer multiples of the photon energy. In experiments that involve a multiphoton resonance, the intermediate is often a low-lying Rydberg state , and the final state is often an ion. The initial state of the system, photon energy, angular momentum and other selection rules can help in determining the nature of the intermediate state. This approach is exploited in resonance-enhanced multiphoton ionization spectroscopy (REMPI). The technique is in wide use in both atomic and molecular spectroscopy. An advantage of the REMPI technique is that the ions can be detected with almost complete efficiency and even time-resolved for their mass . It is also possible to gain additional information by performing experiments to look at the energy of the liberated photoelectron . Coherent microwave scattering from electrons in REMPI-induced plasma filaments adds the capability to measure selectively-ionized species with high spatial and temporal resolution, allowing for nonintrusive determination of concentration profiles without the use of physical probes or electrodes.
It has been applied for the detection of species such as argon, xenon, nitric oxide, carbon monoxide, atomic oxygen, and methyl radicals within enclosed cells, in open air, and in atmospheric flames. [ 1 ] [ 2 ] [ non-primary source needed ] Microwave detection is based on homodyne or heterodyne technologies. These can significantly increase the detection sensitivity by suppressing noise, and they can follow sub-nanosecond plasma generation and evolution. The homodyne detection method mixes the detected microwave electric field with its own source to produce a signal proportional to the product of the two. The signal frequency is converted down from tens of gigahertz to below one gigahertz so that the signal can be amplified and observed with standard electronic devices. Because of the high sensitivity associated with the homodyne detection method, the lack of background noise in the microwave regime, and the capability of time-gating the detection electronics synchronously with the laser pulse, very high signal-to-noise ratios (SNRs) are possible even with milliwatt microwave sources. These high SNRs allow the temporal behavior of the microwave signal to be followed on a sub-nanosecond time scale; thus the lifetime of electrons within the plasma can be recorded. By utilizing a microwave circulator, a single microwave horn transceiver has been built, which significantly simplifies the experimental setup. Detection in the microwave region has numerous advantages over optical detection. Using homodyne or heterodyne technologies, the electric field rather than the power can be detected, so much better noise rejection can be achieved. In contrast to optical heterodyne techniques, no alignment or mode matching of the reference is necessary. The long wavelength of the microwaves leads to effective point coherent scattering from the plasma in the laser focal volume, so phase matching is unimportant and scattering in the backward direction is strong. Many microwave photons can be scattered from a single electron, so the amplitude of the scattering can be increased by increasing the power of the microwave transmitter. The low energy of the microwave photons corresponds to thousands of times more photons per unit energy than in the visible region, so shot noise is drastically reduced. For the weak ionization characteristic of trace-species diagnostics, the measured electric field is a linear function of the number of electrons, which is directly proportional to the trace-species concentration. Furthermore, there is very little solar or other natural background radiation in the microwave spectral region.
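The virtual-state residence time Δt ≈ ħ/ΔE is easy to estimate; a short Python sketch with illustrative values (the 1 eV energy defect is an assumption chosen for the example, not a value from the text):

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
EV = 1.602176634e-19      # one electronvolt in joules

def virtual_state_lifetime(energy_defect_ev):
    # dt ~ hbar / dE from the energy-time uncertainty relation.
    return HBAR / (energy_defect_ev * EV)

print(virtual_state_lifetime(1.0))   # ~6.6e-16 s for a 1 eV energy defect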
https://en.wikipedia.org/wiki/Resonance-enhanced_multiphoton_ionization
In chemistry , resonance , also called mesomerism , is a way of describing bonding in certain molecules or polyatomic ions by the combination of several contributing structures (or forms , [ 1 ] also variously known as resonance structures or canonical structures ) into a resonance hybrid (or hybrid structure ) in valence bond theory . It has particular value for analyzing delocalized electrons where the bonding cannot be expressed by one single Lewis structure . The resonance hybrid is the accurate structure for a molecule or ion; it is an average of the theoretical (or hypothetical) contributing structures. Under the framework of valence bond theory , resonance is an extension of the idea that the bonding in a chemical species can be described by a Lewis structure. For many chemical species, a single Lewis structure, consisting of atoms obeying the octet rule , possibly bearing formal charges , and connected by bonds of positive integer order, is sufficient for describing the chemical bonding and rationalizing experimentally determined molecular properties like bond lengths , angles , and dipole moment . [ 2 ] However, in some cases, more than one Lewis structure could be drawn, and experimental properties are inconsistent with any one structure. In order to address this type of situation, several contributing structures are considered together as an average, and the molecule is said to be represented by a resonance hybrid in which several Lewis structures are used collectively to describe its true structure. For instance, in NO 2 – , nitrite anion, the two N–O bond lengths are equal, even though no single Lewis structure has two N–O bonds with the same formal bond order . However, its measured structure is consistent with a description as a resonance hybrid of the two major contributing structures shown above: it has two equal N–O bonds of 125 pm, intermediate in length between a typical N–O single bond (145 pm in hydroxylamine , H 2 N–OH) and N–O double bond (115 pm in nitronium ion , [O=N=O] + ). According to the contributing structures, each N–O bond is an average of a formal single and formal double bond, leading to a true bond order of 1.5. By virtue of this averaging, the Lewis description of the bonding in NO 2 – is reconciled with the experimental fact that the anion has equivalent N–O bonds. The resonance hybrid represents the actual molecule as the "average" of the contributing structures, with bond lengths and partial charges taking on intermediate values compared to those expected for the individual Lewis structures of the contributors, were they to exist as "real" chemical entities. [ 3 ] The contributing structures differ only in the formal apportionment of electrons to the atoms, and not in the actual physically and chemically significant electron or spin density. While contributing structures may differ in formal bond orders and in formal charge assignments, all contributing structures must have the same number of valence electrons and the same spin multiplicity . [ 4 ] Because electron delocalization lowers the potential energy of a system, any species represented by a resonance hybrid is more stable than any of the (hypothetical) contributing structures. [ 5 ] Electron delocalization stabilizes a molecule because the electrons are more evenly spread out over the molecule, decreasing electron-electron repulsion. 
[6] The difference in potential energy between the actual species and the (computed) energy of the contributing structure with the lowest potential energy is called the resonance energy [7] or delocalization energy. The magnitude of the resonance energy depends on assumptions made about the hypothetical "non-stabilized" species and on the computational methods used; it does not represent a measurable physical quantity, although comparisons of resonance energies computed under similar assumptions and conditions may be chemically meaningful. Molecules with an extended π system, such as linear polyenes and polyaromatic compounds, are well described by resonance hybrids as well as by delocalized orbitals in molecular orbital theory. Resonance is to be distinguished from isomerism. Isomers are molecules with the same chemical formula but are distinct chemical species with different arrangements of atomic nuclei in space. Resonance contributors of a molecule, on the other hand, can only differ in the way electrons are formally assigned to atoms in the Lewis structure depictions of the molecule. Specifically, when a molecular structure is said to be represented by a resonance hybrid, it does not mean that the electrons of the molecule are "resonating" or shifting back and forth between several sets of positions, each one represented by a Lewis structure. Rather, it means that the set of contributing structures represents an intermediate structure (a weighted average of the contributors) with a single, well-defined geometry and distribution of electrons. It is incorrect to regard resonance hybrids as rapidly interconverting isomers, even though the term "resonance" might evoke such an image. [8] (As described below, the term "resonance" originated as a classical physics analogy for a quantum mechanical phenomenon, so it should not be construed too literally.) Symbolically, the double-headed arrow A ↔ B is used to indicate that A and B are contributing forms of a single chemical species (as opposed to an equilibrium arrow, e.g., A ⇌ B; see below for details on usage). A non-chemical analogy is illustrative: one can describe the characteristics of a real animal, the narwhal, in terms of the characteristics of two mythical creatures: the unicorn, a creature with a single horn on its head, and the leviathan, a large, whale-like creature. The narwhal is not a creature that goes back and forth between being a unicorn and being a leviathan, nor do the unicorn and the leviathan have any physical existence outside the collective human imagination. Nevertheless, describing the narwhal in terms of these imaginary creatures provides a reasonably good description of its physical characteristics. Due to confusion with the physical meaning of the word resonance, as no entities actually physically "resonate", it has been suggested that the term resonance be abandoned in favor of delocalization, [9] and resonance energy in favor of delocalization energy. A resonance structure would become a contributing structure, and the resonance hybrid the hybrid structure. The double-headed arrows would be replaced by commas to illustrate a set of structures, as arrows of any type may suggest that a chemical change is taking place. In diagrams, contributing structures are typically separated by double-headed arrows (↔). The arrow should not be confused with the right-and-left-pointing equilibrium arrow (⇌).
All structures together may be enclosed in large square brackets to indicate that they picture one single molecule or ion, not different species in a chemical equilibrium. As an alternative to the use of contributing structures in diagrams, a hybrid structure can be used. In a hybrid structure, pi bonds that are involved in resonance are usually pictured as curves [10] or dashed lines, indicating that these are partial rather than normal complete pi bonds. In benzene and other aromatic rings, the delocalized pi electrons are sometimes pictured as a solid circle. [11] The concept first appeared in 1899 in Johannes Thiele's "partial valence hypothesis" to explain the unusual stability of benzene, which would not be expected from August Kekulé's structure proposed in 1865 with alternating single and double bonds. [12] Benzene undergoes substitution reactions, rather than the addition reactions typical of alkenes. Thiele proposed that the carbon-carbon bond in benzene is intermediate between a single and a double bond. The resonance proposal also helped explain the number of isomers of benzene derivatives. For example, Kekulé's structure would predict four dibromobenzene isomers, including two ortho isomers with the brominated carbon atoms joined by either a single or a double bond. In reality there are only three dibromobenzene isomers and only one is ortho, in agreement with the idea that there is only one type of carbon-carbon bond, intermediate between a single and a double bond. [13] The mechanism of resonance was introduced into quantum mechanics by Werner Heisenberg in 1926 in a discussion of the quantum states of the helium atom. He compared the structure of the helium atom with the classical system of resonating coupled harmonic oscillators. [3] [14] In the classical system, the coupling produces two modes, one of which is lower in frequency than either of the uncoupled vibrations; quantum mechanically, this lower frequency is interpreted as a lower energy. Linus Pauling used this mechanism to explain the partial valence of molecules in 1928, and developed it further in a series of papers in 1931-1933. [15] [16] The alternative term mesomerism, [17] popular in German and French publications with the same meaning, was introduced by C. K. Ingold in 1938, but did not catch on in the English literature. The current concept of the mesomeric effect has taken on a related but different meaning. The double-headed arrow was introduced by the German chemist Fritz Arndt, who preferred the German phrase Zwischenstufe, or intermediate stage. Resonance theory dominated over the competing Hückel method for two decades, largely because it was easier to understand for chemists without a fundamental physics background, even if they could not grasp the concept of quantum superposition and confused it with tautomerism. Pauling and Wheland themselves characterized Erich Hückel's approach as "cumbersome" at the time, and his lack of communication skills contributed: when Robert Robinson sent him a friendly request, Hückel responded arrogantly that he was not interested in organic chemistry. [18]
In the Soviet Union, resonance theory – especially as developed by Pauling – was attacked in the early 1950s as being contrary to the Marxist principles of dialectical materialism, and in June 1951 the Soviet Academy of Sciences, under the leadership of Alexander Nesmeyanov, convened a conference on the chemical structure of organic compounds, attended by 400 physicists, chemists, and philosophers, where "the pseudo-scientific essence of the theory of resonance was exposed and unmasked". [19] One contributing structure may resemble the actual molecule more than another (in the sense of energy and stability). Structures with a low value of potential energy are more stable than those with high values and resemble the actual structure more closely. The most stable contributing structures are called major contributors; energetically unfavorable structures are minor contributors. With the rules listed in rough order of diminishing importance, major contributors are generally structures that obey the octet rule, have the maximum number of covalent bonds, carry the fewest formally charged atoms, and place negative charge on the more electronegative atoms (and positive charge on the more electropositive ones). A maximum of eight valence electrons is strict for the Period 2 elements Be, B, C, N, O, and F, as is a maximum of two for H and He and effectively for Li as well. [20] The issue of expansion of the valence shell of third-period and heavier main-group elements is controversial. A Lewis structure in which a central atom has a valence electron count greater than eight traditionally implies the participation of d orbitals in bonding. However, the consensus opinion is that while they may make a marginal contribution, the participation of d orbitals is unimportant, and the bonding of so-called hypervalent molecules is, for the most part, better explained by charge-separated contributing forms that depict three-center four-electron bonding. Nevertheless, by tradition, expanded-octet structures are still commonly drawn for functional groups like sulfoxides, sulfones, and phosphorus ylides. Regarded as a formalism that does not necessarily reflect the true electronic structure, such depictions are preferred by the IUPAC over structures featuring partial bonds, charge separation, or dative bonds. [21] Equivalent contributors contribute equally to the actual structure, while the importance of nonequivalent contributors is determined by the extent to which they conform to the properties listed above. A larger number of significant contributing structures and a more voluminous space available for delocalized electrons lead to stabilization (lowering of the energy) of the molecule. In benzene the two cyclohexatriene Kekulé structures, first proposed by Kekulé, are taken together as contributing structures to represent the total structure. In the hybrid structure on the right, the dashed hexagon replaces three double bonds and represents six electrons in a set of three molecular orbitals of π symmetry, with a nodal plane in the plane of the molecule. In furan a lone pair of the oxygen atom interacts with the π orbitals of the carbon atoms. The curved arrows depict the permutation of delocalized π electrons, which results in different contributors. The ozone molecule is represented by two contributing structures. In reality the two terminal oxygen atoms are equivalent, and the hybrid structure is drawn on the right with a charge of −1⁄2 on both terminal oxygen atoms and partial double bonds (a full and a dashed line) of bond order 1 1⁄2. [22] [23]
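The ±1⁄2 hybrid charges follow from ordinary formal-charge bookkeeping averaged over the two contributors. The short sketch below is an illustration, not from the source; it hard-codes the electron counts of the two ozone Lewis structures and applies the usual rule FC = valence − nonbonding − bonding/2:

```python
# Formal charge: FC = valence electrons - nonbonding electrons - bonding electrons/2
def formal_charge(valence, nonbonding, bonding):
    return valence - nonbonding - bonding // 2

# Ozone contributor 1 is O=O(+)-O(-); contributor 2 is its mirror image.
# Atom tuples: (nonbonding electrons, bonding electrons); oxygen valence = 6.
structure_1 = [(4, 4), (2, 6), (6, 2)]   # terminal(=O), central O, terminal(-O)
structure_2 = list(reversed(structure_1))

for name, s in (("contributor 1", structure_1), ("contributor 2", structure_2)):
    print(name, [formal_charge(6, nb, b) for nb, b in s])   # [0, 1, -1] / [-1, 1, 0]

# Averaging the two equivalent contributors gives the hybrid charges
avg = [(formal_charge(6, *a) + formal_charge(6, *b)) / 2
       for a, b in zip(structure_1, structure_2)]
print("hybrid:", avg)                     # [-0.5, 1.0, -0.5]
```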
For hypervalent molecules, the rationalization described above can be applied to generate contributing structures to explain the bonding in such molecules. Shown below are the contributing structures of a 3c-4e bond in xenon difluoride. The allyl cation has two contributing structures with a positive charge on the terminal carbon atoms; in the hybrid structure their charge is +1⁄2. The full positive charge can also be depicted as delocalized among three carbon atoms. The diborane molecule is described by contributing structures, each with electron deficiency on different atoms. This reduces the electron deficiency on each atom and stabilizes the molecule. Below are the contributing structures of an individual 3c-2e bond in diborane. Often, reactive intermediates such as carbocations and free radicals have a more delocalized structure than their parent reactants, giving rise to unexpected products. The classical example is allylic rearrangement. [24] When 1 mole of HCl adds to 1 mole of 1,3-butadiene, in addition to the ordinarily expected product 3-chloro-1-butene, we also find 1-chloro-2-butene. Isotope labelling experiments have shown that what happens here is that the additional double bond shifts from the 1,2-position to the 2,3-position in some of the product. This and other evidence (such as NMR in superacid solutions) shows that the intermediate carbocation must have a highly delocalized structure, different from its mostly classical parent molecule (in which delocalization exists but is small). This cation (an allylic cation) can be represented using resonance, as shown above. This observation of greater delocalization in less stable molecules is quite general. The excited states of conjugated dienes are stabilised more by conjugation than their ground states, which is what allows them to function as organic dyes. [25] A well-studied example of delocalization that does not involve π electrons (hyperconjugation) can be observed in the non-classical 2-norbornyl cation. [26] Another example is methanium (CH5+). These can be viewed as containing three-center two-electron bonds and are represented either by contributing structures involving rearrangement of σ electrons or by a special notation, a Y that has the three nuclei at its three points. Delocalized electrons are important for several reasons; a major one is that an expected chemical reaction may not occur because the electrons delocalize to a more stable configuration, resulting in a reaction that happens at a different location. An example is the Friedel–Crafts alkylation [27] of benzene with 1-chloro-2-methylpropane; the carbocation rearranges to a tert-butyl group stabilized by hyperconjugation, a particular form of delocalization. Comparing the two contributing structures of benzene, all single and double bonds are interchanged. Bond lengths can be measured, for example using X-ray diffraction. The average length of a C–C single bond is 154 pm; that of a C=C double bond is 133 pm. In localized cyclohexatriene, the carbon–carbon bonds should alternate between 154 and 133 pm. Instead, all carbon–carbon bonds in benzene are found to be about 139 pm, a bond length intermediate between single and double bond. This mixed single- and double-bond character is typical for all molecules in which bonds have a different bond order in different contributing structures. Bond lengths can be compared using bond orders; for example, in cyclohexane the bond order is 1, while that in benzene is 1 + (3 ÷ 6) = 1 1⁄2. Consequently, benzene has more double-bond character and hence a shorter carbon–carbon bond length than cyclohexane.
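That fractional bond order can be turned into a rough length estimate. The sketch below is illustrative only; in particular, the linear interpolation between single- and double-bond lengths is a crude assumption, not a claim from the source. It averages the bond orders of the two Kekulé structures and interpolates using the lengths quoted above:

```python
# Each C-C bond in benzene is single in one Kekulé structure, double in the other.
kekule_1 = [1, 2, 1, 2, 1, 2]          # bond orders around the ring
kekule_2 = [2, 1, 2, 1, 2, 1]

hybrid = [(a + b) / 2 for a, b in zip(kekule_1, kekule_2)]
print(hybrid)                           # 1.5 for every bond

# Crude linear interpolation between the measured single (154 pm)
# and double (133 pm) bond lengths
single, double = 154.0, 133.0
estimate = single + (hybrid[0] - 1) * (double - single)
print(f"{estimate:.1f} pm")             # 143.5 pm vs. the observed ~139 pm
```

The interpolated 143.5 pm overshoots the measured 139 pm, which is expected: bond order and bond length are not linearly related, but the estimate does land between the single- and double-bond values, as the resonance picture requires.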
Resonance (or delocalization) energy is the amount of energy needed to convert the true delocalized structure into that of the most stable contributing structure. The empirical resonance energy can be estimated by comparing the enthalpy change of hydrogenation of the real substance with that estimated for the contributing structure. The complete hydrogenation of benzene to cyclohexane via 1,3-cyclohexadiene and cyclohexene is exothermic; 1 mole of benzene delivers 208.4 kJ (49.8 kcal). Hydrogenation of one mole of double bonds delivers 119.7 kJ (28.6 kcal), as can be deduced from the last step, the hydrogenation of cyclohexene. In benzene, however, 23.4 kJ (5.6 kcal) are needed to hydrogenate one mole of double bonds. The difference, 143.1 kJ (34.2 kcal), is the empirical resonance energy of benzene. Because 1,3-cyclohexadiene also has a small delocalization energy (7.6 kJ or 1.8 kcal/mol), the net resonance energy relative to the localized cyclohexatriene is a bit higher: 151 kJ or 36 kcal/mol. [28] This measured resonance energy is also the difference between the hydrogenation energy of three 'non-resonance' double bonds and the measured hydrogenation energy: 3 × 119.7 − 208.4 = 150.7 ≈ 151 kJ/mol. Regardless of their exact values, the resonance energies of various related compounds provide insights into their bonding. The resonance energies for pyrrole, thiophene, and furan are, respectively, 88, 121, and 67 kJ/mol (21, 29, and 16 kcal/mol). [30] Thus, these heterocycles are far less aromatic than benzene, as is manifested in the lability of these rings. Resonance has a deeper significance in the mathematical formalism of valence bond theory (VB). Quantum mechanics requires that the wavefunction of a molecule obey its observed symmetry. If a single contributing structure does not achieve this, resonance is invoked. For example, in benzene, valence bond theory begins with the two Kekulé structures, which do not individually possess the sixfold symmetry of the real molecule. The theory constructs the actual wave function as a linear superposition of the wave functions representing the two structures. As both Kekulé structures have equal energy, they are equal contributors to the overall structure – the superposition is an equally weighted average, or a 1:1 linear combination of the two in the case of benzene. The symmetric combination gives the ground state, while the antisymmetric combination gives the first excited state, as shown. In general, the superposition is written with undetermined coefficients, which are then variationally optimized to find the lowest possible energy for the given set of basis wave functions. When more contributing structures are included, the molecular wave function becomes more accurate and more excited states can be derived from different combinations of the contributing structures. In molecular orbital theory, the main alternative to valence bond theory, the molecular orbitals (MOs) are approximated as sums of all the atomic orbitals (AOs) on all the atoms; there are as many MOs as AOs. Each AO_i has a weighting coefficient c_i that indicates the AO's contribution to a particular MO. For example, in benzene, the MO model gives six π MOs which are combinations of the 2p_z AOs on each of the six C atoms. Thus, each π MO is delocalized over the whole benzene molecule, and any electron occupying an MO will be delocalized over the whole molecule.
This MO interpretation has inspired the picture of the benzene ring as a hexagon with a circle inside. When describing benzene, the VB concept of localized σ bonds and the MO concept of delocalized π orbitals are frequently combined in elementary chemistry courses. The contributing structures in the VB model are particularly useful in predicting the effect of substituents on π systems such as benzene. They lead to models of contributing structures for an electron-withdrawing group and for an electron-releasing group on benzene. The utility of MO theory is that a quantitative indication of the charge from the π system on an atom can be obtained from the squares of the weighting coefficients c_i on atom C_i: the charge q_i ≈ c_i². The reason for squaring the coefficient is that if an electron is described by an AO, then the square of the AO gives the electron density. The AOs are adjusted (normalized) so that AO² = 1, and q_i ≈ (c_i AO_i)² ≈ c_i². In benzene, q_i = 1 on each C atom. With an electron-withdrawing group, q_i < 1 on the ortho and para C atoms; with an electron-releasing group, q_i > 1. Weighting of the contributing structures in terms of their contribution to the overall structure can be calculated in multiple ways: using ab initio methods derived from valence bond theory, from the natural bond orbital (NBO) approaches of Weinhold, or from empirical calculations based on the Hückel method. Hückel-method-based software for teaching resonance is available on the HuLiS Web site. In the case of ions it is common to speak of delocalized charge (charge delocalization). An example of delocalized charge in ions can be found in the carboxylate group, wherein the negative charge is centered equally on the two oxygen atoms. Charge delocalization in anions is an important factor determining their reactivity (generally, the higher the extent of delocalization, the lower the reactivity) and, specifically, the acid strength of their conjugate acids. As a general rule, the better delocalized the charge in an anion, the stronger its conjugate acid. For example, the negative charge in the perchlorate anion (ClO4−) is evenly distributed among the symmetrically oriented oxygen atoms (and a part of it is also kept by the central chlorine atom). This excellent charge delocalization, combined with the high number of oxygen atoms (four) and the high electronegativity of the central chlorine atom, leads to perchloric acid being one of the strongest known acids, with a pKa value of −10. [32] The extent of charge delocalization in an anion can be quantitatively expressed via the WAPS (weighted average positive sigma) parameter, [33] and an analogous WANS (weighted average negative sigma) parameter [34] [35] is used for cations. WAPS and WANS values are given in e/Å⁴; larger values indicate more localized charge in the corresponding ion.
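A minimal Hückel calculation makes the q_i ≈ c_i² bookkeeping concrete. The sketch below is a standard textbook computation, not the HuLiS code itself: it diagonalizes the benzene connectivity matrix, fills the three lowest π MOs with the six π electrons, and recovers q_i = 1 on every carbon, as stated above:

```python
import numpy as np

# Hückel pi system of benzene: H = alpha*I + beta*A, A the ring adjacency matrix.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Eigenvalues lam give orbital energies E = alpha + lam*beta; since beta < 0,
# larger lam means lower energy, so sort descending.
lam, C = np.linalg.eigh(A)
order = np.argsort(-lam)
lam, C = lam[order], C[:, order]
print(np.round(lam, 3))          # [ 2.  1.  1. -1. -1. -2.]

# Six pi electrons fill the three lowest MOs; q_i = sum over occupied MOs of 2*c_i^2
q = 2.0 * (C[:, :3] ** 2).sum(axis=1)
print(np.round(q, 3))            # 1.0 on each carbon
```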
https://en.wikipedia.org/wiki/Resonance_(chemistry)
In particle physics, a resonance is the peak located around a certain energy found in differential cross sections of scattering experiments. These peaks are associated with subatomic particles, which include a variety of bosons, quarks and hadrons (such as nucleons, delta baryons or upsilon mesons) and their excitations. In common usage, "resonance" only describes particles with very short lifetimes, mostly high-energy hadrons existing for 10⁻²³ seconds or less. It is also used to describe particles in intermediate steps of a decay, so-called virtual particles. [1] The width of the resonance (Γ) is related to the mean lifetime (τ) of the particle (or its excited state) by the relation

$$\Gamma=\frac{\hbar}{\tau},$$

where ħ = h/2π and h is the Planck constant. Thus, the resonance width of a particle is inversely proportional to its lifetime. For example, the charged pion has the second-longest lifetime of any meson, at 2.6033 × 10⁻⁸ s. [2] Therefore, its resonance width is very small, about 2.528 × 10⁻⁸ eV, or about 6.11 MHz. Pions are generally not considered "resonances". The charged rho meson has a very short lifetime, about 4.41 × 10⁻²⁴ s; correspondingly, its resonance width is very large, at 149.1 MeV, or about 36 ZHz. This amounts to nearly one-fifth of the particle's rest mass. [3]
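The two quoted widths follow directly from Γ = ħ/τ, as this small check shows (illustrative code, not from the source; constants in eV·s):

```python
HBAR = 6.582119569e-16        # reduced Planck constant, eV*s
H = 4.135667696e-15           # Planck constant, eV*s

def width_eV(lifetime_s):
    return HBAR / lifetime_s

pion = width_eV(2.6033e-8)
rho = width_eV(4.41e-24)
print(f"charged pion: {pion:.3e} eV = {pion / H / 1e6:.2f} MHz")  # ~2.528e-8 eV, ~6.11 MHz
print(f"charged rho:  {rho / 1e6:.1f} MeV")                       # ~149 MeV
```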
https://en.wikipedia.org/wiki/Resonance_(particle_physics)
In nuclear physics, the resonance escape probability p is the probability that a neutron will slow down from fission energy to thermal energies without being captured by a nuclear resonance. A resonance absorption of a neutron in a nucleus does not produce nuclear fission. The probability of resonance absorption is called the resonance factor ψ, and the two factors sum to p + ψ = 1. [1] Generally, the higher the neutron energy, the lower the probability of absorption, but for some energies, called resonance energies, the resonance factor is very high. These energies depend on the properties of heavy nuclei. The resonance escape probability is largely determined by the heterogeneous geometry of a reactor, because fast neutrons resulting from fission can leave the fuel and slow to thermal energies in the moderator, skipping over the resonance energies before reentering the fuel. [1] The resonance escape probability appears in the four factor formula and the six factor formula; to compute it, neutron transport theory is used. The nucleus can capture a neutron only if the kinetic energy of the neutron is close to the energy of one of the energy levels of the new nucleus formed as a result of capture. The capture cross section of such a neutron by the nucleus increases sharply. The energy at which the neutron-nucleus interaction cross section reaches a maximum is called the resonance energy. The resonance energy range is divided into two parts: the regions of resolved and unresolved resonances. The first region occupies the energy interval from 1 eV to E_gr. In this region, the energy resolution of the instruments is sufficient to distinguish any resonance peak. Starting from the energy E_gr, the distance between resonance peaks becomes smaller than the energy resolution, and the resonance peaks can no longer be separated. For heavy elements, the boundary energy is E_gr ≈ 1 keV. In thermal neutron reactors, the main resonant neutron absorber is uranium-238; tabulations for 238U list several resonance neutron energies E_r, the maximum absorption cross sections σ_a,r at the peak, and the widths Γ of these resonances. Let us assume that the resonant neutrons move in an infinite system consisting of a moderator and 238U. When colliding with the moderator nuclei the neutrons are scattered, while with the 238U nuclei they are absorbed. The former collisions favor the retention and removal of resonant neutrons from the danger zone, while the latter lead to their loss. The probability of avoiding resonance capture (the coefficient φ) is related to the density of 238U nuclei N_8 and the moderating power of the medium ξΣ_S by the standard relationship φ = exp(−N_8 J_eff / (ξΣ_S)). The quantity J_eff is called the effective resonance integral. It characterizes the absorption of neutrons by a single nucleus in the resonance region and is measured in barns. The use of the effective resonance integral simplifies quantitative calculations of resonance absorption without detailed consideration of neutron interactions during slowing down. The effective resonance integral is usually determined experimentally. It depends on the concentration of 238U and the mutual arrangement of the uranium and the moderator.
In a homogeneous mixture of moderator and 238U, the effective resonance integral is found to good accuracy by an empirical formula whose controlling quantities are N_3/N_8, the ratio of moderator to 238U nuclei in the homogeneous mixture, and σ_S³, the microscopic scattering cross section of the moderator. As that formula shows, the effective resonance integral decreases with increasing 238U concentration: the more 238U nuclei in the mixture, the less likely absorption of the slowing-down neutrons by any single nucleus becomes. The effect of absorption in some 238U nuclei on absorption in others is called resonance level shielding; it increases with increasing concentration of resonance absorbers. As an example, one can calculate the effective resonance integral in a homogeneous natural uranium-graphite mixture with the ratio N_3/N_8 = 215, using the graphite scattering cross section σ_S^C = 4.7 barns. In a homogeneous medium, all 238U nuclei are in the same conditions with respect to the resonant neutron flux. In a heterogeneous medium, the uranium is separated from the moderator, which significantly affects the resonant neutron absorption. Firstly, some of the resonant neutrons become thermal neutrons in the moderator without colliding with uranium nuclei; secondly, resonant neutrons hitting the surface of the fuel elements are almost all absorbed by a thin surface layer. The inner 238U nuclei are shielded by the surface nuclei and participate less in the resonant neutron absorption, and the shielding increases with the fuel element diameter d. Therefore, the effective 238U resonance integral in a heterogeneous reactor depends on the fuel element diameter d through two constants: a, which characterizes the absorption of resonance neutrons by surface nuclei, and b, which characterizes absorption by inner 238U nuclei. For each type of nuclear fuel (natural uranium, uranium dioxide, etc.) the constants a and b are measured experimentally; for natural uranium rods, a = 4.15 and b = 12.35. Evaluating J_eff for a natural uranium rod with diameter d = 3 cm and comparing with the homogeneous example above shows that the separation of uranium and moderator noticeably decreases neutron absorption in the resonance region. The coefficient φ reflects the competition between two processes in the resonance region: the absorption of neutrons and their slowing down. The cross section Σ is, by definition, analogous to the macroscopic absorption cross section, with the microscopic cross section replaced by the effective resonance integral J_eff; it likewise characterizes the loss of slowing-down neutrons in the resonance region. As the 238U concentration increases, the absorption of resonant neutrons increases, and hence fewer neutrons slow down to thermal energies. The resonance absorption is also influenced by the slowing down itself: collisions with the moderator nuclei take neutrons out of the resonance region, and are the more effective the greater the moderating power ξΣ_S. So, for the same concentration of 238U, the probability of avoiding resonance capture in a uranium-water medium is greater than in a uranium-carbon medium. As a comparison, let us calculate the probability of avoiding resonance capture in homogeneous and heterogeneous natural uranium-graphite systems. In both media the ratio of carbon to 238U nuclei is N_C/N_8 = 215, and the diameter of the uranium rod is d = 3 cm.
Taking into account that ξ_C = 0.159 and σ_S^C = 4.7 barn, the coefficients φ can be evaluated for the homogeneous and heterogeneous mixtures. The transition from a homogeneous to a heterogeneous medium slightly reduces the thermal neutron absorption in the uranium. However, this loss is considerably outweighed by the decrease in resonance neutron absorption, and the propagation properties of the medium improve.
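The escape-probability relation quoted above is easy to evaluate once J_eff is known. The sketch below implements the standard homogeneous-mixture expression p = exp(−N_8 J_eff/(ξΣ_S)); the graphite number density and the value of J_eff are placeholder assumptions (the article's own empirical formula and numeric results did not survive extraction), while the ratio N_C/N_8 = 215, σ_S^C = 4.7 b, and ξ_C = 0.159 are the values quoted in the text:

```python
import math

def escape_probability(N8, J_eff_barn, xi, Sigma_s):
    """p = exp(-N8 * J_eff / (xi * Sigma_s)); N8 in nuclei/cm^3, Sigma_s in 1/cm."""
    return math.exp(-N8 * J_eff_barn * 1e-24 / (xi * Sigma_s))

N_C = 8.0e22              # graphite nuclei per cm^3 (approximate, assumed)
N8 = N_C / 215            # ratio N_C/N_8 = 215 as in the example
sigma_C = 4.7e-24         # graphite scattering cross section, cm^2
Sigma_s = N_C * sigma_C   # macroscopic scattering cross section, 1/cm
xi = 0.159                # logarithmic energy decrement for carbon

print(f"p = {escape_probability(N8, 15.0, xi, Sigma_s):.3f}")  # J_eff = 15 b assumed
```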
https://en.wikipedia.org/wiki/Resonance_escape_probability
Resonance fluorescence is the process in which a two-level atomic system interacts with the quantum electromagnetic field when the field is driven at a frequency near the natural frequency of the atom. [1] Typically the electromagnetic field is applied to the two-level atom with a monochromatic laser. A two-level atom is a specific type of two-state system in which the atom can be found in one of two possible states: the ground state or the excited state. In many experiments an atom of lithium is used, because it can be closely modeled as a two-level atom: the excited states of its single valence electron are separated by large enough energy gaps to significantly reduce the possibility of the electron jumping to a higher excited state. This allows easier frequency tuning of the applied laser, as frequencies further from resonance can be used while still driving the electron to jump only to the first excited state. Once the atom is excited, it releases a photon with energy equal to the energy difference between the excited and ground states. The mechanism for this release is the spontaneous decay of the atom, and the emitted photon is released in an arbitrary direction. While the transition between two specific energy levels is the dominant mechanism in resonance fluorescence, experimentally other transitions play a very small role and must be taken into account when analyzing results. The other transitions lead to the emission of a photon of a different atomic transition with much lower energy, which produces "dark" periods of resonance fluorescence. [2]

The dynamics of the electromagnetic field of the monochromatic laser can be derived by first treating the two-level atom as a spin-1/2 system with two energy eigenstates separated by ħω₀. The dynamics of the atom are then described by three rotation operators $\hat{R}_i(t)$, $\hat{R}_j(t)$, $\hat{R}_k(t)$ acting on the Bloch sphere, and the energy of the system is described entirely through an electric dipole interaction between the atom and the field, with the Hamiltonian

$$\hat{H}=\frac{1}{2}\int\left(\epsilon_0\hat{\vec{E}}^2(\vec{r},t)+\frac{1}{\mu_0}\hat{\vec{B}}^2(\vec{r},t)\right)d^3x+\hbar\omega_0\hat{R}_k(t)+2\omega_0\,\vec{\mu}\cdot\hat{\vec{A}}(0,t)\,\hat{R}_j(t).$$

After quantizing the electromagnetic field, the Heisenberg equation together with Maxwell's equations can be used to find the resulting equations of motion for $\hat{R}_k(t)$ and for $\hat{b}(t)$, the annihilation operator of the field:

$$\dot{\hat{R}}_k(t)=-2\beta\left(\hat{R}_k(t)+\tfrac{1}{2}\right)-(\omega_0/\hbar)\left\{\left[\hat{b}(t)+\hat{b}^\dagger(t)\right]\vec{\mu}\cdot\hat{\vec{A}}^{(+)}_{free}(\vec{r},t)+\mathrm{H.c.}\right\},$$

$$\dot{\hat{b}}(t)=(-i\omega_0-\beta+i\gamma)\,\hat{b}(t)-(\beta+i\gamma)\,\hat{b}^\dagger(t)+2(\omega_0/\hbar)\left[\hat{R}_k(t)\,\vec{\mu}\cdot\hat{\vec{A}}^{(+)}_{free}(0,t)+\mathrm{H.c.}\right],$$

where β and γ are frequency parameters used to simplify the equations.

Now that the dynamics of the field with respect to the states of the atom have been described, the mechanism through which photons are released from the atom as the electron falls from the excited state to the ground state, spontaneous emission, can be examined. Spontaneous emission is when an excited electron arbitrarily decays to the ground state, emitting a photon. Since the electromagnetic field is coupled to the state of the atom, and the atom can only absorb a single photon before having to decay, the most basic case is a field containing at most a single photon. Thus spontaneous decay occurs when the excited state of the atom emits a photon back into the vacuum Fock state of the field: $|e\rangle\otimes|\{0\}\rangle \Rightarrow |g\rangle\otimes|\{1\}\rangle$. During this process the expectation values of the above operators decay according to

$$\langle\hat{R}_k(t)\rangle+\tfrac{1}{2}=\left[\langle\hat{R}_k(0)\rangle+\tfrac{1}{2}\right]e^{-2\beta t},\qquad \langle\hat{b}_s(t)\rangle=\langle\hat{b}_s(0)\rangle\, e^{(-\beta+i\gamma)t}.$$

So the atom decays exponentially, and the atomic dipole moment oscillates. The dipole moment oscillates because of the Lamb shift, a shift in the energy levels of the atom due to fluctuations of the field. It is imperative, however, to look at fluorescence in the presence of a field with many photons, as this is the much more general case, in which the atom goes through many excitation cycles. Here the exciting field emitted from the laser is in the form of a coherent state $|\{v\}\rangle$. This allows the field operators acting on the coherent state to be replaced by their eigenvalues, so the equations simplify: operators can be turned into constants, and the field can be described much more classically than a quantized field normally could be. As a result, we are able to find the expectation value of the electric field intensity at the retarded time:

$$\langle\hat{\vec{E}}^{(-)}(\vec{r},t)\cdot\hat{\vec{E}}^{(+)}(\vec{r},t)\rangle=\left(\frac{\omega_0^2}{4\pi\epsilon_0 c^2}\right)^2\left(\frac{\mu^2}{r^2}-\frac{(\vec{\mu}\cdot\vec{r})^2}{r^4}\right)\langle\hat{b}_s^\dagger\!\left(t-\tfrac{r}{c}\right)\hat{b}_s\!\left(t-\tfrac{r}{c}\right)\rangle=\left(\frac{\omega_0^2\,\mu\sin\psi}{4\pi\epsilon_0 c^2 r}\right)^2\left[\langle\hat{R}_k\!\left(t-\tfrac{r}{c}\right)\rangle+\tfrac{1}{2}\right],$$

where ψ is the angle between $\vec{\mu}$ and $\vec{r}$.
There are two general types of excitations produced by fields. The first is one that dies out (V → 0 as t → ∞), while the other eventually reaches a constant amplitude, $\hat{V}(t)=\hat{\epsilon}\,\alpha\, e^{i(\omega_0-\omega_1)t+i\phi}$, where α is a real normalization constant, φ is a real phase factor, and $\hat{\epsilon}$ is a unit vector indicating the direction of the excitation. Thus as t → ∞,

$$\langle\hat{R}_k(t)\rangle+\frac{1}{2}\;\Rightarrow\;\frac{\frac{1}{4}\Omega^2}{\frac{1}{2}\Omega^2+\beta^2+(\gamma+\omega_1-\omega_0)^2}.$$

Since Ω is the Rabi frequency, this is analogous to the rotation of a spin state around the Bloch sphere in an interferometer, so the dynamics of a two-level atom can be accurately modeled by a photon in an interferometer. It is also possible to model the system as an atom plus a field, which in fact retains more properties of the system, such as the Lamb shift, but the basic dynamics of resonance fluorescence can be modeled as a spin-1/2 particle.

There are several limits that can be analyzed to make the study of resonance fluorescence easier. The first of these is the set of approximations associated with the weak field limit, where the square modulus of the Rabi frequency of the field coupled to the two-level atom is much smaller than the rate of spontaneous emission of the atom. This means that the difference in population between the excited state and the ground state of the atom is approximately independent of time. [3] If we also take the limit in which the time period is much longer than the time for spontaneous decay, the coherences of the light can be modeled as

$$\rho_{ab}(t)=\frac{-i(\Omega_R/2)\,e^{-i\nu t}}{i(\omega-\nu)+\Gamma/2}\left[\rho_{aa}(0)-\rho_{bb}(0)\right],$$

where $\Omega_R$ is the Rabi frequency of the driving field and Γ is the spontaneous decay rate of the atom. Thus it is clear that, when an electric field is applied to the atom, the dipole of the atom oscillates at the driving frequency and not at the natural frequency of the atom. If we also look at the positive frequency component of the electric field,

$$\langle\vec{E}^{(+)}(\vec{r},t)\rangle=\frac{\omega^2\mu\sin\psi}{4\pi\epsilon_0 c^2 |\vec{r}|}\,\hat{x}\,\langle\sigma_-\!\left(t-\frac{|\vec{r}|}{c}\right)\rangle,$$

we see that the emitted field is the same as the absorbed field apart from its direction, so the spectrum of the emitted field is the same as that of the absorbed field. The result is that the two-level atom behaves exactly as a driven oscillator and continues scattering photons so long as the driving field remains coupled to the atom. The weak field approximation is also used in calculating two-time correlation functions.
In the weak-field limit, the correlation function $\langle\hat{b}_s^\dagger(t)\hat{b}_s(t+\tau)\rangle$ can be calculated much more easily, since only the first three terms must be kept. As t → ∞ the correlation function becomes

$$\langle\hat{b}_s^\dagger(t)\hat{b}_s(t+\tau)\rangle=\frac{1}{4}\frac{\Omega^2 e^{i(\omega_0-\omega_1)\tau}}{\beta^2(1+\theta^2)}\left(1-\frac{\Omega^2}{\frac{1}{2}\Omega^2+\beta^2(1+\theta^2)}\right)+\frac{\Omega^4 e^{-\beta|\tau|}e^{i(\omega_0-\omega_1)\tau}}{8\beta^4\theta(1+\theta^2)^2}\left[\sin(\beta\theta|\tau|)+\theta\cos(\beta\theta\tau)\right].$$

From this equation we see that as t → ∞ the correlation function no longer depends on t, but only on τ: the system reaches a quasi-stationary state. It is also clear that there are terms in the equation that go to zero as τ → ∞; these are the result of the Markovian quantum fluctuations of the system. We see that, in the weak field approximation and as t → ∞ and τ → ∞, the coupled system reaches a quasi-steady state in which the quantum fluctuations become negligible.
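The approach to this quasi-steady state can be checked numerically. The sketch below uses the standard optical Bloch equations in the rotating-wave approximation, with assumed parameter values; it is an illustration, not the derivation used above. Writing Γ = 2β for the spontaneous decay rate and δ for the detuning, it integrates the damped, driven two-level atom and compares the long-time excited-state population with the expression (Ω²/4)/(Ω²/2 + β² + δ²) quoted earlier:

```python
# Bloch vector (u, v, w) with w = rho_ee - rho_gg; simple Euler integration.
beta, Omega, delta = 0.5, 2.0, 0.3     # assumed decay, Rabi frequency, detuning
Gamma = 2.0 * beta

u, v, w = 0.0, 0.0, -1.0               # atom starts in the ground state
dt, T = 1e-4, 40.0
for _ in range(int(T / dt)):
    du = delta * v - (Gamma / 2) * u
    dv = -delta * u - (Gamma / 2) * v + Omega * w
    dw = -Omega * v - Gamma * (w + 1.0)
    u, v, w = u + du * dt, v + dv * dt, w + dw * dt

rho_ee_numeric = (1.0 + w) / 2.0
rho_ee_formula = (Omega**2 / 4) / (Omega**2 / 2 + beta**2 + delta**2)
print(rho_ee_numeric, rho_ee_formula)  # both ~0.427
```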
The strong field limit is the exact opposite of the weak field limit: the square modulus of the Rabi frequency of the electromagnetic field is much larger than the rate of spontaneous emission of the two-level atom. When a strong field is applied to the atom, a single peak is no longer observed in the radiation spectrum of the fluorescent light; instead, additional peaks appear on either side of the original peak. These are known as sidebands. The sidebands are a result of the Rabi oscillations of the field causing a modulation in the dipole moment of the atom. This causes a splitting of the degeneracy of certain eigenstates of the Hamiltonian: specifically, $|e\rangle\otimes|\{n\}\rangle$ and $|g\rangle\otimes|\{n+1\}\rangle$ are split into doublets. This is known as dynamic Stark splitting and is the cause of the Mollow triplet, a characteristic energy spectrum found in resonance fluorescence. An interesting phenomenon arises in the Mollow triplet: both sideband peaks have a width different from that of the central peak. If the Rabi frequency is allowed to become much larger than the rate of spontaneous decay of the atom, in the strong field limit

$$\langle\sigma_-(t)\rangle e^{i\omega t}=\frac{1}{4}\left\{\left[2\rho_{++}(0)-1\right]e^{-\frac{\Gamma}{2}t}-\left[\rho_{+-}(0)\,e^{-i\Omega_R t-\frac{3\Gamma}{4}t}-\mathrm{c.c.}\right]\right\}.$$

From this equation it is clear where the differences in the widths of the peaks in the Mollow triplet arise: the central peak has a width of Γ/2 and the sideband peaks have a width of 3Γ/4, where Γ is the rate of spontaneous emission for the atom. Unfortunately this cannot be used to calculate a steady-state solution, since ρ₊₊(0) → 1/2 and ρ₊₋(0) → 0 in steady state, so the spectrum would vanish in a steady-state solution, which is not the actual case. The solution that does allow a steady state must take the form of a two-time correlation function, as opposed to the one-time correlation function above:

$$\langle\sigma_+(0)\sigma_-(\tau)\rangle=\frac{1}{4}\left(e^{-\frac{\Gamma}{2}\tau}+\frac{1}{2}e^{-\frac{3\Gamma}{4}\tau}e^{-i\Omega_R\tau}+\frac{1}{2}e^{-\frac{3\Gamma}{4}\tau}e^{i\Omega_R\tau}\right)e^{-i\omega\tau}.$$

Since this correlation function includes the steady-state limits of the density matrix, where $\rho_{++}^{s.s.}\Rightarrow\frac{1}{2}$ and $\rho_{+-}^{s.s.}\Rightarrow 0$, and the spectrum is nonzero, it is clear that the Mollow triplet remains the spectrum of the fluoresced light even in a steady-state solution.

The study of correlation functions is critical to the study of quantum optics, as the Fourier transform of the correlation function is the energy spectral density. The two-time correlation function is thus a useful tool in the calculation of the energy spectrum of a given system. We take the parameter τ to be the difference between the two times at which the function is calculated. While correlation functions can most easily be described using limits on the strength of the field and limits placed on the time of the system, they can be found more generally as well.
For resonance fluorescence, the most important correlation functions are

$$\langle\hat{b}_s^\dagger(t)\,\hat{b}_s(t+\tau)\rangle\, e^{i(\omega_1-\omega_0)\tau}\equiv g(t,\tau),$$
$$\langle\hat{b}_s^\dagger(t)\,\hat{b}_s^\dagger(t+\tau)\rangle\, e^{i(\omega_1-\omega_0)(2t+\tau)}e^{2i\phi}\equiv f(t,\tau),$$
$$\langle\hat{b}_s^\dagger(t)\,\hat{R}_k(t+\tau)\rangle\, e^{i(\omega_1-\omega_0)t}e^{i\phi}\equiv h(t,\tau),$$

where

$$g(t,\tau)=\left[\langle\hat{R}_k(t)\rangle+\frac{1}{2}\right]e^{-\beta(1-i\theta)\tau}+\Omega\int_0^\tau dt'\,h(t,t')\,e^{\beta(1-i\theta)(t'-\tau)},$$
$$f(t,\tau)=\Omega\int_0^\tau dt'\,h(t,t')\,e^{\beta(1+i\theta)(t'-\tau)},$$
$$h(t,\tau)=-\frac{1}{2}\langle\hat{b}_s^\dagger(t)\rangle\, e^{i(\omega_0-\omega_1)t}e^{i\phi}-\frac{1}{2}\Omega\int_0^\tau dt'\left[f(t,t')+g(t,t')\right]e^{2\beta(t'-\tau)}.$$

Two-time correlation functions are generally shown to be independent of t as t → ∞, depending instead on τ. These functions can be used to find the spectral density S(t,ω) by computing the transform

$$S(t,\omega)=K\int_0^\infty d\tau\; g(t-\tau,\tau)\,e^{i(\omega-\omega_1)\tau}+\mathrm{c.c.},$$

where K is a constant. The spectral density can be viewed as the rate of emission of photons of frequency ω at the given time t, which is useful in determining the power output of a system at a given time. The correlation function associated with the spectral density of resonance fluorescence depends on the electric field; thus, once the constant K has been determined, the result is equivalent to

$$S(\vec{r},\omega_0)=\frac{1}{\pi}\,\mathrm{Re}\int_0^\infty d\tau\;\langle E^{(-)}(\vec{r},t)\,E^{(+)}(\vec{r},t+\tau)\rangle\, e^{i\omega_0\tau}.$$

This is related to the intensity by

$$\langle E^{(-)}(\vec{r},t)\,E^{(+)}(\vec{r},t+\tau)\rangle=I_0(\vec{r})\,\langle\sigma_+(t)\,\sigma_-(t+\tau)\rangle.$$

In the weak field limit, when $\Omega_R\ll\Gamma/4$, the power spectrum can be determined to be

$$S(\vec{r},\omega_0)=I_0(\vec{r})\left(\frac{\Omega_R}{\Gamma}\right)^2\delta(\omega-\omega_0).$$
In the strong field limit, the power spectrum is slightly more complicated and is found to be

$$S(\vec{r},\omega_0)=\frac{I_0(\vec{r})}{8\pi}\left[\frac{3\Gamma/4}{(\omega-\Omega_R-\omega_0)^2+(3\Gamma/4)^2}+\frac{\Gamma}{(\omega-\omega_0)^2+(\Gamma/2)^2}+\frac{3\Gamma/4}{(\omega+\Omega_R-\omega_0)^2+(3\Gamma/4)^2}\right].$$

From these two expressions it is easy to see that in the weak field limit a single peak appears at ω₀ in the spectral density, due to the delta function, while in the strong field limit a Mollow triplet forms, with sideband peaks at ω = ω₀ ± Ω_R and peak widths of Γ/2 for the central peak and 3Γ/4 for the sideband peaks.

Photon anti-bunching is the process in resonance fluorescence through which the rate at which photons are emitted by a two-level atom is limited. A two-level atom is only capable of absorbing a photon from the driving electromagnetic field after a certain period of time has passed. This time period is modeled as a probability distribution p(τ) with p(τ) → 0 as τ → 0. Since the atom cannot absorb a photon, it is unable to emit one, and thus there is a restriction on the spectral density. This is illustrated by the second-order correlation function

$$g^{(2)}(\tau)=1-\left(\cos\mu\tau+\frac{3\Gamma}{4\mu}\sin\mu\tau\right)e^{-3\Gamma\tau/4},$$

where μ denotes the sideband oscillation frequency $\sqrt{\Omega_R^2-(\Gamma/4)^2}$. From this equation it is clear that g⁽²⁾(0) = 0 and g⁽²⁾(τ) > 0, resulting in the relation that describes photon antibunching: g⁽²⁾(τ) > g⁽²⁾(0). This shows that the power cannot be anything other than zero for τ = 0. In the weak field approximation g⁽²⁾(τ) can only increase monotonically as τ increases; in the strong field approximation g⁽²⁾(τ) oscillates as it increases, and these oscillations die off as τ → ∞. The physical idea behind photon anti-bunching is that, while the atom itself is ready to be excited as soon as it releases its previous photon, the electromagnetic field created by the laser takes time to excite the atom.

Double resonance [4] is the phenomenon in which an additional magnetic field is applied to a two-level atom, in addition to the typical electromagnetic field used to drive resonance fluorescence. This lifts the spin degeneracy of the Zeeman energy levels, splitting them along the energies associated with the respective available spin levels. Not only can resonance then be achieved around the typical excited state: if a second driving electromagnetic field at the Larmor frequency is applied, a second resonance can be achieved around the energy state associated with $m_B=0$ and the states associated with $m_B=\pm1$.
Thus resonance is achievable not only about the possible energy levels of a two-level atom, but also about the sub-levels created by lifting the degeneracy of a level. If the applied magnetic field is tuned properly, the polarization of the resonance fluorescence can be used to describe the composition of the excited state. Double resonance can therefore be used to find the Landé factor, which describes the magnetic moment of the electron within the two-level atom.

Any two-state system can be modeled as a two-level atom. This leads to many systems being described as an "artificial atom". For instance, a superconducting loop through which a magnetic flux can pass may act as an artificial atom, since the current can induce a magnetic flux in either direction through the loop depending on whether the current is clockwise or counterclockwise. [5] The Hamiltonian for this system is

$$\hat{H}=\hbar\sqrt{\omega_0^2+\epsilon^2}\;\frac{\hat{\sigma}_z}{2},$$

where $\hbar\epsilon=2I_p\,\delta\Phi$. This models the dipole interaction of the atom with a one-dimensional electromagnetic wave. That the analogy to a real two-level atom is genuine can be seen from the fact that the fluorescence appears in the spectrum as the Mollow triplet, precisely as for a true two-level atom. These artificial atoms are often used to explore the phenomenon of quantum coherence. This allows for the study of squeezed light, which is known for enabling more precise measurements. It is difficult to explore the resonance fluorescence of squeezed light with a typical two-level atom, because all modes of the electromagnetic field must be squeezed, which cannot easily be accomplished. In an artificial atom, the number of possible modes of the field is significantly limited, allowing for easier study of squeezed light. In 2016 D. M. Toyli et al. performed an experiment in which two superconducting parametric amplifiers were used to generate squeezed light and then to detect resonance fluorescence in artificial atoms driven by the squeezed light. [6] Their results agreed strongly with the theory describing the phenomenon. One implication of this study is that it allows resonance fluorescence to assist in qubit readout with squeezed light. The qubit used in the study was an aluminum transmon circuit coupled to a 3-D aluminum cavity; extra silicon chips were introduced into the cavity to assist in tuning the qubit's resonance to that of the cavity, and the majority of the detuning that did occur was a result of the degradation of the qubit over time.

A quantum dot is a semiconductor nanoparticle that is often used in quantum optical systems, including for its ability to be placed in optical microcavities, where it can act as a two-level system. In this process, quantum dots are placed in cavities, which allows the discretization of the possible energy states of the quantum dot coupled with the vacuum field. The vacuum field is then replaced by an excitation field, and resonance fluorescence is observed. Current technology only allows for population of the dot in an excited state (not necessarily always the same one) and relaxation of the quantum dot back to its ground state; direct excitation followed by ground-state collection was not achieved until recently. This is mainly because, as a result of the size of quantum dots, defects and contaminants create fluorescence of their own, apart from that of the quantum dot.
This desired manipulation has been achieved with quantum dots on their own through a number of techniques, including four-wave mixing and differential reflectivity; however, no technique had shown it to occur in cavities until 2007. Resonance fluorescence has been seen in a single self-assembled quantum dot, as presented by Muller and others in 2007. [7] In the experiment they used quantum dots that were grown between two mirrors in the cavity; thus the quantum dot was not placed in the cavity, but created in it. They then coupled a strong, in-plane-polarized, tunable continuous-wave laser to the quantum dot and were able to observe resonance fluorescence from it. In addition to the excitation of the quantum dot, they were also able to collect the emitted photons with a micro-PL setup. This allows for resonant coherent control of the ground state of the quantum dot while also collecting the photons emitted from the fluorescence.

In 2007, G. Wrigge, I. Gerhardt, J. Hwang, G. Zumofen, and V. Sandoghdar developed an efficient method to observe resonance fluorescence for an entire molecule, as opposed to its typical observation in a single atom. [8] Instead of coupling the electric field to a single atom, they replicated two-level systems in dye molecules embedded in solids. They used a tunable dye laser to excite the dye molecules in their sample. Because they could only have one source at a time, the proportion of shot noise to actual data was much higher than normal. The sample they excited was a Shpol'skii matrix which they had doped with the dye they wished to use, dibenzanthanthrene. To improve the accuracy of the results, single-molecule fluorescence-excitation spectroscopy was used. The actual process for measuring the resonance was to measure the interference between the laser beam and the photons scattered from the molecule: the laser was passed over the sample, several photons were scattered back, and the interference in the resulting electromagnetic field could be measured. The improvement in this technique was the use of solid-immersion-lens technology, a lens with a much higher numerical aperture than normal lenses, as it is filled with a material that has a large refractive index. The technique used to measure the resonance fluorescence in this system was originally designed to locate individual molecules within substances.

The largest implications of resonance fluorescence are for future technologies. Resonance fluorescence is used primarily in the coherent control of atoms. By coupling a two-level atom, such as a quantum dot, to an electric field in the form of a laser, one can effectively create a qubit, with the qubit states corresponding to the excited and ground states of the two-level atom. Manipulation of the electromagnetic field allows effective control of the dynamics of the atom, which can then be used to create quantum computers. The largest barriers that still stand in the way are failures in truly controlling the atom; for instance, true control of spontaneous decay and of decoherence of the field poses large problems that must be overcome before two-level atoms can truly be used as qubits.
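A closing numerical illustration ties the strong-field results together: Fourier-transforming the steady-state two-time correlation function quoted earlier reproduces the Mollow triplet, with peaks at detunings 0 and ±Ω_R. The Γ and Ω_R values below are arbitrary assumptions with Ω_R ≫ Γ; this is a sketch, not an analysis from any of the cited experiments:

```python
import numpy as np

G, W = 1.0, 10.0                   # assumed Gamma and Rabi frequency Omega_R
tau = np.linspace(0.0, 60.0, 60000)

# Envelope of <sigma_+(0) sigma_-(tau)> from the steady-state solution above
g = 0.25 * (np.exp(-G * tau / 2)
            + 0.5 * np.exp(-3 * G * tau / 4) * np.exp(-1j * W * tau)
            + 0.5 * np.exp(-3 * G * tau / 4) * np.exp(+1j * W * tau))

# One-sided transform S(delta) ~ Re int_0^inf g(tau) e^{i delta tau} d tau,
# with delta = omega - omega_0 the detuning from the atomic line
delta = np.linspace(-20.0, 20.0, 2001)
S = np.array([np.trapz(g * np.exp(1j * d * tau), tau).real for d in delta])

is_peak = np.r_[False, (S[1:-1] > S[:-2]) & (S[1:-1] > S[2:]), False]
print(np.round(delta[is_peak], 1))   # peaks near -10, 0, +10, i.e. 0 and +/- Omega_R
```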
https://en.wikipedia.org/wiki/Resonance_fluorescence
Resonance ionization is a process in optical physics used to excite a specific atom (or molecule) beyond its ionization potential to form an ion using a beam of photons irradiated from a pulsed laser light. [ 1 ] In resonance ionization, the absorption or emission properties of the emitted photons are not considered; rather, only the resulting excited ions are mass-selected, detected and measured. [ 2 ] Depending on the laser light source used, one electron can be removed from each atom, so that resonance ionization produces an efficient selectivity in two ways: elemental selectivity in ionization and isotopic selectivity in measurement. [ 2 ] [ 3 ] [ 4 ] During resonance ionization, an ion gun creates a gas-phase cloud of atoms and molecules from the sample surface, and a tunable laser is used to fire a beam of photons at the cloud of particles emanating from the sample ( analyte ). An initial photon from this beam is absorbed by one of the sample atoms, exciting one of the atom's electrons to an intermediate excited state . A second photon then ionizes the same atom from the intermediate state: the electron gains enough energy to be ejected from its orbital ; the result is a packet of positively charged ions which are then delivered to a mass analyzer . [ 5 ] [ 6 ] Resonance ionization contrasts with resonance-enhanced multiphoton ionization (REMPI) in that the latter is neither selective nor efficient, since resonances are seldom used to prevent interference. Also, resonance ionization is used for an atomic (elemental) analyte , whereas REMPI is used for a molecular analyte . [ 7 ] The analytical technique on which the process of resonance ionization is based is termed resonance ionization mass spectrometry (RIMS). RIMS is derived from the original method, resonance ionization spectroscopy (RIS), which was initially used to detect single atoms with improved time resolution. [ 8 ] RIMS has proved useful in the investigation of radioactive isotopes (such as for studying rare fleeting isotopes produced in high-energy collisions), trace analysis (such as for discovering impurities in highly pure materials), atomic spectroscopy (such as for detecting low-content materials in biological samples), and for applications in which high levels of sensitivity and elemental selectivity are desired. Resonance ionization was first used in a spectroscopy experiment in 1971 at the Institute for Spectroscopy of the Russian Academy of Sciences ; in that experiment, ground-state rubidium atoms were ionized using ruby lasers . [ 9 ] In 1974, a group of photophysical researchers at the Oak Ridge National Laboratory led by George Samuel Hurst developed, for the first time, the resonance ionization process on helium atoms. [ 10 ] They wanted to use laser light to measure the number of singlet metastable helium, He (2 1 S), particles created from energetic protons. [ 11 ] [ 12 ] The group achieved the selective ionization of the excited state of an atom at nearly 100% efficiency by passing a beam of protons into the helium gas cell and probing it with pulsed laser light. The experiment on singlet metastable helium atoms was seminal in the journey towards using resonance ionization spectroscopy (RIS) for extensive atomic analysis in research settings. Cesium atoms were subsequently used to show that single atoms of an element could be counted if resonance ionization was performed in a counter in which an electron could be detected for an atom in its ground state.
[ 12 ] Subsequently, advanced techniques categorized under resonance ionization mass spectrometry (RIMS) were used to generate the relative abundance of various ion types by coupling the RIS lasers to magnetic sector , quadrupole , or time-of-flight (TOF) mass spectrometers. The field of resonance ionization spectroscopy (RIS) has largely been shaped by the formal and informal communications heralding its discovery. [ 13 ] Research papers on RIS relied heavily on self-citation from inception, a trend which climaxed three years later with the founding of a company to commercialize the technique. [ 14 ] A model resonance ionization mass spectrometry (RIMS) set-up consists of a laser system (consisting of multiple lasers), the sample from which the atoms are derived, and a suitable mass spectrometer which mass-selectively detects the photoions created by resonance ionization . In resonant ionization, atoms or molecules in the ground state are excited to higher energy states by the resonant absorption of photons to produce ions. These ions are then monitored by appropriate detectors. In order to ensure high sensitivity and saturation of the process, the atomic or molecular beam must be formed from the ground state, the atoms should be efficiently excited and ionized, and each atom should be converted by the photon field of a short-pulsed laser to produce a positive ion and a valence electron. [ 15 ] In a basic RIS process, a pulsed laser beam produces photons of the right energy to excite an atom initially in its ground state, a , to an excited level, b . During the laser pulse, the population of state b increases at the expense of that of state a . Within a very short time, the rate of stimulated emission from the excited state will equal the rate of production, so that the system is in equilibrium as long as the laser intensity is kept sufficiently high during a pulse. This high laser intensity translates into a photon fluence (photons per unit of beam area) large enough that a necessary condition for the saturation of the RIS process has been met. If, in addition, the rate of photoionization is greater than the rate of consumption of intermediates, then each selected atom is converted to one electron plus one positive ion, so that the RIS process is saturated. [ 16 ] A usually efficient way to produce free atoms of an element in the ground state is to atomize the element by ion sputtering or thermal vaporization from the sample matrix, under vacuum conditions or at pressures significantly less than normal atmospheric pressure. The resulting plume of secondary atoms is then channeled through the path of multiple tuned laser beams which are capable of exciting consecutive electronic transitions in the specified element. Light from these tuned lasers promotes the desired atoms above their ionization potentials, whereas interfering atoms from other elements are hardly ionized, since they are generally transparent to the laser beam. This process produces photoions which are extracted and directed towards an analytical facility, such as a magnetic sector, to be counted. This approach is extremely sensitive to atoms of the specified element, so that the ionization efficiency is almost 100%, and also elementally selective, due to the highly unlikely chance that other species will be resonantly ionized. [ 16 ] [ 17 ] To achieve high ionization efficiencies, monochromatic lasers with high instantaneous spectral power are used.
Typical lasers in use include continuous-wave lasers with extremely high spectral purity and pulsed lasers for analyses involving limited numbers of atoms. [ 18 ] Continuous-wave lasers, however, are often preferred to pulsed lasers because of the latter's relatively low duty cycle, since they can only produce photoions during the brief laser pulses, and because of the difficulty in reproducing results caused by pulse-to-pulse jitter, laser beam drifting, and wavelength variations. [ 19 ] Moderate laser powers, if high enough to drive the desired transitions, can be used, since the non-resonant photoionization cross-section is low, which implies a negligible ionization efficiency for unwanted atoms. The influence of the sample matrix can also be reduced by separating the evaporation and ionization processes both in time and in space. Another factor that could affect the efficiency and selectivity of the ionization process is the presence of contaminants caused by surface or impact ionization. This can be reduced by several orders of magnitude by using mass analysis, so that the isotopic compositions of the desired element are determined. Most of the elements of the periodic table can be ionized by one of the several excitation schemes available. [ 3 ] The suitable excitation scheme depends on certain factors, including the level scheme of the element's atom, its ionization energy , the required selectivity and sensitivity, likely interference, and the wavelengths and power levels of the available laser systems. [ 15 ] Most excitation schemes vary in the last step, the ionization step. This is due to the low cross-section for non-resonant photoionization produced by the laser. A pulsed laser system facilitates the efficient coupling of a time-of-flight mass spectrometer (TOF-MS) to the resonance ionization set-up because of the instrument's abundance sensitivity: TOF systems can produce an abundance sensitivity of up to 10 4 , whereas magnetic mass spectrometers can only achieve up to 10 2 . [ 20 ] The total selectivity in a RIS process is a combination of the selectivities of the various resonance transitions in a multiple step-wise excitation. The probability that an atom accidentally coincides with the resonance of another element is about 10 −5 . The addition of a mass spectrometer increases this figure by a factor of 10 6 , such that the total elemental selectivity surpasses, or at least compares to, that of tandem mass spectrometry (MS/MS), the most selective technique available. [ 21 ] Optical ionization schemes have been developed to produce element-selective ion sources for various elements. Most of the elements of the periodic table have been resonantly ionized by using one of five major optical routes based on the principle of RIMS. [ 16 ] [ 22 ] The routes were formed by the absorption of two or three photons to achieve excitation and ionization, and are provided on the basis of optically allowed transitions between atomic levels, so-called bound-bound transitions . [ 23 ] For an atom of the element to be promoted to the bound-continuum, the photon energies must lie within the range of the selected tunable lasers, and the energy supplied by the last absorbed photon must be sufficient to exceed the atom's ionization energy. [ 24 ] The optical ionization schemes are denoted by the number of photons necessary to make the ion pair. In Schemes 1 and 2, two photons (and processes) are involved.
One photon excites the atom from the ground state to an intermediate state, while the second photon ionizes the atom. In Schemes 3 and 4, three photons (and processes) are involved. The first two distinct photons create consecutive bound-bound transitions within the selected atom, while the third photon is absorbed for ionization. Scheme 5 is a three-photon, two-intermediate-level photoionization process: after the first two photons have been absorbed, the third photon achieves ionization. [ 8 ] The RIS process can be used to ionize all elements on the periodic table, except helium and neon, using available lasers. [ 1 ] In fact, it is possible to ionize most elements with a single laser set-up, thus enabling rapid switching from one element to another. Optical schemes based on RIMS have been used to study over 70 elements, and over 39 elements can be ionized with a single laser combination using a rapid computer-modulated framework that switches elements within seconds. [ 25 ] As an analytical technique, RIS is useful because of several of its working characteristics: an extremely low detection limit, so that sample masses of the order of 10 −15 can be identified; the extremely high sensitivity and elemental selectivity useful in micro- and trace analysis when coupled with mass spectrometers; and the ability of the pulsed laser ion source to produce isobarically pure ion beams. [ 6 ] A major advantage of using resonance ionization is that it is a highly selective ionization mode; it is able to target a single type of atom among a background of many types of atoms, even when the background atoms are much more abundant than the target atoms. In addition, resonance ionization combines the high selectivity that is desired in spectroscopy methods with ultrasensitivity, thus making resonance ionization useful when analyzing complex samples with several atomic components. [ 26 ] [ 27 ] Resonance ionization spectroscopy (RIS) thus has a wide range of research and industrial applications. These include characterizing the diffusion and chemical reaction of free atoms in a gas medium, solid-state surface analysis using direct sampling, studying the degree of concentration variations in a dilute vapor, detecting the allowable limits of the number of particles needed in a semiconductor device, and estimating the flux of solar neutrinos on Earth. [ 16 ] Other uses include determining high-precision values for plutonium and uranium isotopes in a rapid fashion, investigating the atomic properties of technetium at the ultra-trace level, and capturing the concurrent excitation of stable daughter atoms with the decay of their parent atoms, as is the case for alpha particles , beta rays , and positrons . RIS is now in very common use in research facilities where the quick and quantitative determination of the elemental composition of materials is important. [ 2 ] Pulsed laser light sources provide higher photon fluxes than continuous-wave lasers do; [ 25 ] however, the use of pulsed lasers currently limits broader application of RIMS in two ways. One, photoions are created only during the short laser pulses, thus significantly reducing the duty cycle of pulsed resonance ionization mass spectrometers relative to their continuous-beam counterparts. Two, incessant drifts in laser pointing and pulse timing, alongside jitter between pulses, severely hamper reproducibility .
[ 19 ] These issues affect the extent to which resonance ionization can be used to solve some of the challenges confronting practical analysts today; even so, applications of RIMS abound in various traditional and emerging disciplines such as cosmochemistry , medical research , environmental chemistry , geophysical sciences , nuclear physics , genome sequencing , and semiconductors . [ 19 ] [ 28 ]
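A back-of-the-envelope sketch of the scheme bookkeeping described above: in a two-photon (Scheme 1/2-style) route, the first photon must be resonant with the bound-bound transition and the combined photon energy must exceed the ionization potential. The wavelengths, intermediate-level energy, and ionization potential below are illustrative placeholders, not data from the article.

```python
PLANCK_EV_NM = 1239.841984  # h*c in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c / lambda, in eV for lambda in nm."""
    return PLANCK_EV_NM / wavelength_nm

def two_step_scheme_ionizes(lambda1_nm, lambda2_nm, e_intermediate_ev, ip_ev,
                            tol_ev=1e-3):
    """Check a two-photon resonance-ionization scheme: photon 1 must match the
    intermediate level, and the summed photon energies must exceed the
    ionization potential (promotion into the bound-continuum)."""
    e1 = photon_energy_ev(lambda1_nm)
    e2 = photon_energy_ev(lambda2_nm)
    resonant = abs(e1 - e_intermediate_ev) < tol_ev
    ionizes = (e1 + e2) > ip_ev
    return resonant and ionizes

# Illustrative numbers only: a 2.5 eV intermediate level, 6.0 eV ionization potential
lam1 = PLANCK_EV_NM / 2.5  # wavelength resonant with the first step (~496 nm)
print(two_step_scheme_ionizes(lam1, 330.0, 2.5, 6.0))  # True: 2.5 + 3.76 eV > 6.0 eV
```

The same arithmetic extends to the three-photon Schemes 3-5 by summing three photon energies, with the first two each matched to a bound-bound transition.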
https://en.wikipedia.org/wiki/Resonance_ionization
In quantum mechanics , resonance cross section arises in the context of quantum scattering theory , which deals with studying the scattering of quantum particles from potentials. The scattering problem deals with the calculation of the flux distribution of scattered particles/waves as a function of the potential, and of the state (characterized by conservation of momentum / energy ) of the incident particle. For a free quantum particle incident on the potential, the plane wave solution to the time-independent Schrödinger wave equation is $\psi ({\vec {r}})=e^{i{\vec {k}}\cdot {\vec {r}}}$. For one-dimensional problems, the transmission coefficient $T$ is of interest. It is defined as $T={\frac {|{\vec {J}}_{\text{trans}}|}{|{\vec {J}}_{\text{inc}}|}}$, where ${\vec {J}}$ is the probability current density. This gives the fraction of the incident beam of particles that makes it through the potential. For three-dimensional problems, one would calculate the scattering cross-section $\sigma$, which, roughly speaking, is the total area of the incident beam which is scattered. Another quantity of relevance is the partial cross-section , $\sigma _{l}$, which denotes the scattering cross-section for a partial wave of a definite angular momentum eigenstate. These quantities naturally depend on ${\vec {k}}$, the wave-vector of the incident wave, which is related to its energy by $E={\frac {\hbar ^{2}k^{2}}{2m}}$. The values of these quantities of interest, the transmission coefficient $T$ (in the case of one-dimensional potentials) and the partial cross-section $\sigma _{l}$, show peaks in their variation with the incident energy $E$. These phenomena are called resonances. A one-dimensional finite square potential is given by $V(x)=V_{0}$ for $0<x<L$ and $V(x)=0$ otherwise. The sign of $V_{0}$ determines whether the square potential is a well or a barrier . To study the phenomena of resonance, the time-independent Schrödinger equation for a stationary state of a massive particle with energy $E>V_{0}$ is solved: $-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}+V(x)\,\psi =E\,\psi$. The wave-function solutions for the three regions $x<0$, $0<x<L$, $x>L$ are $\psi _{1}(x)=A_{1}e^{ik_{1}x}+B_{1}e^{-ik_{1}x}$, $\psi _{2}(x)=A_{2}e^{ik_{2}x}+B_{2}e^{-ik_{2}x}$, and $\psi _{3}(x)=A_{3}e^{ik_{1}x}+B_{3}e^{-ik_{1}x}$. Here, $k_{1}$ and $k_{2}$ are the wave numbers in the potential-free region and within the potential, respectively: $k_{1}={\frac {\sqrt {2mE}}{\hbar }}$ and $k_{2}={\frac {\sqrt {2m(E-V_{0})}}{\hbar }}$. To calculate $T$, a coefficient in the wave function is set as $B_{3}=0$, which corresponds to the fact that there is no wave incident on the potential from the right. Imposing the condition that the wave function $\psi (x)$ and its derivative ${\frac {d\psi }{dx}}$ should be continuous at the well/barrier boundaries $x=0$ and $x=L$, the relations between the coefficients are found, which allows $T$ to be found as $T=\left[1+{\frac {V_{0}^{2}\sin ^{2}(k_{2}L)}{4E(E-V_{0})}}\right]^{-1}$. It follows that the transmission coefficient $T$ reaches its maximum value of 1 when $k_{2}L=n\pi$ for any integer value $n$. This is the resonance condition , which leads to the peaking of $T$ to its maxima, called resonance . From the above expression, resonance occurs when the distance covered by the particle in traversing the well and back ($2L$) is an integer multiple of the de Broglie wavelength of the particle inside the potential ($\lambda ={\frac {2\pi }{k_{2}}}$). For $E>V_{0}$, reflections at potential discontinuities are not accompanied by any phase change.
[ 1 ] Therefore, resonances correspond to the formation of standing waves within the potential barrier/well. At resonance, the waves incident on the potential at $x=0$ and the waves reflecting between the walls of the potential are in phase, and reinforce each other. Far from resonances, standing waves cannot be formed. Then, waves reflecting between both walls of the potential (at $x=0$ and $x=L$) and the wave transmitted through $x=0$ are out of phase, and destroy each other by interference. The physics is similar to that of transmission in a Fabry–Pérot interferometer in optics, where the resonance condition and the functional form of the transmission coefficient are the same. The transmission coefficient oscillates between its maximum of 1 and its minimum of $\left[1+{\frac {V_{0}^{2}}{4E(E-V_{0})}}\right]^{-1}$ as a function of the length of the square well ($L$), with a period of ${\frac {\pi }{k_{2}}}$. The minima of the transmission tend to 1 in the limit of large energy $E\gg V_{0}$, resulting in shallower resonances, and conversely tend to 0 in the limit of low energy $E\ll V_{0}$, resulting in sharper resonances. This is demonstrated in plots of the transmission coefficient against the incident particle energy for fixed values of the shape factor, defined as ${\sqrt {\frac {2mV_{0}L^{2}}{\hbar ^{2}}}}$.
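The resonance condition above is easy to verify numerically. The following is a minimal sketch (not from the source article) that evaluates the transmission coefficient for a square well in natural units and checks the energies where $k_2 L = n\pi$ predicts $T = 1$; the mass, well depth, and width are arbitrary illustrative values.

```python
import numpy as np

hbar = 1.0  # natural units: hbar = m = 1
m = 1.0

def transmission(E, V0, L):
    """Transmission coefficient for a 1-D square well/barrier, valid for E > V0."""
    k2 = np.sqrt(2.0 * m * (E - V0)) / hbar  # wave number inside the potential
    return 1.0 / (1.0 + V0**2 * np.sin(k2 * L)**2 / (4.0 * E * (E - V0)))

V0, L = -10.0, 1.0                  # a well (V0 < 0) of width L
E = np.linspace(0.1, 60.0, 100000)  # incident energies (E > V0 throughout)
T = transmission(E, V0, L)          # the T(E) curve, e.g. for plotting

# Resonances predicted by k2 * L = n * pi  <=>  E_n = V0 + (n*pi*hbar/L)**2 / (2*m)
n = np.arange(1, 6)
E_res = V0 + (n * np.pi * hbar / L) ** 2 / (2.0 * m)
for En in E_res[E_res > 0]:
    print(f"resonance near E = {En:.3f}, T = {transmission(En, V0, L):.6f}")  # -> T = 1
```

At each predicted energy the sine term vanishes and the printed transmission is exactly 1, while between resonances T dips toward the minimum quoted above.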
https://en.wikipedia.org/wiki/Resonances_in_scattering_from_potentials
A resonant converter is a type of electric power converter that contains a network of inductors and capacitors called a resonant tank , tuned to resonate at a specific frequency. Resonant converters find applications in electronics, for example in integrated circuits . [ 1 ] There are multiple types of resonant converter.
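Since the defining feature of a resonant converter is its LC tank, the operating point follows from the standard relation $f_0 = \frac{1}{2\pi\sqrt{LC}}$. A minimal illustrative sketch (component values are arbitrary examples, not from the source):

```python
import math

def tank_resonant_frequency(L_henry: float, C_farad: float) -> float:
    """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) of a series/parallel LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# Example: a 100 uH inductor with a 10 nF capacitor
f0 = tank_resonant_frequency(100e-6, 10e-9)
print(f"tank resonates near {f0/1e3:.1f} kHz")  # ~159.2 kHz
```

A converter's switching network is then driven at or near this frequency so the tank shapes the current and voltage waveforms.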
https://en.wikipedia.org/wiki/Resonant_converter
In astronomy , a resonant trans-Neptunian object is a trans-Neptunian object (TNO) in mean-motion orbital resonance with Neptune . The orbital periods of the resonant objects are in simple integer ratios with the period of Neptune, e.g. 1:2, 2:3, etc. Resonant TNOs can be either part of the main Kuiper belt population, or the more distant scattered disc population. [ 1 ] In the customary diagram of the distribution of known trans-Neptunian objects, resonant objects are plotted in red, and orbital resonances with Neptune are marked with vertical bars: 1:1 marks the position of Neptune's orbit and its trojans ; 2:3 marks the orbit of Pluto and the plutinos ; and 1:2, 2:5, etc. mark a number of smaller families. The designations 2:3 and 3:2 both refer to the same resonance for TNOs. There is no ambiguity, because TNOs have, by definition, periods longer than Neptune's. The usage depends on the author and the field of research. Detailed analytical and numerical studies of Neptune's resonances have shown that the objects must occupy a relatively narrow range of energies. [ 2 ] [ 3 ] If the object's semi-major axis is outside these narrow ranges, the orbit becomes chaotic, with widely changing orbital elements. As TNOs were discovered, more than 10% were found to be in 2:3 resonances, far from a random distribution. It is now believed that the objects have been collected from wider distances by sweeping resonances during the migration of Neptune. [ 4 ] Well before the discovery of the first TNO, it was suggested that interaction between giant planets and a massive disk of small particles would, via angular-momentum transfer, make Jupiter migrate inwards and make Saturn, Uranus, and especially Neptune migrate outwards. During this relatively short period of time, Neptune's resonances would sweep through space, trapping objects on initially varying heliocentric orbits into resonance. [ 5 ] A few objects have been discovered following orbits with semi-major axes similar to that of Neptune, near the Sun – Neptune Lagrangian points . These Neptune trojans , named by analogy with the (Jupiter) Trojan asteroids , are in 1:1 resonance with Neptune; 31 were known as of February 2024. [ 6 ] [ 7 ] Only 3 objects are near Neptune's L 5 Lagrangian point , and the identification of one of these is insecure; the others are located in Neptune's L 4 region. [ 8 ] [ 7 ] In addition, (316179) 2010 EN 65 is a so-called "jumping trojan", currently transitioning from librating around L 4 to librating around L 5 , via the L 3 region. [ 9 ] The 2:3 resonance at 39.4 AU is by far the dominant category among the resonant objects. As of February 2020, it included 383 confirmed and 99 possible member bodies (such as (175113) 2004 PF 115 ). [ 6 ] Of these 383 confirmed plutinos, 338 have their orbits secured in simulations run by the Deep Ecliptic Survey . [ 7 ] The objects following orbits in this resonance are named plutinos after Pluto , the first such body discovered; several large plutinos have received minor-planet numbers. As of February 2020, 47 objects were confirmed to be in a 3:5 orbital resonance with Neptune at 42.2 AU, several of which are numbered objects. [ 7 ] [ 6 ] Another population of objects orbits the Sun at 43.6 AU (in the midst of the classical objects ). The objects are rather small (with two exceptions, H > 6) and most of them follow orbits close to the ecliptic . [ 7 ] As of February 2020, 55 4:7-resonant objects have had their orbits secured by the Deep Ecliptic Survey.
[ 6 ] [ 7 ] Several objects in this resonance have well-established orbits. [ 7 ] The 1:2 resonance at 47.7 AU is often considered to be the outer edge of the Kuiper belt , and the objects in this resonance are sometimes referred to as twotinos . Twotinos have inclinations less than 15 degrees and generally moderate eccentricities between 0.1 and 0.3. [ 10 ] An unknown number of the 2:1 resonants likely did not originate in a planetesimal disk that was swept by the resonance during Neptune's migration, but were captured after they had already been scattered. [ 11 ] There are far fewer objects in this resonance than plutinos. Johnston's Archive counts 111, while simulations by the Deep Ecliptic Survey have confirmed 126 as of February 2020. [ 6 ] [ 7 ] Long-term orbital integration shows that the 1:2 resonance is less stable than the 2:3 resonance; only 15% of the objects in the 1:2 resonance were found to survive 4 Gyr, as compared with 28% of the plutinos. [ 10 ] Consequently, it might be that twotinos were originally as numerous as plutinos, but their population has since dropped significantly below that of the plutinos. [ 10 ] Several objects in this resonance, spanning a range of diameters, have well-established orbits. [ 6 ] There are 57 confirmed 2:5-resonant objects at 55.3 AU as of February 2020, [ 6 ] [ 7 ] several of which have well-established orbits at 55.4 AU. Johnston's Archive counts 14 1:3-resonant objects as of February 2020 at 62.5 AU. [ 6 ] A dozen of these are secure according to the Deep Ecliptic Survey. [ 7 ] As of February 2024, higher-order resonances have been confirmed for a limited number of objects. [ 7 ] Haumea is thought to be in an intermittent 7:12 orbital resonance with Neptune. [ 13 ] Its ascending node Ω precesses with a period of about 4.6 million years, and the resonance is broken twice per precession cycle, or every 2.3 million years, only to return a hundred thousand years or so later. [ 14 ] Marc Buie qualifies it as non-resonant. [ 15 ] One of the concerns is that weak resonances may exist and would be difficult to prove due to the current lack of accuracy in the orbits of these distant objects. Many objects have orbital periods of more than 300 years and most have only been observed over a relatively short observation arc of a few years. Due to their great distance and slow movement against background stars, it may be decades before many of these distant orbits are determined well enough to confidently confirm whether a resonance is true or merely coincidental . A true resonance will smoothly oscillate while a coincidental near resonance will circulate. [ citation needed ] (See the formal definition below.) Simulations by Emel'yanenko and Kiseleva in 2007 show that (131696) 2001 XT 254 is librating in a 3:7 resonance with Neptune. [ 16 ] This libration can be stable for less than 100 million to billions of years. [ 16 ] Emel'yanenko and Kiseleva also show that (48639) 1995 TL 8 appears to have less than a 1% probability of being in a 3:7 resonance with Neptune, but it does execute circulations near this resonance. [ 16 ] The classes of TNO have no universally agreed precise definitions; the boundaries are often unclear, and the notion of resonance is not defined precisely. The Deep Ecliptic Survey introduced formally defined dynamical classes based on long-term forward integration of orbits under the combined perturbations from all four giant planets.
(See also the formal definition of classical KBO .) In general, a mean-motion resonance may involve not only orbital periods of the form $p\,\lambda -q\,\lambda _{\mathrm {N} }$, where p and q are small integers and λ and λ N are respectively the mean longitudes of the object and Neptune, but can also involve the longitude of the perihelion and the longitudes of the nodes (see orbital resonance for elementary examples). An object is resonant if, for some small integers (p, q, n, m, r, s), the argument (angle) defined below is librating (i.e. is bounded): [ 17 ] $\phi =p\,\lambda -q\,\lambda _{\mathrm {N} }-m\,\varpi -n\,\Omega -r\,\varpi _{\mathrm {N} }-s\,\Omega _{\mathrm {N} }$, where the $\varpi$ are the longitudes of perihelia and the $\Omega$ are the longitudes of the ascending nodes , for Neptune (with subscripts "N") and the resonant object (no subscripts). The term libration denotes here a periodic oscillation of the angle around some value, and is opposed to circulation, where the angle can take all values from 0 to 360°. For example, in the case of Pluto, the resonant angle $\phi$ librates around 180° with an amplitude of around 86.6°, i.e. the angle changes periodically from 93.4° to 266.6°. [ 18 ] All new plutinos discovered during the Deep Ecliptic Survey proved to be in a mean-motion resonance of the same type as Pluto's. More generally, this 2:3 resonance is an example of the resonances p:(p+1) (for example 1:2, 2:3, 3:4) that have proved to lead to stable orbits. [ 4 ] Their resonant angle is $\phi =(p+1)\,\lambda -p\,\lambda _{\mathrm {N} }-\varpi$. In this case, the importance of the resonant angle $\phi$ can be understood by noting that when the object is at perihelion, i.e. $\lambda =\varpi$, then $\phi =p\,(\varpi -\lambda _{\mathrm {N} })$, i.e. $\phi$ gives a measure of the distance of the object's perihelion from Neptune. [ 4 ] The object is protected from the perturbation by keeping its perihelion far from Neptune, provided $\phi$ librates around an angle far from 0°. As the orbital elements are known with a limited precision, the uncertainties may lead to false positives (i.e. classification as resonant of an orbit which is not). A recent approach [ 19 ] considers not only the current best-fit orbit but also two additional orbits corresponding to the uncertainties of the observational data. In simple terms, the algorithm determines whether the object would still be classified as resonant if its actual orbit differed from the best-fit orbit as a result of the errors in the observations. The three orbits are numerically integrated over a period of 10 million years. If all three orbits remain resonant (i.e. the argument of the resonance is librating, see the formal definition), the classification as a resonant object is considered secure. [ 19 ] If only two out of the three orbits are librating, the object is classified as probably in resonance. Finally, if only one orbit passes the test, the vicinity of the resonance is noted to encourage further observations to improve the data. [ 19 ] The two extreme values of the semi-major axis used in the algorithm are determined to correspond to uncertainties of the data of at most 3 standard deviations . Such a range of semi-major axis values should, under a number of assumptions, reduce the probability that the actual orbit is beyond this range to less than 0.3%. The method is applicable to objects with observations spanning at least 3 oppositions.
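A minimal numerical sketch of the libration test described above: given a time series of the resonant angle φ (in practice produced by an orbital integration), the angle is classified as librating if it stays confined to a bounded arc, and circulating if it eventually covers the full circle. The time series below are synthetic, purely for illustration; the coverage threshold is an assumed tuning parameter, not a published criterion.

```python
import numpy as np

def is_librating(phi_deg, coverage_threshold=355.0, bins=72):
    """Classify a resonant-angle time series (degrees) as librating or circulating.

    A circulating angle eventually visits essentially all values in [0, 360);
    a librating one oscillates inside a bounded arc. We histogram the angle
    and measure how much of the circle it actually covers.
    """
    phi = np.mod(phi_deg, 360.0)
    hist, _ = np.histogram(phi, bins=bins, range=(0.0, 360.0))
    covered = 360.0 * np.count_nonzero(hist) / bins
    return covered < coverage_threshold

t = np.linspace(0.0, 1000.0, 20000)
# Pluto-like libration: phi oscillates about 180 deg with ~86.6 deg amplitude
phi_librating = 180.0 + 86.6 * np.sin(2.0 * np.pi * t / 19.9)
# Circulation: phi winds through all values
phi_circulating = np.mod(3.0 * t, 360.0)

print(is_librating(phi_librating))    # True  (angle confined to ~93-267 deg)
print(is_librating(phi_circulating))  # False (angle covers the whole circle)
```

Published classifications replace the synthetic series with integrations of the best-fit orbit plus the two 3-sigma extreme orbits, as described in the preceding paragraph.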
https://en.wikipedia.org/wiki/Resonant_trans-Neptunian_object
A resonating device is a structure used by an animal that improves the quality of its vocalizations by amplifying the sound produced via acoustic resonance . The benefit of such an adaptation is that the call's volume increases while lessening the energy expenditure otherwise required to make such a sound. [ 1 ] [ 2 ] The resulting sound may also radiate more efficiently throughout the environment. [ 3 ] The resonator may take the form of a hollow (a resonant space ), a chamber (referred to as a resonating chamber ), or an otherwise air-filled cavity (such as an air sac ) which may be part of, or adjacent to, the animal's sound-producing organ , or it may be a structure entirely outside of the animal's body ( part of the environment ). Such structures use a similar principle to wind instruments , in that both utilize a resonator to amplify the soundwave that will ultimately be uttered. Such structures are widespread throughout the animal kingdom, as sound production is important in the social lives of various animals. Arthropods developed their resonating devices from various parts of their anatomy; bony fish often utilize their swim bladders as resonating chambers; various tetrapods developed resonating devices in parts of their respiratory tract , and evidence suggests that dinosaurs possessed them as well. Vocalizations produced through zoological resonating devices act as mating calls , territorial calls , and other communication calls . Cicadas produce songs as part of their courtship display ; the males of a number of species possess an abdomen that is largely hollow. [ 4 ] The sound-producing organs, the tymbals , are connected to the abdomen, and as a consequence their calls are amplified significantly; [ 1 ] cicadas have been recorded to emit sounds of around 100 decibels , which is enough to cause hearing loss after 15 minutes. [ 5 ] [ 6 ] [ 7 ] [ 8 ] A large Australian species, Cyclochila australasiae , produces sounds of up to 120 decibels at close range. [ 9 ] [ 10 ] In contrast, the basal hairy cicadas ( Tettigarcta ) do not emit an audible, airborne sound; like the related leafhoppers , they instead transmit their vibrations through their substrate, turning the plants they perch upon into resonators. [ 9 ] [ 11 ] A species of aquatic bug , Micronecta scholtzi , has been recorded to produce sounds of 105 dB, the "highest ratio dB/body size". This sound is produced via stridulation of the paramere (a genital appendage) against an abdominal ridge, and may be amplified by reflections and refractions within the layer of trapped air the bug uses as an air supply, though the use of the air bubble as such has not been proven. [ 12 ] [ 7 ] Tree crickets (specifically, Oecanthus henryi ) were found to create baffles by selecting appropriately sized leaves, then chewing a hole near the centre that was about the size of their wings. By calling from inside these baffles, they were able to prevent acoustic short-circuiting and effectively increase the loudness of their calls. [ 13 ] Bony fish possess an air-filled organ called the swim bladder that is primarily used to regulate buoyancy . However, a number of species have adapted their swim bladders to be part of a sound-producing organ. The sound-producing apparatus consists of fast-contracting striated muscles that vibrate the swim bladder, either attached entirely to the swim bladder or also attaching to adjacent structures like the vertebral column or occipital bones .
[ 14 ] Other families of fish also have sound-generating mechanisms involving the swim bladder. [ 14 ] Frogs possess vocal sacs which serve to enhance their nuptial calls . To call, the frog closes its mouth, then expels air from its lungs, through its larynx , and into the vocal sac; the larynx's vibration causes the vocal sac to resonate. [ 20 ] [ 21 ] [ 22 ] Additionally, some frogs may call from inside structures that further amplify their calls; Metaphrynella sundana calls from inside tree hollows with water pooling at the bottom, tuning its own calls to the resonant frequency of its specific tree hollow. [ 23 ] Mientien tree frogs ( Kurixalus idiootocus ) residing in urban areas utilize storm drains to improve their calls; frogs calling within the drains called louder and for longer periods. [ 24 ] The larynx is the primary vocal organ of mammals. In humans , it acts as a resonator only for high frequencies, due to its small volume; the pharynx , oral , and nasal cavities , in descending order, are the most important resonators in humans. [ 25 ] [ 26 ] [ 27 ] Several non-human primates are adapted to producing loud calls, and they often rely on resonance chambers to produce them. The howler monkeys possess extralaryngeal air sacs along with a pneumatized (hollow) hyoid bone ; it is suggested that the hollow hyoid acts as a resonating chamber, allowing the howler monkey to produce its namesake call. [ 28 ] [ 29 ] Gibbons are also well known for their loud territorial calls; [ 30 ] [ 31 ] the siamang has a particularly well developed gular sac that acts as a resonating chamber. [ 32 ] Male orangutans also use their throat pouches to enhance their calls. [ 33 ] [ 34 ] Male gorillas ' airways have air sacs that penetrate into the soft tissue of the chest. These air sacs amplify the sound produced by the gorilla's percussive chest-beating. [ 28 ] Horseshoe bats (family Rhinolophidae ) possess air pouches, or chambers, around their larynx which act as Helmholtz resonators . [ 1 ] The male hammerhead bat has an extremely large larynx that extends through most of his thoracic cavity , displacing his other internal organs. [ 35 ] A pharyngeal air sac connects to a large sinus in the bat's snout; these structures act as resonating chambers to further amplify the bat's voice. [ 36 ] So specialized are these structures that the scientists Herbert Lang and James Chapin remarked: "In no other mammal is everything so entirely subordinated to the organs of voice". [ 37 ] Pinnipeds have also been noted to employ such structures; the expanded nasal chambers of elephant and hooded seals act as resonant spaces that enhance their calls. The expanded laryngeal lumen of California sea lions , the pharyngeal pouch of the walrus , and the tracheal sacs of various phocids may also function in a similar manner. [ 38 ] Mysticetes , such as the blue whale , use their greatly expanded larynx as a resonant cavity. [ 28 ] Even in juveniles, the larynx is bigger than either one of the whale's lungs. This organ, along with the nasal passages, acts as a resonant space that produces the signature drawn-out calls of the baleen whales. [ 38 ] The ghara of the Indian gharial is a specialized organ that acts as a resonating chamber; as a result, the call of a mature male can be heard up to 75 metres (82 yd) away.
[ 39 ] [ 40 ] The crests of a number of lambeosaurine dinosaurs have been hypothesized to act as resonating chambers; reconstructed upper airways, specifically the nasal passages of Parasaurolophus , Lambeosaurus , Hypacrosaurus and Corythosaurus , have been examined, and it was concluded that they would have been able to enhance vocalizations in life, and that the different cranial crest shapes would have distinguished the sounds produced between genera. [ 41 ] [ 42 ] [ 43 ] [ 44 ] The avian syrinx is the primary vocal organ in most birds, [ 45 ] with the trachea being the primary resonator in the system. In some birds, the trachea is grossly elongated, coiling or looping within the thorax ; the trumpet manucode 's trachea is 20 times longer than is predicted for birds of a comparable size. This condition of tracheal elongation (TE) is known in several orders of birds, and it seems to have evolved independently a number of times. W. T. Fitch hypothesizes that the function of such an elongated trachea in birds may be to "exaggerate its apparent [body] size" through the lowering of the frequency ( Hz ) of its calls; larger individuals are preferentially selected as mates , and thus a "deeper" voice is selected for. Additionally, lower-frequency calls travel further, attracting mates from a wider area. [ 46 ] Additionally, the air sac system, which is part of the respiratory system in birds, may be an important resonator in certain birds, as is the inflated crop of columbiform pigeons and doves . [ 47 ]
https://en.wikipedia.org/wiki/Resonating_device
In condensed matter physics , the resonating valence bond theory ( RVB ) is a theoretical model that attempts to describe high-temperature superconductivity , and in particular the superconductivity in cuprate compounds. It was proposed by P. W. Anderson and Ganapathy Baskaran in 1987. [ 1 ] [ 2 ] The theory states that in copper oxide lattices, electrons from neighboring copper atoms interact to form a valence bond , which locks them in place. However, with doping , these electrons can act as mobile Cooper pairs and are able to superconduct. Anderson observed in his 1987 paper that the origins of superconductivity in doped cuprates lie in the Mott insulator nature of crystalline copper oxide. [ 3 ] RVB builds on the Hubbard and t-J models used in the study of strongly correlated materials . [ 4 ] In 2014, evidence that fractionalized particles can occur in quasi-two-dimensional magnetic materials was found by EPFL scientists, [ 5 ] lending support to Anderson's theory. [ 6 ] The physics of Mott insulators is described by the repulsive Hubbard model Hamiltonian $H=-t\sum _{\langle i,j\rangle ,\sigma }\left(c_{i\sigma }^{\dagger }c_{j\sigma }+\mathrm {h.c.} \right)+U\sum _{i}n_{i\uparrow }n_{i\downarrow }$, with hopping amplitude t and on-site repulsion U > 0. In 1973, Anderson first suggested that this Hamiltonian can have a non-degenerate ground state that is composed of disordered spin states. Shortly after the high-temperature superconductors were discovered, Anderson and Kivelson et al. proposed a resonating valence bond ground state for these materials, written as $|\mathrm {RVB} \rangle =\sum _{C}|C\rangle$, where $C$ represents a covering of a lattice by nearest-neighbor dimers. Each such covering is weighted equally. In a mean field approximation , the RVB state can be written in terms of a Gutzwiller projection , and displays a superconducting phase transition per the Kosterlitz–Thouless mechanism. [ 7 ] However, a rigorous proof for the existence of a superconducting ground state in either the Hubbard or the t-J Hamiltonian is not yet known. [ 7 ] Further, the stability of the RVB ground state has not yet been confirmed. [ 8 ]
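To make the sum over dimer coverings concrete, the sketch below (illustrative only, not from the source) enumerates by brute force the nearest-neighbor dimer coverings C of a small rectangular lattice; the equal-weight RVB state is then the formal sum of these basis configurations. The lattice size and the representation of bonds are arbitrary choices for the demonstration.

```python
from itertools import product

def dimer_coverings(rows, cols):
    """Enumerate perfect matchings (dimer coverings) of a rows x cols grid.

    Sites are (r, c); a covering is a frozenset of bonds, each bond being a
    frozenset of two nearest-neighbor sites. We always pair off the smallest
    unmatched site; its left/up neighbors are smaller and hence already
    matched, so trying only right/down neighbors is exhaustive.
    """
    sites = [(r, c) for r, c in product(range(rows), range(cols))]

    def extend(remaining, bonds):
        if not remaining:
            yield frozenset(bonds)
            return
        site = min(remaining)                 # smallest free site
        r, c = site
        for nbr in ((r, c + 1), (r + 1, c)):  # right and down neighbors
            if nbr in remaining:
                yield from extend(remaining - {site, nbr},
                                  bonds + [frozenset((site, nbr))])

    yield from extend(frozenset(sites), [])

coverings = list(dimer_coverings(2, 4))
print(len(coverings), "coverings")  # 5 coverings for a 2x4 ladder
# |RVB> = sum over these coverings, each entering with equal weight
```

The number of coverings grows rapidly with lattice size, which is one intuitive reason why rigorous statements about the RVB ground state are hard to obtain.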
https://en.wikipedia.org/wiki/Resonating_valence_bond_theory
A resonator is a device or system that exhibits resonance or resonant behavior. That is, it naturally oscillates with greater amplitude at some frequencies , called resonant frequencies , than at other frequencies. The oscillations in a resonator can be either electromagnetic or mechanical (including acoustic ). Resonators are used either to generate waves of specific frequencies or to select specific frequencies from a signal. Musical instruments use acoustic resonators that produce sound waves of specific tones. Another example is quartz crystals used in electronic devices such as radio transmitters and quartz watches to produce oscillations of very precise frequency. A cavity resonator is one in which waves exist in a hollow space inside the device. In electronics and radio, microwave cavities consisting of hollow metal boxes are used in microwave transmitters, receivers and test equipment to control frequency, in place of the tuned circuits which are used at lower frequencies. Acoustic cavity resonators, in which sound is produced by air vibrating in a cavity with one opening, are known as Helmholtz resonators . A physical system can have as many resonant frequencies as it has degrees of freedom ; each degree of freedom can vibrate as a harmonic oscillator . Systems with one degree of freedom, such as a mass on a spring, pendulums , balance wheels , and LC tuned circuits , have one resonant frequency. Systems with two degrees of freedom, such as coupled pendulums and resonant transformers , can have two resonant frequencies. A crystal lattice composed of N atoms bound together can have N resonant frequencies. As the number of coupled harmonic oscillators grows, the time it takes to transfer energy from one to the next becomes significant. The vibrations in them begin to travel through the coupled harmonic oscillators in waves, from one oscillator to the next. The term resonator is most often used for a homogeneous object in which vibrations travel as waves, at an approximately constant velocity, bouncing back and forth between the sides of the resonator. The material of the resonator, through which the waves flow, can be viewed as being made of millions of coupled moving parts (such as atoms). Therefore, it can have millions of resonant frequencies, although only a few may be used in practical resonators. The oppositely moving waves interfere with each other, and at the resonant frequencies they reinforce each other to create a pattern of standing waves in the resonator. If the distance between the sides is $d$, the length of a round trip is $2d$. To cause resonance, the phase of a sinusoidal wave after a round trip must be equal to the initial phase, so that the waves self-reinforce. The condition for resonance in a resonator is that the round-trip distance, $2d$, is equal to an integer number $N$ of wavelengths $\lambda$ of the wave: $2d=N\lambda$, $N\in \{1,2,3,\dots \}$. If the velocity of the wave is $c$, the frequency is $f=c/\lambda$, so the resonant frequencies are $f={\frac {Nc}{2d}}$. So the resonant frequencies of resonators, called normal modes , are equally spaced multiples ( harmonics ) of a lowest frequency called the fundamental frequency . The above analysis assumes that the medium inside the resonator is homogeneous, so the waves travel at a constant speed, and that the shape of the resonator is rectilinear.
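As a quick numerical illustration of the relation just given, $f_N = Nc/2d$, the sketch below lists the first few normal-mode frequencies of a homogeneous resonator; the length and wave speed are arbitrary example values, not taken from the source.

```python
def mode_frequencies(d_m: float, c_m_per_s: float, n_modes: int = 5):
    """Normal-mode frequencies f_N = N*c/(2*d) of a homogeneous resonator
    of length d with wave speed c (round-trip condition 2*d = N*lambda)."""
    return [N * c_m_per_s / (2.0 * d_m) for N in range(1, n_modes + 1)]

# Example: a 0.5 m air column with speed of sound ~343 m/s
for N, f in enumerate(mode_frequencies(0.5, 343.0), start=1):
    print(f"mode {N}: {f:.0f} Hz")  # 343, 686, 1029, ... Hz (equally spaced harmonics)
```

The equal spacing of the printed frequencies is exactly the harmonic series described above; it holds only under the stated assumptions of a homogeneous medium and rectilinear geometry.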
If the resonator is inhomogeneous or has a nonrectilinear shape, like a circular drumhead or a cylindrical microwave cavity , the resonant frequencies may not occur at equally spaced multiples of the fundamental frequency. They are then called overtones instead of harmonics . There may be several such series of resonant frequencies in a single resonator, corresponding to different modes of vibration. An electrical circuit composed of discrete components can act as a resonator when both an inductor and capacitor are included. Oscillations are limited by the inclusion of resistance, either via a specific resistor component, or due to resistance of the inductor windings. Such resonant circuits are also called RLC circuits after the circuit symbols for the components. A distributed-parameter resonator has capacitance, inductance, and resistance that cannot be isolated into separate lumped capacitors, inductors, or resistors. An example of this, much used in filtering , is the helical resonator . An inductor consisting of a coil of wire, is self-resonant at a certain frequency due to the parasitic capacitance between its turns. This is often an unwanted effect that can cause parasitic oscillations in RF circuits. The self-resonance of inductors is used in a few circuits, such as the Tesla coil . A cavity resonator is a hollow closed conductor such as a metal box or a cavity within a metal block, containing electromagnetic waves (radio waves) reflecting back and forth between the cavity's walls. When a source of radio waves at one of the cavity's resonant frequencies is applied, the oppositely-moving waves form standing waves , and the cavity stores electromagnetic energy. Since the cavity's lowest resonant frequency, the fundamental frequency, is that at which the width of the cavity is equal to a half-wavelength (λ/2), cavity resonators are only used at microwave frequencies and above, where wavelengths are short enough that the cavity is conveniently small in size. Due to the low resistance of their conductive walls, cavity resonators have very high Q factors ; that is their bandwidth , the range of frequencies around the resonant frequency at which they will resonate, is very narrow. Thus they can act as narrow bandpass filters . Cavity resonators are widely used as the frequency determining element in microwave oscillators . Their resonant frequency can be tuned by moving one of the walls of the cavity in or out, changing its size. The cavity magnetron is a vacuum tube with a filament in the center of an evacuated, lobed, circular cavity resonator. A perpendicular magnetic field is imposed by a permanent magnet. The magnetic field causes the electrons, attracted to the (relatively) positive outer part of the chamber, to spiral outward in a circular path rather than moving directly to this anode. Spaced about the rim of the chamber are cylindrical cavities. The cavities are open along their length and so they connect with the common cavity space. As electrons sweep past these openings they induce a resonant high frequency radio field in the cavity, which in turn causes the electrons to bunch into groups. A portion of this field is extracted with a short antenna that is connected to a waveguide (a metal tube usually of rectangular cross section). The waveguide directs the extracted RF energy to the load, which may be a cooking chamber in a microwave oven or a high gain antenna in the case of radar. 
The klystron is a beam tube, including at least two apertured cavity resonators, in which the beam of charged particles passes through the apertures of the resonators, often tunable wave-reflection grids, in succession. A collector electrode is provided to intercept the beam after it passes through the resonators. The first resonator causes bunching of the particles passing through it. The bunched particles travel in a field-free region where further bunching occurs, then the bunched particles enter the second resonator, giving up their energy to excite it into oscillations. It is, in effect, a particle accelerator working in conjunction with cavities specifically tuned by the configuration of the structures. The reflex klystron is a klystron utilizing only a single apertured cavity resonator through which the beam of charged particles passes, first in one direction. A repeller electrode is provided to repel (or redirect) the beam after passage through the resonator back through the resonator in the other direction and in proper phase to reinforce the oscillations set up in the resonator. On the beamline of an accelerator system, there are specific sections that are cavity resonators for radio frequency (RF) radiation. The (charged) particles that are to be accelerated pass through these cavities in such a way that the microwave electric field transfers energy to the particles, increasing their kinetic energy and thus accelerating them. Several large accelerator facilities employ superconducting niobium cavities for improved performance compared to metallic (copper) cavities. The loop-gap resonator (LGR) is made by cutting a narrow slit along the length of a conducting tube. The slit has an effective capacitance and the bore of the resonator has an effective inductance. Therefore, the LGR can be modeled as an RLC circuit and has a resonant frequency that is typically between 200 MHz and 2 GHz. In the absence of radiation losses, the effective resistance of the LGR is determined by the resistivity and electromagnetic skin depth of the conductor used to make the resonator. One key advantage of the LGR is that, at its resonant frequency, its dimensions are small compared to the free-space wavelength of the electromagnetic fields. Therefore, it is possible to use LGRs to construct a compact and high-Q resonator that operates at relatively low frequencies, where cavity resonators would be impractically large. If a piece of material with large dielectric constant is surrounded by a material with much lower dielectric constant, then this abrupt change in dielectric constant can cause confinement of an electromagnetic wave, which leads to a resonator that acts similarly to a cavity resonator. [ 1 ] Transmission lines are structures that allow broadband transmission of electromagnetic waves, e.g. at radio or microwave frequencies. An abrupt change of impedance (e.g. open or short) in a transmission line causes reflection of the transmitted signal. Two such reflectors on a transmission line evoke standing waves between them and thus act as a one-dimensional resonator, with the resonance frequencies determined by their distance and the effective dielectric constant of the transmission line. [ 1 ] A common form is the resonant stub , a length of transmission line terminated in either a short circuit or open circuit, connected in series or parallel with a main transmission line. Planar transmission-line resonators are commonly employed for coplanar , stripline , and microstrip transmission lines.
Such planar transmission-line resonators can be very compact in size and are widely used elements in microwave circuitry. In cryogenic solid-state research, superconducting transmission-line resonators contribute to solid-state spectroscopy [ 2 ] and quantum information science. [ 3 ] [ 4 ] In a laser , light is amplified in a cavity resonator that is usually composed of two or more mirrors. Thus an optical cavity , also known as a resonator, is a cavity with walls that reflect electromagnetic waves (i.e. light ). This allows standing wave modes to exist with little loss. Mechanical resonators are used in electronic circuits to generate signals of a precise frequency . For example, piezoelectric resonators , commonly made from quartz , are used as frequency references. Common designs consist of electrodes attached to a piece of quartz, in the shape of a rectangular plate for high-frequency applications, or in the shape of a tuning fork for low-frequency applications. The high dimensional stability and low temperature coefficient of quartz help keep the resonant frequency constant. In addition, the quartz's piezoelectric property converts the mechanical vibrations into an oscillating voltage , which is picked up by the attached electrodes. These crystal oscillators are used in quartz clocks and watches, to create the clock signal that runs computers, and to stabilize the output signal from radio transmitters . Mechanical resonators can also be used to induce a standing wave in other media. For example, a multiple-degree-of-freedom system can be created by imposing a base excitation on a cantilever beam. In this case the standing wave is imposed on the beam. [ 5 ] This type of system can be used as a sensor to track changes in the frequency or phase of the resonance of the fiber. One application is as a measurement device for dimensional metrology . [ 6 ] The most familiar examples of acoustic resonators are in musical instruments . Every musical instrument has resonators. Some generate the sound directly, such as the wooden bars in a xylophone , the head of a drum , the strings in stringed instruments , and the pipes in an organ . Some modify the sound by enhancing particular frequencies, such as the sound box of a guitar or violin . Organ pipes , the bodies of woodwinds , and the sound boxes of stringed instruments are examples of acoustic cavity resonators. The exhaust pipes in automobile exhaust systems are designed as acoustic resonators that work with the muffler to reduce noise, by making sound waves "cancel each other out". [ 7 ] The "exhaust note" is an important feature for some vehicle owners, so both the original manufacturers and the after-market suppliers use the resonator to enhance the sound. In " tuned exhaust " systems designed for performance, the resonance of the exhaust pipes can also be used to remove combustion products from the combustion chamber at a particular engine speed or range of speeds. [ 8 ] In many keyboard percussion instruments, below the centre of each note is a tube, which is an acoustic cavity resonator . The length of the tube varies according to the pitch of the note, with higher notes having shorter resonators. The tube is open at the top end and closed at the bottom end, creating a column of air that resonates when the note is struck. This adds depth and volume to the note. In string instruments, the body of the instrument is a resonator. The tremolo effect of a vibraphone is achieved via a mechanism that opens and shuts the resonators.
String instruments such as the bluegrass banjo may also have resonators. Many five-string banjos have removable resonators, so players can use the instrument with a resonator in bluegrass style, or without it in folk music style. The term resonator , used by itself, may also refer to the resonator guitar . The modern ten-string guitar , invented by Narciso Yepes , adds four sympathetic string resonators to the traditional classical guitar. By tuning these resonators in a very specific way (C, B♭, A♭, G♭) and making use of their strongest partials (corresponding to the octaves and fifths of the strings' fundamental tones), the bass strings of the guitar now resonate equally with any of the 12 tones of the chromatic octave. The guitar resonator is a device for driving guitar string harmonics by an electromagnetic field. This resonance effect is caused by a feedback loop and is applied to drive the fundamental tone, octave, fifth, and third into an infinite sustain .
https://en.wikipedia.org/wiki/Resonator
In chemistry , a resorcinarene (also resorcarene or calix[4]resorcinarene ) is a macrocycle , or a cyclic oligomer , based on the condensation of resorcinol (1,3-dihydroxybenzene) and an aldehyde . Resorcinarenes are a type of calixarene . Other types of resorcinarenes include the related pyrogallolarenes and octahydroxypyridines, derived from pyrogallol and 2,6-dihydroxypyridine , respectively. Resorcinarenes interact with other molecules, forming a host–guest complex . [ 1 ] Resorcinarenes and pyrogallolarenes self-assemble into larger supramolecular structures. Both in the crystalline state and in organic solvents , six resorcinarene molecules are known to form hexamers with an internal volume of around one cubic nanometer (nanocapsules) and shapes similar to the Archimedean solids . [ 2 ] Hydrogen bonds appear to hold the assembly together. A number of solvent or other molecules reside inside. [ 3 ] The resorcinarene is also the basic structural unit for other molecular recognition scaffolds, typically formed by bridging the phenolic oxygens with alkyl or aromatic spacers. [ 4 ] A number of molecular structures are based on this macrocycle, namely cavitands and carcerands . The resorcinarenes are typically prepared by condensation of resorcinol and an aldehyde in acid solution. This reaction was first reported by Adolf von Baeyer, who described the condensation of resorcinol and benzaldehyde but was unable to elucidate the nature of the product(s). The methods have since been refined. [ 5 ] [ 6 ] Recrystallization typically gives the desired isomer in quite pure form. However, for certain aldehydes, the reaction conditions lead to significant by-products . Alternative condensation conditions have been developed, including the use of Lewis acid catalysts. A green chemistry procedure uses solvent-free conditions: resorcinol, an aldehyde, and p -toluenesulfonic acid are ground together with a mortar and pestle at low temperature. [ 7 ] Resorcinarenes can be characterized by a wide upper rim and a narrow lower rim . The upper rim includes eight hydroxyl groups that can participate in hydrogen-bonding interactions. Depending on the aldehyde starting material, the lower rim includes four appending groups, usually chosen to give optimal solubility. The resorcin[n]arene nomenclature is analogous to that of the calix[n]arenes, in which 'n' represents the number of repeating units in the ring. Pyrogallolarenes are related macrocycles derived from the condensation of pyrogallol (1,2,3-trihydroxybenzene) with an aldehyde. Resorcinarenes and pyrogallolarenes self-assemble to give supramolecular assemblies . Both in the crystalline state and in solution, they are known to form hexamers that are akin to certain Archimedean solids, with an internal volume of around one cubic nanometer (nanocapsules). (Isobutylpyrogallol[4]arene) 6 is held together by 48 intermolecular hydrogen bonds; the remaining 24 of its 72 hydrogen bonds are intramolecular . The cavity is filled by solvent. [ 8 ] The resorcinarene hexamer has been described as a yoctolitre reaction vessel. [ 9 ] [ 10 ] Within the confines of the container, terpene cyclizations and iminium-catalyzed reactions have been observed. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Resorcinarene
Resorcinol (or resorcin) is a phenolic compound. It is an organic compound with the formula C 6 H 4 (OH) 2 . It is one of three isomeric benzenediols, the 1,3-isomer (or meta-isomer). Resorcinol crystallizes from benzene as colorless needles that are readily soluble in water, alcohol, and ether, but insoluble in chloroform and carbon disulfide. [ 6 ] Resorcinol is produced in several steps from benzene, starting with dialkylation with propylene to give 1,3-diisopropylbenzene. Oxidation and Hock rearrangement of this disubstituted arene gives acetone and resorcinol. [ 6 ] Resorcinol is an expensive chemical, produced in only a very few locations around the world (as of 2010 only four commercial plants were known to be operative: in the United States, Germany, China, and Japan), and as such it is the determining factor in the cost of PRF adhesives. [ 7 ] Production in the United States ended in 2017 with the closure of Indspec Chemical's plant in Petrolia, Pennsylvania. [ 8 ] Many additional routes to resorcinol exist. It was formerly produced by disulfonation of benzene followed by hydrolysis of the 1,3-disulfonate. This method has been discarded because it cogenerates so much sulfur-containing waste. Resorcinol can also be produced when any of a large number of resins (such as galbanum and asafoetida) are melted with potassium hydroxide, or by the distillation of Brazilwood extract. It may be synthesized by melting 3-iodophenol or phenol-3-sulfonic acid with potassium carbonate. Diazotization of 3-aminophenol or 1,3-diaminobenzene followed by hydrolysis provides yet another route. [ 9 ] Many ortho- and para-compounds of the aromatic series (for example, the bromophenols and benzene-para-disulfonic acid) also yield resorcinol on fusion with potassium hydroxide. Partial hydrogenation of resorcinol gives dihydroresorcinol, also known as 1,3-cyclohexanedione. [ 10 ] [ 11 ] It reduces Fehling's solution and ammoniacal silver solutions. It does not form a precipitate with lead acetate solution, as does the isomeric pyrocatechol. Iron(III) chloride colors its aqueous solution dark violet, and bromine water precipitates tribromoresorcinol. These colour reactions underlie its use as a colouring agent in certain chromatography experiments. Sodium amalgam reduces it to dihydroresorcin, which when heated to 150 to 160 °C with concentrated barium hydroxide solution gives γ-acetylbutyric acid. [ citation needed ] When fused with potassium hydroxide, resorcinol yields phloroglucin, pyrocatechol, and diresorcinol. It condenses with acids or acid chlorides, in the presence of dehydrating agents, to give oxyketones; for example, with zinc chloride and glacial acetic acid at 145 °C it yields resacetophenone (HO) 2 C 6 H 3 COCH 3 . [ 12 ] With the anhydrides of dibasic acids, it yields fluoresceins. When heated with calcium chloride and ammonia to 200 °C it yields meta-dioxydiphenylamine. [ 13 ] With sodium nitrite it forms a water-soluble blue dye, which is turned red by acids, and is used as a pH indicator under the name of lacmoid. [ 14 ] It condenses readily with aldehydes; with formaldehyde, on the addition of catalytic hydrochloric acid, it yields methylene diresorcin [(HO)C 6 H 3 (O)] 2 CH 2 . Reaction with chloral hydrate in the presence of potassium bisulfate yields the lactone of tetra-oxydiphenyl methane carboxylic acid. [ 15 ] In alcoholic solution it condenses with sodium acetoacetate to form 4-methylumbelliferone.
[ 16 ] In the presence of sulfuric acid and two equivalents of succinic acid, resorcinol gives a product that exhibits fluorescence in water. [ 17 ] In addition to electrophilic aromatic substitution, resorcinol (and other polyols) undergoes nucleophilic substitution via the enone tautomer. Nitration with concentrated nitric acid in the presence of cold concentrated sulfuric acid yields trinitroresorcin (styphnic acid), an explosive. Derivatives of resorcinol are found in different natural sources. Alkylresorcinols are found in rye. [ 18 ] Polyresorcinols are found as pseudotannins in plants. [ 17 ] Resorcinol is mainly used in the production of resins. As a mixture with phenol, it condenses with formaldehyde to afford adhesives. Such resins are used as adhesives in the rubber industry, and others are used for wood glue. [ 6 ] In relation to its conversion to resins with formaldehyde, resorcinol is the starting material for resorcinarene rings. It is present in over-the-counter topical acne treatments at 2% or less concentration, and in prescription treatments at higher concentrations. [ 19 ] Monoacetylresorcinol, C 6 H 4 (OH)(O–COCH 3 ), is used under the name of Euresol. [ 20 ] It is used in hidradenitis suppurativa, with limited evidence showing it can help with resolution of the lesions. [ 21 ] Resorcinol is one of the active ingredients in products such as Resinol, Vagisil, and Clearasil. In the 1950s and early 1960s the British Army used it, in the form of a paste applied directly to the skin. One such place where this treatment was given to soldiers with chronic acne was the Cambridge Military Hospital, Aldershot, England. It was not always successful. 4-Hexylresorcinol is an anesthetic found in throat lozenges. Resorcinol is used as a chemical intermediate for the synthesis of pharmaceuticals and other organic compounds. It is used in the production of diazo dyes and plasticizers and as a UV absorber in resins. It is an analytical reagent for the qualitative determination of ketoses (Seliwanoff's test). It is the starting material for the initiating explosive lead styphnate. [ 22 ] Resazurin, C 12 H 7 NO 4 , obtained by the action of nitrous acid on resorcinol, [ 23 ] forms small dark red crystals possessing a greenish metallic glance. When dissolved in concentrated sulfuric acid and warmed to 210 °C, the solution on pouring into water yields a precipitate of resorufin, C 12 H 7 NO 3 , an oxyphenoxazone, which is insoluble in water but is readily soluble in hot concentrated hydrochloric acid, and in solutions of caustic alkalis. The alkaline solutions are of a rose-red color and show a cinnabar-red fluorescence. A tetrabromresorufin is used as a dyestuff under the name of Fluorescent Resorcin Blue. Thioresorcinol is obtained by the action of zinc and hydrochloric acid on meta-benzenedisulfonyl chloride. It melts at 27 °C and boils at 243 °C. Resorcinol disulfonic acid, (HO) 2 C 6 H 2 (HSO 3 ) 2 , is a deliquescent mass obtained by the action of sulfuric acid on resorcin. [ 24 ] It is readily soluble in water and ethanol. Resorcinol is also a common scaffold found in a class of anticancer agents, some of which (luminespib, ganetespib, KW-2478, and onalespib) were in clinical trials as of 2014 [update]. [ 25 ] [ 26 ] Part of the resorcinol structure binds to and inhibits the N-terminal domain of heat shock protein 90, which is a drug target for anticancer treatments.
[ 25 ] Austrian chemist Heinrich Hlasiwetz (1825–1875) is remembered for his chemical analysis of resorcinol and for his part in the first preparation of resorcinol, along with Ludwig Barth , which was published in 1864. [ 27 ] : 10 [ 28 ] Benzene-1,3-diol is the name recommended by the International Union of Pure and Applied Chemistry (IUPAC) in its 1993 Recommendations for the Nomenclature of Organic Chemistry . [ 29 ] Resorcinol is so named because of its derivation from ammoniated resin gum, and for its relation to the chemical orcinol . [ 30 ] Resorcinol has low toxicity, with an LD 50 (rats, oral) > 300 mg/kg. It is less toxic than phenol . [ 6 ] Resorcinol was named a substance of very high concern under European Union REACH in 2022 because of its endocrine disrupting properties. [ 31 ] This article incorporates text from a publication now in the public domain : Chisholm, Hugh , ed. (1911). " Resorcin ". Encyclopædia Britannica . Vol. 23 (11th ed.). Cambridge University Press. pp. 183– 184.
https://en.wikipedia.org/wiki/Resorcinol
Resource refers to all the materials available in our environment which are technologically accessible, economically feasible and culturally sustainable, and which help us to satisfy our needs and wants. Resources can broadly be classified, according to their availability, as renewable or non-renewable. An item may become a resource with technology. The benefits of resource utilization may include increased wealth, proper functioning of a system, or enhanced well-being. From a human perspective, a resource is anything that satisfies human needs and wants. [ 1 ] [ 2 ] The concept of resources has been developed across many established areas of work, for example in economics, biology and ecology, computer science, management, and human resources, linked to the concepts of competition, sustainability, conservation, and stewardship. In application within human society, commercial or non-commercial factors require resource allocation through resource management. The concept of resources can also be tied to the direction of leadership over resources; this may include human resources issues, for which leaders are responsible, in managing, supporting, or directing those matters and the resulting necessary actions. Examples include professional groups, innovative leaders and technical experts in archiving expertise, academic management, association management, business management, healthcare management, military management, public administration, spiritual leadership and social networking administration. Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size asymmetric (the largest individuals exploit all the available resource). There are three fundamental differences between economic and ecological views: 1) the economic resource definition is human-centered (anthropocentric) and the biological or ecological resource definition is nature-centered (biocentric or ecocentric); 2) the economic view includes desire along with necessity, whereas the biological view is about basic biological needs; and 3) economic systems are based on markets of currency exchanged for goods and services, whereas biological systems are based on natural processes of growth, maintenance, and reproduction. [ 1 ] A computer resource is any physical or virtual component of limited availability within a computer or information management system. Computer resources include means for input, processing, output, communication, and storage. [ 3 ] Natural resources are derived from the environment. Many natural resources are essential for human survival, while others are used to satisfy human desire. Conservation is the management of natural resources with the goal of sustainability. Natural resources may be further classified in different ways, [ 1 ] for example based on origin, on the stage of development, and on renewability. Depending upon the speed and quantity of consumption, overconsumption can lead to depletion or the total and everlasting destruction of a resource. Important examples are agricultural areas, fish and other animals, forests, healthy water and soil, cultivated and natural landscapes.
Such conditionally renewable resources are sometimes classified as a third kind of resource, or as a subtype of renewable resources. Conditionally renewable resources are presently subject to excess human consumption, and the only sustainable long-term use of such resources is within the so-called zero ecological footprint, where humans use less than the Earth's ecological capacity to regenerate. Natural resources are also categorized based on distribution, and actual natural resources may be distinguished from potential ones. Based on ownership, resources can be classified as individual, community, national, and international. In economics, labor or human resources refers to the human work in the production of goods and rendering of services. Human resources can be defined in terms of skills, energy, talent, abilities, or knowledge. [ 4 ] In a project management context, human resources are those employees responsible for undertaking the activities defined in the project plan. [ 5 ] In economics, capital goods or capital are "those durable produced goods that are in turn used as productive inputs for further production" of goods and services. [ 6 ] A typical example is the machinery used in a factory. At the macroeconomic level, "the nation's capital stock includes buildings, equipment, software, and inventories during a given year." [ 7 ] Capital is among the most important economic resources. Whereas tangible resources such as equipment have an actual physical existence, intangible resources such as corporate images, brands and patents, and other intellectual properties exist in abstraction. [ 8 ] Typically resources cannot be consumed in their original form; rather, through resource development, they must be processed into more usable commodities. The demand for resources is increasing as economies develop. There are marked differences in resource distribution and associated economic inequality between regions or countries, with developed countries using more natural resources than developing countries. Sustainable development is a pattern of resource use that aims to meet human needs while preserving the environment. [ 1 ] Sustainable development means that we should exploit our resources carefully to meet our present requirements without compromising the ability of future generations to meet their own needs. The practice of the three R's (reduce, reuse, and recycle) must be followed to save and extend the availability of resources. Various problems are related to the usage of resources, and various benefits can result from their wise usage.
https://en.wikipedia.org/wiki/Resource
In biology and ecology, a resource is a substance or object in the environment required by an organism for normal growth, maintenance, and reproduction. Resources can be consumed by one organism and, as a result, become unavailable to another organism. [ 1 ] [ 2 ] [ 3 ] For plants, key resources are light, nutrients, water, and space to grow. For animals, key resources are food, water, and territory. Terrestrial plants require particular resources for photosynthesis and to complete their life cycle of germination, growth, reproduction, and dispersal. [ 4 ] [ 5 ] Animals require particular resources for metabolism and to complete their life cycle of gestation, birth, growth, and reproduction. [ 6 ] Resource availability plays a central role in ecological processes.
https://en.wikipedia.org/wiki/Resource_(biology)
Resource consumption is the consumption of non-renewable or, less often, renewable resources. Measures of resource consumption are resource intensity and resource efficiency. Industrialization and globalized markets have increased the tendency for overconsumption of resources. The resource consumption rate of a nation does not usually correspond to its primary resource availability; this mismatch is called the resource curse. Unsustainable consumption by the steadily growing human population may lead to resource depletion and a shrinking of the earth's carrying capacity. [ 1 ]
https://en.wikipedia.org/wiki/Resource_consumption
The resource fragmentation hypothesis was first proposed by Janzen & Pond (1975), and says that as host species richness becomes large, there is not a linear increase in the number of parasitoid species that can be supported. The suggested mechanism for this hyperbolic relationship is that each of the new host species is too rare to support the evolution of specialist parasitoids (Janzen & Pond, 1975). The resource fragmentation hypothesis is one of two hypotheses that seek to explain the distribution of the Ichneumonidae.
https://en.wikipedia.org/wiki/Resource_fragmentation_hypothesis
In biology, resource holding potential (RHP) is the ability of an animal to win an all-out fight if one were to take place. The term was coined by Geoff Parker to disambiguate physical fighting ability from the motivation to persevere in a fight (Parker, 1974 [ 1 ] ). Originally the term used was 'resource holding power', but 'resource holding potential' has come to be preferred. The latter emphasis on 'potential' serves as a reminder that the individual with greater RHP does not always prevail. An individual with more RHP may lose a fight if, for example, it is less motivated (has less to gain by winning) than its opponent. Mathematical models of RHP and motivation (a.k.a. resource value or V) have traditionally been based on the hawk-dove game (e.g. Hammerstein, 1981) [ 2 ] in which subjective resource value is represented by the variable 'V'. In addition to RHP and V, George Barlow (Barlow et al., 1986 [ 3 ] ) proposed that a third variable, which he termed 'daring', played a role in determining fight outcome. Daring (a.k.a. aggressiveness) represents an individual's tendency to initiate or escalate a contest independent of the effects of RHP and V. Animals behave in ways that tend to maximize their fitness (Parker 1974). [ 4 ] Animals will do what they can to improve their fitness and therefore survive long enough to produce offspring. However, when resources are not in abundance, this can be challenging; eventually, animals will begin to compete for resources. The competition for resources can be dangerous and, for some animals, deadly. Some animals have developed adaptive traits that increase their chances of success when competing for resources; resource holding potential describes one such trait (Parker 1974). Resource holding potential, also called resource holding power, describes an individual's capacity to persist in fights, work, or other contests that others may abandon. Animals often evaluate the danger they face: they have the ability to assess the RHP of an opponent in relation to their own (Gherardi 2006). [ 5 ] Generally, the animal with the higher RHP survives and wins the disputes it encounters (Lindström and Pampoulie 2005). [ 6 ] What confers the higher RHP can vary. In some cases, an animal's sheer size establishes its dominance. However, RHP can also be influenced by prior residency and knowledge of resource quality (Lindström and Pampoulie 2005). In this case, RHP is not about the direct dangers that come with standing one's ground; sometimes, an animal will weigh its RHP to determine whether its current living situation is worth protecting. That said, RHP alone does not always determine whether the individual will prevail (Hurd 2006). [ 7 ] RHP, along with other variables including the value of the resource and the aggressiveness (or daring) of the individual, helps to determine how likely it is that an individual will initiate and prevail in a fight. Male sand gobies (a ray-finned fish) must build large nests in order to attract a mate and to be able to house numerous eggs. If the male is small and not very attractive but has a large nest, he is at risk of a larger, more attractive male coming by and attempting to "steal" the nest.
On the other hand, if the male is larger in size but lives in a smaller nest, he has a lesser chance of finding a mate and less space to house his offspring. In either case, the male sand goby must weigh his RHP to determine whether it is more advantageous for him to stay or move on (Lindström and Pampoulie 2005). [ 6 ] In Aegus chelifer chelifer, a small tropical beetle species, head width serves as an index of resource holding potential: researchers found that body size, rather than mandible size, had the bigger effect on the outcome of fights between the beetles (Songvorawit et al. 2018). [ 8 ] In the sea anemone Actinia equina, morphological traits appear to determine resource holding potential. A. equina performs a "self-assessment" of its RHP when fighting nearby anemones. Body size appears to be the main determinant of RHP unless a peel occurs due to contact with another anemone and toxin is released; if a peel occurs, nematocyst length is the main factor in RHP (Rudin and Briffa 2012). [ 9 ] Resource holding power has some characteristics in common with the behavior of conditional migration; both turn on the question "What benefit do I receive from this action?" If an all-out fight has only two outcomes, death or winning the competition for resources, then individuals will be less likely to interact with one another and instigate a fight, because the outcomes are so severe. Similar reasoning applies to conditional migration behavior. Subordinate males will be less likely to migrate because of the severe outcomes that can follow from migration: if subordinates migrate with dominant males to a place where resources are limited, their likelihood of surviving is greatly reduced. What benefit could they receive, knowing that they are most likely going to lose resources? Under a conditional strategy, socially dominant individuals will be in a position to select the best option relative to their fitness. [ 7 ] [ 10 ]
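The hawk-dove framing mentioned above, with resource value V, can be made concrete with a small numerical sketch. The payoff structure is the standard hawk-dove game; the particular values of V and the injury cost C below are illustrative assumptions, not figures from the source:

# Minimal hawk-dove sketch: expected payoffs and the mixed ESS.
# V = subjective resource value, C = cost of losing an escalated fight.
# Standard payoffs: Hawk vs Hawk = (V - C) / 2, Hawk vs Dove = V,
#                   Dove vs Hawk = 0,           Dove vs Dove = V / 2.
V, C = 4.0, 10.0  # illustrative values, with C > V

def payoff(strategy, opponent):
    if strategy == "hawk":
        return (V - C) / 2 if opponent == "hawk" else V
    return 0.0 if opponent == "hawk" else V / 2

# At the mixed evolutionarily stable strategy, hawks occur at frequency p = V/C.
p = V / C
expected_hawk = p * payoff("hawk", "hawk") + (1 - p) * payoff("hawk", "dove")
expected_dove = p * payoff("dove", "hawk") + (1 - p) * payoff("dove", "dove")
print(p, expected_hawk, expected_dove)  # 0.4 1.2 1.2 -> equal payoffs at the ESS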
https://en.wikipedia.org/wiki/Resource_holding_potential
Resource intensity is a measure of the resources (e.g. water, energy, materials) needed for the production, processing and disposal of a unit of good or service, or for the completion of a process or activity; it is therefore a measure of the efficiency of resource use. It is often expressed as the quantity of resource embodied in unit cost, e.g. litres of water per $1 spent on a product. In national economic and sustainability accounting it can be calculated as units of resource expended per unit of GDP. When applied to a single person it is expressed as the resource use of that person per unit of consumption. Relatively high resource intensity indicates a high price or environmental cost of converting resource into GDP; low resource intensity indicates a lower price or environmental cost of converting resource into GDP. [ 1 ] Resource productivity and resource intensity are key concepts used in sustainability measurement, as they measure progress in decoupling resource use from environmental degradation. Their strength is that they can be used as a metric for both economic and environmental cost. Although these concepts are two sides of the same coin, in practice they involve very different approaches and can be viewed as reflecting, on the one hand, the efficiency of resource production as outcome per unit of resource use (resource productivity) and, on the other hand, the efficiency of resource consumption as resource use per unit outcome (resource intensity). The sustainability objective is to maximize resource productivity while minimizing resource intensity.
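As a concrete illustration of the definitions above, resource intensity and its reciprocal, resource productivity, can be computed directly. The water-use and GDP figures below are hypothetical placeholders, not data from the source:

# Minimal sketch: resource intensity = resource expended per unit of output,
# and resource productivity as its reciprocal (output per unit of resource).
water_used_litres = 5.0e12   # hypothetical national water use
gdp_dollars = 2.0e12         # hypothetical GDP

intensity = water_used_litres / gdp_dollars      # litres per $ of GDP
productivity = gdp_dollars / water_used_litres   # $ of GDP per litre

print(f"intensity:    {intensity:.2f} L/$")      # 2.50 L/$
print(f"productivity: {productivity:.2f} $/L")   # 0.40 $/L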
https://en.wikipedia.org/wiki/Resource_intensity
Resource productivity is the quantity of good or service (outcome) that is obtained through the expenditure of unit resource. [ 1 ] [ 2 ] [ 3 ] This can be expressed in monetary terms as the monetary yield per unit resource. For example, when applied to crop irrigation it is the yield of crop obtained through use of a given volume of irrigation water, the "crop per drop", which could also be expressed as the monetary return from product per unit of irrigation water used. Resource productivity and resource intensity are key concepts used in sustainability measurement, as they measure progress in decoupling resource use from environmental degradation. Their strength is that they can be used as a metric for both economic and environmental cost. Although these concepts are two sides of the same coin, in practice they involve very different approaches and can be viewed as reflecting, on the one hand, the efficiency of resource production as outcome per unit of resource use (resource productivity) and, on the other hand, the efficiency of resource consumption as resource use per unit outcome (resource intensity). The sustainability objective is to maximize resource productivity while minimizing resource intensity. Scientific and political debates on resource productivity are regularly held at, among other venues, the World Resources Forum conferences.
https://en.wikipedia.org/wiki/Resource_productivity
Resource selection functions (RSFs) are a class of functions used in spatial ecology to assess which habitat characteristics are important to a specific population or species of animal, by estimating the probability of that animal using a certain resource in proportion to the availability of that resource in the environment. [ 1 ] Resource selection functions require two types of data: location information for the wildlife in question, and data on the resources available across the study area. Resources can include a broad range of environmental and geographical variables, including categorical variables such as land cover type, or continuous variables such as average rainfall over a given time period. A variety of methods are used for modeling RSFs, with logistic regression being commonly used. [ 2 ] RSFs can be fit to data where animal presence is known, but absence is not, such as for species where several individuals within a study area are fitted with a GPS collar, but some individuals may be present without collars. When this is the case, buffers of various distances are generated around known presence points, with a number of available points generated within each buffer, which represent areas where the animal could have been, though it is unknown whether it actually was. [ 3 ] These models can be fit using binomial generalized linear models or binomial generalized linear mixed models, with the resources, or environmental and geographic data, as explanatory variables; a minimal sketch is given below. Resource selection functions can be modeled at a variety of spatial scales, depending on the species and the scientific question being studied. Most RSFs address one of a small number of scales, defined by Douglas Johnson in 1980 and still used today. [ 4 ]
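Here is a minimal sketch of the use-availability design described above, using logistic regression on synthetic data. The covariates, sample sizes, and the choice of scikit-learn are illustrative assumptions; the exponential form w(x) = exp(beta . x) is the conventional RSF formulation:

# Minimal use-availability RSF sketch with logistic regression.
# Everything here is synthetic: in a real analysis the "used" rows would
# come from GPS fixes and the covariates from GIS layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# One continuous covariate (e.g. distance to forest edge, km) and one
# binary covariate (e.g. land cover == grassland).
dist_used = rng.exponential(scale=1.0, size=n)   # used points hug the edge
dist_avail = rng.uniform(0.0, 5.0, size=n)       # available points anywhere
grass_used = rng.binomial(1, 0.7, size=n)
grass_avail = rng.binomial(1, 0.4, size=n)

X = np.column_stack([
    np.concatenate([dist_used, dist_avail]),
    np.concatenate([grass_used, grass_avail]),
])
y = np.concatenate([np.ones(n), np.zeros(n)])    # 1 = used, 0 = available

rsf = LogisticRegression().fit(X, y)
print("coefficients:", rsf.coef_)  # expect negative for distance, positive for grass

# Relative selection strength for each location (up to the sampling constant):
w = np.exp(X @ rsf.coef_.ravel())  # exponential RSF form, w(x) = exp(beta . x)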
https://en.wikipedia.org/wiki/Resource_selection_function
Resources of a Resource (ROR) is an XML format for describing the content of an internet resource or website in a generic fashion so that this content can be better understood by search engines, spiders, web applications, etc. The ROR format provides several pre-defined terms for describing objects like sitemaps, products, events, reviews, jobs, classifieds, etc. The format can be extended with custom terms. RORweb.com is the official website of ROR; the ROR format was created by AddMe.com as a way to help search engines better understand content and meaning. Similar concepts, like Google Sitemaps and Google Base, have also been developed since the introduction of the ROR format. ROR objects are placed in an ROR feed called ror.xml. This file is typically located in the root directory of the resource or website it describes. When a search engine like Google or Yahoo searches the web to determine how to categorize content, the ROR feed allows the search engine's "spider" to quickly identify all the content and attributes of the website.
https://en.wikipedia.org/wiki/Resources_of_a_Resource
In physiology, respiration is the transport of oxygen from the outside environment to the cells within tissues, and the removal of carbon dioxide in the opposite direction to the environment, by a respiratory system. [ 1 ] The physiological definition of respiration differs from the biochemical definition, which refers to a metabolic process by which an organism obtains energy (in the form of ATP and NADPH) [ 2 ] by oxidizing nutrients and releasing waste products. Although physiologic respiration is necessary to sustain cellular respiration and thus life in animals, the processes are distinct: cellular respiration takes place in individual cells of the organism, while physiologic respiration concerns the diffusion and transport of metabolites between the organism and the external environment. Exchange of gases in the lung occurs by ventilation and perfusion. [ 1 ] Ventilation refers to the in-and-out movement of air in the lungs, and perfusion is the circulation of blood in the pulmonary capillaries. [ 1 ] In mammals, physiological respiration involves respiratory cycles of inhaled and exhaled breaths. Inhalation (breathing in) is usually an active movement that brings air into the lungs, where the process of gas exchange takes place between the air in the alveoli and the blood in the pulmonary capillaries. Contraction of the diaphragm muscle causes a pressure variation, which is equal to the pressures caused by elastic, resistive and inertial components of the respiratory system. In contrast, exhalation (breathing out) is usually a passive process, though there are many exceptions: when generating functional overpressure (speaking, singing, humming, laughing, blowing, snorting, sneezing, coughing, powerlifting); when exhaling underwater (swimming, diving); at high levels of physiological exertion (running, climbing, throwing) where more rapid gas exchange is necessitated; or in some forms of breath-controlled meditation. Speaking and singing in humans require sustained breath control that many mammals are not capable of performing. The process of breathing does not fill the alveoli with atmospheric air during each inhalation (about 350 ml per breath); rather, the inhaled air is diluted and thoroughly mixed with a large volume of gas (about 2.5 liters in adult humans) known as the functional residual capacity, which remains in the lungs after each exhalation and whose gaseous composition differs markedly from that of the ambient air. Physiological respiration involves the mechanisms that ensure that the composition of the functional residual capacity is kept constant and equilibrates with the gases dissolved in the pulmonary capillary blood, and thus throughout the body. Thus, in precise usage, the words breathing and ventilation are hyponyms, not synonyms, of respiration; but this prescription is not consistently followed, even by most health care providers, because the term respiratory rate (RR) is a well-established term in health care, even though it would need to be consistently replaced with ventilation rate if the precise usage were followed. During respiration, C-H bonds are broken by oxidation-reduction reactions, producing carbon dioxide and water; the cellular energy-yielding process is called cellular respiration. There are several ways to classify the physiology of respiration.
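The dilution of each breath into the functional residual capacity can be made concrete with a one-line calculation, using the approximate figures quoted above (about 350 ml of fresh air per breath, mixed into a roughly 2.5 l functional residual capacity):

# How much of the alveolar gas is refreshed by a single breath?
tidal_fresh_ml = 350.0   # fresh air reaching the alveoli per breath (from text)
frc_ml = 2500.0          # functional residual capacity (from text)

turnover_fraction = tidal_fresh_ml / (frc_ml + tidal_fresh_ml)
print(f"{turnover_fraction:.1%} of alveolar gas replaced per breath")  # ~12.3%
# This small per-breath turnover is why alveolar composition stays nearly constant.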
https://en.wikipedia.org/wiki/Respiration_(physiology)
Respiratory complex I, EC 7.1.1.2 (also known as NADH:ubiquinone oxidoreductase, Type I NADH dehydrogenase and mitochondrial complex I) is the first large protein complex of the respiratory chains of many organisms from bacteria to humans. It catalyzes the transfer of electrons from NADH to coenzyme Q10 (CoQ10) and translocates protons across the inner mitochondrial membrane in eukaryotes or the plasma membrane of bacteria. This enzyme is essential for the normal functioning of cells, and mutations in its subunits lead to a wide range of inherited neuromuscular and metabolic disorders. Defects in this enzyme are responsible for the development of several pathological processes such as ischemia/reperfusion damage (stroke and cardiac infarction), Parkinson's disease and others. [ citation needed ] Complex I is the first enzyme of the mitochondrial electron transport chain. There are three energy-transducing enzymes in the electron transport chain: NADH:ubiquinone oxidoreductase (complex I), coenzyme Q – cytochrome c reductase (complex III), and cytochrome c oxidase (complex IV). [ 1 ] Complex I is the largest and most complicated enzyme of the electron transport chain. [ 2 ] The reaction catalyzed by complex I is: NADH + H + + CoQ + 4H + (matrix) → NAD + + CoQH 2 + 4H + (intermembrane space). In this process, the complex translocates four protons across the inner membrane per molecule of oxidized NADH, [ 3 ] [ 4 ] [ 5 ] helping to build the electrochemical potential difference used to produce ATP. Escherichia coli complex I (NADH dehydrogenase) is capable of proton translocation in the same direction as the established Δψ, showing that in the tested conditions, the coupling ion is H + . [ 6 ] Na + transport in the opposite direction was observed, and although Na + was not necessary for the catalytic or proton transport activities, its presence increased the latter. H + was translocated by the Paracoccus denitrificans complex I, but in this case, H + transport was not influenced by Na + , and Na + transport was not observed. Possibly, the E. coli complex I has two energy coupling sites (one Na + independent and the other Na + dependent), as observed for the Rhodothermus marinus complex I, whereas the coupling mechanism of the P. denitrificans enzyme is completely Na + independent. It is also possible that another transporter catalyzes the uptake of Na + . Complex I energy transduction by proton pumping may not be exclusive to the R. marinus enzyme. The Na + /H + antiport activity seems not to be a general property of complex I. [ 6 ] However, the existence of Na + -translocating activity of complex I is still in question. The reaction can be reversed – referred to as aerobic succinate-supported NAD + reduction by ubiquinol – in the presence of a high membrane potential, but the exact catalytic mechanism remains unknown. The driving force of this reaction is the potential across the membrane, which can be maintained either by ATP hydrolysis or by complexes III and IV during succinate oxidation. [ 7 ] Complex I may have a role in triggering apoptosis. [ 8 ] In fact, there has been shown to be a correlation between mitochondrial activities and programmed cell death (PCD) during somatic embryo development. [ 9 ] Complex I is not homologous to the Na + -translocating NADH dehydrogenase (NDH) family (TC# 3.D.1), a member of the Na + transporting Mrp superfamily. For each NADH molecule oxidized to NAD + , approximately three molecules of ATP can ultimately be produced by complex V (ATP synthase) downstream in the respiratory chain.
All redox reactions take place in the hydrophilic domain of complex I. NADH initially binds to complex I and transfers two electrons to the flavin mononucleotide (FMN) prosthetic group of the enzyme, creating FMNH 2 . The electron acceptor – the isoalloxazine ring – of FMN is identical to that of FAD. The electrons are then transferred through the FMN via a series of iron-sulfur (Fe-S) clusters, [ 10 ] and finally to coenzyme Q10 (ubiquinone). This electron flow changes the redox state of the protein, inducing conformational changes that alter the pK values of ionizable side chains and cause four hydrogen ions to be pumped out of the mitochondrial matrix. [ 11 ] Ubiquinone (CoQ) accepts two electrons to be reduced to ubiquinol (CoQH 2 ). [ 1 ] The proposed pathway for electron transport prior to ubiquinone reduction is as follows: NADH – FMN – N3 – N1b – N4 – N5 – N6a – N6b – N2 – Q, where Nx is a labelling convention for iron-sulfur clusters. [ 10 ] The high reduction potential of the N2 cluster and the relative proximity of the other clusters in the chain enable efficient electron transfer over long distance in the protein (with an overall transfer time from NADH to the N2 iron-sulfur cluster of about 100 μs). [ 12 ] [ 13 ] The equilibrium dynamics of complex I are primarily driven by the quinone redox cycle. In conditions of high proton motive force (and, accordingly, a ubiquinol-concentrated pool), the enzyme runs in the reverse direction. Ubiquinol is oxidized to ubiquinone, and the resulting released protons reduce the proton motive force. [ 14 ] The coupling of proton translocation and electron transport in complex I is currently proposed as being indirect (long-range conformational changes) as opposed to direct (redox intermediates in the hydrogen pumps, as in the heme groups of complexes III and IV). [ 10 ] The architecture of the hydrophobic region of complex I shows multiple proton transporters that are mechanically interlinked. The three central components believed to contribute to this long-range conformational change event are the pH-coupled N2 iron-sulfur cluster, the quinone reduction, and the transmembrane helix subunits of the membrane arm. Transduction of conformational changes to drive the transmembrane transporters, linked by a 'connecting rod', during the reduction of ubiquinone can account for two or three of the four protons pumped per NADH oxidized. The remaining proton must be pumped by direct coupling at the ubiquinone-binding site. It is proposed that direct and indirect coupling mechanisms together account for the pumping of the four protons. [ 15 ] The N2 cluster's proximity to a nearby cysteine residue results in a conformational change upon reduction in the nearby helices, leading to small but important changes in the overall protein conformation. [ 16 ] Further electron paramagnetic resonance studies of the electron transfer have demonstrated that most of the energy that is released during the subsequent CoQ reduction is on the final ubiquinol formation step from semiquinone, providing evidence for the "single stroke" H + translocation mechanism (i.e. all four protons move across the membrane at the same time). [ 14 ] [ 17 ] Alternative theories suggest a "two stroke mechanism" where each reduction step (semiquinone and ubiquinol) results in a stroke of two protons entering the intermembrane space.
[ 18 ] [ 19 ] The resulting ubiquinol localized to the membrane domain interacts with negatively charged residues in the membrane arm, stabilizing conformational changes. [ 10 ] An antiporter mechanism (Na + /H + swap) has been proposed using evidence of conserved Asp residues in the membrane arm. [ 20 ] The presence of Lys, Glu, and His residues enables proton gating (a protonation followed by a deprotonation event across the membrane) driven by the pK a of the residues. [ 10 ] NADH:ubiquinone oxidoreductase is the largest of the respiratory complexes. In mammals, the enzyme contains 44 separate water-soluble peripheral membrane proteins, which are anchored to the integral membrane constituents. Of particular functional importance are the flavin prosthetic group (FMN) and eight iron-sulfur clusters (FeS). Of the 44 subunits, seven are encoded by the mitochondrial genome. [ 21 ] [ 22 ] [ 23 ] The structure is an "L" shape with a long membrane domain (with around 60 trans-membrane helices) and a hydrophilic (or peripheral) domain, which includes all the known redox centres and the NADH binding site. [ 24 ] All thirteen of the E. coli proteins that comprise NADH dehydrogenase I are encoded within the nuo operon and are homologous to mitochondrial complex I subunits. The antiporter-like subunits NuoL/M/N each contain 14 conserved transmembrane (TM) helices. Two of them are discontinuous, but subunit NuoL contains a 110 Å long amphipathic α-helix, spanning the entire length of the domain. The subunit NuoL is related to Na + /H + antiporters of TC# 2.A.63.1.1 (PhaA and PhaD). Three of the conserved, membrane-bound subunits in NADH dehydrogenase are related to each other, and to Mrp sodium-proton antiporters. Structural analysis of two prokaryotic complexes I revealed that the three subunits each contain fourteen transmembrane helices that overlay in structural alignments: the translocation of three protons may be coordinated by a lateral helix connecting them. [ 25 ] Complex I contains a ubiquinone binding pocket at the interface of the 49-kDa and PSST subunits. Close to iron-sulfur cluster N2, the proposed immediate electron donor for ubiquinone, a highly conserved tyrosine constitutes a critical element of the quinone reduction site. A possible quinone exchange path leads from cluster N2 to the N-terminal beta-sheet of the 49-kDa subunit. [ 26 ] All 45 subunits of the bovine NDHI have been sequenced. [ 27 ] [ 28 ] Each complex contains noncovalently bound FMN, coenzyme Q and several iron-sulfur centers. The bacterial NDHs have 8-9 iron-sulfur centers. A recent study used electron paramagnetic resonance (EPR) spectra and double electron-electron resonance (DEER) to determine the path of electron transfer through the iron-sulfur complexes, which are located in the hydrophilic domain. Seven of these clusters form a chain from the flavin to the quinone binding sites; the eighth cluster is located on the other side of the flavin, and its function is unknown. The EPR and DEER results suggest an alternating or "roller-coaster" potential energy profile for the electron transfer between the active sites and along the iron-sulfur clusters, which can optimize the rate of electron travel and allow efficient energy conversion in complex I. [ 29 ] Inhibition of complex I is the mode of action of the METI acaricides and insecticides: fenazaquin, fenpyroximate, pyrimidifen, pyridaben, tebufenpyrad, and tolfenpyrad. [ 35 ] [ 36 ] [ 37 ] They are assigned to IRAC group 21A.
Perhaps the best-known inhibitor of complex I is rotenone, which is used as a piscicide and was previously commonly used as an organic pesticide, but is now banned in many countries. It is in IRAC group 21B. Rotenone and rotenoids are isoflavonoids occurring in several genera of tropical plants such as Antonia (Loganiaceae), Derris and Lonchocarpus (Faboideae, Fabaceae). There have been reports of the indigenous people of French Guiana using rotenone-containing plants to fish – due to its ichthyotoxic effect – as early as the 17th century. [ 38 ] Rotenone binds to the ubiquinone binding site of complex I, as does piericidin A, another potent inhibitor and a close structural homologue of ubiquinone. Acetogenins from Annonaceae are even more potent inhibitors of complex I. They cross-link to the ND2 subunit, which suggests that ND2 is essential for quinone binding. [ 39 ] Rolliniastatin-2, an acetogenin, is the first complex I inhibitor found that does not share the same binding site as rotenone. [ 40 ] Bullatacin (an acetogenin found in Asimina triloba fruit) is the most potent known inhibitor of NADH dehydrogenase (ubiquinone) (IC 50 = 1.2 nM, stronger than rotenone). [ 41 ] Despite more than 50 years of study of complex I, no inhibitors blocking the electron flow inside the enzyme have been found. Hydrophobic inhibitors like rotenone or piericidin most likely disrupt the electron transfer between the terminal FeS cluster N2 and ubiquinone. It has been shown that long-term systemic inhibition of complex I by rotenone can induce selective degeneration of dopaminergic neurons. [ 42 ] Complex I is also blocked by adenosine diphosphate ribose – a reversible competitive inhibitor of NADH oxidation – which binds to the enzyme at the nucleotide binding site. [ 43 ] Hydrophilic NADH analogs and hydrophobic ubiquinone analogs act at the beginning and the end of the internal electron-transport pathway, respectively. The antidiabetic drug metformin has been shown to induce a mild and transient inhibition of the mitochondrial respiratory chain complex I, and this inhibition appears to play a key role in its mechanism of action. [ 44 ] Inhibition of complex I has been implicated in hepatotoxicity associated with a variety of drugs, for instance flutamide and nefazodone. [ 45 ] Further, complex I inhibition was shown to trigger NAD + -independent glucose catabolism. [ 46 ] The catalytic properties of eukaryotic complex I are not simple. Two catalytically and structurally distinct forms exist in any given preparation of the enzyme: one is the fully competent, so-called "active" A-form and the other is the catalytically silent, dormant, "inactive" D-form. After exposure of the idle enzyme to elevated but physiological temperatures (>30 °C) in the absence of substrate, the enzyme converts to the D-form. This form is catalytically incompetent but can be activated by the slow reaction (k ~ 4 min −1 ) of NADH oxidation with subsequent ubiquinone reduction. After one or several turnovers the enzyme becomes active and can catalyse the physiological NADH:ubiquinone reaction at a much higher rate (k ~ 10 4 min −1 ). In the presence of divalent cations (Mg 2+ , Ca 2+ ), or at alkaline pH, the activation takes much longer. The high activation energy (270 kJ/mol) of the deactivation process indicates the occurrence of major conformational changes in the organisation of complex I.
However, until now, the only conformational difference observed between these two forms is the number of cysteine residues exposed at the surface of the enzyme. Treatment of the D-form of complex I with the sulfhydryl reagents N-ethylmaleimide or DTNB irreversibly blocks critical cysteine residues, abolishing the ability of the enzyme to respond to activation, thus inactivating it irreversibly. The A-form of complex I is insensitive to sulfhydryl reagents. [ 47 ] [ 48 ] These conformational changes may have very important physiological significance: the inactive, but not the active, form of complex I is susceptible to inhibition by nitrosothiols and peroxynitrite. [ 49 ] It is likely that transition from the active to the inactive form of complex I takes place during pathological conditions when the turnover of the enzyme is limited at physiological temperatures, such as during hypoxia or ischemia, [ 50 ] [ 51 ] or when the tissue nitric oxide:oxygen ratio increases (i.e. metabolic hypoxia). [ 52 ] Recent investigations suggest that complex I is a potent source of reactive oxygen species. [ 53 ] Complex I can produce superoxide (as well as hydrogen peroxide) through at least two different pathways. During forward electron transfer, only very small amounts of superoxide are produced (probably less than 0.1% of the overall electron flow). [ 53 ] [ 54 ] [ 55 ] During reverse electron transfer, complex I might be the most important site of superoxide production within mitochondria, with around 3-4% of electrons being diverted to superoxide formation. [ 56 ] Reverse electron transfer – the process by which electrons from the reduced ubiquinol pool (supplied by succinate dehydrogenase, glycerol-3-phosphate dehydrogenase, electron-transferring flavoprotein or dihydroorotate dehydrogenase in mammalian mitochondria) pass through complex I to reduce NAD + to NADH – is driven by the electric potential across the inner mitochondrial membrane. Although it is not precisely known under what pathological conditions reverse electron transfer would occur in vivo, in vitro experiments indicate that this process can be a very potent source of superoxide when succinate concentrations are high and oxaloacetate or malate concentrations are low. [ 57 ] This can take place during tissue ischaemia, when oxygen delivery is blocked. [ 58 ] Superoxide is a reactive oxygen species that contributes to cellular oxidative stress and is linked to neuromuscular diseases and aging. [ 59 ] NADH dehydrogenase produces superoxide by transferring one electron from FMNH 2 (or the semireduced flavin) to oxygen (O 2 ). The leftover radical flavin is unstable and transfers the remaining electron to the iron-sulfur centers. It is the ratio of NADH to NAD + that determines the rate of superoxide formation. [ 60 ] [ 61 ] Mutations in the subunits of complex I can cause mitochondrial diseases, including Leigh syndrome. Point mutations in various complex I subunits derived from mitochondrial DNA (mtDNA) can also result in Leber's hereditary optic neuropathy. There is some evidence that complex I defects may play a role in the etiology of Parkinson's disease, perhaps because of reactive oxygen species (complex I can, like complex III, leak electrons to oxygen, forming highly toxic superoxide). Although the exact etiology of Parkinson's disease is unclear, it is likely that mitochondrial dysfunction, along with proteasome inhibition and environmental toxins, may play a large role.
In fact, the inhibition of complex I has been shown to cause the production of peroxides and a decrease in proteasome activity, which may lead to Parkinson's disease. [ 62 ] Additionally, Esteves et al. (2010) found that cell lines from patients with Parkinson's disease show increased proton leakage in complex I, which causes decreased maximum respiratory capacity. [ 63 ] Brain ischemia/reperfusion injury is mediated via complex I impairment. [ 64 ] Recently it was found that oxygen deprivation leads to conditions in which mitochondrial complex I loses its natural cofactor, flavin mononucleotide (FMN), and becomes inactive. [ 65 ] [ 66 ] When oxygen is present, the enzyme catalyzes a physiological reaction of NADH oxidation by ubiquinone, supplying electrons downstream in the respiratory chain (to complexes III and IV). Ischemia leads to a dramatic increase in succinate levels. In the presence of succinate, mitochondria catalyze reverse electron transfer so that a fraction of electrons from succinate is directed upstream to the FMN of complex I. Reverse electron transfer results in a reduction of complex I FMN, [ 56 ] increased generation of ROS, followed by a loss of the reduced cofactor (FMNH 2 ) and impairment of mitochondrial energy production. The FMN loss by complex I, and the associated I/R injury, can be alleviated by administration of the FMN precursor riboflavin. [ 66 ] Recent studies have examined other roles of complex I activity in the brain. Andreazza et al. (2010) found that the level of complex I activity was significantly decreased in patients with bipolar disorder, but not in patients with depression or schizophrenia. They found that patients with bipolar disorder showed increased protein oxidation and nitration in their prefrontal cortex. These results suggest that future studies should target complex I as a potential therapeutic avenue for bipolar disorder. [ 67 ] Similarly, Moran et al. (2010) found that patients with severe complex I deficiency showed decreased oxygen consumption rates and slower growth rates. However, they found that mutations in different genes in complex I lead to different phenotypes, thereby explaining the variation in pathophysiological manifestations of complex I deficiency. [ 68 ] Exposure to pesticides can also inhibit complex I and cause disease symptoms. For example, chronic exposure to low levels of dichlorvos, an organophosphate used as a pesticide, has been shown to cause liver dysfunction. This occurs because dichlorvos alters complex I and II activity levels, which leads to decreased mitochondrial electron transfer activities and decreased ATP synthesis. [ 69 ] A proton-pumping, ubiquinone-using NADH dehydrogenase complex, homologous to complex I, is found in the chloroplast genomes of most land plants under the name ndh. This complex is inherited from the original symbiosis with cyanobacteria, but has been lost in most eukaryotic algae, some gymnosperms (Pinus and gnetophytes), and some very young lineages of angiosperms. The purpose of this complex was originally cryptic, as chloroplasts do not participate in respiration, but it is now known that ndh serves to maintain photosynthesis in stressful situations. This makes it at least partially dispensable in favorable conditions. It appears that young angiosperm lineages without ndh do not persist for long, but how gymnosperms have survived on land without ndh for so long is unknown.
[ 70 ] A number of human genes encode components of complex I. As of this edit, this article uses content from "3.D.1 The H+ or Na+-translocating NADH Dehydrogenase (NDH) Family", which is licensed in a way that permits reuse under the Creative Commons Attribution-ShareAlike 3.0 Unported License, but not under the GFDL. All relevant terms must be followed.
https://en.wikipedia.org/wiki/Respiratory_complex_I
The respiratory quotient (RQ or respiratory coefficient) is a dimensionless number used in calculations of basal metabolic rate (BMR) when estimated from carbon dioxide production. It is calculated from the ratio of carbon dioxide produced by the body to oxygen consumed by the body, when the body is in a steady state. Such measurements, like measurements of oxygen uptake, are forms of indirect calorimetry. It is measured using a respirometer. The respiratory quotient value indicates which macronutrients are being metabolized, as different energy pathways are used for fats, carbohydrates, and proteins. [ 1 ] If metabolism consists solely of lipids, the respiratory quotient is approximately 0.7; for proteins it is approximately 0.8, and for carbohydrates it is 1.0. Most of the time, however, energy consumption is composed of both fats and carbohydrates. The approximate respiratory quotient of a mixed diet is 0.8. [ 1 ] Some of the other factors that may affect the respiratory quotient are energy balance, circulating insulin, and insulin sensitivity. [ 2 ] It can be used in the alveolar gas equation. The respiratory exchange ratio (RER) is the ratio between the metabolic production of carbon dioxide (CO 2 ) and the uptake of oxygen (O 2 ). [ 3 ] [ 4 ] The ratio is determined by comparing exhaled gases to room air. The measured ratio equals the RQ only at rest or during mild to moderate aerobic exercise without the accumulation of lactate. The loss of accuracy during more intense anaerobic exercise is due, among other factors, to the bicarbonate buffer system. The body tries to compensate for the accumulation of lactate and minimize the acidification of the blood by expelling more CO 2 through the respiratory system. [ 5 ] The RER can exceed 1.0 during intense exercise. A value above 1.0 cannot be attributed to substrate metabolism, but rather to the aforementioned factors regarding bicarbonate buffering. [ 5 ] Calculation of RER is commonly done in conjunction with exercise tests such as the VO 2 max test. This can be used as an indicator that the participants are nearing exhaustion and the limits of their cardio-respiratory system. An RER greater than or equal to 1.0 is often used as a secondary endpoint criterion of a VO 2 max test. [ 5 ] The respiratory quotient (RQ) is the ratio: RQ = CO 2 eliminated / O 2 consumed, where the term "eliminated" refers to carbon dioxide (CO 2 ) removed from the body in a steady state. In this calculation, the CO 2 and O 2 must be given in the same units, and in quantities proportional to the number of molecules. Acceptable inputs would be either moles, or else volumes of gas at standard temperature and pressure. Many metabolized substances are compounds containing only the elements carbon, hydrogen, and oxygen. Examples include fatty acids, glycerol, carbohydrates, deamination products, and ethanol. For complete oxidation of such compounds, the chemical equation is C x H y O z + (x + y/4 − z/2) O 2 → x CO 2 + (y/2) H 2 O, and thus metabolism of this compound gives an RQ of x/(x + y/4 − z/2). For glucose, with the molecular formula C 6 H 12 O 6 , the complete oxidation equation is C 6 H 12 O 6 + 6 O 2 → 6 CO 2 + 6 H 2 O. Thus, the RQ = 6 CO 2 / 6 O 2 = 1.
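The general oxidation formula above translates directly into a few lines of code. A minimal sketch; the elemental formulas for glucose, palmitic acid, and stearic acid are standard, and the outputs match the RQ values quoted in this article:

# Respiratory quotient for complete oxidation of a CxHyOz compound:
#   CxHyOz + (x + y/4 - z/2) O2 -> x CO2 + (y/2) H2O
#   RQ = CO2 produced / O2 consumed = x / (x + y/4 - z/2)

def respiratory_quotient(x: int, y: int, z: int) -> float:
    o2_consumed = x + y / 4 - z / 2
    return x / o2_consumed

print(respiratory_quotient(6, 12, 6))   # glucose C6H12O6 -> 1.0
print(respiratory_quotient(16, 32, 2))  # palmitic acid C16H32O2 -> ~0.696
print(respiratory_quotient(18, 36, 2))  # stearic acid C18H36O2 -> ~0.692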
For oxidation of a fatty acid molecule, namely palmitic acid (C 16 H 32 O 2 ), the equation is C 16 H 32 O 2 + 23 O 2 → 16 CO 2 + 16 H 2 O, giving an RQ of 16/23 ≈ 0.7. [ 6 ] An RQ near 0.7 indicates that fat is the predominant fuel source, a value of 1.0 is indicative of carbohydrate being the predominant fuel source, and a value between 0.7 and 1.0 suggests a mix of both fat and carbohydrate. [ 6 ] In general a mixed diet corresponds with an RER of approximately 0.8. [ 7 ] For fats, the RQ depends on the specific fatty acids present. Amongst the commonly stored fatty acids in vertebrates, RQ varies from 0.692 (stearic acid) to as high as 0.759 (docosahexaenoic acid). Historically, it was assumed that 'average fat' had an RQ of about 0.71, and this holds true for most mammals including humans. However, a recent survey showed that aquatic animals, especially fish, have fat that should yield higher RQs on oxidation, reaching as high as 0.73 due to high amounts of docosahexaenoic acid. [ 8 ] The range of respiratory coefficients for organisms in metabolic balance usually runs from 1.0 (the value expected for pure carbohydrate oxidation) to ~0.7 (the value expected for pure fat oxidation). In general, molecules that are more oxidized (e.g., glucose) require less oxygen to be fully metabolized and, therefore, have higher respiratory quotients. Conversely, molecules that are less oxidized (e.g., fatty acids) require more oxygen for their complete metabolism and have lower respiratory quotients. See BMR for a discussion of how these numbers are derived. A mixed diet of fat and carbohydrate results in an average value between these numbers. Each RQ value corresponds to a caloric value per liter (L) of CO 2 produced. If O 2 consumption numbers are available, they are usually used directly, since they are more direct and reliable estimates of energy production. RQ as measured includes a contribution from the energy produced from protein. However, due to the complexity of the various ways in which different amino acids can be metabolized, no single RQ can be assigned to the oxidation of protein in the diet. Insulin, which increases lipid storage and decreases fat oxidation, is positively associated with increases in the respiratory quotient. [ 2 ] A positive energy balance will also lead to an increased respiratory quotient. [ 2 ] Practical applications of the respiratory quotient can be found in severe cases of chronic obstructive pulmonary disease, in which patients spend a significant amount of energy on respiratory effort. By increasing the proportion of fats in the diet, the respiratory quotient is driven down, causing a relative decrease in the amount of CO 2 produced. This reduces the respiratory burden of eliminating CO 2 , thereby reducing the amount of energy spent on respiration. [ 9 ] The respiratory quotient can be used as an indicator of over- or underfeeding. Underfeeding, which forces the body to utilize fat stores, will lower the respiratory quotient, while overfeeding, which causes lipogenesis, will increase it. [ 10 ] Underfeeding is marked by a respiratory quotient below 0.85, while a respiratory quotient greater than 1.0 indicates overfeeding. This is particularly important in patients with compromised respiratory systems, as an increased respiratory quotient significantly corresponds to increased respiratory rate and decreased tidal volume, placing compromised patients at significant risk. [ 10 ] Because of its role in metabolism, the respiratory quotient can be used in analysis of liver function and diagnosis of liver disease.
In patients with liver cirrhosis , non-protein respiratory quotient (npRQ) values act as good indicators in the prediction of overall survival rate. Patients having a npRQ < 0.85 show considerably lower survival rates as compared to patients with a npRQ > 0.85. [ 11 ] A decrease in npRQ corresponds to a decrease in glycogen storage by the liver. [ 11 ] Similar research indicates that non-alcoholic fatty liver diseases are also accompanied by a low respiratory quotient value, and the non protein respiratory quotient value was a good indication of disease severity. [ 11 ] Recently the respiratory quotient is also used from aquatic scientists to illuminate its environmental applications. Experimental studies with natural bacterioplankton using different single substrates suggested that RQ is linked to the elemental composition of the respired compounds. [ 12 ] By this way, it is demonstrated that bacterioplankton RQ is not only a practical aspect of Bacterioplankton Respiration determination, but also a major ecosystem state variable that provides unique information about aquatic ecosystem functioning. [ 12 ] Based on the stoichiometry of the different metabolized substrates, the scientists can predict that dissolved oxygen (O 2 ) and carbon dioxide (CO 2 ) in aquatic ecosystems should covary inversely due to the processing of photosynthesis and respiration . [ 13 ] Using this quotient we could shed light on the metabolic behavior and the simultaneous roles of chemical and physical forcing that shape the biogeochemistry of aquatic ecosystems. [ 13 ] Moving from a molecular and cellular level to an ecosystem level, various processes account for the exchange of O 2 and CO 2 between the biosphere and atmosphere. Field measurements of the concurrent consumption of oxygen (-ΔO 2 ) and production of carbon dioxide (ΔCO 2 ) can be used to derive an apparent respiratory quotient (ARQ). [ 14 ] This value reflects a cumulative effect of not only the aerobic respiration of all organisms (microorganisms and higher consumers) in the sample, but also all the other biogeochemical processes which consume O 2 without a corresponding CO 2 production and vice versa influencing the observed RQ. [ 19 ]
https://en.wikipedia.org/wiki/Respiratory_exchange_ratio
Respiratory inductance plethysmography (RIP) is a method of evaluating pulmonary ventilation by measuring the movement of the chest and abdominal wall. Accurate measurement of pulmonary ventilation or breathing often requires the use of devices such as masks or mouthpieces coupled to the airway opening. These devices are often both encumbering and invasive, and thus ill-suited for continuous or ambulatory measurements. As an alternative, RIP devices that sense respiratory excursions at the body surface can be used to measure pulmonary ventilation. According to a paper by Konno and Mead, [ 1 ] "the chest can be looked upon as a system of two compartments with only one degree of freedom each". Therefore, any volume change of the abdomen must be equal and opposite to that of the rib cage. The paper suggests that the volume change is close to being linearly related to changes in antero-posterior (front to back of body) diameter. When a known air volume is inhaled and measured with a spirometer , a volume-motion relationship can be established as the sum of the abdominal and rib cage displacements. Therefore, according to this theory, only changes in the antero-posterior diameter of the abdomen and the rib cage are needed to estimate changes in lung volume. Several sensor methodologies based on this theory have been developed. RIP is the most frequently used, established and accurate plethysmography method for estimating lung volume from respiratory movements. RIP has been used in many clinical and academic research studies in a variety of domains including polysomnography (sleep), psychophysiology, psychiatric research, anxiety and stress research, anesthesia, cardiology and pulmonary research (asthma, COPD, dyspnea). A respiratory inductance plethysmograph consists of two insulated sinusoidal wire coils placed within two 2.5 cm (about 1 inch) wide, lightweight elastic and adhesive bands. The transducer bands are placed around the rib cage under the armpits and around the abdomen at the level of the umbilicus (belly button). They are connected to an oscillator and subsequent frequency demodulation electronics to obtain digital waveforms. During inspiration, the cross-sectional area of the rib cage and abdomen increases, altering the self-inductance of the coils and the frequency of their oscillation, with the increase in cross-sectional area proportional to lung volumes. The electronics convert this change in frequency to a digital respiration waveform where the amplitude of the waveform is proportional to the inspired breath volume. A typical pitch of the wire sinusoid is in the range 1-2 cm and the inductance of the belt is ~2-4 microhenries per metre of belt. [ 2 ] The inductance can be measured by making it part of the tuned circuit of an oscillator and then measuring the oscillation frequency. Konno and Mead [ 3 ] extensively evaluated a two-degrees-of-freedom model of chest wall motion, whereby ventilation could be derived from measurements of rib cage and abdomen displacements. With this model, tidal volume (Vt) was calculated as the sum of the anteroposterior dimensions of the rib cage and abdomen, and could be measured to within 10% of actual Vt as long as a given posture was maintained. Changes in volume of the thoracic cavity can also be inferred from displacements of the rib cage and diaphragm . Motion of the rib cage can be directly assessed, whereas the motion of the diaphragm is indirectly assessed as the outward movement of the anterolateral abdominal wall.
However, accuracy issues arise when trying to assess respiratory volumes from a single respiration band placed at the thorax, abdomen or midline. Due to differences in posture and thoraco-abdominal respiratory synchronization, it is not possible to obtain accurate respiratory volumes with a single band. Furthermore, the shape of the acquired waveform tends to be non-linear due to the non-exact co-ordination of the two respiratory compartments. This further limits quantification of many useful respiratory indices and limits utility to respiration rates and other basic timing indices. Therefore, to accurately perform volumetric respiratory measurements, a dual-band respiratory sensor system is required. Dual-band respiratory inductance plethysmography can be used to describe various measures of complex respiratory patterns, including the following commonly analyzed waveforms and measures. Respiratory rate, the number of breaths per minute, is a non-specific measure of respiratory disorder. Tidal volume (Vt) is the volume inspired and expired with each breath; variability in the waveform can be used to differentiate between restrictive (less variable) and obstructive pulmonary diseases as well as acute anxiety. Minute ventilation is equivalent to tidal volume multiplied by respiratory rate and is used to assess metabolic activity. Peak inspiratory flow (PifVt) is a measure that reflects respiratory drive: the higher its value, the greater the respiratory drive, in the presence of coordinated or even moderately discoordinated thoraco-abdominal movements. Fractional inspiratory time, or " duty cycle " (Ti/Tt), is the ratio of inspiratory time to total breath time; low values may reflect severe airway obstruction and can also occur during speech, while higher values are observed when snoring. Work of breathing is reflected in measures such as the rapid shallow breathing index. Peak/mean inspiratory and expiratory flow measures indicate the presence of upper airway flow limitations during inspiration and expiration. %RCi is the percent contribution of the rib cage excursions to the tidal volume Vt; it is obtained by dividing the inspired volume in the RC band by the inspired volume of the algebraic sum RC + AB at the peak of inspiratory tidal volume. This value is higher in women than in men, and values are also generally higher during acute hyperventilation . Phase angle (Phi) - Normal breathing involves a combination of both thoracic and abdominal (diaphragmatic) movements. During inhalation, both the thoracic and abdominal cavities simultaneously expand in volume, and thus in girth as well. If there is a blockage in the trachea or nasopharynx, the phasing of these movements will shift in relation to the degree of the obstruction. In the case of a total obstruction, the strong chest muscles force the thorax to expand, pulling the diaphragm upward in what is referred to as "paradoxical" breathing – paradoxical in that the normal phases of thoracic and abdominal motion are reversed. The degree of this shift is commonly quantified as the phase angle [ 4 ] (a computational sketch for estimating this angle from the band signals is given below, after this list of measures). Apnea & hypopnea detection - Diagnostic components of sleep apnea/hypopnea syndrome and periodic breathing . Apnea & hypopnea classification - The phase relation between thorax and abdomen classifies apnea/hypopnea events into central, mixed, and obstructive types.
qDEEL (quantitative difference of end-expiratory lung volume) is a change in the level of end-expiratory lung volume and may be elevated in Cheyne-Stokes respiration and periodic breathing . Dual-band respiratory inductance plethysmography was validated in determining tidal volume during exercise and shown to be accurate. A version of RIP embedded in a garment called the LifeShirt was used for these validation studies. [ 5 ] [ 6 ] RIP has also been used for preclinical research in freely moving animals.
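As noted above, the phase angle between the rib cage (RC) and abdomen (AB) band signals quantifies thoraco-abdominal asynchrony. The sketch below is one minimal, hypothetical way to estimate it with the Hilbert transform; the synthetic signals and parameters are illustrative stand-ins for real band waveforms, not part of any RIP vendor's software.

```python
import numpy as np
from scipy.signal import hilbert

fs = 50.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)   # 30 s of data
f_breath = 0.25                # breathing frequency, Hz (15 breaths/min)

# Synthetic rib-cage (RC) and abdomen (AB) signals with a built-in
# 40-degree thoraco-abdominal phase shift standing in for real data.
rc = np.sin(2 * np.pi * f_breath * t)
ab = np.sin(2 * np.pi * f_breath * t - np.deg2rad(40))

# The analytic signal gives the instantaneous phase of each compartment.
phase_rc = np.unwrap(np.angle(hilbert(rc)))
phase_ab = np.unwrap(np.angle(hilbert(ab)))

# Mean phase difference in degrees: ~0 deg means synchronous breathing,
# values approaching 180 deg indicate paradoxical (obstructed) breathing.
phi = np.rad2deg(np.mean(phase_rc - phase_ab))
print(f"estimated phase angle: {phi:.1f} degrees")
```

On real recordings the band signals would first be band-pass filtered around the breathing frequency, since the Hilbert phase is only meaningful for narrow-band signals.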
https://en.wikipedia.org/wiki/Respiratory_inductance_plethysmography
The respiratory quotient ( RQ or respiratory coefficient ) is a dimensionless number used in calculations of basal metabolic rate (BMR) when estimated from carbon dioxide production. It is calculated from the ratio of carbon dioxide produced by the body to oxygen consumed by the body, when the body is in a steady state. Such measurements, like measurements of oxygen uptake, are forms of indirect calorimetry . It is measured using a respirometer . The respiratory quotient value indicates which macronutrients are being metabolized, as different energy pathways are used for fats, carbohydrates, and proteins. [ 1 ] If metabolism consists solely of lipids, the respiratory quotient is approximately 0.7, for proteins it is approximately 0.8, and for carbohydrates it is 1.0. Most of the time, however, energy consumption is composed of both fats and carbohydrates. The approximate respiratory quotient of a mixed diet is 0.8. [ 1 ] Some of the other factors that may affect the respiratory quotient are energy balance, circulating insulin, and insulin sensitivity. [ 2 ] It can be used in the alveolar gas equation . The respiratory exchange ratio ( RER ) is the ratio between the metabolic production of carbon dioxide (CO 2 ) and the uptake of oxygen (O 2 ). [ 3 ] [ 4 ] The ratio is determined by comparing exhaled gases to room air. The measured ratio equals the RQ only at rest or during mild to moderate aerobic exercise without the accumulation of lactate . The loss of accuracy during more intense anaerobic exercise is due, among other factors, to the bicarbonate buffer system . The body tries to compensate for the accumulation of lactate and minimize the acidification of the blood by expelling more CO 2 through the respiratory system . [ 5 ] The RER can exceed 1.0 during intense exercise. A value above 1.0 cannot be attributed to substrate metabolism, but rather to the aforementioned factors regarding bicarbonate buffering. [ 5 ] Calculation of RER is commonly done in conjunction with exercise tests such as the VO 2 max test . This can be used as an indicator that the participants are nearing exhaustion and the limits of their cardio-respiratory system. An RER greater than or equal to 1.0 is often used as a secondary endpoint criterion of a VO 2 max test. [ 5 ] The respiratory quotient ( RQ ) is the ratio: RQ = CO 2 eliminated / O 2 consumed, where the term "eliminated" refers to carbon dioxide (CO 2 ) removed from the body in a steady state. In this calculation, the CO 2 and O 2 must be given in the same units, and in quantities proportional to the number of molecules. Acceptable inputs would be either moles , or else volumes of gas at standard temperature and pressure. Many metabolized substances are compounds containing only the elements carbon , hydrogen , and oxygen . Examples include fatty acids , glycerol , carbohydrates , deamination products, and ethanol . For complete oxidation of such compounds, the chemical equation is C x H y O z + (x + y/4 - z/2) O 2 → x CO 2 + (y/2) H 2 O, and thus metabolism of this compound gives an RQ of x/(x + y/4 - z/2). For glucose, with the molecular formula C 6 H 12 O 6 , the complete oxidation equation is C 6 H 12 O 6 + 6 O 2 → 6 CO 2 + 6 H 2 O. Thus, the RQ = 6 CO 2 / 6 O 2 = 1.
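The general formula above is straightforward to compute. The short sketch below (Python; the function name is just illustrative) evaluates RQ = x/(x + y/4 - z/2) for any compound of carbon, hydrogen, and oxygen.

```python
def rq(x: int, y: int, z: int) -> float:
    """Respiratory quotient for complete oxidation of CxHyOz.

    From CxHyOz + (x + y/4 - z/2) O2 -> x CO2 + (y/2) H2O,
    RQ = CO2 produced / O2 consumed = x / (x + y/4 - z/2).
    """
    return x / (x + y / 4 - z / 2)

print(rq(6, 12, 6))   # glucose C6H12O6        -> 1.0
print(rq(16, 32, 2))  # palmitic acid C16H32O2 -> 16/23 ~ 0.70
print(rq(2, 6, 1))    # ethanol C2H6O          -> 2/3  ~ 0.67
```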
For oxidation of a fatty acid molecule, namely palmitic acid (C 16 H 32 O 2 ), the complete oxidation equation is C 16 H 32 O 2 + 23 O 2 → 16 CO 2 + 16 H 2 O, giving an RQ of 16/23 ≈ 0.7. [ 6 ] An RQ near 0.7 indicates that fat is the predominant fuel source, a value of 1.0 is indicative of carbohydrate being the predominant fuel source, and a value between 0.7 and 1.0 suggests a mix of both fat and carbohydrate. [ 6 ] In general, a mixed diet corresponds to an RER of approximately 0.8. [ 7 ] For fats, the RQ depends on the specific fatty acids present. Amongst the commonly stored fatty acids in vertebrates, RQ varies from 0.692 ( stearic acid ) to as high as 0.759 ( docosahexaenoic acid ). Historically, it was assumed that 'average fat' had an RQ of about 0.71, and this holds true for most mammals including humans. However, a recent survey showed that aquatic animals, especially fish, have fat that should yield higher RQs on oxidation, reaching as high as 0.73 due to high amounts of docosahexaenoic acid. [ 8 ] Respiratory coefficients for organisms in metabolic balance usually range from 1.0 (representing the value expected for pure carbohydrate oxidation) to ~0.7 (the value expected for pure fat oxidation). In general, molecules that are more oxidized (e.g., glucose) require less oxygen to be fully metabolized and, therefore, have higher respiratory quotients. Conversely, molecules that are less oxidized (e.g., fatty acids) require more oxygen for their complete metabolism and have lower respiratory quotients. See BMR for a discussion of how these numbers are derived. A mixed diet of fat and carbohydrate results in an average value between these numbers. Each RQ value corresponds to a caloric value per liter (L) of CO 2 produced. If O 2 consumption numbers are available, they are usually used directly, since they are more direct and reliable estimates of energy production. RQ as measured includes a contribution from the energy produced from protein. However, due to the complexity of the various ways in which different amino acids can be metabolized, no single RQ can be assigned to the oxidation of protein in the diet. Insulin, which increases lipid storage and decreases fat oxidation, is positively associated with increases in the respiratory quotient. [ 2 ] A positive energy balance will also lead to an increased respiratory quotient. [ 2 ] Practical applications of the respiratory quotient can be found in severe cases of chronic obstructive pulmonary disease , in which patients spend a significant amount of energy on respiratory effort. By increasing the proportion of fats in the diet, the respiratory quotient is driven down, causing a relative decrease in the amount of CO 2 produced. This reduces the respiratory burden of eliminating CO 2 , thereby reducing the amount of energy spent on respirations. [ 9 ] The respiratory quotient can be used as an indicator of overfeeding or underfeeding. Underfeeding, which forces the body to utilize fat stores, will lower the respiratory quotient, while overfeeding, which causes lipogenesis , will increase it. [ 10 ] Underfeeding is marked by a respiratory quotient below 0.85, while a respiratory quotient greater than 1.0 indicates overfeeding. This is particularly important in patients with compromised respiratory systems, as an increased respiratory quotient significantly corresponds to increased respiratory rate and decreased tidal volume , placing compromised patients at significant risk. [ 10 ] Because of its role in metabolism, the respiratory quotient can be used in analysis of liver function and diagnosis of liver disease.
In patients with liver cirrhosis , non-protein respiratory quotient (npRQ) values act as good indicators in the prediction of overall survival rate. Patients with an npRQ < 0.85 show considerably lower survival rates than patients with an npRQ > 0.85. [ 11 ] A decrease in npRQ corresponds to a decrease in glycogen storage by the liver. [ 11 ] Similar research indicates that non-alcoholic fatty liver diseases are also accompanied by a low respiratory quotient value, and that the non-protein respiratory quotient value was a good indication of disease severity. [ 11 ] More recently, the respiratory quotient has also been used by aquatic scientists to explore its environmental applications. Experimental studies with natural bacterioplankton using different single substrates suggested that RQ is linked to the elemental composition of the respired compounds. [ 12 ] In this way, it has been demonstrated that bacterioplankton RQ is not only a practical aspect of bacterioplankton respiration determination, but also a major ecosystem state variable that provides unique information about aquatic ecosystem functioning. [ 12 ] Based on the stoichiometry of the different metabolized substrates, scientists can predict that dissolved oxygen (O 2 ) and carbon dioxide (CO 2 ) in aquatic ecosystems should covary inversely due to the processes of photosynthesis and respiration . [ 13 ] This quotient can shed light on the metabolic behavior and the simultaneous roles of chemical and physical forcing that shape the biogeochemistry of aquatic ecosystems. [ 13 ] Moving from a molecular and cellular level to an ecosystem level, various processes account for the exchange of O 2 and CO 2 between the biosphere and atmosphere. Field measurements of the concurrent consumption of oxygen (-ΔO 2 ) and production of carbon dioxide (ΔCO 2 ) can be used to derive an apparent respiratory quotient (ARQ). [ 14 ] This value reflects the cumulative effect of not only the aerobic respiration of all organisms (microorganisms and higher consumers) in the sample, but also of all the other biogeochemical processes which consume O 2 without a corresponding CO 2 production, and vice versa, influencing the observed RQ. [ 19 ]
https://en.wikipedia.org/wiki/Respiratory_quotient
Respirocytes are hypothetical, microscopic, artificial red blood cells that are intended to emulate the function of their organic counterparts, so as to supplement or replace the function of much of the human body's normal respiratory system . Respirocytes were proposed by Robert A. Freitas Jr in his 1998 paper "A Mechanical Artificial Red Blood Cell: Exploratory Design in Medical Nanotechnology". [ 1 ] Respirocytes are an example of molecular nanotechnology , a field of technology still in the very earliest, purely hypothetical phase of development. Current technology is not sufficient to build a respirocyte due to considerations of power, atomic-scale manipulation, immune reaction or toxicity , computation and communication . Freitas proposed a spherical robot made up of 18 billion atoms arranged as a tiny pressure tank , which would be filled with oxygen and carbon dioxide . [ 2 ] [ 3 ] In Freitas' proposal, each respirocyte could store and transport 236 times more oxygen than a natural red blood cell, and could release it in a more controlled manner. [ 2 ] Freitas has also proposed " microbivore " robots that would attack pathogens in the manner of white blood cells . [ 4 ]
https://en.wikipedia.org/wiki/Respirocyte
Respirometry is a general term that encompasses a number of techniques for obtaining estimates of the rates of metabolism of vertebrates , invertebrates , plants , tissues, cells, or microorganisms via an indirect measure of heat production ( calorimetry ). The metabolism of an animal is estimated by determining rates of carbon dioxide production (VCO 2 ) and oxygen consumption (VO 2 ) of individual animals, either in a closed or an open-circuit respirometry system. Two measures are typically obtained: standard (SMR) or basal metabolic rate (BMR) and maximal rate ( VO2max ). SMR is measured while the animal is at rest (but not asleep) under specific laboratory conditions (temperature, hydration) and subject-specific conditions (e.g., size or allometry , [ 1 ] age, reproductive status, and a post-absorptive state to avoid the thermic effect of food ). [ 2 ] VO 2 max is typically determined during aerobic exercise at or near physiological limits. [ 3 ] In contrast, field metabolic rate (FMR) refers to the metabolic rate of an unrestrained, active animal in nature. [ 4 ] Whole-animal metabolic rates refer to these measures without correction for body mass. If SMR or BMR values are divided by the body mass value for the animal, then the rate is termed mass-specific. It is this mass-specific value that one typically hears in comparisons among species. [ 5 ] Respirometry depends on a "what goes in must come out" principle. [ 6 ] Consider a closed system first. Imagine that we place a mouse into an air-tight container. The air sealed in the container initially contains the same composition and proportions of gases that were present in the room: 20.95% O 2 , 0.04% CO 2 , water vapor (the exact amount depends on air temperature, see dew point ), approximately 78% N 2 , 0.93% argon and a variety of trace gases making up the rest (see Earth's atmosphere ). As time passes, the mouse in the chamber produces CO 2 and water vapor, but extracts O 2 from the air in proportion to its metabolic demands. Therefore, as long as we know the volume of the system, the difference between the concentrations of O 2 and CO 2 at the start, when we sealed the mouse into the chamber (the baseline or reference conditions), and the amounts present after the mouse has breathed the air at a later time must be the amounts of CO 2 /O 2 produced/consumed by the mouse. Nitrogen and argon are inert gases and therefore their fractional amounts are unchanged by the respiration of the mouse. In a closed system, the environment will eventually become hypoxic . For an open system, design constraints include washout characteristics of the animal chamber and sensitivity of the gas analyzers. [ 7 ] [ 8 ] However, the basic principle remains the same: what goes in must come out. The primary distinction between an open and closed system is that the open system flows air through the chamber (i.e., air is pushed or pulled by a pump) at a rate that constantly replenishes the O 2 depleted by the animal while removing the CO 2 and water vapor produced by the animal. The volumetric flow rate must be high enough to ensure that the animal never consumes all of the oxygen present in the chamber while at the same time, the rate must be low enough so that the animal consumes enough O 2 for detection. For a 20 g mouse , flow rates of about 200 ml/min through 500 ml containers would provide a good balance. At this flow rate, about 40 ml of O 2 is brought to the chamber each minute and the entire volume of air in the chamber is exchanged within 5 minutes.
For other smaller animals, chamber volumes can be much smaller and flow rates would be adjusted down as well. Note that for warm-blooded or endothermic animals ( birds and mammals ), chamber sizes and/or flow rates would be selected to accommodate their higher metabolic rates. Calculating rates of VO 2 and/or VCO 2 requires knowledge of the flow rates into and out of the chamber, plus the fractional concentrations of the gas mixtures into and out of the animal chamber. In general, metabolic rates are calculated from steady-state conditions (i.e., the animal's metabolic rate is assumed to be constant [ 9 ] [ 10 ] ). To know the rates of oxygen consumed, one needs to know the location of the flow meter relative to the animal chamber (if positioned before the chamber, the flow meter is "upstream"; if positioned after the chamber, the flow meter is "downstream"), and whether or not reactive gases are present (e.g., CO 2 , water , methane , see inert gas ). For an open system with an upstream flow meter, with water (e.g., removed by anhydrous calcium sulfate ) and CO 2 removed prior to the oxygen analyzer, a suitable equation is VO 2 = FR (F in O 2 − F ex O 2 ) / (1 − F ex O 2 ). For an open system with a downstream flow meter, water and CO 2 removed prior to the oxygen analyzer, a suitable equation is VO 2 = FR (F in O 2 − F ex O 2 ) / (1 − F in O 2 ), where FR is the flow rate and F in O 2 and F ex O 2 are the fractional O 2 concentrations of the air entering and leaving the chamber, respectively. For example, values for BMR of a 20 g mouse ( Mus musculus ) might be FR = 200 mL/min, and readings of fractional concentration of O 2 from an oxygen analyzer are F in O 2 = 0.2095, F ex O 2 = 0.2072. The calculated rate of oxygen consumption is 0.58 mL/min or 35 mL/hour. Assuming an enthalpy of combustion for O 2 of 20.1 joules per milliliter, we would then calculate the heat production (and therefore metabolism) for the mouse as 703.5 J/h. For an open-flow system, the list of equipment and parts is long compared to the components of a closed system, but the chief advantage of the open system is that it permits continuous recording of metabolic rate. The risk of hypoxia is also much less in an open system. Typical components include pumps for air flow, flow meters and flow controllers, tubing and chambers, and gas analyzers. Finally, a computer data acquisition and control system would be a typical addition to complete the system. Instead of a chart recorder , continuous records of oxygen consumption and/or carbon dioxide production are made with the assistance of an analog-to-digital converter coupled to a computer. Software captures, filters, converts, and displays the signal as appropriate to the experimenter's needs. A variety of companies and individuals service the respirometry community (e.g., Sable Systems , Qubit Systems, see also Warthog Systems). Inside the body, oxygen is delivered to cells, and in the cells to mitochondria , where it is consumed in the process generating most of the energy required by the organism. Mitochondrial respirometry measures the consumption of oxygen by the mitochondria without involving an entire living animal and is the main tool to study mitochondrial function. [ 13 ] Three different types of samples may be subjected to such respirometric studies: isolated mitochondria (from cell cultures, animals or plants); permeabilized cells (from cell cultures); and permeabilized fibers or tissues (from animals). In the latter two cases the cellular membrane is made permeable by the addition of chemicals, selectively leaving the mitochondrial membrane intact. Therefore, chemicals that usually would not be able to cross the cell membrane can directly influence the mitochondria.
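The worked example in the text follows directly from these equations. The sketch below (Python; function and variable names are just illustrative) reproduces the 20 g mouse calculation.

```python
def vo2_upstream(fr, fi_o2, fe_o2):
    """O2 consumption (mL/min): flow metered upstream,
    H2O and CO2 scrubbed before the O2 analyzer."""
    return fr * (fi_o2 - fe_o2) / (1 - fe_o2)

def vo2_downstream(fr, fi_o2, fe_o2):
    """O2 consumption (mL/min): flow metered downstream,
    H2O and CO2 scrubbed before the O2 analyzer."""
    return fr * (fi_o2 - fe_o2) / (1 - fi_o2)

FR, FiO2, FeO2 = 200.0, 0.2095, 0.2072       # mouse example from the text

vo2 = vo2_downstream(FR, FiO2, FeO2)         # ~0.58 mL O2/min
heat = vo2 * 60 * 20.1                       # J/h, at 20.1 J per mL O2
print(f"VO2 = {vo2:.2f} mL/min ({vo2 * 60:.0f} mL/h), heat = {heat:.0f} J/h")
```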
Once the cell membrane has been permeabilized, the cell ceases to exist as a living, defined organism, leaving only the mitochondria as still-functional structures. Unlike whole-animal respirometry, mitochondrial respirometry takes place in solution, i.e. the sample is suspended in a medium. Today mitochondrial respirometry is mainly performed with a closed-chamber approach. The sample suspended in a suitable medium is placed in a hermetically closed metabolic chamber. The mitochondria are brought into defined “states” by the sequential addition of substrates or inhibitors. Since the mitochondria consume oxygen, the oxygen concentration drops. This change of oxygen concentration is recorded by an oxygen sensor in the chamber. From the rate of the oxygen decline (taking into account a correction for oxygen diffusion), the respiratory rate of the mitochondria can be computed. [ 13 ] The functioning of mitochondria is studied in the field of bioenergetics . [ 14 ] Functional differences between mitochondria from different species are studied by respirometry as an aspect of comparative physiology . [ 15 ] [ 16 ] Mitochondrial respirometry is used to study mitochondrial functionality in mitochondrial diseases or diseases with a (suspected) strong link to mitochondria, e.g. diabetes mellitus type 2 , [ 17 ] [ 18 ] obesity [ 19 ] and cancer . [ 20 ] Other fields of application are e.g. sports science and the connection between mitochondrial function and aging . [ 21 ] The usual equipment includes a sealable metabolic chamber, an oxygen sensor, and devices for data recording, stirring, temperature control, and introducing chemicals into the chamber. As described above for whole-animal respirometry, the choice of materials is very important. [ 13 ] Plastic materials are not suitable for the chamber because of their oxygen storage capacity. When plastic materials are unavoidable (e.g. for o-rings, coatings of stirrers, or stoppers), polymers with a very low oxygen permeability (like PVDF as opposed to e.g. PTFE ) may be used. Remaining oxygen diffusion into or out of the chamber materials can be handled by correcting the measured oxygen fluxes for the instrumental oxygen background flux. The entire instrument comprising the mentioned components is often called an oxygraph. The companies providing equipment for whole-animal respirometry mentioned above are usually not involved in mitochondrial respirometry. The community is serviced at widely varying levels of price and sophistication by companies like Oroboros Instruments, Hansatech, Respirometer Systems & Applications, YSI Life Sciences or Strathkelvin Instruments .
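The rate computation described above amounts to fitting the slope of the declining chamber [O 2 ] and subtracting the instrumental background flux. Below is a minimal sketch with made-up numbers, not output from any particular oxygraph.

```python
import numpy as np

# Synthetic chamber O2 record: linear decline plus sensor noise.
t = np.arange(0, 300, 5.0)                                 # time, s
o2 = 180.0 - 0.12 * t + np.random.normal(0, 0.3, t.size)   # nmol O2 / mL

slope = np.polyfit(t, o2, 1)[0]      # observed d[O2]/dt (negative)
background = -0.005                  # instrumental O2 background flux,
                                     # measured separately without sample

# Sample respiration = total O2 decline minus instrumental background.
jo2 = -(slope - background)
print(f"respiration: {jo2:.3f} nmol O2 mL^-1 s^-1")
```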
https://en.wikipedia.org/wiki/Respirometry
Control coefficients measure the response of a biochemical pathway to changes in enzyme activity. The response coefficient, as originally defined by Kacser and Burns, [ 1 ] is a measure of how external factors such as inhibitors, pharmaceutical drugs, or boundary species affect the steady-state fluxes and species concentrations. The flux response coefficient is defined by:

$$R_x^J = \frac{dJ}{dx}\,\frac{x}{J}$$

where $J$ is the steady-state pathway flux . Similarly, the concentration response coefficient is defined by the expression:

$$R_x^s = \frac{ds}{dx}\,\frac{x}{s}$$

where in both cases $x$ is the concentration of the external factor. The response coefficient measures how sensitive a pathway is to changes in external factors other than enzyme activities. The flux response coefficient is related to control coefficients and elasticities through the following relationship:

$$R_x^J = \sum_{i=1}^{n} C_{e_i}^J \varepsilon_x^{v_i}$$

Likewise, the concentration response coefficient is related by the following expression:

$$R_x^s = \sum_{i=1}^{n} C_{e_i}^s \varepsilon_x^{v_i}$$

The summation in both cases accounts for cases where a given external factor, $x$, can act at multiple sites. For example, a given drug might act on multiple protein sites. The overall response is the sum of the individual responses. These results show that the action of an external factor, such as a drug, has two components: the local effect of the factor on the rate of each step it acts on (the elasticity $\varepsilon_x^{v_i}$), and the systemic capacity of that step to transmit the perturbation to the flux or concentration (the control coefficient $C_{e_i}^J$ or $C_{e_i}^s$). When designing drugs for therapeutic action, both aspects must therefore be considered. [ 2 ] There are various ways to prove the response theorems. The perturbation proof by Kacser and Burns [ 1 ] proceeds as follows. Given the simple linear pathway catalyzed by two enzymes $e_1$ and $e_2$:

$$X \;{\stackrel{e_1}{\longrightarrow}}\; S \;{\stackrel{e_2}{\longrightarrow}}$$

where $X$ is the fixed boundary species, let us increase the concentration of enzyme $e_1$ by an amount $\delta e_1$. This will cause the steady-state flux and concentration of $S$, and all downstream species beyond $e_2$, to increase. The concentration of $X$ is now decreased such that the flux and steady-state concentration of $S$ are restored back to their original values. These changes allow one to write down the following local and system equations for the changes that occurred:

$$\frac{\delta v_1}{v_1} = \varepsilon_x^1 \frac{\delta x}{x} + \varepsilon_{e_1}^1 \frac{\delta e_1}{e_1} = 0 \quad \text{(local equation)}$$

$$\frac{\delta J}{J} = R_x^J \frac{\delta x}{x} + C_{e_1}^J \frac{\delta e_1}{e_1} = 0 \quad \text{(system equation)}$$

There is no $s$ term in either equation because the concentration of $s$ is unchanged. Both right-hand sides of the equations are guaranteed to be zero by construction. The term $\delta e_1 / e_1$ can be eliminated by combining both equations.
If we also assume that the reaction rate for an enzyme-catalyzed reaction is proportional to the enzyme concentration, then $\varepsilon_{e_1}^1 = 1$, therefore:

$$0 = R_x^J \frac{\delta x}{x} - C_{e_1}^J \varepsilon_x^1 \frac{\delta x}{x}$$

Since $\delta x / x \neq 0$, this yields:

$$R_x^J = C_{e_1}^J \varepsilon_x^1 .$$

This proof can be generalized to the case where $X$ may act at multiple sites. The pure algebraic proof is more complex [ 3 ] [ 4 ] and requires consideration of the system equation:

$$\mathbf{N}\,\mathbf{v}(s(p), p) = 0$$

where $\mathbf{N}$ is the stoichiometry matrix and $\mathbf{v}$ the rate vector. In this derivation, we assume there are no conserved moieties in the network, but this does not invalidate the proof. Using the chain rule and differentiating with respect to $p$ yields, after rearrangement:

$$\frac{ds}{dp} = \left[-\mathbf{N}\frac{\partial v}{\partial s}\right]^{-1} \mathbf{N}\frac{\partial v}{\partial p}$$

The inverted term is the unscaled control coefficient, so that after scaling it is possible to write:

$$R_p^s = C_v^s \varepsilon_p^v$$

To derive the flux response coefficient theorem, we must use the additional equation:

$$\mathbf{v} = \mathbf{v}(\mathbf{s}(p), p)$$
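The flux response theorem just proved can be checked numerically. The sketch below builds a two-step mass-action pathway (made-up kinetics, not the original Kacser-Burns system), computes the response and control coefficients by finite differences, and verifies that R_x^J equals C_e1^J times the elasticity.

```python
k1, k2, k3 = 2.0, 1.0, 3.0   # mass-action rate constants (illustrative)

def steady_flux(e1, e2, x):
    # Steady state of X -e1-> S -e2-> : e1*(k1*x - k2*s) = e2*k3*s
    s = e1 * k1 * x / (e1 * k2 + e2 * k3)
    return e2 * k3 * s           # J = v2 at steady state

def scaled_deriv(f, p, h=1e-6):
    # d(ln f)/d(ln p) by central difference
    return (f(p * (1 + h)) - f(p * (1 - h))) / (2 * h * f(p))

e1, e2, x = 1.0, 1.0, 1.0

R = scaled_deriv(lambda p: steady_flux(e1, e2, p), x)    # flux response coeff.
C = scaled_deriv(lambda p: steady_flux(p, e2, x), e1)    # flux control coeff.

s0 = e1 * k1 * x / (e1 * k2 + e2 * k3)                   # steady-state s
eps = scaled_deriv(lambda p: e1 * (k1 * p - k2 * s0), x) # elasticity of v1 wrt x

print(R, C * eps)   # both ~1.0 for this system: R = C * eps
```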
https://en.wikipedia.org/wiki/Response_coefficient_(biochemistry)
Response factor , usually in chromatography and spectroscopy , is the ratio between a signal produced by an analyte, and the quantity of analyte which produces the signal. Ideally, and for easy computation, this ratio is unity (one). In real-world scenarios, this is often not the case. The response factor $f_i$ can be expressed on a molar , volume or mass [ 1 ] basis. Where the true amounts of sample and standard are equal:

$$f_i = \frac{A_i}{A_{st}}$$

where A is the signal (e.g. peak area), the subscript i indicates the sample and the subscript st indicates the standard . [ 2 ] The response factor of the standard is assigned an arbitrary factor, for example 1 or 100. The relative response factor (RRF) is then: RRF = (response factor of sample) / (response factor of standard). One of the main reasons to use response factors is to compensate for the irreproducibility of manual injections into a gas chromatograph (GC). Injection volumes for GCs can be 1 microliter (μL) or less and are difficult to reproduce. Differences in the volume of injected analyte lead to differences in the areas of the peaks in the chromatogram, and any quantitative results are suspect. To compensate for this error, a known amount of an internal standard (a second compound that does not interfere with the analysis of the primary analyte) is added to all solutions (standards and unknowns). This way, if the injection volumes (and hence the peak areas) differ slightly, the ratio of the areas of the analyte and the internal standard will remain constant from one run to the next. This comparison of runs also applies to solutions with different concentrations of the analyte. The area of the internal standard becomes the value to which all other areas are referenced. Below is the mathematical derivation and application of this method. Consider an analysis of octane (C 8 H 18 ) using nonane (C 9 H 20 ) as the internal standard. The 3 chromatograms below are for 3 different samples. The amount of octane in each sample is different, but the amount of nonane is the same (in practice this is not a requirement). Due to scaling, the nonane peaks appear to have different areas, but in reality the areas are identical. Therefore, the relative amount of octane in each sample increases in the order of mixture 1 (least) < mixture 3 < mixture 2 (most). This conclusion is reached because the ratio of the area of octane to that of nonane is the least in mixture 1 and the most in mixture 2. Mixture 3 has an intermediate ratio. This ratio can be written as $A_{octane}/A_{nonane}$. In chromatography, the area of a peak is proportional to the number of moles (n) times some constant of proportionality (k), Area = k×n . The number of moles of compound is equal to the concentration (molarity, M ) times the volume, n = MV . From these equations, the following derivation is made:

$$\frac{A_{oct}}{A_{non}} = \frac{k_{oct}\,n_{oct}}{k_{non}\,n_{non}} = \frac{k_{oct}\,M_{oct}\,V}{k_{non}\,M_{non}\,V}$$

Since both compounds are in the same solution and are injected together, the volume terms are equal and cancel out. The above equation is then rearranged to solve for the ratio of the k's. This ratio is then called the response factor, F:

$$F = \frac{k_{oct}}{k_{non}} = \frac{A_{oct}/M_{oct}}{A_{non}/M_{non}}$$

The response factor, F, is equal to the ratio of the k's, which are constant. Therefore, F is constant. What this means is that regardless of the amounts of octane and nonane in solution, the ratio of the ratios of area to concentration will always yield a constant. In practice, a solution containing known amounts of both octane and nonane is injected into a GC and a response factor, F, is calculated.
Then a separate solution with an unknown amount of octane and a known amount of nonane is injected. The response factor is applied to the data from the second solution and the unknown concentration of the octane is found. This example deals with the analysis of octane and nonane, but can be applied to any two compounds.
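In code, the calibration-then-quantitation workflow described above is a few lines. The sketch below uses made-up peak areas and concentrations purely for illustration.

```python
def response_factor(area_i, conc_i, area_st, conc_st):
    """F = (A_i / M_i) / (A_st / M_st), from Area = k*n with equal volumes."""
    return (area_i / conc_i) / (area_st / conc_st)

# Calibration run: known octane (analyte) and nonane (internal standard).
F = response_factor(area_i=5200.0, conc_i=0.010,     # octane, mol/L
                    area_st=4800.0, conc_st=0.012)   # nonane, mol/L

# Unknown run: known nonane concentration, unknown octane.
area_oct, area_non, conc_non = 3900.0, 4500.0, 0.012
conc_oct = (area_oct / area_non) * conc_non / F

print(f"F = {F:.3f}, unknown octane concentration = {conc_oct:.4f} M")
```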
https://en.wikipedia.org/wiki/Response_factor
Response modeling methodology (RMM) is a general platform for statistical modeling of a linear/nonlinear relationship between a response variable ( dependent variable ) and a linear predictor (a linear combination of predictors/effects/factors/ independent variables ), often denoted the linear predictor function . It is generally assumed that the modeled relationship is monotone convex (delivering a monotone convex function ) or monotone concave (delivering a monotone concave function ). However, many non-monotone functions, like the quadratic equation , are special cases of the general model. RMM was initially developed as a series of extensions to the original inverse Box–Cox transformation:

$$y = (1 + \lambda z)^{1/\lambda},$$

where y is a percentile of the modeled response, Y (the modeled random variable ), z is the respective percentile of a normal variate and λ is the Box–Cox parameter. As λ goes to zero, the inverse Box–Cox transformation becomes:

$$y = e^{z},$$

an exponential model. Therefore, the original inverse Box–Cox transformation contains a trio of models: linear ( λ = 1), power ( λ ≠ 1, λ ≠ 0) and exponential ( λ = 0). This implies that on estimating λ, using sample data, the final model is not determined in advance (prior to estimation) but rather as a result of estimating. In other words, the data alone determine the final model. Extensions to the inverse Box–Cox transformation were developed by Shore (2001a [ 1 ] ) and were denoted Inverse Normalizing Transformations (INTs). They had been applied to model monotone convex relationships in various engineering areas, mostly to model physical properties of chemical compounds (Shore et al. , 2001a, [ 1 ] and references therein). Once it had been realized that INT models may be perceived as special cases of a much broader general approach for modeling non-linear monotone convex relationships, the new Response Modeling Methodology had been initiated and developed (Shore, 2005a, [ 2 ] 2011 [ 3 ] and references therein). The RMM model expresses the relationship between a response, Y (the modeled random variable), and two components that deliver variation to Y: the linear predictor, LP (denoted η ), and normally distributed errors. The basic RMM model describes Y in terms of the LP, two possibly correlated zero-mean normal errors, ε 1 and ε 2 (with correlation ρ and standard deviations σ ε1 and σ ε2 , respectively) and a vector of parameters { α , λ , μ } (Shore, 2005a, [ 2 ] 2011 [ 3 ] ):

$$W = \log(Y) = \mu + \frac{\alpha}{\lambda}\left[(\eta + \varepsilon_1)^{\lambda} - 1\right] + \varepsilon_2,$$

where ε 1 represents uncertainty (measurement imprecision or otherwise) in the explanatory variables (included in the LP). This is in addition to uncertainty associated with the response ( ε 2 ). Expressing ε 1 and ε 2 in terms of standard normal variates, Z 1 and Z 2 , respectively, having correlation ρ , and conditioning Z 2 | Z 1 = z 1 ( Z 2 given that Z 1 is equal to a given value z 1 ), we may write in terms of a single error, ε :

$$W = \log(Y) = \mu + \frac{\alpha}{\lambda}\left[(\eta + c Z_1)^{\lambda} - 1\right] + d Z_1 + \varepsilon, \qquad \varepsilon = \sigma Z,$$

where Z is a standard normal variate, independent of both Z 1 and Z 2 , ε is a zero-mean error and d is a parameter.
From these relationships, the associated RMM quantile function is (Shore, 2011 [ 3 ] ):

$$w = \log(y) = \mu + \frac{\alpha}{\lambda}\left[(\eta + cz)^{\lambda} - 1\right] + dz + \varepsilon,$$

or an equivalent re-parameterized form, where y is the percentile of the response ( Y ), z is the respective standard normal percentile, ε is the model's zero-mean normal error with constant variance σ , { a , b , c , d } are parameters, M Y is the response median ( z = 0), dependent on the values of the parameters and the value of the LP, η , and μ (or m ) is an additional parameter. If it may be assumed that cz ≪ η , the above model for the RMM quantile function can be approximated by replacing η + cz with η . The parameter "c" cannot be "absorbed" into the parameters of the LP ( η ) since "c" and the LP are estimated in two separate stages (as expounded below). If the response data used to estimate the model contain values that change sign, or if the lowest response value is far from zero (for example, when data are left-truncated), a location parameter, L , may be added to the response, so that y is replaced by y − L in the expressions for the quantile function and for the median. As shown earlier, the inverse Box–Cox transformation depends on a single parameter, λ , which determines the final form of the model (whether linear, power or exponential). All three models thus constitute mere points on a continuous spectrum of monotonic convexity, spanned by λ . This property, where different known models become mere points on a continuous spectrum, spanned by the model's parameters, is denoted the Continuous Monotonic Convexity (CMC) property. The latter characterizes all RMM models, and it allows the basic "linear-power-exponential" cycle (underlying the inverse Box–Cox transformation) to be repeated ad infinitum, allowing for ever more convex models to be derived. Examples of such models are an exponential-power model or an exponential-exponential-power model (explicit models are expounded further on). Since the final form of the model is determined by the values of the RMM parameters, this implies that the data, used to estimate the parameters, determine the final form of the estimated RMM model (as with the Box–Cox inverse transformation). The CMC property thus grants RMM models high flexibility in accommodating the data used to estimate the parameters. References given below display published results of comparisons between RMM models and existing models. These comparisons demonstrate the effectiveness of the CMC property. Ignoring RMM errors (the terms cz , dz , and ε in the percentile model), we obtain a series of RMM models of increasing monotone convexity, from linear through power and exponential models onward. Adding two new parameters by substituting for η (in the percentile model) the expression

$$\exp\left[\frac{\beta}{\kappa}\left(\eta^{\kappa} - 1\right)\right],$$

a new cycle of "linear-power-exponential" is iterated to produce models with stronger monotone convexity (Shore, 2005a, [ 2 ] 2011, [ 3 ] 2012 [ 4 ] ). This series of monotonic convex models, presented as they appear in a hierarchical order on the "Ladder of Monotonic Convex Functions" (Shore, 2011 [ 3 ] ), is unlimited from above. However, all models are mere points on a continuous spectrum, spanned by the RMM parameters. Also note that numerous growth models, like the Gompertz function , are exact special cases of the RMM model.
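The "data pick the model" idea behind the CMC property can be illustrated with the original inverse Box–Cox trio. The sketch below fits the simplified quantile model w(z) = a + b·log(1 + λz)/λ (with w = log y) to sample quantiles; it is a stripped-down stand-in for the full RMM quantile function, not Shore's actual estimation procedure, and all names and data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.4, size=2000)   # synthetic response data

p = np.linspace(0.05, 0.95, 19)
z = norm.ppf(p)                                     # standard normal percentiles
w_emp = np.log(np.quantile(y, p))                   # empirical log-quantiles

def model(z, a, b, lam):
    # log of the inverse Box-Cox form: w = a + b * log(1 + lam*z) / lam;
    # as lam -> 0 this tends to a + b*z (the exponential/lognormal case).
    return a + b * np.log1p(lam * z) / lam

bounds = ([-10.0, 0.0, 1e-6], [10.0, 10.0, 0.55])   # keeps 1 + lam*z > 0 here
(a, b, lam), _ = curve_fit(model, z, w_emp, p0=[1.0, 0.4, 0.3], bounds=bounds)

# For lognormal data the fitted lambda should approach its lower bound,
# i.e. the data themselves select the exponential member of the trio.
print(f"a = {a:.3f}, b = {b:.3f}, lambda = {lam:.4f}")
```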
The k -th non-central moment of Y (assuming L = 0; Shore, 2005a, [ 2 ] 2011 [ 3 ] ) is obtained by expanding Y k into a Taylor series around zero, in terms of powers of Z (the standard normal variate), and then taking expectation on both sides; assuming that cZ ≪ η so that η + cZ ≈ η , an approximate simple expression for the k -th non-central moment, based on the first six terms in the expansion, can be derived. An analogous expression may be derived without assuming cZ ≪ η . This would result in a more accurate (however lengthy and cumbersome) expression. Once cZ in the above expression is neglected, Y becomes a log-normal random variable (with parameters that depend on η ). RMM models may be used to model random variation (as a general platform for distribution fitting) or to model systematic variation (analogously to generalized linear models , GLM). In the former case (no systematic variation, namely, η = constant), the RMM quantile function is fitted to known distributions. If the underlying distribution is unknown, the RMM quantile function is estimated using available sample data. Modeling random variation with RMM is addressed and demonstrated in Shore (2011 [ 3 ] and references therein). In the latter case (modeling systematic variation), RMM models are estimated assuming that variation in the linear predictor (generated via variation in the regressor-variables) contributes to the overall variation of the modeled response variable ( Y ). This case is addressed and demonstrated in Shore (2005a, [ 2 ] 2012 [ 4 ] and relevant references therein). Estimation is conducted in two stages. First, the median is estimated by minimizing the sum of absolute deviations (of the fitted model from sample data points). In the second stage, the remaining two parameters (not estimated in the first stage, namely { c , d }) are estimated. Three estimation approaches are presented in Shore (2012 [ 4 ] ): maximum likelihood , moment matching and nonlinear quantile regression . As of 2021, RMM literature addresses three areas: (1) developing INTs and later the RMM approach, with allied estimation methods; (2) exploring the properties of RMM and comparing RMM effectiveness to other current modeling approaches (for distribution fitting or for modeling systematic variation); (3) applications. Shore (2003a [ 5 ] ) developed Inverse Normalizing Transformations (INTs) in the first years of the 21st century and has applied them to various engineering disciplines like statistical process control (Shore, 2000a, [ 1 ] b, [ 6 ] 2001a, [ 7 ] b, [ 8 ] 2002a [ 9 ] ) and chemical engineering (Shore et al. , 2002 [ 10 ] ). Subsequently, as the new Response Modeling Methodology (RMM) had been emerging and developing into a full-fledged platform for modeling monotone convex relationships (ultimately presented in a book, Shore, 2005a [ 2 ] ), RMM properties were explored (Shore, 2002b, [ 11 ] 2004a, [ 12 ] b, [ 13 ] 2008a, [ 14 ] 2011 [ 3 ] ), estimation procedures developed (Shore, 2005a, [ 2 ] b, [ 15 ] 2012 [ 4 ] ) and the new modeling methodology compared to other approaches, for modeling random variation (Shore 2005c, [ 16 ] 2007, [ 17 ] 2010; [ 18 ] Shore and A'wad 2010 [ 19 ] ), and for modeling systematic variation (Shore, 2008b [ 20 ] ). Concurrently, RMM had been applied to various scientific and engineering disciplines and compared to current models and modeling approaches practiced therein. For example, chemical engineering (Shore, 2003b; [ 21 ] Benson-Karhi et al. , 2007; [ 22 ] Shacham et al.
, 2008; [ 23 ] Shore and Benson-Karhi, 2010 [ 24 ] ), statistical process control (Shore, 2014; [ 25 ] Shore et al. , 2014; [ 26 ] Danoch and Shore, 2016 [ 27 ] ), reliability engineering (Shore, 2004c; [ 28 ] Ladany and Shore, 2007 [ 29 ] ), forecasting (Shore and Benson-Karhi, 2007 [ 30 ] ), ecology (Shore, 2014 [ 25 ] ), and the medical profession (Shore et al., 2014; [ 26 ] Benson-Karhi et al. , 2017 [ 31 ] ).
https://en.wikipedia.org/wiki/Response_modeling_methodology
The theory of response reactions (RERs) was elaborated for systems in which several physico-chemical processes run simultaneously in mutual interaction, with local thermodynamic equilibrium , and in which state variables called extents of reaction are allowed, but thermodynamic equilibrium proper is not required. [ 1 ] It is based on a detailed analysis of the Hessian determinant , using either the Gibbs or the De Donder method of analysis. The theory derives the sensitivity coefficient as the sum of the contributions of individual RERs. Thus phenomena which are in contradiction to over-general statements of the Le Chatelier principle can be interpreted. With the help of RERs, the equilibrium coupling was defined. [ 2 ] RERs can be derived based either on the species [ 3 ] or on the stoichiometrically independent reactions of a parallel system. The set of RERs is unambiguous in a given system, and their number (M) is $M = \binom{S}{C+1}$, where S denotes the number of species and C refers to the number of components. In the case of three-component systems , RERs can be visualized on a triangle diagram. [ 4 ]
https://en.wikipedia.org/wiki/Response_reactions
In molecular biology , a response regulator is a protein that mediates a cell 's response to changes in its environment as part of a two-component regulatory system . Response regulators are coupled to specific histidine kinases which serve as sensors of environmental changes. Response regulators and histidine kinases are two of the most common gene families in bacteria , where two-component signaling systems are very common; they also appear much more rarely in the genomes of some archaea , yeasts , filamentous fungi , and plants . Two-component systems are not found in metazoans . [ 1 ] [ 2 ] [ 3 ] [ 4 ] Response regulator proteins typically consist of a receiver domain and one or more effector domains, although in some cases they possess only a receiver domain and exert their effects through protein-protein interactions . In two-component signaling, a histidine kinase responds to environmental changes by autophosphorylation on a histidine residue, following which the response regulator receiver domain catalyzes transfer of the phosphate group to its own recipient aspartate residue. This induces a conformational change that alters the function of the effector domains, usually resulting in increased transcription of target genes. The mechanisms by which this occurs are diverse and include allosteric activation of the effector domain or oligomerization of phosphorylated response regulators. [ 2 ] In a common variation on this theme, called a phosphorelay, a hybrid histidine kinase possesses its own receiver domain, and a histidine phosphotransfer protein performs the final transfer to a response regulator. [ 4 ] In many cases, histidine kinases are bifunctional and also serve as phosphatases , catalyzing the removal of phosphate from response regulator aspartate residues, such that the signal transduced by the response regulator reflects the balance between kinase and phosphatase activity. [ 4 ] Many response regulators are also capable of autodephosphorylation, which occurs on a wide range of time scales. [ 2 ] In addition, phosphoaspartate is relatively chemically unstable and may be hydrolyzed non-enzymatically. [ 1 ] Histidine kinases are highly specific for their cognate response regulators; there is very little cross-talk between different two-component signaling systems in the same cell. [ 6 ] Response regulators can be divided into at least three broad classes, based on the features of effector domains: regulators with a DNA-binding effector domain, regulators with an enzymatic effector domain, and single-domain response regulators. [ 3 ] More comprehensive classifications based on more detailed analysis of domain architecture are possible. Beyond these broad categorizations, there are response regulators with other types of effector domains, including RNA-binding effector domains. Regulators with a DNA-binding effector domain are the most common response regulators, and have direct impacts on transcription . [ 7 ] They tend to interact with their cognate regulators at an N-terminus receiver domain, and contain the DNA-binding effector towards the C-terminus. Once phosphorylated at the receiver domain, the response regulator dimerizes, gains enhanced DNA binding capacity and acts as a transcription factor . [ 8 ] The architecture of DNA binding domains are characterized as being variations on helix-turn-helix motifs. One variation, found on the response regulator OmpR of the EnvZ/OmpR two-component system and other OmpR-like response regulators, is a "winged helix" architecture. 
[ 9 ] OmpR-like response regulators are the largest group of response regulators and the winged helix motif is widespread. Other subtypes of DNA-binding response regulators include FixJ-like and NtrC-like regulators. [ 10 ] DNA-binding response regulators are involved in various uptake processes, including nitrate / nitrite (NarL, found in most prokaryotes). [ 11 ] The second class of multidomain response regulators are those with enzymatic effector domains. [ 12 ] These response regulators can participate in signal transduction, and generate secondary messenger molecules. Examples include the chemotaxis regulator CheB, with a methylesterase domain that is inhibited when the response regulator is in the inactive unphosphorylated conformation. Other enzymatic response regulators include c-di-GMP phosphodiesterases (e.g. VieA in V. cholerae ), protein phosphatases and histidine kinases. [ 12 ] A relatively small number of response regulators, single-domain response regulators, only contain a receiver domain, relying on protein-protein interactions to exert their downstream biological effects. [ 13 ] The receiver domain undergoes a conformational change as it interacts with an autophosphorylated histidine kinase, and consequently, the response regulator can initiate further reactions along a signaling cascade. Prominent examples include the chemotaxis regulator CheY, which interacts with flagellar motor proteins directly in its phosphorylated state. [ 13 ] Sequencing has so far shown that the distinct classes of response regulators are unevenly distributed throughout various taxa, [ 14 ] including across domains. While response regulators with DNA-binding domains are the most common in bacteria, single-domain response regulators are more common in archaea, with other major classes of response regulators seemingly absent from archaeal genomes. The number of two-component systems present in a bacterial genome is highly correlated with genome size as well as ecological niche ; bacteria that occupy niches with frequent environmental fluctuations possess more histidine kinases and response regulators. [ 4 ] [ 7 ] New two-component systems may arise by gene duplication or by lateral gene transfer , and the relative rates of each process vary dramatically across bacterial species. [ 15 ] In most cases, response regulator genes are located in the same operon as their cognate histidine kinase; [ 4 ] lateral gene transfers are more likely to preserve operon structure than gene duplications. [ 15 ] The small number of two-component systems present in eukaryotes most likely arose by lateral gene transfer from endosymbiotic organelles; in particular, those present in plants likely derive from chloroplasts . [ 4 ]
https://en.wikipedia.org/wiki/Response_regulator
A response spectrum is a plot of the peak or steady-state response (displacement, velocity or acceleration) of a series of oscillators of varying natural frequency that are forced into motion by the same base vibration or shock. The resulting plot can then be used to pick off the response of any linear system, given its natural frequency of oscillation. One such use is in assessing the peak response of buildings to earthquakes. The science of strong ground motion may use some values from the ground response spectrum (calculated from recordings of surface ground motion from seismographs) for correlation with seismic damage. If the input used in calculating a response spectrum is steady-state periodic, then the steady-state result is recorded. In that case damping must be present, or else the response will be infinite. For transient input (such as seismic ground motion), the peak response is reported. Some level of damping is generally assumed, but a value will be obtained even with no damping. Response spectra can also be used in assessing the response of linear systems with multiple modes of oscillation (multi-degree-of-freedom systems), although they are only accurate for low levels of damping. Modal analysis is performed to identify the modes, and the peak response in each mode is picked from the response spectrum. These peak responses are then combined to estimate a total response. A typical combination method is the square root of the sum of the squares (SRSS), applicable if the modal frequencies are not close. The result is typically different from that which would be calculated directly from an input, since phase information is lost in the process of generating the response spectrum. The main limitation of response spectra is that they are only universally applicable for linear systems. Response spectra can be generated for non-linear systems, but are only applicable to systems with the same non-linearity, although attempts have been made to develop non-linear seismic design spectra with wider structural application. The results of this cannot be directly combined for multi-mode response. Response spectra are very useful tools of earthquake engineering for analyzing the performance of structures and equipment in earthquakes, since many structures behave principally as simple oscillators (also known as single-degree-of-freedom systems). Thus, if the natural frequency of a structure is known, its peak response in an earthquake can be estimated by reading the value from the ground response spectrum at the appropriate frequency. In most building codes in seismic regions, this value forms the basis for calculating the forces that a structure must be designed to resist ( seismic analysis ). As mentioned earlier, the ground response spectrum is the response spectrum calculated at the free surface of the earth. Significant seismic damage may occur if the building response is 'in tune' with components of the ground motion ( resonance ), which may be identified from the response spectrum. This was observed in the 1985 Mexico City earthquake, [ 1 ] where the oscillation of the deep-soil lake bed was similar to the natural frequency of mid-rise concrete buildings, causing significant damage. Shorter (stiffer) and taller (more flexible) buildings suffered less damage. In 1941 at Caltech, George W. Housner began to publish calculations of response spectra from accelerographs. [ 1 ]
In the 1982 EERI monograph "Earthquake Design and Spectra", [ 2 ] Newmark and Hall describe how they developed an "idealized" seismic response spectrum based on a range of response spectra generated for available earthquake records. This was then further developed into a design response spectrum for use in structural design, and this basic form (with some modifications) is now the basis for structural design in seismic regions throughout the world (typically plotted against structural "period", the inverse of frequency). A nominal level of damping is assumed (5% of critical damping). For "regular" low-rise buildings, the structural response to earthquakes is characterized by the fundamental mode (a "waving" back-and-forth), and most building codes permit design forces to be calculated from the design spectrum on the basis of that frequency, but for more complex structures, combination of the results for many modes (calculated through modal analysis ) is often required. In extreme cases, where structures are either too irregular, too tall or of significance to a community in disaster response, the response spectrum approach is no longer appropriate, and more complex analysis is required, such as the non-linear static or dynamic analyses used in seismic performance assessment.
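As a concrete illustration of how a response spectrum is built, the following minimal Python sketch steps a series of damped single-degree-of-freedom oscillators through the same ground-acceleration record using the average-acceleration Newmark-beta method and keeps each oscillator's peak displacement; the record, time step, and period grid are assumed inputs invented for the demonstration, not taken from any particular earthquake. An SRSS combination of modal peaks, as described above, is included for completeness.

```python
import numpy as np

def sdof_peak_displacement(ag, dt, period, zeta=0.05):
    """Peak displacement of a damped SDOF oscillator under base
    acceleration ag, via Newmark-beta (average acceleration)."""
    m = 1.0
    wn = 2.0 * np.pi / period
    k, c = m * wn**2, 2.0 * zeta * m * wn
    gamma, beta = 0.5, 0.25
    p = -m * ag                      # effective force from base motion
    u, v = 0.0, 0.0
    a = (p[0] - c * v - k * u) / m
    khat = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    umax = 0.0
    for i in range(len(ag) - 1):
        phat = (p[i + 1]
                + m * (u / (beta * dt**2) + v / (beta * dt)
                       + (1.0 / (2 * beta) - 1.0) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = phat / khat
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (1.0 / (2 * beta) - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        umax = max(umax, abs(u))
    return umax

def displacement_spectrum(ag, dt, periods, zeta=0.05):
    """One spectral ordinate per oscillator period; pseudo-acceleration
    would follow as (2*pi/T)**2 times each ordinate."""
    return np.array([sdof_peak_displacement(ag, dt, T, zeta) for T in periods])

def srss(modal_peaks):
    """SRSS combination of modal peak responses (well-separated modes).
    In a real analysis each ordinate is first scaled by its modal
    participation factor; that scaling is omitted here."""
    return float(np.sqrt(np.sum(np.asarray(modal_peaks) ** 2)))

# Synthetic demonstration record: a decaying 2 Hz sinusoid, 20 s at 100 Hz.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ag = np.exp(-0.2 * t) * np.sin(2.0 * np.pi * 2.0 * t)
periods = np.linspace(0.1, 3.0, 30)
Sd = displacement_spectrum(ag, dt, periods)
print(srss([Sd[5], Sd[10], Sd[20]]))
```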
https://en.wikipedia.org/wiki/Response_spectrum
In English-speaking countries, the common verbal response to another person's sneeze is "(God) bless you", or less commonly in the United States and Canada, "Gesundheit", the German word for health (and the response to sneezing in German-speaking countries). There are several proposed origins of the phrase "bless you" for use in the context of sneezing. In non-English-speaking cultures, words connoting good health or a long life are often used instead of "bless you", though some also use references to God. In certain languages such as Vietnamese, Japanese or Korean, nothing is generally said after a sneeze except when expressing concern that the person is sick from a cold or otherwise; instead, depending on the language, the sneezer may excuse themselves. Responses in other languages include:
Amharic: ይማርህ (yimarih) or ያኑርህ (yanurih) for a male.
Arabic: نشوة (nashwa), "Elation!" or "Thrill!"; or, as a religious interaction, يرحمكم الله (yarḥamukum ullāh), "God have mercy on you", if the sneezer says الحمدلله (al‐ḥamdulila̅h), "All praise is for God"; the sneezer may then answer شكراً (shukran), "Thank you!", or يهديكم الله و يصلح بالكم (yahdīkum alla̅h wa‐yuṣlaḥ ba̅lakum), "God guide you and set your affairs aright".
Aramaic: brakhmeh, "Bless you".
Basque: Ehun urtez!, "For a hundred years!", or Jainkoak lagun!, "May God help you!"; also Eta zu kondatzaile, "And you there to narrate".
Chinese: sneezing in Southern Chinese culture means that someone is speaking ill of you behind your back.
Chechen: Dukha yekhil, for a female.
Czech: Pozdrav Pánbůh, "Bless God", or Je to pravda, "It is true"; the reply to Pozdrav Pánbůh is Dejž to Pánbůh, "May God let it happen (bless you)".
Dutch: if the person has sneezed three times, Morgen mooi weer, "The weather will be nice tomorrow"; less commonly used, Proost, from the Latin prōsit meaning "May it be good"; "To your health". [ notes 1 ]
French: old-fashioned responses are à tes / vos amours ("to your loves") after the second sneeze, and qu'elles durent toujours ("may they last forever") or à tes / vos rêves ("to your dreams") after the third. More archaically, one can say Que Dieu te/vous bénisse, "May God bless you".
German: Gott helfe, "God help". [ 3 ]
Icelandic: there is a custom to respond three times to three sneezes: Guð hjálpi þér ("God help you"), styrki þig ("strengthen you"), og styðji ("and support"). [ 6 ]
Japanese: 大丈夫? (Daijoubu?), "Are you all right?"
Kazakh: Жәрекімалда (West); short forms: Бер тәңір (East), Ақ күш (North).
Mandarin: more rarely there are the expressions 多保重 (duōbǎozhòng), "Take care", and 多喝点水 (duō he dian shui), "Drink more water". [ original research? ]
Navajo: Háíshį́į́ naa ntsékees or naa yáłti'.
Romanian: usually Noroc ("Good luck") comes first, then Sănătate ("Health"), and as a third option, for children, Să crești mare!, "May you grow up!" [ 9 ]
Serbian: Пис мацо (Pis maco), "Go away, kitten", mostly used with children, as the sound of sneezing is said to resemble a cat's cough.
Telugu: Dheergayusu, Poornayusu, or Sadayusu, different variations of wishing long life after consecutive sneezes; "Live long".
Tatar: Исән бул (ee-sæn bool) (informal).
Ukrainian: будь здорова (BOOD' zdoh-RO-va) to a female sneezer (informal), будьте здорові (BOOD'-te zdoh-RO-vee) (formal), "Be healthy"; [ 11 ] На здоров'я! (na zdoh-RO-v-ia), "To your health"; Правда (pra-vda), "Truth", if a person sneezes during another person's speech.
Welsh: Bendith (Duw) arnoch chi (respectful), "(God's) blessing upon you".
Yiddish: after a second and third sneeze, צו לעבן (tsu lebn), "To life", and צו לאַנגע יאָר (tsu lange yor), "For many years"; [ 12 ] if someone is speaking when another sneezes, גענאָסן צום אמת (genosn tsum emes), "Sneezed on truth". [ 13 ]
Yoruba: Ẹ ṣé (eh shay) (formal), "Thank you".
https://en.wikipedia.org/wiki/Response_to_sneezing
Started in Canada in 1985, Responsible Care is a global, voluntary initiative developed autonomously by the chemical industry for the chemical industry. It runs in 67 countries whose combined chemical industries account for nearly 90% of global chemical production, and 96 of the 100 largest chemical producers in the world have adopted it. The initiative is one of the leading examples of industry self-regulation, and studies have shown it has not improved the industry's environmental and safety performance. [ 1 ] [ 2 ] [ 3 ] Responsible Care was launched by the Chemistry Industry Association of Canada (formerly the Canadian Chemical Producers' Association, CCPA) in 1985. [ 4 ] The term was coined by CIAC president Jean Bélanger. [ 5 ] The scheme evolved, and, in 2006, the Responsible Care Global Charter was launched at the UN-led International Conference on Chemicals Management in Dubai. [ 6 ] Today, the program is stewarded by the International Council of Chemical Associations. According to the chemical industry, the goal of the program is to improve health, safety, and environmental performance. A 2007 study concluded that the primary goals of the initiative are to change low public opinion of, and concerns about, the industry's environmental and public health practices, while also opposing stronger and more expensive legislation and regulation of chemical products, even if warranted. [ 1 ] [ 7 ] The signatory chemical companies commit themselves to improving their performance in the fields of environmental protection, occupational safety and health protection, plant safety, product stewardship and logistics, as well as to continuously improving dialogue with their neighbors and the public, independently of legal requirements. As part of the Responsible Care initiative, the International Council of Chemical Associations introduced the Global Product Strategy in 2006. The initiative has been studied as one of the leading examples of industry self-regulation. [ 2 ] The program has been described as a way to help the chemical industry avoid regulation and to improve its public image in the wake of the 1984 Bhopal disaster. [ 2 ] A 2000 study concluded that it demonstrates how industries fail to self-regulate without explicit sanctions; it found that plants owned by participating firms improved their relative environmental performance more slowly than those of non-members. [ 2 ] According to a 2013 study, between 1988 and 2001, plants owned by participating firms raised their toxicity-weighted pollution by 15.9% on average relative to statistically equivalent plants owned by non-participating firms. [ 3 ]
https://en.wikipedia.org/wiki/Responsible_Care
Responsive architecture is an evolving field of architectural practice and research. Responsive architectures are those that measure actual environmental conditions (via sensors) in order to adapt their form, shape, color or character (via actuators). Responsive architectures aim to refine and extend the discipline of architecture by improving the energy performance of buildings with responsive technologies (sensors / control systems / actuators) while also producing buildings that reflect the technological and cultural conditions of our time. Responsive architectures distinguish themselves from other forms of interactive design by incorporating intelligent and responsive technologies into the core elements of a building's fabric. For example, by incorporating responsive technologies into the structural systems of buildings, architects gain the ability to tie the shape of a building directly to its environment. This enables architects to reconsider the way they design and construct space while striving to advance the discipline rather than applying patchworks of intelligent technologies to an existing vision of "building". The common definition of responsive architecture, as described by many authors, is a class of architecture or building that demonstrates an ability to alter its form so as to continually reflect the environmental conditions that surround it. The term responsive architecture was introduced by Nicholas Negroponte, who first conceived of it during the late 1960s, when spatial design problems were being explored by applying cybernetics to architecture. Negroponte proposed that responsive architecture is the natural product of the integration of computing power into built spaces and structures, and that better-performing, more rational buildings are the result. Negroponte also extended this mixture to include the concepts of recognition, intention, contextual variation, and meaning into computing and its successful (ubiquitous) integration into architecture. This cross-fertilization of ideas lasted for about eight years. Several important theories resulted from these efforts, but today Nicholas Negroponte's contributions are the most obvious to architecture. His work moved the field of architecture in a technical, functional, and actuated direction. [ 1 ] Since Negroponte's contribution, new works of responsive architecture have also emerged, but as aesthetic creations rather than functional ones. The works of Diller & Scofidio (Blur), dECOi (Aegis Hypo-Surface), [ 2 ] and NOX (the Freshwater Pavilion, NL) are all classifiable as types of responsive architecture. Each of these works monitors fluctuations in the environment and alters its form in response to these changes. The Blur project by Diller & Scofidio relies upon the responsive characteristics of a cloud to change its form while blowing in the wind. In the work of dECOi, responsiveness is enabled by a programmable façade, and in the work of NOX, by a programmable audio-visual interior. All of these works depend upon the ability of computers to continuously recalculate programmable digital models and join them to the real world and the events that shape it.
Finally, an account of the development of the use of responsive systems, and of their history with respect to recent architectural theory, can be found in Tristan d'Estree Sterk's opening keynote address (ACADIA 2009), "Thoughts for Gen X— Speculating about the Rise of Continuous Measurement in Architecture". [ 3 ] While a considerable amount of time and effort has been spent on intelligent homes in recent years, the emphasis there has been mainly on developing computerized systems and electronics that adapt the interior of the building or its rooms to the needs of residents. Research in the area of responsive architecture has had far more to do with the building structure [ 4 ] itself: its ability to adapt to changing weather conditions and to take account of light, heat and cold. This could theoretically be achieved by designing structures consisting of rods and strings which would bend in response to wind, distributing the load in much the same way as a tree. Similarly, windows would respond to light, opening and closing to provide the best lighting and heating conditions inside the building. This line of research, known as actuated tensegrity, relies on changes in structures controlled by actuators, which in turn are driven by computerized interpreters of real-world conditions, as sketched below. [ 5 ] Climate adaptive building shells (CABS) can be identified as a sub-domain of responsive architecture, with special emphasis on dynamic features in facades and roofs. [ 6 ] CABS can repeatedly and reversibly change some of their functions, features or behavior over time in response to changing performance requirements and variable boundary conditions, with the aim of improving overall building performance. [ 7 ] Tristan Sterk of The Bureau For Responsive Architecture [ 8 ] and The School of the Art Institute of Chicago [ 9 ] and Robert Skelton of UC San Diego [ 10 ] are working together on actuated tensegrity, experimenting with pneumatically controlled rods and wires which change the shape of a building in response to sensors both outside and inside the structure. Their goal is to limit and reduce the impact of buildings on natural environments. [ 11 ] MIT's Kinetic Design Group has been developing the concept of intelligent kinetic systems, defined as "architectural spaces and objects that can physically re-configure themselves to meet changing needs." They draw on structural engineering, embedded computation and adaptable architecture. The objective is to demonstrate that energy use and the environmental quality of buildings could be rendered more efficient and affordable by making use of a combination of these technologies. [ 12 ] Daniel Grünkranz of the University of Applied Arts Vienna is currently undertaking PhD research in the field of phenomenology as it applies to responsive architectures and technologies. [ 13 ] One example is a full-scale actuated tensegrity prototype built from cast aluminium, stainless steel components and pneumatic muscles (pneumatic muscles provided by Shadow Robotics UK) by Tristan d'Estree Sterk and The Office for Robotic Architectural Media (2003). These types of structural systems use variable and controllable rigidity to provide architects and engineers with systems that have a controllable shape. As a form of ultra-lightweight structure, these systems offer a primary method for reducing the embodied energy used in construction processes.
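To make the sensing-and-actuation loop concrete, here is a minimal, purely illustrative Python sketch of the kind of control cycle such systems run; every name in it (the sensor reader, the actuator command, the setpoint and gain values) is hypothetical rather than drawn from any of the projects described above.

```python
import random
import time

def read_wind_load() -> float:
    """Hypothetical stand-in for a structural load sensor (kN)."""
    return random.uniform(0.0, 100.0)

def command_actuator(stiffness: float) -> None:
    """Hypothetical stand-in for driving a pneumatic muscle."""
    print(f"actuator stiffness set to {stiffness:.1f}")

SETPOINT = 40.0   # target load on this member, kN (illustrative)
GAIN = 0.5        # proportional gain, tuned per structure
stiffness = 50.0  # current actuator state, arbitrary units

for _ in range(10):  # one control cycle per second
    load = read_wind_load()
    error = load - SETPOINT
    # Stiffen the member when it carries more than its target load,
    # soften it otherwise, clamped to the actuator's physical range.
    stiffness = max(0.0, min(100.0, stiffness + GAIN * error))
    command_actuator(stiffness)
    time.sleep(1.0)
```

A real installation would replace the proportional rule with a controller tuned to the structure's dynamics, but the cycle of measure, interpret, and actuate is the essence of the actuated-tensegrity approach described above.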
https://en.wikipedia.org/wiki/Responsive_architecture
Responsiveness as a concept of computer science refers to the specific ability of a system or functional unit to complete assigned tasks within a given time. [ 1 ] For example, it would refer to the ability of an artificial intelligence system to understand and carry out its tasks in a timely fashion. [ 2 ] In the Reactive principle, responsiveness is one of the fundamental criteria, along with resilience, elasticity and being message-driven. [ 3 ] It is also one of the four criteria grouped under the usability principle of robustness; the other three are observability, recoverability, and task conformance. Software which lacks decent process management can have poor responsiveness even on a fast machine. On the other hand, even slow hardware can run responsive software. It is much more important that a system actually spend the available resources in the best way possible. For instance, it makes sense to let the mouse driver run at a very high priority to provide fluid mouse interactions. For long-running operations, such as copying, downloading or transforming big files, the most important factor is to provide good user feedback rather than raw performance, since such an operation can quite well run in the background, using only spare processor time, as sketched below. Long delays can be a major cause of user frustration, or can lead the user to believe the system is not functioning, or that a command or input gesture has been ignored. Responsiveness is therefore considered an essential usability issue for human-computer interaction (HCI). The rationale behind the responsiveness principle is that the system should deliver results of an operation to users in a timely and organized manner. The frustration threshold can be quite different depending on the situation, and on whether the user interface depends on local or remote systems to show a visible response. At least three user tolerance thresholds are commonly distinguished. [ 4 ] Although numerous other options exist, a few answers to responsiveness issues are most frequently used and recommended.
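As a minimal sketch of the background-processing pattern described above, the following Python example runs a simulated long operation on a worker thread while the main thread stays free, and the user receives continuous progress feedback; the file-copy task and its timings are invented for illustration.

```python
import threading
import time

def copy_large_file(progress_cb) -> None:
    """Simulated long-running operation that reports progress (illustrative)."""
    for pct in range(0, 101, 10):
        time.sleep(0.2)      # stand-in for real I/O work
        progress_cb(pct)

def on_progress(pct: int) -> None:
    # Immediate feedback keeps the user confident the system is working.
    print(f"\rcopying... {pct}%", end="", flush=True)

# Run the slow task in the background so the interactive thread stays responsive.
worker = threading.Thread(target=copy_large_file, args=(on_progress,))
worker.start()
while worker.is_alive():
    time.sleep(0.05)         # the main loop remains free to handle user input
worker.join()
print("\ndone")
```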
https://en.wikipedia.org/wiki/Responsiveness
Resting metabolic rate ( RMR ) is whole-body mammalian (and other vertebrate) metabolism during a time period of strict and steady resting conditions that are defined by a combination of assumptions of physiological homeostasis and biological equilibrium. RMR differs from basal metabolic rate (BMR) because BMR measurements must meet total physiological equilibrium, whereas RMR conditions of measurement can be altered and defined by the contextual limitations. Therefore, BMR is measured in the elusive "perfect" steady state, whereas RMR measurement is more accessible and thus represents most, if not all, measurements or estimates of daily energy expenditure. [ 1 ] Indirect calorimetry is the study or clinical use of the relationship between respirometry and bioenergetics, in which measured rates of oxygen consumption, sometimes carbon dioxide production, and less often urea production are transformed into rates of energy expenditure, expressed as energy per unit time. For example, if analysis of the oxygen consumption of a rested human subject yields an estimate of 5.5 kilocalories over a 5-minute measurement, then the resting metabolic rate is 5.5 kcal / 5 min = 1.1 kcal/min. Unlike some related measurements (e.g. METs ), RMR itself is not referenced to body mass and has no bearing on the energy density of the metabolism. A comprehensive treatment of confounding factors on BMR measurements was demonstrated as early as 1922 in Massachusetts by engineering professor Frank B. Sanborn, whose descriptions of the effects of food, posture, sleep, muscular activity, and emotion provide criteria for separating BMR from RMR. [ 2 ] [ 3 ] [ 4 ] In the 1780s, for the French Academy of Sciences, Lavoisier, Laplace, and Seguin investigated and published relationships between direct calorimetry and respiratory gas exchanges in mammalian subjects. A century later, at Wesleyan University in Connecticut, Professors Atwater and Rosa provided ample evidence of nitrogen, carbon dioxide, and oxygen transport during the metabolism of amino acids, glucose, and fatty acids in human subjects, further establishing the value of indirect calorimetry in determining the bioenergetics of free-living humans. [ 5 ] [ 6 ] The work of Atwater and Rosa also made it possible to calculate the caloric values of foods, which eventually became the criteria adopted by the USDA to create the food calorie library. [ 7 ] In the early 20th century at Oxford University, physiology researcher Claude Gordon Douglas developed an inexpensive and mobile method of collecting exhaled breath (partly in preparation for experiments to be conducted on Pike's Peak, Colorado). In this method, the subject exhales into a nearly impermeable, large-volume collection bag over a recorded period of time. The entire volume is measured, the oxygen and carbon dioxide content are analyzed, and the differences from inspired "ambient" air are calculated to determine the rates of oxygen uptake and carbon dioxide output. [ 8 ] To estimate energy expenditure from the exhaled gases, several algorithms were developed. One of the most widely used was developed in 1949 at the University of Glasgow by research physiologist J. B. de V. Weir.
His abbreviated equation for estimating metabolic rate expresses the rates of gas exchange as volume per unit time, excludes urinary nitrogen, and includes a time conversion factor of 1.44 to extrapolate 24-hour energy expenditure from kcal per minute to kcal per day. Weir used the Douglas bag method in his experiments, and he defended neglecting the effect of protein metabolism under normal physiological conditions and eating patterns of roughly 12.5% protein calories. In the early 1970s, computer technology enabled on-site data processing, some real-time analysis, and even graphical displays of metabolic variables such as O2, CO2, and air flow, thereby encouraging academic institutions to test accuracy and precision in new ways. [ 10 ] [ 11 ] Later in the decade, battery-operated systems made their debut: for example, a mobile system with a digital display of both cumulative and past-minute oxygen consumption was demonstrated in 1977 at the Proceedings of the Physiological Society. [ 12 ] As manufacturing and computing costs dropped over the following decades, universal calibration methods for preparing and comparing different models in the 1990s brought attention to the shortcomings or advantages of various designs. [ 13 ] To lower costs further, the metabolic variable CO2 was often ignored, promoting instead a focus on oxygen-consumption-only models of weight management and exercise training. In the new millennium, smaller "desktop-sized" indirect calorimeters were distributed with dedicated personal computers and printers, running modern Windows-based software. [ 14 ] RMR measurements are recommended when estimating total daily energy expenditure (TEE). Since BMR measures are restricted to a narrow time frame (and strict conditions) upon waking, the looser-condition RMR measure is more typically conducted. In the review organized by the USDA, [ 15 ] most publications documented specific conditions of resting measurements, including the time since the latest food intake or physical activity; this comprehensive review estimated that RMR is 10–20% higher than BMR, due to the thermic effect of feeding and residual energy expenditure from activities that occur throughout the day. [ citation needed ] Thermochemistry aside, a rate of metabolism and an amount of energy expenditure can be mistakenly interchanged, for example when describing RMR and resting energy expenditure (REE). [ citation needed ] The Academy of Nutrition and Dietetics (AND) provides clinical guidance for preparing a subject for RMR measures, [ 16 ] in order to mitigate possible confounding factors from feeding, stressful physical activities, or exposure to stimulants such as caffeine or nicotine. [ citation needed ] In preparation, a subject should fast for 7 hours or more and be mindful to avoid stimulants and stressors, such as caffeine, nicotine, and hard physical activities such as purposeful exercise. For 30 minutes before the measurement, the subject should lie supine without physical movement, neither reading nor listening to music. The ambiance should reduce stimulation by maintaining constant quiet, low lighting, and steady temperature. These conditions continue during the measurement stage. Further, the correct use of a well-maintained indirect calorimeter includes achieving a natural and steady breathing pattern, in order to reveal oxygen consumption and carbon dioxide production rates under a reproducible resting condition.
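Once those gas-exchange rates are in hand, converting them to energy is a one-line calculation. The sketch below assumes the commonly cited coefficient form of the abbreviated Weir equation, REE (kcal/day) = 1.44 × (3.94 × VO2 + 1.11 × VCO2) with VO2 and VCO2 in mL/min; the function name and the sample gas-exchange values are illustrative, not measurements.

```python
def weir_rmr_kcal_per_day(vo2_ml_min: float, vco2_ml_min: float) -> float:
    """Abbreviated Weir equation (urinary nitrogen neglected).

    VO2 and VCO2 are in mL/min; the 1.44 factor folds together the
    mL-to-L conversion (1/1000) and the 1440 minutes in a day.
    """
    return 1.44 * (3.94 * vo2_ml_min + 1.11 * vco2_ml_min)

# Illustrative resting values for an adult: VO2 = 250 mL/min, VCO2 = 200 mL/min.
print(round(weir_rmr_kcal_per_day(250.0, 200.0)))  # about 1738 kcal/day
```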
Indirect calorimetry is considered the gold-standard method of measuring RMR. [ 17 ] Indirect calorimeters are usually found in laboratory and clinical settings, but technological advancements are bringing RMR measurement to free-living conditions. [ citation needed ] Long-term weight management depends directly on the calories absorbed from feeding; nevertheless, myriad non-caloric factors also play biologically significant roles (not covered here) in estimating energy intake. On the expenditure side, a resting measurement (RMR) is the most accurate method for estimating the major portion of total daily energy expenditure (TEE), thereby giving the closest approximations when planning and following a calorie intake plan. Thus, estimation of REE by indirect calorimetry is strongly recommended for accomplishing long-term weight management, a conclusion reached and maintained through ongoing observational research by well-respected institutions such as the USDA, AND (previously ADA), and ACSM, and internationally by the WHO. [ citation needed ] Energy expenditure is correlated with a number of factors. RMR is regularly used in ecology to study the response of individuals to changes in environmental conditions. Parasites by definition have a negative impact on their hosts, and it is thus expected that there might be effects on host RMR. Varying effects of parasite infection on host RMR have been found: most studies indicate an increase in RMR with parasite infection, but others show no effect, or even a decrease in RMR. It is still unclear why such variation in the direction of change in RMR with parasite infection is seen. [ 19 ]
https://en.wikipedia.org/wiki/Resting_metabolic_rate
A restricted-access barrier system ( RABS ) is an installation used in many industries, such as the pharmaceutical, medical, chemical, and electrical engineering industries, where a controlled atmosphere is needed. The RABS provides a physical barrier between workers and production areas. [ 1 ]
https://en.wikipedia.org/wiki/Restricted-access_barrier_system
In linear algebra, the restricted isometry property ( RIP ) characterizes matrices which are nearly orthonormal, at least when operating on sparse vectors. The concept was introduced by Emmanuel Candès and Terence Tao [ 1 ] and is used to prove many theorems in the field of compressed sensing. [ 2 ] There are no known large matrices with bounded restricted isometry constants (computing these constants is strongly NP-hard, [ 3 ] and is hard to approximate as well [ 4 ] ), but many random matrices have been shown to have bounded constants. In particular, it has been shown that with exponentially high probability, random Gaussian, Bernoulli, and partial Fourier matrices satisfy the RIP with a number of measurements nearly linear in the sparsity level. [ 5 ] The current smallest upper bounds for any large rectangular matrices are those for Gaussian matrices. [ 6 ] Web forms to evaluate bounds for the Gaussian ensemble are available at the Edinburgh Compressed Sensing RIC page. [ 7 ] Let $A$ be an $m \times p$ matrix and let $1 \le s \le p$ be an integer. Suppose that there exists a constant $\delta_s \in (0,1)$ such that, for every $m \times s$ submatrix $A_s$ of $A$ and for every $s$-dimensional vector $y$,

$$(1-\delta_s)\|y\|_2^2 \le \|A_s y\|_2^2 \le (1+\delta_s)\|y\|_2^2.$$

Then the matrix $A$ is said to satisfy the $s$-restricted isometry property with restricted isometry constant $\delta_s$. This condition is equivalent to the statement that for every $m \times s$ submatrix $A_s$ of $A$ we have

$$\|A_s^* A_s - I_{s\times s}\|_{2\to 2} \le \delta_s,$$

where $I_{s\times s}$ is the $s \times s$ identity matrix and $\|X\|_{2\to 2}$ is the operator norm. See for example [ 8 ] for a proof. Finally, this is equivalent to stating that all eigenvalues of $A_s^* A_s$ are in the interval $[1-\delta_s, 1+\delta_s]$. The restricted isometry constant (RIC) is defined as the infimum of all possible $\delta$ for a given $A \in \mathbb{R}^{n \times m}$; it is denoted $\delta_K$. Since every $K$-sparse vector is also $(K+1)$-sparse, the constants are nondecreasing in the sparsity level: $\delta_K \le \delta_{K+1}$. [ 1 ] The tightest upper bound on the RIC can be computed for Gaussian matrices. This can be achieved by computing the exact probability that all the eigenvalues of Wishart matrices lie within an interval.
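Since the definition is a statement about the extreme eigenvalues of $A_s^* A_s$ over all column subsets, the constant can be computed directly for tiny instances. The brute-force Python sketch below does exactly that; its exponential cost in $p$ reflects the hardness noted above, and the Gaussian ensemble with variance $1/m$ is an assumed example, not the only choice.

```python
import itertools
import numpy as np

def restricted_isometry_constant(A, s):
    """Brute-force delta_s: max over all column subsets S with |S| = s of
    the spectral norm of A_S^T A_S - I. Exponential in p; small cases only."""
    p = A.shape[1]
    delta = 0.0
    for cols in itertools.combinations(range(p), s):
        As = A[:, cols]
        eigs = np.linalg.eigvalsh(As.T @ As)   # ascending eigenvalues
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta

# Random Gaussian matrix with N(0, 1/m) entries, a classical RIP ensemble.
rng = np.random.default_rng(0)
m, p, s = 40, 12, 3
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, p))
print(restricted_isometry_constant(A, s))
```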
https://en.wikipedia.org/wiki/Restricted_isometry_property
In algebra, the ring of restricted power series is the subring of a formal power series ring that consists of power series whose coefficients approach zero as degree goes to infinity. [ 1 ] Over a non-archimedean complete field, the ring is also called a Tate algebra. Quotient rings of the ring are used in the study of a formal algebraic space as well as rigid analysis, the latter over non-archimedean complete fields. Over a discrete topological ring, the ring of restricted power series coincides with a polynomial ring; thus, in this sense, the notion of "restricted power series" is a generalization of a polynomial. Let $A$ be a linearly topologized ring, separated and complete, and let $\{I_\lambda\}$ be the fundamental system of open ideals. Then the ring of restricted power series is defined as the projective limit of the polynomial rings over $A/I_\lambda$:

$$A\langle x_1, \dots, x_n\rangle = \varprojlim_\lambda \, (A/I_\lambda)[x_1, \dots, x_n].$$

In other words, it is the completion of the polynomial ring $A[x_1, \dots, x_n]$ with respect to the filtration $\{I_\lambda[x_1, \dots, x_n]\}$. Sometimes this ring of restricted power series is also denoted by $A\{x_1, \dots, x_n\}$. Clearly, the ring $A\langle x_1, \dots, x_n\rangle$ can be identified with the subring of the formal power series ring $A[[x_1, \dots, x_n]]$ that consists of series $\sum c_\alpha x^\alpha$ with coefficients $c_\alpha \to 0$; i.e., each $I_\lambda$ contains all but finitely many coefficients $c_\alpha$. Also, the ring satisfies (and in fact is characterized by) the universal property: [ 4 ] for (1) each continuous ring homomorphism $A \to B$ to a linearly topologized ring $B$, separated and complete, and (2) each choice of elements $b_1, \dots, b_n$ in $B$, there exists a unique continuous ring homomorphism $A\langle x_1, \dots, x_n\rangle \to B$, $x_i \mapsto b_i$, extending $A \to B$. In rigid analysis, when the base ring $A$ is the valuation ring of a complete non-archimedean field $(K, |\cdot|)$, the ring of restricted power series tensored with $K$ is called a Tate algebra, named for John Tate. [ 5 ] It is equivalently the subring of the formal power series ring $k[[\xi_1, \dots, \xi_n]]$ which consists of series convergent on $\mathfrak{o}_{\overline{k}}^n$, where $\mathfrak{o}_{\overline{k}} := \{x \in \overline{k} : |x| \le 1\}$ is the valuation ring in the algebraic closure $\overline{k}$. The maximal spectrum of $T_n$ is then a rigid-analytic space that models an affine space in rigid geometry. Define the Gauss norm of $f = \sum a_\alpha \xi^\alpha$ in $T_n$ by

$$\|f\| = \max_\alpha |a_\alpha|.$$

This makes $T_n$ a Banach algebra over $k$; i.e., a normed algebra that is complete as a metric space. With this norm, any ideal $I$ of $T_n$ is closed, [ 6 ] and thus, if $I$ is radical, the quotient $T_n/I$ is also a (reduced) Banach algebra, called an affinoid algebra.
Key results include the Weierstrass division and preparation theorems and a Noether normalization. As a consequence of these, $T_n$ is a Noetherian unique factorization domain of Krull dimension $n$. [ 11 ] An analog of Hilbert's Nullstellensatz is valid: the radical of an ideal is the intersection of all maximal ideals containing the ideal (we say the ring is Jacobson). [ 12 ] Results for polynomial rings, such as Hensel's lemma and division algorithms (or the theory of Gröbner bases), are also true for the ring of restricted power series. Throughout the section, let $A$ denote a linearly topologized ring, separated and complete.
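As a small computational illustration of the Gauss norm over a p-adic field, the following Python sketch evaluates $\|f\| = \max_\alpha |a_\alpha|_p$ for a polynomial (a restricted power series with finitely many terms) with rational coefficients; the helper names and the example polynomial are invented for the demonstration.

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational number."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def gauss_norm(coeffs, p: int) -> float:
    """Gauss norm: max of |a_alpha|_p over the (finitely many) nonzero
    coefficients, with |a|_p = p**(-vp(a))."""
    return max(float(p) ** (-vp(Fraction(a), p)) for a in coeffs if a != 0)

# f = 25 + (1/5) x + 5 x^2 over Q_5: |25| = 1/25, |1/5| = 5, |5| = 1/5.
print(gauss_norm([25, Fraction(1, 5), 5], p=5))  # 5.0
```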
https://en.wikipedia.org/wiki/Restricted_power_series