The Stuart number (N), also known as the magnetic interaction parameter, is a dimensionless number characterizing flows of electrically conducting fluids, i.e. gases or liquids. It is named after the mathematician John Trevor Stuart. [ 1 ]
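Expressed in symbols (a standard form of the definition, stated here for concreteness), the Stuart number is

N = σB^2L/(ρU) = Ha^2/Re,

where σ is the electrical conductivity of the fluid, B the magnetic flux density, L a characteristic length, ρ the fluid density, U a characteristic velocity, Ha the Hartmann number and Re the Reynolds number.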
It is defined as the ratio of electromagnetic to inertial forces, which gives an estimate of the relative importance of the magnetic field on the flow. The Stuart number is relevant for flows of conducting fluids, e.g. in fusion reactors, steel casters or plasmas. [ 2 ] | https://en.wikipedia.org/wiki/Stuart_number
The Stuart–Landau equation describes the behavior of a nonlinear oscillating system near the Hopf bifurcation; it is named after John Trevor Stuart and Lev Landau. In 1944, Landau proposed an equation for the evolution of the magnitude of the disturbance, now called the Landau equation, to explain the transition to turbulence based on a phenomenological argument, [ 1 ] and Stuart attempted to derive this equation from the hydrodynamic equations for plane Poiseuille flow in 1958. [ 2 ] A formal derivation of the Landau equation was given by Stuart, Watson and Palm in 1960. [ 3 ] [ 4 ] [ 5 ] The perturbation in the vicinity of the bifurcation is governed by the following equation:

dA/dt = σA − (l/2) A |A|^2,

where A(t) = |A| e^{iφ} is the complex amplitude of the perturbation, σ = σ_r + iσ_i is the complex growth rate and l = l_r + i l_i is a complex constant whose real part, l_r, is called the Landau constant.
The evolution of the actual disturbance is given by the real part of A(t), i.e. by |A| cos φ. Here the real part of the growth rate is taken to be positive, σ_r > 0, because otherwise the system is stable in the linear sense; that is, for infinitesimal disturbances (|A| a small number), the nonlinear term in the above equation is negligible in comparison with the other two terms, in which case the amplitude grows in time only if σ_r > 0. The Landau constant is also taken to be positive, l_r > 0, because otherwise the amplitude would grow indefinitely (see the equations below and the general solution in the next section). The Landau equation is the equation for the magnitude of the disturbance,

d|A|^2/dt = 2σ_r |A|^2 − l_r |A|^4,
which can also be re-written as [ 6 ]

d|A|/dt = σ_r |A| − (l_r/2) |A|^3.
Similarly, the equation for the phase is given by

dφ/dt = σ_i − (l_i/2) |A|^2.
For non-homogeneous systems, i.e., when A depends on spatial coordinates, see the Ginzburg–Landau equation. Owing to its universality, the equation finds application in many fields, such as hydrodynamic stability [ 7 ] and the Belousov–Zhabotinsky reaction. [ 8 ]
The Landau equation is linear when it is written for the dependent variable |A|^{-2}:

d(|A|^{-2})/dt + 2σ_r |A|^{-2} = l_r.
The general solution of the above equation for σ_r ≠ 0 is

|A|^{-2} = l_r/(2σ_r) + [ |A_0|^{-2} − l_r/(2σ_r) ] e^{−2σ_r t},

where |A_0| is the initial magnitude of the disturbance.
As t → ∞, the magnitude of the disturbance |A| approaches a constant value that is independent of its initial value, i.e., |A|_max → (2σ_r/l_r)^{1/2} when t ≫ 1/σ_r. The above solution implies that |A| does not have a real solution if l_r < 0 and σ_r > 0. The associated solution for the phase function φ(t), obtained by integrating dφ/dt = σ_i − (l_i/2)|A|^2, is

φ = φ_0 + σ_i t − (l_i/2l_r) ln( e^{2σ_r t} |A_0|^2/|A|^2 ).
As t ≫ 1/σ_r, the phase varies linearly with time: φ ∼ (σ_i/σ_r − l_i/l_r) σ_r t.
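The saturation at |A|_max = (2σ_r/l_r)^{1/2} is easy to verify numerically. A minimal sketch (the parameter values σ = 0.5 + 1i and l = 1 + 0.5i are illustrative choices, not from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma = 0.5 + 1.0j   # complex growth rate (illustrative), sigma_r > 0
l = 1.0 + 0.5j       # Landau coefficient (illustrative), l_r > 0

def stuart_landau(t, y):
    # dA/dt = sigma*A - (l/2)*A*|A|^2, split into real and imaginary parts
    A = y[0] + 1j * y[1]
    dA = sigma * A - 0.5 * l * A * abs(A) ** 2
    return [dA.real, dA.imag]

# integrate from a small initial disturbance out to t >> 1/sigma_r
sol = solve_ivp(stuart_landau, (0.0, 40.0), [1e-3, 0.0], rtol=1e-8)
print(abs(sol.y[0, -1] + 1j * sol.y[1, -1]))  # saturated amplitude, ~1.0
print(np.sqrt(2 * sigma.real / l.real))       # predicted limit (2*sigma_r/l_r)^(1/2) = 1.0
```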
It is instructive to consider a hydrodynamic stability case where the linear stability analysis shows that the flow is stable when Re ≤ Re_cr and unstable otherwise, where Re is the Reynolds number and Re_cr is the critical Reynolds number; a familiar example that is applicable here is the critical Reynolds number Re_cr ≈ 50 corresponding to the transition to the Kármán vortex street in the problem of flow past a cylinder. [ 9 ] [ 10 ] The growth rate σ_r is negative when Re < Re_cr and positive when Re > Re_cr, and therefore in the neighbourhood Re → Re_cr it may be written as σ_r = const. × (Re − Re_cr), wherein the constant is positive. Thus the limiting amplitude grows as

|A|_max ∝ (Re − Re_cr)^{1/2}.
When the Landau constant is negative, l_r < 0, we must include a negative term of higher order to arrest the unbounded increase of the perturbation. In this case, the Landau equation becomes [ 11 ]
The limiting amplitude then becomes
where the plus sign corresponds to the stable branch and the minus sign to the unstable branch. There exists a critical value Re′_cr where the above two roots are equal (σ_r = −|l_r|/8β_r), such that Re′_cr < Re_cr, indicating that the flow in the region Re′_cr < Re < Re_cr is metastable; that is to say, in the metastable region the flow is stable to infinitesimal perturbations, but not to finite-amplitude perturbations. | https://en.wikipedia.org/wiki/Stuart–Landau_equation
In microwave and radio-frequency engineering, a stub or resonant stub is a length of transmission line or waveguide that is connected at one end only. The free end of the stub is either left open-circuit, or short-circuited (as is always the case for waveguides). Neglecting transmission line losses, the input impedance of the stub is purely reactive ; either capacitive or inductive , depending on the electrical length of the stub, and on whether it is open or short circuit. Stubs may thus function as capacitors , inductors and resonant circuits at radio frequencies.
The behaviour of stubs is due to standing waves along their length. Their reactive properties are determined by their physical length in relation to the wavelength of the radio waves. Therefore, stubs are most commonly used in UHF or microwave circuits in which the wavelengths are short enough that the stub is conveniently small. [ 1 ] They are often used to replace discrete capacitors and inductors, because at UHF and microwave frequencies lumped components perform poorly due to parasitic reactance. [ 1 ] Stubs are commonly used in antenna impedance matching circuits, frequency selective filters , and resonant circuits for UHF electronic oscillators and RF amplifiers .
Stubs can be constructed with any type of transmission line : parallel conductor line (where they are called Lecher lines ), coaxial cable , stripline , waveguide , and dielectric waveguide . Stub circuits can be designed using a Smith chart , a graphical tool which can determine what length line to use to obtain a desired reactance.
The input impedance of a lossless, short-circuited line is

Z_in = j Z_0 tan(βℓ),

where j is the imaginary unit, Z_0 is the characteristic impedance of the line, β is the phase constant of the line and ℓ is the physical length of the stub.
Thus, depending on whether tan(βℓ) is positive or negative, the short-circuited stub will be inductive or capacitive, respectively.
The length of a stub to act as a capacitor C at an angular frequency of ω is then given by:

ℓ = (1/β) [ nπ − arctan( 1/(ω C Z_0) ) ] ;
the length of a stub to act as an inductor L at the same frequency is given by:

ℓ = (1/β) [ nπ + arctan( ω L / Z_0 ) ] ,
where in both equations, n is an integer number of half-wavelengths (possibly zero) that can be arbitrarily added to the line without changing the impedance.
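As a concrete illustration of these two formulas, a small calculator (a sketch only; the 50 Ω line, 1 GHz frequency and component values below are assumptions, not from the article):

```python
import numpy as np

def short_stub_length(beta, Z0, omega, C=None, L=None, n=0):
    """Length of a short-circuited stub acting as capacitor C or inductor L.

    Capacitive case: Z0*tan(beta*l) = -1/(omega*C), so l = (n*pi - atan(1/(omega*C*Z0)))/beta
    (n >= 1 is needed for a positive length).
    Inductive case:  Z0*tan(beta*l) = omega*L,      so l = (n*pi + atan(omega*L/Z0))/beta.
    """
    if C is not None:
        return (n * np.pi - np.arctan(1.0 / (omega * C * Z0))) / beta
    return (n * np.pi + np.arctan(omega * L / Z0)) / beta

v, f, Z0 = 2e8, 1e9, 50.0      # assumed propagation velocity, frequency, impedance
omega = 2 * np.pi * f
beta = omega / v
print(short_stub_length(beta, Z0, omega, C=2e-12, n=1))  # ~0.068 m for 2 pF
print(short_stub_length(beta, Z0, omega, L=10e-9))       # ~0.029 m for 10 nH
```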
The input impedance of a lossless open-circuit stub is given by

Z_in = −j Z_0 cot(βℓ),
where the symbols Z_0, β, ℓ, ω, etc. used in this section have the same meaning as in the section above.
It follows that, depending on whether cot(βℓ) is positive or negative, the stub will be capacitive or inductive, respectively.
The length of an open-circuit stub to act as an inductor L at an angular frequency of ω is:

ℓ = (1/β) [ nπ − arctan( Z_0/(ω L) ) ] ;
the length of an open-circuit stub to act as a capacitor C at the same frequency is:

ℓ = (1/β) [ nπ + arctan( ω C Z_0 ) ] ,
where again, n is an arbitrary whole number of half-wavelengths that can be inserted into the segment (including zero).
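The open-circuit counterparts, continuing the sketch above (here the inductive case needs n ≥ 1 for a positive length):

```python
def open_stub_length(beta, Z0, omega, C=None, L=None, n=0):
    """Length of an open-circuit stub acting as capacitor C or inductor L.

    Capacitive case: -Z0*cot(beta*l) = -1/(omega*C), i.e. tan(beta*l) = omega*C*Z0.
    Inductive case:  -Z0*cot(beta*l) = omega*L,      i.e. tan(beta*l) = -Z0/(omega*L).
    """
    if C is not None:
        return (n * np.pi + np.arctan(omega * C * Z0)) / beta
    return (n * np.pi - np.arctan(Z0 / (omega * L))) / beta
```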
Stubs are often used as resonant circuits in oscillators and distributed-element filters. An open-circuit stub of length l will have a capacitive impedance at low frequency, when βl < π/2. Above this frequency the impedance is inductive. At precisely βl = π/2 the stub presents a short circuit. This is qualitatively the same behaviour as a series resonant circuit. For a lossless line the phase-change constant is proportional to frequency,

β = ω/v,
where v is the velocity of propagation, which is constant with frequency for a lossless line. For such a case the resonant frequency is given by

ω_0 = πv/(2l).
While stubs function as resonant circuits, they differ from lumped-element resonant circuits in that they have multiple resonant frequencies: in addition to the fundamental resonant frequency ω_0, they resonate at multiples of this frequency, nω_0. The impedance will not continue to rise monotonically with frequency after resonance as in a lumped tuned circuit. It will rise until the point where βl = π, at which point it will be open circuit. After this point (which is an anti-resonance point), the impedance will again become capacitive and start to fall. It will continue to fall until, at βl = 3π/2, it again presents a short circuit. At this point, the filtering action of the stub has failed. This response of the stub continues to repeat with increasing frequency, alternating between resonance and anti-resonance. It is not only a characteristic of stubs but of all distributed-element filters that there is some frequency beyond which the filter fails and multiple unwanted passbands are produced. [ 2 ]
Similarly, a short-circuit stub is an anti-resonator at βl = π/2, that is, it behaves as a parallel resonant circuit, but again fails as βl = 3π/2 is approached. [ 2 ]
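This alternation can be seen numerically. A sketch for an open-circuit stub (the 50 Ω line, 2×10^8 m/s velocity and 25 mm length are assumed values, chosen to put the fundamental resonance at 2 GHz):

```python
import numpy as np

Z0, v, l = 50.0, 2e8, 0.025        # assumed impedance, velocity and stub length
f0 = v / (4 * l)                   # beta*l = pi/2 at f0 = 2 GHz

for f in [0.5 * f0, 0.999 * f0, 1.5 * f0, 1.999 * f0, 2.5 * f0]:
    beta = 2 * np.pi * f / v
    Zin = -1j * Z0 / np.tan(beta * l)   # open-circuit stub input impedance
    print(f"{f / 1e9:5.2f} GHz  |Zin| = {abs(Zin):10.1f} ohm")
# capacitive below f0, ~0 ohm (resonance) at f0, inductive above,
# very large (anti-resonance) near 2*f0, capacitive again beyond
```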
Stubs can match a load impedance to the transmission line's characteristic impedance. The stub is positioned at a distance from the load chosen so that, at that point, the resistive part of the presented impedance equals the resistive part of the characteristic impedance, through the impedance-transforming action of the intervening length of main line. The length of the stub is then chosen so that it exactly cancels the reactive part of the presented impedance: the stub is made capacitive or inductive according to whether the main line presents an inductive or capacitive impedance, respectively. The impedance presented at the stub is not the same as the actual impedance of the load, since both the reactive and resistive parts of the load impedance are subject to the impedance-transforming action of the line. Matching stubs can be made adjustable so that matching can be corrected on test. [ 3 ]
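The same procedure that is usually done on a Smith chart can be carried out numerically. A minimal sketch of single-stub matching with a short-circuited stub (the 50 Ω system and the 100 − j50 Ω example load are assumptions for illustration; lengths are in wavelengths):

```python
import numpy as np

Z0 = 50.0                        # system characteristic impedance (assumed)
ZL = 100.0 - 50.0j               # example load impedance (assumed)
beta = 2 * np.pi                 # phase constant with lengths in wavelengths

def z_line(d):
    """Impedance seen looking toward the load through a length d of lossless line."""
    t = np.tan(beta * d)
    return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

# choose the stub position: where the transformed conductance equals 1/Z0
ds = np.linspace(0.001, 0.499, 50_000)
d = ds[np.argmin(np.abs((1.0 / z_line(ds)).real - 1.0 / Z0))]

# choose the stub length: a short-circuited stub has Y = -j/(Z0*tan(beta*l));
# set it to cancel the residual susceptance B of the transformed load
B = (1.0 / z_line(d)).imag
l = (np.arctan(1.0 / (B * Z0)) / beta) % 0.5
print(f"stub position d = {d:.4f} wavelengths, stub length l = {l:.4f} wavelengths")
```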
A single stub will only achieve a perfect match at one specific frequency. Several stubs may be used spaced along the main transmission line for wideband matching. The resulting structure is filter-like, and filter design techniques are applied. For instance, the matching network may be designed as a Chebyshev filter but is optimised for impedance matching instead of passband transmission. The resulting transmission function of the network has a passband ripple like the Chebyshev filter, but the ripples never reach 0 dB insertion loss at any point in the passband, as they would do for the standard filter. [ 4 ]
Radial stubs are a planar component that consists of a sector of a circle rather than a constant-width line. They are used with planar transmission lines when a low impedance stub is required. Low characteristic impedance lines require a wide line. With a wide line, the junction of the stub with the main line is not at a well-defined point. Radial stubs overcome this difficulty by narrowing to a point at the junction. Filter circuits using stubs often use them in pairs, one connected to each side of the main line. A pair of radial stubs so connected is called a butterfly stub or a bowtie stub. [ 5 ] | https://en.wikipedia.org/wiki/Stub_(electronics) |
A stubroutine (also known as a stub function , null script , null subroutine , or null function ) is a command script or program subroutine which does nothing but return a constant value . The term itself is a portmanteau of "stub" and "subroutine", and typically refers to a placeholder function or subroutine that is not yet fully implemented but provides the necessary interface (like inputs and outputs) for the rest of the program to function.
They are used during program development, when the functional implementation of a routine is deferred while other routines are developed; this allows developers to continue building and testing other parts of the software even when certain functions or subroutines are not yet fully implemented. Stubroutines are also one of the techniques used by the software cracking community to bypass callbacks and license-checking code: the target program is disassembled and the relevant code is replaced with a null subroutine that simply returns the value expected by the caller.
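A minimal example of the idea in Python (hypothetical names, for illustration only):

```python
def check_license(key: str) -> bool:
    """Stubroutine: the real license check is not implemented (or has been
    bypassed); it does nothing but return the constant value the caller
    expects, so the rest of the program can run."""
    return True
```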
| https://en.wikipedia.org/wiki/Stubroutine
Stuck: How Vaccine Rumors Start and Why They Don't Go Away (2020), published by Oxford University Press and written by Heidi Larson, director of the London School of Hygiene and Tropical Medicine's Vaccine Confidence Project, looks at what influences attitudes to vaccination. It was largely compiled before the COVID-19 pandemic and was inspired by Larson's feeling that the dialogue between scientists and the public regarding vaccines was becoming increasingly complex against a background of proliferating online information.
Using historical examples, from 19th-century protests against smallpox vaccination to 21st-century boycotts of polio vaccination programmes, the book shows how rumours about vaccination spread; looking chiefly at high-income countries, it examines the factors that shape opinions about vaccination.
Stuck: How Vaccine Rumors Start and Why They Don't Go Away was published by Oxford University Press in 2020, and written by the director of the London School of Hygiene and Tropical Medicine's Vaccine Confidence Project, Heidi Larson. [ 2 ] [ 3 ] [ 4 ] It was largely compiled before the COVID-19 pandemic . [ 1 ] [ 5 ] It has 200 pages, [ 1 ] of which 127 pages cover eight chapters, which are preceded by acknowledgements, prologue and an introduction, and are followed by notes and an index. [ 6 ]
The book addresses misinformation related to vaccination , and asks how vaccine rumors start and why they do not go away. [ 1 ] [ 4 ] Looking chiefly at high-income countries , the book examines social, political, psychological and cultural factors that make up the various mind-sets to vaccination. [ 2 ] Larson also uses historical examples, from 19th century protests against smallpox vaccination to 21st-century boycotts of polio vaccination programmes, to show how rumours about vaccinations spread. [ 2 ] She writes: "Digital media has certainly contributed to the social amplification of risk, but there is no single culprit in this wave of dissent." [ 2 ]
Larson was inspired by her feeling that the dialogue between scientists and the public, regarding vaccines, was becoming complex, against a background of a proliferation of online information . However, there is "opportunity for change", if vaccine experts can engage using social media . [ 7 ]
The book concludes with a call to social media companies to take responsibility for the part their technology plays in disseminating information pertaining to vaccines, because "for vaccine uptake to increase, the public must be inspired to protect one another". [ 2 ]
The book was released during the COVID-19 pandemic, and The Lancet stated that "at a time of increasing global uncertainty, Larson's values of respecting other people's views and engaging with them will be crucial". [ 7 ] With the challenges of misinformation surrounding COVID-19 vaccines, Joan Donovan, writing in Nature, agreed with Larson's findings. [ 2 ] The book was also reviewed in the New Scientist. [ 5 ] | https://en.wikipedia.org/wiki/Stuck:_How_Vaccine_Rumors_Start_and_Why_They_Don't_Go_Away
Student Pugwash USA is the U.S. affiliate of International Student/Young Pugwash , and the US student affiliate of the Pugwash Conferences on Science and World Affairs , recipients of the 1995 Nobel Peace Prize .
As an educational nonprofit organization, SPUSA does not adopt advocacy positions on policy or political issues or candidates, but seeks to foster student leadership and incubate groups of committed activists who go on to take action separate from SPUSA activities. The organization posits that, in order to create effective social change, students must first understand the issues at stake, then contemplate their ethical and moral responsibility to themselves and to society as a whole. Its stated purpose is not to advance a particular ethical viewpoint regarding scientific and technological issues, but rather to encourage students to consider ethics when thinking about the role of science and technology in society. SPUSA is open to all viewpoints and approaches to these discussions, but maintains a firm commitment to accurate science and factual information. For example, early SPUSA panel debates addressed the causes of climate change before there was consensus in the scientific community; once that consensus emerged, the debate was considered resolved and no longer appropriate, though events continue to discuss which approaches to take in response to current information.
Activities have included regional, national and international conferences, speaker events at campuses through a national chapter network, and compilation of issue briefs on scientific and technical issues of social importance. | https://en.wikipedia.org/wiki/Student_Pugwash_USA |
Student Switch Off is a campaign that aims to encourage students to save energy when living in University halls of residence . [ 1 ] It is run by the Students Organising for Sustainability UK , a student-led education charity focusing on sustainability.
As of March 2022, the campaign runs at 18 universities across the UK. In the 2021/22 academic year it engaged over 1,500 students through online competitions, campus visits and training. In 2021, these activities resulted in over 250 tonnes of CO 2 saved.
The scheme concentrates on behavioural change and social marketing to bring about carbon reduction. [ 2 ]
The campaign was set up by Dr Neil Jennings as a pilot project at the University of East Anglia in 2006. In the pilot year, the campaign helped to reduce energy usage by an average of over 10% in halls of residence, saving around 90 tonnes of CO 2 and over £19,000 in energy expenditure. Jennings received significant support in developing the campaign from the Ben & Jerry's Climate Change College [ 3 ] and secured sponsorship of the campaign from E.ON , Odeon Cinemas , The Independent and FirstGroup .
The campaign expanded to seven universities in 2007/08 and 11 in 2008/09 until in 2009 the Student Switch Off partnered with the National Union of Students as part of the Defra funded Degrees Cooler project, increasing the number of universities hosting the campaign by 22. Other partners included People & Planet , London Sustainability Exchange, Green Impact and Student Force for Sustainability. [ 4 ]
In 2009, the Student Switch Off was chosen by Carbon Leapfrog as one of the projects it would support with pro-bono legal and accountancy support.
In May 2012, the campaign won an Ashden Award (described as the Oscars of the energy-saving world) and in March 2011 won the "Best Energy Saving Idea" award at the inaugural People and Environment Achievement Awards. [ 5 ]
In 2012, ownership of the campaign was transferred to the National Union of Students and in 2014 the campaign received funding from the European Union (EU) to expand into four more European countries - Cyprus, Greece, Lithuania and Sweden.
In 2017, the campaign received additional funding from the EU to expand to Bulgaria, Ireland and Romania and to develop advice materials for students living in the private rented sector to reduce their exposure to fuel poverty.
In the academic year 2016/17, more than 26,000 students pledged their support for energy-saving in their halls of residence.
N.B. The aggregate CO 2 and money savings vary between years, even with a similar percentage reduction, because of changing energy prices, changing carbon emissions per kWh of electricity and the changing number of months included in the analysis at different universities.
The Student Switch Off has received the following awards since its inception in 2006:
November 2008: The Green Awards. Highly commended, Best Green Campaigner
June 2010: National eWell-Being Awards. Highly commended, energy efficiency category [ citation needed ]
March 2011: People and Environment Achievement Awards. Winner, Best Energy Saving Idea
March 2011: Climate Week Awards. Finalist in Best Campaign category [ 6 ]
May 2012: Ashden Awards Winner [ 7 ] | https://en.wikipedia.org/wiki/Student_Switch_Off |
A student information system ( SIS ), student management system , school administration software or student administration system is a management information system for education sector establishments used to manage student data. It integrates students, parents, teachers and the administration. Student information systems provide capabilities for registering students in courses; documenting grading , transcripts of academic achievement and co-curricular activities, and the results of student assessment scores ; forming student schedules; tracking student attendance; generating reports and managing other student-related data needs in an educational institution.
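As a rough sketch of the kind of records such a system manages (a hypothetical minimal data model, not the schema of any actual product):

```python
from dataclasses import dataclass, field

@dataclass
class Course:
    code: str
    title: str

@dataclass
class Enrollment:
    course: Course
    grade: str | None = None                             # set when grading is recorded
    attendance: list[str] = field(default_factory=list)  # dates attended

@dataclass
class Student:
    student_id: str
    name: str
    enrollments: list[Enrollment] = field(default_factory=list)

    def register(self, course: Course) -> None:
        """Register the student in a course."""
        self.enrollments.append(Enrollment(course))

    def transcript(self) -> dict[str, str]:
        """Course codes mapped to recorded grades."""
        return {e.course.code: e.grade for e in self.enrollments if e.grade}
```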
Information security is a concern, as universities house an array of sensitive personal information, making them potentially attractive targets for security breaches, such as those experienced by retail corporations or healthcare providers. [ 1 ]
| https://en.wikipedia.org/wiki/Student_information_system
Students for the Exploration and Development of Space ( SEDS ) is a non-profit international student organization whose purpose is to promote space exploration and development through educational and engineering projects. [ 1 ]
Students for the Exploration and Development of Space was founded in 1980 at MIT by Peter Diamandis, Princeton University by Scott Scharfman, and Yale University by Richard Sorkin, [ 2 ] [ 3 ] and consists of an international group of undergraduate and graduate students from a diverse range of educational backgrounds and universities who are working to promote space. SEDS is a chapter-based organization with chapters in Italy , Canada , India , Israel , Mexico , Nepal , Nigeria , Philippines , South Africa , Spain , Turkey , United Kingdom , United States , Sri Lanka , and Zimbabwe. The permanent National Headquarters for SEDS-USA resides at MIT and that of SEDS-India resides at Vellore Institute of Technology . Though collaboration is frequent, each branch and chapter is independent and coordinates their own activities and projects.
SEDS was founded on September 17, 1980, primarily by Peter Diamandis , Scott Scharfman, Richard Sorkin, Robert D. Richards , and Todd B. Hawley and their first meeting was held on October 30, 1980. [ 4 ] After the initial meetings in 1980, SEDS president Peter Diamandis wrote a letter to the editor of Omni magazine deploring the status of the space program and asking students to help make a difference. The letter, published in Omni in early 1981, attracted students from around the world to SEDS. This laid the foundations for the first SEDS international conference, held at George Washington University between July 15–19, 1982. [ 5 ] As the decade progressed, SEDS continued to have more international conferences, which rotated among schools including George Washington University (again), University of Alabama in Huntsville , and Caltech . During the end of the decade, UKSEDS was founded at the Science Museum (London) and held their first conference at the University of Cambridge during November 25–26, 1989. [ 6 ]
During the 1990s, SEDS continued to host a national conference each year, sometimes in conjunction with the International Space Development Conference through 1997, when the last "SEDS National Conference" was held (conferences would re-appear 7 years later as the "SEDS SpaceVision Conference"). UKSEDS continued to have national conferences at rotating locations each year. During the last years of the decade, there was a major decline in SEDS leadership and a connected drop in the number of member chapters around the United States.
In 2004, the SEDS National Conferences were re-established by MITSEDS and hosted on the campus of the Massachusetts Institute of Technology on November 11–14. [ 7 ] The conference was renamed the SEDS SpaceVision conference and featured many speakers who would return year after year during this decade, including Loretta Hidalgo Whitesides , founder Dr. Robert Richards, Rick Tumlinson , George T. Whitesides , Robert Zubrin , and Pete Worden . The SpaceVision conference then visited University of Illinois at Urbana-Champaign (2005), University of Central Florida (2006), the Massachusetts Institute of Technology (2007), Texas A&M University (2008), [ 8 ] University of Arizona (2009), University of Illinois at Urbana-Champaign (2010), University of Colorado at Boulder (2011), University at Buffalo (2012), Arizona State University (2013), UNC Chapel Hill [ 9 ] (2014), Boston University (2015), Purdue (2016), University of Central Florida (2017), University of California, San Diego (2018), [ 10 ] Arizona State University (2019), [ 11 ] virtually for 2020, Rice University (2021), University of Chicago (2022) and Georgetown University (2023). During this time, UKSEDS continued to have one national conference each year. SEDS India, after hosting the SEDS International conference in 2007, continued with SEDS India National Conferences every year since 2009 at Vellore Institute of Technology, India. SEDS also began exploring innovative national projects such as fund-raising for a joint SEDS chapter Zero-G flight and designing an innovative national Rockoon competition modeled after the Ansari X PRIZE .
SEDS-USA organizes annual and rolling projects to engage its members in space-related activities. Two such projects are:
This is a competition between chapters designed to challenge students in high-power rocketry. [ 12 ] The goal of the competition is to launch a rocket, designed and built by the chapter members, to an altitude of 10,000 feet above sea-level. This competition has now successfully been running since 2011. The winner of the 2012 competition was Purdue-SEDS.
Started in 2011, this competition is co-organized with the Space Frontier Foundation [ 13 ] and aims to provide students with a real-world experience in entrepreneurship applied to the space industry. Students are required to develop space-scalable business models that will advance the NewSpace movement and are judged by a panel of 5 experts who have had several years of experience in space entrepreneurship. The winners of the 2011 and 2012 competitions were Illinois State University [ 14 ] and Iowa State University respectively.
SEDS is organized by country, region, and chapter. There is a large contingent of SEDS chapters in the United States , which are governed regionally and nationally by SEDS-USA. SEDS India has nine SEDS chapters under it and is headquartered at Vellore Institute of Technology . UKSEDS is composed of five regions across the United Kingdom and has its headquarters at the British Interplanetary Society HQ in London. There are other national sections of SEDS across the world, notably SEDS-Canada, SEDS South Africa, and SEDS Zimbabwe, which has four chapters and a junior chapter. Student leaders of the international groups convene as SEDS-Earth, the global governing body of SEDS. SEDS is an organization member of the Alliance for Space Development . [ 15 ]
SEDS-USA is the governing body of all chapters in the United States, and is the largest and original branch of SEDS. It is overseen by a national board of directors , board of advisors , and a board of trustees . An integral aspect of SEDS-USA is the Council of Chapters (CoC). This council consists of national representatives of each chapter and is led by the Chair of the Council of Chapters. The CoC meets via teleconference to exchange updates between individual chapters and the national board. The 2022–23 national directors of SEDS-USA were drawn from the University of Chicago , Purdue University and Georgetown University . [ 16 ]
UK Students for the Exploration and Development of Space (UKSEDS) is the national student space society of the United Kingdom. Established in 1988, it is dedicated to promoting the exploration and development of space by inspiring, educating, and supporting students and young professionals interested in the space industry. [ 17 ] UKSEDS provides a platform for student collaboration on space projects, organises high-profile conferences and workshops, and conducts outreach activities aimed at fostering interest in space science and engineering among young people. The organisation acts as a bridge, building strong links between students, academia, and the wider space industry, both within the UK and internationally. [ 18 ]
UKSEDS was inspired by the efforts of students who attended the first International Space University (ISU) Space Studies Program held at MIT in 1988. Recognizing the potential to create a national community of space enthusiasts, these students organised a founding conference at London's Science Museum in March 1989. Later that year, a full conference was held at Cambridge University, cementing UKSEDS as a key player in the UK's space community.
In 2013, UKSEDS celebrated its 25th anniversary. Former committee members shared insights into UKSEDS’ development and contributions over the years. Dr Chris Welch, UKSEDS Chair from 1993 to 1995, recalled his initial involvement with SEDS and ISU during the International Astronautical Congress (IAC) in Brighton in 1987, where he met key figures like Peter Diamandis and Todd Hawley . [ 19 ]
Dr Ralph D. Lorenz , a founding committee member from 1988 to 1989, emphasised the importance of student-driven initiatives in sustaining UKSEDS amidst existing organisations like the British Interplanetary Society (BIS) and the Royal Aeronautical Society (RAeS) . Similarly, Richard Osborne highlighted the critical roles of Chris Welch and Mark Bentley in ensuring the organisation's continuity during challenging periods in the early 1990s. [ 20 ]
UKSEDS offers a wide range of activities designed to engage students and foster their development:
UKSEDS has undertaken numerous technical space projects, including:
UKSEDS operates under the guidance of an Executive Committee , elected annually at the organisation's Annual General Meeting (AGM) during its flagship conference, the National Student Space Conference (NSSC) . The committee is responsible for overseeing day-to-day operations, planning events, and ensuring alignment with the organisation's goals.
In addition to the executive committee, UKSEDS is supported by a Board of External Trustees , who are appointed for three-year terms. The trustees provide strategic oversight and ensure the organisation remains sustainable and impactful. [ citation needed ]
The 2024/25 Executive Committee was elected during the 36th NSSC in March 2024 and includes the following members:
UKSEDS collaborates with various organisations to advance its mission, including:
In 2013, UKSEDS formalised a Memorandum of Understanding with the British Interplanetary Society , enhancing cooperation between young members and experienced professionals. [ 31 ]
UKSEDS has had many prominent individuals serve on its executive committee, contributing to its development and influence in the space sector. Some notable past committee members include: [ citation needed ]
SEDS-Canada is a federally incorporated not-for-profit organization based in Toronto, Canada, whose mandate is to advocate for the exploration and development of space through non-partisan political advocacy, conferences, student competitions, and chapter grants. The organization was initiated in early 1981 by entrepreneur Bob Richards, and it was re-established in 2014 by a group of students from the University of Toronto and the University of Western Ontario , after several years of inactivity. SEDS-Canada currently has eleven university chapters operating across the country.
Students for the Exploration and Development of Space Turkey (SEDS TR), founded in March 2017 by Hadican Çatak at Hacettepe University , is the country's first and only national space and entrepreneurial student organization, with 350+ active members and branches at 8 universities as of January 2019.
SEDS TR's goal is to gather all interested undergraduates, master's degree students, and doctoral students and to carry out tasks that help them improve their career prospects in their field of activity by establishing a common working platform.
In order to reach this goal, SEDS TR works on engineering projects, organizes events and extends its reach by founding SEDS organizations at universities throughout Turkey, with the aim of making the operations and work mentioned above accessible to every student in Turkey. [ 32 ]
The SEDS-UAE Chapter is based at the Our Own English High School in Abu Dhabi. This chapter was founded by a high school student, Nishirth Khandwala. Members of SEDS UAE engage themselves in various activities and programs such as the International Asteroid Search Campaign. [ 33 ] [ 34 ]
SEDS-South Africa is South Africa's national student Space society, and is the governing body of all SEDS chapters in South Africa. SEDS South Africa is made up of students and young professionals in Southern Africa who are interested in Space exploration and development. This includes engaging government policymakers, amateur satellite building, model rocketry, manufacturing in Space, student and young professionals collaboration, connecting with the Space industry, ham amateur radio , analogue Space missions, Space exploration, and Space technology to benefit humankind.
SEDS South Africa's founding branch is the University of Cape Town , SEDS-SA-UCT. Branches include:
SEDS-India is the governing body of SEDS in India with its headquarters at Vellore Institute of Technology . SEDS India was founded in 2004 by Pradeep Mohandas and Abhishek Ray. The first chapter was established in Mumbai at PIIT, New Panvel. SEDS India governs affiliated chapters in India at various universities, including Vellore Institute of Technology , Veltech University, Birla Institute of Technology and Science, Pilani - Hyderabad Campus , Birla Institute of Technology & Science Pilani-K. K. Birla Goa Campus, Sri Ramakrishna Engineering College and SASTRA University. Chapter affairs are controlled by the Local Chapter Committee which reports to the executive board of SEDS India. The executive board of SEDS India consists of six board members who are selected through a voting process, with all individual members of SEDS India being eligible to vote. The Permanent Trustee of SEDS India is Geetha Manivasagam, Vellore Institute of Technology. The advisory panel has multiple dignitaries on its board, including the associate director of Vikram Sarabhai Space Center .
The main outreach program of SEDS India is called OneSpace. OneSpace was founded to spread awareness about and engagement with space among underprivileged children in rural India and children residing in local orphanages. SEDS India has also attempted to extend its outreach to northeast India, where access to space education and technical projects is more difficult. These efforts were led with the help of Angaraj Duara, an alumnus of Maharishi Vidyamandir Shilpukhuri, Guwahati, and established seven chapters in Assam: the Army Public School Narangi, Sharla Birla Gyan Jyoti School Guwahati, IIT-Guwahati, Handique Girls College, Royal Global Institute - RSET Guwahati, Donbosco Public School Panbazar and Tezpur University. SEDS-APSN was the first chapter in northeast India. A separate SEDS-NorthEast governing body oversees activities in the northeast.
SEDS (Singapore), founded in July 2019 by Vairavan Ramanathan and Nick Lee from National University of Singapore and the Nanyang Technological University respectively, is the first and only national space and entrepreneurial organization in Singapore. The goal of SEDS Singapore is to provide a platform for students of all backgrounds based in Singapore to actively participate in ushering in a new space age.
Currently, there are three SEDS chapters under SEDS (Singapore): NUS SEDS, based at the National University of Singapore ; SEDS-NTU, based at Nanyang Technological University ; and SEDS-SUTD, based at the Singapore University of Technology and Design .
Current Active Projects of SEDS (Singapore):
The most widespread astronomy-related organization in Sri Lanka , SEDS Sri Lanka provides myriad opportunities to enthusiastic school children and university undergraduates alike. Founded in September 2018 by graduate Amila Sandun Basnayake and undergraduate Thilan Harshana, SEDS Sri Lanka currently governs 16 chapters established under it. These chapters, drawn from a number of government and private universities along with a separate chapter for school children named SEDS Juniors, carry out a wide range of activities throughout the year. [ 35 ]
These opportunities take many forms, including onsite and online workshops, SEDS Space Talks, competitions, citizen-scientist ventures and educational programs for juniors. Notable among these are the first high-altitude balloon launched by Sri Lanka under the project SERENDIB 1.0, the NASA Space Apps hackathon conducted in collaboration with NASA, and the numerous asteroid hunts held in collaboration with Pan-STARRS .
SEDS Philippines (SEDSPH) is the official Philippine chapter of the Students for the Exploration and Development of Space or SEDS.
Macedonian Students for the Exploration and Development of Space (MK-SEDS / МК-СИРК) is the national and regional governing body for SEDS Chapters in Macedonia and Europe.
MK-SEDS initially started in 2019 as a self-organized, student-run Macedonian Cosmic Institute at the Ss. Cyril and Methodius University in Skopje, on the initiative of a few students.
On October 18, 2020, students, alumni and youth from Ss. Cyril and Methodius University of Skopje , Goce Delchev University of Shtip and St. Kliment Ohridski University of Bitola , united in their intention to represent the force of Good, Beauty and Truth in the cosmic community of planet Earth, adopted the decision to register the Macedonian Students for the Exploration and Development of Space.
The permanent headquarters of MK-SEDS resides at the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University in Skopje.
Zimbabwe has multiple SEDS chapters at its major universities namely University of Zimbabwe , National University of Science and Technology , Midlands State University and Chinhoyi University of Technology . It also has a junior SEDS chapter that is aimed at introducing space education to students in high schools. In December 2021, SEDS MSU was one of the 10 teams in the world to be part of the Global Satellite Tracking Initiative where they were recipients of equipment to set up a ground station at Midlands State University. | https://en.wikipedia.org/wiki/Students_for_the_Exploration_and_Development_of_Space |
Studio monitors are loudspeakers in speaker enclosures specifically designed for professional audio production applications, such as recording studios , filmmaking , television studios , radio studios and project or home studios, where accurate audio reproduction is crucial. Among audio engineers , the term monitor implies that the speaker is designed to produce relatively flat (linear) phase and frequency responses . In other words, it exhibits minimal emphasis or de-emphasis of particular frequencies, the loudspeaker gives an accurate reproduction of the tonal qualities of the source audio ("uncolored" and "transparent" are synonyms), and there will be no relative phase shift of particular frequencies—meaning no distortion in sound-stage perspective for stereo recordings.
Beyond stereo sound-stage requirements, a linear phase response helps impulse response remain true to source without encountering "smearing". An unqualified reference to a monitor often refers to a near-field (compact or close-field) design. This is a speaker small enough to sit on a stand or desk in proximity to the listener, so that most of the sound that the listener hears is coming directly from the speaker, rather than reflecting off walls and ceilings (and thus picking up coloration and reverberation from the room). Monitor speakers may include more than one type of driver (e.g., a tweeter and a woofer ) or, for monitoring low-frequency sounds, such as bass drum , additional subwoofer cabinets may be used.
There are studio monitors designed for mid-field or far-field use as well. These are larger monitors with approximately 12 inch or larger woofers, suited to the bigger studio environment. They extend the width of the sweet spot, allowing "accurate stereo imaging for multiple persons". [ 1 ] They tend to be used in film scoring environments, where simulation of larger sized areas like theaters is important. [ 2 ]
Also, studio monitors are made in a more physically robust manner than home hi-fi loudspeakers; whereas home hi-fi loudspeakers often only have to reproduce compressed commercial recordings, studio monitors have to cope with the high volumes and sudden sound bursts that may happen in the studio when playing back unmastered mixes.
Broadcasting and recording organisations employ audio engineers who use loudspeakers to assess the aesthetic merits of the programme and to tailor the balance by audio mixing and mastering to achieve the desired result. Loudspeakers are also required at various points in the audio processing chain to enable engineers to ensure that the programme is reasonably free from technical defects, such as audible distortion or background noise. [ 3 ]
The engineer may mix programming that will sound pleasing on the widest range of playback systems used by regular listeners (i.e. high-end audio , low-quality radios in clock radios and " boom boxes ", in club PA systems , in a car stereo or a home stereo). While some broadcasters like the BBC generally believe in using monitors of "the highest practicable standard of performance", [ 3 ] some audio engineers argue that monitoring should be carried out with loudspeakers of mediocre technical quality to be representative of the regular systems end-users are likely to be listening with; or that some technical defects are apparent only with high-grade reproducing equipment and therefore can be ignored. [ 3 ] However, as a public broadcaster dealing with a lot of live material, the BBC holds the view that studio monitors should be "as free as possible from avoidable defects". It is argued that real life low-grade sound systems are so different that it would be impossible to compensate for the characteristics of every type of system available; technical faults must not be apparent to even a minority of listeners while remaining undetected by the operating staff. It is further argued that, because of technical progress in the science of sound transmission, equipment in the studio originating the programme should have a higher standard of performance than the equipment employed in reproducing it, since the former has a longer life. [ 3 ]
In fact, most professional audio production studios have several sets of monitors spanning the range of playback systems in the market. This may include a sampling of large, expensive speakers as may be used in movie theatres, hi-fi style speakers, car speakers, portable music systems, PC speakers and consumer-grade headphones. [ citation needed ]
Amplification: Studio monitors may be " active " (including one or more internal power amplifier(s)), or passive (requiring an external power amplifier ). Active models are usually bi-amplified , which means that the input sound signal is divided into two parts by an active crossover for low and high frequency components. Both parts are amplified using separate low- and high-frequency amplifiers, and then the low-frequency part is routed to a woofer and the high-frequency part is routed to a tweeter or horn . Bi-amplification is done so that a cleaner overall sound reproduction can be obtained, since signals are easier to process before power amplification. Consumer loudspeakers may or may not have these various design goals.
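The low/high split performed by an active crossover can be sketched in a few lines of DSP. A simplified illustration using a fourth-order Linkwitz–Riley-style crossover (the 48 kHz sample rate and 2 kHz crossover frequency are arbitrary assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000   # sample rate, Hz (assumed)
fc = 2_000    # crossover frequency, Hz (assumed)

# two cascaded 2nd-order Butterworth sections per band -> 4th-order Linkwitz-Riley
lp = butter(2, fc, btype="low", fs=fs, output="sos")
hp = butter(2, fc, btype="high", fs=fs, output="sos")

def crossover(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split the input into woofer and tweeter feeds before power amplification."""
    low = sosfilt(lp, sosfilt(lp, x))
    high = sosfilt(hp, sosfilt(hp, x))
    return low, high
```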
In the early years of the recording industry in the 1920s and 1930s, studio monitors were used primarily to check for noise interference and obvious technical problems rather than for making artistic evaluations of the performance and recording. Musicians were recorded live and the producer judged the performance on this basis, relying on simple tried-and-true microphone techniques to ensure that it had been adequately captured; playback through monitors was used simply to check that no obvious technical flaws had spoiled the original recording.
As a result, early monitors tended to be basic loudspeaker cabinets. The state-of-the-art loudspeakers of the era were massive horn-loaded systems which were mostly used in cinemas. High-end loudspeaker design grew out of the demands of the motion picture industry and most of the early loudspeaker pioneers worked in Los Angeles where they attempted to solve the problems of cinema sound. Stereophonic sound was in its infancy, having been pioneered in Britain by an engineer who worked for EMI . Designing monitors for recording studios was not a major priority.
The first high-quality loudspeaker developed expressly as a studio monitor was the Altec Lansing Duplex 604 in 1944. This innovative driver has historically been regarded as growing out of the work of James Bullough Lansing who had previously supplied the drivers for the Shearer Horn in 1936, a speaker that had rapidly become the industry standard in motion-picture sound. He had also designed the smaller Iconic and this was widely employed at the time as a motion-picture studio monitor. The 604 was a relatively compact coaxial design and within a few years it became the industry standard in the United States, a position it maintained in its various incarnations (the 604 went through eleven model-changes) over the next 25 years. It was common in US studios throughout the 1950s and 60s and remained in continuous production until 1998.
In the UK, Tannoy introduced its own coaxial design, the Dual Concentric, and this assumed the same reference role in Europe as the Altec 604 held in the US. In around 1948, BBC researchers conducted evaluations on as many speakers as they could obtain, but found commercial loudspeaker makers had little to offer that met their requirements. The BBC needed speakers that worked well with programme material in real professional and domestic environments, not just ones that satisfied technical measurements such as frequency response and distortion figures taken in anechoic chambers. Above all, the BBC required monitors to sound balanced, be neutral in tone, and lack colouration. [ 4 ] Monitor usage in the industry was highly conservative, with almost monopolistic reliance on industry "standards", in spite of the sonic failings of these aging designs. The Altec 604 had a notoriously ragged frequency response, but almost all U.S. studios continued to use it because virtually every producer and engineer knew its sound intimately and was practiced at listening through its sonic limitations.
Recording through unfamiliar monitors, no matter how technically advanced, was hazardous because engineers unfamiliar with their sonic signatures could make poor production decisions, and it was financially unviable to give production staff expensive studio time to familiarize themselves with new monitors. As a result, virtually every U.S. studio had a set of 604s and every European studio a Tannoy Dual Concentric or two. However, in 1959, at the height of its industry dominance, Altec made the mistake of replacing the 604 with the 605A Duplex, a design widely regarded as inferior to its predecessor. There was a backlash from some record companies and studios, and this allowed Altec's competitor, JBL (a company originally started by 604 designer James B. Lansing), to make inroads into the pro monitor market.
Capitol Records replaced their Altecs with JBL D50 monitors and a few years later their UK affiliate, EMI , also made the move to JBLs. Although Altec re-introduced the 604 as the "E" version Super Duplex in response to the criticism, they now had a major industry rival to contend with. Over the next decade most of the developments in studio monitor design originated from JBL.
As the public broadcaster in the UK, the BBC had the determining role in defining industry standards. Its renowned research departments invested considerable resources in identifying studio monitors suited to its different broadcasting needs, and also created their own models from first principles. A 1958 research paper identified the sound goal, in a monaural system:
It is assumed that the ideal to be aimed at in the design of a sound reproducing system is realism, i.e. that the listener should be able to imagine himself to be in the presence of the original source of sound.
There is, of course, scope for legitimate experiment in the processing of the reproduced signals in an endeavour to improve on nature, however, realism, or as near an approach to it as may be possible, ought surely to be regarded as the normal condition and avoidable departures from this state, while justified upon occasion, should not be allowed to become a permanent feature of the system. [ 3 ]
In designing a loudspeaker, the BBC recognized the compromise that had to be struck between size, weight and cost considerations. Two-way designs were preferred due to the inherently simpler crossover network, but were subject to the limitations of speaker-driver technology at the time – there were few high-frequency units available that functioned down to 1.5 kHz, meaning that the woofer had to operate in a predictable manner up to about 2 kHz. [ 5 ] The BBC developed a two-way studio monitor in 1959, the LS5/1, using a 58mm Celestion tweeter and 380mm Goodmans bass unit, but continually had problems with consistency of the bass units. The successful testing of a 305mm bass cone made with new thermoplastics led to the development and deployment of the LS5/5 and LS5/6 monitors, which occupied only 60% of the volume of their predecessors. [ 5 ]
As recording became less and less "live" and multi-tracking and overdubbing became the norm, the studio monitor became far more crucial to the recording process. When there was no original performance outside what existed on the tape, the monitor became the touchstone of all engineering and production decisions. As a result, accuracy and transparency became paramount and the conservatism evident in the retention of the 604 as the standard for over twenty years began to give way to fresh technological development. Despite this, the 604 continued to be widely used - mainly because many engineers and producers were so familiar with their sonic signature that they were reluctant to change.
In a BBC white paper published in January 1963, the authors explored two-channel stereophony, and remarked that it was at a disadvantage compared with multi-channel stereophony that was already available in cinemas in that "the full intended effects is apparent only to observers located within a restricted area in front of the loudspeakers". The authors expressed reservations about dispersion and directionality in 2-channel systems, noting that the "face-to-face listening arrangement" was not able to give an acceptable presentation for a centrally-located observer in a domestic setting. [ 6 ] The paper concluded:
The achievement of suitable directional characteristics within the aesthetic and economic limitations applying to domestic equipment will however require a much greater research effort than either the corporation or the radio industry have so far been able to devote to the subject. [ 6 ]
To complement its larger two-way monitors for studio use, the BBC developed a small speaker for near-field monitoring of the frequency range from 400 Hz to about 20 kHz for its outside broadcasting monitoring. The principal constraints were space and situations where using headphones is unsatisfactory, such as in mobile broadcasting vans. Based on scaling tests done in 1968, and detailed audio work against the LS5/8 – a large "Grade I monitor" already in use at the time – and with live sources, the BBC Research Department developed the LS3/5, which became the famous LS3/5A that was used from 1975 to much of the 1990s and beyond by the BBC and audiophiles alike. [ 7 ] [ 8 ] [ 9 ]
In the late 1960s JBL introduced two monitors which helped secure them pre-eminence in the industry. The 4320 was a direct competitor to the Altec 604 but was a more accurate and powerful speaker and it quickly made inroads against the industry standard. However, it was the more compact 4310 that revolutionized monitoring by introducing the idea of close or "nearfield" monitoring. (The sound field very close to a sound source is called the "near-field." By "very close" is meant in the predominantly direct, rather than reflected, sound field. A near-field speaker is a compact studio monitor designed for listening at close distances (3 to 5 feet (0.9 to 1.5 m)), so, in theory, the effects of poor room acoustics are greatly reduced.)
The 4310 was small enough to be placed on the recording console and listened to from much closer distances than the traditional large wall-(or "soffit") mounted main monitors. As a result, studio-acoustic problems were minimized. Smaller studios found the 4310 ideal and that monitor and its successor, the 4311, became studio fixtures throughout the 1970s. Ironically, the 4310 had been designed to replicate the sonic idiosyncrasies of the Altec 604 but in a smaller package to cater for the technical needs of the time.
The 4311 was so popular with professionals that JBL introduced a domestic version for the burgeoning home-audio market. This speaker, the JBL L-100 (or "Century"), was a massive success and became the biggest-selling hi-fi speaker ever within a few years. By 1975, JBL had overtaken Altec as the monitor of choice for most studios. The major studios continued to use huge designs mounted on the wall which were able to produce prodigious SPLs and amounts of bass.
This trend reached its zenith with The Who 's use of a dozen JBL 4350 monitors, each capable of 125 dB and containing two fifteen-inch woofers and a twelve-inch mid-bass driver. Most studios, however, also used more modest monitoring devices to check how recordings would sound through car speakers and cheap home systems. A favourite "grot-box" monitor employed in this way was the Auratone 5C, a crude single-driver device that gave a reasonable facsimile of typical lo-fi sound.
However, a backlash against the behemoth monitor was soon to take place. With the advent of punk , new wave , indie , and lo-fi , a reaction to high-tech recording and large corporate-style studios set in and do-it-yourself recording methods became the vogue. Smaller, less expensive, recording studios needed smaller, less expensive monitors and the Yamaha NS-10 , a design introduced in 1978 ironically for the home audio market, became the monitor of choice for many studios in the 1980s. [ 10 ] While its sound-quality has often been derided, even by those who monitor through it, the NS-10 continues in use to this day and many more successful recordings have been produced with its aid over the past twenty five years than with any other monitor. [ 11 ] [ 12 ]
By the mid-1980s the near-field monitor had become a permanent fixture. The larger studios still had large soffit-mounted main monitors but producers and engineers spent most of their time working with near-fields. Common large monitors of the time were Eastlake / Westlake monitors with twin 15" bass units, a wooden midrange horn and a horn-loaded tweeter. The UREI 813 was also popular. Based on the almost ageless Altec 604 with a Time-Align passive crossover network developed by Ed Long , it included delay circuitry to align the acoustic centers of the low and high-frequency components. Fostex "Laboratory Series" monitors were used in a few high-end studios, but with increasing costs of manufacture, they became rare. The once dominant JBL fell gradually into disfavour.
One of the most striking trends was the growth of soft-dome monitors, which operated without horn-loaded drivers. Horns, while having advantages in transient response and efficiency, tend to be hard to listen to over long periods. The low distortion of high-end dome midranges and tweeters made them easy to work with all day (and night). Typical soft-dome systems were made by Roger Quested, ATC, Neil Grant and PMC and were actively driven by racks of active crossovers and amplifiers. Other monitor and studio designers like Tom Hidley, Phil Newall and Sam Toyoshima continued research into the speaker/room interface and led developments in room design, trapping, absorption and diffusion to create a consistent and neutral monitoring environment.
The main post-NS-10 trend has been the almost universal acceptance of powered monitors, where the speaker enclosure contains the driving amplifiers. Passive monitors require outboard power amplifiers to drive them, as well as speaker wire to connect them. Powered monitors, by contrast, are comparatively convenient, streamlined single units for which marketers also claim a number of technical advantages: the interface between speaker and amplifier can be optimized, possibly offering greater control and precision, and advances in amplifier design have reduced the size and weight of the electronics significantly. The result has been that passive monitors have become far less common than powered monitors in project and home studios.
In the 2000s, there was a trend to focus on "translation". Engineers tended to choose monitors less for their accuracy than for their ability to "translate" – to make recordings sound good on a variety of playback systems, from stock car radios and standard boom boxes to esoteric audiophile systems. As the mix engineer Chris Lord-Alge has noted:
But it is uncertain just what tools aid translation. Some producers argue that accuracy is still the best guarantee. If a producer or audio engineer is listening to recorded tracks and mixing tracks using a "flattering" monitor speaker, they may miss subtle problems in the mic'ing or recording quality that a more precise monitor would expose. Other producers feel that monitors should mimic home audio and car speakers, as this is what most consumers listen to music on. Still more believe that monitors need to be relentlessly unflattering, so that the producer and engineer must work hard to make recordings sound good.
No speaker, monitor or hi-fi sound system, regardless of design principle or cost, has a completely flat frequency response; all speakers color the sound to some degree. Monitor speakers are assumed to be as free as possible from coloration . While no rigid distinction exists between consumer speakers and studio monitors, manufacturers usually accent the difference in their marketing material. Generally, studio monitors are physically robust, to cope with the high volumes and physical knocks that may happen in the studio, and are used for listening at shorter distances (e.g., near field) than hi-fi speakers, though nothing precludes them from being used in a home-sized environment. In one prominent recording magazine, Sound on Sound , the number of self-amplified (active) studio monitor reviews has significantly outweighed the number of passive monitor reviews over the past two decades, indicating that studio monitors are predominantly self-amplified, although not exclusively so. [ 14 ] Hi-fi speakers usually require external amplification. [ 15 ]
Monitors are used by almost all professional producers and audio engineers. The claimed advantage of studio monitors is that the production translates better to other sound systems. [ 16 ] In the 1970s, the JBL 4311's domestic equivalent, the L-100, was used in a large number of homes, while the Yamaha NS-10 served both domestically and professionally during the 1980s. Although the LS3/5A was not conceived as a commercial product, the BBC licensed its production while also using the monitor internally; it was commercially successful over its twenty-something-year life, [ 9 ] [ 17 ] from 1975 until approximately 1998. The diminutive BBC speaker has amassed an "enthusiastic, focused, and ... loyal following", according to Paul Seydor in The Absolute Sound . [ 18 ] Estimates of its sales differ, but are generally around 100,000 pairs. [ 18 ] [ 19 ]
Professional audio companies such as Genelec , Neumann (formerly Klein + Hummel), Quested , and M & K sell almost exclusively to recording studios and record producers , who are the key players in the professional monitor market. Most consumer audio manufacturers confine themselves to supplying speakers for home hi-fi systems. Companies that straddle both worlds, like ADAM , Amphion Loudspeakers , ATC , Dynaudio , Focal/JM Labs , JBL , PMC , surrounTec and Tannoy , tend to clearly differentiate their monitor and hi-fi lines. | https://en.wikipedia.org/wiki/Studio_monitor
Study of Environmental Arctic Change ( SEARCH ) is a collaborative program of Arctic researchers, funding agencies, and others that facilitates the synthesis of Arctic science and communicates the current understanding to help society respond to a rapidly changing Arctic.
SEARCH activities are supported by a collaborative grant from the National Science Foundation Division of Polar Programs, Arctic Sciences Section to the International Arctic Research Center , and the Arctic Research Consortium of the US. These resources have been further supplemented by recent contributions from the National Center for Atmospheric Research , the U.S. Geological Survey , the University of Alaska Fairbanks , the Center for the Blue Economy ( Middlebury Institute of International Studies at Monterey ) and the U.S. Arctic Research Commission . Other agencies have provided program support in the past and continue their intellectual engagement.
SEARCH focuses on how shrinking land ice, diminishing sea ice, and degrading permafrost impact Arctic and global systems. [ 1 ] | https://en.wikipedia.org/wiki/Study_of_Environmental_Arctic_Change |
In field theory , the Stueckelberg action (named after Ernst Stueckelberg [ 1 ] ) describes a massive spin-1 field as an R (the real numbers are the Lie algebra of U(1) ) Yang–Mills theory coupled to a real scalar field ϕ {\displaystyle \phi } . This scalar field takes on values in a real 1D affine representation of R with m {\displaystyle m} as the coupling strength .
This is a special case of the Higgs mechanism , where, in effect, λ (and thus the mass of the Higgs scalar excitation) has been taken to infinity, so the Higgs has decoupled and can be ignored, resulting in a nonlinear, affine representation of the field, instead of a linear representation; in contemporary terminology, a U(1) nonlinear σ -model.
Gauge-fixing ϕ = 0 {\displaystyle \phi =0} yields the Proca action .
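For orientation, one common way of writing such a Lagrangian is the following sketch (signs and normalizations vary between references):

```latex
% Stueckelberg Lagrangian for a massive U(1) field (one common convention)
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
  + \tfrac{1}{2}\,\bigl(\partial_\mu \phi + m A_\mu\bigr)\bigl(\partial^\mu \phi + m A^\mu\bigr),
\qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .
% Invariant under  A_\mu \to A_\mu - \partial_\mu \varepsilon , \quad \phi \to \phi + m\,\varepsilon ;
% setting \phi = 0 leaves the Proca form  -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + \tfrac{1}{2} m^2 A_\mu A^\mu .
```

In this form the shift symmetry of ϕ makes the affine (nonlinear) nature of the representation explicit, and the gauge choice ϕ = 0 recovers the Proca action mentioned above.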
This explains why, unlike the case for non-abelian vector fields, quantum electrodynamics with a massive photon is, in fact, renormalizable , even though it is not manifestly gauge invariant (after the Stückelberg scalar has been eliminated in the Proca action ).
The Stueckelberg extension of the Standard Model ( StSM) consists of a gauge invariant kinetic term for a massive U(1) gauge field. Such a term can be implemented into the Lagrangian of the Standard Model without destroying the renormalizability of the theory, and it further provides a mechanism for mass generation that is distinct from the Higgs mechanism in the context of Abelian gauge theories.

The model involves a non-trivial mixing of the Stueckelberg and the Standard Model sectors by including an additional term in the effective Lagrangian of the Standard Model given by
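(The displayed expression is reconstructed here only as a sketch consistent with the following sentence; C μ denotes the Stueckelberg U(1) gauge field and B μ the Standard Model hypercharge gauge field, both identifications being assumptions of this sketch.)

```latex
% Sketch of a Stueckelberg extension term (conventions vary between references)
\Delta\mathcal{L}_{\mathrm{St}} = -\tfrac{1}{4}\, C_{\mu\nu} C^{\mu\nu}
  - \tfrac{1}{2}\,\bigl(\partial_\mu \sigma + M_1 C_\mu + M_2 B_\mu\bigr)^2 ,
\qquad C_{\mu\nu} = \partial_\mu C_\nu - \partial_\nu C_\mu .
```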
The first term above is the Stueckelberg field strength, M 1 {\displaystyle M_{1}} and M 2 {\displaystyle M_{2}} are topological mass parameters and σ {\displaystyle \sigma } is the axion.
After symmetry breaking in the electroweak sector the photon remains massless. The model predicts a new type of gauge boson dubbed Z S t ′ {\displaystyle Z'_{\rm {St}}} , which has a very distinct, narrow decay width in this model. The St sector of the StSM decouples from the SM in the limit M 2 / M 1 → 0 {\displaystyle M_{2}/M_{1}\to 0} .
Stueckelberg-type couplings arise quite naturally in theories involving compactifications of higher-dimensional string theory ; in particular, these couplings appear in the dimensional reduction of ten-dimensional N = 1 supergravity coupled to supersymmetric Yang–Mills gauge fields in the presence of internal gauge fluxes. In the context of intersecting D-brane model building, products of U(N) gauge groups are broken to their SU(N) subgroups via the Stueckelberg couplings, and thus the Abelian gauge fields become massive. Further, in a much simpler fashion, one may consider a model with only one extra dimension (a type of Kaluza–Klein model ) and compactify down to a four-dimensional theory. The resulting Lagrangian will contain massive vector gauge bosons that acquire masses through the Stueckelberg mechanism. | https://en.wikipedia.org/wiki/Stueckelberg_action
Stuff Matters: Exploring the Marvelous Materials That Shape Our Man-Made World is a 2014 non-fiction book by the British materials scientist Mark Miodownik . The book explores many of the common materials people encounter during their daily lives and seeks to explain the science behind them in an accessible manner. Miodownik devotes a chapter each to ten such materials, discussing their scientific qualities alongside quirky facts and anecdotes about their impacts on human history. Called "a hugely enjoyable marriage of science and art", [ 1 ] Stuff Matters was critically and commercially successful, becoming a New York Times best seller and a winner of the Royal Society Prize for Science Books .
Miodownik was working at University College London as a professor of materials and society at the time the book was published. [ 2 ] He first gained interest in his field of study during his teenage years following an attempted robbery while on the subway . He was stabbed with a razor through multiple layers of clothing, leading him to be curious about the qualities of steel that provided for such a sharp and strong edge. [ 3 ] The author would go on to earn a doctorate in jet engine alloys before entering into academic work. [ 2 ] He was an occasional presenter on instructional television programs, and the year after publishing Stuff Matters he was the recipient of the American Association for the Advancement of Science 's Public Engagement with Science Award. [ 4 ] [ 5 ] Stuff Matters was the author's first published popular science work. [ 4 ]
Each of the book's chapters begins with the same photograph of Miodownik sitting on the rooftop of his London apartment. [ 6 ] In each iteration of the photo, a different object is circled – a teacup in one chapter, a flowerpot in another, and so on – with that chapter focused on the history and science of the material of which the highlighted item consists. [ 7 ] Over the course of the book, Miodownik covers a number of materials that have been around for a long time ( steel , paper , glass , porcelain ), some introduced last century ( concrete , plastics , carbon fiber ), and a few relatively new inventions ( graphene , aerogels ). He includes a chapter on chocolate due in large part to his own obsession with the sweet. [ 8 ] Miodownik seeks to draw connections from the materials to the lives of the people who use them, saying, "The material world is not just a display of our technology and culture, it is part of us. We invented it, we made it, and in turn, it makes us who we are." [ 6 ]
The author takes varying approaches to explaining each material's attributes and their importance, since according to him, the "materials and our relationships with them are too diverse for a single approach to suit them all". [ 9 ] In the process of describing the book's subjects he intersperses scientific knowledge with insights into the materials' impacts on human history. [ 3 ] For instance, historically the Chinese had a technological edge over the rest of the world in many respects (they alone held the secret to making porcelain for hundreds of years, for one example). However, their culture preferred other materials over glass, and Miodownik surmises that the resulting lack of advancement with that substance later held the culture back scientifically, as glass is a key component in such tools as microscopes and telescopes . [ 3 ] [ 8 ] Elsewhere, the author describes how the sudden 19th-century surge in popularity of billiards can be linked to the invention of both nylon and vinyl (the need for a cheap alternative to ivory for making pool balls led to the increased development of celluloid , the success of which led to further innovation in plastics). [ 7 ]
While much of the book relates the history of the selected materials, Miodownik also devotes time to many of their futures, including the development of a type of concrete that is infused with bacteria meant to self-repair cracks as they occur. [ 7 ] Also described are aerogels, which are ultralight materials that are the best thermal insulators known to man. [ 10 ] Composed of over 99% air, these materials are able to produce Rayleigh scattering in much the same way as the Earth's atmosphere, thereby appearing blue to the naked eye. [ 8 ] [ 10 ] This effect, combined with the aerogel's light weight, leads Miodownik to say that holding a sample is "like holding a piece of sky". [ 10 ] The material is extremely expensive to make, however, and outside of occasional specific applications for NASA (it was a key component of that agency's Stardust mission), practical uses have been difficult to find. [ 10 ]
Miodownik writes that civilization is built on the materials around us, and that we acknowledge their importance by naming our historical eras after them. [ 9 ] [ 11 ] The Stone , Bronze , and Iron Ages are well known, and Miodownik argues that the steel age likely began in the late 19th century and we could be considered to be currently living in the silicon age. [ 11 ] The constant desire for improvements in our lives (improved comfort, improved safety, etc.) drives the constant improvements to the materials that comprise our world. Therefore, Miodownik concludes, materials are "a multi-scale expression of our human needs and desires". [ 9 ]
Stuff Matters was a New York Times best seller and won the 2014 Royal Society Prize for Science Books as well as the 2015 National Academies Communication Award . [ 12 ] The book was released to generally positive reviews. Writing for The New York Times Book Review , Rose George praised Miodownik's blend of science and storytelling. [ 2 ] The Wall Street Journal called it a "thrilling account of the modern material world", [ 3 ] while The Independent was impressed with the "learned, elegant discourse" Miodownik conducts in each chapter. [ 1 ] The Observer 's Robin McKie considered the book "deftly written" and appreciated the author's conclusions drawn from the historical record. [ 11 ] The reviewer for the Financial Times enjoyed the book but was critical of the occasional error, as when Miodownik mistakenly identifies the Greek word for chocolate as being much older than it is. [ 4 ]
The reviewer for Entertainment Weekly wrote that Miodownik occasionally lapsed into technical speak in a book meant for a broader audience, but that the author's clear enthusiasm for his subject outweighed any such negative aspects. [ 13 ] Science News considered Miodownik's explanations of the more science-intensive material to be accessible and praised the humor interspersed throughout the book. [ 14 ] Stuff Matters was well-received by certain trade journals as well. The American Ceramic Society Bulletin wrote that Miodownik's writing worked both as an introduction to the layperson as well as a "reminder of the field's broad purpose" for those with more knowledge on the subject, [ 6 ] while the journal of the Boston Society of Architects particularly enjoyed the book's chapter on concrete. [ 7 ] Bill Gates reviewed the book favorably on his website, writing, "In political contests, voters sometimes put more weight on whether they'd like to have a beer with a candidate than on the candidate's qualifications. Miodownik would pass anyone's beer test , and he has serious qualifications." [ 15 ] | https://en.wikipedia.org/wiki/Stuff_Matters |
The Sturgeon Refinery , also known as the NWR Sturgeon Refinery, is an 80,000 bbl/d (13,000 m 3 /d) bitumen refinery built and operated by North West Redwater Partnership (NWRP) in a public-private partnership with the Alberta provincial government. It is located in Sturgeon County northeast of Edmonton , Alberta , [ 3 ] in Alberta's Industrial Heartland . Premier Jason Kenney announced on July 6, 2021, that the province of Alberta had acquired NWRP's equity stake, representing 50% of the $10-billion project, with the other 50% owned by Canadian Natural Resources . [ 4 ]
The Sturgeon Refinery is owned and operated by Canadian Natural Resources Ltd. and the Alberta government. On July 6, 2021, Premier Jason Kenney announced that the province of Alberta had acquired a 50% "equity stake" in the Sturgeon Refinery through the APMC, which now owns the "stake previously owned by Calgary-based North West Refining Inc." In the Financial Post article reporting the acquisition, the refinery was described as "over-budget and behind-schedule". [ 4 ]
Previously, the NWRP/Sturgeon Refinery contractual and ownership structure consisted of three main parties who entered into a public-private partnership agreement: Canadian Natural Resources, North West Refining Inc and the Government of Alberta's Crown corporation, the Alberta Petroleum Marketing Commission (APMC). [ 1 ] [ 2 ] : 22 According to their agreement, as described in the 2018 report by the Office of the Auditor General of Alberta, the APMC, which is responsible for the implementation of Alberta's Bitumen Royalty-in-Kind (BRIK) policy and processing agreements, [ 5 ] has a financial obligation to supply 75% of the feedstock to the refinery and to take on 75% of the funding commitment of the toll obligation and 75% of the subordinated debt. [ 2 ] : 22 The toll obligation, which the APMC pays, is a processing fee or toll for each barrel of bitumen refined. This includes an operating toll, a debt toll, an equity toll, and an incentive fee. [ 2 ] : 26 The original assessment included a capital cost cap of $6.5 billion. [ 2 ] : 26 In return, APMC can collect Bitumen Royalty-in-Kind (BRIK) when the refinery is fully operational. Under the agreement, Canadian Natural Resources Partnership (CNR), which is 100% owned by Canadian Natural Resources Limited (CNRL) and has 50% ownership of North West Redwater Partnership (NWRP), provides 25% of the feedstock and 25% of the toll obligation. [ 2 ] : 22
North West Refining Inc. owns the other half of North West Redwater Partnership (NWRP) through two subsidiaries—North West Upgrading LP (NWU) and North West Phase One Inc. The North West Redwater Holding Corporation and the NWR Financing Company Ltd are both 100% owned by North West Redwater Partnership (NWRP). [ 2 ] : 22
A February 2018 report by the Office of the Auditor General of Alberta entitled "APMC Management of Agreement to Process Bitumen at the Sturgeon Refinery" said that the original agreement between the Alberta government and North West Redwater Partnership (NWRP) resulted in the province taking on "many of the risks as if it were building the refinery as a 75 per cent tollpayer in this arrangement". [ 2 ] : 23 The APMC has only one vote, representing 25% of decision-making power in the partnership, while the two private companies together hold 75% of the decision-making power. [ 2 ] : 23 By contrast, of the $CDN26 billion in toll payments to be made over a thirty-year period, the APMC is responsible for 75% while CNRL is responsible for the rest. [ 2 ] : 23 Because of the "unconditional nature of the debt component of the toll payments", a "substantial amount of the risk was transferred to the province" when the APMC entered into these agreements. [ 2 ] : 23
The AG's report described the arrangement between Alberta's provincial government and the NWRP as "high-benefit" and "high-risk": a "$26 billion commitment on behalf of the government to supply bitumen feedstock to the NWR Sturgeon refinery over a thirty year period". : 1 [ Notes 1 ] When the Department of Energy and the APMC acknowledged that taking bitumen-in-kind was not "practical or cost-efficient", the APMC entered into contracts with bitumen suppliers to provide the 75% feedstock to fulfill its commitment to the refinery. In effect, the APMC is purchasing bitumen instead of collecting bitumen-in-kind royalties. [ 2 ] : 24
During construction, the APMC CEO and some staff managed the contract itself; NWRP, with its 400 staff members, oversaw the actual construction and "risk management activities". [ 2 ] : 24 [ Notes 2 ]
In 2017, Alberta's Industrial Heartland Association's website listed NWRP's Sturgeon Refinery as one of the major energy projects in the Heartland, "Canada's largest hydrocarbon processing center" with over forty companies. [ 6 ] The Heartland's geographic region encompasses its five municipal partners: the City of Fort Saskatchewan , Lamont County , Strathcona County , Sturgeon County , and the City of Edmonton . [ 6 ]
According to Global News , the $CDN1.2 billion [ 7 ] Alberta Carbon Trunk Line System (ACTL), a 240-kilometre (150 mi) CO 2 pipeline which came online on June 2, 2020, is part of NWRP's Sturgeon refinery system. [ 8 ] The ACTL is a "major carbon capture project", according to the NWRP, and is Alberta's "largest carbon capture and storage system". [ 7 ] The ACTL, which was partially financed through federal government programs and the Canada Pension Plan Investment Board (CPPIB), is owned and operated by Enhance Energy and Wolf Midstream. [ 9 ] [ Notes 3 ] The ACTL captures carbon dioxide from industrial emitters in the Industrial Heartland region, like the Sturgeon refinery, and transports it to "central and southern Alberta for secure storage" in "aging reservoirs", and enhanced oil recovery (EOR) projects. [ 9 ] [ 6 ]
According to NWR Sturgeon refinery's website, operations include upgrading bitumen from the Athabasca oil sands into ultra-low-sulfur diesel . [ 10 ] [ 1 ] Other finished products include "high quality recycled and manufactured diluents" used in the process of extracting bitumen in Alberta, "pure naphtha" used in "petrochemical processes or as part of the manufactured diluent pool", "low sulphur" vacuum gas oil (VGO) that can be used as "intermediate feedstock in refineries", [ 10 ] butane , and propane . [ 6 ]
The September 18, 2007 Alberta government commissioned report, entitled "Our Fair Share", by the Alberta Royalty Review panel had concluded that bitumen royalty rates and formulas had "not kept pace with changes in the resource base and world energy markets" [ 11 ] [ 12 ] : 7 and that, as a result, Albertans, who own their natural resources, were not receiving their "fair share" from energy development. [ 12 ] : 7 [ 13 ] In 2008, the global price of oil reached an all-time high of $USD145 a barrel, [ 14 ] : 215 but later that year, during the financial crisis of 2007–2008 , oil prices plummeted to $32 a barrel, resulting in "the cancellation of many energy projects" in Alberta. [ 15 ] [ Notes 4 ]
In response to the Review, which then Progressive Conservative Association of Alberta Premier Ed Stelmach had commissioned, the Alberta government enacted new regulations under the provincial Alberta Mines and Minerals Act that were identified in the Alberta Royalty Framework. [ 16 ] [ 17 ]
The 2007 Alberta Royalty Framework identified the need for a Bitumen Royalty-in-Kind (BRIK) option, allowing the government to choose how the Crown could collect its bitumen royalty share of "conventional crude oil production"—in cash or in kind. [ 18 ] Through BRIK, the Crown could use its share of bitumen royalties "strategically", to "enhance Alberta’s value-add activities such as upgrading, refining, and petrochemical development", [ 19 ] : 4 to Alberta's economy, and to hedge risks in the commodity market . [ 18 ] Under the new royalty formulas, the government had anticipated revenue of $2 billion annually. [ 20 ]
On July 21, 2009 Stelmach's provincial government released a BRIK Request for Proposals (RFP) to "procure a long-term contract to process or purchase a share of royalty volumes of bitumen". [ 19 ]
The only proposal submitted was that of North West Upgrading LP (NWU). Although a report received from the NWU proposal evaluation team in April 2010 warned that the agreement placed a "disproportionate risk" on Alberta's government, the NWRP and APMC signed the agreement in February 2011. [ 2 ] : 25
A private consortium, North West Redwater Partnership (NWRP), was "selected to construct and operate" the Sturgeon Refinery. [ 5 ] The original estimate of the project's capital costs was $5.7 billion. [ 1 ] By 2011, the estimate had increased to $6.5 billion. [ 21 ]
In 2012, the construction of Phase 1 of the Sturgeon Refinery was sanctioned. In its announcement, NWRP said that the refinery was to be built, owned and operated by NWRP. [ 22 ] : 171
Originally, the Sturgeon upgrader was supposed to be fully operational by October 2016. [ 21 ]
In January 2014, under then Premier Jim Prentice , the Building New Petroleum Markets Act was passed, allowing the Minister of Energy to provide loans to projects like the NWRP's Sturgeon Refinery. [ 2 ] : 30 Under an amended agreement reached by the APMC, the NWU and CNRL in April 2014, the APMC provided a $CDN324 million loan to NWRP. [ 2 ] : 30
By May 2017, the expected completion date was delayed until June 2018. As a result, the Ministry of Energy updated the estimate for the refinery's capital cost to $9.4 billion. [ 2 ] : 30 The delay and resulting cost increases represented an additional $CDN95 million loan to NWRP by the APMC. [ 2 ] : 30
In 2017, the Sturgeon Refinery began producing diesel from synthetic crude (upgraded Alberta oilsands feedstock), [ 6 ] and by November 2018 was producing about 35,000 to 40,000 barrels per day of diesel. [ 23 ] The heavily discounted price of "stranded Alberta heavy oil" resulted in deep discounts for the refinery's feedstock, as much as US$30 per barrel less than usual. [ 23 ] That year, NWRP proceeded with phase one of the refinery, capable of upgrading bitumen at a rate of 50,000 barrels a day, [ 6 ] with the cost estimated at $CDN9.7 billion. [ 6 ]
Because of the onerous obligations under the agreement, in June 2018 the provincial New Democratic Party (NDP) government under Premier Rachel Notley had to begin paying "75 per cent of the debt-servicing costs related to financing of the project." Even though no revenue had been generated for Alberta by the Sturgeon Refinery, the Alberta Petroleum Marketing Commission (APMC), a Crown corporation responsible for the "implementation of BRIK policy, processing agreements", [ 5 ] had "been making payments averaging $27 million a month related to the financing" of the $9.9-billion Sturgeon Refinery, approximately "$466 million in debt-servicing costs" since 2018, tied to the government's "commitments" to the project. [ 1 ]
By March 2020, due to start-up issues, the refinery was not "processing the government's bitumen at the facility — or generating revenue for the province from its refining operations", according to a Calgary Herald article. [ 1 ] By then, the capital costs of the project had climbed to about $10 billion. [ 1 ]
It took fifteen years, but in May 2020 Ian McGregor , founder, president and CEO of North West Refining, announced that the Sturgeon Refinery was fully operational and had reached commercial operations, the transition from "primarily processing synthetic crude feedstock to bitumen feedstock" having been successful. [ 24 ] [ 22 ]
Because of the agreement made by the former Progressive Conservative Association of Alberta government with North West Redwater Partnership (NWRP) in 2009, the current United Conservative Party (UCP) provincial government is responsible for continuing the debt-servicing costs that have been paid since June 2018, as well as an added cost of "debt principal repayments of about $21 million a month, on top of the debt-servicing costs," starting in June 2020. [ 1 ] This increase in payments comes against the backdrop of the collapse of global oil prices precipitated by interconnecting and unprecedented global events—the 2020 coronavirus pandemic , the COVID-19 recession , the 2020 stock market crash , and the 2020 Russia–Saudi Arabia oil price war , which Premier Jason Kenney called—"the greatest challenge" in Alberta's "modern history, threatening its main industry and wreaking havoc on its finances." [ 25 ]
In its 2020 annual report on the loans and agreements with NWRP, the APMC reported that the Sturgeon Refinery project had a "negative $CDN2.52 billion net present value", based mainly on "pricing and on-stream factor". [ 22 ] | https://en.wikipedia.org/wiki/Sturgeon_Refinery
Sturges's rule [ 1 ] is a method to choose the number of bins for a histogram . Given n {\displaystyle n} observations, Sturges's rule suggests using

k = 1 + log 2 ( n ) {\displaystyle k=1+\log _{2}(n)}

bins in the histogram. This rule is widely employed in data analysis software including Python [ 2 ] and R , where it is the default bin selection method. [ 3 ]
Sturges's rule comes from the binomial distribution which is used as a discrete approximation to the normal distribution . [ 4 ] If the function to be approximated f {\displaystyle f} is binomially distributed then

f ( y ) = ( m y ) p y ( 1 − p ) m − y {\displaystyle f(y)={\binom {m}{y}}p^{y}(1-p)^{m-y}}

where m {\displaystyle m} is the number of trials and p {\displaystyle p} is the probability of success and y = 0 , 1 , … , m {\displaystyle y=0,1,\ldots ,m} . Choosing p = 1 / 2 {\displaystyle p=1/2} gives

f ( y ) = ( m y ) 2 − m {\displaystyle f(y)={\binom {m}{y}}2^{-m}}
In this form we can consider 2 − m {\displaystyle 2^{-m}} as the normalisation factor and Sturges's rule is saying that the sample should result in a histogram with bin counts given by the binomial coefficients . Since the total sample size is fixed to n {\displaystyle n} we must have

∑ y = 0 m ( m y ) = 2 m = n {\displaystyle \sum _{y=0}^{m}{\binom {m}{y}}=2^{m}=n}

using the well-known formula for sums of the binomial coefficients . Solving this by taking logs of both sides gives m = log 2 ( n ) {\displaystyle m=\log _{2}(n)} and finally using k = m + 1 {\displaystyle k=m+1} (due to counting the 0 outcomes) gives Sturges's rule. In general Sturges's rule does not give an integer answer so the result is rounded up.
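As a quick illustration, here is a minimal Python sketch of the rule (the helper name sturges_bins is ours; library implementations may differ in how they round):

```python
import math

def sturges_bins(n: int) -> int:
    """Bin count suggested by Sturges's rule, k = 1 + log2(n), rounded up."""
    if n < 1:
        raise ValueError("need at least one observation")
    return math.ceil(math.log2(n)) + 1

print(sturges_bins(1000))  # 11 bins for 1000 observations
```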
Doane [ 5 ] proposed modifying Sturges's formula to add extra bins when the data is skewed . Using the method of moments estimator

g 1 = m 3 / m 2 3 / 2 {\displaystyle g_{1}=m_{3}/m_{2}^{3/2}}

(with m 2 {\displaystyle m_{2}} and m 3 {\displaystyle m_{3}} the second and third sample central moments) along with its variance

σ g 1 2 = 6 ( n − 2 ) ( n + 1 ) ( n + 3 ) {\displaystyle \sigma _{g_{1}}^{2}={\frac {6(n-2)}{(n+1)(n+3)}}}

Doane proposed adding log 2 ( 1 + | g 1 | σ g 1 ) {\displaystyle \log _{2}\left(1+{\frac {|g_{1}|}{\sigma _{g_{1}}}}\right)} extra bins giving Doane's formula

k = 1 + log 2 ( n ) + log 2 ( 1 + | g 1 | σ g 1 ) {\displaystyle k=1+\log _{2}(n)+\log _{2}\left(1+{\frac {|g_{1}|}{\sigma _{g_{1}}}}\right)}
For symmetric distributions | g 1 | ≃ 0 {\displaystyle |g_{1}|\simeq 0} , so this is equivalent to Sturges's rule; for asymmetric distributions a number of additional bins will be used.
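A Python sketch of Doane's formula as stated above (the helper name doane_bins and the use of population central moments are our choices):

```python
import math

def doane_bins(data) -> int:
    """Doane's formula: Sturges's count plus extra bins when the sample is skewed."""
    n = len(data)
    if n < 3:
        raise ValueError("need at least three observations")
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in data) / n   # third central moment
    g1 = 0.0 if m2 == 0 else m3 / m2 ** 1.5       # method-of-moments skewness
    sigma_g1 = math.sqrt(6 * (n - 2) / ((n + 1) * (n + 3)))
    return math.ceil(1 + math.log2(n) + math.log2(1 + abs(g1) / sigma_g1))

print(doane_bins([1, 1, 2, 2, 3, 10, 30]))  # 6 bins: more than Sturges's 4 for this skewed sample
```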
Sturges's rule is not based on any sort of optimisation procedure, unlike the Freedman–Diaconis rule or Scott's rule . It is simply posited based on the approximation of a normal curve by a binomial distribution. Hyndman has pointed out [ 6 ] that any multiple of the binomial coefficients would also converge to a normal distribution, so any number of bins could be obtained following the derivation above. Scott [ 4 ] shows that Sturges's rule in general produces oversmoothed histograms, i.e. too few bins, and advises against its use in favour of other rules such as the Freedman–Diaconis or Scott's rule. | https://en.wikipedia.org/wiki/Sturges's_rule
In mathematics , the Sturm sequence of a univariate polynomial p is a sequence of polynomials associated with p and its derivative by a variant of Euclid's algorithm for polynomials . Sturm's theorem expresses the number of distinct real roots of p located in an interval in terms of the number of changes of signs of the values of the Sturm sequence at the bounds of the interval. Applied to the interval of all the real numbers, it gives the total number of real roots of p . [ 1 ]
Whereas the fundamental theorem of algebra readily yields the overall number of complex roots, counted with multiplicity , it does not provide a procedure for calculating them. Sturm's theorem counts the number of distinct real roots and locates them in intervals. By subdividing the intervals containing some roots, it can isolate the roots into arbitrarily small intervals, each containing exactly one root. This yields the oldest real-root isolation algorithm, and arbitrary-precision root-finding algorithm for univariate polynomials.
For computing over the reals , Sturm's theorem is less efficient than other methods based on Descartes' rule of signs . However, it works on every real closed field , and, therefore, remains fundamental for the theoretical study of the computational complexity of decidability and quantifier elimination in the first order theory of real numbers.
The Sturm sequence and Sturm's theorem are named after Jacques Charles François Sturm , who discovered the theorem in 1829. [ 2 ]
The Sturm chain or Sturm sequence of a univariate polynomial P ( x ) with real coefficients is the sequence of polynomials P 0 , P 1 , … , {\displaystyle P_{0},P_{1},\ldots ,} such that

P 0 = P , P 1 = P ′ , P i + 1 = − rem ( P i − 1 , P i ) {\displaystyle P_{0}=P,\qquad P_{1}=P',\qquad P_{i+1}=-\operatorname {rem} (P_{i-1},P_{i})}
for i ≥ 1 , where P' is the derivative of P , and rem ( P i − 1 , P i ) {\displaystyle \operatorname {rem} (P_{i-1},P_{i})} is the remainder of the Euclidean division of P i − 1 {\displaystyle P_{i-1}} by P i . {\displaystyle P_{i}.} The length of the Sturm sequence is at most the degree of P .
The number of sign variations at ξ of the Sturm sequence of P is the number of sign changes (ignoring zeros) in the sequence of real numbers

P 0 ( ξ ) , P 1 ( ξ ) , P 2 ( ξ ) , … {\displaystyle P_{0}(\xi ),P_{1}(\xi ),P_{2}(\xi ),\ldots }
This number of sign variations is denoted here V ( ξ ) .
Sturm's theorem states that, if P is a square-free polynomial , the number of distinct real roots of P in the half-open interval ( a , b ] is V ( a ) − V ( b ) (here, a and b are real numbers such that a < b ). [ 1 ]
The theorem extends to unbounded intervals by defining the sign at +∞ of a polynomial as the sign of its leading coefficient (that is, the coefficient of the term of highest degree). At –∞ the sign of a polynomial is the sign of its leading coefficient for a polynomial of even degree, and the opposite sign for a polynomial of odd degree.
In the case of a non-square-free polynomial, if neither a nor b is a multiple root of p , then V ( a ) − V ( b ) is the number of distinct real roots of P .
The proof of the theorem is as follows: when the value of x increases from a to b , it may pass through a zero of some P i {\displaystyle P_{i}} ( i > 0 ); when this occurs, the number of sign variations of ( P i − 1 , P i , P i + 1 ) {\displaystyle (P_{i-1},P_{i},P_{i+1})} does not change. When x passes through a root of P 0 = P , {\displaystyle P_{0}=P,} the number of sign variations of ( P 0 , P 1 ) {\displaystyle (P_{0},P_{1})} decreases from 1 to 0. These are the only values of x where some sign may change.
Suppose we wish to find the number of roots in some range for the polynomial p ( x ) = x 4 + x 3 − x − 1 {\displaystyle p(x)=x^{4}+x^{3}-x-1} . So

p 0 ( x ) = x 4 + x 3 − x − 1 , p 1 ( x ) = 4 x 3 + 3 x 2 − 1 {\displaystyle p_{0}(x)=x^{4}+x^{3}-x-1,\qquad p_{1}(x)=4x^{3}+3x^{2}-1}
The remainder of the Euclidean division of p 0 by p 1 is − 3 16 x 2 − 3 4 x − 15 16 ; {\displaystyle -{\tfrac {3}{16}}x^{2}-{\tfrac {3}{4}}x-{\tfrac {15}{16}};} multiplying it by −1 we obtain

p 2 ( x ) = 3 16 x 2 + 3 4 x + 15 16 . {\displaystyle p_{2}(x)={\tfrac {3}{16}}x^{2}+{\tfrac {3}{4}}x+{\tfrac {15}{16}}.}
Next dividing p 1 by p 2 and multiplying the remainder by −1 , we obtain

p 3 ( x ) = − 32 x − 64. {\displaystyle p_{3}(x)=-32x-64.}
Now dividing p 2 by p 3 and multiplying the remainder by −1 , we obtain

p 4 ( x ) = − 3 16 . {\displaystyle p_{4}(x)=-{\tfrac {3}{16}}.}
As this is a constant, this finishes the computation of the Sturm sequence.
To find the number of real roots of p 0 {\displaystyle p_{0}} one has to evaluate the sequences of the signs of these polynomials at −∞ and ∞ , which are respectively (+, −, +, +, −) and (+, +, +, −, −) . Thus

V ( − ∞ ) − V ( + ∞ ) = 3 − 1 = 2 {\displaystyle V(-\infty )-V(+\infty )=3-1=2}
where V denotes the number of sign changes in the sequence, which shows that p has two real roots.
This can be verified by noting that p ( x ) can be factored as ( x 2 − 1)( x 2 + x + 1) , where the first factor has the roots −1 and 1 , and second factor has no real roots. This last assertion results from the quadratic formula , and also from Sturm's theorem, which gives the sign sequences (+, –, –) at −∞ and (+, +, –) at +∞ .
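The computation above can also be checked mechanically. The following is a small Python sketch (all function names are ours), using exact rational arithmetic so that signs are not corrupted by floating-point rounding; polynomials are represented as coefficient lists, highest degree first:

```python
from fractions import Fraction

def poly_rem(num, den):
    """Remainder of the Euclidean division of num by den."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    while len(num) >= len(den):
        factor = num[0] / den[0]
        for i, d in enumerate(den):
            num[i] -= factor * d      # cancel the leading term
        num.pop(0)                    # leading coefficient is now zero
    while num and num[0] == 0:
        num.pop(0)
    return num

def sturm_chain(p):
    """p0 = p, p1 = p', then p_{i+1} = -rem(p_{i-1}, p_i) until a constant is reached."""
    d = len(p) - 1
    chain = [[Fraction(c) for c in p],
             [Fraction(c) * (d - i) for i, c in enumerate(p[:-1])]]
    while len(chain[-1]) > 1:
        rem = poly_rem(chain[-2], chain[-1])
        if not rem:                   # zero remainder: p was not square-free
            break
        chain.append([-c for c in rem])
    return chain

def sign_variations(chain, x):
    """V(x): sign changes, ignoring zeros, of the chain evaluated at x."""
    values = []
    for p in chain:
        v = Fraction(0)
        for c in p:
            v = v * x + c             # Horner evaluation
        if v != 0:
            values.append(v > 0)
    return sum(a != b for a, b in zip(values, values[1:]))

chain = sturm_chain([1, 1, 0, -1, -1])   # x^4 + x^3 - x - 1
print(sign_variations(chain, -10) - sign_variations(chain, 10))  # 2 roots in (-10, 10]
```

Evaluating at ±10 suffices here because both real roots of this polynomial lie well inside the interval (−10, 10].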
Sturm sequences have been generalized in two directions. To define each polynomial in the sequence, Sturm used the negative of the remainder of the Euclidean division of the two preceding ones. The theorem remains true if one replaces the negative of the remainder by its product or quotient by a positive constant or the square of a polynomial. It is also useful (see below) to consider sequences where the second polynomial is not the derivative of the first one.
A generalized Sturm sequence is a finite sequence of polynomials with real coefficients
such that
The last condition implies that two consecutive polynomials do not have any common real root. In particular the original Sturm sequence is a generalized Sturm sequence, if (and only if) the polynomial has no multiple real root (otherwise the first two polynomials of its Sturm sequence have a common root).
When computing the original Sturm sequence by Euclidean division, it may happen that one encounters a polynomial that has a factor that is never negative, such as x 2 {\displaystyle x^{2}} or x 2 + 1 {\displaystyle x^{2}+1} . In this case, if one continues the computation with the polynomial replaced by its quotient by the nonnegative factor, one gets a generalized Sturm sequence, which may also be used for computing the number of real roots, since the proof of Sturm's theorem still applies (because of the third condition). This may sometimes simplify the computation, although it is generally difficult to find such nonnegative factors, except for even powers of x .
In computer algebra , the polynomials that are considered have integer coefficients or may be transformed to have integer coefficients. The Sturm sequence of a polynomial with integer coefficients generally contains polynomials whose coefficients are not integers (see above example).
To avoid computation with rational numbers , a common method is to replace Euclidean division by pseudo-division for computing polynomial greatest common divisors . This amounts to replacing the remainder sequence of the Euclidean algorithm by a pseudo-remainder sequence , a pseudo remainder sequence being a sequence p 0 , … , p k {\displaystyle p_{0},\ldots ,p_{k}} of polynomials such that there are constants a i {\displaystyle a_{i}} and b i {\displaystyle b_{i}} such that b i p i + 1 {\displaystyle b_{i}p_{i+1}} is the remainder of the Euclidean division of a i p i − 1 {\displaystyle a_{i}p_{i-1}} by p i . {\displaystyle p_{i}.} (The different kinds of pseudo-remainder sequences are defined by the choice of a i {\displaystyle a_{i}} and b i ; {\displaystyle b_{i};} typically, a i {\displaystyle a_{i}} is chosen for not introducing denominators during Euclidean division, and b i {\displaystyle b_{i}} is a common divisor of the coefficients of the resulting remainder; see Pseudo-remainder sequence for details.)
For example, the remainder sequence of the Euclidean algorithm is a pseudo-remainder sequence with a i = b i = 1 {\displaystyle a_{i}=b_{i}=1} for every i , and the Sturm sequence of a polynomial is a pseudo-remainder sequence with a i = 1 {\displaystyle a_{i}=1} and b i = − 1 {\displaystyle b_{i}=-1} for every i .
Various pseudo-remainder sequences have been designed for computing greatest common divisors of polynomials with integer coefficients without introducing denominators (see Pseudo-remainder sequence ). They can all be made generalized Sturm sequences by choosing the sign of the b i {\displaystyle b_{i}} to be the opposite of the sign of the a i . {\displaystyle a_{i}.} This allows the use of Sturm's theorem with pseudo-remainder sequences.
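To illustrate the idea of pseudo-division, here is one common division-free variant in Python (a sketch; the function name pseudo_rem and the exact scaling convention are ours, and the result differs from the true remainder by a power of the divisor's leading coefficient):

```python
def pseudo_rem(a, b):
    """Pseudo-remainder over the integers: instead of dividing, scale the running
    remainder by the leading coefficient of b before each cancellation step, so the
    arithmetic never leaves the integers."""
    r = list(a)
    while len(r) >= len(b):
        lead = r[0]
        r = [c * b[0] for c in r]     # scale so the leading terms match exactly
        for i, d in enumerate(b):
            r[i] -= lead * d          # cancel the leading term
        r.pop(0)                      # now exactly zero
        while r and r[0] == 0:
            r.pop(0)
    return r

# [-3, -12, -15], i.e. 16 times the rational remainder found in the example above
print(pseudo_rem([1, 1, 0, -1, -1], [4, 3, 0, -1]))
```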
For a polynomial with real coefficients, root isolation consists of finding, for each real root, an interval that contains this root, and no other roots.
This is useful for root finding , allowing the selection of the root to be found and providing a good starting point for fast numerical algorithms such as Newton's method ; it is also useful for certifying the result: if Newton's method converges outside the isolating interval, one may immediately deduce that it is converging to the wrong root.
Root isolation is also useful for computing with algebraic numbers , which are commonly represented by a pair consisting of a polynomial of which the algebraic number is a root and an isolation interval. For example 2 {\displaystyle {\sqrt {2}}} may be unambiguously represented by ( x 2 − 2 , [ 0 , 2 ] ) . {\displaystyle (x^{2}-2,[0,2]).}
Sturm's theorem provides a way for isolating real roots that is less efficient (for polynomials with integer coefficients) than other methods involving Descartes' rule of signs . However, it remains useful in some circumstances, mainly for theoretical purposes, for example for algorithms of real algebraic geometry that involve infinitesimals . [ 3 ]
For isolating the real roots, one starts from an interval ( a , b ] {\displaystyle (a,b]} containing all the real roots, or the roots of interest (often, typically in physical problems, only positive roots are interesting), and one computes V ( a ) {\displaystyle V(a)} and V ( b ) . {\displaystyle V(b).} For defining this starting interval, one may use bounds on the size of the roots (see Properties of polynomial roots § Bounds on (complex) polynomial roots ). Then, one divides this interval in two, by choosing c in the middle of ( a , b ] . {\displaystyle (a,b].} The computation of V ( c ) {\displaystyle V(c)} provides the number of real roots in ( a , c ] {\displaystyle (a,c]} and ( c , b ] , {\displaystyle (c,b],} and one may repeat the same operation on each subinterval. When one encounters, during this process, an interval that does not contain any root, it may be suppressed from the list of intervals to consider. When one encounters an interval containing exactly one root, one may stop dividing it, as it is an isolation interval. The process stops eventually, when only isolating intervals remain.
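A sketch of this bisection scheme in Python, reusing the sturm_chain and sign_variations helpers from the sketch in the example section above (the half-open interval handling and the names are our choices):

```python
from fractions import Fraction

def count_roots(chain, a, b):
    """Distinct real roots of p in (a, b], by Sturm's theorem."""
    return sign_variations(chain, a) - sign_variations(chain, b)

def isolate_roots(p, a, b):
    """Bisect (a, b] until each remaining interval contains exactly one root of p."""
    chain = sturm_chain(p)
    pending = [(Fraction(a), Fraction(b))]
    isolated = []
    while pending:
        lo, hi = pending.pop()
        n = count_roots(chain, lo, hi)
        if n == 1:
            isolated.append((lo, hi))        # isolating interval found
        elif n > 1:
            mid = (lo + hi) / 2
            pending += [(lo, mid), (mid, hi)]
        # n == 0: the interval contains no root and is discarded
    return isolated

print(isolate_roots([1, 1, 0, -1, -1], -10, 10))
# two isolating intervals, one containing -1 and one containing 1
```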
This isolating process may be used with any method for computing the number of real roots in an interval. Theoretical complexity analysis and practical experiences show that methods based on Descartes' rule of signs are more efficient. It follows that, nowadays, Sturm sequences are rarely used for root isolation.
Generalized Sturm sequences allow counting the roots of a polynomial where another polynomial is positive (or negative), without computing these roots explicitly. If one knows an isolating interval for a root of the first polynomial, this also allows finding the sign of the second polynomial at this particular root of the first polynomial, without computing a better approximation of the root.
Let P ( x ) and Q ( x ) be two polynomials with real coefficients such that P and Q have no common root and P has no multiple roots. In other words, P and P' Q are coprime polynomials . This restriction does not really affect the generality of what follows, as GCD computations allow reducing the general case to this case, and the cost of the computation of a Sturm sequence is the same as that of a GCD.
Let W ( a ) denote the number of sign variations at a of a generalized Sturm sequence starting from P and P' Q . If a < b are two real numbers, then W ( a ) – W ( b ) is the number of roots of P in the interval ( a , b ] {\displaystyle (a,b]} such that Q ( a ) > 0 minus the number of roots in the same interval such that Q ( a ) < 0 . Combined with the total number of roots of P in the same interval given by Sturm's theorem, this gives the number of roots of P such that Q ( a ) > 0 and the number of roots of P such that Q ( a ) < 0 . [ 1 ] | https://en.wikipedia.org/wiki/Sturm's_theorem |
In mathematics , in the field of ordinary differential equations , Sturm separation theorem , named after Jacques Charles François Sturm , describes the location of roots of solutions of homogeneous second order linear differential equations . Basically the theorem states that, given two linearly independent solutions of such an equation, the zeros of the two solutions are alternating.
If u ( x ) and v ( x ) are two non-trivial continuous linearly independent solutions to a homogeneous second order linear differential equation with x 0 and x 1 being successive roots of u ( x ), then v ( x ) has exactly one root in the open interval ( x 0 , x 1 ). It is a special case of the Sturm-Picone comparison theorem .
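For instance, u(x) = sin x and v(x) = cos x are linearly independent solutions of y′′ + y = 0, and their zeros interlace; a quick numerical check in Python (illustrative only):

```python
import math

# zeros of sin are at k*pi; zeros of cos are at pi/2 + k*pi
sin_zeros = [k * math.pi for k in range(4)]
cos_zeros = [math.pi / 2 + k * math.pi for k in range(4)]

for a, b in zip(sin_zeros, sin_zeros[1:]):
    inside = [z for z in cos_zeros if a < z < b]
    print(len(inside))  # exactly one zero of cos in each interval, as the theorem asserts
```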
Since u {\displaystyle \displaystyle u} and v {\displaystyle \displaystyle v} are linearly independent it follows that the Wronskian W [ u , v ] {\displaystyle \displaystyle W[u,v]} must satisfy W [ u , v ] ( x ) ≡ W ( x ) ≠ 0 {\displaystyle W[u,v](x)\equiv W(x)\neq 0} for all x {\displaystyle \displaystyle x} where the differential equation is defined, say I {\displaystyle \displaystyle I} . Without loss of generality, suppose that W ( x ) < 0 ∀ x ∈ I {\displaystyle W(x)<0{\mbox{ }}\forall {\mbox{ }}x\in I} . Then

W ( x ) = u ( x ) v ′ ( x ) − u ′ ( x ) v ( x ) . {\displaystyle W(x)=u(x)v'(x)-u'(x)v(x).}
So at x = x 0 {\displaystyle \displaystyle x=x_{0}} , where u ( x 0 ) = 0 {\displaystyle u(x_{0})=0} ,

W ( x 0 ) = − u ′ ( x 0 ) v ( x 0 ) < 0 , {\displaystyle W(x_{0})=-u'(x_{0})v(x_{0})<0,}
and either u ′ ( x 0 ) {\displaystyle u'\left(x_{0}\right)} and v ( x 0 ) {\displaystyle v\left(x_{0}\right)} are both positive or both negative. Without loss of generality, suppose that they are both positive. Now, at x = x 1 {\displaystyle \displaystyle x=x_{1}}

W ( x 1 ) = − u ′ ( x 1 ) v ( x 1 ) < 0 , {\displaystyle W(x_{1})=-u'(x_{1})v(x_{1})<0,}
and since x = x 0 {\displaystyle \displaystyle x=x_{0}} and x = x 1 {\displaystyle \displaystyle x=x_{1}} are successive zeros of u ( x ) {\displaystyle \displaystyle u(x)} , u ( x ) {\displaystyle \displaystyle u(x)} is positive on the whole interval ( x 0 , x 1 ) {\displaystyle \left(x_{0},x_{1}\right)} , which forces u ′ ( x 1 ) ≤ 0 {\displaystyle u'\left(x_{1}\right)\leq 0} : if u ′ ( x 1 ) {\displaystyle u'\left(x_{1}\right)} were positive, u {\displaystyle \displaystyle u} would be increasing through x 1 {\displaystyle \displaystyle x_{1}} and so could not be positive immediately before this zero. Moreover u ′ ( x 1 ) ≠ 0 {\displaystyle u'\left(x_{1}\right)\neq 0} , since otherwise W ( x 1 ) = − u ′ ( x 1 ) v ( x 1 ) {\displaystyle W(x_{1})=-u'(x_{1})v(x_{1})} would vanish, contradicting W ( x ) < 0 {\displaystyle \displaystyle W(x)<0} ; hence u ′ ( x 1 ) < 0 {\displaystyle u'\left(x_{1}\right)<0} . Thus, to keep W ( x 1 ) < 0 {\displaystyle \displaystyle W(x_{1})<0} we must have v ( x 1 ) < 0 {\displaystyle v\left(x_{1}\right)<0} . Since v ( x 0 ) > 0 {\displaystyle v\left(x_{0}\right)>0} , the sign of v ( x ) {\displaystyle \displaystyle v(x)} changes somewhere in the interval ( x 0 , x 1 ) {\displaystyle \left(x_{0},x_{1}\right)} , and by the intermediate value theorem there exists x ∗ ∈ ( x 0 , x 1 ) {\displaystyle x^{*}\in \left(x_{0},x_{1}\right)} such that v ( x ∗ ) = 0 {\displaystyle v\left(x^{*}\right)=0} .
On the other hand, there can be only one zero in ( x 0 , x 1 ) {\displaystyle \left(x_{0},x_{1}\right)} , because otherwise v {\displaystyle v} would have two zeros and there would be no zeros of u {\displaystyle u} in between, and it was just proved that this is impossible. | https://en.wikipedia.org/wiki/Sturm_separation_theorem |
In mathematics, the Sturm series [ 1 ] associated with a pair of polynomials is named after Jacques Charles François Sturm .
Let p 0 {\displaystyle p_{0}} and p 1 {\displaystyle p_{1}} be two univariate polynomials. Suppose that they do not have a common root and the degree of p 0 {\displaystyle p_{0}} is greater than the degree of p 1 {\displaystyle p_{1}} . The Sturm series is constructed by:

p i = p i + 1 q i + 1 − p i + 2 , deg ⁡ p i + 2 < deg ⁡ p i + 1 . {\displaystyle p_{i}=p_{i+1}q_{i+1}-p_{i+2},\qquad \deg p_{i+2}<\deg p_{i+1}.}
This is almost the same algorithm as Euclid's but the remainder p i + 2 {\displaystyle p_{i+2}} has negative sign.
Consider now the Sturm series p 0 , p 1 , … , p k {\displaystyle p_{0},p_{1},\dots ,p_{k}} associated with a characteristic polynomial P {\displaystyle P} in the variable λ {\displaystyle \lambda } :
where a i {\displaystyle a_{i}} for i {\displaystyle i} in { 1 , … , k } {\displaystyle \{1,\dots ,k\}} are rational functions in R ( Z ) {\displaystyle \mathbb {R} (Z)} with the coordinate set Z {\displaystyle Z} . The series begins with two polynomials obtained by dividing P ( ı μ ) {\displaystyle P(\imath \mu )} by ı k {\displaystyle \imath ^{k}} , where ı {\displaystyle \imath } represents the imaginary unit equal to − 1 {\displaystyle {\sqrt {-1}}} , and separating real and imaginary parts:
The remaining terms are defined with the above relation. Due to the special structure of these polynomials, they can be written in the form:
In these notations, the quotient q i {\displaystyle q_{i}} is equal to ( c i − 1 , 0 / c i , 0 ) μ {\displaystyle (c_{i-1,0}/c_{i,0})\mu } which provides the condition c i , 0 ≠ 0 {\displaystyle c_{i,0}\neq 0} . Moreover, the polynomial p i {\displaystyle p_{i}} replaced in the above relation gives the following recursive formulas for computation of the coefficients c i , j {\displaystyle c_{i,j}} .
If c i , 0 = 0 {\displaystyle c_{i,0}=0} for some i {\displaystyle i} , the quotient q i {\displaystyle q_{i}} is a higher degree polynomial and the sequence p i {\displaystyle p_{i}} stops at p h {\displaystyle p_{h}} with h < k {\displaystyle h<k} . | https://en.wikipedia.org/wiki/Sturm_series |
In mathematics , in the field of ordinary differential equations , the Sturm–Picone comparison theorem , named after Jacques Charles François Sturm and Mauro Picone , is a classical theorem which provides criteria for the oscillation and non-oscillation of solutions of certain linear differential equations in the real domain.
Let p i , q i for i = 1, 2 be real-valued continuous functions on the interval [ a , b ] and let

( p 1 ( x ) y ′ ) ′ + q 1 ( x ) y = 0 {\displaystyle (p_{1}(x)y')'+q_{1}(x)y=0} (1)

( p 2 ( x ) y ′ ) ′ + q 2 ( x ) y = 0 {\displaystyle (p_{2}(x)y')'+q_{2}(x)y=0} (2)

be two homogeneous linear second order differential equations in self-adjoint form with

0 < p 2 ( x ) ≤ p 1 ( x ) {\displaystyle 0<p_{2}(x)\leq p_{1}(x)}

and

q 1 ( x ) ≤ q 2 ( x ) . {\displaystyle q_{1}(x)\leq q_{2}(x).}
Let u be a non-trivial solution of (1) with successive roots at z 1 and z 2 and let v be a non-trivial solution of (2). Then one of the following properties holds: either there exists an x {\displaystyle x} in ( z 1 , z 2 ) {\displaystyle (z_{1},z_{2})} with v ( x ) = 0 {\displaystyle v(x)=0} , or there exists a λ {\displaystyle \lambda } in R {\displaystyle \mathbb {R} } such that v ( x ) = λ u ( x ) {\displaystyle v(x)=\lambda u(x)} .
The first part of the conclusion is due to Sturm (1836), [ 1 ] while the second (alternative) part of the theorem is due to Picone (1910) [ 2 ] [ 3 ] whose simple proof was given using his now famous Picone identity . In the special case where both equations are identical one obtains the Sturm separation theorem . [ 4 ] | https://en.wikipedia.org/wiki/Sturm–Picone_comparison_theorem |
In embryology , sturt is a measure of distance. On the fate map , the further apart two regions are, the more likely the resulting structures are to be of different genotypes . A difference of 1% in the frequency of differing genotypes is described as one sturt, after Alfred Henry Sturtevant . [ 1 ]
It was named by Yoshiki Hotta and Seymour Benzer .
| https://en.wikipedia.org/wiki/Sturt_(biology)
Stuxnet is a malicious computer worm first uncovered in 2010 and thought to have been in development since at least 2005. Stuxnet targets supervisory control and data acquisition (SCADA) systems and is believed to be responsible for causing substantial damage to the Iran nuclear program . [ 2 ] Although neither the United States nor Israel has openly admitted responsibility, multiple independent news organizations claim Stuxnet to be a cyberweapon built jointly by the two countries in a collaborative effort known as Operation Olympic Games . [ 3 ] [ 4 ] [ 5 ] The program, started during the Bush administration , was rapidly expanded within the first months of Barack Obama 's presidency. [ 6 ]
Stuxnet specifically targets programmable logic controllers (PLCs), which allow the automation of electromechanical processes such as those used to control machinery and industrial processes including gas centrifuges for separating nuclear material. Exploiting four zero-day flaws in the systems, [ 7 ] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart. [ 2 ] Stuxnet's design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in factory assembly lines or power plants), most of which are in Europe, Japan and the United States. [ 8 ] Stuxnet reportedly destroyed almost one-fifth of Iran's nuclear centrifuges . [ 9 ] Targeting industrial control systems, the worm infected over 200,000 computers and caused 1,000 machines to physically degrade. [ 10 ]
Stuxnet has three modules: a worm that executes all routines related to the main payload of the attack, a link file that automatically executes the propagated copies of the worm, and a rootkit component responsible for hiding all malicious files and processes to prevent detection of Stuxnet. [ 11 ] It is typically introduced to the target environment via an infected USB flash drive , thus crossing any air gap . The worm then propagates across the network, scanning for Siemens Step7 software on computers controlling a PLC. If either condition is absent, Stuxnet becomes dormant inside the computer. If both conditions are fulfilled, Stuxnet introduces the infected rootkit onto the PLC and Step7 software, modifying the code and giving unexpected commands to the PLC while feeding a loop of normal operation values back to the users. [ 12 ] [ 13 ]
Stuxnet, discovered by Sergey Ulasen of the Belarusian antivirus company VirusBlokAda , initially spread via Microsoft Windows, and targeted Siemens industrial control systems . While it is not the first time that hackers have targeted industrial systems, [ 14 ] nor the first publicly known intentional act of cyberwarfare to be implemented, it is the first discovered malware that spies on and subverts industrial systems, [ 15 ] and the first to include a programmable logic controller (PLC) rootkit . [ 16 ] [ 17 ]
The worm initially spreads indiscriminately, but includes a highly specialized malware payload that is designed to target only Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes. [ 18 ] [ 19 ] Stuxnet infects PLCs by subverting the Step-7 software application that is used to reprogram these devices. [ 20 ] [ 21 ]
Different variants of Stuxnet targeted five Iranian organizations, [ 22 ] with the probable target widely suspected to be uranium enrichment infrastructure in Iran ; [ 21 ] [ 23 ] [ 24 ] Symantec noted in August 2010 that 60 percent of the infected computers worldwide were in Iran. [ 25 ] Siemens stated that the worm caused no damage to its customers, [ 15 ] but the Iran nuclear program, which uses embargoed Siemens equipment procured secretly, was damaged by Stuxnet. [ 26 ] [ 27 ] [ 28 ] Kaspersky Lab concluded that the sophisticated attack could only have been conducted "with nation-state support." [ 29 ] F-Secure 's chief researcher Mikko Hyppönen , when asked if possible nation-state support were involved, agreed: "That's what it would look like, yes." [ 30 ]
In May 2011, the PBS program Need To Know cited a statement by Gary Samore , White House Coordinator for Arms Control and Weapons of Mass Destruction, in which he said, "we're glad they [the Iranians] are having trouble with their centrifuge machine and that we — the U.S. and its allies — are doing everything we can to make sure that we complicate matters for them," offering "winking acknowledgement" of United States involvement in Stuxnet. [ 31 ] According to The Daily Telegraph , a showreel that was played at a retirement party for the head of the Israel Defense Forces (IDF), Gabi Ashkenazi , included references to Stuxnet as one of his operational successes as the IDF chief of staff. [ 32 ]
On 1 June 2012, an article in The New York Times reported that Stuxnet was part of a US and Israeli intelligence operation named Operation Olympic Games , devised by the NSA under President George W. Bush and executed under President Barack Obama . [ 33 ]
On 24 July 2012, an article by Chris Matyszczyk from CNET [ 34 ] reported that the Atomic Energy Organization of Iran e-mailed F-Secure 's chief research officer Mikko Hyppönen to report a new instance of malware.
On 25 December 2012, an Iranian semi-official news agency announced there had been a cyberattack by Stuxnet, this time on industries in the south of the country. The malware had targeted a power plant and some other industries in Hormozgan province in recent months. [ 35 ]
According to Eugene Kaspersky , the worm also infected a nuclear power plant in Russia. Kaspersky noted, however, that since the power plant is not connected to the public Internet, the system should remain safe. [ 36 ]
The worm was first identified by the security company VirusBlokAda in mid-June 2010. [ 20 ] Journalist Brian Krebs 's blog post on 15 July 2010 was the first widely read report on the worm. [ 37 ] [ 38 ] The original name given by VirusBlokAda was "Rootkit.Tmphider;" [ 39 ] Symantec, however, called it "W32.Temphid," later changing it to "W32.Stuxnet." [ 40 ] Its current name is derived from a combination of keywords found in the software (".stub" and "mrxnet.sys"). [ 41 ] [ 42 ] The timing of the discovery has been attributed to the virus accidentally spreading beyond its intended target due to a programming error introduced in an update. This may have caused the worm to spread to an engineer's computer connected to the centrifuges, further propagating when the engineer later connected to the internet at home. [ 33 ]
Kaspersky Lab experts initially estimated that Stuxnet began spreading around March or April 2010, [ 43 ] but the first variant of the worm appeared in June 2009. [ 20 ] On 15 July 2010, the day the worm's existence became widely known, a distributed denial-of-service attack targeted the servers of two leading mailing lists on industrial-systems security. This attack, from an unknown source but possibly related to Stuxnet, disabled one of the lists, interrupting a key information source for power plants and factories. [ 38 ] Separately, researchers at Symantec uncovered a version of the Stuxnet computer virus that was used to attack Iran's nuclear program in November 2007, with evidence indicating it was under development as early as 2005, when Iran was still setting up its uranium enrichment facility. [ 44 ]
The second variant, with substantial improvements, appeared in March 2010, reportedly due to concerns that Stuxnet was not spreading fast enough. A third variant, with minor improvements, followed in April 2010. [ 38 ] The worm contains a component with a build timestamp from 3 February 2010. [ 45 ] On 25 November 2010, Sky News in the United Kingdom reported receiving information from an anonymous source at an unidentified IT security organization claiming that Stuxnet, or a variation of the worm, had been traded on the black market . [ 46 ]
In 2015, Kaspersky Lab reported that the Equation Group had used two of the same zero-day attacks prior to their use in Stuxnet, in another malware called fanny.bmp. [ 47 ] [ 48 ] Kaspersky Lab noted that "the similar type of usage of both exploits together in different computer worms, at around the same time, indicates that the Equation Group and the Stuxnet developers are either the same or working closely together". [ 49 ]
In 2019, Chronicle researchers Juan Andres Guerrero-Saade and Silas Cutler presented findings indicating that at least four distinct threat actor malware platforms collaborated in developing the different versions of Stuxnet. [ 50 ] [ 51 ] The collaboration was referred to as 'GOSSIP GIRL', a name derived from a threat group mentioned in classified CSE slides that included Flame. [ 52 ] GOSSIP GIRL is described as a cooperative umbrella encompassing the Equation Group , Flame , Duqu , and Flowershop (also known as 'Cheshire Cat'). [ 53 ] [ 54 ] [ 55 ]
In 2020, researcher Facundo Muñoz presented findings suggesting that Equation Group may have collaborated with Stuxnet developers in 2009 by providing at least one zero-day exploit, [ 56 ] and one exploit from 2008 [ 57 ] that was actively used by the Conficker computer worm and Chinese hackers. [ 58 ] In 2017, a group of hackers known as The Shadow Brokers leaked a collection of tools attributed to Equation Group, including new versions of both exploits compiled in 2010. Analysis of the leaked data indicated significant code overlaps, as both Stuxnet's exploits and Equation Group's exploits were developed using a set of libraries called the "Exploit Development Framework", also leaked by The Shadow Brokers .
A study of the spread of Stuxnet by Symantec showed that the main affected countries in the early days of the infection were Iran, Indonesia and India. [ 59 ]
Iran was reported to have fortified its cyberwar abilities following the Stuxnet attack, and has been suspected of retaliatory attacks against United States banks in Operation Ababil . [ 60 ]
Unlike most malware, Stuxnet does little harm to computers and networks that do not meet specific configuration requirements; "The attackers took great care to make sure that only their designated targets were hit ... It was a marksman's job." [ 61 ] While the worm is promiscuous, it makes itself inert if Siemens software is not found on infected computers, and contains safeguards to prevent each infected computer from spreading the worm to more than three others, and to erase itself on 24 June 2012. [ 38 ]
For its targets, Stuxnet contains, among other things, code for a man-in-the-middle attack that fakes industrial process control sensor signals so an infected system does not shut down due to detected abnormal behavior. [ 38 ] [ 61 ] [ 62 ] Such complexity is unusual for malware . The worm consists of a layered attack against three different systems: the Windows operating system , the Siemens PCS 7, WinCC and Step7 industrial software applications, and one or more Siemens S7 PLCs.
Stuxnet attacked Windows systems using an unprecedented four zero-day attacks (plus the CPLINK vulnerability and a vulnerability used by the Conficker worm [ 63 ] ). It is initially spread using infected removable drives such as USB flash drives , [ 21 ] [ 45 ] which contain Windows shortcut files to initiate executable code. [ 64 ] The worm then uses other exploits and techniques such as peer-to-peer remote procedure call (RPC) to infect and update other computers inside private networks that are not directly connected to the Internet. [ 65 ] [ 66 ] [ 67 ] The number of zero-day exploits used is unusual, as they are highly valued and malware creators do not typically make use of (and thus simultaneously make visible) four different zero-day exploits in the same worm. [ 23 ] Amongst these exploits were remote code execution on a computer with Printer Sharing enabled, [ 68 ] and the LNK/PIF vulnerability, [ 69 ] in which file execution is accomplished when an icon is viewed in Windows Explorer, negating the need for user interaction. [ 70 ] Stuxnet is unusually large at half a megabyte in size, [ 65 ] and written in several different programming languages (including C and C++ ) which is also irregular for malware. [ 15 ] [ 20 ] [ 62 ] The Windows component of the malware is promiscuous in that it spreads relatively quickly and indiscriminately. [ 45 ]
The malware has both user mode and kernel mode rootkit ability under Windows, [ 67 ] and its device drivers have been digitally signed with the private keys of two public key certificates that were stolen from separate well-known companies, JMicron and Realtek , both located at Hsinchu Science Park in Taiwan. [ 45 ] [ 65 ] The driver signing helped it install kernel mode rootkit drivers successfully without users being notified, and thus it remained undetected for a relatively long period of time. [ 71 ] Both compromised certificates have been revoked by Verisign .
Two websites in Denmark and Malaysia were configured as command and control servers for the malware, allowing it to be updated, and for industrial espionage to be conducted by uploading information. Both of these domain names have subsequently been redirected by their DNS service provider to Dynadot as part of a global effort to disable the malware. [ 67 ] [ 38 ]
According to researcher Ralph Langner, [ 72 ] [ 73 ] once installed on a Windows system, Stuxnet infects project files belonging to Siemens' WinCC / PCS 7 SCADA control software [ 74 ] (Step 7), and subverts a key communication library of WinCC called s7otbxdx.dll . Doing so intercepts communications between the WinCC software running under Windows and the target Siemens PLC devices, when the two are connected via a data cable. The malware is able to modify the code on PLC devices unnoticed, and subsequently to mask its presence from WinCC if the control software attempts to read an infected block of memory from the PLC system. [ 67 ]
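In effect, the reported s7otbxdx.dll subversion is a classic interposition: writes to the PLC are replaced with attacker code, while reads of infected blocks return previously recorded clean data. The minimal conceptual sketch below illustrates only that masking technique; the wrapper class and method names are assumptions for illustration and do not reflect the real library's interface.

```python
class InterposedS7Library:
    """Conceptual man-in-the-middle wrapper around a PLC comms library.
    Hypothetical interface; illustrates the masking technique only."""

    def __init__(self, real_library):
        self.real = real_library
        self.clean_copies = {}  # block id -> contents before infection

    def write_block(self, block_id, data):
        # Keep a clean copy so later reads can be spoofed, then send
        # attacker-modified code to the PLC instead of the original.
        self.clean_copies[block_id] = self.real.read_block(block_id)
        self.real.write_block(block_id, data)

    def read_block(self, block_id):
        # Mask the infection: if this block was modified, return the
        # recorded clean copy rather than what is actually on the PLC.
        if block_id in self.clean_copies:
            return self.clean_copies[block_id]
        return self.real.read_block(block_id)
```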
The malware furthermore used a zero-day exploit in the WinCC/SCADA database software in the form of a hard-coded database password. [ 75 ]
Stuxnet's payload targets only those SCADA configurations that meet criteria that it is programmed to identify. [ 38 ]
Stuxnet requires specific subordinate systems to be attached to the targeted Siemens S7-300 controller system: variable-frequency drives (frequency converter drives) and their associated modules. It only attacks those PLC systems with variable-frequency drives from two specific vendors: Vacon based in Finland and Fararo Paya based in Iran. [ 76 ] Furthermore, it monitors the frequency of the attached motors and only attacks systems that spin between 807 Hz and 1,210 Hz. This is a much higher frequency than motors typically operate at in most industrial applications, with the notable exception of gas centrifuges . [ 76 ] Stuxnet installs malware into memory block DB890 of the PLC that monitors the Profibus messaging bus of the system. [ 67 ] When certain criteria are met, it periodically modifies the frequency to 1,410 Hz, then to 2 Hz, and then to 1,064 Hz, thus affecting the operation of the connected motors by changing their rotational speed. [ 76 ] It also installs a rootkit – the first such documented case on this platform – that hides the malware on the system and masks the changes in rotational speed from monitoring systems.
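The vendor and frequency gating and the periodic frequency changes reported above can be condensed into a few lines of illustrative Python. The frequencies are those quoted by the cited analyses; the function itself and its scheduling are simplifying assumptions, not recovered worm logic.

```python
TARGET_VENDORS = {"Vacon", "Fararo Paya"}  # drives from other vendors are ignored
TARGET_BAND_HZ = (807, 1210)               # band typical of gas centrifuges

def attack_sequence(vendor: str, drive_hz: float) -> list:
    """Return the reported frequency-sabotage sequence, or [] if not a target."""
    if vendor not in TARGET_VENDORS:
        return []
    if not (TARGET_BAND_HZ[0] <= drive_hz <= TARGET_BAND_HZ[1]):
        return []
    return [1410,   # overspeed, stressing the rotors
            2,      # near-standstill
            1064]   # back to normal, masking the interference

print(attack_sequence("Vacon", 1064))  # -> [1410, 2, 1064]
print(attack_sequence("Vacon", 50))    # -> [] (ordinary industrial motor)
```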
Siemens has released a detection and removal tool for Stuxnet. Siemens recommends contacting customer support if an infection is detected and advises installing Microsoft updates for security vulnerabilities and prohibiting the use of third-party USB flash drives . [ 77 ] Siemens also advises immediately upgrading password access codes. [ 78 ]
The worm's ability to reprogram external PLCs may complicate the removal procedure. Symantec's Liam O'Murchu warns that fixing Windows systems may not fully solve the infection; a thorough audit of PLCs may be necessary. Despite speculation that incorrect removal of the worm could cause damage, [ 15 ] Siemens reports that in the first four months since discovery, the malware was successfully removed from the systems of 22 customers without any adverse effects. [ 77 ] [ 79 ]
Prevention of control system security incidents, [ 80 ] such as from viral infections like Stuxnet, is a topic being addressed in both the public and private sectors.
The US Department of Homeland Security National Cyber Security Division (NCSD) operates the Control System Security Program (CSSP). [ 81 ] The program operates a specialized computer emergency response team called the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), conducts a biannual conference ( ICSJWG ), provides training, publishes recommended practices, and provides a self-assessment tool. As part of a Department of Homeland Security plan to improve American computer security, in 2008 it and the Idaho National Laboratory (INL) worked with Siemens to identify security holes in the company's widely used Process Control System 7 (PCS 7) and its software Step 7. In July 2008, INL and Siemens publicly announced flaws in the control system at a Chicago conference; Stuxnet exploited these holes in 2009. [ 61 ]
Several industry organizations [ 82 ] [ 83 ] and professional societies [ 84 ] [ 85 ] have published standards and best practice guidelines providing direction and guidance for control system end-users on how to establish a control system security management program. The basic premise that all of these documents share is that prevention requires a multi-layered approach, often termed defense in depth . [ 86 ] The layers include policies and procedures, awareness and training, network segmentation , access control measures, physical security measures, system hardening (e.g., patch management ), and system monitoring, anti-virus software and intrusion prevention systems (IPS). The standards and best practices [ who? ] also all [ improper synthesis? ] recommend starting with a risk analysis and a control system security assessment. [ 87 ] [ 88 ]
Experts [ who? ] believe that Stuxnet required the largest and costliest development effort in malware history. [ 38 ] Developing its abilities would have required a team of capable programmers, in-depth knowledge of industrial processes , and an interest in attacking industrial infrastructure. [ 15 ] [ 20 ] Eric Byres, who has years of experience maintaining and troubleshooting Siemens systems, told Wired that writing the code would have taken many man-months, if not man-years. [ 65 ] Symantec estimates that the group developing Stuxnet would have consisted of between five and thirty people, and would have taken six months to prepare. [ 89 ] [ 38 ] The Guardian , the BBC and The New York Times all claimed that (unnamed) experts studying Stuxnet believe the complexity of the code indicates that only a nation-state would have the abilities to produce it. [ 23 ] [ 89 ] [ 90 ] The self-destruct and other safeguards within the code implied that a Western government was responsible, or at least was responsible for its development. [ 38 ] However, software security expert Bruce Schneier initially condemned the 2010 news coverage of Stuxnet as hype, stating that it was almost entirely based on speculation. [ 91 ] But after subsequent research, Schneier stated in 2012 that "we can now conclusively link Stuxnet to the centrifuge structure at the Natanz nuclear enrichment lab in Iran". [ 92 ]
In January 2024, de Volkskrant reported that Dutch engineer Erik van Sabben was the saboteur who had infiltrated the underground nuclear complex in the city of Natanz and installed equipment infected with Stuxnet. [ 93 ]
Ralph Langner, the researcher who identified that Stuxnet infected PLCs, [ 21 ] first speculated publicly in September 2010 that the malware was of Israeli origin, and that it targeted Iranian nuclear facilities. [ 94 ] However, more recently, at a TED conference recorded in February 2011, Langner stated: "My opinion is that the Mossad is involved, but that the leading force is not Israel. The leading force behind Stuxnet is the cyber superpower – there is only one; and that's the United States." [ 95 ] Kevin Hogan, Senior Director of Security Response at Symantec, reported that most infected systems were in Iran (about 60%), [ 96 ] which has led to speculation that it may have been deliberately targeting "high-value infrastructure" in Iran [ 23 ] including either the Bushehr Nuclear Power Plant or the Natanz nuclear facility . [ 65 ] [ 97 ] [ 98 ] Langner called the malware "a one-shot weapon" and said that the intended target was probably hit, [ 99 ] although he admitted this was speculation. [ 65 ] Another German researcher and spokesman of the German-based Chaos Computer Club , Frank Rieger, was the first to speculate that Natanz was the target. [ 38 ]
According to the Israeli newspaper Haaretz , in September 2010 experts on Iran and computer security specialists were increasingly convinced that Stuxnet was meant "to sabotage the uranium enrichment facility at Natanz – where the centrifuge operational capacity had dropped over the past year by 30 percent." [ 100 ] On 23 November 2010 it was announced that uranium enrichment at Natanz had ceased several times because of a series of major technical problems. [ 101 ] A "serious nuclear accident" (supposedly the shutdown of some of its centrifuges [ 102 ] ) occurred at the site in the first half of 2009, which is speculated to have forced Gholam Reza Aghazadeh , the head of the Atomic Energy Organization of Iran (AEOI), to resign. [ 103 ] Statistics published by the Federation of American Scientists (FAS) show that the number of enrichment centrifuges operational in Iran mysteriously declined from about 4,700 to about 3,900 beginning around the time the nuclear incident WikiLeaks mentioned would have occurred. [ 104 ] The Institute for Science and International Security (ISIS) suggests, in a report published in December 2010, that Stuxnet is a reasonable explanation for the apparent damage [ 105 ] at Natanz, and may have destroyed up to 1,000 centrifuges (10 percent) sometime between November 2009 and late January 2010. The authors conclude:
The attacks seem designed to force a change in the centrifuge's rotor speed, first raising the speed and then lowering it, likely with the intention of inducing excessive vibrations or distortions that would destroy the centrifuge. If its goal was to quickly destroy all the centrifuges in the FEP [Fuel Enrichment Plant], Stuxnet failed. But if the goal was to destroy a more limited number of centrifuges and set back Iran's progress in operating the FEP, while making detection difficult, it may have succeeded, at least temporarily. [ 105 ]
The Institute for Science and International Security (ISIS) report further notes that Iranian authorities have attempted to conceal the breakdown by installing new centrifuges on a large scale. [ 105 ] [ 106 ]
The worm worked by first causing an infected Iranian IR-1 centrifuge to increase from its normal operating speed of 1,064 hertz to 1,410 hertz for 15 minutes before returning to its normal frequency. Twenty-seven days later, the worm went back into action, slowing the infected centrifuges down to a few hundred hertz for a full 50 minutes. The stresses from the excessive, then slower, speeds caused the aluminium centrifugal tubes to expand, often forcing parts of the centrifuges into sufficient contact with each other to destroy the machine. [ 107 ]
According to The Washington Post , International Atomic Energy Agency (IAEA) cameras installed in the Natanz facility recorded the sudden dismantling and removal of approximately 900–1,000 centrifuges during the time the Stuxnet worm was reportedly active at the plant. Iranian technicians, however, were able to quickly replace the centrifuges and the report concluded that uranium enrichment was likely only briefly disrupted. [ 108 ]
On 15 February 2011, the Institute for Science and International Security released a report concluding that:
Assuming Iran exercises caution, Stuxnet is unlikely to destroy more centrifuges at the Natanz plant. Iran likely cleaned the malware from its control systems. To prevent re-infection, Iran will have to exercise special caution since so many computers in Iran contain Stuxnet.
Although Stuxnet appears to be designed to destroy centrifuges at the Natanz facility, destruction was by no means total. Moreover, Stuxnet did not lower the production of low enriched uranium (LEU) during 2010. LEU quantities could have certainly been greater, and Stuxnet could be an important part of the reason why they did not increase significantly. Nonetheless, there remain important questions about why Stuxnet destroyed only 1,000 centrifuges. One observation is that it may be harder to destroy centrifuges by use of cyber attacks than often believed. [ 109 ]
The Associated Press reported that the semi-official Iranian Students News Agency released a statement on 24 September 2010 stating that experts from the Atomic Energy Organization of Iran met in the previous week to discuss how Stuxnet could be removed from their systems. [ 19 ] According to analysts, such as David Albright , Western intelligence agencies had been attempting to sabotage the Iranian nuclear program for some time. [ 110 ] [ 111 ]
The head of the Bushehr Nuclear Power Plant told Reuters that only the personal computers of staff at the plant had been infected by Stuxnet and the state-run newspaper Iran Daily quoted Reza Taghipour , Iran's telecommunications minister, as saying that it had not caused "serious damage to government systems". [ 90 ] The Director of Information Technology Council at the Iranian Ministry of Industries and Mines, Mahmud Liaii, has said that: "An electronic war has been launched against Iran... This computer worm is designed to transfer data about production lines from our industrial plants to locations outside Iran." [ 112 ]
In response to the infection, Iran assembled a team to combat it. With more than 30,000 IP addresses affected in Iran, an official said that the infection was spreading fast in Iran and that the problem had been compounded by the ability of Stuxnet to mutate. Iran had set up its own systems to clean up infections and had advised against using the Siemens SCADA antivirus, since it is suspected that the antivirus contains embedded code which updates Stuxnet instead of removing it. [ 113 ] [ 114 ] [ 115 ] [ 116 ]
According to Hamid Alipour, deputy head of Iran's government Information Technology Company, "The attack is still ongoing and new versions of this virus are spreading." He reported that his company had begun the cleanup process at Iran's "sensitive centres and organizations." [ 114 ] "We had anticipated that we could root out the virus within one to two months, but the virus is not stable, and since we started the cleanup process three new versions of it have been spreading", he told the Islamic Republic News Agency on 27 September 2010. [ 116 ]
On 29 November 2010, Iranian president Mahmoud Ahmadinejad stated for the first time that a computer virus had caused problems with the controller handling the centrifuges at its Natanz facilities. According to Reuters, he told reporters at a news conference in Tehran, "They succeeded in creating problems for a limited number of our centrifuges with the software they had installed in electronic parts." [ 117 ] [ 118 ]
On the same day, two Iranian nuclear scientists were targeted in separate, but nearly simultaneous, car bomb attacks near Shahid Beheshti University in Tehran. Majid Shahriari , a quantum physicist, was killed. Fereydoon Abbasi , a high-ranking official at the Ministry of Defense, was seriously wounded. Wired speculated that the assassinations could indicate that whoever was behind Stuxnet felt that it was not sufficient to stop the nuclear program. [ 119 ] That same Wired article suggested the Iranian government could have been behind the assassinations. [ 119 ] In January 2010, another Iranian nuclear scientist, a physics professor at Tehran University , was killed in a similar bomb explosion. [ 119 ] On 11 January 2012, a director of the Natanz nuclear enrichment facility, Mostafa Ahmadi Roshan , was killed in an attack quite similar to the one that killed Shahriari. [ 120 ]
An analysis by the FAS demonstrates that Iran's enrichment capacity grew during 2010. The study indicated that Iran's centrifuges appeared to be performing 60% better than in the previous year, which would significantly reduce Tehran's time to produce bomb-grade uranium. The FAS report was reviewed by an official with the IAEA who affirmed the study. [ 121 ] [ 122 ] [ 123 ]
European and US officials, along with private experts, told Reuters that Iranian engineers were successful in neutralizing and purging Stuxnet from their country's nuclear machinery. [ 124 ]
Given the growth in Iranian enrichment ability in 2010, the country may have intentionally put out misinformation to cause Stuxnet's creators to believe that the worm was more successful in disabling the Iranian nuclear program than it actually was. [ 38 ]
Israel , through Unit 8200 , [ 125 ] [ 126 ] has been speculated to be the country behind Stuxnet in multiple media reports [ 89 ] [ 102 ] [ 127 ] and by experts such as Richard A. Falkenrath , former Senior Director for Policy and Plans within the US Office of Homeland Security . [ 128 ] [ 90 ] Yossi Melman, who covers intelligence for Israeli newspaper Haaretz and wrote a book about Israeli intelligence, also suspected that Israel was involved, noting that Meir Dagan , the former (up until 2011) head of the national intelligence agency Mossad , had his term extended in 2009 because he was said to be involved in important projects. Additionally, in 2010 Israel grew to expect that Iran would have a nuclear weapon in 2014 or 2015 – at least three years later than earlier estimates – without the need for an Israeli military attack on Iranian nuclear facilities; "They seem to know something, that they have more time than originally thought", he added. [ 27 ] [ 61 ] Israel has not publicly commented on the Stuxnet attack but in 2010 confirmed that cyberwarfare was now among the pillars of its defense doctrine, with a military intelligence unit set up to pursue both defensive and offensive options. [ 129 ] [ 130 ] [ 131 ] When questioned whether Israel was behind the virus in the fall of 2010, some Israeli officials [ who? ] broke into "wide smiles", fueling speculation that the government of Israel was involved with its genesis. [ 132 ] American presidential advisor Gary Samore also smiled when Stuxnet was mentioned, [ 61 ] although American officials have suggested that the virus originated abroad. [ 132 ] According to The Telegraph , Israeli newspaper Haaretz reported that a video celebrating operational successes of Gabi Ashkenazi , retiring Israel Defense Forces (IDF) Chief of Staff, was shown at his retirement party and included references to Stuxnet, thus strengthening claims that Israel's security forces were responsible. [ 133 ]
In 2009, a year before Stuxnet was discovered, Scott Borg of the United States Cyber-Consequences Unit (US-CCU) [ 134 ] suggested that Israel may prefer to mount a cyberattack rather than a military strike on Iran's nuclear facilities. [ 111 ] In late 2010 Borg stated, "Israel certainly has the ability to create Stuxnet and there is little downside to such an attack because it would be virtually impossible to prove who did it. So a tool like Stuxnet is Israel's obvious weapon of choice." [ 135 ] Iran uses P-1 centrifuges at Natanz, the design for which A. Q. Khan stole in 1976 and took to Pakistan. His black market nuclear-proliferation network sold P-1s to, among other customers, Iran. Experts believe that Israel also somehow acquired P-1s and tested Stuxnet on the centrifuges, installed at the Dimona facility that is part of its own nuclear program . [ 61 ] The equipment may be from the United States, which received P-1s from Libya's former nuclear program . [ 136 ] [ 61 ]
Some have also cited several clues in the code such as a concealed reference to the word MYRTUS , believed to refer to the Latin name myrtus of the Myrtle tree, which in Hebrew is called hadassah . Hadassah was the birth name of the former Jewish queen of Persia, Queen Esther . [ 137 ] [ 138 ] However, it may be that the "MYRTUS" reference is simply a misinterpreted reference to SCADA components known as RTUs (Remote Terminal Units), and that this reference is actually "My RTUs", a management feature of SCADA. [ 139 ] Also, the number 19790509 appears once in the code and may refer to the date 9 May 1979, the day Habib Elghanian , a Persian Jew, was executed in Tehran . [ 67 ] [ 140 ] [ 141 ] Another date that appears in the code is "24 September 2007", the day that Iran's president Mahmoud Ahmadinejad spoke at Columbia University and made comments questioning the validity of the Holocaust . [ 38 ] Such data is not conclusive, since, as noted by Symantec, "...attackers would have the natural desire to implicate another party". [ 67 ]
There have also been reports on the involvement of the United States and its collaboration with Israel, [ 142 ] [ 143 ] with one report stating that "there is vanishingly little doubt that [it] played a role in creating the worm." [ 38 ] It has been reported that the United States, under one of its most secret programs, initiated by the Bush administration and accelerated by the Obama administration , [ 144 ] has sought to destroy Iran's nuclear program by novel methods such as undermining Iranian computer systems. A leaked diplomatic cable showed how the United States was advised to target Iran's nuclear abilities through 'covert sabotage'. [ 145 ] An article in The New York Times in January 2009 credited a then-unspecified program with preventing an Israeli military attack on Iran, where some of the efforts focused on ways to destabilize the centrifuges. [ 146 ] A Wired article claimed that Stuxnet "is believed to have been created by the United States". [ 147 ] Dutch historian Peter Koop speculated that the NSA's Tailored Access Operations unit could have developed Stuxnet, possibly in collaboration with Israel. [ 148 ]
Some credibility is lent to these claims by the fact that John Bumgarner, a former intelligence officer and member of the United States Cyber-Consequences Unit (US-CCU), published an article prior to Stuxnet being discovered or deciphered that outlined a strategic cyber strike on centrifuges [ 149 ] and suggested that cyber attacks are permissible against nation states operating uranium enrichment programs in violation of international treaties. Bumgarner pointed out that the centrifuges used to process fuel for nuclear weapons are a key target for cybertage operations and that they can be made to destroy themselves by manipulating their rotational speeds. [ 150 ]
In a March 2012 interview with 60 Minutes , retired US Air Force General Michael Hayden – who served as director of both the Central Intelligence Agency and National Security Agency – while denying knowledge of who created Stuxnet said that he believed it had been "a good idea" but that it carried a downside in that it had legitimized the use of sophisticated cyber weapons designed to cause physical damage. Hayden said, "There are those out there who can take a look at this... and maybe even attempt to turn it to their own purposes". In the same report, Sean McGurk, a former cybersecurity official at the Department of Homeland Security noted that the Stuxnet source code could now be downloaded online and modified to be directed at new target systems. Speaking of the Stuxnet creators, he said, "They opened the box. They demonstrated the capability... It's not something that can be put back." [ 151 ]
In April 2011, Iranian government official Gholam Reza Jalali stated that an investigation had concluded that the United States and Israel were behind the Stuxnet attack. [ 152 ] Frank Rieger stated that three European countries' intelligence agencies agreed that Stuxnet was a joint United States-Israel effort. The code for the Windows injector and the PLC payload differ in style, likely implying collaboration. Other experts believe that a US-Israel cooperation is unlikely because "the level of trust between the two countries' intelligence and military establishments is not high." [ 38 ]
A Wired magazine article about US General Keith B. Alexander stated: "And he and his cyber warriors have already launched their first attack. The cyber weapon that came to be known as Stuxnet was created and built by the NSA in partnership with the CIA and Israeli intelligence in the mid-2000s." [ 153 ]
China , [ 154 ] Jordan , and France are other possibilities, and Siemens may have also participated. [ 38 ] [ 142 ] Langner speculated that the infection may have spread from USB drives belonging to Russian contractors since the Iranian targets were not accessible via the Internet. [ 21 ] [ 155 ] In 2019, it was reported that an Iranian mole working for Dutch intelligence at the behest of Israel and the CIA inserted the Stuxnet virus with a USB flash drive or convinced another person working at the Natanz facility to do so. [ 156 ] [ 157 ]
Sandro Gaycken from the Free University Berlin argued that the attack on Iran was a ruse to distract from Stuxnet's real purpose. According to him, its broad dissemination in more than 100,000 industrial plants worldwide suggests a field test of a cyber weapon in different security cultures, testing their preparedness, resilience, and reactions, all highly valuable information for a cyberwar unit. [ 158 ]
The United Kingdom has denied involvement in the worm's creation. [ 159 ]
In July 2013, Edward Snowden claimed that Stuxnet was cooperatively developed by the United States and Israel. [ 160 ]
According to a report by Reuters, the NSA also tried to sabotage North Korea 's nuclear program using a version of Stuxnet. The operation was reportedly launched in tandem with the attack that targeted Iranian centrifuges in 2009–10. The North Korean nuclear program shares a number of similarities with the Iranian, both having been developed with technology transferred by Pakistani nuclear scientist A.Q. Khan . The effort failed, however, because North Korea's extreme secrecy and isolation made it impossible to introduce Stuxnet into the nuclear facility. [ 161 ]
In 2018, Gholamreza Jalali, Iran's chief of the National Passive Defence Organisation (NPDO), claimed that his country had fended off a Stuxnet-like attack targeting the country's telecom infrastructure. Iran's Telecommunications minister Mohammad-Javad Azari Jahromi has since accused Israel of orchestrating the attack. Iran plans to sue Israel through the International Court of Justice (ICJ) and is also willing to launch a retaliatory attack if Israel does not desist. [ 162 ]
A November 2013 article [ 163 ] in Foreign Policy magazine claims the existence of an earlier, much more sophisticated attack on the centrifuge complex at Natanz, focused on increasing centrifuge failure rates over a long time period by stealthily inducing uranium hexafluoride gas overpressure incidents. This malware was capable of spreading only by being physically installed, probably by previously contaminated field equipment used by contractors working on Siemens control systems within the complex. It is not clear whether this attack attempt was successful, but its follow-up by a different, simpler, and more conventional attack suggests that it was not. [ citation needed ]
On 1 September 2011, a new worm was found, thought to be related to Stuxnet. The Laboratory of Cryptography and System Security (CrySyS) of the Budapest University of Technology and Economics analyzed the malware, naming the threat Duqu . [ 164 ] [ 165 ] Symantec , based on this report, continued the analysis of the threat, calling it "nearly identical to Stuxnet, but with a completely different purpose", and published a detailed technical paper. [ 166 ] The main component used in Duqu is designed to capture information [ 62 ] such as keystrokes and system information. The exfiltrated data may be used to enable a future Stuxnet-like attack. On 28 December 2011, Kaspersky Lab's director of global research and analysis spoke to Reuters about recent research results showing that the platform on which both Stuxnet and Duqu were built originated in 2007 and is referred to as Tilded, due to the ~d at the beginning of the file names. Also uncovered in this research was the possibility of three more variants based on the Tilded platform. [ 167 ]
In May 2012, the new malware "Flame" was found, thought to be related to Stuxnet. [ 168 ] Researchers named the program "Flame" after the name of one of its modules. [ 168 ] After analysing the code of Flame, Kaspersky Lab said that there is a strong relationship between Flame and Stuxnet. An early version of Stuxnet contained code to propagate infections via USB drives that is nearly identical to a Flame module that exploits the same vulnerability. [ 169 ]
Since 2010, there has been extensive international media coverage on Stuxnet and its aftermath. In early commentary, The Economist pointed out that Stuxnet was "a new kind of cyber-attack." [ 170 ] On 8 July 2011, Wired then published an article detailing how network security experts were able to decipher the origins of Stuxnet. In that piece, Kim Zetter claimed that Stuxnet's "cost–benefit ratio is still in question." [ 171 ] Later commentators tended to focus on the strategic significance of Stuxnet as a cyber weapon. Following the Wired piece, Holger Stark called Stuxnet the "first digital weapon of geopolitical importance, it could change the way wars are fought." [ 172 ] Meanwhile, Eddie Walsh referred to Stuxnet as "the world's newest high-end asymmetric threat." [ 173 ] Ultimately, some claim that the "extensive media coverage afforded to Stuxnet has only served as an advertisement for the vulnerabilities used by various cybercriminal groups." [ 174 ] While that may be the case, the media coverage has also increased awareness of cyber security threats.
Alex Gibney 's 2016 documentary Zero Days covers the phenomenon around Stuxnet. [ 175 ] A zero-day (also known as 0-day) vulnerability is a computer-software vulnerability that is unknown to, or unaddressed by, those who should be interested in mitigating the vulnerability (including the vendor of the target software). Until the vulnerability is mitigated, hackers can exploit it to adversely affect computer programs, data, additional computers or a network.
In 2016, it was revealed that General James Cartwright , the former head of the U.S. Strategic Command, had leaked information related to Stuxnet. He later pleaded guilty to lying to FBI agents pursuing an investigation into the leak. [ 176 ] [ 177 ] On 17 January 2017, he was granted a full pardon in this case by President Obama, thus expunging his conviction.
Besides the aforementioned Alex Gibney documentary Zero Days (2016), which looks into the malware and the cyberwarfare surrounding it, other works which reference Stuxnet include: | https://en.wikipedia.org/wiki/Stuxnet |
Stygofauna are any fauna that live in groundwater systems or aquifers, such as caves , fissures and vugs . Stygofauna and troglofauna are the two types of subterranean fauna (based on life-history). Both are associated with subterranean environments – stygofauna are associated with water, and troglofauna with caves and spaces above the water table . Stygofauna can live within freshwater aquifers and within the pore spaces of limestone , calcrete or laterite , whilst larger animals can be found in cave waters and wells. Stygofaunal animals, like troglofauna, are divided into three groups based on their life history: stygophiles, stygoxenes, and stygobites.
Extensive research of stygofauna has been undertaken in countries with ready access to caves and wells such as France , Slovenia , the US and, more recently, Australia . Many species of stygofauna, particularly obligate stygobites, are endemic to specific regions or even individual caves. This makes them an important focus for the conservation of groundwater systems.
Stygofauna have adapted to the limited food supply and are extremely energy efficient. Stygofauna feed on plankton, bacteria, and plants found in streams. [ 2 ]
To survive in an environment where food is scarce and oxygen levels are low, stygofauna often have very low metabolism . As a result, stygofauna may live longer than other terrestrial species. For example, the crayfish Orconectes australis from Shelta Cave in Alabama has been estimated to reproduce at 100 years and live to 175, [ 3 ] although more recent research suggests their lifespan is closer to 22 years. [ 4 ]
Stygofauna are found all over the world and include turbellarians , gastropods , isopods , amphipods , decapods , fishes , and salamanders .
Stygofaunal gastropods are found in the U.S., Europe, Japan, [ 5 ] and Australia. Stygobite turbellarians can be found in North America, Europe and Japan. [ 5 ] Stygobite isopods, amphipods and decapods are found widely around the world.
Cave salamanders are found in Europe and the U.S., but only some of these (such as the olm and Texas blind salamander ) are entirely aquatic.
The approximately 170 species of stygobite fish, popularly known as cavefish , are found in all continents, except Antarctica, but with major geographical differences in the species richness . [ 6 ] [ 7 ]
Several methods are currently used to sample stygofauna. The accepted method is to lower a haul net (a weighted plankton net with a minimum 50 μm mesh size) to the bottom of the bore, well or sinkhole and jiggle it to agitate sediments at the base of the bore. The net is then slowly retrieved, filtering stygofauna out of the water column on the upward haul. [ 8 ] A more destructive method is to pump bore water (using a Bou-Rouch pump) through a net on the surface (referred to as the Karaman-Chappuis method). [ 8 ] [ 9 ] These two methods provide animals for morphological and molecular analyses. A video camera can also be used down the hole, providing information on the life history of the organisms, but given the small size of the animals, no species determinations can be made. | https://en.wikipedia.org/wiki/Stygofauna |
Stylebase for Eclipse is a free and open-source tool for software architects and designers. The tool is an extension to Eclipse , the most widely used open source integrated development environment . Stylebase is a knowledge base of architectural models, e.g. architectural patterns , design patterns , reference architectures , macro- and microarchitectures. Stylebase for Eclipse is a tool for browsing and maintaining the stylebase.
Stylebase for Eclipse assists in quality- and model-driven architecture design. Quality-driven architecture design relies on the assumption that architectural patterns and styles, and also design patterns , embody different quality attributes . When patterns are applied in the architecture, the quality characteristics of the selected patterns are reflected in the entire software architecture. Stylebase for Eclipse helps an architect in selecting styles and patterns which best promote the system's desired quality goals. Stylebase also aims to improve knowledge sharing and the reuse of architectural models in local and distributed development teams. Thereby, using the tool is claimed to increase both the productivity of development teams and the quality of software products. [ 1 ]
The Stylebase for Eclipse open source community is coordinated by VTT Technical Research Centre of Finland . The tool can be modified and redistributed under the terms of the GNU General Public Licence . The first version of the tool was published in October 2006 and four new releases have been published since then. Stylebase for Eclipse is in an early phase of its life cycle; the currently available release is still a Beta version . | https://en.wikipedia.org/wiki/Stylebase_for_Eclipse |
Styphnic acid (from Greek stryphnos "astringent" [ 1 ] ), or 2,4,6-trinitro-1,3-benzenediol, is a yellow astringent acid that forms hexagonal crystals . It is used in the manufacture of dyes , pigments , inks, medicines, and explosives such as lead styphnate . It is itself a low-sensitivity explosive, similar to picric acid , but explodes upon rapid heating. [ 2 ]
It was discovered in 1808 by Michel Eugène Chevreul , who was researching ways of producing colorants from tropical logwoods. [ 3 ] Upon boiling Pernambuco wood extract with nitric acid, he filtered out crystals understood to be styphnic acid in an impure form. [ 4 ] In the mid-1840s, chemists purified and systematically studied the substance, with Rudolf Christian Böttger and Heinrich Will giving it its modern name, [ 5 ] while in 1871 J. Schreder proved that it is trinitroresorcinol. [ 6 ] [ 7 ]
It may be prepared by the nitration of resorcinol with a mixture of nitric and sulfuric acid . [ 8 ]
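Written as a balanced equation, the nitration described above is (a standard textbook formulation, not taken from the cited source):

$$\mathrm{C_6H_4(OH)_2 + 3\,HNO_3 \longrightarrow C_6H(NO_2)_3(OH)_2 + 3\,H_2O}$$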
This compound is an example of a trinitrophenol .
Like picric acid, it is a moderately strong acid, capable of displacing carbon dioxide from solutions of sodium carbonate , for example.
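For instance, with sodium carbonate the displacement of carbon dioxide can be written (taking styphnic acid as C6H3N3O8 and showing only the first dissociation, which gives the monosodium salt; a conventional acid–carbonate equation offered for illustration):

$$\mathrm{2\,C_6H_3N_3O_8 + Na_2CO_3 \longrightarrow 2\,NaC_6H_2N_3O_8 + H_2O + CO_2}$$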
It may be reacted with weakly basic oxides, such as those of lead and silver, to form the corresponding salts.
The solubility of picric acid and styphnic acid in water is less than their corresponding mono- and di-nitro compounds, and far less than their non-nitrated precursor phenols, so they may be purified by fractional crystallisation. | https://en.wikipedia.org/wiki/Styphnic_acid |
Styrene is an organic compound with the chemical formula C 6 H 5 CH=CH 2 . Its structure consists of a vinyl group as substituent on benzene . Styrene is a colorless, oily liquid , although aged samples can appear yellowish. The compound evaporates easily and has a sweet smell, although high concentrations have a less pleasant odor. [ vague ] Styrene is the precursor to polystyrene and several copolymers, and is typically made from benzene for this purpose. Approximately 25 million tonnes of styrene were produced in 2010, [ 6 ] increasing to around 35 million tonnes by 2018.
Styrene is named after storax balsam (often commercially sold as styrax ), the resin of Liquidambar trees of the Altingiaceae plant family. Styrene occurs naturally in small quantities in some plants and foods ( cinnamon , coffee beans , balsam trees and peanuts ) [ 7 ] and is also found in coal tar .
In 1839, the German apothecary Eduard Simon isolated a volatile liquid from the resin (called storax or styrax (Latin)) of the American sweetgum tree ( Liquidambar styraciflua ). He called the liquid "styrol" (now called styrene). [ 8 ] [ 9 ] He also noticed that when styrol was exposed to air, light, or heat, it gradually transformed into a hard, rubber-like substance, which he called "styrol oxide". [ 10 ]
By 1845, the German chemist August Wilhelm von Hofmann and his student John Buddle Blyth had determined styrene's empirical formula : C 8 H 8 . [ 11 ] They had also determined that Simon's "styrol oxide"—which they renamed "metastyrol"—had the same empirical formula as styrene. [ 12 ] Furthermore, they could obtain styrene by dry-distilling "metastyrol". [ 13 ]
In 1865, the German chemist Emil Erlenmeyer found that styrene could form a dimer , [ 14 ] and in 1866 the French chemist Marcelin Berthelot stated that "metastyrol" was a polymer of styrene (i.e. polystyrene ). [ 15 ] Meanwhile, other chemists had been investigating another component of storax, namely, cinnamic acid . They had found that cinnamic acid could be decarboxylated to form "cinnamene" (or "cinnamol"), which appeared to be styrene.
In 1845, French chemist Emil Kopp suggested that the two compounds were identical, [ 16 ] and in 1866, Erlenmeyer suggested that both "cinnamol" and styrene might be vinylbenzene. [ 17 ] However, the styrene that was obtained from cinnamic acid seemed different from the styrene that was obtained by distilling storax resin: the latter was optically active . [ 18 ] Eventually, in 1876, the Dutch chemist van 't Hoff resolved the ambiguity: the optical activity of the styrene that was obtained by distilling storax resin was due to a contaminant. [ 19 ]
The vast majority of styrene is produced from ethylbenzene , [ 20 ] and almost all ethylbenzene produced worldwide is intended for styrene production. As such, the two production processes are often highly integrated. Ethylbenzene is produced via a Friedel–Crafts reaction between benzene and ethene ; originally this used aluminum chloride as a catalyst , but in modern production this has been replaced by zeolites .
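The alkylation step can be summarised as:

$$\mathrm{C_6H_6 + CH_2{=}CH_2 \longrightarrow C_6H_5CH_2CH_3}$$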
Around 80% of styrene is produced by the dehydrogenation of ethylbenzene . This is achieved using superheated steam (up to 600 °C) over an iron(III) oxide catalyst. [ 21 ] The reaction is highly endothermic and reversible, with a typical yield of 88–94%.
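The reversible dehydrogenation is:

$$\mathrm{C_6H_5CH_2CH_3 \rightleftharpoons C_6H_5CH{=}CH_2 + H_2}$$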
The crude ethylbenzene/styrene product is then purified by distillation. As the difference in boiling points between the two compounds is only 9 °C at ambient pressure, this necessitates the use of a series of distillation columns. This is energy intensive and is further complicated by the tendency of styrene to undergo thermally induced polymerisation into polystyrene, [ 22 ] requiring the continuous addition of a polymerization inhibitor to the system.
Styrene is also co-produced commercially in a process known as POSM ( Lyondell Chemical Company ) or SM/PO ( Shell ) for styrene monomer / propylene oxide . In this process, ethylbenzene is treated with oxygen to form the ethylbenzene hydroperoxide . This hydroperoxide is then used to oxidize propylene to propylene oxide, which is also recovered as a co-product. The remaining 1-phenylethanol is dehydrated to give styrene:
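$$\mathrm{C_6H_5CH(OH)CH_3 \longrightarrow C_6H_5CH{=}CH_2 + H_2O}$$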
Extraction from pyrolysis gasoline is performed on a limited scale. [ 20 ]
Styrene can be produced from toluene and methanol , which are cheaper raw materials than those in the conventional process. This process has suffered from low selectivity associated with the competing decomposition of methanol. [ 23 ] Exelus Inc. claims to have developed this process with commercially viable selectivities, at 400–425 °C and atmospheric pressure, by forcing these components through a proprietary zeolitic catalyst. It is reported [ 24 ] that an approximately 9:1 mixture of styrene and ethylbenzene is obtained, with a total styrene yield of over 60%. [ 25 ]
Another route to styrene involves the reaction of benzene and ethane . This process is being developed by Snamprogetti and Dow. Ethane, along with ethylbenzene, is fed to a dehydrogenation reactor with a catalyst capable of simultaneously producing styrene and ethylene. The dehydrogenation effluent is cooled and separated and the ethylene stream is recycled to the alkylation unit. The process attempts to overcome previous shortcomings in earlier attempts to develop production of styrene from ethane and benzene, such as inefficient recovery of aromatics, production of high levels of heavies and tars, and inefficient separation of hydrogen and ethane. Development of the process is ongoing. [ 26 ]
A laboratory synthesis of styrene entails the decarboxylation of cinnamic acid : [ 27 ]
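$$\mathrm{C_6H_5CH{=}CHCO_2H \longrightarrow C_6H_5CH{=}CH_2 + CO_2}$$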
Styrene was first prepared by this method. [ 28 ]
The presence of the vinyl group allows styrene to polymerize . Commercially significant products include polystyrene , acrylonitrile butadiene styrene (ABS), styrene-butadiene rubber (SBR), styrene-butadiene latex, SIS (styrene-isoprene-styrene), S-EB-S (styrene-ethylene/butylene-styrene), styrene- divinylbenzene (S-DVB), styrene-acrylonitrile resin (SAN), and unsaturated polyesters used in resins and thermosetting compounds . These materials are used in rubber, plastic, insulation, fiberglass , pipes, automobile and boat parts, food containers, and carpet backing.
As a liquid or a gas, pure styrene will polymerise spontaneously to polystyrene, without the need of external initiators . [ 29 ] This is known as autopolymerisation . At 100 °C it will autopolymerise at a rate of ~2% per hour, and more rapidly than this at higher temperatures. [ 22 ] As the autopolymerisation reaction is exothermic it can be self-accelerating, with a real risk of a thermal runaway , potentially leading to an explosion. Examples include the 2019 explosion of the tanker Stolt Groenland , [ 30 ] explosions at the Phillips Petroleum Company in 1999 and 2000, and overheating styrene tanks leading to the 2020 Visakhapatnam gas leak , which killed several people. [ 31 ] [ 32 ] The autopolymerisation reaction can only be kept in check by the continuous addition of polymerisation inhibitors .
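To illustrate the scale of the hazard with back-of-envelope arithmetic (treating the quoted 2% per hour as a constant fractional rate, which understates the true self-accelerating behavior): the monomer fraction remaining after $t$ hours at 100 °C is roughly

$$x(t) \approx 0.98^{\,t}, \qquad x(24) \approx 0.98^{24} \approx 0.62,$$

i.e. nearly 40% conversion within a single day, even before self-heating is taken into account.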
Styrene is regarded as a "known carcinogen ", especially in the case of eye contact, but also in cases of skin contact, ingestion and inhalation, according to several sources. [ 20 ] [ 33 ] [ 34 ] [ 35 ] Styrene is largely metabolized into styrene oxide in humans, resulting from oxidation by cytochrome P450 . Styrene oxide is considered toxic , mutagenic , and possibly carcinogenic . Styrene oxide is subsequently hydrolyzed in vivo to styrene glycol by the enzyme epoxide hydrolase . [ 36 ] The US Environmental Protection Agency (EPA) has described styrene to be "a suspected toxin to the gastrointestinal tract, kidney, and respiratory system, among others". [ 37 ] [ 38 ]
On 10 June 2011, the US National Toxicology Program described styrene as "reasonably anticipated to be a human carcinogen". [ 39 ] [ 40 ] However, a STATS author describes [ 41 ] a review of the scientific literature which concluded that "The available epidemiologic evidence does not support a causal relationship between styrene exposure and any type of human cancer". [ 42 ] Despite this claim, work has been done by Danish researchers to investigate the relationship between occupational exposure to styrene and cancer. They concluded, "The findings have to be interpreted with caution, due to the company based exposure assessment, but the possible association between exposures in the reinforced plastics industry , mainly styrene, and degenerative disorders of the nervous system and pancreatic cancer, deserves attention". [ 43 ] In 2012, the Danish EPA concluded that the styrene data do not support a cancer concern for styrene. [ 44 ] The US EPA does not have a cancer classification for styrene, [ 45 ] but it has been the subject of their Integrated Risk Information System (IRIS) program. [ 46 ]
The National Toxicology Program of the US Department of Health and Human Services has determined that styrene is "reasonably anticipated to be a human carcinogen". [ 47 ] Various regulatory bodies refer to styrene, in various contexts, as a possible or potential human carcinogen. The International Agency for Research on Cancer considers styrene to be "probably carcinogenic to humans". [ 48 ] [ 49 ]
The neurotoxic [ 50 ] properties of styrene have also been studied; reported effects include effects on vision [ 51 ] (although a subsequent study was unable to reproduce them [ 52 ]) and on hearing functions. [ 53 ] [ 54 ] [ 55 ] [ 56 ] Studies on rats have yielded contradictory results, [ 54 ] [ 55 ] but epidemiologic studies have observed a synergistic interaction with noise in causing hearing difficulties. [ 57 ] [ 58 ] [ 59 ] [ 60 ] | https://en.wikipedia.org/wiki/Styrene |
Styrene-butadiene or styrene-butadiene rubber ( SBR ) describe families of synthetic rubbers derived from styrene and butadiene (the version developed by Goodyear is called Neolite [ 1 ] ). These materials have good abrasion resistance and good aging stability when protected by additives. In 2012, more than 5.4 million tonnes of SBR were processed worldwide. [ 2 ] About 50% of car tires are made from various types of SBR. The styrene/butadiene ratio influences the properties of the polymer: with high styrene content, the rubbers are harder and less rubbery. [ 3 ] SBR is not to be confused with the thermoplastic elastomer styrene-butadiene block copolymer , although the two are derived from the same monomers.
SBR is derived from two monomers , styrene and butadiene . The mixture of these two monomers is polymerized by two processes: from solution (S-SBR) or as an emulsion (E-SBR). [ 4 ] E-SBR is more widely used.
E-SBR produced by emulsion polymerization is initiated by free radicals . Reaction vessels are typically charged with the two monomers, a free radical generator, and a chain transfer agent such as an alkyl mercaptan . Radical initiators include potassium persulfate and hydroperoxides in combination with ferrous salts. Emulsifying agents include various soaps . By "capping" the growing organic radicals, mercaptans (e.g. dodecylthiol ) control the molecular weight of the product. Typically, polymerizations are allowed to proceed only to ca. 70%, a method called "short stopping". In this way, various additives can be removed from the polymer. [ 3 ]
Solution-SBR is produced by an anionic polymerization process. Polymerization is initiated by alkyl lithium compounds . Water and oxygen are strictly excluded. The process is homogeneous (all components are dissolved), which provides greater control over the process, allowing tailoring of the polymer. The organolithium compound adds to one of the monomers , generating a carbanion that then adds to another monomer, and so on. For tire manufacture, S-SBR is increasingly favored because it offers improved wet grip and reduced rolling resistance, which translate to greater safety and better fuel economy, respectively. [ 5 ]
The material was initially marketed with the brand name Buna S . Its name derives from Bu for butadiene, Na for sodium ( natrium in several languages including Latin, German, and Dutch), and S for styrene. [ 6 ] [ 7 ] [ 5 ] Buna S is an addition copolymer.
Styrene-butadiene is a commodity material which competes with natural rubber . The elastomer is used widely in pneumatic tires . This application mainly calls for E-SBR, although S-SBR is growing in popularity. Other uses include shoe heels and soles, gaskets , and even chewing gum . [ 3 ]
Latex (emulsion) SBR is extensively used in coated papers , being one of the cheapest resins to bind pigmented coatings. In 2010, more than half (54%) of all dry binders used consisted of SB-based latexes. [ 8 ] This amounted to roughly 1.2 million tonnes.
It is also used in building applications as a sealing and binding agent behind renders, as an alternative to PVA . SBR is more expensive, but offers better durability, reduced shrinkage and increased flexibility, as well as resistance to emulsification in damp conditions.
SBR is often used as part of cement-based substructural (basement) waterproofing systems, where as a liquid it is mixed with water to form the gauging solution for mixing the powdered tanking material into a slurry. SBR improves the bond strength, reduces the potential for shrinkage and adds an element of flexibility.
It is also used by speaker driver manufacturers as the material for low damping rubber surrounds.
Additionally, it is used in some rubber cutting boards .
SBR is also used as a binder in lithium-ion battery electrodes, in combination with carboxymethyl cellulose as a water-based alternative for, e.g. polyvinylidene fluoride . [ 9 ]
Styrene-butadiene rubber is also used in gasketed-plate heat exchangers. It is used at moderate temperatures, up to 85 °C (358 K), for aqueous systems. [ 10 ]
SBS filaments also exist for FDM 3D printing . [ 11 ]
SBR is a replacement for natural rubber . It was originally developed prior to World War II in Germany by chemist Walter Bock in 1929. [ 12 ] Industrial manufacture began during World War II, and was used extensively by the U.S. Synthetic Rubber Program to produce Government Rubber-Styrene (GR-S) to replace the Southeast Asian supply of natural rubber which, under Japanese occupation, was unavailable to Allied nations . [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Styrene-butadiene |
Styrofoam is a brand of closed-cell extruded polystyrene foam (XPS), manufactured to provide continuous building insulation board used in walls, roofs, and foundations as thermal insulation and as a water barrier. This material is light blue in color and is owned and manufactured by DuPont . DuPont also has produced a line of green and white foam shapes for use in crafts and floral arrangements. [ 1 ]
The term styrofoam has become a genericized trademark ; it is often used in the United States as a colloquial term to refer to expanded (not extruded) polystyrene foam ( EPS ). [ 2 ] Outside the United States, EPS is most commonly referred to simply as "polystyrene", with the term "styrofoam" used to describe all forms of extruded polystyrene, not just the DuPont brand itself. Polystyrene (EPS) is often used in food containers , coffee cups, and as cushioning material in packaging . [ 3 ] [ 1 ] Styrofoam is, however, a far less dense material than EPS and is more commonly suited to tasks such as thermal insulation . [ 2 ]
Additionally, it is moderately soluble in many organic solvents, cyanoacrylate , and the propellants and solvents of spray paint .
In the 1940s, researchers, originally at Dow 's Chemical Physics Lab, led by Ray McIntire , found a way to make foamed polystyrene . They rediscovered a method first used by Swedish inventor Carl Georg Munters , and obtained an exclusive license to Munters's patent in the United States. [ 4 ] Dow found ways to adapt Munters's method to make large quantities of extruded polystyrene as a closed cell foam that resists moisture. The patent on this adaptation was filed in 1947. [ 5 ]
Styrofoam has a variety of uses. Composed of 98% air, it is lightweight and buoyant. [ 6 ]
DuPont produces Styrofoam building materials, including varieties of building insulation sheathing and pipe insulation. The claimed R-value of Styrofoam insulation is approximately 5 °F⋅ft 2 ⋅h/BTU for a 1-inch-thick sheet. [ 7 ]
Styrofoam can be used under roads and other structures to prevent soil disturbances due to freezing and thawing. [ 8 ] [ 9 ]
DuPont also produces Styrofoam blocks and other shapes for use by florists and in craft products. [ 10 ] DuPont insulation Styrofoam has a distinctive blue color; Styrofoam for craft applications is available in white and green. [ 1 ]
The EPA and International Agency for Research on Cancer reported limited evidence that styrene is carcinogenic for humans and experimental animals , meaning that there is a positive association between exposure and cancer and that causality is credible, but that other explanations cannot be confidently excluded. [ 11 ] [ 12 ]
See also the expansive list of environmental issues of polystyrene , among which is its non-biodegradability. | https://en.wikipedia.org/wiki/Styrofoam |
The Stöber process is a chemical process used to prepare silica ( SiO 2 ) particles [ 1 ] of controllable and uniform size [ 2 ] for applications in materials science . It was pioneering [ 3 ] when it was reported by Werner Stöber and his team in 1968, [ 1 ] and remains today the most widely used wet chemistry synthetic approach to silica nanoparticles . [ 3 ] It is an example of a sol-gel process wherein a molecular precursor (typically tetraethylorthosilicate ) is first reacted with water in an alcoholic solution, the resulting molecules then joining together to build larger structures. The reaction produces silica particles with diameters ranging from 50 to 2000 nm , depending on conditions. The process has been actively researched since its discovery, including efforts to understand its kinetics and mechanism – a particle aggregation model was found to be a better fit for the experimental data [ 4 ] than the initially hypothesized LaMer model. [ 5 ] [ 6 ] The newly acquired understanding has enabled researchers to exert a high degree of control over particle size and distribution and to fine-tune the physical properties of the resulting material in order to suit intended applications.
In 1999 a two-stage modification was reported [ 7 ] that allowed the controlled formation of silica particles with small holes . [ 8 ] The process is undertaken at low pH in the presence of a surface-active molecule . The hydrolysis step is completed with the formation of a microemulsion [ 9 ] before sodium fluoride is added to nucleate the condensation process. The non-ionic surfactant is burned away to produce empty pores, increasing the surface area and altering the surface characteristics of the resulting particles, allowing for much greater control over the physical properties of the material. [ 7 ] Development work has also been undertaken for larger pore structures such as macroporous monoliths, [ 10 ] shell-core particles based on polystyrene , [ 11 ] cyclen , [ 12 ] or polyamines , [ 13 ] and carbon spheres. [ 14 ]
Silica produced using the Stöber process is an ideal material to serve as a model for studying colloid phenomena [ 15 ] because of the monodispersity (uniformity) of its particle sizes. [ 16 ] Nanoparticles prepared using the Stöber process have found applications including in the delivery of medications to within cellular structures [ 17 ] and in the preparation of biosensors . [ 18 ] Porous silica Stöber materials have applications in catalysis [ 19 ] and liquid chromatography [ 20 ] due to their high surface area and their uniform, tunable, and highly ordered pore structures. Highly effective thermal insulators known as aerogels can also be prepared using Stöber methods, [ 15 ] and Stöber techniques have been applied to prepare non-silica aerogel systems. [ 21 ] Applying supercritical drying techniques, a Stöber silica aerogel with a specific surface area of 700 m 2 ⋅g −1 and a density of 0.040 g⋅cm −3 can be prepared. [ 22 ] NASA has prepared silica aerogels with a Stöber-process approach for both the Mars Pathfinder and Stardust missions. [ 23 ]
The Stöber process is a sol-gel approach to preparing monodisperse (uniform) spherical silica ( SiO 2 ) materials that was developed by a team led by Werner Stöber and reported in 1968. [ 1 ] The process, an evolution and extension of research described in Gerhard Kolbe's 1956 PhD dissertation, [ 24 ] was an innovative discovery that still has wide applications more than 50 years later. [ 3 ] Silica precursor tetraethyl orthosilicate ( Si(OEt) 4 , TEOS) is hydrolyzed in alcohol (typically methanol or ethanol ) in the presence of ammonia as a catalyst : [ 1 ] [ 25 ]
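Si(OEt) 4 + H 2 O → Si(OEt) 3 OH + EtOH
(a representative hydrolysis step, given here in its standard textbook form, consistent with the species named in the next sentence)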
The reaction produces ethanol and a mixture of ethoxy silanols (such as Si(OEt) 3 OH , Si(OEt) 2 (OH) 2 , and even Si(OH) 4 ), which can then condense with either TEOS or another silanol with loss of alcohol or water: [ 25 ]
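Si(OEt) 3 OH + Si(OEt) 4 → (EtO) 3 Si−O−Si(OEt) 3 + EtOH
2 Si(OEt) 3 OH → (EtO) 3 Si−O−Si(OEt) 3 + H 2 O
(representative condensation steps in their standard forms, with loss of alcohol and of water respectively)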
Further hydrolysis of the ethoxy groups and subsequent condensation leads to crosslinking . It is a one-step process as the hydrolysis and condensation reactions occur together in a single reaction vessel. [ 1 ]
The process affords microscopic particles of colloidal silica with diameters ranging from 50 to 2000 nm ; particle sizes are fairly uniform with the distribution determined by the choice of conditions such as reactant concentrations, catalysts, and temperature. [ 2 ] Larger particles are formed when the concentrations of water and ammonia are raised, but with a consequent broadening of the particle-size distribution. [ 26 ] The initial concentration of TEOS is inversely proportional to the size of the resulting particles; thus, higher concentrations on average lead to smaller particles due to the greater number of nucleation sites, but with a greater spread of sizes. Particles with irregular shapes can result when the initial precursor concentration is too high. [ 26 ] The process is temperature-dependent, with cooling (and hence slower reaction rates ) leading to a monotonic increase in average particle size, but control over size distribution cannot be maintained at overly low temperatures. [ 2 ]
In 1999 Cédric Boissière and his team developed a two-step process whereby the hydrolysis at low pH (1 – 4) is completed before the condensation reaction is initiated by the addition of sodium fluoride (NaF). [ 7 ] The two-step procedure includes the addition of a nonionic surfactant template to ultimately produce mesoporous silica particles. [ 8 ] The main advantage of sequencing the hydrolysis and condensation reactions is the ability to ensure complete homogeneity of the surfactant and the precursor TEOS mixture. Consequently, the diameter and shape of the product particles as well as the pore size are determined solely by the reaction kinetics and the quantity of sodium fluoride introduced; higher relative fluoride levels produce a greater number of nucleation sites and hence smaller particles. [ 7 ] Decoupling the hydrolysis and condensation process affords a level of product control that is substantially superior to that afforded by the one-step Stöber process, with particle size controlled nearly completely by the sodium fluoride-to-TEOS ratio. [ 7 ]
The two-step Stöber process begins with a mixture of TEOS, water, alcohol, and a nonionic surfactant, to which hydrochloric acid is added to produce a microemulsion . [ 9 ] This solution is allowed to stand until hydrolysis is complete, much like in the one-step Stöber process but with the hydrochloric acid replacing the ammonia as catalyst. Sodium fluoride is added to the resulting homogeneous solution, initiating the condensation reaction by acting as nucleation seed. [ 7 ] The silica particles are collected by filtration and calcined to remove the nonionic surfactant template by combustion, resulting in the mesoporous silica product.
The selection of conditions for the process allows for control of pore sizes, particle diameter, and their distributions, as in the case of the one-step approach. [ 8 ] Porosity in the modified process is controllable through the introduction of a swelling agent, the choice of temperature, and the quantity of sodium fluoride catalyst added. A swelling agent (such as mesitylene ) causes increases in volume and hence in pore size, often by solvent absorption, but is limited by the solubility of the agent in the system. [ 9 ] Pore size varies directly with temperature, [ 7 ] within a window bounded below by the surfactant cloud point and above by the boiling point of water. Sodium fluoride concentration produces direct but non-linear changes in porosity, with the effect decreasing as the added fluoride concentration tends to an upper limit. [ 27 ]
The LaMer model for the kinetics of the formation of hydrosols [ 5 ] is widely applicable for production of monodisperse systems, [ 28 ] and it was originally hypothesized that the Stöber process followed this monomer addition model. [ 6 ] This model includes a rapid burst of nucleation forming all of the particle growth sites, then proceeds with hydrolysis as the rate-limiting step for condensation of triethoxysilanol monomers onto the nucleation sites. [ 29 ] The production of monodisperse particle sizes is attributed to monomer addition happening at a slower rate on larger particles as a consequence of diffusion-limited mass transfer of TEOS. [ 30 ] However, experimental evidence demonstrates that the concentration of hydrolyzed TEOS stays above that required for nucleation until late into the reaction, and the introduction of seeded growth nuclei does not match the kinetics of a monomer addition process. Consequently, the LaMer model has been rejected in favour of a kinetic model based around growth via particle aggregation . [ 4 ]
Under an aggregation-based model, nucleation sites are continually being generated and absorbed, and their merging leads to particle growth. [ 31 ] The generation of the nucleation sites and the interaction energy between merging particles dictate the overall kinetics of the reaction. [ 32 ] The nucleation rate J is governed by a rate equation in which k 1 and k 2 are rate constants based on the concentrations of H 2 O and NH 3 , and g s is a normalization factor based on the amount of silica precursor. [ 31 ] Adjusting the concentration ratios of these compounds directly influences the rate at which nucleation sites are produced. [ 31 ]
Merging of nucleation sites between particles is influenced by their interaction energies. The total interaction energy is dependent on three forces: electrostatic repulsion of like charges, van der Waals attraction between particles, and the effects of solvation . [ 32 ] These interaction energies (equations below) describe the particle aggregation process and demonstrate why the Stöber process produces particles that are uniform in size.
The van der Waals attraction forces are governed by the following equation: [ 32 ]
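{\displaystyle E_{A}=-{\frac {A_{H}}{6}}\left[{\frac {2a_{1}a_{2}}{R^{2}-(a_{1}+a_{2})^{2}}}+{\frac {2a_{1}a_{2}}{R^{2}-(a_{1}-a_{2})^{2}}}+\ln \left({\frac {R^{2}-(a_{1}+a_{2})^{2}}{R^{2}-(a_{1}-a_{2})^{2}}}\right)\right]}
(the classical Hamaker expression for two spheres, supplied here because it matches the symbols defined below; it is an assumed standard form, not quoted from the cited source)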
where A H is the Hamaker constant , R is the distance between the centers of the two particles, and a 1 , a 2 are the radii of the two particles. The electrostatic repulsion force is characterized by the following quantities: [ 32 ]
ε , the dielectric constant of the medium; k B , the Boltzmann constant ; e , the elementary charge ; T , the absolute temperature ; κ , the inverse Debye length for a 1:1 electrolyte; x , the (variable) distance between the particles; and φ 0 , the surface potential. The final component of the total interaction energy is the solvation repulsion, [ 32 ]
which is characterized by the pre-exponential factor A s (1.5 × 10 −3 J⋅m −2 ) and the decay length L (1 × 10 −9 m).
This model for controlled growth aggregation fits with experimental observations from small-angle X-ray scattering techniques [ 33 ] and accurately predicts particle sizing based on initial conditions. In addition, experimental data from techniques including microgravity analysis [ 34 ] and variable pH analysis [ 35 ] agree with predictions from the aggregate growth model.
Several different structural and compositional motifs can be prepared using the Stöber process by the addition of chemical compounds to the reaction mixture . These additives can interact with the silica through chemical and/or physical means either during or after the reaction, leading to substantial changes in morphology of the silica particles.
The one-step Stöber process may be modified to manufacture porous silica by adding a surfactant template to the reaction mixture and calcining the resulting particles. [ 36 ] Surfactants that have been used include cetrimonium bromide , [ 37 ] cetyltrimethylammonium chloride , [ 38 ] and glycerol . [ 39 ] The surfactant forms micelles , small near-spherical balls with a hydrophobic interior and a hydrophilic surface, around which the silica network grows, producing particles with surfactant- and solvent-filled channels. [ 40 ] Calcining the solid leads to removal of the surfactant and solvent molecules by combustion and/or evaporation, leaving mesopore voids throughout the structure. [ 36 ] [ 40 ]
Varying the surfactant concentration allows control over the diameter and volume of pores, and thus of the surface area of the product material. [ 37 ] Increasing the amount of surfactant leads to increases in total pore volume and hence particle surface area, but with individual pore diameters remaining unchanged. [ 38 ] Altering the pore diameter can be achieved by varying the amount of ammonia used relative to surfactant concentration; additional ammonia leads to pores with greater diameters, but with a corresponding decrease in total pore volume and particle surface area. [ 37 ] The time allowed for the reaction to proceed also influences porosity, with greater reaction times leading to increases in total pore volume and particle surface area. Longer reaction times also lead to increases in overall silica particle size and related decreases in the uniformity of the size distribution. [ 37 ]
The addition of polyethylene glycol (PEG) to the process causes silica particles to aggregate into a macroporous continuous block, allowing access to a monolithic morphology. [ 10 ] PEG polymers with allyl or silyl end groups with a molecular weight of greater than 2000 g⋅mol −1 are required. The Stöber process is initiated under neutral pH conditions, so that the PEG polymers will congregate around the outside of the growing particles, providing stabilization. Once the aggregates are sufficiently large, the PEG-stabilized particles will contact and irreversibly fuse together by "sticky aggregation" between the PEG chains. [ 10 ] This continues until complete flocculation of all the particles has occurred and the monolith has been formed, at which point the monolith may be calcined and the PEG removed, resulting in a macroporous silica monolith. Both particle size and sticky aggregation can be controlled by varying the molecular weight and concentration of PEG.
Adding certain compounds, including polystyrene , [ 11 ] cyclen , [ 12 ] and polyamines , [ 13 ] to the Stöber process allows the creation of shell-core silica particles. Two configurations of the shell-core morphology have been described. One is a silica core with an outer shell of an alternative material such as polystyrene. The second is a silica shell with a morphologically different core such as a polyamine.
The creation of the polystyrene/silica core composite particles begins with creation of the silica cores via the one-step Stöber process. Once formed, the particles are treated with oleic acid , which is proposed to react with the surface silanol groups. [ 11 ] Styrene is then polymerized around the fatty-acid-modified silica cores. Because the size distribution of the silica cores is narrow, the styrene polymerizes evenly around them, and the resulting composite particles are similarly sized. [ 11 ]
The silica shell particles created with cyclen and other polyamine ligands are created in a quite different fashion. The polyamines are added to the Stöber reaction in the initial steps along with the TEOS precursor. [ 13 ] These ligands interact with the TEOS precursor, increasing the speed of hydrolysis, and as a result they become incorporated into the resulting silica colloids . [ 12 ] The ligands have several nitrogen sites with lone pairs of electrons that interact with the hydrolyzed end groups of TEOS. Consequently, the silica condenses around the ligands, encapsulating them. Subsequently, the silica/ligand capsules stick together to create larger particles. Once all of the ligand has been consumed by the reaction, the remaining TEOS aggregates around the outside of the silica/ligand nanoparticles, creating a solid silica outer shell. [ 12 ] The resultant particle has a solid silica shell and an internal core of silica-wrapped ligands. The sizes of the particle cores and shells can be controlled through selection of the shape of the ligands along with the initial concentrations added to the reaction. [ 13 ]
A Stöber-like process has been used to produce monodisperse carbon spheres using resorcinol - formaldehyde resin in place of a silica precursor. [ 14 ] The modified process allows production of carbon spheres with smooth surfaces and a diameter ranging from 200 to 1000 nm. [ 14 ] Unlike the silica-based Stöber process, this reaction is completed at neutral pH, and ammonia has a role in stabilizing the individual carbon particles by preventing self-adhesion and aggregation, as well as acting as a catalyst. [ 41 ]
One major advantage of the Stöber process is that it can produce silica particles that are nearly monodisperse, [ 16 ] and thus provides an ideal model for use in studying colloidal phenomena. [ 15 ] It was a pioneering discovery when first published, allowing synthesis of spherical monodisperse silica particles of controlled sizes, and in 2015 remains the most widely used wet chemistry approach to silica nanoparticles. [ 3 ]
The process provides a convenient approach to preparing silica nanoparticles for applications including intracellular drug delivery [ 17 ] and biosensing . [ 18 ] The mesoporous silica nanoparticles prepared by modified Stöber processes have applications in the field of catalysis [ 19 ] and liquid chromatography . [ 20 ] In addition to monodispersity, these materials have very large surface areas as well as uniform, tunable, and highly ordered pore structures, [ 20 ] which makes mesoporous silica uniquely attractive for these applications.
Aerogels are highly porous ultralight materials in which the liquid component of a gel has been replaced with a gas , [ 44 ] and are noteworthy for being solids that are extremely effective thermal insulators [ 43 ] [ 45 ] with very low density . [ 46 ] Aerogels can be prepared in a variety of ways, and though most have been based on silica, [ 45 ] materials based on zirconia , titania , cellulose , polyurethane , and resorcinol-formaldehyde systems, amongst others, have been reported and explored. [ 47 ] The prime disadvantage of silica-based aerogels is their fragility; nonetheless, NASA has used them for insulation on Mars rovers [ 48 ] and the Mars Pathfinder , and they have been used commercially for insulating blankets and between glass panes for translucent day-lighting panels. [ 45 ] Particulate gels prepared by the Stöber process can be dehydrated rapidly to produce highly effective silica aerogels, as well as xerogels . [ 15 ] The key step is the use of supercritical fluid extraction to remove water from the gel while maintaining the gel structure, which is typically done with supercritical carbon dioxide , [ 45 ] as NASA does. [ 23 ] The resulting aerogels are very effective thermal insulators because of their high porosity with very small pores (in the nanometre range). Conduction of heat through the gas phase is poor, and as the structure greatly inhibits movement of air molecules through it, heat transfer through the material is poor; in a well-known demonstration, heat from a Bunsen burner transfers so poorly through a slab of aerogel that crayons resting on top of it do not melt. [ 43 ] Due to their low density, aerogels have also been used to capture interstellar dust particles, slowing them with minimal heating (to prevent heat-induced changes in the particles), as part of the Stardust mission . [ 23 ]
One method to produce a silica aerogel uses a modified Stöber process and supercritical drying . The product appears translucent with a blue tinge as a consequence of Rayleigh scattering ; when placed in front of a light source, the transmitted light becomes yellowish because the blue component has been scattered away. [ 22 ] This aerogel has a surface area of 700 m 2 ⋅g −1 and a density of 0.040 g⋅cm −3 ; [ 22 ] by way of contrast, the density of air is 0.0012 g⋅cm −3 (at 15 °C and 1 atm ).
Silica aerogels held 15 entries for materials properties in the Guinness World Records in 2011, including for best insulator and lowest-density solid, though aerographite took the latter title in 2012. [ 49 ]
Aerographene , with a density of just 13% of that of room temperature air and less dense than helium gas, became the lowest-density solid yet developed in 2013. [ 50 ] [ 51 ] Stöber-like methods have been applied in the preparation of aerogels in non-silica systems. [ 21 ] NASA has developed silica aerogels with a polymer coating to reinforce the structure, [ 48 ] producing a material roughly two orders of magnitude stronger for the same density, and also polymer aerogels, which are flexible and can be formed into a bendable thin film. [ 45 ]
Colloidal silica is widely used in metal casting.
The Stöber process may be used to produce spherical particles to grow lustrous opal mineraloids. [ a ] [ 52 ] [ 53 ] | https://en.wikipedia.org/wiki/Stöber_process |
In number theory , Størmer's theorem , named after Carl Størmer , gives a finite bound on the number of consecutive pairs of smooth numbers that exist, for a given degree of smoothness, and provides a method for finding all such pairs using Pell equations . It follows from the Thue–Siegel–Roth theorem that there are only a finite number of pairs of this type, but Størmer gave a procedure for finding them all. [ 1 ]
If one chooses a finite set P = { p 1 , p 2 , … p k } {\displaystyle P=\{p_{1},p_{2},\dots p_{k}\}} of prime numbers then the P -smooth numbers are defined as the set of integers
that can be generated by products of numbers in P . Then Størmer's theorem states that, for every choice of P , there are only finitely many pairs of consecutive P -smooth numbers. Further, it gives a method of finding them all using Pell equations.
Størmer's original procedure involves solving a set of roughly 3^k Pell equations , in each one finding only the smallest solution. A simplified version of the procedure, due to D. H. Lehmer , [ 2 ] is described below; it solves fewer equations but finds more solutions in each equation.
Let P be the given set of primes, and define a number to be P - smooth if all its prime factors belong to P . Assume p 1 = 2 ; otherwise there could be no consecutive P -smooth numbers, because all P -smooth numbers would be odd. Lehmer's method involves solving the Pell equation x^2 − 2qy^2 = 1
for each P -smooth square-free number q other than 2. Each such number q is generated as a product of a subset of P , so there are 2^k − 1 Pell equations to solve. For each such equation, let (x_i , y_i) be the generated solutions, for i in the range from 1 to max(3, (p_k + 1)/2) (inclusive), where p_k is the largest of the primes in P .
Then, as Lehmer shows, all consecutive pairs of P -smooth numbers are of the form ((x_i − 1)/2, (x_i + 1)/2) . Thus one can find all such pairs by testing the numbers of this form for P -smoothness.
Lehmer's paper furthermore shows [ 3 ] that applying a similar procedure to the equation x^2 − Dy^2 = 1,
where D ranges over all P -smooth square-free numbers other than 1, yields those pairs of P -smooth numbers separated by 2: the smooth pairs are then (x − 1, x + 1) , where ( x , y ) is one of the first max(3, (max( P ) + 1) / 2) solutions of that equation.
To find the ten consecutive pairs of {2,3,5}-smooth numbers (in music theory , giving the superparticular ratios for just tuning ) let P = {2,3,5}. There are seven P -smooth squarefree numbers q (omitting the eighth P -smooth squarefree number, 2): 1, 3, 5, 6, 10, 15, and 30, each of which leads to a Pell equation. The number of solutions per Pell equation required by Lehmer's method is max(3, (5 + 1)/2) = 3, so this method generates three solutions to each Pell equation, as follows.
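Lehmer's procedure is concrete enough to sketch in code. The following Python sketch (all names are illustrative, and the brute-force Pell solver is only adequate for such tiny coefficients) reproduces the ten pairs for P = {2, 3, 5}:

```python
from itertools import combinations
from math import isqrt, prod

P = [2, 3, 5]                              # p_k = 5
n_solutions = max(3, (P[-1] + 1) // 2)     # solutions needed per equation

def is_smooth(n):
    """True if every prime factor of n lies in P."""
    for p in P:
        while n % p == 0:
            n //= p
    return n == 1

def fundamental_pell(D):
    """Smallest solution (x, y) of x^2 - D*y^2 = 1, by brute force over y."""
    y = 1
    while True:
        x2 = D * y * y + 1
        x = isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

# squarefree P-smooth values of q other than 2: 1, 3, 5, 6, 10, 15, 30
qs = []
for r in range(len(P) + 1):
    for c in combinations(P, r):
        q = prod(c)
        if q != 2:
            qs.append(q)

pairs = set()
for q in qs:
    x1, y1 = fundamental_pell(2 * q)
    x, y = x1, y1
    for _ in range(n_solutions):
        a, b = (x - 1) // 2, (x + 1) // 2          # candidate consecutive pair
        if is_smooth(a) and is_smooth(b):
            pairs.add((a, b))
        x, y = x1 * x + 2 * q * y1 * y, x1 * y + y1 * x  # next Pell solution

print(sorted(pairs))
# [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (8, 9), (9, 10), (15, 16), (24, 25), (80, 81)]
```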
Størmer's original result can be used to show that the number of consecutive pairs of integers that are smooth with respect to a set of k primes is at most 3^k − 2^k. Lehmer's result produces a tighter bound for sets of small primes: (2^k − 1) × max(3, (p_k + 1)/2). [ 4 ]
The number of consecutive pairs of integers that are smooth with respect to the first k primes, and the largest integer occurring in any such pair, have been computed for each small k .
OEIS also lists the number of pairs of this type where the larger of the two integers in the pair is square (sequence A117582 in the OEIS ) or triangular (sequence A117583 in the OEIS ), as both types of pair arise frequently.
The size of the solutions can also be bounded in terms of M = max(3, (max( P ) + 1) / 2) and the product S of all elements of P : explicit bounds are known both in the case where x and x + 1 are required to be P -smooth [ 5 ] and in the case where the smooth pair is x ± 1 . [ 6 ]
Louis Mordell wrote about this result, saying that it "is very pretty, and there are many applications of it." [ 7 ]
Chein (1976) used Størmer's method to prove Catalan's conjecture on the nonexistence of consecutive perfect powers (other than 8,9) in the case where one of the two powers is a square .
Mabkhout (1993) proved that every number x 4 + 1, for x > 3, has a prime factor greater than or equal to 137. Størmer's theorem is an important part of his proof, in which he reduces the problem to the solution of 128 Pell equations.
Several authors have extended Størmer's work by providing methods for listing the solutions to more general diophantine equations , or by providing more general divisibility criteria for the solutions to Pell equations. [ 8 ]
Conrey, Holmstrom & McLaughlin (2013) describe a computational procedure that, empirically, finds many but not all of the consecutive pairs of smooth numbers described by Størmer's theorem, and is much faster than using Pell's equation to find all solutions.
In the musical practice of just intonation , musical intervals can be described as ratios between positive integers. More specifically, they can be described as ratios between members of the harmonic series . Any musical tone can be broken into its fundamental frequency and harmonic frequencies, which are integer multiples of the fundamental. This series is conjectured to be the basis of natural harmony and melody. The tonal complexity of ratios between these harmonics is said to increase with higher prime factors. To limit this tonal complexity, an interval is said to be n -limit when both its numerator and denominator are n -smooth . [ 9 ] Furthermore, superparticular ratios are very important in just tuning theory as they represent ratios between adjacent members of the harmonic series. [ 10 ]
Størmer's theorem allows all possible superparticular ratios in a given limit to be found. For example, in the 3-limit ( Pythagorean tuning ), the only possible superparticular ratios are 2/1 (the octave ), 3/2 (the perfect fifth ), 4/3 (the perfect fourth ), and 9/8 (the whole step ). That is, the only pairs of consecutive integers that have only powers of two and three in their prime factorizations are (1,2), (2,3), (3,4), and (8,9). If this is extended to the 5-limit, six additional superparticular ratios are available: 5/4 (the major third ), 6/5 (the minor third ), 10/9 (the minor tone ), 16/15 (the minor second ), 25/24 (the minor semitone ), and 81/80 (the syntonic comma ). All are musically meaningful. | https://en.wikipedia.org/wiki/Størmer's_theorem |
Sub-Doppler cooling is a class of laser cooling techniques that reduce the temperature of atoms and molecules below the Doppler cooling limit . In experimental implementations, Doppler cooling is limited by the natural linewidth of the optical transition used for cooling. [ 1 ] Regardless of the transition used, however, Doppler cooling processes have an intrinsic cooling limit that is characterized by the momentum recoil from the emission of a photon by the particle. This is called the recoil temperature and is usually far below the linewidth-based limit mentioned above. Laser cooling methods that go beyond the two-level approximation of the atom can reach temperatures below this limit.
Optical pumping between the sublevels that make up an atomic state introduces a new mechanism for achieving ultra-low temperatures. The essential feature of sub-Doppler cooling is the non-adiabatic response of the moving atoms to the light field. For a spatially dependent light field, the orientation of moving atoms is adjusted by optical pumping to fit the conditions of the light field. But the moving atoms do not instantly adjust to the light field as they move: their orientation always lags behind the orientation that would exist for stationary atoms, and this lag determines the velocity-dependent differential absorption and hence the cooling. With this cooling process, lower temperatures can be obtained. [ 2 ]
Various methods have been used independently or combined in an experimental sequence to achieve sub-Doppler cooling. One method to produce spatially dependent optical pumping is polarization gradient cooling , where the superposition of two counter-propagating laser beams of orthogonal polarizations lead to a light field with polarization varying on the wavelength scale. A specific mechanism within polarization gradient cooling is Sisyphus cooling , where atoms climb "potential hills" created by the interaction of their internal energy states with spatially varying light fields. The light field in optical molasses in three-dimension also has polarization gradient.
Other methods of sub-Doppler cooling include evaporative cooling , free space Raman cooling , Raman side-band cooling , resolved sideband cooling , electromagnetically induced transparency (EIT) cooling, and the use of a dark magneto-optical trap. These techniques can be used depending on the minimum temperature needed and specifications of the individual setup. For example, an optical molasses time-of-flight technique was used to cool sodium (Doppler limit T D ≈ 240 μ K {\displaystyle T_{D}\approx 240\ \mu K} ) to 43 ± 20 μ K {\textstyle 43\pm 20\ \mu K} . [ 3 ]
Motivations for sub-Doppler cooling include cooling to the motional ground state, a requirement for maintaining fidelity during many quantum computation operations.
A magneto-optical trap (MOT) is commonly used for cooling and trapping a substance by Doppler cooling. In the process of Doppler cooling, red-detuned light is absorbed by atoms from one particular direction and re-emitted in a random direction. If the atoms have more than one hyperfine ground level, their electrons can decay to an alternative ground state; once all the atoms have accumulated in ground states other than the one addressed by the cooling light, the system cannot cool the atoms further.
To solve this problem, re-pumping light is directed at the system to return the atoms to the cooling cycle and restart the Doppler cooling process. The re-pumping light, however, induces additional fluorescence from the atoms, which can be reabsorbed by other atoms and acts as an effective repulsive force; this raises the attainable Doppler limit. When there is a dark spot or dark lines in the profile of the re-pumping light, the atoms in the middle of the atomic cloud are not excited by it, which reduces this repulsive force.
This can help to cool the atoms to a lower temperature than the typical Doppler cooling limit. This is called a dark magneto-optical trap (DMOT). [ 4 ]
The Doppler cooling limit is set by balancing the cooling against the heating from random momentum kicks. Naively applying the results from the Fokker-Planck equation to the sub-Doppler processes would lead to an arbitrarily low final temperature as the damping coefficient becomes arbitrarily large, so a few more considerations are needed. For instance, when a photon is scattered, the momentum change of the atom is assumed to be small relative to its overall momentum, but when the atom slows down to around the region of v r = ℏ k M {\textstyle v_{r}={\frac {\hbar k}{M}}} , the momentum change becomes significant. Thus at low velocities, spontaneous emission would leave the atom with a residual momentum around ℏ k {\textstyle \hbar k} , which sets a minimum velocity scale. The velocity distribution around v r {\textstyle v_{r}} cannot be well described by the Fokker-Planck equation, and this sets an intuitive lower limit on the temperature. [ 2 ]
Furthermore, polarization gradient cooling depends on the ability to localize atoms to a scale of ∼ λ / 2 π {\textstyle \sim \lambda /{2\pi }} , where λ {\textstyle \lambda } is the wavelength of the light. Due to the uncertainty principle, this localization also imposes a minimum momentum spread ∼ ℏ k {\textstyle \sim \hbar k} , which also leads to a limit on how much the atoms can be cooled.
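For a sense of scale, here is a minimal Python sketch of the recoil velocity v_r = ħk/M from the formula above and the associated recoil temperature; the sodium D2 line at 589 nm is an assumed example, not a value taken from the text:

```python
from math import pi

hbar = 1.054571817e-34    # reduced Planck constant, J*s
k_B  = 1.380649e-23       # Boltzmann constant, J/K
amu  = 1.66053907e-27     # atomic mass unit, kg

# Assumed example: sodium-23 cooled on the D2 line
wavelength = 589e-9       # m
M = 23 * amu              # kg

k = 2 * pi / wavelength              # photon wavenumber, 1/m
v_r = hbar * k / M                   # recoil velocity
T_r = (hbar * k) ** 2 / (M * k_B)    # recoil temperature, k_B*T_r = (hbar*k)^2 / M

print(f"v_r ≈ {v_r * 100:.2f} cm/s")    # ≈ 2.9 cm/s
print(f"T_r ≈ {T_r * 1e6:.1f} µK")      # ≈ 2.4 µK, far below T_D ≈ 240 µK
```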
These predictions were tested with analytical and numerical calculations for a one-dimensional polarization-gradient molasses. [ 5 ] It was shown that in the limit of large detuning, the velocity distribution depends only on a dimensionless parameter, the light shift of the ground state divided by the recoil energy. The minimum kinetic energy was found to be on the order of 40 times the recoil energy. | https://en.wikipedia.org/wiki/Sub-Doppler_cooling |
The term sub-Neptune can refer to a planet with a smaller radius than Neptune, even though it may have a larger mass, [ 1 ] or to a planet with a smaller mass than Neptune, even though it may have a larger radius (like a super-puff ); both meanings are sometimes used in the same publication. [ 2 ]
Neptune-like planets are considerably rarer than sub-Neptune sized planets, despite being only slightly bigger. [ 3 ] [ 4 ] This "radius cliff" separates sub-Neptunes (radii < 3 Earth radii) from Neptunes (radii > 3 Earth radii). [ 3 ] The radius cliff is thought to arise because, during gas accretion, the atmospheres of planets of that size reach the pressures required to force the hydrogen into the magma ocean, stalling radius growth. Then, once the magma ocean saturates, radius growth can continue. However, planets that have enough gas to reach saturation are much rarer, because they require much more gas. [ 3 ]
On 29 November 2023, astronomers reported the discovery of six sub-Neptune exoplanets orbiting the star HD 110067 , with radii ranging from 1.94 R 🜨 to 2.85 R 🜨 . [ 5 ] [ 6 ] [ 7 ]
| https://en.wikipedia.org/wiki/Sub-Neptune |
In signal processing , sub-band coding ( SBC ) is any form of transform coding that breaks a signal into a number of different frequency bands , typically by using a filter bank or a fast Fourier transform , and encodes each one independently. This decomposition is often the first step in data compression for audio and video signals.
SBC is the core technique used in many popular lossy audio compression algorithms including MP3 .
The simplest way to digitally encode audio signals is pulse-code modulation (PCM), which is used on audio CDs , DAT recordings, and so on. Digitization transforms continuous signals into discrete ones by sampling a signal's amplitude at uniform intervals and rounding to the nearest value representable with the available number of bits . This process is fundamentally inexact, and involves two errors: discretization error , from sampling at intervals, and quantization error , from rounding.
The more bits used to represent each sample, the finer the granularity in the digital representation, and thus the smaller the quantization error. Such quantization errors may be thought of as a type of noise, because they are effectively the difference between the original source and its binary representation. With PCM, the audible effects of these errors can be mitigated with dither and by using enough bits to ensure that the noise is low enough to be masked either by the signal itself or by other sources of noise. A high quality signal is possible, but at the cost of a high bitrate (e.g., over 700 kbit/s for one channel of CD audio). In effect, many bits are wasted in encoding masked portions of the signal because PCM makes no assumptions about how the human ear hears.
Coding techniques reduce bitrate by exploiting known characteristics of the auditory system. A classic method is nonlinear PCM, such as the μ-law algorithm . Small signals are digitized with finer granularity than are large ones; the effect is to add noise that is proportional to the signal strength. Sun's Au file format for sound is a popular example of mu-law encoding. Using 8-bit mu-law encoding would cut the per-channel bitrate of CD audio down to about 350 kbit/s, half the standard rate. Because this simple method only minimally exploits masking effects, it produces results that are often audibly inferior compared to the original.
The utility of SBC is perhaps best illustrated with a specific example. When used for audio compression, SBC exploits auditory masking in the auditory system . Human ears are normally sensitive to a wide range of frequencies, but when a sufficiently loud signal is present at one frequency, the ear will not hear weaker signals at nearby frequencies. We say that the louder signal masks the softer ones.
The basic idea of SBC is to enable a data reduction by discarding information about frequencies which are masked. The result differs from the original signal, but if the discarded information is chosen carefully, the difference will not be noticeable, or more importantly, objectionable.
First, a digital filter bank divides the input signal spectrum into some number (e.g., 32) of subbands. A psychoacoustic model looks at the energy in each of these subbands, as well as in the original signal, and computes masking thresholds using psychoacoustic information. Each subband's samples are then quantized and encoded so as to keep the quantization noise below the dynamically computed masking threshold. The final step is to format all these quantized samples into groups of data called frames, to facilitate eventual playback by a decoder.
Decoding is much easier than encoding, since no psychoacoustic model is involved. The frames are unpacked, subband samples are decoded, and a frequency-time mapping reconstructs an output audio signal.
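The round trip described in the last two paragraphs can be illustrated with a deliberately crude Python sketch; the uniform FFT-based band split and the energy-based bit allocation below stand in for a real filter bank and psychoacoustic model, and all names are illustrative:

```python
import numpy as np

FRAME, N_BANDS = 512, 32

def encode(signal):
    """Toy sub-band coder: FFT band split plus per-band uniform quantization."""
    frames = []
    for start in range(0, len(signal) - FRAME + 1, FRAME):
        spectrum = np.fft.rfft(signal[start:start + FRAME])
        coded = []
        for band in np.array_split(spectrum, N_BANDS):
            peak = float(np.max(np.abs(band))) or 1.0
            bits = 8 if peak > 0.1 * FRAME else 3     # crude "masking" stand-in
            step = 2 * peak / 2 ** bits
            coded.append((step, np.round(band / step)))  # quantized samples
        frames.append(coded)
    return frames

def decode(frames):
    """Unpack frames, rescale subband samples, map back to the time domain."""
    out = [np.fft.irfft(np.concatenate([s * q for s, q in f]), n=FRAME)
           for f in frames]
    return np.concatenate(out)

t = np.arange(4096) / 8000.0
x = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)
print("max reconstruction error:", np.max(np.abs(x - decode(encode(x)))))
```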
Beginning in the late 1980s, a standardization body, the Moving Picture Experts Group (MPEG), developed standards for coding of both audio and video. Subband coding resides at the heart of the popular MP3 format (more properly known as MPEG-1 Audio Layer III ), for example.
Sub-band coding is used in the G.722 codec, which uses sub-band adaptive differential pulse code modulation (SB- ADPCM ) at a bit rate of 64 kbit/s. In the SB-ADPCM technique, the frequency band is split into two sub-bands (higher and lower) and the signals in each sub-band are encoded using ADPCM. | https://en.wikipedia.org/wiki/Sub-band_coding |
In analytical chemistry , sub-sampling is a procedure by which a small, representative sample is taken from a larger sample. Good sub-sampling technique becomes important when the large sample is not homogeneous .
Coning and quartering is a method used by analytical chemists to reduce the sample size of a powder without creating a systematic bias. The technique involves pouring the sample so that it takes on a conical shape, and then flattening it out into a cake. The cake is then divided into quarters; the two quarters which sit opposite one another are discarded, while the other two are combined and constitute the reduced sample. The same process is continued until an appropriate sample size remains. Analyses are made with respect to the sample left behind.
A riffle box is a box containing a number (between 3 and 12) of "chutes" - slotted paths through which particles of the sample may slide. The sample is dropped into the top, and the box produces two equally divided subsamples. Riffle boxes are commonly used in mining to reduce the size of crushed rock samples prior to assaying . | https://en.wikipedia.org/wiki/Sub-sampling_(chemistry) |
SubEthaEdit is a collaborative real-time editor designed for Mac OS X . The name comes from the Sub-Etha communication network in The Hitchhiker's Guide to the Galaxy series. [ 2 ]
SubEthaEdit was first released under the name Hydra in early 2003 but, for legal reasons, the name was changed to SubEthaEdit in late 2004. [ 3 ]
The first version of Hydra was built in just a few months with the intent of winning an Apple Design Award , which it did at Apple's Worldwide Developers Conference 2003. [ 4 ] In 2007, TheCodingMonkeys licensed the "Subetha Engine" to Panic for use in Coda . [ 5 ]
In June 2014, SubEthaEdit 4 was released, distributed exclusively in the Mac App Store . With version 5 released in 2019, the application became free and open source, under the MIT license. [ 6 ]
Apart from the usual text-editing capabilities, collaborative editing is one of SubEthaEdit's key features. The collaboration is document -based, non-locking, and non-blocking. [ 7 ] Anyone participating in the collaborative edit can type in the document anywhere at any time. Using Bonjour (formerly Rendezvous) and BEEP , SubEthaEdit works without any configuration on the LAN but can also coordinate collaborative editing over the Internet . SubEthaEdit can be used for distributed pair programming and collaborative note-taking in conferences.
Other SubEthaEdit features include: | https://en.wikipedia.org/wiki/SubEthaEdit |
In computing, a SubOS may mean several related concepts:
It can also make processes easier and add details to the main operating system.
| https://en.wikipedia.org/wiki/SubOS |
In mathematics , subadditivity is a property of a function that states, roughly, that evaluating the function for the sum of two elements of the domain always returns something less than or equal to the sum of the function's values at each element. There are numerous examples of subadditive functions in various areas of mathematics, particularly norms and square roots . Additive maps are special cases of subadditive functions.
A subadditive function is a function f : A → B {\displaystyle f\colon A\to B} , having a domain A and an ordered codomain B that are both closed under addition, with the following property: ∀ x , y ∈ A , f ( x + y ) ≤ f ( x ) + f ( y ) . {\displaystyle \forall x,y\in A,f(x+y)\leq f(x)+f(y).}
An example is the square root function, having the non-negative real numbers as domain and codomain:
since ∀ x , y ≥ 0 {\displaystyle \forall x,y\geq 0} we have: x + y ≤ x + y . {\displaystyle {\sqrt {x+y}}\leq {\sqrt {x}}+{\sqrt {y}}.}
A sequence { a n } n ≥ 1 {\displaystyle \left\{a_{n}\right\}_{n\geq 1}} is called subadditive if it satisfies the inequality a n + m ≤ a n + a m {\displaystyle a_{n+m}\leq a_{n}+a_{m}} for all m and n . This is a special case of a subadditive function, with the sequence interpreted as a function on the set of natural numbers.
Note that while a concave sequence is subadditive, the converse is false. For example, arbitrarily assign a 1 , a 2 , . . . {\displaystyle a_{1},a_{2},...} with values in [ 0.5 , 1 ] {\displaystyle [0.5,1]} ; then the sequence is subadditive but not concave.
A useful result pertaining to subadditive sequences is the following lemma due to Michael Fekete . [ 1 ]
Fekete's Subadditive Lemma — For every subadditive sequence { a n } n = 1 ∞ {\displaystyle {\left\{a_{n}\right\}}_{n=1}^{\infty }} , the limit lim n → ∞ a n n {\displaystyle \displaystyle \lim _{n\to \infty }{\frac {a_{n}}{n}}} is equal to the infimum inf a n n {\displaystyle \inf {\frac {a_{n}}{n}}} . (The limit may be − ∞ {\displaystyle -\infty } .)
Let s ∗ := inf n a n n {\displaystyle s^{*}:=\inf _{n}{\frac {a_{n}}{n}}} .
By definition, lim inf n a n n ≥ s ∗ {\displaystyle \liminf _{n}{\frac {a_{n}}{n}}\geq s^{*}} . So it suffices to show lim sup n a n n ≤ s ∗ {\displaystyle \limsup _{n}{\frac {a_{n}}{n}}\leq s^{*}} .
If not, then there exists a subsequence ( a n k ) k {\displaystyle (a_{n_{k}})_{k}} , and an ϵ > 0 {\displaystyle \epsilon >0} , such that a n k n k > s ∗ + ϵ {\displaystyle {\frac {a_{n_{k}}}{n_{k}}}>s^{*}+\epsilon } for all k {\displaystyle k} .
Since s ∗ := inf n a n n {\displaystyle s^{*}:=\inf _{n}{\frac {a_{n}}{n}}} , there exists an a m {\displaystyle a_{m}} such that a m m < s ∗ + ϵ {\displaystyle {\frac {a_{m}}{m}}<s^{*}+\epsilon } .
By the infinite pigeonhole principle , there exists a sub-subsequence of ( a n k ) k {\displaystyle (a_{n_{k}})_{k}} , whose indices all belong to the same residue class modulo m {\displaystyle m} , and so they advance by multiples of m {\displaystyle m} . This sequence, continued for long enough, would be forced by subadditivity to dip below the s ∗ + ϵ {\displaystyle s^{*}+\epsilon } slope line, a contradiction.
In more detail, by subadditivity, we have
a n 2 ≤ a n 1 + a m ( n 2 − n 1 ) / m a n 3 ≤ a n 2 + a m ( n 3 − n 2 ) / m ≤ a n 1 + a m ( n 3 − n 1 ) / m ⋯ ⋯ a n k ≤ a n 1 + a m ( n k − n 1 ) / m {\textstyle {\begin{aligned}a_{n_{2}}&\leq a_{n_{1}}+a_{m}(n_{2}-n_{1})/m\\a_{n_{3}}&\leq a_{n_{2}}+a_{m}(n_{3}-n_{2})/m\leq a_{n_{1}}+a_{m}(n_{3}-n_{1})/m\\\cdots &\cdots \\a_{n_{k}}&\leq a_{n_{1}}+a_{m}(n_{k}-n_{1})/m\end{aligned}}}
which implies lim sup k a n k / n k ≤ a m / m < s ∗ + ϵ {\displaystyle \limsup _{k}a_{n_{k}}/n_{k}\leq a_{m}/m<s^{*}+\epsilon } , a contradiction.
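A quick numerical illustration of the lemma (a minimal sketch; a_n = ⌈n√2⌉ is just one convenient subadditive sequence, since ⌈u + v⌉ ≤ ⌈u⌉ + ⌈v⌉):

```python
from math import ceil, sqrt

def a(n):
    return ceil(n * sqrt(2))   # subadditive: a(n+m) <= a(n) + a(m)

for n in (1, 10, 100, 1000, 10000):
    print(n, a(n) / n)
# a(n)/n tends to inf_n a(n)/n = sqrt(2) ≈ 1.41421, as Fekete's lemma predicts
```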
The analogue of Fekete's lemma holds for superadditive sequences as well, that is: a n + m ≥ a n + a m . {\displaystyle a_{n+m}\geq a_{n}+a_{m}.} (The limit then may be positive infinity: consider the sequence a n = log n ! {\displaystyle a_{n}=\log n!} .)
There are extensions of Fekete's lemma that do not require the inequality a n + m ≤ a n + a m {\displaystyle a_{n+m}\leq a_{n}+a_{m}} to hold for all m and n , but only for m and n such that 1 2 ≤ m n ≤ 2. {\textstyle {\frac {1}{2}}\leq {\frac {m}{n}}\leq 2.}
Continue the proof as before, until we have just used the infinite pigeonhole principle.
Consider the sequence a m , a 2 m , a 3 m , . . . {\displaystyle a_{m},a_{2m},a_{3m},...} . Since 2 m / m = 2 {\displaystyle 2m/m=2} , we have a 2 m ≤ 2 a m {\displaystyle a_{2m}\leq 2a_{m}} . Similarly, we have a 3 m ≤ a 2 m + a m ≤ 3 a m {\displaystyle a_{3m}\leq a_{2m}+a_{m}\leq 3a_{m}} , etc.
By the assumption, for any s , t ∈ N {\displaystyle s,t\in \mathbb {N} } , we can use subadditivity on them if
ln ( s + t ) ∈ [ ln ( 1.5 s ) , ln ( 3 s ) ] = ln s + [ ln 1.5 , ln 3 ] {\displaystyle \ln(s+t)\in [\ln(1.5s),\ln(3s)]=\ln s+[\ln 1.5,\ln 3]}
If we were dealing with continuous variables, then we can use subadditivity to go from a n k {\displaystyle a_{n_{k}}} to a n k + [ ln 1.5 , ln 3 ] {\displaystyle a_{n_{k}}+[\ln 1.5,\ln 3]} , then to a n k + ln 1.5 + [ ln 1.5 , ln 3 ] {\displaystyle a_{n_{k}}+\ln 1.5+[\ln 1.5,\ln 3]} , and so on, which covers the entire interval a n k + [ ln 1.5 , + ∞ ) {\displaystyle a_{n_{k}}+[\ln 1.5,+\infty )} .
Though we don't have continuous variables, we can still cover enough integers to complete the proof. Let n k {\displaystyle n_{k}} be large enough, such that
ln ( 2 ) > ln ( 1.5 ) + ln ( 1.5 n k + m 1.5 n k ) {\displaystyle \ln(2)>\ln(1.5)+\ln \left({\frac {1.5n_{k}+m}{1.5n_{k}}}\right)} then let n ′ {\displaystyle n'} be the smallest number in the intersection ( n k + m Z ) ∩ [ 1.5 n k , 3 n k ] {\displaystyle (n_{k}+m\mathbb {Z} )\cap [1.5n_{k},3n_{k}]} . By the assumption on n k {\displaystyle n_{k}} , it's easy to see (draw a picture) that the intervals [ 1.5 n k , 3 n k ] {\displaystyle [1.5n_{k},3n_{k}]} and [ 1.5 n ′ , 3 n ′ ] {\displaystyle [1.5n',3n']} touch in the middle. Thus, by repeating this process, we cover the entirety of ( n k + m Z ) ∩ [ 1.5 n k , ∞ ) {\displaystyle (n_{k}+m\mathbb {Z} )\cap [1.5n_{k},\infty )} .
With that, all a n k , a n k + 1 , . . . {\displaystyle a_{n_{k}},a_{n_{k+1}},...} are forced down as in the previous proof.
Moreover, the condition a n + m ≤ a n + a m {\displaystyle a_{n+m}\leq a_{n}+a_{m}} may be weakened as follows: a n + m ≤ a n + a m + ϕ ( n + m ) {\displaystyle a_{n+m}\leq a_{n}+a_{m}+\phi (n+m)} provided that ϕ {\displaystyle \phi } is an increasing function such that the integral ∫ ϕ ( t ) t − 2 d t {\textstyle \int \phi (t)t^{-2}\,dt} converges (near infinity). [ 2 ]
There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present. [ 3 ] [ 4 ]
Besides, analogues of Fekete's lemma have been proved for subadditive real maps (with additional assumptions) from finite subsets of an amenable group, [ 5 ] [ 6 ] [ 7 ] and further, of a cancellative left-amenable semigroup. [ 8 ]
Theorem: [ 9 ] — For every measurable subadditive function f : ( 0 , ∞ ) → R , {\displaystyle f:(0,\infty )\to \mathbb {R} ,} the limit lim t → ∞ f ( t ) t {\displaystyle \lim _{t\to \infty }{\frac {f(t)}{t}}} exists and is equal to inf t > 0 f ( t ) t . {\displaystyle \inf _{t>0}{\frac {f(t)}{t}}.} (The limit may be − ∞ . {\displaystyle -\infty .} )
If f is a subadditive function, and if 0 is in its domain, then f (0) ≥ 0. To see this, rearrange the defining inequality: f ( x ) ≥ f ( x + y ) − f ( y ) {\displaystyle f(x)\geq f(x+y)-f(y)} . Setting x = 0 gives f ( 0 ) ≥ f ( 0 + y ) − f ( y ) = 0 {\displaystyle f(0)\geq f(0+y)-f(y)=0} .
A concave function f : [ 0 , ∞ ) → R {\displaystyle f:[0,\infty )\to \mathbb {R} } with f ( 0 ) ≥ 0 {\displaystyle f(0)\geq 0} is also subadditive.
To see this, one first observes that concavity together with f ( 0 ) ≥ 0 {\displaystyle f(0)\geq 0} gives f ( x ) ≥ y x + y f ( 0 ) + x x + y f ( x + y ) {\displaystyle f(x)\geq \textstyle {\frac {y}{x+y}}f(0)+\textstyle {\frac {x}{x+y}}f(x+y)} , since x is the convex combination of 0 and x + y with weights y /( x + y ) and x /( x + y ).
Summing this bound for f ( x ) {\displaystyle f(x)} with the corresponding bound for f ( y ) {\displaystyle f(y)} gives f ( x ) + f ( y ) ≥ f ( 0 ) + f ( x + y ) ≥ f ( x + y ) {\displaystyle f(x)+f(y)\geq f(0)+f(x+y)\geq f(x+y)} , which verifies that f is subadditive. [ 10 ]
The negative of a subadditive function is superadditive .
Entropy plays a fundamental role in information theory and statistical physics , as well as in quantum mechanics in a generalized formulation due to von Neumann .
Entropy always appears as a subadditive quantity in all of its formulations, meaning the entropy of a supersystem or a set union of random variables is always less than or equal to the sum of the entropies of its individual components.
Additionally, entropy in physics satisfies several more strict inequalities such as the Strong Subadditivity of Entropy in classical statistical mechanics and its quantum analog .
Subadditivity is an essential property of some particular cost functions . It is, generally, a necessary and sufficient condition for the verification of a natural monopoly . It implies that production by a single firm is socially less expensive (in terms of average costs) than production of the same total quantity split among several firms.
Economies of scale are represented by subadditive average cost functions.
Except in the case of complementary goods, the price of goods (as a function of quantity) must be subadditive. Otherwise, if buying two items separately were cheaper than buying them bundled together, nobody would ever buy the bundle, effectively causing the price of the bundle to "become" the sum of the prices of the two separate items. This shows that subadditive pricing is not a sufficient condition for a natural monopoly, since the unit of exchange may not be the actual cost of an item. An analogous dispute over the correct unit of cost is familiar from the political arena, where a minority may assert that the loss of some particular freedom at some particular level of government means that many governments are better, whereas the majority asserts that there is some other correct unit of cost.
Subadditivity is one of the desirable properties of coherent risk measures in risk management . [ 11 ] The economic intuition behind risk measure subadditivity is that a portfolio risk exposure should, at worst, simply equal the sum of the risk exposures of the individual positions that compose the portfolio. The lack of subadditivity is one of the main critiques of VaR models which do not rely on the assumption of normality of risk factors. The Gaussian VaR ensures subadditivity: for example, the Gaussian VaR of a two unitary long positions portfolio V {\displaystyle V} at the confidence level 1 − p {\displaystyle 1-p} is, assuming that the mean portfolio value variation is zero and the VaR is defined as a negative loss, VaR p ≡ z p σ Δ V = z p σ x 2 + σ y 2 + 2 ρ x y σ x σ y {\displaystyle {\text{VaR}}_{p}\equiv z_{p}\sigma _{\Delta V}=z_{p}{\sqrt {\sigma _{x}^{2}+\sigma _{y}^{2}+2\rho _{xy}\sigma _{x}\sigma _{y}}}} where z p {\displaystyle z_{p}} is the inverse of the normal cumulative distribution function at probability level p {\displaystyle p} , σ x 2 , σ y 2 {\displaystyle \sigma _{x}^{2},\sigma _{y}^{2}} are the individual positions returns variances and ρ x y {\displaystyle \rho _{xy}} is the linear correlation measure between the two individual positions returns. Since variance is always positive, σ x 2 + σ y 2 + 2 ρ x y σ x σ y ≤ σ x + σ y {\displaystyle {\sqrt {\sigma _{x}^{2}+\sigma _{y}^{2}+2\rho _{xy}\sigma _{x}\sigma _{y}}}\leq \sigma _{x}+\sigma _{y}} Thus the Gaussian VaR is subadditive for any value of ρ x y ∈ [ − 1 , 1 ] {\displaystyle \rho _{xy}\in [-1,1]} and, in particular, it equals the sum of the individual risk exposures when ρ x y = 1 {\displaystyle \rho _{xy}=1} which is the case of no diversification effects on portfolio risk.
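A minimal numerical check of this subadditivity, following the formula above (the volatilities and correlation below are invented for illustration, z_p is taken positive via the upper quantile, and SciPy is assumed available):

```python
from math import sqrt
from scipy.stats import norm

p = 0.01                                   # 1% tail probability
z_p = norm.ppf(1 - p)                      # ≈ 2.326

sigma_x, sigma_y, rho = 0.05, 0.08, 0.3    # assumed position volatilities / correlation

var_x = z_p * sigma_x
var_y = z_p * sigma_y
var_pf = z_p * sqrt(sigma_x**2 + sigma_y**2 + 2 * rho * sigma_x * sigma_y)

print(var_pf <= var_x + var_y)             # True for any rho in [-1, 1]
print(round(var_pf, 4), round(var_x + var_y, 4))
```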
Subadditivity occurs in the thermodynamic properties of non- ideal solutions and mixtures like the excess molar volume and heat of mixing or excess enthalpy.
A factorial language L {\displaystyle L} is one where if a word is in L {\displaystyle L} , then all factors of that word are also in L {\displaystyle L} . In combinatorics on words , a common problem is to determine the number A ( n ) {\displaystyle A(n)} of length- n {\displaystyle n} words in a factorial language. Clearly A ( m + n ) ≤ A ( m ) A ( n ) {\displaystyle A(m+n)\leq A(m)A(n)} , so log A ( n ) {\displaystyle \log A(n)} is subadditive, and hence Fekete's lemma can be used to estimate the growth of A ( n ) {\displaystyle A(n)} . [ 12 ]
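As a minimal illustration of Fekete's lemma at work, the Python sketch below counts, by brute force, the words of a hypothetical factorial language (binary words avoiding the factor "11"); for this language log A(n)/n converges to the logarithm of the golden ratio:

```python
import math
from itertools import product

def count_words(n, alphabet="01", forbidden=("11",)):
    """Count length-n words over the alphabet containing no forbidden factor."""
    return sum(
        1 for w in product(alphabet, repeat=n)
        if not any(f in "".join(w) for f in forbidden)
    )

# log A(n) is subadditive, so by Fekete's lemma log A(n)/n converges
# to its infimum; here A(n) is a Fibonacci number.
for n in range(2, 16, 2):
    a = count_words(n)
    print(n, a, math.log(a) / n)

print("limit (log of golden ratio):", math.log((1 + 5**0.5) / 2))  # ~0.4812
```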
For every k ≥ 1 {\displaystyle k\geq 1} , sample two strings of length n {\displaystyle n} uniformly at random on the alphabet 1 , 2 , . . . , k {\displaystyle 1,2,...,k} . The expected length of the longest common subsequence is a super -additive function of n {\displaystyle n} , and thus there exists a number γ k ≥ 0 {\displaystyle \gamma _{k}\geq 0} , such that the expected length grows as ∼ γ k n {\displaystyle \sim \gamma _{k}n} . By checking the case with n = 1 {\displaystyle n=1} , we easily have 1 k < γ k ≤ 1 {\displaystyle {\frac {1}{k}}<\gamma _{k}\leq 1} . The exact value of even γ 2 {\displaystyle \gamma _{2}} , however, is only known to be between 0.788 and 0.827. [ 13 ]
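A crude Monte Carlo estimate of γ k can be obtained with the standard dynamic program for the longest common subsequence. The sketch below (sample sizes chosen arbitrarily) illustrates the idea for k = 2; since the expected length is superadditive, finite-n estimates approach the limit from below, and convergence is slow:

```python
import random

def lcs_length(a, b):
    """Classic O(mn) dynamic program for the longest common subsequence length."""
    m, n = len(a), len(b)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            cur[j] = prev[j - 1] + 1 if a[i - 1] == b[j - 1] else max(prev[j], cur[j - 1])
        prev = cur
    return prev[n]

def estimate_gamma(k=2, n=300, trials=10, seed=0):
    """Average LCS length divided by n for random length-n strings over k letters."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a = [rng.randrange(k) for _ in range(n)]
        b = [rng.randrange(k) for _ in range(n)]
        total += lcs_length(a, b)
    return total / (trials * n)

print(estimate_gamma())  # a little below the true gamma_2 (between 0.788 and 0.827)
```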
This article incorporates material from subadditivity on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Subadditivity |
The subarctic climate (also called subpolar climate , or boreal climate ) is a continental climate with long, cold (often very cold) winters, and short, warm to cool summers. It is found on large landmasses, often away from the moderating effects of an ocean, generally at latitudes from 50°N to 70°N, poleward of the humid continental climates . Like other Class D climates, they are rare in the Southern Hemisphere, only found at some isolated highland elevations. Subarctic or boreal climates are the source regions for the cold air that affects temperate latitudes to the south in winter. These climates represent Köppen climate classification Dfc , Dwc , Dsc , Dfd , Dwd and Dsd .
This type of climate offers some of the most extreme seasonal temperature variations found on the planet: in winter, temperatures can drop to below −50 °C (−58 °F) and in summer, the temperature may exceed 26 °C (79 °F). However, the summers are short; no more than three months of the year (but at least one month) must have a 24-hour average temperature of at least 10 °C (50 °F) to fall into this category of climate, and the coldest month should average below 0 °C (32 °F) (or −3 °C (27 °F)). Record low temperatures can approach −70 °C (−94 °F). [ 1 ]
With 5–7 consecutive months when the average temperature is below freezing, all moisture in the soil and subsoil freezes solidly to depths of many feet. Summer warmth is insufficient to thaw more than a few surface feet, so permafrost prevails under most areas not near the southern boundary of this climate zone. Seasonal thaw penetrates from 2 to 14 ft (0.6 to 4.3 m), depending on latitude, aspect, and type of ground. [ 2 ] Some northern areas with subarctic climates located near oceans (southern Alaska , northern Norway , Sakhalin Oblast and Kamchatka Oblast ), have milder winters and no permafrost, and are more suited for farming unless precipitation is excessive. The frost-free season is very short, varying from about 45 to 100 days at most, and a freeze can occur anytime outside the summer months in many areas.
The first letter, D, indicates a continental climate, with the coldest month averaging below 0 °C (32 °F) (or −3 °C (27 °F), depending on the isotherm used).
The second letter denotes precipitation patterns: s for a dry summer, w for a dry winter, and f for no dry season.
The third letter denotes temperature: c where only one to three months average above 10 °C (50 °F) and the coldest month averages above −38 °C (−36 °F), and d where the coldest month averages below −38 °C.
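These criteria can be expressed as a small classifier. The following Python sketch is a simplification: it uses the 0 °C cold-month isotherm, ignores the precipitation letter (which requires rainfall data) by assuming no dry season, and runs on hypothetical monthly temperatures:

```python
def koppen_subarctic(monthly_temps_c):
    """Classify 12 monthly mean temperatures (deg C) as a subarctic subtype.

    Returns 'Dfc' or 'Dfd' when the series meets the subarctic criteria
    (coldest month below 0 deg C, one to three months averaging 10 deg C
    or more), otherwise None.
    """
    if len(monthly_temps_c) != 12:
        raise ValueError("need 12 monthly means")
    coldest = min(monthly_temps_c)
    warm_months = sum(1 for t in monthly_temps_c if t >= 10.0)
    if coldest >= 0.0 or warm_months < 1 or warm_months > 3:
        return None                      # not a D climate with a subarctic summer
    third = "d" if coldest < -38.0 else "c"
    return "Df" + third                  # 'f' assumed: no dry season

# Hypothetical station data: three months at or above 10 deg C -> Dfc.
print(koppen_subarctic([-25, -22, -14, -4, 5, 12, 15, 12, 6, -4, -15, -22]))
```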
Most subarctic climates have little precipitation, typically no more than 380 mm (15 in) over an entire year due to the low temperatures and evapotranspiration . Away from the coasts, precipitation occurs mostly in the summer months, while in coastal areas with subarctic climates the heaviest precipitation is usually during the autumn months when the relative warmth of sea vis-à-vis land is greatest. Low precipitation, by the standards of more temperate regions with longer summers and warmer winters, is typically sufficient in view of the very low evapotranspiration to allow a water-logged terrain in many areas of subarctic climate and to permit snow cover during winter, which is generally persistent for an extended period.
A notable exception to this pattern is that subarctic climates occurring at high elevations in otherwise temperate regions have extremely high precipitation due to orographic lift . Mount Washington , with temperatures typical of a subarctic climate, receives an average rain-equivalent of 101.91 inches (2,588.5 mm) of precipitation per year. [ 3 ] Coastal areas of Khabarovsk Krai also have much higher precipitation in summer due to orographic influences (up to 175 millimetres (6.9 in) in July in some areas), whilst the mountainous Kamchatka peninsula and Sakhalin island are even wetter, since orographic moisture isn't confined to the warmer months and creates large glaciers in Kamchatka. Labrador , in eastern Canada, is similarly wet throughout the year due to the semi-permanent Icelandic Low and can receive up to 1,300 millimetres (51 in) of rainfall equivalent per year, creating a snow cover of up to 1.5 metres (59 in) that does not melt until June.
Vegetation in regions with subarctic climates is generally of low diversity, as only hardy tree species can survive the long winters and make use of the short summers. Trees are mostly limited to conifers , as few broadleaved trees are able to survive the very low temperatures in winter. This type of forest is also known as taiga , a term which is sometimes applied to the climate found therein as well. Even though the diversity may be low, the area and numbers are high, and the taiga (boreal) forest is the largest forest biome on the planet, with most of the forests located in Russia and Canada . The process by which plants become acclimated to cold temperatures is called hardening .
Agricultural potential is generally poor, due to the natural infertility of soils [ 4 ] and the prevalence of swamps and lakes left by departing ice sheets , and short growing seasons prohibit all but the hardiest of crops. Despite the short season, the long summer days at such latitudes do permit some agriculture. In some areas, ice has scoured rock surfaces bare, entirely stripping off the overburden. Elsewhere, rock basins have been formed and stream courses dammed, creating countless lakes. [ 2 ]
Northward, or toward a polar sea, the warmest month averages below 10 °C (50 °F) and the subarctic climate grades into a tundra climate unsuitable for trees. Southward, this climate grades into the humid continental climates , whose longer summers (and usually less severe winters) allow broadleaf trees ; in a few locations close to a temperate sea (as in northern Norway and southern Alaska ), it can grade into the subpolar oceanic climate , a short-summer version of an oceanic climate in which winter temperatures average near or above freezing while the summers remain short and cool. In China and Mongolia, moving southwestwards or towards lower elevations, temperatures increase but precipitation is so low that the subarctic climate grades into a cold semi-arid climate .
The Dfc climate, by far the most common subarctic type, is found in the following areas: [ 5 ] [ 6 ]
Further north and east in Siberia, continentality increases so much that winters can be exceptionally severe, averaging below −38 °C (−36 °F), even though the hottest month still averages more than 10 °C (50 °F). This creates Dfd climates, which are mostly found in the Sakha Republic :
In the Southern Hemisphere , the Dfc climate is found only in small, isolated pockets in the Snowy Mountains of Australia , the Southern Alps of New Zealand , and the Lesotho Highlands . In South America , this climate occurs on the western slope of the central Andes in Chile and Argentina , where climatic conditions are notably more humid compared to the eastern slope. The presence of the Andes mountain range contributes to a wetter climate on the western slope by capturing moisture from the Pacific Ocean , resulting in increased precipitation, especially during the winter months. This climate zone supports temperate rainforests, mostly on the highest areas of the Valdivian rainforest in Chile and the subantarctic forest in Argentina.
Climates classified as Dsc or Dsd , with a dry summer, are rare, occurring in very small areas at high elevation around the Mediterranean Basin ; Iran ; Kyrgyzstan ; Tajikistan ; Alaska and other parts of the northwestern United States ( Eastern Washington , Eastern Oregon , Southern Idaho , California's Eastern Sierra ); the Russian Far East ; Akureyri, Iceland ; Seneca, Oregon ; and Atlin, British Columbia . Turkey and Afghanistan are exceptions; Dsc climates are common in Northeast Anatolia , in the Taurus and Köroğlu Mountains , and the Central Afghan highlands .
In the Southern Hemisphere, the Dsc climate is present in South America as a subarctic climate influenced by Mediterranean characteristics, often considered a high-altitude variant of the Mediterranean climate. It is located on the eastern slopes of the central Argentine Andes and in some sections on the Chilean side. While there are no major settlements exhibiting this climate, several localities in the vicinity experience it, such as San Carlos de Bariloche , Villa La Angostura , San Martín de los Andes , Balmaceda , Punta de Vacas , and Termas del Flaco . [ 7 ]
Climates classified as Dwc or Dwd , with a dry winter, are found in parts of East Asia, like China, where the Siberian High makes the winters colder than places like Scandinavia or Alaska interior but extremely dry (typically with around 5 millimeters (0.20 in) of rainfall equivalent per month), meaning that winter snow cover is very limited. The Dwc climate can be found in:
In the Southern Hemisphere, small pockets of the Lesotho Highlands and the Drakensberg Mountains have a Dwc classification. | https://en.wikipedia.org/wiki/Subarctic_climate |
In physics , a subatomic particle is a particle smaller than an atom . [ 1 ] According to the Standard Model of particle physics , a subatomic particle can be either a composite particle , which is composed of other particles (for example, a baryon , like a proton or a neutron , composed of three quarks ; or a meson , composed of two quarks ), or an elementary particle , which is not composed of other particles (for example, quarks; or electrons , muons , and tau particles, which are called leptons ). [ 2 ] Particle physics and nuclear physics study these particles and how they interact. [ 3 ] Most force-carrying particles, such as photons and gluons , are called bosons ; although they carry quanta of energy, they have no rest mass or discrete diameter (other than their wavelength) and can overlap and combine, unlike the fermions , which have rest mass and cannot overlap or combine. The W and Z bosons , however, are an exception to this rule and have relatively large rest masses at approximately 80 GeV/ c 2 and 90 GeV/ c 2 respectively.
Experiments show that light can behave like a stream of particles (called photons ) as well as exhibiting wave-like properties. This led to the concept of wave–particle duality to reflect that quantum-scale particles behave both like particles and like waves ; they are occasionally called wavicles to reflect this. [ 4 ]
Another concept, the uncertainty principle , states that some of their properties taken together, such as their simultaneous position and momentum , cannot be measured exactly. [ 5 ] Interactions of particles in the framework of quantum field theory are understood as creation and annihilation of quanta of corresponding fundamental interactions . This blends particle physics with field theory .
Even among particle physicists , the exact definition of a particle varies; professional attempts at a definition include: [ 6 ]
Subatomic particles are either "elementary", i.e. not made of multiple other particles, or "composite" and made of more than one elementary particle bound together.
The elementary particles of the Standard Model are: [ 7 ]
All of these have now been discovered through experiments, with the latest being the top quark (1995), tau neutrino (2000), and Higgs boson (2012).
Various extensions of the Standard Model predict the existence of an elementary graviton particle and many other elementary particles , but none have been discovered as of 2021.
The word hadron comes from Greek and was introduced in 1962 by Lev Okun . [ 8 ] Nearly all composite particles contain multiple quarks (and/or antiquarks) bound together by gluons (with a few exceptions with no quarks, such as positronium and muonium ). Those containing few (≤ 5) quarks (including antiquarks) are called hadrons . Due to a property known as color confinement , quarks are never found singly but always occur in hadrons containing multiple quarks. The hadrons are divided by number of quarks (including antiquarks) into the baryons containing an odd number of quarks (almost always 3), of which the proton and neutron (the two nucleons ) are by far the best known; and the mesons containing an even number of quarks (almost always 2, one quark and one antiquark), of which the pions and kaons are the best known.
Except for the proton and neutron, all other hadrons are unstable and decay into other particles in microseconds or less. A proton is made of two up quarks and one down quark , while the neutron is made of two down quarks and one up quark. These commonly bind together into an atomic nucleus, e.g. a helium-4 nucleus is composed of two protons and two neutrons. Most hadrons do not live long enough to bind into nucleus-like composites; those that do (other than the proton and neutron) form exotic nuclei .
Any subatomic particle, like any particle in the three-dimensional space that obeys the laws of quantum mechanics , can be either a boson (with integer spin ) or a fermion (with odd half-integer spin).
In the Standard Model, all the elementary fermions have spin 1/2, and are divided into the quarks which carry color charge and therefore feel the strong interaction, and the leptons which do not. The elementary bosons comprise the gauge bosons (photon, W and Z, gluons) with spin 1, while the Higgs boson is the only elementary particle with spin zero.
The hypothetical graviton is required theoretically to have spin 2, but is not part of the Standard Model. Some extensions such as supersymmetry predict additional elementary particles with spin 3/2, but none have been discovered as of 2023.
Due to the laws for spin of composite particles, the baryons (3 quarks) have spin either 1/2 or 3/2 and are therefore fermions; the mesons (2 quarks) have integer spin of either 0 or 1 and are therefore bosons.
In special relativity , the energy of a particle at rest equals its mass times the speed of light squared , E = mc 2 . That is, mass can be expressed in terms of energy and vice versa. If a particle has a frame of reference in which it lies at rest , then it has a positive rest mass and is referred to as massive .
All composite particles are massive. Baryons (meaning "heavy") tend to have greater mass than mesons (meaning "intermediate"), which in turn tend to be heavier than leptons (meaning "lightweight"), but the heaviest lepton (the tau particle ) is heavier than the two lightest flavours of baryons ( nucleons ). It is also certain that any particle with an electric charge is massive.
When originally defined in the 1950s, the terms baryons, mesons and leptons referred to masses; however, after the quark model became accepted in the 1970s, it was recognised that baryons are composites of three quarks, mesons are composites of one quark and one antiquark, while leptons are elementary and are defined as the elementary fermions with no color charge.
All massless particles (particles whose invariant mass is zero) are elementary. These include the photon and gluon, although the latter cannot be isolated.
Most subatomic particles are not stable. Apart from the proton, baryons decay via the strong or weak force, and the heavier charged leptons decay as well: the muon and tau leptons, together with their antiparticles, decay by the weak force. Protons are not known to decay , although whether they are "truly" stable is unknown, as some very important Grand Unified Theories (GUTs) actually require that they do. Neutrinos (and antineutrinos) do not decay, but the related phenomenon of neutrino oscillations is thought to occur even in a vacuum. The electron and its antiparticle, the positron , are theoretically stable due to charge conservation unless a lighter particle having magnitude of electric charge ≤ e exists (which is considered unlikely).
All observable subatomic particles have an electric charge that is an integer multiple of the elementary charge . The Standard Model's quarks have "non-integer" electric charges, namely, multiples of 1 / 3 e , but quarks (and other combinations with non-integer electric charge) cannot be isolated due to color confinement . For baryons, mesons, and their antiparticles, the constituent quarks' charges sum up to an integer multiple of e .
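The charge arithmetic is easy to verify with exact fractions. The Python sketch below sums constituent quark charges for a few hadrons; the "anti-" prefix used to mark antiquarks is just an illustrative convention:

```python
from fractions import Fraction

# Standard Model quark charges in units of the elementary charge e.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
          "c": Fraction(2, 3), "s": Fraction(-1, 3),
          "t": Fraction(2, 3), "b": Fraction(-1, 3)}

def hadron_charge(quarks):
    """Sum constituent charges; the prefix 'anti-' negates a quark's charge."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("anti-"):
            total -= CHARGE[q[len("anti-"):]]
        else:
            total += CHARGE[q]
    return total

print(hadron_charge(["u", "u", "d"]))      # proton: 1
print(hadron_charge(["u", "d", "d"]))      # neutron: 0
print(hadron_charge(["u", "anti-d"]))      # positive pion: 1
```

In each case the sum is an integer multiple of e, as required.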
Through the work of Albert Einstein , Satyendra Nath Bose , Louis de Broglie , and many others, current scientific theory holds that all particles also have a wave nature. [ 9 ] This has been verified not only for elementary particles but also for compound particles like atoms and even molecules. In fact, according to traditional formulations of non-relativistic quantum mechanics, wave–particle duality applies to all objects, even macroscopic ones; although the wave properties of macroscopic objects cannot be detected due to their small wavelengths. [ 10 ]
Interactions between particles have been scrutinized for many centuries, and a few simple laws underpin how particles behave in collisions and interactions. The most fundamental of these are the laws of conservation of energy and conservation of momentum , which let us make calculations of particle interactions on scales of magnitude that range from stars to quarks. [ 11 ] These are the prerequisite basics of Newtonian mechanics , a series of statements and equations in Philosophiae Naturalis Principia Mathematica , originally published in 1687.
The negatively charged electron has a mass of about 1 / 1836 of that of a hydrogen atom. The remainder of the hydrogen atom's mass comes from the positively charged proton . The atomic number of an element is the number of protons in its nucleus. Neutrons are neutral particles having a mass slightly greater than that of the proton. Different isotopes of the same element contain the same number of protons but different numbers of neutrons. The mass number of an isotope is the total number of nucleons (neutrons and protons collectively).
Chemistry concerns itself with how electron sharing binds atoms into structures such as crystals and molecules . The subatomic particles considered important in the understanding of chemistry are the electron , the proton , and the neutron . Nuclear physics deals with how protons and neutrons arrange themselves in nuclei. The study of subatomic particles, atoms and molecules, and their structure and interactions, requires quantum mechanics . Analyzing processes that change the numbers and types of particles requires quantum field theory . The study of subatomic particles per se is called particle physics . The term high-energy physics is nearly synonymous to "particle physics" since creation of particles requires high energies: it occurs only as a result of cosmic rays , or in particle accelerators . Particle phenomenology systematizes the knowledge about subatomic particles obtained from these experiments. [ 12 ]
The term " subatomic particle" is largely a retronym of the 1960s, used to distinguish a large number of baryons and mesons (which comprise hadrons ) from particles that are now thought to be truly elementary . Before that hadrons were usually classified as "elementary" because their composition was unknown.
A list of important discoveries follows: | https://en.wikipedia.org/wiki/Subatomic_particle |
The cells of eukaryotic organisms are elaborately subdivided into functionally-distinct membrane-bound compartments. Some major constituents of eukaryotic cells are: extracellular space , plasma membrane , cytoplasm , nucleus , mitochondria , Golgi apparatus , endoplasmic reticulum (ER), peroxisome , vacuoles , cytoskeleton , nucleoplasm , nucleolus , nuclear matrix and ribosomes .
Bacteria also have subcellular localizations that can be separated when the cell is fractionated. The most common localizations referred to include the cytoplasm , the cytoplasmic membrane (also referred to as the inner membrane in Gram-negative bacteria), the cell wall (which is usually thicker in Gram-positive bacteria) and the extracellular environment. The cytoplasm, the cytoplasmic membrane and the cell wall are subcellular localizations, whereas the extracellular environment is clearly not. Most Gram-negative bacteria also contain an outer membrane and periplasmic space . Unlike eukaryotes, most bacteria contain no membrane-bound organelles; however, there are some exceptions (e.g. magnetosomes ). [ 1 ]
The experimentally determined subcellular locations of proteins can be found in UniProtKB , Compartments , and in a few more specialized resources, such as the lactic acid bacterial secretome database .
There are also several subcellular location databases with computational predictions , such as the fungal secretome and subcellular proteome knowledgebase version 2 (FunSecKB2), the plant secretome and subcellular proteome knowledgebase (PlantSecKB), MetazSecKB for protein subcellular locations of human and animals, and ProtSecKB for protein subcellular locations of all protists.
Proteome Analyst is a freely available web server and online toolkit for predicting protein subcellular localization. [ 2 ] | https://en.wikipedia.org/wiki/Subcellular_localization |
In genetics , a subclade is a subgroup of a haplogroup . [ 1 ]
Although human mitochondrial DNA (mtDNA) and Y chromosome DNA (Y-DNA) haplogroups and subclades are named in a similar manner, their names belong to completely separate systems. [ 2 ]
mtDNA haplogroups are defined by the presence of a series of single-nucleotide polymorphism (SNP) markers in the hypervariable regions and the coding region of mitochondrial DNA . They are named with the capital letters A through Z, with further subclades named using numbers and lower case letters. [ 2 ] [ 3 ] [ 4 ]
Y-DNA haplogroups are defined by the presence of a series of SNP markers on the Y chromosome . Subclades are defined by a terminal SNP , the SNP furthest down in the Y chromosome phylogenetic tree. [ 5 ]
The Y Chromosome Consortium (YCC) developed a system of naming major human Y-DNA haplogroups with the capital letters A through T, with further subclades named using numbers and lower case letters (YCC longhand nomenclature ). YCC shorthand nomenclature names Y-DNA haplogroups and their subclades with the first letter of the major Y-DNA haplogroup followed by a dash and the name of the defining terminal SNP. [ 6 ] Y-DNA haplogroup nomenclature is changing over time to accommodate the increasing number of SNPs being discovered and tested, and the resulting expansion of the Y chromosome phylogenetic tree. This change in nomenclature has resulted in inconsistent nomenclature being used in different sources. [ 7 ] This inconsistency, and increasingly cumbersome longhand nomenclature, has prompted a move towards using the simpler shorthand nomenclature. | https://en.wikipedia.org/wiki/Subclade |
In molecular biology , subcloning is a technique used to move a particular DNA sequence from a parent vector to a destination vector .
Subcloning is not to be confused with molecular cloning , a related technique.
Restriction enzymes are used to excise the gene of interest (the insert ) from the parent vector. The insert is purified in order to isolate it from other DNA molecules. A common purification method is gel isolation . The number of copies of the gene is then amplified using the polymerase chain reaction (PCR).
Simultaneously, the same restriction enzymes are used to digest (cut) the destination vector. The idea behind using the same restriction enzymes is to create complementary sticky ends , which will facilitate ligation later on. A phosphatase , commonly calf-intestinal alkaline phosphatase (CIAP), is also added to prevent self-ligation of the destination vector. The digested destination vector is isolated/purified.
The insert and the destination vector are then mixed together with DNA ligase. A typical molar ratio of insert genes to destination vectors is 3:1; [ 1 ] by increasing the insert concentration, self-ligation is further decreased. After letting the reaction mixture sit for a set amount of time at a specific temperature (dependent upon the size of the strands being ligated; for more information see DNA ligase ), the insert should become successfully incorporated into the destination plasmid .
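The 3:1 molar ratio translates into DNA masses via the standard approximation that molar amount is proportional to mass divided by length in base pairs. The sketch below uses hypothetical fragment sizes:

```python
def insert_mass_ng(vector_mass_ng, vector_len_bp, insert_len_bp, molar_ratio=3.0):
    """Mass of insert DNA needed for a given insert:vector molar ratio.

    Assumes mass is proportional to length in base pairs, so that
    moles ~ mass / length for double-stranded DNA.
    """
    return vector_mass_ng * (insert_len_bp / vector_len_bp) * molar_ratio

# Hypothetical numbers: 50 ng of a 3000 bp vector, a 900 bp insert, 3:1 ratio.
print(insert_mass_ng(50, 3000, 900))  # 45.0 ng of insert
```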
The plasmid is often transformed into a bacterium like E. coli . Ideally, when the bacterium divides, the plasmid is also replicated. In the best-case scenario, each bacterial cell contains several copies of the plasmid. After a good number of bacterial colonies have grown, they can be miniprepped to harvest the plasmid DNA.
In order to ensure growth of only transformed bacteria (which carry the desired plasmid to be harvested), a marker gene in the destination vector is used for selection . Typical marker genes confer antibiotic resistance or enable nutrient biosynthesis . For example, the marker gene could confer resistance to the antibiotic ampicillin. Bacteria that picked up the desired plasmid then also contain the marker gene and are able to grow in the presence of ampicillin, whereas bacteria that did not pick up the plasmid remain vulnerable to destruction by the ampicillin. Therefore, successfully transformed bacteria are "selected".
In this example, a gene from mammalian gene library will be subcloned into a bacterial plasmid (destination platform). The bacterial plasmid is a piece of circular DNA which contains regulatory elements allowing for the bacteria to produce a gene product ( gene expression ) if it is placed in the correct place in the plasmid. The production site is flanked by two restriction enzyme cutting sites "A" and "B" with incompatible sticky ends.
The mammalian DNA does not come with these restriction sites, so they are built in by overlap extension PCR . The primers are designed to place the restriction sites carefully, so that the coding sequence of the protein is in-frame and a minimum of extra amino acids is introduced on either side of the protein.
Both the PCR product containing the mammalian gene with the new restriction sites and the destination plasmid are subjected to restriction digestion, and the digest products are purified by gel electrophoresis .
The digest products, now containing compatible sticky ends with each other (but incompatible sticky ends with themselves) are subjected to ligation, creating a new plasmid which contains the background elements of the original plasmid with a different insert.
The plasmid is transformed into bacteria and the identity of the insert is confirmed by DNA sequencing . | https://en.wikipedia.org/wiki/Subcloning |
A subcritical reactor is a nuclear fission reactor concept that produces fission without achieving criticality . Instead of sustaining a chain reaction , a subcritical reactor uses additional neutrons from an outside source. There are two general classes of such devices. One uses neutrons provided by a nuclear fusion machine, a concept known as a fusion–fission hybrid . The other uses neutrons created through spallation of heavy nuclei by charged particles such as protons accelerated by a particle accelerator , a concept known as an accelerator-driven system (ADS) or accelerator-driven sub-critical reactor .
A subcritical reactor can be used to destroy heavy isotopes contained in the used fuel from a conventional nuclear reactor, while at the same time producing electricity. The long-lived transuranic elements in nuclear waste can in principle be fissioned , releasing energy in the process and leaving behind the fission products which are shorter-lived. This would shorten considerably the time for disposal of radioactive waste . However, some isotopes have threshold fission cross sections and therefore require a fast reactor for being fissioned. While they can be transmuted into fissile material with thermal neutrons, some nuclides need as many as three successive neutron capture reactions to reach a fissile isotope and then yet another neutron to fission. Also, they release on average too few new neutrons per fission, so that with a fuel containing a high fraction of them, criticality cannot be reached. The accelerator-driven reactor is independent of this parameter and thus can utilize these nuclides. The three most important long-term radioactive isotopes that could advantageously be handled that way are neptunium-237 , americium-241 and americium-243 . The nuclear weapon material plutonium-239 is also suitable although it can be expended in a cheaper way as MOX fuel or inside existing fast reactors .
Besides nuclear waste incineration, there is interest in this type of reactor because it is perceived as inherently safe , unlike a conventional reactor. In most types of critical reactors, there exist circumstances in which the rate of fission can increase rapidly, damaging or destroying the reactor and allowing the escape of radioactive material (see SL-1 or Chernobyl disaster ). With a subcritical reactor, the reaction will cease unless continually fed neutrons from an outside source. However, the problem of heat generation even after ending the chain reaction remains, so that continuous cooling of such a reactor for a considerable period after shut-down remains vital in order to avoid overheating. Even the issue of decay heat can be minimized, as a subcritical reactor need not assemble a critical mass of fissile material and can thus be built (nearly) arbitrarily small, reducing the required thermal mass of an emergency coolant system capable of absorbing all heat generated in the hours to days after a scram .
Another issue in which a subcritical reactor differs from a "normal" nuclear reactor (no matter whether it operates with fast or thermal neutrons) is that all "normal" nuclear power plants rely on delayed neutrons to maintain safe operating conditions. Depending on the fissioning nuclide, a bit under 1% of neutrons are not released immediately upon fission ( prompt neutrons ) but rather with fractions of seconds to minutes of delay, by fission products that undergo beta decay followed by neutron emission. Those delayed neutrons are essential for reactor control, as the time between fission "generations" is of such a short order of magnitude that macroscopic physical processes or human intervention cannot keep a power excursion under control. However, when criticality is only reached with the delayed neutrons included, the reaction times become several orders of magnitude larger and reactor control becomes feasible. By contrast, this means that too low a fraction of delayed neutrons makes an otherwise fissile material unsuitable for operating a "conventional" nuclear power plant. Conversely, a subcritical reactor actually has slightly improved properties with a fuel with low delayed neutron fractions (see below). It so happens that while 235 U , the currently most used fissile material, has a relatively high delayed neutron fraction, 239 Pu has a much lower one, which, in addition to other physical and chemical properties, limits the possible plutonium content in "normal" reactor fuel. For this reason spent MOX fuel , which still contains significant amounts of plutonium (including fissile 239 Pu and, when "fresh", 241 Pu ), is usually not reprocessed due to the ingrowth of non-fissile 240 Pu , which would require a higher plutonium content in fuel manufactured from this plutonium to maintain criticality. The other main component of spent fuel, reprocessed uranium , is usually only recovered as a byproduct and fetches worse prices on the uranium market than natural uranium due to the ingrowth of 236 U and other "undesirable" isotopes of uranium .
Most current ADS designs propose a high-intensity proton accelerator with an energy of about 1 GeV , directed towards a spallation target or spallation neutron source. The source, located in the heart of the reactor core, contains a liquid metal such as lead - bismuth which releases neutrons when impacted by the beam and is cooled by circulating the liquid metal towards a heat exchanger. The nuclear reactor core surrounding the spallation neutron source contains the fuel rods, the fuel being any fissile or fertile actinide mix, preferably already with a certain amount of fissile material so as not to have to run at zero criticality during startup. For each proton intersecting the spallation target, an average of 20 neutrons is released, which fission the surrounding fissile part of the fuel and transmute atoms in the fertile part, "breeding" new fissile material. If the value of 20 neutrons per GeV expended is assumed, one neutron "costs" 50 MeV, while fission (which requires one neutron) releases on the order of 200 MeV per actinide atom that is split. Efficiency can be increased by reducing the energy needed to produce a neutron, by increasing the share of usable energy extracted from the fission (if a thermal process is used, Carnot efficiency dictates that higher temperatures are needed), and finally by bringing criticality ever closer to 1 while still staying below it. An important factor in both efficiency and safety is how subcritical the reactor is. To simplify, the value of k(effective) that is used to give the criticality of a reactor (including delayed neutrons) can be interpreted as how many neutrons of each "generation" fission further nuclei. If k(effective) were 1, for every 1000 neutrons introduced, 1000 neutrons would be produced that also fission further nuclei, and the reaction rate would steadily increase due to more and more neutrons being delivered from the neutron source. If k(effective) is just below 1, few neutrons have to be delivered from outside the reactor to keep the reaction at a steady state, increasing efficiency. On the other hand, in the extreme case of "zero criticality", that is k(effective) = 0 (e.g. if the reactor is run for transmutation alone), all neutrons are "consumed" and none are produced inside the fuel. However, as neutronics can only ever be known to a certain degree of precision, the reactor must in practice allow a safety margin below criticality that depends on how well the neutronics are known and on the effect of the ingrowth of nuclides that decay via neutron-emitting spontaneous fission, such as californium-252 , or of nuclides that decay via neutron emission .
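The energy balance sketched above can be turned into a toy estimate of the energy gain. The Python model below is a deliberate simplification: it ignores accelerator and thermal efficiencies, assumes an average of ν ≈ 2.5 neutrons per fission, and treats each spallation neutron as starting a finite fission chain. It is meant only to illustrate how steeply the gain rises as k(effective) approaches 1:

```python
def ads_energy_gain(k_eff, neutrons_per_proton=20, proton_energy_mev=1000,
                    nu=2.5, e_fission_mev=200):
    """Crude fission-energy gain of an accelerator-driven system.

    Each source neutron sustains a chain totalling 1/(1 - k) neutrons;
    with roughly k/nu fissions per neutron, this gives k / (nu * (1 - k))
    fissions per source neutron. All conversion losses are ignored.
    """
    fissions_per_proton = neutrons_per_proton * k_eff / (nu * (1.0 - k_eff))
    return fissions_per_proton * e_fission_mev / proton_energy_mev

for k in (0.90, 0.95, 0.98):
    print(f"k_eff = {k}: fission energy ~ {ads_energy_gain(k):.0f}x the beam energy")
```

In this toy model the gain roughly doubles between k = 0.90 and k = 0.95, showing why the choice of subcriticality margin dominates both economics and safety.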
The neutron balance can be regulated or indeed shut off by adjusting the accelerator power so that the reactor would be below criticality . The additional neutrons provided by the spallation neutron source provide the degree of control as do the delayed neutrons in a conventional nuclear reactor , the difference being that spallation neutron source-driven neutrons are easily controlled by the accelerator. The main advantage is inherent safety . A conventional nuclear reactor 's nuclear fuel possesses self-regulating properties such as the Doppler effect or void effect, which make these nuclear reactors safe. In addition to these physical properties of conventional reactors, in the subcritical reactor, whenever the neutron source is turned off, the fission reaction ceases and only the decay heat remains.
There are technical difficulties to overcome before ADS can become economical and eventually be integrated into future nuclear waste management. The accelerator must provide a high intensity and also be highly reliable: each outage of the accelerator, in addition to causing a scram , will put the system under immense thermal stress . There are concerns about the window separating the protons from the spallation target, which is expected to be exposed to stress under extreme conditions. However, recent experience with the MEGAPIE liquid metal neutron spallation source tested at the Paul Scherrer Institute has demonstrated a working beam window under a 0.78 MW intense proton beam. The chemical separation of the transuranic elements and the fuel manufacturing, as well as the structure materials, are important issues. Finally, the lack of nuclear data at high neutron energies limits the efficiency of the design. This latter issue can be overcome by introducing a neutron moderator between the neutron source and the fuel, but this can lead to increased leakage, as the moderator will also scatter neutrons away from the fuel. Changing the geometry of the reactor can reduce but never eliminate leakage. Leaking neutrons are also of concern due to the activation products they produce and due to the physical damage that neutron irradiation can cause to materials. Furthermore, there are certain advantages to the fast neutron spectrum which cannot be achieved with thermal neutrons resulting from a moderator. On the other hand, thermal neutron reactors are the most common and well understood type of nuclear reactor, and thermal neutrons also have advantages over fast neutrons.
Some laboratory experiments and many theoretical studies have demonstrated the theoretical possibility of such a plant. Carlo Rubbia , a nuclear physicist , Nobel laureate, and former director of CERN , was one of the first to conceive a design of a subcritical reactor, the so-called " energy amplifier ". As of 2005, several large-scale projects were under way in Europe and Japan to further develop subcritical reactor technology. In 2012, CERN scientists and engineers launched the International Thorium Energy Committee (iThEC), [ 1 ] an organization dedicated to pursuing this goal, which organized the ThEC13 [ 2 ] conference on the subject.
Subcritical reactors have been proposed both as a means of generating electric power and as a means of transmutation of nuclear waste , so the gain is twofold. However, the costs for construction, safety and maintenance of such complex installations are expected to be very high, not to mention the amount of research needed to develop a practical design (see above). There exist cheaper and reasonably safe waste management concepts, such as transmutation in fast-neutron reactors . However, the solution of a subcritical reactor might be favoured for better public acceptance: it is considered more acceptable to burn the waste than to bury it for hundreds of thousands of years. For future waste management, a few transmutation devices could be integrated into a large-scale nuclear program, hopefully increasing the overall costs only slightly.
The main challenge facing partitioning and transmutation operations is the need to enter nuclear cycles of extremely long duration: about 200 years. [ 3 ] Another disadvantage is the generation of high quantities of intermediate-level long-lived radioactive waste (ILW) which will also require deep geological disposal to be safely managed. A more positive aspect is the expected reduction in the size of the repository, estimated to be a factor of 4 to 6. Both positive and negative aspects were examined in an international benchmark study [ 4 ] coordinated by Forschungszentrum Jülich and financed by the European Union .
While ADS was originally conceptualized as a part of a light water reactor design, other proposals have been made that incorporate an ADS into other generation IV reactor concepts. [ citation needed ]
One such proposal calls for a gas-cooled fast reactor that is fueled primarily by plutonium and americium . The neutronic properties of americium make it difficult to use in any critical reactor, because it tends to make the moderator temperature coefficient more positive, decreasing stability. The inherent safety of an ADS, however, would allow americium to be safely burned. These materials also have good neutron economy, allowing the pitch-to-diameter ratio to be large, which allows for improved natural circulation and economics.
Subcritical methods for use in nuclear waste disposal that do not rely on neutron sources are also being developed. [ 5 ] These include systems that rely on the mechanism of muon capture , in which muons (μ − ) produced by a compact accelerator-driven source transmute long-lived radioactive isotopes to stable isotopes. [ 6 ]
Generally the term "subcritical reactor" is reserved for artificial systems, but natural systems do exist—any natural source of fissile material exposed to cosmic and gamma rays (from even the sun ) could be considered a subcritical reactor. This includes space launched satellites with radioisotope thermoelectric generators as well as any such exposed reservoirs. | https://en.wikipedia.org/wiki/Subcritical_reactor |
In biology , a subculture is either a new cell culture or a microbiological culture made by transferring some or all cells from a previous culture to fresh growth medium . This action is called subculturing or passaging the cells. Subculturing is used to prolong the lifespan and/or increase the number of cells or microorganisms in the culture. [ 1 ]
Cell lines and microorganisms cannot be held in culture indefinitely due to the gradual rise in metabolites which may be toxic , the depletion of nutrients present in the culture medium, and an increase in cell count or population size due to growth. Once nutrients are depleted and levels of toxic byproducts increase, microorganisms in culture will enter the stationary phase , where proliferation is greatly reduced or ceases (the cell density plateaus). When microorganisms from this culture are transferred into fresh media, nutrients trigger the growth of the microorganisms, which go through lag phase , a period of slow growth and adaptation to the new environment, followed by log phase , a period where the cells grow exponentially. [ 1 ]
Subculture is therefore used to produce a new culture with a lower density of cells than the originating culture, fresh nutrients and no toxic metabolites, allowing continued growth of the cells without risk of cell death. Subculture is important for both proliferating (e.g. a microorganism like E. coli ) and non-proliferating (e.g. terminally differentiated white blood cells ) cells. Subculturing can also be used for growth curve calculations (e.g. generation time) [ 2 ] and for obtaining log-phase microorganisms for experiments (e.g. bacterial transformation). [ 3 ]
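A generation-time calculation of the kind mentioned above can be done from two density measurements taken during exponential growth; the counts in the Python sketch below are hypothetical:

```python
import math

def generation_time(n0, n1, elapsed_minutes):
    """Generation (doubling) time from two cell-density measurements
    taken during exponential (log-phase) growth."""
    doublings = math.log2(n1 / n0)
    return elapsed_minutes / doublings

# Hypothetical E. coli counts (cells/mL) one hour apart during log phase:
# an eight-fold increase is three doublings, so g = 60 / 3 = 20 minutes.
print(generation_time(5e6, 4e7, 60))
```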
Typically, subculture is from a culture of a certain volume into fresh growth medium of equal volume; this allows long-term maintenance of the cell line. Subculture into a larger volume of growth medium is used when wanting to increase the number of cells for, for example, use in an industrial process or scientific experiment.
It is often important to record the approximate number of divisions cells have had in culture by recording the number of passages or subcultures. In the case of plant tissue cells, somaclonal variation may arise over long periods in culture. Similarly, in mammalian cell lines, chromosomal aberrations have a tendency to increase over time. Microorganisms, meanwhile, tend to adapt to culture conditions, which are rarely precisely like their natural environment, and this adaptation can alter their biology.
The protocol for subculturing cells depends heavily on the properties of the cells involved.
Many cell types, in particular many microorganisms, grow in solution and not attached to a surface. These cell types can be subcultured by simply taking a small volume of the parent culture and diluting it in fresh growth medium. Cell density in these cultures is normally measured in cells per milliliter for large eukaryotic cells, or as optical density at 600 nm for smaller cells like bacteria. The cells will often have a preferred range of densities for optimal growth, and subculture will normally try to keep the cells in this range.
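The dilution needed to reach a target starting density follows directly from the ratio of densities. The Python sketch below assumes OD600 scales linearly with cell density (only approximately true at high densities), and the numbers are illustrative:

```python
def passage_volume_ml(od_current, od_target, final_volume_ml):
    """Volume of suspension culture to transfer into fresh medium so the
    new culture starts at od_target (OD assumed proportional to density)."""
    if od_target >= od_current:
        raise ValueError("target density must be below the current density")
    return final_volume_ml * od_target / od_current

# Hypothetical: seed a 50 mL flask at OD600 = 0.05 from a culture at OD600 = 2.0.
v = passage_volume_ml(2.0, 0.05, 50)
print(f"transfer {v:.2f} mL of culture into {50 - v:.2f} mL fresh medium")
```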
Adherent cells, for example many mammalian cell lines, grow attached to a surface such as the bottom of a cell culture flask or petri dish. These cell types have to be detached from the surface before they can be subcultured. For adherent cells, cell density is normally measured in terms of confluency , the percentage of the growth surface covered by cells. The cells will often have a known range of confluencies for optimal growth, for example a mammalian cell line like HeLa generally prefers confluencies between 10% and 100%, and subculture will normally try to keep the cells in this range. For subculture, cells may be detached by one of several methods including trypsin treatment to break down the proteins responsible for surface adherence, chelating calcium ions with EDTA which disrupts some protein adherence mechanisms, or mechanical methods like repeated washing or use of a cell scraper. The detached cells are then resuspended in fresh growth medium and allowed to settle back onto their growth surface.
| https://en.wikipedia.org/wiki/Subculture_(biology) |
In topological data analysis , a subdivision bifiltration is a collection of filtered simplicial complexes , typically built upon a set of data points in a metric space , that captures shape and density information about the underlying data set. The subdivision bifiltration relies on a natural filtration of the barycentric subdivision of a simplicial complex by flags of minimum dimension, which encodes density information about the metric space upon which the complex is built. The subdivision bifiltration was first introduced by Donald Sheehy in 2011 as part of his doctoral thesis [ 1 ] (later subsumed by a conference paper in 2012 [ 2 ] ) as a discrete model of the multicover bifiltration , a continuous construction whose underlying framework dates back to the 1970s. [ 3 ] In particular, Sheehy applied the construction to both the Vietoris-Rips and Čech filtrations, two common objects in the field of topological data analysis. [ 4 ] [ 5 ] [ 6 ] Whereas single-parameter filtrations are not robust with respect to outliers in the data, [ 7 ] the subdivision-Rips and -Čech bifiltrations satisfy several desirable stability properties. [ 8 ]
Let T {\displaystyle T} be a simplicial complex . Then a nested sequence of simplices σ 1 ⊂ σ 2 ⊂ ⋯ ⊂ σ k {\displaystyle \sigma _{1}\subset \sigma _{2}\subset \cdots \subset \sigma _{k}} of T {\displaystyle T} is called a flag or chain of T {\displaystyle T} . The set of all flags of T {\displaystyle T} comprises an abstract simplicial complex , known as the barycentric subdivision of T {\displaystyle T} , denoted by Bary ( T ) {\displaystyle \operatorname {Bary} (T)} . The barycentric subdivision is naturally identified with a geometric subdivision of T {\displaystyle T} , created by starring the geometric realization of T {\displaystyle T} at the barycenter of each simplex. [ 9 ]
There is a natural filtration on Bary ( T ) {\displaystyle \operatorname {Bary} (T)} by considering for each natural number k {\displaystyle k} the maximal subcomplex of Bary ( T ) {\displaystyle \operatorname {Bary} (T)} spanned by vertices of Bary ( T ) {\displaystyle \operatorname {Bary} (T)} corresponding to simplices of T {\displaystyle T} of dimension at least k − 1 {\displaystyle k-1} , which is denoted S ~ ( T ) k {\displaystyle {\tilde {\mathcal {S}}}(T)_{k}} . In particular, by this convention, then S ~ ( T ) 1 = Bary ( T ) {\displaystyle {\tilde {\mathcal {S}}}(T)_{1}=\operatorname {Bary} (T)} . Considering the sequence of nested subcomplexes given by varying the parameter k {\displaystyle k} , we obtain a filtration on Bary ( T ) {\displaystyle \operatorname {Bary} (T)} known as the subdivision filtration. Since the complexes in the subdivision filtration shrink as k {\displaystyle k} increases, we can regard it as a functor S ~ ( − ) : N op → S i m p {\displaystyle {\tilde {\mathcal {S}}}(-):\mathbb {N} ^{\operatorname {op} }\to \mathbf {Simp} } from the opposite posetal category N op {\displaystyle \mathbb {N} ^{\operatorname {op} }} to the category S i m p {\displaystyle \mathbf {Simp} } of simplicial complexes and simplicial maps .
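The construction can be made concrete for a small complex. The brute-force Python sketch below enumerates the flags of Bary(T) and the level-k subcomplexes spanned by simplices of T of dimension at least k − 1; practical implementations are far more efficient, and this is only meant to make the definition tangible:

```python
from itertools import combinations

def closure(maximal_simplices):
    """All nonempty faces of the given maximal simplices, as frozensets."""
    faces = set()
    for s in maximal_simplices:
        for r in range(1, len(s) + 1):
            faces.update(frozenset(c) for c in combinations(s, r))
    return faces

def flags(vertices):
    """All chains sigma_1 < sigma_2 < ... (strict inclusion) among the given
    simplices; these are exactly the simplices of the barycentric subdivision."""
    chains = [[v] for v in vertices]
    frontier = chains[:]
    while frontier:
        # frozenset '<' tests proper inclusion, so chains grow strictly.
        frontier = [c + [v] for c in frontier for v in vertices if c[-1] < v]
        chains += frontier
    return chains

def subdivision_filtration(maximal_simplices):
    """Level k keeps only the vertices of Bary(T) coming from simplices of T
    of dimension >= k - 1, i.e. with at least k elements."""
    simplices = closure(maximal_simplices)
    max_card = max(len(s) for s in simplices)
    return {k: flags([s for s in simplices if len(s) >= k])
            for k in range(1, max_card + 1)}

# One triangle {a, b, c}: 25 flags at k = 1 (the full barycentric subdivision),
# 7 at k = 2 (the three edges and the triangle), and 1 at k = 3.
for k, fl in subdivision_filtration([("a", "b", "c")]).items():
    print(k, len(fl))
```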
Let P {\displaystyle P} be a partially ordered set . Given a simplicial filtration F : P → S i m p {\displaystyle F:P\to \mathbf {Simp} } , regarded as a functor from the posetal category of P {\displaystyle P} to the category S i m p {\displaystyle \mathbf {Simp} } , by applying the subdivision filtration object-wise on F {\displaystyle F} , we obtain a two-parameter filtration S ( F ) : N op × P → S i m p {\displaystyle {\mathcal {S}}(F):\mathbb {N} ^{\operatorname {op} }\times P\to \mathbf {Simp} } , called the subdivision bifiltration. [ 10 ]
In particular, when we take F {\displaystyle F} to be the Rips or Čech filtration, we obtain bifiltrations S Rips ( − ) {\displaystyle {\mathcal {S}}\operatorname {Rips} (-)} and S C ˇ e c h ( − ) {\displaystyle {\mathcal {S}}\operatorname {{\check {C}}ech} (-)} , respectively.
The subdivision-Čech bifiltration is weakly equivalent to the multicover bifiltration, implying that they have isomorphic persistent homology. A combinatorial proof of this statement was given in Sheehy's original conference paper, but a more algebraic version was presented in 2017 by Cavanna et al. [ 11 ] The ideas from Cavanna's proof were later generalized by Blumberg and Lesnick in a 2022 paper on 2-parameter persistent homology. [ 8 ]
By the size of a bifiltration, we mean the number of simplices in the largest complex. The subdivision-Čech bifiltration has exponential size as a function of the number of vertices. [ 12 ] This implies that its homology cannot be directly computed in polynomial time . However, for points in Euclidean space, the homology of subdivision-Čech can be computed in polynomial time, up to weak equivalence , via a construction known as the rhomboid bifiltration. As a precursor to the rhomboid bifiltration, Edelsbrunner and Osang presented in 2021 a polyhedral cell complex called the rhomboid tiling, which they used to compute horizontal or vertical slices of the multicover bifiltration up to weak equivalence. [ 13 ] This was extended a year later by Corbet et al. to the rhomboid bifiltration, which is weakly equivalent to the multicover bifiltration but has polynomial size. [ 12 ]
In algebra, a subfield of an algebra A over a field F is an F - subalgebra that is also a field. A maximal subfield is a subfield that is not contained in a strictly larger subfield of A .
If A is a finite-dimensional central simple algebra , then a subfield E of A is called a strictly maximal subfield if [ E : F ] = ( dim F A ) 1 / 2 {\displaystyle [E:F]=(\dim _{F}A)^{1/2}} .
| https://en.wikipedia.org/wiki/Subfield_of_an_algebra |
A subframe is a structural component of a vehicle, such as an automobile or an aircraft , that uses a discrete, separate structure within a larger body-on-frame or unibody to carry specific components like the powertrain , drivetrain , and suspension . The subframe is typically bolted or welded to the vehicle. When bolted, it often includes rubber bushings or springs to dampen vibrations . [ 1 ] [ 2 ] [ 3 ]
The primary purposes of using a subframe are to distribute high chassis loads over a wide area of relatively thin sheet metal of a monocoque body shell and to isolate vibrations and harshness from the rest of the body. For example, in an automobile with its powertrain contained in a subframe, forces generated by the engine and transmission can be sufficiently damped to prevent disturbing the passengers. Modern vehicles use separate front and rear subframes to reduce overall weight and cost while maintaining structural integrity. Additionally, subframes benefit production by allowing subassemblies to be created and later introduced to the main body shell on an automated line.
There are generally three basic forms of the subframe:
Subframes are typically made of pressed steel panels that are thicker than body shell panels and are welded or spot-welded together. Hydroformed tubes may also be used in some designs.
The revolutionary monocoque, transverse-engined, front-wheel-drive 1959 Austin Mini set the template for modern front-wheel-drive cars by using front and rear subframes to provide accurate road wheel control while maintaining a stiff, lightweight body. The 1961 Jaguar E-Type (XKE) used a tubular space frame–type front subframe to mount the engine, gearbox, and long bonnet/hood to a monocoque "tub" passenger compartment. Beginning with the 1960s, subframes saw regular production with General Motors ' X- and F-platform bodies, and the Astro/Safari mid-size vans.
Subframes are prone to misalignment, which can cause vibration and alignment issues in the suspension and steering components. Misalignment is caused by clearance between the mounting bolts and the mounting holes. Several companies in the automotive aftermarket, including TyrolSport in the US and Spoon Sports in Japan, offer solutions for subframe misalignment and movement issues.
In game theory , a subgame is any part (a subset) of a game that meets the following criteria (the following terms allude to a game described in extensive form ): [ 1 ]
It is a notion used in the solution concept of subgame perfect Nash equilibrium , a refinement of the Nash equilibrium that eliminates non-credible threats .
The key feature of a subgame is that it, when seen in isolation, constitutes a game in its own right. When the initial node of a subgame is reached in a larger game, players can concentrate only on that subgame; they can ignore the history of the rest of the game (provided they know what subgame they are playing ). This is the intuition behind the definition given above of a subgame. It must contain an initial node that is a singleton information set since this is a requirement of a game. Otherwise, it would be unclear where the player with first move should start at the beginning of a game (but see nature's choice ). Even if it is clear in the context of the larger game which node of a non-singleton information set has been reached, players could not ignore the history of the larger game once they reached the initial node of a subgame if subgames cut across information sets. Furthermore, a subgame can be treated as a game in its own right, but it must reflect the strategies available to players in the larger game of which it is a subset. This is the reasoning behind 2 and 3 of the definition. All the strategies (or subsets of strategies) available to a player at a node in a game must be available to that player in the subgame the initial node of which is that node.
One of the principal uses of the notion of a subgame is in the solution concept subgame perfection, which stipulates that an equilibrium strategy profile be a Nash equilibrium in every subgame .
In a Nash equilibrium, there is some sense in which the outcome is optimal - every player is playing a best response to the other players. However, in some dynamic games this can yield implausible equilibria. Consider a two-player game in which player 1 has a strategy S to which player 2 can play B as a best response. Suppose also that S is a best response to B. Hence, {S,B} is a Nash equilibrium. Let there be another Nash equilibrium {S',B'}, the outcome of which player 1 prefers and B' is the only best response to S'. In a dynamic game, the first Nash equilibrium is implausible (if player 1 moves first) because player 1 will play S', forcing the response (say) B' from player 2 and thereby attaining the second equilibrium (regardless of the preferences of player 2 over the equilibria). The first equilibrium is subgame imperfect because B does not constitute a best response to S' once S' has been played, i.e. in the subgame reached by player 1 playing S', B is not optimal for player 2.
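This reasoning is exactly what backward induction formalizes. The Python sketch below solves a toy perfect-information game with the payoff structure described above (payoffs chosen purely for illustration) and recovers the subgame perfect equilibrium {S', B'}:

```python
def backward_induction(node):
    """Solve a finite perfect-information game tree by backward induction.

    Decision nodes are dicts {"player", "label", "actions"}; leaves are
    payoff tuples. Returns (payoffs, strategy) where strategy maps each
    decision node's label to the action chosen there.
    """
    if isinstance(node, tuple):                  # terminal node: payoffs (p1, p2)
        return node, {}
    best_action, best_payoffs, strategy = None, None, {}
    for action, child in node["actions"].items():
        payoffs, sub = backward_induction(child)
        strategy.update(sub)
        if best_payoffs is None or payoffs[node["player"]] > best_payoffs[node["player"]]:
            best_action, best_payoffs = action, payoffs
    strategy[node["label"]] = best_action
    return best_payoffs, strategy

# Player 1 moves first (S or S'); player 2 replies. {S, B} is Nash (B is
# never tested after S'), but only {S', B'} survives backward induction.
game = {"player": 0, "label": "P1", "actions": {
    "S":  {"player": 1, "label": "P2 after S",
           "actions": {"B": (2, 2), "B'": (0, 0)}},
    "S'": {"player": 1, "label": "P2 after S'",
           "actions": {"B": (0, 0), "B'": (3, 1)}},
}}
print(backward_induction(game))
# -> ((3, 1), {'P2 after S': 'B', "P2 after S'": "B'", 'P1': "S'"})
```

Here B is player 2's best response on the path after S, so {S, B} is a Nash equilibrium, but it fails subgame perfection because B is not optimal in the subgame after S'.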
If not all strategies at a particular node were available in a subgame containing that node, the subgame would be unhelpful for subgame perfection. One could trivially call an equilibrium subgame perfect by ignoring playable strategies to which a strategy was not a best response. Furthermore, if subgames cut across information sets, then a Nash equilibrium in a subgame might suppose a player had information in that subgame that he did not have in the larger game. | https://en.wikipedia.org/wiki/Subgame |
In metallurgy , materials science and structural geology , subgrain rotation recrystallization is recognized as an important mechanism for dynamic recrystallisation . It involves the rotation of initially low-angle sub-grain boundaries until the mismatch between the crystal lattices across the boundary is sufficient for them to be regarded as grain boundaries . [ 1 ] [ 2 ] This mechanism has been recognized in many minerals (including quartz , calcite , olivine , pyroxenes , micas , feldspars , halite , garnets and zircons ) and in metals (various magnesium , aluminium and nickel alloys ). [ 3 ] [ 4 ] [ 5 ]
In metals and minerals, grains are ordered structures in different crystal orientations. Subgrains are defined as grains whose boundaries are misoriented by less than about 10–15 degrees relative to the neighbouring grain, making the boundary a low-angle grain boundary (LAGB). Due to the relationship between the energy and the number of dislocations at the grain boundary, there is a driving force for fewer high-angle grain boundaries (HAGB) to form and grow in place of a larger number of LAGBs. The energetics of the transformation depend on the interfacial energy at the boundaries, the lattice geometry (atomic and planar spacing; structure, i.e. FCC / BCC / HCP ) of the material, and the degrees of freedom of the grains involved ( misorientation , inclination). The recrystallized material has less total grain boundary area, which means that failure via brittle fracture along the grain boundary is less probable.
Subgrain rotation recrystallization is a type of continuous dynamic recrystallization . Continuous dynamic recrystallization involves the evolution of low-angle grains into high-angle grains, increasing their degree of misorientation. [ 6 ] One mechanism could be the migration and agglomeration of like-sign dislocations in the LAGB, followed by grain boundary shearing. [ 7 ] The transformation occurs when the subgrain boundaries contain small precipitates, which pin them in place. As the subgrain boundaries absorb dislocations, the subgrains transform into grains by rotation, instead of growth. This process generally occurs at elevated temperatures, which allows dislocations to both glide and climb; at low temperatures, dislocation movement is more difficult and the grains are less mobile. [ 8 ]
By contrast, discontinuous dynamic recrystallization involves nucleation and growth of new grains, where due to increased temperature and/or pressure, new grains grow at high angles compared to the surrounding grains.
Grain strength generally follows the Hall–Petch relation , which states that yield strength increases with the inverse square root of the grain size. A higher number of smaller subgrains leads to a higher yield stress , and so some materials may be purposefully manufactured to have many subgrains, in which case subgrain rotation recrystallization should be avoided.
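The Hall–Petch relation is commonly written as sigma_y = sigma_0 + k_y / sqrt(d). A minimal Python sketch of this arithmetic follows; the constants sigma_0 and k_y are illustrative placeholders, not values for any specific alloy.

```python
import math

# Friction stress and strengthening coefficient: illustrative assumptions only.
SIGMA_0 = 25.0   # MPa
K_Y = 0.12       # MPa * m^0.5

def yield_stress(d_grain_m):
    """Hall-Petch relation: sigma_y = sigma_0 + k_y / sqrt(d)."""
    return SIGMA_0 + K_Y / math.sqrt(d_grain_m)

# Smaller grains (or subgrains) give a higher yield stress:
for d in (100e-6, 50e-6, 10e-6):  # grain sizes in metres
    print(f"d = {d * 1e6:5.1f} um -> sigma_y = {yield_stress(d):5.1f} MPa")
```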
Precipitates may also form in grain boundaries. It has been observed that precipitates in subgrain boundaries grow in a more elongated shape parallel to the adjacent grains, whereas precipitates in HAGB are blockier. This difference in aspect ratio may provide different strengthening effects to the material; long plate-like precipitates in the LAGB may delaminate and cause brittle failure under stress. Subgrain rotation recrystallization reduces the number of LAGB, thus reducing the number of flat, long precipitates, and also reducing the number of available pathways for this brittle failure.
Different grains and their orientations can be observed using scanning electron microscope (SEM) techniques such as electron backscatter diffraction (EBSD) or polarized optical microscopy (POM). Samples are initially cold- or hot-rolled to introduce a high degree of dislocation density, and then deformed at different strain rates so that dynamic recrystallization occurs. The deformation may be in the form of compression, tension, or torsion. [ 6 ] The grains elongate in the direction of applied stress and the misorientation angle of subgrain boundaries increases. [ 8 ] | https://en.wikipedia.org/wiki/Subgrain_rotation_recrystallization |
In abstract algebra , every subgroup of a cyclic group is cyclic. Moreover, for a finite cyclic group of order n , every subgroup's order is a divisor of n , and there is exactly one subgroup for each divisor. [ 1 ] [ 2 ] This result has been called the fundamental theorem of cyclic groups . [ 3 ] [ 4 ]
For every finite group G of order n , the following statements are equivalent: (1) G is cyclic; (2) for every divisor d of n , G has at most one subgroup of order d .
If either (and thus both) of these statements hold, it follows that there exists exactly one subgroup of order d for every divisor d of n .
This statement is known by various names such as characterization by subgroups . [ 5 ] [ 6 ] [ 7 ] (See also cyclic group for some characterization.)
There exist finite groups other than cyclic groups with the property that all proper subgroups are cyclic; the Klein group is an example. However, the Klein group has more than one subgroup of order 2, so it does not meet the conditions of the characterization.
The infinite cyclic group is isomorphic to the additive subgroup Z of the integers. There is one subgroup d Z for each integer d (consisting of the multiples of d ), and with the exception of the trivial group (generated by d = 0) every such subgroup is itself an infinite cyclic group. Because the infinite cyclic group is a free group on one generator (and the trivial group is a free group on no generators), this result can be seen as a special case of the Nielsen–Schreier theorem that every subgroup of a free group is itself free. [ 8 ]
The fundamental theorem for finite cyclic groups can be established from the same theorem for the infinite cyclic groups, by viewing each finite cyclic group as a quotient group of the infinite cyclic group. [ 8 ]
In both the finite and the infinite case, the lattice of subgroups of a cyclic group is isomorphic to the dual of a divisibility lattice. In the finite case, the lattice of subgroups of a cyclic group of order n is isomorphic to the dual of the lattice of divisors of n , with a subgroup of order n / d for each divisor d . The subgroup of order n / d is a subgroup of the subgroup of order n / e if and only if e is a divisor of d . The lattice of subgroups of the infinite cyclic group can be described in the same way, as the dual of the divisibility lattice of all positive integers. If the infinite cyclic group is represented as the additive group on the integers, then the subgroup generated by d is a subgroup of the subgroup generated by e if and only if e is a divisor of d . [ 8 ]
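A short Python sketch that makes the two preceding paragraphs concrete for Z_12: it constructs the unique subgroup for each divisor and checks that containment between subgroups mirrors reversed divisibility. The function names are ours, chosen for the illustration.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def subgroup_of_order(n, d):
    """The unique subgroup of order d in the cyclic group Z_n, generated by n // d."""
    g = n // d
    return frozenset((g * k) % n for k in range(d))

n = 12
subs = {d: subgroup_of_order(n, d) for d in divisors(n)}
for d in divisors(n):
    print(f"order {d:2d}: generated by {n // d:2d} -> {sorted(subs[d])}")

# Containment mirrors reversed divisibility: the subgroup of order n/d lies
# inside the subgroup of order n/e exactly when e divides d.
for d in divisors(n):
    for e in divisors(n):
        assert (subs[n // d] <= subs[n // e]) == (d % e == 0)
```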
Divisibility lattices are distributive lattices , and therefore so are the lattices of subgroups of cyclic groups. This provides an alternative characterization of the finite cyclic groups: they are exactly the finite groups whose lattices of subgroups are distributive. More generally, a finitely generated group is cyclic if and only if its lattice of subgroups is distributive, and an arbitrary group is locally cyclic if and only if its lattice of subgroups is distributive. [ 9 ] The additive group of the rational numbers provides an example of a group that is locally cyclic, and that has a distributive lattice of subgroups, but that is not itself cyclic. | https://en.wikipedia.org/wiki/Subgroups_of_cyclic_groups |
In chemistry, subhalide usually refers to inorganic compounds that have a low ratio of halide to metal, made possible by metal–metal bonding (or element–element bonding for nonmetals), sometimes extensive. Many compounds meet this definition. [ citation needed ]
The normal halide of boron is BF 3 . Boron forms many subhalides: several B 2 X 4 , including B 2 F 4 ; also BF . Aluminium forms a variety of subhalides. For gallium, adducts of Ga 2 Cl 4 are known. Phosphorus subhalides include P 2 I 4 , P 4 Cl 2 , and P 7 Cl 3 (structurally related to [P 7 ] 3− ). For bismuth, the compound originally described as bismuth monochloride was later shown to consist of [Bi 9 ] 5+ clusters and chloride anions. [ 1 ] There are many tellurium subhalides, including Te 3 Cl 2 , Te 2 X (X = Cl, Br, I), and two forms of TeI . [ 2 ] | https://en.wikipedia.org/wiki/Subhalide |
Subitizing is the rapid, accurate, and effortless ability to perceive small quantities of items in a set , typically when there are four or fewer items, without relying on linguistic or arithmetic processes. The term refers to the sensation of instantly knowing how many objects are in the visual scene when their number falls within the subitizing range. [ 1 ]
Sets larger than about four to five items cannot be subitized unless the items appear in a pattern with which the person is familiar (such as the six dots on one face of a die). Large, unfamiliar sets must be counted one-by-one (or the person might calculate the number through a rapid calculation if they can mentally group the elements into a few small sets). A person could also estimate the number of a large set—a skill similar to, but different from, subitizing. The term subitizing was coined in 1949 by E. L. Kaufman et al., [ 1 ] and is derived from the Latin adjective subitus (meaning "sudden").
The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid, [ 2 ] accurate, [ 3 ] and confident. [ 4 ] However, once there are more than four items to count, judgments are made with decreasing accuracy and confidence. [ 1 ] In addition, response times rise in a dramatic fashion, with an extra 250–350 ms added for each additional item within the display beyond about four. [ 5 ]
While the increase in response time for each additional element within a display is 250–350 ms per item outside the subitizing range, there is still a significant, albeit smaller, increase of 40–100 ms per item within the subitizing range. [ 2 ] A similar pattern of reaction times is found in young children, although with steeper slopes for both the subitizing range and the enumeration range. [ 6 ] This suggests there is no span of apprehension as such, if this is defined as the number of items which can be immediately apprehended by cognitive processes, since there is an extra cost associated with each additional item enumerated. However, the relative differences in costs associated with enumerating items within the subitizing range are small, whether measured in terms of accuracy, confidence, or speed of response . Furthermore, the values of all measures appear to differ markedly inside and outside the subitizing range. [ 1 ] So, while there may be no span of apprehension, there appear to be real differences in the ways in which a small number of elements is processed by the visual system (i.e. approximately four or fewer items), compared with larger numbers of elements (i.e. approximately more than four items).
A 2006 study demonstrated that subitizing and counting are not restricted to visual perception, but also extend to tactile perception, when observers had to name the number of stimulated fingertips. [ 7 ] A 2008 study also demonstrated subitizing and counting in auditory perception. [ 8 ] Even though the existence of subitizing in tactile perception has been questioned, [ 9 ] this effect has been replicated many times and can be therefore considered as robust. [ 10 ] [ 11 ] [ 12 ] The subitizing effect has also been obtained in tactile perception with congenitally blind adults. [ 13 ] Together, these findings support the idea that subitizing is a general perceptual mechanism extending to auditory and tactile processing.
As the derivation of the term "subitizing" suggests, the feeling associated with making a number judgment within the subitizing range is one of immediately being aware of the displayed elements. [ 3 ] When the number of objects presented exceeds the subitizing range, this feeling is lost, and observers commonly report an impression of shifting their viewpoint around the display, until all the elements presented have been counted. [ 1 ] The ability of observers to count the number of items within a display can be limited, either by the rapid presentation and subsequent masking of items, [ 14 ] or by requiring observers to respond quickly. [ 1 ] Both procedures have little, if any, effect on enumeration within the subitizing range. These techniques may restrict the ability of observers to count items by limiting the degree to which observers can shift their "zone of attention" [ 15 ] successively to different elements within the display.
Atkinson, Campbell, and Francis [ 16 ] demonstrated that visual afterimages could be employed in order to achieve similar results. Using a flashgun to illuminate a line of white disks, they were able to generate intense afterimages in dark-adapted observers. Observers were required to verbally report how many disks had been presented, both at 10 s and at 60 s after the flashgun exposure. Observers reported being able to see all the disks presented for at least 10 s, and being able to perceive at least some of the disks after 60 s. Unlike images simply displayed for 10- or 60-second intervals, afterimages cannot be counted with the help of eye movements: when the subjects move their eyes, the images move with them. Despite having a long period of time to enumerate the disks when their number fell outside the subitizing range (i.e., 5–12 disks), observers made consistent enumeration errors in both the 10 s and 60 s conditions. In contrast, no errors occurred within the subitizing range (i.e., 1–4 disks), in either the 10 s or 60 s conditions. [ 17 ]
The work on the enumeration of afterimages [ 16 ] [ 17 ] supports the view that different cognitive processes operate for the enumeration of elements inside and outside the subitizing range, and as such raises the possibility that subitizing and counting involve different brain circuits. However, functional imaging research has been interpreted both to support different [ 18 ] and shared processes. [ 19 ]
Evidence supporting the view that subitizing and counting may involve functionally and anatomically distinct brain areas comes from patients with simultanagnosia , one of the key components of Bálint's syndrome . [ 20 ] Patients with this disorder suffer from an inability to perceive visual scenes properly, being unable to localize objects in space, either by looking at the objects, pointing to them, or by verbally reporting their position. [ 20 ] Despite these dramatic symptoms, such patients are able to correctly recognize individual objects. [ 21 ] Crucially, people with simultanagnosia are unable to enumerate objects outside the subitizing range, either failing to count certain objects, or alternatively counting the same object several times. [ 22 ]
However, people with simultanagnosia have no difficulty enumerating objects within the subitizing range. [ 23 ] The disorder is associated with bilateral damage to the parietal lobe , an area of the brain linked with spatial shifts of attention. [ 18 ] These neuropsychological results are consistent with the view that the process of counting, but not that of subitizing, requires active shifts of attention. However, recent research has questioned this conclusion by finding that attention also affects subitizing. [ 24 ]
A further source of research on the neural processes of subitizing compared to counting comes from positron emission tomography (PET) research on normal observers. Such research compares the brain activity associated with enumeration processes inside (i.e., 1–4 items) for subitizing, and outside (i.e., 5–8 items) for counting. [ 18 ] [ 19 ]
Such research finds that within the subitizing and counting range activation occurs bilaterally in the occipital extrastriate cortex and superior parietal lobe/intraparietal sulcus. This has been interpreted as evidence that shared processes are involved. [ 19 ] However, the existence of further activations during counting in the right inferior frontal regions, and the anterior cingulate have been interpreted as suggesting the existence of distinct processes during counting related to the activation of regions involved in the shifting of attention. [ 18 ]
Historically, many systems have attempted to use subitizing to identify full or partial quantities. In the twentieth century, mathematics educators started to adopt some of these systems, as reviewed in the examples below, but often switched to more abstract color-coding to represent quantities up to ten.
In the 1990s, babies three weeks old were shown to differentiate between 1–3 objects, that is, to subitize. [ 22 ] A more recent meta-study summarizing five different studies concluded that infants are born with an innate ability to differentiate quantities within a small range, which increases over time. [ 25 ] By the age of seven that ability increases to 4–7 objects. Some practitioners claim that with training, children are capable of subitizing 15+ objects correctly. [ citation needed ]
The hypothesized use of yupana , an Inca counting system, placed up to five counters in connected trays for calculations.
In each place value, the Chinese abacus uses four or five beads to represent units, which are subitized, and one or two separate beads, which symbolize fives. This allows multi-digit operations such as carrying and borrowing to occur without subitizing beyond five.
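A toy Python sketch of the underlying decomposition, assuming the simplified 1:4 layout (one five-bead and four unit beads per column); the 2:5 suanpan described above carries spare beads in each deck, but encodes the digits 0–9 the same way.

```python
def abacus_column(digit):
    """Decompose a decimal digit into (five-beads, unit-beads), as on a 1:4 abacus.

    Each group stays at five or below, so it can be subitized rather than counted.
    """
    if not 0 <= digit <= 9:
        raise ValueError("a single decimal digit is required")
    return divmod(digit, 5)

for d in range(10):
    fives, units = abacus_column(d)
    print(f"{d}: {fives} five-bead(s) + {units} unit bead(s)")
```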
European abacuses use ten beads in each register, but usually separate them into fives by color.
The idea of instant recognition of quantities has been adopted by several pedagogical systems, such as Montessori , Cuisenaire and Dienes . However, these systems only partially use subitizing, attempting to make all quantities from 1 to 10 instantly recognizable. To achieve it, they code quantities by color and length of rods or bead strings representing them. Recognizing such visual or tactile representations and associating quantities with them involves different mental operations from subitizing.
One of the most basic applications is in digit grouping in large numbers, which allows one to tell the size at a glance, rather than having to count. For example, writing one million (1000000) as 1,000,000 (or 1.000.000 or 1 000 000 ) or one ( short ) billion (1000000000) as 1,000,000,000 (or other forms, such as 1,00,00,00,000 in the Indian numbering system ) makes it much easier to read. This is particularly important in accounting and finance, as an error of a single decimal digit changes the amount by a factor of ten. This is also found in computer programming languages for literal values, some of which use digit separators .
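For instance, many programming languages let the programmer group digits both in source literals and in formatted output; a small Python illustration:

```python
# Python allows underscores as digit separators in numeric literals (since 3.6)
one_million = 1_000_000
one_billion = 1_000_000_000

# The "," format specifier groups digits for human readers
print(f"{one_million:,}")    # 1,000,000
print(f"{one_billion:,}")    # 1,000,000,000

# The ungrouped form is easy to misread by an order of magnitude
print(one_billion)           # 1000000000
```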
Dice , playing cards and other gaming devices traditionally split quantities into subitizable groups with recognizable patterns. The behavioural advantage of this grouping method has been scientifically investigated by Ciccione and Dehaene , [ 26 ] who showed that counting performance is improved if the groups contain the same number of items and follow the same repeated pattern.
A comparable application is to split up binary and hexadecimal number representations, telephone numbers, bank account numbers (e.g., IBAN , social security numbers, number plates, etc.) into groups ranging from 2 to 5 digits separated by spaces, dots, dashes, or other separators. This makes it easier to check a number for completeness when comparing or retyping it. This practice of grouping characters also supports easier memorization of large numbers and character structures.
There is at least one game that can be played online to self assess one's ability to subitize. [ 27 ] | https://en.wikipedia.org/wiki/Subitizing |
Subjective expected relative similarity (SERS) is a normative and descriptive theory that predicts and explains cooperation levels in a family of games termed Similarity Sensitive Games (SSG) , among them the well-known Prisoner's Dilemma game (PD). [ 1 ] SERS was originally developed in order to (i) provide a new rational solution to the PD game and (ii) to predict human behavior in single-step PD games. It was further developed to account for: (i) repeated PD games, (ii) evolutionary perspectives and, as mentioned above, (iii) the SSG subgroup of 2×2 games.
SERS predicts that individuals cooperate whenever their subjectively perceived similarity with their opponent exceeds a situational index derived from the game's payoffs, termed the similarity threshold of the game. SERS proposes a solution to the rational paradox associated with the single step PD and provides accurate behavioral predictions. The theory was developed by Prof. Ilan Fischer at the University of Haifa . [ citation needed ]
The dilemma is described by a 2 × 2 payoff matrix that allows each player to choose between a cooperative and a competitive (or defective) move. If both players cooperate, each player obtains the reward (R) payoff. If both defect, each player obtains the punishment (P) payoff. However, if one player defects while the other cooperates, the defector obtains the temptation (T) payoff and the cooperator obtains the sucker's (S) payoff, where T > R > P > S {\displaystyle T>R>P>S} (and R ≥ T + S 2 {\textstyle R\geq {\frac {T+S}{2}}} , ensuring that sharing the payoffs awarded for uncoordinated choices does not exceed the payoff obtained by mutual cooperation).
Given the payoff structure of the game (see Table 1), each individual player has a dominant strategy of defection. This dominant strategy yields a better payoff regardless of the opponent's choice. By choosing to defect, players protect themselves from exploitation and retain the option to exploit a trusting opponent. Because this is the case for both players, mutual defection is the only Nash equilibrium of the game. However, this is a deficient equilibrium (since mutual cooperation results in a better payoff for both players). [ 2 ]
The PD game payoff matrix (Table 1), with the row player's payoff listed first: if both players cooperate, the payoffs are (R, R); if both defect, (P, P); if the row player cooperates while the column player defects, (S, T); and if the row player defects while the column player cooperates, (T, S).
Players that knowingly interact for several games (where the end point of the game is unknown), thus playing a repeated Prisoner's Dilemma game, may still be motivated to cooperate with their opponent while attempting to maximise their payoffs along the entire set of their repeated games. Such players face a different challenge of choosing an efficient and lucrative strategy for the repeated play. This challenge may become more complex when individuals are embedded in an ecology, having to face many opponents with various and unknown strategies. [ 3 ] [ 4 ] [ 5 ]
SERS assumes that the similarity between the players is subjectively and individually perceived (denoted as p s {\displaystyle p_{s}} , where 0 ≤ p s ≤ 1 {\displaystyle 0\leq p_{s}\leq 1} ). Two players confronting each other may have either identical or different perceptions of their similarity to their opponent. In other words, similarity perceptions need neither be symmetric nor correspond to formal logic constraints. After perceiving p s {\displaystyle p_{s}} , each player chooses between cooperation and defection, attempting to maximize the expected outcome. This means that each player estimates his or her expected payoffs under each of two possible courses of action. The expected value of cooperation is given by R ⋅ p s + S ⋅ ( 1 − p s ) {\displaystyle R\cdot p_{s}+S\cdot (1-p_{s})} and the expected payoff of defection is given by P ⋅ p s + T ⋅ ( 1 − p s ) {\displaystyle P\cdot p_{s}+T\cdot (1-p_{s})} . Hence, cooperation provides a higher expected payoff whenever R ⋅ p s + S ⋅ ( 1 − p s ) > P ⋅ p s + T ⋅ ( 1 − p s ) {\displaystyle R\cdot p_{s}+S\cdot (1-p_{s})>P\cdot p_{s}+T\cdot (1-p_{s})} which may also be expressed further as:
Cooperate if p s > T − S T − S + R − P {\textstyle p_{s}>{\frac {T-S}{T-S+R-P}}} . Defining p s ∗ = T − S T − S + R − P {\textstyle p_{s}^{*}={\frac {T-S}{T-S+R-P}}} , we obtain a simple decision rule : cooperate whenever p s > p s ∗ {\displaystyle p_{s}>p_{s}^{*}} , where p s {\displaystyle p_{s}} denotes the level of perceived similarity with the opponent, and p s ∗ {\displaystyle p_{s}^{*}} denotes the similarity threshold derived from the payoff matrix.
To illustrate, consider a PD payoff matrix with T = 5 , R = 3 , P = 1 , S = 0 {\displaystyle T=5,R=3,P=1,S=0} . The similarity threshold calculated for the game is given by: p s ∗ = 5 − 0 5 − 0 + 3 − 1 ≈ 0.71 {\displaystyle p_{s}^{*}={\frac {5-0}{5-0+3-1}}\approx 0.71} . Thus a player who perceives the similarity with the opponent, p s {\displaystyle p_{s}} , to exceed 0.71 should cooperate in order to maximise the expected payoff.
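A minimal Python sketch of this decision rule, reproducing the worked example above (the function names are ours):

```python
def similarity_threshold(T, R, P, S):
    """Similarity threshold p_s* = (T - S) / (T - S + R - P) for a PD-type game."""
    return (T - S) / (T - S + R - P)

def sers_choice(p_s, T, R, P, S):
    """Cooperate whenever the perceived similarity exceeds the game's threshold."""
    return "cooperate" if p_s > similarity_threshold(T, R, P, S) else "defect"

# Worked example from the text: T=5, R=3, P=1, S=0 gives p_s* = 5/7 ~ 0.71
T, R, P, S = 5, 3, 1, 0
print(round(similarity_threshold(T, R, P, S), 2))  # 0.71
print(sers_choice(0.8, T, R, P, S))  # cooperate
print(sers_choice(0.6, T, R, P, S))  # defect
```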
Several experiments were conducted to test whether SERS provides not only a normative but also a descriptive theory of human behaviour. For example, an experiment involving 215 university undergraduates revealed an average cooperation rate of 30% for a payoff matrix with p s ∗ = 0.8 {\displaystyle p_{s}^{*}=0.8} and an average cooperation rate of 46% for a payoff matrix with p s ∗ = 0.63 {\displaystyle p_{s}^{*}=0.63} .
Participants cooperated 47% of the time under a high level of induced similarity and only 29% under a low level of induced similarity.
Manipulating the perceived similarity with the opponent increased cooperation from 67% to 80% for the lower similarity threshold and from 40% to 70% for the higher similarity threshold. Other experiments with various similarity induction methods and payoff matrices further confirmed SERS's status as a descriptive theory of human behaviour. [ 1 ] [ 2 ]
Experiments on the impact of SERS on repeated games are presently being conducted and analysed at the University of Haifa and the Max Planck Institute for Research on Collective Goods in Bonn. [ citation needed ]
The PD game is not the only similarity sensitive game. Games for which the choice of the action with the higher expected value depends on the value of p s {\displaystyle p_{s}} are defined as Similarity Sensitive Games (SSGs), whereas all others are non-similarity-sensitive. Among the 24 completely rank-ordered and symmetric games, 12 are SSGs. After eliminating games that reflect permutations of other games generated either by switching rows, columns, or both rows and columns, we are left with six basic (completely rank-ordered and symmetric) SSGs.
These are games for which SERS provides a rational and payoff-maximizing strategy that recommends which alternative to choose for any given perception of similarity with the opponent. [ 1 ]
Developing the SERS theory into an evolutionary strategy yields the Mimicry and Relative Similarity (MaRS) algorithm.
Fusing enacted and expected mimicry generates a powerful and cooperative mechanism that enhances fitness and reduces the risks associated with trust and cooperation. When conflicts take the form of repeated PD games, individuals get the opportunity to learn and monitor the extent of similarity with their opponents. They can then react by choosing whether to enact, expect, or exclude mimicry. This rather simple behavior has the capacity to protect individuals from exploitation and drive the evolution of cooperation within entire populations. MaRS paves the way for the induction of cooperation and supports the survival of other cooperative strategies. The existence of MaRS in heterogeneous populations helps those cooperative strategies that do not have the capacity of MaRS to combat hostile and random opponents. Despite the fact that MaRS cannot prevail in a duel with an unconditional defector, interacting within heterogeneous populations allows MaRS to fight unpredictable and hostile strategies and cooperate with cooperative ones, including itself. The operation of MaRS promotes cooperation, minimizes the extent of exploitation, and accounts for high fitness levels.
Testing the model in computer simulations of behavioral niches, populated with agents that enact various strategies and learning algorithms, shows how mimicry and relative similarity outperforms all the opponent strategies it was tested against, pushes noncooperative opponents toward extinction, and promotes the development of cooperative populations. [ 6 ] | https://en.wikipedia.org/wiki/Subjective_expected_relative_similarity |
In decision theory , subjective expected utility is the attractiveness of an economic opportunity as perceived by a decision-maker in the presence of risk . Characterizing the behavior of decision-makers as using subjective expected utility was promoted and axiomatized by L. J. Savage in 1954 [ 1 ] [ 2 ] following previous work by Ramsey and von Neumann . [ 3 ] The theory of subjective expected utility combines two subjective concepts: first, a personal utility function, and second a personal probability distribution (usually based on Bayesian probability theory).
Savage proved that, if the decision-maker adheres to axioms of rationality, believing an uncertain event has possible outcomes { x i } {\displaystyle \{x_{i}\}} each with a utility of u ( x i ) , {\displaystyle u(x_{i}),} then the person's choices can be explained as arising from this utility function combined with the subjective belief that there is a probability of each outcome, P ( x i ) . {\displaystyle P(x_{i}).} The subjective expected utility is the resulting expected value of the utility, E [ u ( X ) ] = ∑ i u ( x i ) P ( x i ) . {\displaystyle E[u(X)]=\sum _{i}u(x_{i})\;P(x_{i}).}
If instead of choosing { x i } {\displaystyle \{x_{i}\}} the person were to choose { y j } , {\displaystyle \{y_{j}\},} the person's subjective expected utility would be ∑ j u ( y j ) P ( y j ) . {\displaystyle \sum _{j}u(y_{j})\;P(y_{j}).}
Which decision the person prefers depends on which subjective expected utility is higher. Different people may make different decisions because they may have different utility functions or different beliefs about the probabilities of different outcomes.
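A minimal Python sketch of this comparison; the concave utility function and the two acts below are illustrative assumptions, not part of Savage's theory:

```python
def seu(lottery, utility):
    """Subjective expected utility of an act given as an {outcome: probability} map."""
    return sum(p * utility(x) for x, p in lottery.items())

def utility(x):
    # An illustrative concave (risk-averse) utility function, assumed for the example.
    return x ** 0.5

act_x = {100: 0.5, 0: 0.5}  # a risky act, with the decision-maker's subjective probabilities
act_y = {49: 1.0}           # a sure thing

print(seu(act_x, utility))  # 5.0
print(seu(act_y, utility))  # 7.0 -> this decision-maker prefers act_y
```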
Savage assumed that it is possible to take convex combinations of decisions and that preferences would be preserved. So if a person prefers x ( = { x i } ) {\displaystyle x(=\{x_{i}\})} to y ( = { y i } ) {\displaystyle y(=\{y_{i}\})} and s ( = { s i } ) {\displaystyle s(=\{s_{i}\})} to t ( = { t i } ) {\displaystyle t(=\{t_{i}\})} then that person will prefer λ x + ( 1 − λ ) s {\displaystyle \lambda x+(1-\lambda )s} to λ y + ( 1 − λ ) t {\displaystyle \lambda y+(1-\lambda )t} , for any 0 < λ < 1 {\displaystyle 0<\lambda <1} .
Experiments have shown that many individuals do not behave in a manner consistent with Savage's axioms of subjective expected utility, e.g. most prominently Allais (1953) [ 4 ] and Ellsberg (1961). [ 5 ] | https://en.wikipedia.org/wiki/Subjective_expected_utility |
The sublimation sandwich method (also called the sublimation sandwich process and the sublimation sandwich technique ) is a kind of physical vapor deposition used for creating man-made crystals. Silicon carbide is the most common crystal grown this way, though other crystals may also be created with it (notably gallium nitride ).
In this method, the environment around a single crystal or a polycrystalline plate is filled with vapor heated to between 1600 °C and 2100 °C. Changes to this environment can affect the gas phase stoichiometry . The source-to-crystal distance is kept very small, between 0.02 mm and 0.03 mm. Parameters that can affect crystal growth include source-to-substrate distance, temperature gradient, and the presence of tantalum for gathering excess carbon . High growth rates are the result of small source-to-seed distances combined with a large heat flux onto a small amount of source material, with no more than a moderate temperature difference between the substrate and the source (0.5–10 °C). The growth of large boules , however, remains quite difficult using this method, and it is better suited to the creation of epitaxial films with uniform polytype structures. [ 1 ] Ultimately, samples with a thickness of up to 500 μm can be produced using this method. [ 2 ] | https://en.wikipedia.org/wiki/Sublimation_sandwich_method |
A sublimatory [ 1 ] [ 2 ] or sublimation apparatus is equipment, commonly laboratory glassware , for purification of compounds by selective sublimation . In principle, the operation resembles purification by distillation , except that the products do not pass through a liquid phase .
A typical sublimation apparatus separates a mix of appropriate solid materials in a vessel in which it applies heat under a controllable atmosphere (air, vacuum or inert gas). If the material is not at first solid, then it may freeze under reduced pressure . Conditions are so chosen that the solid volatilizes and condenses as a purified compound on a cooled surface, leaving the non- volatile residual impurities or solid products behind.
The form of the cooled surface often is a so-called cold finger which for very low-temperature sublimation may actually be cryogenically cooled. If the operation is a batch process , then the sublimed material can be collected from the cooled surface once heating ceases and the vacuum is released. Although this may be quite convenient for small quantities, adapting sublimation processes to large volume is generally not practical with the apparatus becoming extremely large and generally needing to be disassembled to recover products and remove residue.
Among the advantages of applying the principle to certain materials are the comparatively low working temperatures, reduced exposure to gases such as oxygen that might harm certain products, and the ease with which it can be performed on extremely small quantities. [ 3 ] The same apparatus may also be used for conventional distillation of extremely small quantities due to the very small volume and surface area between evaporating and condensing regions, although this is generally only useful if the cold finger can be cold enough to solidify the condensate.
More sophisticated variants of sublimation apparatus include those that apply a temperature gradient so as to allow for controlled recrystallization of different fractions along the cold surface. Thermodynamic processes follow a statistical distribution , and suitably designed apparatus exploit this principle with a gradient that will yield different purities in particular temperature zones along the collection surface. Such techniques are especially helpful when the requirement is to refine or separate multiple products or impurities from the same mix of raw materials. It is necessary in particular when some of the required products have similar sublimation points or pressure curves . [ 3 ] | https://en.wikipedia.org/wiki/Sublimatory |
Depth ratings are primary design parameters and measures of a submarine 's ability to operate underwater. The depths to which submarines can dive are limited by the strengths of their hulls .
The hull of a submarine must be able to withstand the forces created by the outside water pressure being greater than the inside air pressure. The outside water pressure increases with depth and so the stresses on the hull also increase with depth. Each 10 metres (33 ft) of depth puts another atmosphere (1 bar, 14.7 psi, 101 kPa) of pressure on the hull, so at 300 metres (1,000 ft), the hull is withstanding thirty standard atmospheres (30 bar; 440 psi; 3,000 kPa) of water pressure.
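A quick Python sketch of this hydrostatic arithmetic, using the rule of thumb from the text of roughly one atmosphere per 10 m of seawater:

```python
def hull_pressure(depth_m):
    """Approximate gauge pressure on the hull: one atmosphere per 10 m of seawater."""
    atm = depth_m / 10.0
    return atm, atm * 101.325  # (atmospheres, kPa)

for depth in (100, 300, 600):
    atm, kpa = hull_pressure(depth)
    print(f"{depth:4d} m -> {atm:5.1f} atm ({kpa:7.1f} kPa)")  # 300 m -> 30 atm
```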
This is the maximum depth at which a submarine is permitted to operate under normal peacetime circumstances, and is tested during sea trials . The test depth is set at two-thirds (0.66) of the design depth for United States Navy submarines, while the Royal Navy sets test depth at 4/7 (0.57) the design depth, and the German Navy sets it at exactly one-half (0.50) of design depth. [ 1 ]
Also known as the maximum operating depth (or the never-exceed depth ), this is the maximum depth at which a submarine is allowed to operate under any ( e.g. battle) conditions.
The nominal depth listed in the submarine's specifications. From it the designers calculate the thickness of the hull metal, the boat's displacement , and many other related factors.
Sometimes referred to as the " collapse depth " in the United States, [ 2 ] [ citation needed ] this is the submerged depth at which the submarine implodes due to water pressure. Technically speaking, the crush depth should be the same as the design depth, but in practice is usually somewhat deeper. This is the result of compounding safety margins throughout the production chain, where at each point an effort is made to at least slightly exceed the required specifications to account for imperceptible material defects or variations in machining tolerances.
A submarine, by definition, cannot exceed crush depth without being crushed. However, when a prediction is made as to what a submarine's crush depth might be, that prediction may subsequently be mistaken for the actual crush depth of the submarine. Such misunderstandings, compounded by errors in translation and general confusion as to what the various depth ratings mean, have resulted in multiple erroneous accounts of submarines not being crushed at their crush depth.
Notably, several World War II submarines reported that, due to flooding or mechanical failure, they had gone below crush depth, before successfully resurfacing after having the failure repaired or the water pumped out. In these cases, the "crush depth" is always either a mistranslated official "safe" or design depth (i.e. the test depth, or the maximum operating depth) or a prior (incorrect) estimate of what the crush depth might be. World War II German U-boats of the types VII and IX generally imploded at depths of 200 to 280 m (660 to 920 ft). [ citation needed ] | https://en.wikipedia.org/wiki/Submarine_depth_ratings |
Submarine groundwater discharge ( SGD ) is a hydrological process which commonly occurs in coastal areas. It is described as submarine inflow of fresh and brackish groundwater from land into the sea. Submarine groundwater discharge is controlled by several forcing mechanisms, which cause a hydraulic gradient between land and sea. [ 1 ] Depending on the regional setting, the discharge occurs either as (1) a focused flow along fractures in karst and rocky areas, (2) a dispersed flow in soft sediments, or (3) a recirculation of seawater within marine sediments. Submarine groundwater discharge plays an important role in coastal biogeochemical processes and hydrological cycles, contributing to the formation of offshore plankton blooms and the release of nutrients, trace elements and gases. [ 2 ] [ 3 ] [ 4 ] [ 5 ] It affects coastal ecosystems and has been used as a freshwater resource by some local communities for millennia. [ 6 ]
In coastal areas the groundwater and seawater flows are driven by a variety of factors. Both types of water can circulate in marine sediments due to tidal pumping, waves, bottom currents or density-driven transport processes. Meteoric freshwater can discharge along confined and unconfined aquifers into the sea, or the opposite process can take place, with seawater intruding into groundwater-charged aquifers. [ 1 ] The flow of both fresh and sea water is primarily controlled by the hydraulic gradient between land and sea, the difference in density between the two waters, and the permeability of the sediments.
According to Drabbe and Badon-Ghijben (1888) [ 7 ] and Herzberg (1901), [ 8 ] the thickness of a freshwater lens below sea level (z) corresponds with the thickness of the freshwater level above sea level (h) as:
z = ρ f ρ s − ρ f h {\displaystyle z={\frac {\rho _{f}}{\rho _{s}-\rho _{f}}}\,h}
With z being the depth of the saltwater–freshwater interface below sea level, h the height of the freshwater table above sea level, ρ f {\displaystyle \rho _{f}} the density of freshwater and ρ s {\displaystyle \rho _{s}} the density of saltwater. Inserting the densities of freshwater ( ρ f {\displaystyle \rho _{f}} = 1.000 g·cm −3 ) and seawater ( ρ s {\displaystyle \rho _{s}} = 1.025 g·cm −3 ), the equation simplifies to:
z = 40 h {\displaystyle z=40\,h}
Together with Darcy's Law , the length of a salt wedge from the shoreline into the hinterland can be calculated:
L = ( ρ s − ρ f ) K f m ρ f Q {\displaystyle L={\frac {(\rho _{s}-\rho _{f})\,K_{f}\,m}{\rho _{f}\,Q}}}
With K f {\displaystyle K_{f}} being the hydraulic conductivity, m the aquifer thickness and Q the discharge rate. [ 9 ] Assuming an isotropic aquifer system, the length of the salt wedge depends solely on the hydraulic conductivity and the aquifer thickness, and is inversely related to the discharge rate. These assumptions are only valid under hydrostatic conditions in the aquifer system. In general the interface between fresh and saline water forms a zone of transition due to diffusion/dispersion or local anisotropy. [ 10 ]
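A small Python sketch of both formulas above; the hydraulic conductivity, aquifer thickness and discharge rate in the example are illustrative values only:

```python
RHO_F = 1.000  # freshwater density, g/cm^3
RHO_S = 1.025  # seawater density, g/cm^3 (only the ratio matters below)

def lens_depth(h_m):
    """Ghyben-Herzberg relation: interface depth z = rho_f / (rho_s - rho_f) * h."""
    return RHO_F / (RHO_S - RHO_F) * h_m  # = 40 * h for these densities

def salt_wedge_length(k_f, m, q):
    """Inland salt-wedge length, L = (rho_s - rho_f) * K_f * m / (rho_f * Q)."""
    return (RHO_S - RHO_F) * k_f * m / (RHO_F * q)

print(lens_depth(1.0))  # 1 m of freshwater head -> interface 40 m below sea level

# Illustrative values (SI units): K_f = 1e-3 m/s, aquifer 20 m thick, Q = 1e-4 m^2/s
print(salt_wedge_length(1e-3, 20.0, 1e-4))  # 5.0 m
```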
The first study of submarine groundwater discharge was done by Sonrel (1868), who speculated on the risk of submarine springs for sailors. However, until the mid-1990s, SGD remained largely unrecognized by the scientific community because the freshwater discharge was hard to detect and measure. The first elaborated method for studying SGD was developed by Moore (1996), who used radium-226 as a tracer for groundwater. Since then several methods and instruments have been developed to detect and quantify discharge rates.
The first study which detected and quantified submarine groundwater discharge on a regional basis was done by Moore (1996) in the South Atlantic Bight off South Carolina . He measured enhanced radium-226 concentrations within the water column near shore and up to about 100 kilometres (62 mi) from the shoreline. Radium-226 is a decay product of thorium-230 , which is produced within sediments and supplied by rivers. However, these sources could not explain the high concentrations present in the study area. Moore (1996) hypothesized that submarine groundwater, enriched in radium-226, was responsible for the high concentrations. This hypothesis has been tested numerous times at sites around the world and confirmed at each site. [ 11 ]
Lee (1977) [ 12 ] designed a seepage meter, which consists of a chamber which is connected to a sampling port and a plastic bag. The chamber is inserted into the sediment and water discharging through the sediments is caught within the plastic bag. The change in volume of water which is caught in the plastic bag over time represents the freshwater flux.
According to Schlüter et al. (2004) [ 13 ] chloride pore water profiles can be used to investigate submarine groundwater discharge. Chloride can be used as a conservative tracer, as it is enriched in seawater and depleted in groundwater. Three different shapes of chloride pore water profiles reflect three different transport modes within marine sediments. A chloride profile showing constant concentrations with depth indicates that no submarine groundwater is present. A chloride profile with a linear decline indicates a diffusive mixing between groundwater and seawater and a concave shaped chloride profile represents an advective admixture of submarine groundwater from below. Stable isotope ratios in the water molecule may also be used to trace and quantify the sources of a submarine groundwater discharge. [ 14 ] | https://en.wikipedia.org/wiki/Submarine_groundwater_discharge |
A submerged floating tunnel ( SFT ), also known as submerged floating tube bridge ( SFTB ), suspended tunnel , or Archimedes bridge , is a proposed design for a tunnel that floats in water, supported by its buoyancy (specifically, by employing the hydrostatic thrust, or Archimedes' principle ). [ 1 ]
The tube would be placed underwater, deep enough to avoid water traffic and weather, but not so deep that high water pressure needs to be dealt with; usually a depth of 20 to 50 m (66 to 164 ft) is sufficient. Cables either anchored to the seabed [ 1 ] or to pontoons on the surface [ 2 ] would prevent it from floating to the surface or submerging, respectively.
The concept of submerged floating tunnels is based on well-known technology applied to floating bridges and offshore structures, but the construction is mostly similar to that of immersed tunnels : After the tube is prefabricated in sections in a dry dock and the sections are moved to the site, one way is to first seal the sections; sink them into place, while sealed; and, when the sections are fixed to each other, break the seals. Another possibility is to leave the sections unsealed, and after welding them together at the site, pump the water out.
The ballast is calculated so that the structure has approximate hydrostatic equilibrium (that is, the tunnel is roughly the same overall density as water), whereas immersed tube tunnels are ballasted to achieve negative buoyancy so they tend to remain on the sea bed. This, of course, means that a submerged floating tunnel must be anchored to the ground or to the water surface to keep it in place, depending on the buoyancy of the submerged floating tunnel: slightly positive or negative, respectively.
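A back-of-envelope Python sketch of this hydrostatic balance; the tube diameter and mass per metre below are invented for the illustration. The sign of the net force indicates which anchoring scheme is needed:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
RHO_WATER = 1025  # seawater density, kg/m^3

def net_buoyancy_per_metre(outer_diameter_m, structure_mass_kg_per_m):
    """Net vertical force per metre of tube (N/m): positive means the tube floats
    and must be tethered to the seabed; negative means it sinks and needs
    support from surface pontoons."""
    area = math.pi * (outer_diameter_m / 2) ** 2
    displaced = RHO_WATER * area  # mass of water displaced per metre, kg/m
    return (displaced - structure_mass_kg_per_m) * G

# Illustrative assumption: a 12 m tube massing 100 t per metre (structure + ballast)
print(f"{net_buoyancy_per_metre(12.0, 100_000):.3e} N/m")  # positive -> tether down
```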
Submerged floating tubes allow construction of a tunnel in extremely deep water, where conventional bridges or tunnels are technically difficult or prohibitively expensive. They would be able to deal with seismic disturbances and weather events easily, as they have some degree of freedom in regards to movement, and their structural performance is independent of length (that is, it can be very long without compromising its stability and resistance).
On the other hand, they may be vulnerable to ship anchors or submarine traffic, which therefore has to be taken into consideration when building one.
Likely applications include fjords, deep, narrow sea channels, and deep lakes. [ 3 ]
As of 2016 [update] , a submerged floating tunnel has never been built, but several proposals have been presented by different entities.
In Norway, a first patent on this structure was filed in 1923 by Trygve Olsen ("Submerged pontoon bridge") and a further application was made in 1947 by the engineer Erik Ødegård. Interest has revived in recent decades with several studies in Norway, but it was only with the studies by the Norwegian Public Roads Administration (NPRA), drawing on recent developments in offshore structures, that the feasibility of the concept was established. The NPRA has investigated the technical and economic potential for eliminating all ferries on fjord crossings along the western corridor ( European route E39 ) between Kristiansand and Trondheim. [ 32 ] [ 33 ] This project is also linked with FEHRL through the Forever Open Road programme. [ 34 ] If the project were to proceed, it is estimated to cost $25 billion and be completed by 2050. [ 35 ]
Ponte di Archimede International, an Italian company, investigated the SFT in collaboration with the Norwegian Roads Research Laboratory, [ 36 ] the Danish Road Institute and the Italian Shipping Register, with a financial grant from the European Union and the coordination of FEHRL (Forum European National Highway Research Laboratories) an International Association of over 30 National Road Centres. [ 37 ] Furthermore, the Provincial Administrations of Como ( Como Lake ) and Lecco , in Italy, have officially shown great interest in the Archimedes' Bridge for crossing the Lario and the study of the submerged floating tunnel in the Strait of Messina has been promoted by Ponte di Archimede S.p.A. and verified with a feasibility analysis by the Italian Naval Register (RINA). [ 38 ]
The SIJLAB (Sino-Italian Joint Laboratory of Archimedes' Bridge), created in 1998, between Institute of Mechanics, Chinese Academy of Sciences, China and Ponte di Archimede S.p.A., is financed by the Italian Ministry of Foreign Affairs, the Chinese Ministry of Science and Technology and the Institute of Mechanics of the Chinese Academy of Sciences .
The consortium planned to build a 100 m demonstration tunnel in Qiandao Lake in China's eastern province of Zhejiang . Inside it, two stacked one-way roadways would run through the middle, flanked by two railway tracks. [ 39 ] It was later reported that the pilot project would instead be a tourist observation tunnel to allow undisturbed viewing of the ruins of the flooded city of Hecheng, which are currently only viewable by scuba diving. [ 40 ] [ 41 ] The Qiandao Lake prototype will serve to help plan the project of a 3,300-metre submerged floating tunnel in the Jintang Strait , in the Zhoushan archipelago, also situated in Zhejiang . [ 42 ] [ 43 ] [ 44 ]
According to Elio Matacena, the President of Ponte di Archimede International, the only difficulty building such tunnels in deeper waters is the price of the structure. Namely, the cables, which are very expensive, would be very long. He also notes that the tunnel is capable of supporting more weight than a traditional bridge, which has very strict weight limits, while being up to two times cheaper. Matacena points out that environmental studies show that the tunnel would have a very low impact on aquatic life. [ 45 ]
Indonesia has also expressed interest in the technology. For the infrastructure that would connect Sumatra to Java Island, two options were explored: a conventional bridge or an undersea tunnel.
In 2004 the tunnel option was more widely discussed, especially when Kwik Kian Gie , then the Minister of National Development, announced that a European consortium was interested in investing in an undersea tunnel between Java and Sumatra. The budget was said to be around 15 billion US dollars for an undersea tunnel in the Sunda Strait ; in the long term it would link up Java and Sumatra in an uninterrupted chain. The project was to begin construction in 2005 and be ready to use by 2018, and was a part of the Asian Highway . [ 46 ]
However, the bridge option was later favored. [ 47 ]
In 2007, Indonesian experts, led by Ir. Iskendar, Director for the Center of Assessment and Application of Technology for Transportation System and Industries, participated in a meeting with SIJLAB engineers, from the Sino-Italian Archimedes Bridge project. [ 43 ] [ 48 ] As an archipelagic country, consisting of more than 13 thousand islands, Indonesia could benefit from such tunnels. Conventional transportation between islands is mainly by ferry . Submerged floating tunnels could thus be an alternative means to connect adjacent islands, in addition to normal bridges. | https://en.wikipedia.org/wiki/Submerged_floating_tunnel |
Submillimeter Wave Astronomy Satellite ( SWAS , also Explorer 74 and SMEX-3 ) is a NASA submillimetre astronomy satellite, and is the fourth spacecraft in the Small Explorer program (SMEX). It was launched on 6 December 1998, at 00:57:54 UTC , from Vandenberg Air Force Base aboard a Pegasus XL launch vehicle . [ 1 ] The telescope was designed by the Smithsonian Astrophysical Observatory (SAO) and integrated by Ball Aerospace , while the spacecraft was built by NASA's Goddard Space Flight Center (GSFC). [ 2 ] The mission's principal investigator is Gary J. Melnick. [ 1 ]
The Submillimeter Wave Astronomy Satellite mission was approved on 1 April 1989. The project began with the Mission Definition Phase, officially starting on 29 September 1989, and running through 31 January 1992. During this time, the mission underwent a conceptual design review on 8 June 1990, and a demonstration of the Schottky receivers and acousto-optical spectrometer concept was performed on 8 November 1991. [ 3 ]
The mission's Development Phase ran from February 1992, through May 1996. The Submillimeter Wave Telescope underwent a preliminary design review on 13 May 1992, and a critical design review (CDR) on 23 February 1993. Ball Aerospace was responsible for the construction of and integration of components into the telescope. The University of Cologne delivered the acousto-optical spectrometer to Ball for integration into the telescope on 2 December 1993, while Millitech Corporation delivered the Schottky receivers to Ball on 20 June 1994. Ball delivered the finished telescope to Goddard Space Flight Center on 20 December 1994. GSFC, which was responsible for construction of the spacecraft bus, conducted integration of spacecraft and instruments from January through March 1995. Spacecraft qualification and testing took place between 1 April 1995, and 15 December 1995. After this, SWAS was placed into storage until 1 September 1998, when launch preparation was begun. [ 3 ]
SWAS was designed to study the chemical composition, energy balance and structure of interstellar clouds , both galactic and extragalactic, and investigate the processes of stellar and planetary formation . [ 1 ] Its sole instrument is a telescope operating in the submillimeter wavelengths of far infrared and microwave radiation. The telescope is composed of three main components: a 55 × 71 cm (22 × 28 in) elliptical off-axis Cassegrain reflector with a beam width of 4 arcminutes at operating frequencies, [ 1 ] [ 4 ] two Schottky diode receivers, and an acousto-optical spectrometer. [ 2 ] The system is sensitive to frequencies between 487 and 557 GHz (538–616 μm ), which allows it to focus on the spectral lines of molecular oxygen (O 2 ) at 487.249 GHz; neutral carbon ( C i ) at 492.161 GHz; isotopic water (H 2 18 O) at 548.676 GHz; isotopic carbon monoxide ( 13 CO) at 550.927 GHz; and water (H 2 O) at 556.936 GHz. [ 1 ] [ 2 ] Detailed 1° × 1° maps of giant molecular and dark cloud cores are generated from a grid of measurements taken at 3.7-arcminute spacings. SWAS's submillimeter radiometers are a pair of passively cooled subharmonic Schottky diode receivers, with receiver noise temperatures of 2500–3000 K. An acousto-optical spectrometer (AOS) was provided by the University of Cologne , in Germany . Outputs of the two SWAS receivers are combined to form a final intermediate frequency, which extends from 1.4 to 2.8 GHz and is dispersed into 1400 1-MHz channels by the AOS. SWAS is designed to make pointed observations stabilized on three axes, with a position accuracy of about 38 arcseconds, and jitter of about 24 arcseconds. Attitude information is obtained from gyroscopes whose drift is corrected via a star tracker. Momentum wheels are used to maneuver the spacecraft. [ 1 ]
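As a quick consistency check, the quoted wavelength band follows from λ = c/f for the listed line frequencies; a short Python sketch:

```python
C = 299_792_458.0  # speed of light, m/s

lines_ghz = {
    "O2 (molecular oxygen)":   487.249,
    "C I (neutral carbon)":    492.161,
    "H2-18O (isotopic water)": 548.676,
    "13CO (isotopic CO)":      550.927,
    "H2O (water)":             556.936,
}

for name, freq in lines_ghz.items():
    wavelength_um = C / (freq * 1e9) * 1e6  # Hz -> metres -> micrometres
    print(f"{name:24s} {freq:8.3f} GHz -> {wavelength_um:6.1f} um")
# The results span roughly 538-616 um, matching the band quoted above.
```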
The SWAS instrument is a submillimeter-wave telescope that incorporates dual heterodyne radiometers and an acousto-optical spectrometer. SWAS will measure water, molecular oxygen, atomic carbon, and isotopic carbon monoxide spectral line emissions from galactic interstellar clouds in the wavelength range 540-616 micrometres. Such submillimetre wave radiation cannot be detected from the ground because of atmospheric attenuation. The SWAS measurements will provide new information about the physical conditions (density and temperature) and chemistry in star-forming molecular clouds. [ 6 ]
The spacecraft was delivered to Orbital Sciences Corporation at Vandenberg Air Force Base on 2 November 1998, for integration onto their Pegasus XL launch vehicle. [ 3 ] Launch occurred on 6 December 1998, at 00:57:54 UTC, from Orbital Sciences' Stargazer L-1011 TriStar mothership. [ 1 ] [ 7 ] Its initial orbit was a near-circular 638 × 651 km (396 × 405 mi) with an inclination of 69.90°. [ 8 ]
SWAS was originally scheduled to launch in June 1995 but was delayed due to back-to-back launch failures of the Pegasus XL launch vehicle in June 1994 and June 1995. A launch opportunity in January 1997 was again canceled due to a Pegasus XL launch failure in November 1996. [ 9 ]
The commissioning phase of the mission lasted until 19 December 1998, when the telescope began producing useful science data. [ 10 ] The SWAS mission had a planned duration of two years and a cost estimate of US$60 million, [ 9 ] [ 11 ] but mission extensions allowed for five and a half years of continuous science operations. During this time, data was taken on more than 200 astronomical objects. [ 3 ] The decision was made to end science and spacecraft operations on 21 July 2004, at which time the spacecraft was placed into hibernation. [ 12 ]
To support the Deep Impact mission at comet 9P/Tempel , SWAS was brought out of hibernation on 1 June 2005. Vehicle check-out was completed on 5 June 2005 with no discernible degradation of equipment found. SWAS observations of the comet focused on isotopic water output both before and after the Deep Impact impactor struck the comet's nucleus on 4 July 2005. While water output was found to naturally vary by more than a factor of three during the observation campaign, SWAS data showed that there was no excessive release of water due to the impact event. After three months of observation, SWAS was once again placed into hibernation on 1 September 2005. [ 13 ]
As of 2023, SWAS remains in Earth orbit on stand-by. | https://en.wikipedia.org/wiki/Submillimeter_Wave_Astronomy_Satellite |
A submitochondrial particle (SMP) is an artificial vesicle made from the inner mitochondrial membrane . SMPs can be formed by subjecting isolated mitochondria to sonication , freezing and thawing, high pressure, or osmotic shock . [ 1 ] [ 2 ] SMPs can be used to study the electron transport chain in a cell-free context.
The process of SMP formation forces the inner mitochondrial membrane inside out, meaning that the matrix -facing leaflet becomes the outer surface of the SMP, and the intermembrane space -facing leaflet faces the lumen of the SMP. As a consequence, the F 1 particles which normally face the matrix are exposed. Chaotropic agents can destabilize F 1 particles and cause them to dissociate from the membrane, thereby uncoupling the final step of oxidative phosphorylation from the rest of the electron transport chain. [ 3 ]
| https://en.wikipedia.org/wiki/Submitochondrial_particle |
Submittals in construction management can include: shop drawings , material data, samples, and product data. Submittals are required primarily for the architect and engineer to verify that the correct products will be installed on the project. [ 1 ]
This process also gives the architect and sub-consultants the opportunity to select colors, patterns, and types of material that were not chosen prior to completion of the construction drawings . This is not an occasion for the architect to select different materials than specified, but rather to clarify the selection within the quality level indicated in the specification and quantities shown on plans.
For materials requiring fabrication, such as reinforcing steel and structural steel , the architect and engineer need to verify that the details furnished by the fabricator meet the design intent and that the required quantities are provided. The details from the fabricator reflect both material availability and production expediency. One tragic example of a submitted alternate design is the suspension rod and bolting detail that led to the Hyatt Regency walkway collapse . The steel fabricator was unable to produce the rod lengths as originally designed and instead proposed using shorter lengths. The proposed alternate compounded the loads on the bolts, and the skywalks collapsed on July 17, 1981, killing 114 people.
The contractor also uses this information in installation, using dimensions and installation data from the submittal. The construction documents, specifically the technical specifications , require the contractor to submit product data, samples, and shop drawings to the architect and engineer for approval. This is one of the first steps that is taken by the contractor after execution of the construction contract and issuance of the "Notice to Proceed".
The submittal process affects cost, quality, schedule, and project success. On large, commercial projects the submittal process can involve thousands of different materials, fabrications and equipment. Commercial buildings will often have complex pre-fabricated components. These include: elevators, windows, cabinets, air handling units, generators, appliances and cooling towers. These pieces of equipment often require close coordination to ensure that they receive the correct power, fuel, water and structural support. The submittal process gives another level of detail usually not included as part of the design documents.
An "approved" submittal authorizes quantity and quality of a material or an assembly to be released for fabrication and shipment. It ensures that the submittals have been properly vetted before final ordering. In essence, this is the final quality control mechanism before a product arrives on-site.
The product data submittal usually consists of the manufacturer's product information. The information included in this submittal typically includes:
See the article: Shop drawing .
Many products require submission of samples. A sample is a physical portion of the specified product. Some samples are full product samples, such as a brick or section of precast concrete , or a partial sample that indicates color or texture. [ 2 ] The product sample is often required when several products are acceptable, to confirm the quality and aesthetic level of the material. The size or unit of sample material usually is specified.
For some materials, a mock-up or sample panel is necessary. A common example of a sample panel is a wall mock-up. This is a full-size mock-up of a wall assembly and can include windows, exterior veneers and waterproofing. The mock-up serves not only as an aesthetic review but also gives the contractors the opportunity to field-test the assembly before full-scale construction. The mock-up may be required to be tested for water tightness and lateral forces. The mock-up panel might be 10 feet wide by 12 feet high, showing the full wall span from floor to floor.
Samples usually are required for finish selection or approval. Color and textures in the actual product can vary considerably from the color and textures shown in printed material. The printed brochure gives an indication of available colors, but the colors are rendered in printer's ink, rather than in the actual material. A quality level may be specified, requiring a selection of color and/or texture from sample pieces of the material. Several acceptable manufacturers may be listed in the specification and a level of quality also may be specified. The contractor, subcontractor, or supplier may have a preference for one of these products, based on price, availability, quality, workability, or service.
Samples are usually stored at the jobsite and compared to the material delivered and installed. Comparison of samples with the product received is an important part of project quality control.
Processing time is required to produce and review submittals, shop drawings, and samples. The procedures can seem very cumbersome and time-consuming; however, there are substantial reasons for review steps by all parties. The designer is ultimately responsible for the design of the facility to meet occupancy needs and must ensure that the products being installed are suitable to meet these needs. Any change in material fabrication or quantity needs to be reviewed for its acceptability within the original design. The architect, contractor and sub-contractor all need to be able to coordinate the installation of the product with other building systems.
Each level must review, add information as necessary, and stamp or seal that the submittal was examined and approved by that party. After the submittal reaches the primary reviewer, it is returned through the same steps, which provides an opportunity for further comment and assures that each party is aware of the approval, partial approval, notes, or rejection. This approval process is cumbersome and time-consuming; however, modern software products can greatly simplify it and improve its efficiency.
Typically, the architect will review the submittal for compliance with the requirements in the construction documents. Revisions may be noted on the submittal. Colors and other selections will be made by the architect during this review. Sometimes the architect will reject the entire submittal and other times will request resubmittal of some of the items. The architect also will make corrections, which normally do not need to be resubmitted, but that do need to be applied to the product. While the architect and engineers review products for performance and design intent, the contractor must review the product for preparation, quantity and installation requirements.
The contractor should manage the submittal process just like any other process in the construction cycle. The submittal process requires lead-time consideration for producing the submittal, the shop drawing (engineering), review and revision, and the shop fabrication period. Careful planning is necessary to ensure that the products are ordered and delivered within the construction schedule, so as not to delay any activities. The contractor must prioritize the submittal process, submitting and obtaining approval for materials needed for the first part of the project. Present-day submittal software for the construction industry can help streamline that process by grouping submittals by submittal type, using a standard material library, and providing the ability to filter by due date or status. [ 3 ] | https://en.wikipedia.org/wiki/Submittals_(construction) |
In topology and related areas of mathematics, a subnet is a generalization of the concept of subsequence to the case of nets . The definition is not completely straightforward, but is designed to allow as many theorems about subsequences as possible to generalize to nets.
There are three non-equivalent definitions of "subnet".
The first definition of a subnet was introduced by John L. Kelley in 1955 [ 1 ] and later, Stephen Willard introduced his own (non-equivalent) variant of Kelley's definition in 1970. [ 1 ] Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet" [ 1 ] but they are each not equivalent to the concept of "subordinate filter", which is the analog of "subsequence" for filters (they are not equivalent in the sense that there exist subordinate filters on X = N {\displaystyle X=\mathbb {N} } whose filter/subordinate–filter relationship cannot be described in terms of the corresponding net/subnet relationship).
A third definition of "subnet" (not equivalent to those given by Kelley or Willard) that is equivalent to the concept of "subordinate filter" was introduced independently by Smiley (1957), Aarnes and Andenaes (1972), Murdeshwar (1983), and possibly others, although it is not often used. [ 1 ]
This article discusses the definition due to Willard (the other definitions are described in the article Filters in topology#Non–equivalence of subnets and subordinate filters ).
There are several different non-equivalent definitions of "subnet" and this article will use the definition introduced in 1970 by Stephen Willard, [ 1 ] which is as follows:
If x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} and s ∙ = ( s i ) i ∈ I {\displaystyle s_{\bullet }=\left(s_{i}\right)_{i\in I}} are nets in a set X {\displaystyle X} from directed sets A {\displaystyle A} and I , {\displaystyle I,} respectively, then s ∙ {\displaystyle s_{\bullet }} is said to be a subnet of x ∙ {\displaystyle x_{\bullet }} ( in the sense of Willard or a Willard–subnet [ 1 ] ) if there exists a monotone final function h : I → A {\displaystyle h:I\to A} such that s i = x h ( i ) for all i ∈ I . {\displaystyle s_{i}=x_{h(i)}\quad {\text{ for all }}i\in I.} A function h : I → A {\displaystyle h:I\to A} is monotone , order-preserving , and an order homomorphism if whenever i ≤ j {\displaystyle i\leq j} then h ( i ) ≤ h ( j ) {\displaystyle h(i)\leq h(j)} and it is called final if its image h ( I ) {\displaystyle h(I)} is cofinal in A . {\displaystyle A.} The set h ( I ) {\displaystyle h(I)} being cofinal in A {\displaystyle A} means that for every a ∈ A , {\displaystyle a\in A,} there exists some b ∈ h ( I ) {\displaystyle b\in h(I)} such that b ≥ a ; {\displaystyle b\geq a;} that is, for every a ∈ A {\displaystyle a\in A} there exists an i ∈ I {\displaystyle i\in I} such that h ( i ) ≥ a . {\displaystyle h(i)\geq a.} [ note 1 ]
Since the net x ∙ {\displaystyle x_{\bullet }} is the function x ∙ : A → X {\displaystyle x_{\bullet }:A\to X} and the net s ∙ {\displaystyle s_{\bullet }} is the function s ∙ : I → X , {\displaystyle s_{\bullet }:I\to X,} the defining condition ( s i ) i ∈ I = ( x h ( i ) ) i ∈ I , {\displaystyle \left(s_{i}\right)_{i\in I}=\left(x_{h(i)}\right)_{i\in I},} may be written more succinctly and cleanly as either s ∙ = x h ( ∙ ) {\displaystyle s_{\bullet }=x_{h(\bullet )}} or s ∙ = x ∙ ∘ h , {\displaystyle s_{\bullet }=x_{\bullet }\circ h,} where ∘ {\displaystyle \,\circ \,} denotes function composition and x h ( ∙ ) := ( x h ( i ) ) i ∈ I {\displaystyle x_{h(\bullet )}:=\left(x_{h(i)}\right)_{i\in I}} is just notation for the function x ∙ ∘ h : I → X . {\displaystyle x_{\bullet }\circ h:I\to X.}
Importantly, a subnet is not merely the restriction of a net ( x a ) a ∈ A {\displaystyle \left(x_{a}\right)_{a\in A}} to a directed subset of its domain A . {\displaystyle A.} In contrast, by definition, a subsequence of a given sequence x 1 , x 2 , x 3 , … {\displaystyle x_{1},x_{2},x_{3},\ldots } is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. Explicitly, a sequence ( s n ) n ∈ N {\displaystyle \left(s_{n}\right)_{n\in \mathbb {N} }} is said to be a subsequence of ( x i ) i ∈ N {\displaystyle \left(x_{i}\right)_{i\in \mathbb {N} }} if there exists a strictly increasing sequence of positive integers h 1 < h 2 < h 3 < ⋯ {\displaystyle h_{1}<h_{2}<h_{3}<\cdots } such that s n = x h n {\displaystyle s_{n}=x_{h_{n}}} for every n ∈ N {\displaystyle n\in \mathbb {N} } (that is to say, such that ( s 1 , s 2 , … ) = ( x h 1 , x h 2 , … ) {\displaystyle \left(s_{1},s_{2},\ldots \right)=\left(x_{h_{1}},x_{h_{2}},\ldots \right)} ). The sequence ( h n ) n ∈ N = ( h 1 , h 2 , … ) {\displaystyle \left(h_{n}\right)_{n\in \mathbb {N} }=\left(h_{1},h_{2},\ldots \right)} can be canonically identified with the function h ∙ : N → N {\displaystyle h_{\bullet }:\mathbb {N} \to \mathbb {N} } defined by n ↦ h n . {\displaystyle n\mapsto h_{n}.} Thus a sequence s ∙ = ( s n ) n ∈ N {\displaystyle s_{\bullet }=\left(s_{n}\right)_{n\in \mathbb {N} }} is a subsequence of x ∙ = ( x i ) i ∈ N {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in \mathbb {N} }} if and only if there exists a strictly increasing function h : N → N {\displaystyle h:\mathbb {N} \to \mathbb {N} } such that s ∙ = x ∙ ∘ h . {\displaystyle s_{\bullet }=x_{\bullet }\circ h.}
Subsequences are subnets
Every subsequence is a subnet because if ( x h n ) n ∈ N {\displaystyle \left(x_{h_{n}}\right)_{n\in \mathbb {N} }} is a subsequence of ( x i ) i ∈ N {\displaystyle \left(x_{i}\right)_{i\in \mathbb {N} }} then the map h : N → N {\displaystyle h:\mathbb {N} \to \mathbb {N} } defined by n ↦ h n {\displaystyle n\mapsto h_{n}} is an order-preserving map whose image is cofinal in its codomain and satisfies x h n = x h ( n ) {\displaystyle x_{h_{n}}=x_{h(n)}} for all n ∈ N . {\displaystyle n\in \mathbb {N} .}
Sequence and subnet but not a subsequence
The sequence ( s i ) i ∈ N := ( 1 , 1 , 2 , 2 , 3 , 3 , … ) {\displaystyle \left(s_{i}\right)_{i\in \mathbb {N} }:=(1,1,2,2,3,3,\ldots )} is not a subsequence of ( x i ) i ∈ N := ( 1 , 2 , 3 , … ) {\displaystyle \left(x_{i}\right)_{i\in \mathbb {N} }:=(1,2,3,\ldots )} although it is a subnet because the map h : N → N {\displaystyle h:\mathbb {N} \to \mathbb {N} } defined by h ( i ) := ⌊ i + 1 2 ⌋ {\displaystyle h(i):=\left\lfloor {\tfrac {i+1}{2}}\right\rfloor } is an order-preserving map whose image is h ( N ) = N {\displaystyle h(\mathbb {N} )=\mathbb {N} } and satisfies s i = x h ( i ) {\displaystyle s_{i}=x_{h(i)}} for all i ∈ N . {\displaystyle i\in \mathbb {N} .} [ note 2 ]
While a sequence is a net, a sequence has subnets that are not subsequences. The key difference is that subnets can use the same point in the net multiple times and the indexing set of the subnet can have much larger cardinality . Using the more general definition where we do not require monotonicity, a sequence is a subnet of a given sequence if and only if it can be obtained from some subsequence by repeating its terms and reordering them. [ 2 ]
Subnet of a sequence that is not a sequence
A subnet of a sequence is not necessarily a sequence. [ 3 ] For an example, let I = { r ∈ R : r > 0 } {\displaystyle I=\{r\in \mathbb {R} :r>0\}} be directed by the usual order ≤ {\displaystyle \,\leq \,} and define h : I → N {\displaystyle h:I\to \mathbb {N} } by letting h ( r ) = ⌈ r ⌉ {\displaystyle h(r)=\lceil r\rceil } be the ceiling of r . {\displaystyle r.} Then h : ( I , ≤ ) → ( N , ≤ ) {\displaystyle h:(I,\leq )\to (\mathbb {N} ,\leq )} is an order-preserving map (because it is a non-decreasing function) whose image h ( I ) = N {\displaystyle h(I)=\mathbb {N} } is a cofinal subset of its codomain. Let x ∙ = ( x i ) i ∈ N : N → X {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in \mathbb {N} }:\mathbb {N} \to X} be any sequence (such as a constant sequence, for instance) and let s r := x h ( r ) {\displaystyle s_{r}:=x_{h(r)}} for every r ∈ I {\displaystyle r\in I} (in other words, let s ∙ := x ∙ ∘ h {\displaystyle s_{\bullet }:=x_{\bullet }\circ h} ). This net ( s r ) r ∈ I {\displaystyle \left(s_{r}\right)_{r\in I}} is not a sequence since its domain I {\displaystyle I} is an uncountable set . However, ( s r ) r ∈ I {\displaystyle \left(s_{r}\right)_{r\in I}} is a subnet of the sequence x ∙ {\displaystyle x_{\bullet }} since (by definition) s r = x h ( r ) {\displaystyle s_{r}=x_{h(r)}} holds for every r ∈ I . {\displaystyle r\in I.} Thus s ∙ {\displaystyle s_{\bullet }} is a subnet of x ∙ {\displaystyle x_{\bullet }} that is not a sequence.
Furthermore, the sequence x ∙ {\displaystyle x_{\bullet }} is also a subnet of ( s r ) r ∈ I {\displaystyle \left(s_{r}\right)_{r\in I}} since the inclusion map ι : N → I {\displaystyle \iota :\mathbb {N} \to I} (that sends n ↦ n {\displaystyle n\mapsto n} ) is an order-preserving map whose image ι ( N ) = N {\displaystyle \iota (\mathbb {N} )=\mathbb {N} } is a cofinal subset of its codomain and x n = s ι ( n ) {\displaystyle x_{n}=s_{\iota (n)}} holds for all n ∈ N . {\displaystyle n\in \mathbb {N} .} Thus x ∙ {\displaystyle x_{\bullet }} and ( s r ) r ∈ I {\displaystyle \left(s_{r}\right)_{r\in I}} are (simultaneously) subnets of each another.
Subnets induced by subsets
Suppose I ⊆ N {\displaystyle I\subseteq \mathbb {N} } is an infinite set and ( x i ) i ∈ N {\displaystyle \left(x_{i}\right)_{i\in \mathbb {N} }} is a sequence. Then ( x i ) i ∈ I {\displaystyle \left(x_{i}\right)_{i\in I}} is a net on ( I , ≤ ) {\displaystyle (I,\leq )} that is also a subnet of ( x i ) i ∈ N {\displaystyle \left(x_{i}\right)_{i\in \mathbb {N} }} (take h : I → N {\displaystyle h:I\to \mathbb {N} } to be the inclusion map i ↦ i {\displaystyle i\mapsto i} ). This subnet ( x i ) i ∈ I {\displaystyle \left(x_{i}\right)_{i\in I}} in turn induces a subsequence ( x h n ) n ∈ N {\displaystyle \left(x_{h_{n}}\right)_{n\in \mathbb {N} }} by defining h n {\displaystyle h_{n}} as the n th {\displaystyle n^{\text{th}}} smallest value in I {\displaystyle I} (that is, let h 1 := inf I {\displaystyle h_{1}:=\inf I} and let h n := inf { i ∈ I : i > h n − 1 } {\displaystyle h_{n}:=\inf\{i\in I:i>h_{n-1}\}} for every integer n > 1 {\displaystyle n>1} ). In this way, every infinite subset I ⊆ N {\displaystyle I\subseteq \mathbb {N} } induces a canonical subnet that may be written as a subsequence. However, as demonstrated above, not every subnet of a sequence is a subsequence.
The definition generalizes some key theorems about subsequences:
Taking h {\displaystyle h} to be the identity map in the definition of "subnet" and requiring the domain I {\displaystyle I} to be a cofinal subset of A {\displaystyle A} leads to the concept of a cofinal subnet , which turns out to be inadequate since, for example, the second theorem above fails for the Tychonoff plank if we restrict ourselves to cofinal subnets.
If s ∙ {\displaystyle s_{\bullet }} is a net in a subset S ⊆ X {\displaystyle S\subseteq X} and if x ∈ X {\displaystyle x\in X} is a cluster point of s ∙ {\displaystyle s_{\bullet }} then x ∈ cl X S . {\displaystyle x\in \operatorname {cl} _{X}S.} In other words, every cluster point of a net in a subset belongs to the closure of that set.
If x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} is a net in X {\displaystyle X} then the set of all cluster points of x ∙ {\displaystyle x_{\bullet }} in X {\displaystyle X} is equal to [ 3 ] ⋂ a ∈ A cl X ( x ≥ a ) {\displaystyle \bigcap _{a\in A}\operatorname {cl} _{X}\left(x_{\geq a}\right)} where x ≥ a := { x b : b ≥ a , b ∈ A } {\displaystyle x_{\geq a}:=\left\{x_{b}:b\geq a,b\in A\right\}} for each a ∈ A . {\displaystyle a\in A.}
If a net converges to a point x {\displaystyle x} then x {\displaystyle x} is necessarily a cluster point of that net. [ 3 ] The converse is not guaranteed in general. That is, it is possible for x ∈ X {\displaystyle x\in X} to be a cluster point of a net x ∙ {\displaystyle x_{\bullet }} but for x ∙ {\displaystyle x_{\bullet }} to not converge to x . {\displaystyle x.} However, if x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} clusters at x ∈ X {\displaystyle x\in X} then there exists a subnet of x ∙ {\displaystyle x_{\bullet }} that converges to x . {\displaystyle x.} This subnet can be explicitly constructed from ( A , ≤ ) {\displaystyle (A,\leq )} and the neighborhood filter N x {\displaystyle {\mathcal {N}}_{x}} at x {\displaystyle x} as follows: make I := { ( a , U ) ∈ A × N x : x a ∈ U } {\displaystyle I:=\left\{(a,U)\in A\times {\mathcal {N}}_{x}:x_{a}\in U\right\}} into a directed set by declaring that ( a , U ) ≤ ( b , V ) if and only if a ≤ b and U ⊇ V ; {\displaystyle (a,U)\leq (b,V)\quad {\text{ if and only if }}\quad a\leq b\;{\text{ and }}\;U\supseteq V;} then ( x a ) ( a , U ) ∈ I → x in X {\displaystyle \left(x_{a}\right)_{(a,U)\in I}\to x{\text{ in }}X} and ( x a ) ( a , U ) ∈ I {\displaystyle \left(x_{a}\right)_{(a,U)\in I}} is a subnet of x ∙ = ( x a ) a ∈ A {\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}} since the map α : I → A ( a , U ) ↦ a {\displaystyle {\begin{alignedat}{4}\alpha :\;&&I&&\;\to \;&A\\[0.3ex]&&(a,U)&&\;\mapsto \;&a\\\end{alignedat}}} is a monotone function whose image α ( I ) = A {\displaystyle \alpha (I)=A} is a cofinal subset of A , {\displaystyle A,} and x α ( ∙ ) := ( x α ( i ) ) i ∈ I = ( x α ( a , U ) ) ( a , U ) ∈ I = ( x a ) ( a , U ) ∈ I . {\displaystyle x_{\alpha (\bullet )}:=\left(x_{\alpha (i)}\right)_{i\in I}=\left(x_{\alpha (a,U)}\right)_{(a,U)\in I}=\left(x_{a}\right)_{(a,U)\in I}.}
Thus, a point x ∈ X {\displaystyle x\in X} is a cluster point of a given net if and only if it has a subnet that converges to x . {\displaystyle x.} [ 3 ] | https://en.wikipedia.org/wiki/Subnet_(mathematics) |
In computer science , subnormal numbers are the subset of denormalized numbers (sometimes called denormals ) that fill the underflow gap around zero in floating-point arithmetic . Any non-zero number with magnitude smaller than the smallest positive normal number is subnormal , while denormal can also refer to numbers outside that range.
In some older documents (especially standards documents such as the initial releases of IEEE 754 and the C language ), "denormal" is used to refer exclusively to subnormal numbers. This usage persists in various standards documents, especially when discussing hardware that is incapable of representing any other denormalized numbers, but the discussion here uses the term "subnormal" in line with the 2008 revision of IEEE 754 . In casual discussions the terms subnormal and denormal are often used interchangeably, in part because there are no denormalized IEEE binary numbers outside the subnormal range.
The term "number" is used rather loosely, to describe a particular sequence of digits, rather than a mathematical abstraction; see Floating-point arithmetic for details of how real numbers relate to floating-point representations. "Representation" rather than "number" may be used when clarity is required.
Mathematical real numbers may be approximated by multiple floating-point representations. One representation is defined as normal , and others are defined as subnormal , denormal , or unnormal by their relationship to normal .
In a normal floating-point value, there are no leading zeros in the significand (also commonly called mantissa); rather, leading zeros are removed by adjusting the exponent (for example, the number 0.0123 would be written as 1.23 × 10 −2 ). Conversely, a denormalized floating-point value has a significand with a leading digit of zero. Of these, the subnormal numbers represent values which if normalized would have exponents below the smallest representable exponent (the exponent having a limited range).
The significand (or mantissa) of an IEEE floating-point number is the part of a floating-point number that represents the significant digits . For a positive normalised number, it can be represented as m 0 . m 1 m 2 m 3 ... m p −2 m p −1 (where m represents a significant digit, and p is the precision) with non-zero m 0 . Notice that for a binary radix , the leading binary digit is always 1. In a subnormal number, since the exponent is the least that it can be, zero is the leading significant digit (0. m 1 m 2 m 3 ... m p −2 m p −1 ), allowing the representation of numbers closer to zero than the smallest normal number. A floating-point number may be recognized as subnormal whenever its exponent has the least possible value.
By filling the underflow gap like this, significant digits are lost, but not as abruptly as when using the flush to zero on underflow approach (discarding all significant digits when underflow is reached). Hence the production of a subnormal number is sometimes called gradual underflow because it allows a calculation to lose precision slowly when the result is small.
In IEEE 754-2008 , denormal numbers are renamed subnormal numbers and are supported in both binary and decimal formats. In binary interchange formats, subnormal numbers are encoded with a biased exponent of 0, but are interpreted with the value of the smallest allowed exponent, which is one greater (i.e., as if it were encoded as a 1). In decimal interchange formats they require no special encoding because the format supports unnormalized numbers directly.
Mathematically speaking, the normalized floating-point numbers of a given sign are roughly logarithmically spaced, and as such the range of normal floats cannot include zero . The subnormal floats are a linearly spaced set of values, which span the gap between the negative and positive normal floats.
Subnormal numbers provide the guarantee that addition and subtraction of floating-point numbers never underflows; two nearby floating-point numbers always have a representable non-zero difference. Without gradual underflow, the subtraction a − b can underflow and produce zero even though the values are not equal. This can, in turn, lead to division by zero errors that cannot occur when gradual underflow is used. [ 1 ]
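A small C program makes this concrete. In the sketch below (a minimal illustration; the specific operands are chosen only so that their difference lands in the subnormal range), a and b are both normal doubles, yet a - b is representable only as a subnormal:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    double a = 1.5 * DBL_MIN;   /* both operands are normal numbers */
    double b = 1.0 * DBL_MIN;
    double d = a - b;           /* 0.5 * DBL_MIN: representable only as a subnormal */
    printf("d = %g, subnormal: %d\n", d, fpclassify(d) == FP_SUBNORMAL);
    /* With gradual underflow d != 0 exactly because a != b; under a
       flush-to-zero scheme this subtraction would instead produce 0. */
    return 0;
}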
Subnormal numbers were implemented in the Intel 8087 while the IEEE 754 standard was being written. They were by far the most controversial feature in the K-C-S format proposal that was eventually adopted, [ 2 ] but this implementation demonstrated that subnormal numbers could be supported in a practical implementation. Some implementations of floating-point units do not directly support subnormal numbers in hardware, but rather trap to some kind of software support. While this may be transparent to the user, it can result in calculations that produce or consume subnormal numbers being much slower than similar calculations on normal numbers.
In IEEE binary floating-point formats , subnormals are represented by having a zero exponent field with a non-zero significand field. [ 3 ]
No other denormalized numbers exist in the IEEE binary floating-point formats, but they do exist in some other formats, including the IEEE decimal floating-point formats.
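The encoding can be checked directly in code. The following C sketch (assuming the usual binary64 layout of double, with an 11-bit exponent field and a 52-bit significand field) classifies a value as subnormal exactly when the exponent field is zero and the significand field is non-zero:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Returns 1 if x is subnormal: biased exponent == 0 and significand != 0. */
int is_subnormal(double x) {
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);   /* reinterpret the bits portably */
    uint64_t exponent    = (bits >> 52) & 0x7FF;
    uint64_t significand = bits & 0xFFFFFFFFFFFFFull;
    return exponent == 0 && significand != 0;
}

int main(void) {
    printf("%d\n", is_subnormal(0x1p-1074));   /* 1: smallest positive subnormal */
    printf("%d\n", is_subnormal(0x1p-1022));   /* 0: smallest positive normal */
    return 0;
}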
Some systems handle subnormal values in hardware, in the same way as normal values. Others leave the handling of subnormal values to system software ("assist"), only handling normal values and zero in hardware. Handling subnormal values in software always leads to a significant decrease in performance. When subnormal values are entirely computed in hardware, implementation techniques exist to allow their processing at speeds comparable to normal numbers. [ 4 ] However, the speed of computation remains significantly reduced on many modern x86 processors; in extreme cases, instructions involving subnormal operands may take as many as 100 additional clock cycles, causing the fastest instructions to run as much as six times slower. [ 5 ] [ 6 ]
This speed difference can be a security risk. Researchers showed that it provides a timing side channel that allows a malicious web site to extract page content from another site inside a web browser. [ 7 ]
Some applications need to contain code to avoid subnormal numbers, either to maintain accuracy, or in order to avoid the performance penalty in some processors. For instance, in audio processing applications, subnormal values usually represent a signal so quiet that it is out of the human hearing range. Because of this, a common measure to avoid subnormals on processors where there would be a performance penalty is to cut the signal to zero once it reaches subnormal levels or mix in an extremely quiet noise signal. [ 8 ] Other methods of preventing subnormal numbers include adding a DC offset, quantizing numbers, adding a Nyquist signal, etc. [ 9 ] Since the SSE2 processor extension, Intel has provided such a functionality in CPU hardware, which rounds subnormal numbers to zero. [ 10 ]
Intel's C and Fortran compilers enable the DAZ (denormals-are-zero) and FTZ (flush-to-zero) flags for SSE by default for optimization levels higher than -O0 . [ 11 ] The effect of DAZ is to treat subnormal input arguments to floating-point operations as zero, and the effect of FTZ is to return zero instead of a subnormal float for operations that would result in a subnormal float, even if the input arguments are not themselves subnormal. clang and gcc have varying default states depending on platform and optimization level.
A non- C99 -compliant method of enabling the DAZ and FTZ flags on targets supporting SSE is given below, but is not widely supported. It is known to work on Mac OS X since at least 2006. [ 12 ]
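On Mac OS X the method uses an Apple-specific floating-point environment; a minimal sketch (FE_DFL_DISABLE_SSE_DENORMS_ENV is an Apple extension to fenv.h, not part of C99):

#include <fenv.h>

void disable_denormals(void) {
    /* Apple extension: a predefined environment with DAZ and FTZ enabled. */
    fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);
}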
For other x86-SSE platforms where the C library has not yet implemented this flag, the following may work: [ 13 ]
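A sketch of that approach: in the MXCSR control register, bit 6 is DAZ and bit 15 is FTZ, and both can be set through the SSE intrinsics:

#include <xmmintrin.h>

void enable_daz_ftz(void) {
    _mm_setcsr(_mm_getcsr() | 0x0040);   /* DAZ: treat subnormal inputs as zero */
    _mm_setcsr(_mm_getcsr() | 0x8000);   /* FTZ: flush subnormal results to zero */
}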
The _MM_SET_DENORMALS_ZERO_MODE and _MM_SET_FLUSH_ZERO_MODE macros wrap a more readable interface for the code above. [ 14 ]
Most compilers will already provide the previous macro by default, otherwise the following code snippet can be used (the definition for FTZ is analogous):
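A definition equivalent to the one shipped in Intel's headers looks like this (the FTZ variant is analogous, with mask 0x8000):

#include <xmmintrin.h>

#define _MM_DENORMALS_ZERO_MASK  0x0040
#define _MM_DENORMALS_ZERO_ON    0x0040
#define _MM_DENORMALS_ZERO_OFF   0x0000

#define _MM_SET_DENORMALS_ZERO_MODE(mode) \
    _mm_setcsr((_mm_getcsr() & ~_MM_DENORMALS_ZERO_MASK) | (mode))
#define _MM_GET_DENORMALS_ZERO_MODE() \
    (_mm_getcsr() & _MM_DENORMALS_ZERO_MASK)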
The default denormalization behavior is mandated by the ABI , and therefore well-behaved software should save and restore the denormalization mode before returning to the caller or calling code in other libraries.
AArch32 NEON (SIMD) FPU always uses a flush-to-zero mode [ citation needed ] , which is the same as FTZ + DAZ . For the scalar FPU and in the AArch64 SIMD, the flush-to-zero behavior is optional and controlled by the FZ bit of the control register – FPSCR in Arm32 and FPCR in AArch64. [ 15 ]
One way to do this can be:
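A sketch for AArch64 using inline assembly (the FZ bit is bit 24 of FPCR; treat this as illustrative, since compilers and libraries may expose the same control through their own interfaces):

#include <stdint.h>

/* Enable flush-to-zero on AArch64 by setting FPCR.FZ (bit 24). */
static inline void enable_flush_to_zero(void) {
    uint64_t fpcr;
    __asm__ volatile("mrs %0, fpcr" : "=r"(fpcr));
    fpcr |= (UINT64_C(1) << 24);              /* FZ */
    __asm__ volatile("msr fpcr, %0" : : "r"(fpcr));
}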
Some ARM processors have hardware handling of subnormals. | https://en.wikipedia.org/wiki/Subnormal_number |
In mathematics , a subpaving is a set of nonoverlapping boxes of Rⁿ . A subset X of Rⁿ can be approximated by two subpavings X⁻ and X⁺ such that X⁻ ⊂ X ⊂ X⁺ .
In R¹ the boxes are line segments, in R² rectangles and in Rⁿ hyperrectangles. An R² subpaving can also be a " non-regular tiling by rectangles " when it has no holes.
Boxes present the advantage of being very easily manipulated by computers, as they form the heart of interval analysis . Many interval algorithms naturally provide solutions that are regular subpavings. [ 1 ]
In computation , a well-known application of subpavings in R² is the Quadtree data structure. In the image tracing context and other applications it is important to see X⁻ as the topological interior , as illustrated.
The figures show an approximation of the set X = { ( x 1 , x 2 ) ∈ R 2 ∣ x 1 2 + x 2 2 + sin ⁡ ( x 1 + x 2 ) ∈ [ 4 , 9 ] } {\displaystyle X=\{(x_{1},x_{2})\in \mathbb {R} ^{2}\mid x_{1}^{2}+x_{2}^{2}+\sin(x_{1}+x_{2})\in [4,9]\}} with different accuracies: the set X⁻ corresponds to red boxes and the set X⁺ contains all red and yellow boxes.
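A minimal C sketch of the interval bisection that produces such a subpaving for this set, anticipating the set inversion application discussed next. To keep the interval arithmetic short it uses the deliberately crude enclosure sin(I) ⊆ [-1, 1]; a tighter sine enclosure would classify more boxes as inside. The function and tolerance names are illustrative:

#include <stdio.h>

typedef struct { double lo, hi; } ival;    /* a closed interval [lo, hi] */

static double max2(double a, double b) { return a > b ? a : b; }

/* Interval square: monotone on each side of zero. */
static ival isq(ival x) {
    if (x.lo >= 0) return (ival){ x.lo * x.lo, x.hi * x.hi };
    if (x.hi <= 0) return (ival){ x.hi * x.hi, x.lo * x.lo };
    return (ival){ 0, max2(x.lo * x.lo, x.hi * x.hi) };
}

/* Enclosure of f(x1,x2) = x1^2 + x2^2 + sin(x1+x2), bounding sin by [-1,1]. */
static ival f(ival x1, ival x2) {
    ival a = isq(x1), b = isq(x2);
    return (ival){ a.lo + b.lo - 1.0, a.hi + b.hi + 1.0 };
}

/* Classify the box [x1] x [x2] against [4,9]; bisect undecided boxes. */
static void sivia(ival x1, ival x2, double eps) {
    ival y = f(x1, x2);
    if (y.lo >= 4 && y.hi <= 9) {                       /* box lies inside X-  */
        printf("inside   [%g,%g]x[%g,%g]\n", x1.lo, x1.hi, x2.lo, x2.hi);
        return;
    }
    if (y.hi < 4 || y.lo > 9) return;                   /* box lies outside X+ */
    if (x1.hi - x1.lo < eps && x2.hi - x2.lo < eps) {   /* undecided but small */
        printf("boundary [%g,%g]x[%g,%g]\n", x1.lo, x1.hi, x2.lo, x2.hi);
        return;
    }
    if (x1.hi - x1.lo > x2.hi - x2.lo) {                /* bisect widest side  */
        double m = 0.5 * (x1.lo + x1.hi);
        sivia((ival){ x1.lo, m }, x2, eps);
        sivia((ival){ m, x1.hi }, x2, eps);
    } else {
        double m = 0.5 * (x2.lo + x2.hi);
        sivia(x1, (ival){ x2.lo, m }, eps);
        sivia(x1, (ival){ m, x2.hi }, eps);
    }
}

int main(void) {
    sivia((ival){ -4, 4 }, (ival){ -4, 4 }, 0.25);      /* initial search box */
    return 0;
}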
Combined with interval-based methods , subpavings are used to approximate the solution set of non-linear problems such as set inversion problems . [ 2 ] Subpavings can also be used to prove that a set defined by nonlinear inequalities is path connected , [ 3 ] to provide topological properties of such sets, [ 4 ] to solve piano-mover's problems [ 5 ] or to implement set computation. [ 6 ] | https://en.wikipedia.org/wiki/Subpaving |
In zoological nomenclature , a subphylum is a taxonomic rank below the rank of phylum .
The taxonomic rank of " subdivision " in fungi and plant taxonomy is equivalent to "subphylum" in zoological taxonomy. Some plant taxonomists have also used the rank of subphylum, for instance monocotyledons as a subphylum of phylum Angiospermae and vertebrates as a subphylum of phylum Chordata . [ 1 ]
Subphylum is:
Where convenient, subphyla can in turn be divided into infraphyla ; such an infraphylum is in turn superordinate to any classes or superclasses in the hierarchy .
Not all animal phyla are divided into subphyla. Those that are include:
Examples of infraphyla include the Mycetozoa , the Gnathostomata and the Agnatha . | https://en.wikipedia.org/wiki/Subphylum |
A subpulmonic effusion is excess fluid that collects at the base of the lung , in the space between the pleura and diaphragm. It is a type of pleural effusion in which the fluid collects in this particular space but can be "layered out" with decubitus chest radiographs. The costophrenic angle blunting usually found with larger pleural effusions is minimal or absent. The occult nature of the effusion can be suspected indirectly on a radiograph by elevation of the apparent right hemidiaphragm with a lateral peak and medial flattening. On the left, the presence of the gastric bubble separated from the lung base by more than 2 cm can also suggest the diagnosis . Lateral decubitus views, with the patient lying on their side, can confirm the effusion as it will layer along the lateral chest wall. [ citation needed ]
Subpulmonic space refers to the space below the lungs in which subpulmonic fluid collects. Subpulmonic effusions are common particularly in trauma cases, where the apparent hemidiaphragm appears elevated and its apex is displaced laterally. [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Subpulmonic_effusion |
In the mathematical fields of category theory and abstract algebra , a subquotient is a quotient object of a subobject . Subquotients are particularly important in abelian categories , and in group theory , where they are also known as sections , though this conflicts with a different meaning in category theory.
So in the algebraic structure of groups, H {\displaystyle H} is a subquotient of G {\displaystyle G} if there exists a subgroup G ′ {\displaystyle G'} of G {\displaystyle G} and a normal subgroup G ″ {\displaystyle G''} of G ′ {\displaystyle G'} so that H {\displaystyle H} is isomorphic to G ′ / G ″ {\displaystyle G'/G''} .
In the literature about sporadic groups , wordings like " H {\displaystyle H} is involved in G {\displaystyle G} " [ 1 ] can be found, with the apparent meaning of " H {\displaystyle H} is a subquotient of G {\displaystyle G} ".
As in the context of subgroups, in the context of subquotients the term trivial may be used for the two subquotients G {\displaystyle G} and { 1 } {\displaystyle \{1\}} which are present in every group G {\displaystyle G} . [ citation needed ]
A quotient of a subrepresentation of a representation (of, say, a group) might be called a subquotient representation; e. g., Harish-Chandra 's subquotient theorem. [ 2 ]
There are subquotients of a group which are neither subgroups nor quotients of it. For example, according to the article Sporadic group , Fi 22 has a double cover which is a subgroup of Fi 23 , so it is a subquotient of Fi 23 without being a subgroup or quotient of it.
The relation subquotient of is an order relation, which shall be denoted by ⪯ {\displaystyle \preceq } . Its transitivity shall be proved below for groups.
Let H ′ / H ″ {\displaystyle H'/H''} be a subquotient of H {\displaystyle H} , furthermore H := G ′ / G ″ {\displaystyle H:=G'/G''} be a subquotient of G {\displaystyle G} and φ : G ′ → H {\displaystyle \varphi \colon G'\to H} be the canonical homomorphism x ↦ x G ″ {\displaystyle x\mapsto x\,G''} . Then φ {\displaystyle \varphi } is surjective, as are its restrictions φ − 1 ( H ′ ) → H ′ {\displaystyle \varphi ^{-1}\left(H'\right)\to H'} and φ − 1 ( H ″ ) → H ″ {\displaystyle \varphi ^{-1}\left(H''\right)\to H''} .
The preimages φ − 1 ( H ′ ) {\displaystyle \varphi ^{-1}\left(H'\right)} and φ − 1 ( H ″ ) {\displaystyle \varphi ^{-1}\left(H''\right)} are both subgroups of G ′ {\displaystyle G'} containing G ″ , {\displaystyle G'',} and it is φ ( φ − 1 ( H ′ ) ) = H ′ {\displaystyle \varphi \left(\varphi ^{-1}\left(H'\right)\right)=H'} and φ ( φ − 1 ( H ″ ) ) = H ″ , {\displaystyle \varphi \left(\varphi ^{-1}\left(H''\right)\right)=H'',} because every h ∈ H {\displaystyle h\in H} has a preimage g ∈ G ′ {\displaystyle g\in G'} with φ ( g ) = h . {\displaystyle \varphi (g)=h.} Moreover, the subgroup φ − 1 ( H ″ ) {\displaystyle \varphi ^{-1}\left(H''\right)} is normal in φ − 1 ( H ′ ) . {\displaystyle \varphi ^{-1}\left(H'\right).}
As a consequence, the subquotient H ′ / H ″ {\displaystyle H'/H''} of H {\displaystyle H} is a subquotient of G {\displaystyle G} in the form H ′ / H ″ ≅ φ − 1 ( H ′ ) / φ − 1 ( H ″ ) . {\displaystyle H'/H''\cong \varphi ^{-1}\left(H'\right)/\varphi ^{-1}\left(H''\right).}
In constructive set theory , where the law of excluded middle does not necessarily hold, one can consider the relation subquotient of as replacing the usual order relation(s) on cardinals . When one has the law of the excluded middle, then a subquotient Y {\displaystyle Y} of X {\displaystyle X} is either the empty set or there is an onto function X → Y {\displaystyle X\to Y} . This order relation is traditionally denoted ≤ ∗ . {\displaystyle \leq ^{\ast }.} If additionally the axiom of choice holds, then Y {\displaystyle Y} has a one-to-one function to X {\displaystyle X} and this order relation is the usual ≤ {\displaystyle \leq } on corresponding cardinals. | https://en.wikipedia.org/wiki/Subquotient |
A satellite ground track or satellite ground trace is the path on the surface of a planet directly below a satellite 's trajectory . It is also known as a suborbital track or subsatellite track , and is the vertical projection of the satellite's orbit onto the surface of the Earth (or whatever body the satellite is orbiting). [ 1 ] A satellite ground track may be thought of as a path along the Earth's surface that traces the movement of an imaginary line between the satellite and the center of the Earth. In other words, the ground track is the set of points at which the satellite will pass directly overhead, or cross the zenith , in the frame of reference of a ground observer. [ 2 ]
The ground track of a satellite can take a number of different forms, depending on the values of the orbital elements , parameters that define the size, shape, and orientation of the satellite's orbit, although identification of a track is always reliant upon recognition of the physical object that is in motion; [ note 1 ] this was emphasised during speculation over the Vela incident , whereby identification of the matter in question was subject to numerous theories. [ 3 ]
Typically, satellites have a roughly sinusoidal ground track. A satellite with an orbital inclination between zero and ninety degrees is said to be in what is called a direct or prograde orbit , meaning that it orbits in the same direction as the planet's rotation. A satellite with an orbital inclination between 90° and 180° (or, equivalently, between 0° and −90°) is said to be in a retrograde orbit . [ note 2 ]
A satellite in a direct orbit with an orbital period less than one day will tend to move from west to east along its ground track. This is called "apparent direct" motion. A satellite in a direct orbit with an orbital period greater than one day will tend to move from east to west along its ground track, in what is called "apparent retrograde" motion. This effect occurs because the satellite orbits more slowly than the speed at which the Earth rotates beneath it. Any satellite in a true retrograde orbit will always move from east to west along its ground track, regardless of the length of its orbital period.
Because a satellite in an eccentric orbit moves faster near perigee and slower near apogee, it is possible for a satellite to track eastward during part of its orbit and westward during another part. This phenomenon allows for ground tracks that cross over themselves in a single orbit, as in the geosynchronous and Molniya orbits discussed below.
A satellite whose orbital period is an integer fraction of a day (e.g., 24 hours, 12 hours, 8 hours, etc.) will follow roughly the same ground track every day. This ground track is shifted east or west depending on the longitude of the ascending node , which can vary over time due to perturbations of the orbit. If the period of the satellite is slightly longer than an integer fraction of a day, the ground track will shift west over time; if it is slightly shorter, the ground track will shift east. [ 2 ] [ 4 ]
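The size of the shift is easy to compute. The following C fragment (a small numeric illustration that ignores nodal precession, which is treated later in this article; the example period is arbitrary and the sidereal day length is the standard 86164.1 s) prints the westward drift of the ground track per orbit:

#include <stdio.h>

int main(void) {
    const double t_sidereal = 86164.1;   /* Earth's rotational period, seconds */
    double t_orbit = 6000.0;             /* example satellite period, seconds  */
    /* During one revolution the Earth turns 360 * T/T_sidereal degrees east,
       so the ground track slides that many degrees west. */
    double shift = 360.0 * t_orbit / t_sidereal;
    printf("westward shift per orbit: %.3f degrees\n", shift);
    /* The track repeats after j orbits and k days when j*T = k*T_sidereal. */
    double j = 43.0, k = 3.0;            /* e.g. a 43-orbit, 3-day repeat */
    printf("a %g-orbit, %g-day repeat needs T = %.1f s\n", j, k, k * t_sidereal / j);
    return 0;
}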
As the orbital period of a satellite increases, approaching the rotational period of the Earth (in other words, as its average orbital speed slows towards the rotational speed of the Earth), its sinusoidal ground track will become compressed longitudinally, meaning that the "nodes" (the points at which it crosses the equator ) will become closer together until at geosynchronous orbit they lie directly on top of each other. For orbital periods longer than the Earth's rotational period, an increase in the orbital period corresponds to a longitudinal stretching out of the (apparent retrograde) ground track.
A satellite whose orbital period is equal to the rotational period of the Earth is said to be in a geosynchronous orbit . Its ground track will have a "figure eight" shape over a fixed location on the Earth, crossing the equator twice each day. It will track eastward when it is on the part of its orbit closest to perigee , and westward when it is closest to apogee .
A special case of the geosynchronous orbit, the geostationary orbit , has an eccentricity of zero (meaning the orbit is circular), and an inclination of zero in the Earth-Centered, Earth-Fixed coordinate system (meaning the orbital plane is not tilted relative to the Earth's equator). The "ground track" in this case consists of a single point on the Earth's equator, above which the satellite sits at all times. Note that the satellite is still orbiting the Earth — its apparent lack of motion is due to the fact that the Earth is rotating about its own center of mass at the same rate as the satellite is orbiting.
Orbital inclination is the angle formed between the plane of an orbit and the equatorial plane of the Earth. The geographic latitudes covered by the ground track will range from –i to i , where i is the orbital inclination. [ 4 ] In other words, the greater the inclination of a satellite's orbit, the further north and south its ground track will pass. A satellite with an inclination of exactly 90° is said to be in a polar orbit , meaning it passes over the Earth's north and south poles .
Launch sites at lower latitudes are often preferred partly for the flexibility they allow in orbital inclination; the initial inclination of an orbit is constrained to be greater than or equal to the launch latitude. Vehicles launched from Cape Canaveral , for instance, will have an initial orbital inclination of at least 28°27′, the latitude of the launch site—and to achieve this minimum requires launching with a due east azimuth , which may not always be feasible given other launch constraints. At the extremes, a launch site located on the equator can launch directly into any desired inclination, while a hypothetical launch site at the north or south pole would only be able to launch into polar orbits. (While it is possible to perform an orbital inclination change maneuver once on orbit, such maneuvers are typically among the most costly, in terms of fuel, of all orbital maneuvers, and are typically avoided or minimized to the extent possible.)
In addition to providing for a wider range of initial orbit inclinations, low-latitude launch sites offer the benefit of requiring less energy to make orbit (at least for prograde orbits, which comprise the vast majority of launches), due to the initial velocity provided by the Earth's rotation. The desire for equatorial launch sites, coupled with geopolitical and logistical realities, has fostered the development of floating launch platforms, most notably Sea Launch .
If the argument of perigee is zero, meaning that perigee and apogee lie in the equatorial plane, then the ground track of the satellite will appear the same above and below the equator (i.e., it will exhibit 180° rotational symmetry about the orbital nodes .) If the argument of perigee is non-zero, however, the satellite will behave differently in the northern and southern hemispheres. The Molniya orbit , with an argument of perigee near −90°, is an example of such a case. In a Molniya orbit, apogee occurs at a high latitude (63°), and the orbit is highly eccentric ( e = 0.72). This causes the satellite to "hover" over a region of the northern hemisphere for a long time, while spending very little time over the southern hemisphere. This phenomenon is known as "apogee dwell", and is desirable for communications for high latitude regions. [ 4 ]
As orbital operations are often required to monitor a specific location on Earth, orbits that cover the same ground track periodically are often used. On Earth, these orbits are commonly referred to as Earth-repeat orbits, and are often designed with "frozen orbit" parameters to achieve a repeat ground track orbit with stable (minimally time-varying) orbit elements. [ 5 ] These orbits use the nodal precession effect to shift the orbit so the ground track coincides with that of a previous orbit, so that this essentially balances out the offset in the revolution of the orbited body. The rotation of the planet during one orbital period shifts the ground track in longitude by Δ L 1 = − 2 π T T E {\displaystyle \Delta L_{1}=-2\pi {\frac {T}{T_{E}}}}
where T {\displaystyle T} is the orbital period of the satellite and T E {\displaystyle T_{E}} is the rotational (sidereal) period of the planet; the shift is negative (westward) because the planet rotates eastward beneath the orbit.
The effect of the nodal precession can be quantified as the shift per orbit Δ L 2 = − 3 π J 2 R 2 cos ⁡ i a 2 ( 1 − e 2 ) 2 {\displaystyle \Delta L_{2}=-{\frac {3\pi J_{2}R^{2}\cos i}{a^{2}\left(1-e^{2}\right)^{2}}}}
where J 2 {\displaystyle J_{2}} is the second dynamic form factor of the planet, R {\displaystyle R} its equatorial radius, a {\displaystyle a} the semi-major axis of the orbit, e {\displaystyle e} its eccentricity and i {\displaystyle i} its inclination.
These two effects must cancel out after a set j {\displaystyle j} orbital revolutions and k {\displaystyle k} (sidereal) days. Hence, equating the elapsed time to the orbital period of the satellite and combining the above two equations yields an equation which holds for any orbit that is a repeat orbit: j ( Δ L 1 + Δ L 2 ) = − 2 π k , {\displaystyle j\left(\Delta L_{1}+\Delta L_{2}\right)=-2\pi k,} equivalently T = T E ( k j + Δ L 2 2 π ) {\displaystyle T=T_{E}\left({\frac {k}{j}}+{\frac {\Delta L_{2}}{2\pi }}\right)}
where j {\displaystyle j} and k {\displaystyle k} are coprime positive integers. | https://en.wikipedia.org/wiki/Subsatellite_point |
Subsea 7 S.A. (stylised as Subsea7 ) is a Luxembourgish multinational services company involved in subsea engineering and construction serving the offshore energy industry . [ 4 ] The company is registered in Luxembourg with its headquarters in London . [ 5 ] Subsea 7 delivers offshore projects and provides services for the energy industry.
The company supports the offshore energy transition by working on lower-carbon oil and gas projects and by providing services for the growth of renewables and other emerging energy industries.
The company was formed by the January 2011 merger of two predecessor companies, Acergy S.A. and Subsea 7, Inc. [ 6 ] [ 7 ]
Acergy was founded in 1970 as Stolt Nielsen Seaway, a division of the Norwegian Stolt-Nielsen Group offering divers for the exploration of the North Sea . After a series of acquisitions, including Comex Services of France in 1992 and Houston , Texas –based Ceanic Corporation in 1998, the company changed its name to Stolt Offshore in 2000. Five years later Stolt-Nielsen spun out the company as an independent business listed on the Oslo Stock Exchange and NASDAQ . The firm renamed as Acergy in March 2006.
Subsea 7, Inc. was the result of a series of mergers between DSND Offshore AS, Halliburton Subsea , Subsea Offshore and Rockwater over an extended period, with Rockwater and SubSea merging in 1999 to form Halliburton Subsea, and the resulting company operating as a 50/50 joint venture with DSND in 2002 with the name Subsea 7. [ 8 ] Halliburton exited the joint venture in November 2004. The company was listed on the Oslo Stock Exchange in August 2005 following its restructuring the same year. [ 8 ]
On 21 June 2010 the combination of Acergy S.A. and Subsea 7 Inc. was announced and was completed on 7 January 2011. The new entity took the Subsea 7 name while retaining Acergy's Luxembourg domicile and operational headquarters in London. [ 9 ] The chairman and chief executive roles were filled by Kristian Siem and Jean Cahuzac, who had previously held the same roles at Subsea 7 and Acergy respectively.
The current headquarters for Subsea 7 is located at 40 Brighton Road, Sutton, London . [ 1 ] | https://en.wikipedia.org/wiki/Subsea_7 |
In mathematics , a subsequence of a given sequence is a sequence that can be derived from the given sequence by deleting some or no elements without changing the order of the remaining elements. For example, the sequence ⟨ A , B , D ⟩ {\displaystyle \langle A,B,D\rangle } is a subsequence of ⟨ A , B , C , D , E , F ⟩ {\displaystyle \langle A,B,C,D,E,F\rangle } obtained after removal of elements C , {\displaystyle C,} E , {\displaystyle E,} and F . {\displaystyle F.} The relation of one sequence being the subsequence of another is a partial order .
Subsequences can contain consecutive elements which were not consecutive in the original sequence. A subsequence which consists of a consecutive run of elements from the original sequence, such as ⟨ B , C , D ⟩ , {\displaystyle \langle B,C,D\rangle ,} from ⟨ A , B , C , D , E , F ⟩ , {\displaystyle \langle A,B,C,D,E,F\rangle ,} is a substring . The substring is a refinement of the subsequence.
The list of all subsequences for the word " apple " would be " a ", " ap ", " al ", " ae ", " app ", " apl ", " ape ", " ale ", " appl ", " appe ", " aple ", " apple ", " p ", " pp ", " pl ", " pe ", " ppl ", " ppe ", " ple ", " pple ", " l ", " le ", " e ", "" ( empty string ).
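Such a list can be generated mechanically: each of the 2^n bitmasks over the word's positions selects one subsequence, and duplicates (caused here by the repeated letter p) are filtered out. A C sketch:

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *w = "apple";
    int n = (int)strlen(w);                  /* n = 5, so 32 bitmasks */
    char seen[32][8];
    int count = 0;
    for (unsigned mask = 0; mask < (1u << n); mask++) {
        char buf[8];
        int len = 0;
        for (int i = 0; i < n; i++)          /* bit i set: keep w[i] */
            if (mask & (1u << i)) buf[len++] = w[i];
        buf[len] = '\0';
        int dup = 0;                         /* skip strings already produced */
        for (int k = 0; k < count; k++)
            if (strcmp(seen[k], buf) == 0) { dup = 1; break; }
        if (!dup) strcpy(seen[count++], buf);
    }
    printf("%d distinct subsequences\n", count);   /* prints 24, counting "" */
    return 0;
}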
Given two sequences X {\displaystyle X} and Y , {\displaystyle Y,} a sequence Z {\displaystyle Z} is said to be a common subsequence of X {\displaystyle X} and Y , {\displaystyle Y,} if Z {\displaystyle Z} is a subsequence of both X {\displaystyle X} and Y . {\displaystyle Y.} For example, if X = ⟨ A , C , B , D , E , G , C , E , D , B , G ⟩ and {\displaystyle X=\langle A,C,B,D,E,G,C,E,D,B,G\rangle \qquad {\text{ and}}} Y = ⟨ B , E , G , J , C , F , E , K , B ⟩ and {\displaystyle Y=\langle B,E,G,J,C,F,E,K,B\rangle \qquad {\text{ and}}} Z = ⟨ B , E , E ⟩ . {\displaystyle Z=\langle B,E,E\rangle .} then Z {\displaystyle Z} is said to be a common subsequence of X {\displaystyle X} and Y . {\displaystyle Y.}
This would not be the longest common subsequence , since Z {\displaystyle Z} only has length 3, and the common subsequence ⟨ B , E , E , B ⟩ {\displaystyle \langle B,E,E,B\rangle } has length 4. The longest common subsequence of X {\displaystyle X} and Y {\displaystyle Y} is ⟨ B , E , G , C , E , B ⟩ . {\displaystyle \langle B,E,G,C,E,B\rangle .}
Subsequences have applications to computer science , [ 1 ] especially in the discipline of bioinformatics , where computers are used to compare, analyze, and store DNA , RNA , and protein sequences.
Take two sequences of DNA containing 37 elements, say:
The longest common subsequence of sequences 1 and 2 is:
This can be illustrated by highlighting the 27 elements of the longest common subsequence into the initial sequences:
Another way to show this is to align the two sequences, that is, to position elements of the longest common subsequence in a same column (indicated by the vertical bar) and to introduce a special character (here, a dash) for padding of arisen empty subsequences:
Subsequences are used to determine how similar the two strands of DNA are, using the DNA bases: adenine , guanine , cytosine and thymine .
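A longest common subsequence is found by dynamic programming over prefix pairs, the same algorithm used at scale on DNA sequences. A C sketch using the shorter example sequences from above (the fixed 64-entry table is an illustrative simplification; real bioinformatics code sizes it to the inputs):

#include <stdio.h>
#include <string.h>

/* L[i][j] = length of a longest common subsequence of s[0..i) and t[0..j). */
static void lcs(const char *s, const char *t, char *out) {
    int m = (int)strlen(s), n = (int)strlen(t);
    int L[64][64] = {0};                  /* assumes inputs shorter than 64 */
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            L[i][j] = (s[i-1] == t[j-1]) ? L[i-1][j-1] + 1
                    : (L[i-1][j] > L[i][j-1] ? L[i-1][j] : L[i][j-1]);
    int k = L[m][n];                      /* backtrack to recover one LCS */
    out[k] = '\0';
    for (int i = m, j = n; i > 0 && j > 0; ) {
        if (s[i-1] == t[j-1]) { out[--k] = s[i-1]; i--; j--; }
        else if (L[i-1][j] >= L[i][j-1]) i--;
        else j--;
    }
}

int main(void) {
    char out[64];
    lcs("ACBDEGCEDBG", "BEGJCFEKB", out);
    printf("%s\n", out);                  /* prints BEGCEB, of length 6 */
    return 0;
}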
This article incorporates material from subsequence on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Subsequence |
In mathematics, a set A is a subset of a set B if all elements of A are also elements of B ; B is then a superset of A . It is possible for A and B to be equal; if they are unequal, then A is a proper subset of B . The relationship of one set being a subset of another is called inclusion (or sometimes containment ). A is a subset of B may also be expressed as B includes (or contains) A or A is included (or contained) in B . A k -subset is a subset with k elements.
When quantified, A ⊆ B {\displaystyle A\subseteq B} is represented as ∀ x ( x ∈ A ⇒ x ∈ B ) . {\displaystyle \forall x\left(x\in A\Rightarrow x\in B\right).} [ 1 ]
One can prove the statement A ⊆ B {\displaystyle A\subseteq B} by applying a proof technique known as the element argument [ 2 ] :
Let sets A and B be given. To prove that A ⊆ B , {\displaystyle A\subseteq B,} suppose that c is a particular but arbitrarily chosen element of A , then show that c is also an element of B .
The validity of this technique can be seen as a consequence of universal generalization : the technique shows ( c ∈ A ) ⇒ ( c ∈ B ) {\displaystyle (c\in A)\Rightarrow (c\in B)} for an arbitrarily chosen element c . Universal generalisation then implies ∀ x ( x ∈ A ⇒ x ∈ B ) , {\displaystyle \forall x\left(x\in A\Rightarrow x\in B\right),} which is equivalent to A ⊆ B , {\displaystyle A\subseteq B,} as stated above.
If A and B are sets and every element of A is also an element of B , then:
If A is a subset of B , but A is not equal to B (i.e. there exists at least one element of B which is not an element of A ), then:
The empty set , written { } {\displaystyle \{\}} or ∅ , {\displaystyle \varnothing ,} has no elements, and therefore is vacuously a subset of any set X .
Some authors use the symbols ⊂ {\displaystyle \subset } and ⊃ {\displaystyle \supset } to indicate subset and superset respectively; that is, with the same meaning as, and instead of, the symbols ⊆ {\displaystyle \subseteq } and ⊇ . {\displaystyle \supseteq .} [ 4 ] For example, for these authors, it is true of every set A that A ⊂ A {\displaystyle A\subset A} (a reflexive relation ).
Other authors prefer to use the symbols ⊂ {\displaystyle \subset } and ⊃ {\displaystyle \supset } to indicate proper (also called strict) subset and proper superset respectively; that is, with the same meaning as, and instead of, the symbols ⊊ {\displaystyle \subsetneq } and ⊋ . {\displaystyle \supsetneq .} [ 5 ] This usage makes ⊆ {\displaystyle \subseteq } and ⊂ {\displaystyle \subset } analogous to the inequality symbols ≤ {\displaystyle \leq } and < . {\displaystyle <.} For example, if x ≤ y , {\displaystyle x\leq y,} then x may or may not equal y , but if x < y , {\displaystyle x<y,} then x definitely does not equal y , and is less than y (an irreflexive relation ). Similarly, using the convention that ⊂ {\displaystyle \subset } is proper subset, if A ⊆ B , {\displaystyle A\subseteq B,} then A may or may not equal B , but if A ⊂ B , {\displaystyle A\subset B,} then A definitely does not equal B .
Another example in an Euler diagram :
The set of all subsets of S {\displaystyle S} is called its power set , and is denoted by P ( S ) {\displaystyle {\mathcal {P}}(S)} . [ 6 ]
The inclusion relation ⊆ {\displaystyle \subseteq } is a partial order on the set P ( S ) {\displaystyle {\mathcal {P}}(S)} defined by A ≤ B ⟺ A ⊆ B {\displaystyle A\leq B\iff A\subseteq B} . We may also partially order P ( S ) {\displaystyle {\mathcal {P}}(S)} by reverse set inclusion by defining A ≤ B if and only if B ⊆ A . {\displaystyle A\leq B{\text{ if and only if }}B\subseteq A.}
For the power set P ( S ) {\displaystyle \operatorname {\mathcal {P}} (S)} of a set S , the inclusion partial order is—up to an order isomorphism —the Cartesian product of k = | S | {\displaystyle k=|S|} (the cardinality of S ) copies of the partial order on { 0 , 1 } {\displaystyle \{0,1\}} for which 0 < 1. {\displaystyle 0<1.} This can be illustrated by enumerating S = { s 1 , s 2 , … , s k } {\displaystyle S=\left\{s_{1},s_{2},\ldots ,s_{k}\right\}} and associating with each subset T ⊆ S {\displaystyle T\subseteq S} (i.e., each element of 2 S {\displaystyle 2^{S}} ) the k -tuple from { 0 , 1 } k , {\displaystyle \{0,1\}^{k},} of which the i th coordinate is 1 if and only if s i {\displaystyle s_{i}} is a member of T .
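This correspondence between subsets and tuples in {0,1}^k is exactly how power sets are usually enumerated in code. A C sketch for k = 3 (the element names are illustrative):

#include <stdio.h>

int main(void) {
    const char *s[] = { "s1", "s2", "s3" };      /* S = {s1, s2, s3} */
    int k = 3;
    for (unsigned t = 0; t < (1u << k); t++) {   /* each mask is one subset */
        printf("{");
        for (int i = 0; i < k; i++)              /* bit i <-> membership of s_i */
            if (t & (1u << i)) printf(" %s", s[i]);
        printf(" }\n");
    }
    /* Inclusion is the coordinatewise order on masks: A is a subset of B
       exactly when A has no bit set outside B. */
    unsigned a = 0x1, b = 0x3;                   /* {s1} and {s1, s2} */
    printf("a subset of b: %d\n", (a & ~b) == 0);   /* prints 1 */
    return 0;
}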
The set of all k {\displaystyle k} -subsets of A {\displaystyle A} is denoted by ( A k ) {\displaystyle {\tbinom {A}{k}}} , by analogy with the notation for binomial coefficients , which count the number of k {\displaystyle k} -subsets of an n {\displaystyle n} -element set. In set theory , the notation [ A ] k {\displaystyle [A]^{k}} is also common, especially when k {\displaystyle k} is a transfinite cardinal number . | https://en.wikipedia.org/wiki/Subset |
The subsolar point on a planet or a moon is the point at which its Sun is perceived to be directly overhead (at the zenith ); [ 1 ] that is, where the Sun's rays strike the planet exactly perpendicular to its surface. Equivalently, it is the location where the Sun culminates at the zenith, which happens at solar noon . At this point the Sun's rays fall exactly vertically relative to an object on the ground and thus cast no observable shadow . [ 2 ]
To an observer on a planet with an orientation and rotation similar to those of Earth , the subsolar point will appear to move westward at approximately 1,600 km/h, completing one circuit around the globe each day, moving approximately along the equator . However, it will also move north and south between the tropics over the course of a year, so its path over the year resembles a helix .
The term subsolar point can also mean the point closest to the Sun on an astronomical object , even though the Sun might not be visible.
On Earth, the subsolar point occurs within the tropics . The subsolar point contacts the Tropic of Cancer on the June solstice and the Tropic of Capricorn on the December solstice . The subsolar point crosses the Equator on the March and September equinoxes .
The subsolar point moves constantly on the surface of the Earth, but for any given time, its coordinates, or latitude and longitude , can be calculated as follows: [ 3 ]
ϕ s = δ , {\displaystyle \phi _{s}=\delta ,} λ s = − 15 ( T G M T − 12 + E m i n / 60 ) . {\displaystyle \lambda _{s}=-15(T_{\mathrm {GMT} }-12+E_{\mathrm {min} }/60).}
where δ {\displaystyle \delta } is the declination of the Sun, T G M T {\displaystyle T_{\mathrm {GMT} }} is Greenwich Mean Time expressed in hours, and E m i n {\displaystyle E_{\mathrm {min} }} is the equation of time in minutes. | https://en.wikipedia.org/wiki/Subsolar_point |
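As a rough illustration of the two formulas above, here is a minimal Python sketch; the function name is this example's own, and the caller must supply the solar declination and the equation-of-time value, which the formulas treat as given inputs.

```python
def subsolar_point(declination_deg, gmt_hours, eot_minutes):
    """Subsolar latitude and longitude in degrees, per the formulas
    phi_s = delta and lambda_s = -15 * (T_GMT - 12 + E_min / 60)."""
    lat = declination_deg
    lon = -15.0 * (gmt_hours - 12.0 + eot_minutes / 60.0)
    lon = (lon + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)
    return lat, lon

# Near an equinox (declination ~ 0) at 15:00 GMT with E ~ -7 minutes,
# the subsolar point lies on the equator, west of Greenwich:
print(subsolar_point(0.0, 15.0, -7.0))    # (0.0, -43.25)
```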
In mathematics, the subspace theorem says that points of small height in projective space lie in a finite number of hyperplanes . It is a result obtained by Wolfgang M. Schmidt ( 1972 ).
The subspace theorem states that if L 1 ,..., L n are linearly independent linear forms in n variables with algebraic coefficients and if ε>0 is any given real number, then
the non-zero integer points x with | L 1 ( x ) ⋯ L n ( x ) | < | x | − ε {\displaystyle |L_{1}(x)\cdots L_{n}(x)|<|x|^{-\varepsilon }}
lie in a finite number of proper subspaces of Q n .
A quantitative form of the theorem, which determines the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was generalised by Schlickewei (1977) to allow more general absolute values on number fields .
The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and solution of the S-unit equation . [ 1 ]
The following corollary to the subspace theorem is often itself referred to as the subspace theorem .
If a 1 ,..., a n are algebraic such that 1, a 1 ,..., a n are linearly independent over Q and ε>0 is any given real number, then there are only finitely many rational n -tuples ( x 1 /y,..., x n /y) with | a i − x i / y | < y − 1 − 1 / n − ε , i = 1 , … , n . {\displaystyle |a_{i}-x_{i}/y|<y^{-1-1/n-\varepsilon },\quad i=1,\ldots ,n.}
The specialization n = 1 gives the Thue–Siegel–Roth theorem . One may also note that the exponent 1+1/ n +ε is best possible by Dirichlet's theorem on diophantine approximation . | https://en.wikipedia.org/wiki/Subspace_theorem |
In topology and related areas of mathematics , a subspace of a topological space ( X , 𝜏 ) is a subset S of X which is equipped with the topology induced from 𝜏 , called the subspace topology [ 1 ] (or the relative topology , [ 1 ] or the induced topology , [ 1 ] or the trace topology ). [ 2 ]
Given a topological space ( X , τ ) {\displaystyle (X,\tau )} and a subset S {\displaystyle S} of X {\displaystyle X} , the subspace topology on S {\displaystyle S} is defined by τ S = { S ∩ U ∣ U ∈ τ } . {\displaystyle \tau _{S}=\{S\cap U\mid U\in \tau \}.}
That is, a subset of S {\displaystyle S} is open in the subspace topology if and only if it is the intersection of S {\displaystyle S} with an open set in ( X , τ ) {\displaystyle (X,\tau )} . If S {\displaystyle S} is equipped with the subspace topology then it is a topological space in its own right, and is called a subspace of ( X , τ ) {\displaystyle (X,\tau )} . Subsets of topological spaces are usually assumed to be equipped with the subspace topology unless otherwise stated.
Alternatively we can define the subspace topology for a subset S {\displaystyle S} of X {\displaystyle X} as the coarsest topology for which the inclusion map ι : S → X {\displaystyle \iota :S\to X}
is continuous .
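On finite examples the definition can be checked directly. A minimal Python sketch (the example space and topology are this illustration's own, not from the source):

```python
def subspace_topology(tau, s):
    """The subspace topology on s: all intersections of s with open sets.
    tau is a topology on X given as a set of frozensets."""
    s = frozenset(s)
    return {s & u for u in tau}

# X = {1, 2, 3} with topology {{}, {1}, {1, 2}, {1, 2, 3}}:
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})}
print(subspace_topology(tau, {2, 3}))
# -> the topology {{}, {2}, {2, 3}} on the subset S = {2, 3}
```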
More generally, suppose ι {\displaystyle \iota } is an injection from a set S {\displaystyle S} to a topological space X {\displaystyle X} . Then the subspace topology on S {\displaystyle S} is defined as the coarsest topology for which ι {\displaystyle \iota } is continuous. The open sets in this topology are precisely the ones of the form ι − 1 ( U ) {\displaystyle \iota ^{-1}(U)} for U {\displaystyle U} open in X {\displaystyle X} . S {\displaystyle S} is then homeomorphic to its image in X {\displaystyle X} (also with the subspace topology) and ι {\displaystyle \iota } is called a topological embedding .
A subspace S {\displaystyle S} is called an open subspace if the injection ι {\displaystyle \iota } is an open map , i.e., if the forward image of an open set of S {\displaystyle S} is open in X {\displaystyle X} . Likewise it is called a closed subspace if the injection ι {\displaystyle \iota } is a closed map .
The distinction between a set and a topological space is often blurred notationally, for convenience, which can be a source of confusion when one first encounters these definitions. Thus, whenever S {\displaystyle S} is a subset of X {\displaystyle X} , and ( X , τ ) {\displaystyle (X,\tau )} is a topological space, then the unadorned symbols " S {\displaystyle S} " and " X {\displaystyle X} " can often be used to refer both to S {\displaystyle S} and X {\displaystyle X} considered as two subsets of X {\displaystyle X} , and also to ( S , τ S ) {\displaystyle (S,\tau _{S})} and ( X , τ ) {\displaystyle (X,\tau )} as the topological spaces, related as discussed above. So phrases such as " S {\displaystyle S} an open subspace of X {\displaystyle X} " are used to mean that ( S , τ S ) {\displaystyle (S,\tau _{S})} is an open subspace of ( X , τ ) {\displaystyle (X,\tau )} , in the sense used above; that is: (i) S ∈ τ {\displaystyle S\in \tau } ; and (ii) S {\displaystyle S} is considered to be endowed with the subspace topology.
In the following, R {\displaystyle \mathbb {R} } represents the real numbers with their usual topology.
The subspace topology has the following characteristic property. Let Y {\displaystyle Y} be a subspace of X {\displaystyle X} and let i : Y → X {\displaystyle i:Y\to X} be the inclusion map. Then for any topological space Z {\displaystyle Z} a map f : Z → Y {\displaystyle f:Z\to Y} is continuous if and only if the composite map i ∘ f {\displaystyle i\circ f} is continuous.
This property is characteristic in the sense that it can be used to define the subspace topology on Y {\displaystyle Y} .
We list some further properties of the subspace topology. In the following let S {\displaystyle S} be a subspace of X {\displaystyle X} .
If a topological space having some topological property implies its subspaces have that property, then we say the property is hereditary . If only closed subspaces must share the property, we call it weakly hereditary . | https://en.wikipedia.org/wiki/Subspace_topology |
In biological classification , subspecies ( pl. : subspecies) is a rank below species , used for populations that live in different areas and vary in size, shape, or other physical characteristics ( morphology ), but that can successfully interbreed. [ 2 ] [ 3 ] Not all species have subspecies, but for those that do there must be at least two. Subspecies is abbreviated as subsp. or ssp. and the singular and plural forms are the same ("the subspecies is" or "the subspecies are").
In zoology , under the International Code of Zoological Nomenclature , the subspecies is the only taxonomic rank below that of species that can receive a name. In botany and mycology , under the International Code of Nomenclature for algae, fungi, and plants , other infraspecific ranks , such as variety , may be named. In bacteriology and virology , under standard bacterial nomenclature and virus nomenclature , there are recommendations but not strict requirements for recognizing other important infraspecific ranks.
A taxonomist decides whether to recognize a subspecies. A common criterion for recognizing two distinct populations as subspecies rather than full species is their ability to interbreed, even if some male offspring may be sterile. [ 4 ] In the wild, subspecies do not interbreed due to geographic isolation or sexual selection . The differences between subspecies are usually less distinct than the differences between species.
The scientific name of a species is a binomial or binomen, and comprises two Latin words, the first denoting the genus and the second denoting the species. [ 5 ] The scientific name of a subspecies is formed slightly differently in the different nomenclature codes. In zoology, under the International Code of Zoological Nomenclature (ICZN), the scientific name of a subspecies is termed a trinomen , and comprises three words, namely the binomen followed by the name of the subspecies. [ 6 ] For example, the binomen for the leopard is Panthera pardus . The trinomen Panthera pardus fusca denotes a subspecies, the Indian leopard . [ 1 ] All components of the trinomen are written in italics. [ 7 ]
In botany , subspecies is one of many ranks below that of species, such as variety , subvariety , form , and subform. To identify the rank, the subspecific name must be preceded by "subspecies" (which can be abbreviated to "subsp." or "ssp."), as in Schoenoplectus californicus subsp. tatora . [ 8 ]
In bacteriology , the only rank below species that is regulated explicitly by the code of nomenclature is subspecies , but infrasubspecific taxa are extremely important in bacteriology; Appendix 10 of the code lays out some recommendations that are intended to encourage uniformity in describing such taxa. Names published before 1992 in the rank of variety are taken to be names of subspecies [ 9 ] (see International Code of Nomenclature of Prokaryotes ). As in botany, subspecies is conventionally abbreviated as "subsp.", and is used in the scientific name: Bacillus subtilis subsp. spizizenii . [ 10 ]
In zoological nomenclature , when a species is split into subspecies, the originally described population is retained as the "nominotypical subspecies" [ 11 ] or "nominate subspecies", which repeats the same name as the species. For example, Motacilla alba alba (often abbreviated M. a. alba ) is the nominotypical subspecies of the white wagtail ( Motacilla alba ).
The subspecies name that repeats the species name is referred to in botanical nomenclature as the subspecies " autonym ", and the subspecific taxon as the "autonymous subspecies". [ 12 ]
When zoologists disagree over whether a certain population is a subspecies or a full species, the species name may be written in parentheses. Thus Larus (argentatus) smithsonianus means the American herring gull ; the notation within the parentheses means that some consider it a subspecies of a larger herring gull species and therefore call it Larus argentatus smithsonianus , while others consider it a full species and therefore call it Larus smithsonianus (and the user of the notation is not taking a position). [ 13 ]
A subspecies is a taxonomic rank below species – the only such rank recognized in the zoological code, [ 14 ] and one of three main ranks below species in the botanical code. [ 12 ] When geographically separate populations of a species exhibit recognizable phenotypic differences, biologists may identify these as separate subspecies; a subspecies is a recognized local variant of a species. [ 15 ] Botanists and mycologists have the choice of ranks lower than subspecies, such as variety (varietas) or form (forma), to recognize smaller differences between populations. [ 12 ]
In biological terms, rather than in relation to nomenclature, a polytypic species has two or more genetically and phenotypically divergent subspecies, races , or more generally speaking, populations that differ from each other so that a separate description is warranted. [ 16 ] These distinct groups do not interbreed as they are isolated from one another, but they can interbreed and have fertile offspring, e.g. in captivity. These subspecies, races, or populations are usually described and named by zoologists, botanists and microbiologists. [ citation needed ]
In a monotypic species, all populations exhibit the same genetic and phenotypical characteristics. Monotypic species can occur in several ways: [ citation needed ] | https://en.wikipedia.org/wiki/Subspecies |
A substance of very high concern ( SVHC ) is a chemical substance (or part of a group of chemical substances) which has been proposed as a candidate for inclusion on the Authorization or Restriction list (see: ECHA Lists ) of REACH . [ 1 ] The addition of a substance to the SVHC Candidate List [ 2 ] by the European Chemicals Agency (ECHA) is the first step in the procedure for the authorisation or restriction of a chemical. [ 3 ] It is expected that industries operating in EU member states abide by the regulations of REACH and submit chemicals for consideration when appropriate. [ 4 ]
The first list of SVHCs was published on 28 October 2008, and the list has been updated biannually since then. The most recent update occurred in June 2024, bringing the total to 241 SVHCs. [ 5 ]
The criteria are given in article 57 of the REACH Regulation. [ 6 ] [ 7 ] A substance may be proposed as an SVHC if it meets one or more of the following criteria: it is carcinogenic ; it is mutagenic ; it is toxic for reproduction ; it is persistent, bioaccumulative and toxic (PBT); it is very persistent and very bioaccumulative (vPvB); or there is scientific evidence of probable serious effects on human health or the environment giving rise to an equivalent level of concern. [ 8 ]
The "equivalent concern" criterion is significant because it is this classification which allows substances which are, for example, neurotoxic , endocrine-disrupting or otherwise present an unanticipated environmental health risk to be regulated under REACH. [ 9 ]
Simply because a substance meets one or more of the criteria does not necessarily mean that it will be proposed as an SVHC. Many such substances are already subject to restrictions on their use within the European Union, such as those in Annex XVII of the REACH Regulation. [ 10 ] SVHCs are substances for which the current restrictions on use (where these exist) might be insufficient. There are three priority groups for assessment: [ 11 ]
Proposals for inclusion of a substance on the list of SVHCs can come either from the European Commission or one of the Member States of the European Union. The proposals are made public by the European Chemicals Agency (ECHA) and are open for public comment for 60–90 days. If the substance is deemed to meet one or more of the criteria, it is then listed as an SVHC. [ 12 ]
Once a substance has been listed as an SVHC, the Agency commissions a technical report from one or more national or private laboratories, which analyses the available information on manufacture, imports, uses and releases of the substance, as well as possible alternatives. On the basis of this technical report, the Agency decides whether to prioritise the substance, in effect, whether to make a recommendation to the European Commission to add the substance to Annex XIV of the REACH Regulation, making its use subject to authorisation. The draft recommendations must be made public and opened for comment for three months before the final recommendations are sent to the commission. [ 13 ] The first draft recommendations were published on 14 January 2009, and new draft recommendations must be issued at least once every two years.
The list of SVHCs is primarily a public list of substances for which the European Chemicals Agency is considering imposing a requirement for authorisation for some or all uses. However, there are some direct consequences of including a substance on the list of SVHCs. Suppliers of pure SVHCs must provide their customers with a safety data sheet (SDS). [ 14 ] Suppliers of mixtures of substances which contain more than 0.1% by weight of any SVHC must provide their customers with a safety data sheet on request . [ 15 ] Manufacturers or importers of articles containing more than 0.1% by weight of any SVHC must provide their customers, and consumers on request, with adequate information on the safe use and disposal of the article, including the name of the SVHC(s) concerned. [ 16 ] From 1 June 2011, manufacturers and importers of articles also have to notify the European Chemicals Agency of the quantities of SVHCs used in their articles. [ 16 ]
In addition to the obviously involved chemical industry, there are many more industries affected by this regulation: drapery and leather industry, plastic processing, cosmetic industry, food industry, petroleum processing, printing industry, sports equipment industry, toys industry, recycling industry, electrical engineering industry, fine mechanics industry, optics industry, engine and plant production industry. [ 17 ]
This list is referred to as the "candidate" list because all substances placed on it are candidates for inclusion in Annex XIV of REACH. If a substance is added to Annex XIV, it is given a "latest application date" and a "sunset date". The sunset date is the date after which the substance cannot be used or imported into the EU without authorisation from the ECHA, and the latest application date is the date by which any applications for use must be submitted to the ECHA. [ 18 ]
This table includes the Candidate list updates as of January 2024; find the complete list in references. [ 19 ] [ 18 ]
In food safety , the concept of substantial equivalence holds that the safety of a new food, particularly one that has been genetically modified (GM), may be assessed by comparing it with a similar traditional food that has proven safe in normal use over time. [ 1 ] It was first formulated as a food safety policy in 1993, by the Organisation for Economic Co-operation and Development (OECD). [ 2 ]
As part of a food safety testing process, substantial equivalence is the initial step, establishing toxicological and nutritional differences in the new food compared to a conventional counterpart—differences are analyzed and evaluated, and further testing may be conducted, leading to a final safety assessment. [ 3 ]
Substantial equivalence is the underlying principle in GM food safety assessment for a number of national and international agencies, including the Canadian Food Inspection Agency (CFIA), Japan's Ministry of Health, Labour and Welfare (MHLW), the US Food and Drug Administration (FDA), and the United Nations ' Food and Agriculture Organization (FAO) and World Health Organization . [ 4 ]
The concept of comparing genetically modified foods to traditional foods as a basis for safety assessment was first introduced as a recommendation during the 1990 Joint FAO/WHO Expert Consultation on biotechnology and food safety (a scientific conference of officials and industry), although the term substantial equivalence was not used. [ 5 ] [ 6 ] Adopting the term, substantial equivalence was formulated as a food safety policy by the OECD, first described in their 1993 report, "Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles". [ 2 ]
The term was borrowed from the FDA's 1976 substantial equivalence definition for new medical devices —under Premarket Notification 510(k) , a new Class II device that is essentially similar to an existing device can be cleared for release without further testing. [ 2 ] [ 7 ] The underlying approach of comparing a new product or technique to an existing one has long been used in various fields of science and technology. [ 2 ]
In June 1999, G8 leaders requested the OECD to “undertake a study on the implications of biotechnology and other aspects of food safety.” In 2000, the OECD Edinburgh Conference on Scientific and Health Aspects of Genetically Modified Foods was held. Following those discussions, the OECD published an opinion that substantial equivalence is an important tool in analyzing the safety of novel foods, including GM foods. The document noted that substantial equivalence serves as a framework for approaching food safety assessment, rather than functioning as a quantitative standard or measure. [ 8 ]
The OECD bases the substantial equivalence principle on a definition of food safety where we can assume that a food is safe for consumption if it has been eaten over time without evident harm. It recognizes that traditional foods may naturally contain toxic components (usually called antinutrients )—such as the glycoalkaloids solanine in potatoes and alpha-tomatine in tomatoes—which do not affect their safety when prepared and eaten in traditional ways. [ 9 ] [ 10 ] [ 11 ] [ note 1 ]
The report proposes that, while biotechnology broadens the scope of food modification, it does not inherently introduce additional risk, and therefore, GM products may be assessed in the same way as conventionally bred products. [ 1 ] Further, the relative precision of biotech methods should allow assessment to be focused on the most likely problem areas. [ 1 ] The concept of substantial equivalence is then described as a comparison between a GM food and a similar conventional food, taking into account food processing, and how the food is normally consumed, including quantity, dietary patterns, and the characteristics of the consuming population. [ note 2 ]
Substantial equivalence is the starting point for GM food safety assessment: significant differences between a new food item and its conventional counterpart would indicate the need for further testing. A "targeted approach" is taken, by selecting specific relevant molecules for comparison. For plants, selection of a suitable comparator may involve growing the new plant side by side with genetically closely-related varieties, or using publicly available composition data for closely-related varieties. [ 8 ]
Evaluation for substantial equivalence can be applied at different points in the food chain, from unprocessed harvested crop to final ingredient or product, depending on the nature of the food item and its intended use. [ 11 ]
For a GM plant, the overall evaluation process may be viewed in four phases: [ 3 ]
There has been discussion about applying new biochemical concepts and methods in evaluating substantial equivalence, such as metabolic profiling and protein profiling . These concepts refer, respectively, to the complete measured biochemical spectrum (total fingerprint) of compounds (metabolites) or of proteins present in a food or crop. The goal would be to compare the overall biochemical profile of a new food to an existing food, to see if the new food's profile falls within the range of natural variation already exhibited by the profile of existing foods or crops. However, these techniques are not yet considered sufficiently validated, and standards for applying them have not yet been developed. [ 12 ] [ better source needed ]
Approaches to GM food regulation vary by country, while substantial equivalence is generally the underlying principle of GM food safety assessment. This is the case for national and international agencies that include the Canadian Food Inspection Agency (CFIA), Japan's Ministry of Health, Labour and Welfare (MHLW), the US Food and Drug Administration (FDA), and the United Nations ' Food and Agriculture Organization (FAO) and World Health Organization . [ 11 ] [ 13 ] [ 4 ] In 1997, the European Union established a novel food assessment procedure whereby, once the producer has confirmed substantial equivalence with an existing food, government notification, with accompanying scientific evidence, is the only requirement for commercial release; however, foods containing genetically modified organisms (GMOs) are excluded and require mandatory authorization. [ 2 ]
To establish substantial equivalence, the modified product is tested by the manufacturer for unexpected changes to a targeted set of components such as toxins , nutrients , or allergens , that are present in a similar unmodified food. The manufacturer's data is then assessed by a regulatory agency. If regulators determine that there is no significant difference between the modified and unmodified products, then there will generally be no further requirement for food safety testing. However, if the product has no natural equivalent, or shows significant differences from the unmodified food, or for other reasons that regulators may have (for instance, if a gene produces a protein that has not been a food component before), further safety testing may be required. [ 1 ]
There have been criticisms of the effectiveness of substantial equivalence . | https://en.wikipedia.org/wiki/Substantial_equivalence |
Substantial truth is a legal doctrine affecting libel and slander laws in common law jurisdictions such as the United States or the United Kingdom .
Under United States law, a statement is not actionable as slanderous or libellous if it is substantially true; "slight inaccuracies of expression" are not enough to make an otherwise true statement false. [ 1 ]
This doctrine is applied in matters in which truth is used as an absolute defence to a defamation claim brought against a public figure, but only false statements made with " actual malice " are subject to sanctions. [ 2 ] A defendant using truth as a defence in a defamation case is not required to justify every word of the alleged defamatory statements. It is sufficient to prove that "the substance, the gist, the sting, of the matter is true." [ 3 ]
| https://en.wikipedia.org/wiki/Substantial_truth |
In organic chemistry , a substituent is an atom or group of atoms that replaces one or more atoms (typically hydrogen) in a parent molecule, thereby becoming a moiety in the resultant (new) molecule . [ 1 ] [ note 1 ]
The suffix -yl is used when naming organic compounds that contain a single bond replacing one hydrogen; -ylidene and -ylidyne are used with double bonds and triple bonds , respectively. In addition, when naming hydrocarbons that contain a substituent, positional numbers are used to indicate which carbon atom the substituent attaches to when such information is needed to distinguish between isomers . Substituents can be a combination of the inductive effect and the mesomeric effect . Such effects are also described as electron-rich and electron withdrawing . Additional steric effects result from the volume occupied by a substituent.
The phrases most-substituted and least-substituted are frequently used to describe or compare molecules that are products of a chemical reaction . In this terminology, methane is used as a reference of comparison. Using methane as a reference, for each hydrogen atom that is replaced or "substituted" by something else, the molecule can be said to be more highly substituted. For example:
The suffix -yl is used in organic chemistry to form names of radicals , either separate species (called free radicals ) or chemically bonded parts of molecules (called moieties ). It can be traced back to the old name of methanol , "methylene" (from Ancient Greek : μέθυ méthu , 'wine' and ὕλη húlē , [ 4 ] 'wood', 'forest'), which became shortened to " methyl " in compound names, from which -yl was extracted. Several reforms of chemical nomenclature eventually generalized the use of the suffix to other organic substituents. [ citation needed ]
The use of the suffix is determined by the number of hydrogen atoms that the substituent replaces on a parent compound (and also, usually, on the substituent). According to the 1993 IUPAC recommendations: [ 5 ]
The suffix -ylidine is encountered sporadically, and appears to be a variant spelling of "-ylidene"; [ 6 ] it is not mentioned in the IUPAC guidelines.
For multiple bonds of the same type, which link the substituent to the parent group, the infixes -di- , -tri- , -tetra- , etc., are used: -diyl (two single bonds), -triyl (three single bonds), -tetrayl (four single bonds), -diylidene (two double bonds).
For multiple bonds of different types, multiple suffixes are concatenated : - ylylidene (one single and one double), -ylylidyne (one single and one triple), -diylylidene (two single and one double).
The parent compound name can be altered in two ways: [ citation needed ]
Note that some popular terms such as " vinyl " (when used to mean "polyvinyl") represent only a portion of the full chemical name. [ citation needed ]
According to the above rules, a carbon atom in a molecule, considered as a substituent, has the following names depending on the number of hydrogens bound to it, and the type of bonds formed with the remainder of the molecule:
In a chemical structural formula , an organic substituent such as methyl , ethyl , or aryl can be written as R (or R 1 , R 2 , etc.) It is a generic placeholder, the R derived from radical or rest , which may replace any portion of the formula as the author finds convenient. The first to use this symbol was Charles Frédéric Gerhardt in 1844. [ 8 ]
The symbol X is often used to denote electronegative substituents such as the halides . [ 9 ] [ 10 ]
One cheminformatics study identified 849,574 unique substituents up to 12 non-hydrogen atoms large and containing only carbon , hydrogen , nitrogen , oxygen , sulfur , phosphorus , selenium , and the halogens in a set of 3,043,941 molecules. Fifty substituents can be considered common as they are found in more than 1% of this set, and 438 are found in more than 0.1%. 64% of the substituents are found in only one molecule. The top 5 most common are the methyl , phenyl , chlorine , methoxy , and hydroxyl substituents. The total number of organic substituents in organic chemistry is estimated at 3.1 million, creating a total of 6.7×10²³ molecules. [ 11 ] An infinite number of substituents can be obtained simply by increasing carbon chain length. For instance, the substituents methyl ( −CH 3 ) and pentyl ( −C 5 H 11 ). | https://en.wikipedia.org/wiki/Substituent |
The Substitute It Now! List is a database developed by the International Chemical Secretariat (ChemSec) of chemicals the uses of which are likely to become legally restricted under EU REACH regulation . The list is being used by public interest groups as a campaign tool to advocate for increasing the pace of implementation of REACH and by commercial interests to identify substances for control in chemicals management programmes. [ 1 ]
The SIN List is composed of chemicals evaluated by the environmental NGO ChemSec as meeting EU criteria for being Substances of Very High Concern (SVHCs) under Article 57 of REACH , being either carcinogenic , mutagenic or reprotoxic (CMR), persistent, bioaccumulative and toxic (PBT), very persistent and very bioaccumulative (vPvB), or posing an equivalent environmental or health threat. [ 1 ] [ 2 ]
The first SIN List, known as version 1.0, was published in 2008 and identified 267 chemicals as meeting the Article 57 criteria for being SVHCs. ChemSec's assessment was independently validated by the Technical University of Denmark. [ 3 ]
In 2009 a further 89 substances were added to the SIN List (Version 1.1), [ 4 ] before in 2011 another 22 chemicals were added (Version 2.0) [ 5 ] for fulfilling the REACH 57(f) criterion of equivalent concern as endocrine disrupting chemicals (EDCs). The 2011 EDC additions were made in consultation with TEDX, the US endocrine-disruption research NGO founded by Professor Theo Colborn , and coincided with EU plans over 2011–2012 to develop accepted criteria for identifying endocrine disrupting chemicals . [ 6 ]
In October 2014, the list was updated, this time with 28 new chemicals. With this update, the SIN List was also divided into 31 groups, and a tool for sustainable substitution based on the SIN List – SINimilarity – was presented. [ 7 ]
The development of the SIN List is guided by a nine-member NGO advisory committee: [ 8 ]
The disparity between the length of the SIN List in comparison to the 15 chemicals nominated by the EU as SVHCs in October 2008 was used to pressure the European regulatory authorities and Member States to accelerate the nomination process. [ 9 ] In 2011 Members of the European Parliament's Environment Committee cited the SIN List in criticising the European Commission for continuing slow progress on EDCs and evaluation of safety of chemicals in mixtures. [ 10 ]
EU regulators have been cautiously welcoming of the SIN List. Margot Wallström, Vice-President of the European Commission, stated that she welcomed initiatives such as the SIN List “[which] draw the attention of the public and industry to the most hazardous chemicals that should be a priority for inclusion in the REACH authorisation procedure”. [ 11 ] European Commissioner for the Environment Janez Potočnik has referred to the SIN list as “[indicating] the substances the European Commission will take into consideration for placement on the candidate list”. [ 12 ] Industry representative group CEFIC has criticised the publication of the list for occurring outside the legal design of REACH. [ 13 ]
Sony Ericsson, Sara Lee, Skanska, [ 14 ] Marks & Spencer, [ 15 ] Dell [ 16 ] and Carrefour [ 17 ] are on record as referring to the SIN List in their chemical substitution programmes. The SIN List is also used by other public interest groups in lobbying companies to substitute or phase out hazardous chemicals. [ 18 ]
The potential for legal restrictions on chemical use increasing costs associated with reformulating products and modifying processes has resulted in SIN List data being used by investment analysis firms concerned with Socially Responsible Investment , to aid in calculating financial risk posed by companies’ sustainability profiles. [ 18 ] [ 19 ]
In March 2013 ChemSec published the SIN Producers List, a list of the 709 companies manufacturing or importing SIN List substances in the EU. The list is derived from data presented in the European Chemicals Agency (ECHA) database of registered substances. [ 20 ]
ChemSec has together with ClientEarth requested information about producers of REACH registered substances to be made publicly available, and launched a lawsuit against the European Chemicals Agency on this issue in 2011. [ 21 ] | https://en.wikipedia.org/wiki/Substitute_It_Now!_list |
Substitute natural gas ( SNG ), or synthetic natural gas , is a fuel gas (predominantly methane , CH 4 ) that can be produced from fossil fuels such as lignite coal , oil shale , or from biofuels (when it is named bio-SNG ) or using electricity with power-to-gas systems.
SNG in the form of LNG or CNG can be used in road, rail, air and marine transport vehicles as a substitute for costly diesel, petrol, etc. The carbon footprint of SNG derived from coal is comparable to that of petroleum products. Bio-SNG has a much smaller carbon footprint than petroleum products. LPG can also be produced by synthesising SNG with partial reverse hydrogenation at high pressure and low temperature. LPG is more easily transportable than SNG, more suitable as a fuel for two-wheelers and other small engines, and also fetches a higher price on the international market due to short supply.
Renewable electrical energy can also be used to create SNG (methane): electricity drives the electrolysis of water (or a PEM fuel cell operated in reverse) to produce hydrogen, which is then reacted with CO 2 (captured, for example, through carbon capture and utilisation) in the Sabatier reaction .
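For reference, the methanation step mentioned here is the Sabatier reaction; the stoichiometry below is standard, and the quoted reaction enthalpy is an approximate literature value rather than a figure from this article:

```latex
% Sabatier methanation of captured CO2 with electrolytic hydrogen.
% The enthalpy shown is an approximate literature value (assumption).
\[
  \mathrm{CO_2} + 4\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O},
  \qquad \Delta H \approx -165\ \mathrm{kJ/mol}
\]
```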
It is advantageous to distribute SNG and bio-SNG together with natural gas in a gas grid. In this way, the production of renewable gas can be phased in at the same rate as production capacity is increased. The gas market and infrastructure established for natural gas are a precondition for the large-scale introduction of renewable biomethane produced through anaerobic digestion ( biogas ) or through gasification and methanation (bio-SNG).
The Great Plains Synfuels Plant injects approximately 4.1 million m 3 /day of SNG from lignite coal into the United States national gas grid. [ 1 ] The production process of SNG at the Great Plains plant involves gasification , gas cleaning, shift, and methanation . China is constructing nearly 30 large SNG production plants based on coal and lignite, with an aggregate annual capacity of 120 billion standard cubic meters of SNG. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Substitute_natural_gas |
Substituted tryptamines , or simply tryptamines , also known as serotonin analogues (i.e., 5-hydroxytryptamine analogues ), are organic compounds which may be thought of as being derived from tryptamine itself. The molecular structures of all tryptamines contain an indole ring, joined to an amino (NH 2 ) group via a two-carbon (−CH 2 –CH 2 −) sidechain . In substituted tryptamines, the indole ring, sidechain, and/or amino group are modified by substituting another group for one of the hydrogen (H) atoms.
Well-known tryptamines include serotonin , an important neurotransmitter , and melatonin , a hormone involved in regulating the sleep-wake cycle. Tryptamine alkaloids are found in fungi , plants and animals , and are sometimes used by humans for their neurological or psychotropic effects. Prominent examples of tryptamine alkaloids include psilocybin (from " psilocybin mushrooms ") and DMT . In South America, dimethyltryptamine is obtained from numerous plant sources, like chacruna , and it is often used in ayahuasca brews. Many synthetic tryptamines have also been made, including the migraine drug sumatriptan , and psychedelic drugs . A 2022 study found that the variety of tryptamines present in wild mushrooms may affect their therapeutic impact. [ 1 ]
The tryptamine structure, in particular its indole ring, may be part of the structure of some more complex compounds, for example: LSD , ibogaine , mitragynine and yohimbine . A thorough investigation of dozens of tryptamine compounds was published by Ann and Alexander Shulgin under the title TiHKAL .
α-Alkyltryptamines are a group of substituted tryptamines which possess an alkyl group , such as a methyl or ethyl group, attached at the alpha carbon , and in most cases no substitution on the amine nitrogen. [ 19 ] [ 20 ] [ 21 ] α-Alkylation of tryptamine makes it much more metabolically stable and resistant to degradation by monoamine oxidase , resulting in increased potency and greatly lengthened half-life . [ 21 ] This is analogous to α-methylation of phenethylamine into amphetamine . [ 21 ]
Many α-alkyltryptamines are drugs , acting as monoamine releasing agents , non-selective serotonin receptor agonists , and/or monoamine oxidase inhibitors , [ 22 ] [ 23 ] [ 24 ] [ 25 ] and produce psychostimulant , entactogen , and/or psychedelic effects. [ 19 ] [ 20 ] [ 21 ] The most well-known of these agents are α-methyltryptamine (αMT) and α-ethyltryptamine (αET), both of which were used clinically as antidepressants for a brief period of time in the past and are abused as recreational drugs . [ 20 ] [ 21 ] In accordance with its action as a dual releasing agent of serotonin and dopamine , αET has been found to produce serotonergic neurotoxicity similarly to amphetamines like MDMA and PCA , and the same is also likely to hold true for other serotonin and dopamine-releasing α-alkyltryptamines such as αMT, 5-MeO-αMT , and various others. [ 26 ]
A number of β-ketotryptamines (beta-ketotryptamines) are known. [ 31 ] [ 33 ] [ 36 ] These compounds are α-alkyl-β-ketotryptamines and are analogous to the cathinones (β-ketoamphetamines) of the related phenethylamine family. Known β-ketotryptamines include BK-NM-AMT , BK-5F-NM-AMT , BK-5Cl-NM-AMT , and BK-5Br-NM-AMT . [ 31 ] [ 33 ] [ 36 ] They act as monoamine releasing agents . [ 31 ] [ 33 ] [ 36 ]
Examples of cyclized tryptamines include:
A number of related compounds are known, with a similar structure but having the indole core flipped ( isotryptamines ) and/or replaced with related cores such as indene , indoline , indazole , indolizine , benzothiophene , or benzofuran . Like tryptamines, these related compounds are primarily active as agonists at the 5-HT 2 family of serotonin receptors, with applications in the treatment of glaucoma , cluster headaches , or as anorectics . | https://en.wikipedia.org/wiki/Substituted_tryptamine |
A substitution is a syntactic transformation on formal expressions.
To apply a substitution to an expression means to consistently replace its variable, or placeholder, symbols with other expressions.
The resulting expression is called a substitution instance , or instance for short, of the original expression.
Where ψ and φ represent formulas of propositional logic , ψ is a substitution instance of φ if and only if ψ may be obtained from φ by substituting formulas for propositional variables in φ , replacing each occurrence of the same variable by an occurrence of the same formula. For example:
is a substitution instance of
That is, ψ can be obtained by replacing P and Q in φ with (R → S) and (T → S) respectively. Similarly:
is a substitution instance of:
since ψ can be obtained by replacing each A in φ with (A ↔ A).
In some deduction systems for propositional logic, a new expression (a proposition ) may be entered on a line of a derivation if it is a substitution instance of a previous line of the derivation. [ 1 ] [ failed verification ] This is how new lines are introduced in some axiomatic systems . In systems that use rules of transformation , a rule may include the use of a substitution instance for the purpose of introducing certain variables into a derivation.
A propositional formula is a tautology if it is true under every valuation (or interpretation ) of its predicate symbols. If Φ is a tautology, and Θ is a substitution instance of Φ, then Θ is again a tautology. This fact implies the soundness of the deduction rule described in the previous section.
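This preservation of tautology under substitution can be spot-checked by brute force over truth assignments. A minimal Python sketch, with formulas encoded as functions from assignments to booleans (an encoding chosen for this example, not from the source):

```python
from itertools import product

def is_tautology(formula, variables):
    """True iff formula(assignment) holds for every truth assignment."""
    return all(formula(dict(zip(variables, values)))
               for values in product((False, True), repeat=len(variables)))

# phi = (P -> P) is a tautology:
phi = lambda v: (not v["P"]) or v["P"]
print(is_tautology(phi, ["P"]))          # True

# Its substitution instance psi, obtained by substituting (R -> S) for P,
# i.e. ((R -> S) -> (R -> S)), is again a tautology:
impl = lambda v: (not v["R"]) or v["S"]
psi = lambda v: (not impl(v)) or impl(v)
print(is_tautology(psi, ["R", "S"]))     # True
```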
In first-order logic , a substitution is a total mapping σ : V → T from variables to terms ; many, [ 2 ] : 73 [ 3 ] : 445 but not all [ 4 ] : 250 authors additionally require σ ( x ) = x for all but finitely many variables x . The notation { x 1 ↦ t 1 , …, x k ↦ t k } [ note 1 ] refers to a substitution mapping each variable x i to the corresponding term t i , for i =1,…, k , and every other variable to itself; the x i must be pairwise distinct. Most authors additionally require each term t i to be syntactically different from x i , to avoid infinitely many distinct notations for the same substitution. Applying that substitution to a term t is written in postfix notation as t { x 1 ↦ t 1 , ..., x k ↦ t k }; it means to (simultaneously) replace every occurrence of each x i in t by t i . [ note 2 ] The result tσ of applying a substitution σ to a term t is called an instance of that term t .
For example, applying the substitution { x ↦ z , z ↦ h ( a , y ) } to the term f ( z , a , g ( x ) , y ) {\displaystyle f(z,a,g(x),y)} yields the instance f ( h ( a , y ) , a , g ( z ) , y ) {\displaystyle f(h(a,y),a,g(z),y)} ; note that the replacement is simultaneous, so the z introduced for x is not in turn replaced by h ( a , y ).
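A minimal Python sketch of simultaneous application, encoding a compound term f(t1, …, tn) as a tuple ('f', t1, …, tn) and a variable or constant symbol as a bare string (an encoding chosen for this example):

```python
def apply_subst(term, sigma):
    """Apply sigma (dict: variable -> term) to term, simultaneously:
    replaced subterms are not themselves substituted into again."""
    if isinstance(term, str):              # a variable (or constant symbol)
        return sigma.get(term, term)
    f, *args = term
    return (f, *(apply_subst(a, sigma) for a in args))

# f(z, a, g(x), y) { x -> z, z -> h(a, y) } = f(h(a, y), a, g(z), y)
t = ("f", "z", "a", ("g", "x"), "y")
sigma = {"x": "z", "z": ("h", "a", "y")}
print(apply_subst(t, sigma))
# -> ('f', ('h', 'a', 'y'), 'a', ('g', 'z'), 'y')
```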
The domain dom ( σ ) of a substitution σ is commonly defined as the set of variables actually replaced, i.e. dom ( σ ) = { x ∈ V | xσ ≠ x }.
A substitution is called a ground substitution if it maps all variables of its domain to ground , i.e. variable-free, terms.
The substitution instance tσ of a ground substitution is a ground term if all of t ' s variables are in σ ' s domain, i.e. if vars ( t ) ⊆ dom ( σ ).
A substitution σ is called a linear substitution if tσ is a linear term for some (and hence every) linear term t containing precisely the variables of σ ' s domain, i.e. with vars ( t ) = dom ( σ ).
A substitution σ is called a flat substitution if xσ is a variable for every variable x .
A substitution σ is called a renaming substitution if it is a permutation on the set of all variables. Like every permutation, a renaming substitution σ always has an inverse substitution σ −1 , such that tσσ −1 = t = tσ −1 σ for every term t . However, it is not possible to define an inverse for an arbitrary substitution.
For example, { x ↦ 2, y ↦ 3+4 } is a ground substitution, { x ↦ x 1 , y ↦ y 2 +4 } is non-ground and non-flat, but linear,
{ x ↦ y 2 , y ↦ y 2 +4 } is non-linear and non-flat, { x ↦ y 2 , y ↦ y 2 } is flat, but non-linear, { x ↦ x 1 , y ↦ y 2 } is both linear and flat, but not a renaming, since it maps both y and y 2 to y 2 ; each of these substitutions has the set { x , y } as its domain. An example for a renaming substitution is { x ↦ x 1 , x 1 ↦ y , y ↦ y 2 , y 2 ↦ x }, it has the inverse { x ↦ y 2 , y 2 ↦ y , y ↦ x 1 , x 1 ↦ x }. The flat substitution { x ↦ z , y ↦ z } cannot have an inverse, since e.g. ( x + y ) { x ↦ z , y ↦ z } = z + z , and the latter term cannot be transformed back to x + y , as the information about which variable a given z came from is lost. The ground substitution { x ↦ 2 } cannot have an inverse due to a similar loss of origin information, e.g. in ( x +2) { x ↦ 2 } = 2+2, even if replacing constants by variables were allowed by some fictitious kind of "generalized substitutions".
Two substitutions are considered equal if they map each variable to syntactically equal result terms, formally: σ = τ if xσ = xτ for each variable x ∈ V .
The composition of two substitutions σ = { x 1 ↦ t 1 , …, x k ↦ t k } and τ = { y 1 ↦ u 1 , …, y l ↦ u l } is obtained by removing from the substitution { x 1 ↦ t 1 τ , …, x k ↦ t k τ , y 1 ↦ u 1 , …, y l ↦ u l } those pairs y i ↦ u i for which y i ∈ { x 1 , …, x k }.
The composition of σ and τ is denoted by στ . Composition is an associative operation , and is compatible with substitution application, i.e. ( ρσ ) τ = ρ ( στ ), and ( tσ ) τ = t ( στ ), respectively, for all substitutions ρ , σ , τ , and every term t .
The identity substitution , which maps every variable to itself, is the neutral element of substitution composition. A substitution σ is called idempotent if σσ = σ , and hence tσσ = tσ for every term t . When x i ≠ t i for all i , the substitution { x 1 ↦ t 1 , …, x k ↦ t k } is idempotent if and only if none of the variables x i occurs in any t j . Substitution composition is not commutative, that is, στ may be different from τσ , even if σ and τ are idempotent. [ 2 ] : 73–74 [ 3 ] : 445–446
For example, { x ↦ 2, y ↦ 3+4 } is equal to { y ↦ 3+4, x ↦ 2 }, but different from { x ↦ 2, y ↦ 7 }. The substitution { x ↦ y + y } is idempotent, e.g. (( x + y ) { x ↦ y + y }) { x ↦ y + y } = (( y + y )+ y ) { x ↦ y + y } = ( y + y )+ y , while the substitution { x ↦ x + y } is non-idempotent, e.g. (( x + y ) { x ↦ x + y }) { x ↦ x + y } = (( x + y )+ y ) { x ↦ x + y } = (( x + y )+ y )+ y . An example for non-commuting substitutions is { x ↦ y } { y ↦ z } = { x ↦ z , y ↦ z }, but { y ↦ z } { x ↦ y } = { x ↦ y , y ↦ z }.
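The composition rule above translates directly into code. A minimal self-contained Python sketch (same term encoding as the substitution sketch earlier; compound terms such as y + y are written in prefix form):

```python
def apply_subst(term, sigma):
    """Simultaneous application of sigma (dict: variable -> term)."""
    if isinstance(term, str):
        return sigma.get(term, term)
    f, *args = term
    return (f, *(apply_subst(a, sigma) for a in args))

def compose(sigma, tau):
    """Composition sigma-then-tau, following the definition in the text:
    apply tau to sigma's right-hand sides, keep tau's pairs whose
    variable is not in sigma's domain, and drop trivial pairs x -> x."""
    result = {x: apply_subst(t, tau) for x, t in sigma.items()}
    result.update({y: u for y, u in tau.items() if y not in sigma})
    return {x: t for x, t in result.items() if t != x}

print(compose({"x": "y"}, {"y": "z"}))   # {'x': 'z', 'y': 'z'}
print(compose({"y": "z"}, {"x": "y"}))   # {'y': 'z', 'x': 'y'}

# The substitution {x -> y + y} (here y + y encoded as ('+', 'y', 'y'))
# is idempotent: composing it with itself gives it back.
s = {"x": ("+", "y", "y")}
print(compose(s, s) == s)                # True
```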
In mathematics , there are two common uses of substitution: substitution of variables for constants (also called assignment for that variable), and the substitution property of equality , [ 5 ] also called Leibniz's Law . [ 6 ]
Considering mathematics as a formal language , a variable is a symbol from an alphabet , usually a letter like x , y , and z , which denotes a range of possible values . [ 7 ] If a variable is free in a given expression or formula , then it can be replaced with any of the values in its range. [ 8 ] Certain kinds of bound variables can be substituted too. For instance, parameters of an expression (like the coefficients of a polynomial ), or the argument of a function . Moreover, variables being universally quantified can be replaced with any of the values in their range, and the result will be a true statement. (This is called universal instantiation .)
For a non-formalized language, that is, in most mathematical texts outside of mathematical logic , for an individual expression it is not always possible to identify which variables are free and bound. For example, in ∑ i < k a i k {\textstyle \sum _{i<k}a_{ik}} , depending on the context, the variable i {\textstyle i} can be free and k {\textstyle k} bound, or vice versa, but they cannot both be free. Determining which variable is free and which is bound depends on context and semantics .
The substitution property of equality , or Leibniz's Law (though the latter term is usually reserved for philosophical contexts), generally states that, if two things are equal, then any property of one must be a property of the other. It can be formally stated in logical notation as: ( a = b ) ⟹ [ ϕ ( a ) ⇒ ϕ ( b ) ] {\displaystyle (a=b)\implies {\bigl [}\phi (a)\Rightarrow \phi (b){\bigr ]}} For every a {\textstyle a} and b {\textstyle b} , and any well-formed formula ϕ ( x ) {\textstyle \phi (x)} (with a free variable x ). For example: For all real numbers a and b , if a = b , then a ≥ 0 implies b ≥ 0 (here, ϕ ( x ) {\displaystyle \phi (x)} is x ≥ 0 ). This is a property which is most often used in algebra , especially in solving systems of equations , but is applied in nearly every area of math that uses equality. This, taken together with the reflexive property of equality, forms the axioms of equality in first-order logic. [ 9 ]
Substitution is related to, but not identical to, function composition ; it is closely related to β -reduction in lambda calculus . In contrast to these notions, however, the accent in algebra is on the preservation of algebraic structure by the substitution operation, the fact that substitution gives a homomorphism for the structure at hand (in the case of polynomials, the ring structure). [ citation needed ]
Substitution is a basic operation in algebra , in particular in computer algebra . [ 10 ] [ 11 ]
A common case of substitution involves polynomials , where substitution of a numerical value (or another expression) for the indeterminate of a univariate polynomial amounts to evaluating the polynomial at that value. Indeed, this operation occurs so frequently that the notation for polynomials is often adapted to it; instead of designating a polynomial by a name like P , as one would do for other mathematical objects, one could define
so that substitution for X can be designated by replacement inside " P ( X )", say
or
Substitution can also be applied to other kinds of formal objects built from symbols, for instance elements of free groups . In order for substitution to be defined, one needs an algebraic structure with an appropriate universal property , that asserts the existence of unique homomorphisms that send indeterminates to specific values; the substitution then amounts to finding the image of an element under such a homomorphism.
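To make the polynomial case concrete, here is a minimal Python sketch (the coefficient-list encoding and the example polynomial are this illustration's own); the point is that evaluation respects addition and multiplication, which is the homomorphism property mentioned above.

```python
def poly_eval(coeffs, x):
    """Evaluate P(X) = a0 + a1*X + ... + an*X**n at x (Horner's rule).
    The map P |-> P(x) preserves + and *, i.e. it is a ring
    homomorphism -- the algebraic content of 'substituting x for X'."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# P(X) = 5 - 3*X + X**2; substituting X = 2 yields P(2) = 3:
print(poly_eval([5, -3, 1], 2))   # 3
```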
The following is a proof of the substitution property of equality in ZFC (as defined in first-order logic without equality), which is adapted from Introduction to Axiomatic Set Theory (1982) by Gaisi Takeuti and Wilson M. Zaring. [ 12 ]
Theorem — if a = b {\displaystyle a=b} , then, for any well-formed formula ϕ {\displaystyle \phi } , ϕ ( a ) ⇒ ϕ ( b ) {\displaystyle \phi (a)\Rightarrow \phi (b)} .
See Zermelo–Fraenkel set theory § Formal language for the definition of formulas in ZFC. The definition is recursive , so a proof by induction is used. In ZFC in first-order logic without equality, "set equality" is defined to mean that two sets have the same elements, written symbolically as "for all z , z is in x if and only if z is in y ". Then, the Axiom of Extensionality asserts that if two sets have the same elements, then they belong to the same sets:
Definition — ( x = y ) := ∀ z [ z ∈ x ⇔ z ∈ y ] {\displaystyle (x=y):=\forall z[z\in x\Leftrightarrow z\in y]}
Axiom — ( x = y ) ⇒ ∀ z ( x ∈ z ⇔ y ∈ z ) {\displaystyle (x=y)\Rightarrow \forall z(x\in z\Leftrightarrow y\in z)}
Let X , Y , Z {\displaystyle X,Y,Z} , be metavariables for any variables or sets, such that X = Y {\displaystyle X=Y}
Case 1: ϕ ( X ) : ( Z ∈ X ) {\displaystyle \phi (X):(Z\in X)}
Assume Z ∈ X {\displaystyle Z\in X} , then, by the definition of equality, Z ∈ Y {\displaystyle Z\in Y} , thus ( X = Y ) ⟹ [ Z ∈ X ⇒ Z ∈ Y ] {\displaystyle (X=Y)\implies {\bigl [}Z\in X\Rightarrow Z\in Y{\bigr ]}}
Case 2: ϕ ( X ) : ( X ∈ Z ) {\displaystyle \phi (X):(X\in Z)}
Assume X ∈ Z {\displaystyle X\in Z} , then by the axiom of extensionality, Y ∈ Z {\displaystyle Y\in Z} , thus ( X = Y ) ⟹ [ X ∈ Z ⇒ Y ∈ Z ] {\displaystyle (X=Y)\implies {\bigl [}X\in Z\Rightarrow Y\in Z{\bigr ]}}
Let ψ , φ {\displaystyle \psi ,\varphi } be meta variables for any formulas with the property that ( a = b ) ⟹ [ ϕ ( a ) ⇒ ϕ ( b ) ] {\displaystyle (a=b)\implies {\bigl [}\phi (a)\Rightarrow \phi (b){\bigr ]}} . Let X , Y {\displaystyle X,Y} , be metavariables for any variables or sets, such that X = Y {\displaystyle X=Y} , and let z {\displaystyle z} be a metavariable for any variable.
Case 1: ¬ ( ψ ) {\displaystyle \neg (\psi )}
Since X = Y {\displaystyle X=Y} , then Y = X {\displaystyle Y=X} by symmetry of equality, therefore [ ψ ( Y ) ⇒ ψ ( X ) ] {\displaystyle {\bigl [}\psi (Y)\Rightarrow \psi (X){\bigr ]}} , by the induction hypothesis, therefore [ ¬ ψ ( X ) ⇒ ¬ ψ ( Y ) ] {\displaystyle {\bigl [}\neg \psi (X)\Rightarrow \neg \psi (Y){\bigr ]}} by contraposition , thus ( X = Y ) ⟹ [ ¬ ψ ( X ) ⇒ ¬ ψ ( Y ) ] {\displaystyle (X=Y)\implies {\bigl [}\neg \psi (X)\Rightarrow \neg \psi (Y){\bigr ]}}
Case 2: ψ ∧ φ {\displaystyle \psi \land \varphi }
Since X = Y {\displaystyle X=Y} , then [ ψ ( X ) ⇒ ψ ( Y ) ] {\displaystyle {\bigl [}\psi (X)\Rightarrow \psi (Y){\bigr ]}} and [ φ ( X ) ⇒ φ ( Y ) ] {\displaystyle {\bigl [}\varphi (X)\Rightarrow \varphi (Y){\bigr ]}} , which implies [ ( ψ ( X ) ∧ φ ( X ) ) ⇒ ψ ( Y ) ∧ φ ( Y ) ) ] {\displaystyle {\bigl [}{\bigl (}\psi (X)\land \varphi (X){\bigr )}\Rightarrow \psi (Y)\land \varphi (Y){\bigr )}{\bigr ]}} , thus ( X = Y ) ⟹ [ ( ψ ( X ) ∧ φ ( X ) ) ⇒ ψ ( Y ) ∧ φ ( Y ) ) ] {\displaystyle (X=Y)\implies {\bigl [}{\bigl (}\psi (X)\land \varphi (X){\bigr )}\Rightarrow \psi (Y)\land \varphi (Y){\bigr )}{\bigr ]}}
Case 3: ∃ z ( ψ ) {\displaystyle \exists z(\psi )}
Since X = Y {\displaystyle X=Y} , we have ψ ( X , z ) ⇒ ψ ( Y , z ) {\displaystyle \psi (X,z)\Rightarrow \psi (Y,z)} . Assume by way of contradiction that the result is false, that is ∃ z ( ψ ( X , z ) ) {\displaystyle \exists z(\psi (X,z))} is true but ∃ z ( ψ ( Y , z ) ) {\displaystyle \exists z(\psi (Y,z))} is false. By existential instantiation , let z 0 {\displaystyle z_{0}} denote the value such that ψ ( X , z 0 ) {\displaystyle \psi (X,z_{0})} is true. Then ψ ( Y , z 0 ) {\displaystyle \psi (Y,z_{0})} is false by assumption, and therefore ψ ( X , z 0 ) ⇒ ψ ( Y , z 0 ) {\displaystyle \psi (X,z_{0})\Rightarrow \psi (Y,z_{0})} is false, which contradicts our induction hypothesis, and the result follows. | https://en.wikipedia.org/wiki/Substitution_(logic) |
In bioinformatics and evolutionary biology , a substitution matrix describes the frequency at which a character in a nucleotide sequence or a protein sequence changes to other character states over evolutionary time. The information is often in the form of log odds of finding two specific character states aligned and depends on the assumed number of evolutionary changes or sequence dissimilarity between compared sequences. It is an application of a stochastic matrix . Substitution matrices are usually seen in the context of amino acid or DNA sequence alignments , where they are used to calculate similarity scores between the aligned sequences. [ 1 ]
In the process of evolution , from one generation to the next the amino acid sequences of an organism's proteins are gradually altered through the action of DNA mutations. For example, the sequence
could mutate into the sequence
in one step, and possibly
over a longer period of evolutionary time. Each amino acid is more or less likely to mutate into various other amino acids. For instance, a hydrophilic residue such as arginine is more likely to be replaced by another hydrophilic residue such as glutamine , than it is to be mutated into a hydrophobic residue such as leucine . (Here, a residue refers to an amino acid stripped of a hydrogen and/or a hydroxyl group and inserted in the polymeric chain of a protein.) This is primarily due to redundancy in the genetic code , which translates similar codons into similar amino acids. Furthermore, mutating an amino acid to a residue with significantly different properties could affect the folding and/or activity of the protein. This type of disruptive substitution is likely to be removed from populations by the action of purifying selection because the substitution has a higher likelihood of rendering a protein nonfunctional. [ 2 ]
If we have two amino acid sequences in front of us, we should be able to say something about how likely they are to be derived from a common ancestor, or homologous . If we can line up the two sequences using a sequence alignment algorithm such that the mutations required to transform a hypothetical ancestor sequence into both of the current sequences would be evolutionarily plausible, then we'd like to assign a high score to the comparison of the sequences.
To this end, we will construct a 20x20 matrix where the ( i , j ) {\displaystyle (i,j)} th entry is equal to the probability of the i {\displaystyle i} th amino acid being transformed into the j {\displaystyle j} th amino acid in a certain amount of evolutionary time. There are many different ways to construct such a matrix, called a substitution matrix . Here are the most commonly used ones:
The simplest possible substitution matrix would be one in which each amino acid is considered maximally similar to itself, but not able to transform into any other amino acid. This matrix would look like S i , j = { 1 if  i = j 0 otherwise {\displaystyle S_{i,j}={\begin{cases}1&{\text{if }}i=j\\0&{\text{otherwise}}\end{cases}}}
This identity matrix will succeed in the alignment of very similar amino acid sequences but will be miserable at aligning two distantly related sequences. We need to figure out all the probabilities in a more rigorous fashion. It turns out that an empirical examination of previously aligned sequences works best.
We express the probabilities of transformation in what are called log-odds scores . The scores matrix S is defined as S i , j = log ⁡ p i M i , j p i p j = log ⁡ M i , j p j {\displaystyle S_{i,j}=\log {\frac {p_{i}M_{i,j}}{p_{i}p_{j}}}=\log {\frac {M_{i,j}}{p_{j}}}}
where M i , j {\displaystyle M_{i,j}} is the probability that amino acid i {\displaystyle i} transforms into amino acid j {\displaystyle j} , and p i {\displaystyle p_{i}} , p j {\displaystyle p_{j}} are the frequencies of amino acids i and j . The base of the logarithm is not important, and the same substitution matrix is often expressed in different bases.
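To make the scoring recipe concrete, here is a minimal Python sketch (the function name and the toy two-letter alphabet are illustrative assumptions, not part of any standard tool) that turns a substitution probability matrix M and background frequencies p into log-odds scores:

```python
import numpy as np

def log_odds_scores(M, p, base=2.0):
    """Log-odds scoring matrix S from a substitution probability matrix M
    (M[i, j] = probability that residue i is replaced by residue j) and
    background frequencies p: S[i, j] = log(M[i, j] / p[j])."""
    M = np.asarray(M, dtype=float)
    p = np.asarray(p, dtype=float)
    return np.log(M / p[np.newaxis, :]) / np.log(base)

# Toy two-letter alphabet: the first state is usually conserved.
M = np.array([[0.9, 0.1],
              [0.3, 0.7]])
p = np.array([0.6, 0.4])
print(log_odds_scores(M, p))  # positive on the diagonal, negative off it
```

A positive entry means the substitution is observed more often than expected by chance; a negative entry means it is rarer than chance, which is exactly the information an alignment algorithm needs.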
One of the first amino acid substitution matrices, the PAM ( Point Accepted Mutation ) matrix, was developed by Margaret Dayhoff in the 1970s. This matrix is calculated by observing the differences in closely related proteins. Because of the use of very closely related homologs, the observed mutations are not expected to significantly change the common functions of the proteins. Thus the observed substitutions (by point mutations) are considered to be accepted by natural selection.
One PAM unit is defined as 1% of the amino acid positions that have been changed. To create a PAM1 substitution matrix, a group of very closely related sequences with mutation frequencies corresponding to one PAM unit is chosen. Based on collected mutational data from this group of sequences, a substitution matrix can be derived. This PAM1 matrix estimates what rate of substitution would be expected if 1% of the amino acids had changed.
The PAM1 matrix is used as the basis for calculating other matrices by assuming that repeated mutations would follow the same pattern as those in the PAM1 matrix, and that multiple substitutions can occur at the same site. With this assumption, the PAM2 matrix can be estimated by squaring the probabilities. Using this logic, Dayhoff derived matrices as high as PAM250. Usually the PAM30 and the PAM70 matrices are used.
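As a hedged illustration of this extrapolation, the sketch below uses a toy 2×2 stand-in for the 20×20 PAM1 probability matrix (the real matrix comes from Dayhoff's mutation counts); higher PAM matrices follow by repeated matrix multiplication:

```python
import numpy as np

# Toy 2-state stand-in for the 20x20 PAM1 probability matrix:
# each row gives the probabilities of the states after one PAM unit.
PAM1 = np.array([[0.99, 0.01],
                 [0.02, 0.98]])

def pam_n(pam1, n):
    """PAMn probabilities: apply the PAM1 step n times (matrix power),
    assuming repeated mutations follow the PAM1 pattern and multiple
    substitutions can occur at the same site."""
    return np.linalg.matrix_power(pam1, n)

print(pam_n(PAM1, 2))    # "PAM2": PAM1 squared
print(pam_n(PAM1, 250))  # "PAM250": rows drift toward equilibrium
```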
Dayhoff's methodology of comparing closely related species turned out not to work very well for aligning evolutionarily divergent sequences. Sequence changes over long evolutionary time scales are not well approximated by compounding small changes that occur over short time scales. The BLOSUM (BLOck SUbstitution Matrix) series of matrices rectifies this problem. Henikoff & Henikoff constructed these matrices using multiple alignments of evolutionarily divergent proteins. The probabilities used in the matrix calculation are computed by looking at "blocks" of conserved sequences found in multiple protein alignments. These conserved sequences are assumed to be of functional importance within related proteins and will therefore have lower substitution rates than less conserved regions. To reduce bias from closely related sequences on substitution rates, segments in a block with a sequence identity above a certain threshold were clustered, reducing the weight of each such cluster (Henikoff and Henikoff). For the BLOSUM62 matrix, this threshold was set at 62%. Pair frequencies were then counted between clusters, so pairs were only counted between segments less than 62% identical. One would use a higher numbered BLOSUM matrix for aligning two closely related sequences and a lower number for more divergent sequences.
It turns out that the BLOSUM62 matrix does an excellent job detecting similarities in distant sequences, and this is the matrix used by default in most recent alignment applications such as BLAST .
A number of newer substitution matrices have been proposed to deal with inadequacies in earlier designs.
The real substitution rates in a protein depend not only on the identity of the amino acid, but also on the specific structural or sequence context it is in. Many specialized matrices have been developed for these contexts, such as for transmembrane alpha helices, [ 4 ] for combinations of secondary structure states and solvent accessibility states, [ 5 ] [ 6 ] [ 7 ] or for local sequence-structure contexts. [ 8 ] These context-specific substitution matrices lead to generally improved alignment quality at some cost in speed but are not yet widely used.
Recently, sequence context-specific amino acid similarities have been derived that do not need substitution matrices but that rely on a library of sequence contexts instead. Using this idea, a context-specific extension of the popular BLAST program has been demonstrated to achieve a twofold sensitivity improvement for remotely related sequences over BLAST at similar speeds ( CS-BLAST ).
Although " transition matrix " is often used interchangeably with "substitution matrix" in fields other than bioinformatics, the former term is problematic in bioinformatics. With regards to nucleotide substitutions, " transition " is also used to indicate those substitutions that are between the two-ring purines (A → G and G → A) or are between the one-ring pyrimidines (C → T and T → C). Because these substitutions do not require a change in the number of rings, they occur more frequently than the other substitutions. " Transversion " is the term used to indicate the slower-rate substitutions that change a purine to a pyrimidine or vice versa (A ↔ C, A ↔ T, G ↔ C, and G ↔ T). | https://en.wikipedia.org/wiki/Substitution_matrix |
In biology, a substitution model , also called a model of sequence evolution , is a Markov model that describes changes over evolutionary time. These models describe evolutionary changes in macromolecules, such as DNA sequences or protein sequences , that can be represented as a sequence of symbols (e.g., A, C, G, and T in the case of DNA or the 20 "standard" proteinogenic amino acids in the case of proteins ). Substitution models are used to calculate the likelihood of phylogenetic trees using multiple sequence alignment data. Thus, substitution models are central to maximum likelihood estimation of phylogeny as well as Bayesian inference in phylogeny . Estimates of evolutionary distances (numbers of substitutions that have occurred since a pair of sequences diverged from a common ancestor) are typically calculated using substitution models (evolutionary distances are used as input for distance methods such as neighbor joining ). Substitution models are also central to phylogenetic invariants because they are necessary to predict site pattern frequencies given a tree topology. Substitution models are also necessary to simulate sequence data for a group of organisms related by a specific tree.
Stationary, neutral, independent, finite sites models (assuming a constant rate of evolution) have two parameters, π , an equilibrium vector of base (or character) frequencies and a rate matrix, Q , which describes the rate at which bases of one type change into bases of another type; element Q i j {\displaystyle Q_{ij}} for i ≠ j is the rate at which base i goes to base j . The diagonals of the Q matrix are chosen so that the rows sum to zero: Q i i = − ∑ j ≠ i Q i j {\displaystyle Q_{ii}=-\sum _{j\neq i}Q_{ij}}
The equilibrium row vector π must be annihilated by the rate matrix Q : π Q = 0 {\displaystyle \pi Q=0}
The transition matrix function is a function from the branch lengths (in some units of time, possibly in substitutions), to a matrix of conditional probabilities. It is denoted P ( t ) {\displaystyle P(t)} . The entry in the i th column and the j th row, P i j ( t ) {\displaystyle P_{ij}(t)} , is the probability, after time t , that there is a base j at a given position, conditional on there being a base i in that position at time 0. When the model is time reversible, this calculation can be performed between any two sequences, even if one is not the ancestor of the other, as long as the total branch length between them is known.
The asymptotic properties of P ij (t) are such that P ij (0) = δ ij , where δ ij is the Kronecker delta function. That is, there is no change in base composition between a sequence and itself. At the other extreme, lim t → ∞ P i j ( t ) = π j , {\displaystyle \lim _{t\rightarrow \infty }P_{ij}(t)=\pi _{j}\,,} or, in other words, as time goes to infinity the probability of finding base j at a position given there was a base i at that position originally goes to the equilibrium probability that there is base j at that position, regardless of the original base. Furthermore, it follows that π P ( t ) = π {\displaystyle \pi P(t)=\pi } for all t .
The transition matrix can be computed from the rate matrix via matrix exponentiation : P ( t ) = e Q t = ∑ n = 0 ∞ Q n t n n ! {\displaystyle P(t)=e^{Qt}=\sum _{n=0}^{\infty }Q^{n}{\frac {t^{n}}{n!}}}
where Q n is the matrix Q multiplied by itself enough times to give its n th power.
If Q is diagonalizable , the matrix exponential can be computed directly: let Q = U −1 Λ U be a diagonalization of Q , with Λ = diag ⁡ ( λ 1 , … , λ n ) {\displaystyle \Lambda =\operatorname {diag} (\lambda _{1},\ldots ,\lambda _{n})}
where Λ is a diagonal matrix and where { λ i } {\displaystyle \lbrace \lambda _{i}\rbrace } are the eigenvalues of Q , each repeated according to its multiplicity. Then e Q t = U − 1 e Λ t U {\displaystyle e^{Qt}=U^{-1}e^{\Lambda t}U}
where the diagonal matrix e Λt is given by e Λ t = diag ⁡ ( e λ 1 t , … , e λ n t ) {\displaystyle e^{\Lambda t}=\operatorname {diag} (e^{\lambda _{1}t},\ldots ,e^{\lambda _{n}t})}
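A minimal NumPy/SciPy sketch of both routes to P(t), using the JC69 rate matrix as a concrete Q (the choice of JC69 and all variable names are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Jukes-Cantor (JC69) rate matrix, normalized so that
# -sum_i pi_i * Q_ii = 1 (one expected substitution per site per unit time).
Q = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q, -1.0)
pi = np.full(4, 0.25)

t = 0.5
P = expm(Q * t)                      # P(t) = exp(Qt) directly

# Equivalent computation via diagonalization (Q = V Lambda V^-1):
lam, V = np.linalg.eig(Q)
P_diag = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)
print(np.allclose(P, P_diag))                 # True

print(np.allclose(expm(Q * 0.0), np.eye(4)))  # P(0) = I
print(np.allclose(expm(Q * 100.0), 0.25))     # rows -> equilibrium frequencies
print(np.allclose(pi @ P, pi))                # pi P(t) = pi for all t
```

The last two checks reproduce the asymptotic properties described above: P(0) is the identity, and every row of P(t) approaches the equilibrium frequencies as t grows.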
Many useful substitution models are time-reversible ; in terms of the mathematics, the model does not care which sequence is the ancestor and which is the descendant so long as all other parameters (such as the number of substitutions per site that is expected between the two sequences) are held constant.
When an analysis of real biological data is performed, there is generally no access to the sequences of ancestral species, only to the present-day species. However, when a model is time-reversible, which species was the ancestral species is irrelevant. Instead, the phylogenetic tree can be rooted using any of the species, re-rooted later based on new knowledge, or left unrooted. This is because there is no 'special' species; all species will eventually derive from one another with the same probability.
A model is time reversible if and only if it satisfies the property (the notation is explained below) π i P i j ( t ) = π j P j i ( t ) {\displaystyle \pi _{i}P_{ij}(t)=\pi _{j}P_{ji}(t)}
or, equivalently, the detailed balance property, π i Q i j = π j Q j i {\displaystyle \pi _{i}Q_{ij}=\pi _{j}Q_{ji}}
for every i , j , and t .
Time-reversibility should not be confused with stationarity . A model is stationary if Q does not change with time. The analysis below assumes a stationary model.
Generalised time reversible (GTR) is the most general neutral, independent, finite-sites, time-reversible model possible. It was first described in a general form by Simon Tavaré in 1986. [ 1 ] The GTR model is often called the general time reversible model in publications; [ 2 ] it has also been called the REV model. [ 3 ]
The GTR parameters for nucleotides consist of an equilibrium base frequency vector, π → = ( π 1 , π 2 , π 3 , π 4 ) {\displaystyle {\vec {\pi }}=(\pi _{1},\pi _{2},\pi _{3},\pi _{4})} , giving the frequency at which each base occurs at each site, and the rate matrix
Because the model must be time reversible and must approach the equilibrium nucleotide (base) frequencies at long times, each rate below the diagonal equals the reciprocal rate above the diagonal multiplied by the equilibrium ratio of the two bases. As such, the nucleotide GTR requires 6 substitution rate parameters and 4 equilibrium base frequency parameters. Since the 4 frequency parameters must sum to 1, there are only 3 free frequency parameters. The total of 9 free parameters is often further reduced to 8 parameters plus μ {\displaystyle \mu } , the overall number of substitutions per unit time. When measuring time in substitutions ( μ {\displaystyle \mu } =1) only 8 free parameters remain.
In general, to compute the number of parameters, you count the number of entries above the diagonal in the matrix, i.e. for n trait values per site n 2 − n 2 {\displaystyle {{n^{2}-n} \over 2}} , and then add n − 1 for the equilibrium frequencies, and subtract 1 because μ {\displaystyle \mu } is fixed. You get n 2 − n 2 + ( n − 1 ) − 1 = n 2 + n − 4 2 {\displaystyle {{n^{2}-n} \over 2}+(n-1)-1={{n^{2}+n-4} \over 2}} free parameters.
For example, for an amino acid sequence (there are 20 "standard" amino acids that make up proteins ), you would find there are 208 parameters. However, when studying coding regions of the genome, it is more common to work with a codon substitution model (a codon is three bases and codes for one amino acid in a protein). There are 4 3 = 64 {\displaystyle 4^{3}=64} codons, resulting in 2078 free parameters. However, the rates for transitions between codons which differ by more than one base are often assumed to be zero, reducing the number of free parameters to only 20 × 19 × 3 2 + 63 − 1 = 632 {\displaystyle {{20\times 19\times 3} \over 2}+63-1=632} parameters. Another common practice is to reduce the number of codons by forbidding the stop (or nonsense ) codons. This is a biologically reasonable assumption because including the stop codons would mean that calculating the probability of finding sense codon j {\displaystyle j} after time t {\displaystyle t} , given that the ancestral codon is i {\displaystyle i} , would involve the possibility of passing through a state with a premature stop codon.
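The counts quoted above can be verified with a few lines of Python (the function name is an illustrative choice):

```python
def gtr_free_parameters(n):
    """Free parameters of a GTR-type model on an n-letter alphabet:
    n(n-1)/2 exchangeabilities + (n-1) free frequencies - 1 (the overall
    rate is fixed by measuring time in substitutions)."""
    return (n * n - n) // 2 + (n - 1) - 1

print(gtr_free_parameters(4))    # 8    (nucleotides)
print(gtr_free_parameters(20))   # 208  (amino acids)
print(gtr_free_parameters(64))   # 2078 (codons, unrestricted)

# Codon model allowing only single-base changes: 20*19*3/2 exchangeabilities.
print(20 * 19 * 3 // 2 + 63 - 1)  # 632
```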
An alternative (and commonly used [ 2 ] [ 4 ] [ 5 ] [ 6 ] ) way to write the instantaneous rate matrix ( Q {\displaystyle Q} matrix) for the nucleotide GTR model is:
Q = ( − ( a π C + b π G + c π T ) a π C b π G c π T a π A − ( a π A + d π G + e π T ) d π G e π T b π A d π C − ( b π A + d π C + f π T ) f π T c π A e π C f π G − ( c π A + e π C + f π G ) ) {\displaystyle Q={\begin{pmatrix}{-(a\pi _{C}+b\pi _{G}+c\pi _{T})}&a\pi _{C}&b\pi _{G}&c\pi _{T}\\a\pi _{A}&{-(a\pi _{A}+d\pi _{G}+e\pi _{T})}&d\pi _{G}&e\pi _{T}\\b\pi _{A}&d\pi _{C}&{-(b\pi _{A}+d\pi _{C}+f\pi _{T})}&f\pi _{T}\\c\pi _{A}&e\pi _{C}&f\pi _{G}&{-(c\pi _{A}+e\pi _{C}+f\pi _{G})}\end{pmatrix}}}
The Q {\displaystyle Q} matrix is normalized so − ∑ i = 1 4 π i Q i i = 1 {\displaystyle -\sum _{i=1}^{4}\pi _{i}Q_{ii}=1} .
This notation is easier to understand than the notation originally used by Tavaré , because all model parameters correspond either to "exchangeability" parameters ( a {\displaystyle a} through f {\displaystyle f} , which can also be written using the notation r i j {\displaystyle r_{ij}} ) or to equilibrium nucleotide frequencies π → = ( π A , π C , π G , π T ) {\displaystyle {\vec {\pi }}=(\pi _{A},\pi _{C},\pi _{G},\pi _{T})} . Note that the nucleotides in the Q {\displaystyle Q} matrix have been written in alphabetical order. In other words, the transition probability matrix for the Q {\displaystyle Q} matrix above would be:
P ( t ) = e Q t = ( p A A ( t ) p A C ( t ) p A G ( t ) p A T ( t ) p C A ( t ) p C C ( t ) p C G ( t ) p C T ( t ) p G A ( t ) p G C ( t ) p G G ( t ) p G T ( t ) p T A ( t ) p T C ( t ) p T G ( t ) p T T ( t ) ) {\displaystyle P(t)=e^{Qt}={\begin{pmatrix}p_{\mathrm {AA} }(t)&p_{\mathrm {AC} }(t)&p_{\mathrm {AG} }(t)&p_{\mathrm {AT} }(t)\\p_{\mathrm {CA} }(t)&p_{\mathrm {CC} }(t)&p_{\mathrm {CG} }(t)&p_{\mathrm {CT} }(t)\\p_{\mathrm {GA} }(t)&p_{\mathrm {GC} }(t)&p_{\mathrm {GG} }(t)&p_{\mathrm {GT} }(t)\\p_{\mathrm {TA} }(t)&p_{\mathrm {TC} }(t)&p_{\mathrm {TG} }(t)&p_{\mathrm {TT} }(t)\end{pmatrix}}}
Some publications write the nucleotides in a different order (e.g., some authors choose to group two purines together and the two pyrimidines together; see also models of DNA evolution ). These differences in notation make it important to be clear regarding the order of the states when writing the Q {\displaystyle Q} matrix.
The value of this notation is that the instantaneous rate of change from nucleotide i {\displaystyle i} to nucleotide j {\displaystyle j} can always be written as r i j π j {\displaystyle r_{ij}\pi _{j}} , where r i j {\displaystyle r_{ij}} is the exchangeability of nucleotides i {\displaystyle i} and j {\displaystyle j} and π j {\displaystyle \pi _{j}} is the equilibrium frequency of the j t h {\displaystyle j^{th}} nucleotide. The matrix shown above uses the letters a {\displaystyle a} through f {\displaystyle f} for the exchangeability parameters in the interest of readability, but those parameters could also be written in a systematic manner using the r i j {\displaystyle r_{ij}} notation (e.g., a = r A C {\displaystyle a=r_{AC}} , b = r A G {\displaystyle b=r_{AG}} , and so forth).
Note that the ordering of the nucleotide subscripts for exchangeability parameters is irrelevant (e.g., r A C = r C A {\displaystyle r_{AC}=r_{CA}} ), but the ordering does matter for the transition probability matrix values (i.e., p A C ( t ) {\displaystyle p_{\mathrm {AC} }(t)} is the probability of observing A in sequence 1 and C in sequence 2 when the evolutionary distance between those sequences is t {\displaystyle t} whereas p C A ( t ) {\displaystyle p_{\mathrm {CA} }(t)} is the probability of observing C in sequence 1 and A in sequence 2 at the same evolutionary distance).
An arbitrarily chosen exchangeability parameter (e.g., f = r G T {\displaystyle f=r_{GT}} ) is typically set to a value of 1 to increase the readability of the exchangeability parameter estimates (since it allows users to express those values relative to the chosen exchangeability parameter). The practice of expressing the exchangeability parameters in relative terms is not problematic because the Q {\displaystyle Q} matrix is normalized. Normalization allows t {\displaystyle t} (time) in the matrix exponentiation P ( t ) = e Q t {\displaystyle P(t)=e^{Qt}} to be expressed in units of expected substitutions per site (standard practice in molecular phylogenetics). This is equivalent to setting the mutation rate μ {\displaystyle \mu } to 1 and reduces the number of free parameters to eight. Specifically, there are five free exchangeability parameters ( a {\displaystyle a} through e {\displaystyle e} , which are expressed relative to the fixed f = r G T = 1 {\displaystyle f=r_{GT}=1} in this example) and three equilibrium base frequency parameters (as described above, only three π i {\displaystyle \pi _{i}} values need to be specified because π → {\displaystyle {\vec {\pi }}} must sum to 1).
The alternative notation also makes it easier to understand the sub-models of the GTR model, which simply correspond to cases where exchangeability and/or equilibrium base frequency parameters are constrained to take on equal values. A number of specific sub-models have been named, largely based on their original publications:
There are 203 possible ways that the exchangeability parameters can be restricted to form sub-models of GTR, [ 14 ] ranging from the JC69 [ 7 ] and F81 [ 8 ] models (where all exchangeability parameters are equal) to the SYM [ 13 ] model and the full GTR [ 1 ] (or REV [ 3 ] ) model (where all exchangeability parameters are free). The equilibrium base frequencies are typically treated in two different ways: 1) all π i {\displaystyle \pi _{i}} values are constrained to be equal (i.e., π A = π C = π G = π T = 0.25 {\displaystyle \pi _{A}=\pi _{C}=\pi _{G}=\pi _{T}=0.25} ); or 2) all π i {\displaystyle \pi _{i}} values are treated as free parameters. Although the equilibrium base frequencies can be constrained in other ways, most constraints that link some but not all π i {\displaystyle \pi _{i}} values are unrealistic from a biological standpoint. The possible exception is enforcing strand symmetry [ 15 ] (i.e., constraining π A = π T {\displaystyle \pi _{A}=\pi _{T}} and π C = π G {\displaystyle \pi _{C}=\pi _{G}} but allowing π A + π T ≠ π C + π G {\displaystyle \pi _{A}+\pi _{T}\neq \pi _{C}+\pi _{G}} ).
The alternative notation also makes it straightforward to see how the GTR model can be applied to biological alphabets with a larger state-space (e.g., amino acids or codons ). It is possible to write a set of equilibrium state frequencies as π 1 {\displaystyle \pi _{1}} , π 2 {\displaystyle \pi _{2}} , ... π k {\displaystyle \pi _{k}} and a set of exchangeability parameters ( r i j {\displaystyle r_{ij}} ) for any alphabet of k {\displaystyle k} character states. These values can then be used to populate the Q {\displaystyle Q} matrix by setting the off-diagonal elements as shown above (the general notation would be Q i j = r i j π j {\displaystyle Q_{ij}=r_{ij}\pi _{j}} ), setting the diagonal elements Q i i {\displaystyle Q_{ii}} to the negative sum of the off-diagonal elements on the same row, and normalizing, as sketched below. Obviously, k = 20 {\displaystyle k=20} for amino acids and k = 61 {\displaystyle k=61} for codons (assuming the standard genetic code ). However, the generality of this notation is beneficial because one can use reduced alphabets for amino acids. For example, one can use k = 6 {\displaystyle k=6} and recode the amino acids using the six categories proposed by Margaret Dayhoff . Reduced amino acid alphabets are viewed as a way to reduce the impact of compositional variation and saturation. [ 16 ]
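Here is a sketch of that recipe, assuming purely illustrative exchangeability values and frequencies (none of the numbers are estimates from real data):

```python
import numpy as np

def build_gtr_q(r, pi):
    """Normalized GTR rate matrix from a symmetric exchangeability matrix
    r (r[i, j] == r[j, i]; diagonal ignored) and equilibrium frequencies pi.
    Off-diagonal: Q[i, j] = r[i, j] * pi[j]; diagonal: negative row sums;
    scaled so that -sum_i pi[i] * Q[i, i] == 1."""
    r = np.asarray(r, dtype=float)
    pi = np.asarray(pi, dtype=float)
    Q = r * pi[np.newaxis, :]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q / -np.dot(pi, np.diag(Q))

# Nucleotides in alphabetical order (A, C, G, T), exchangeabilities a..f
# as in the matrix above, with f = r_GT = 1 and made-up values elsewhere.
a, b, c, d, e, f = 1.0, 2.0, 0.5, 0.8, 1.5, 1.0
r = np.array([[0, a, b, c],
              [a, 0, d, e],
              [b, d, 0, f],
              [c, e, f, 0]], dtype=float)
pi = np.array([0.3, 0.2, 0.2, 0.3])
Q = build_gtr_q(r, pi)
print(np.allclose(Q.sum(axis=1), 0.0))            # rows sum to zero
print(np.isclose(-(pi * np.diag(Q)).sum(), 1.0))  # normalized
print(np.allclose(pi @ Q, 0.0))                   # pi annihilated by Q
```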
Importantly, evolutionary patterns can vary among genomic regions, and thus different genomic regions can fit different substitution models. [ 17 ] Indeed, ignoring heterogeneous evolutionary patterns along sequences can lead to biases in the estimation of evolutionary parameters, including the K a /K s ratio . In this regard, the use of mixture models in phylogenetic frameworks is convenient to better mimic the molecular evolution observed in real data. [ 18 ]
Typically, a branch length of a phylogenetic tree is expressed as the expected number of substitutions per site; if the evolutionary model indicates that each site within an ancestral sequence will typically experience x substitutions by the time it evolves to a particular descendant's sequence then the ancestor and descendant are considered to be separated by branch length x .
Sometimes a branch length is measured in terms of geological years. For example, a fossil record may make it possible to determine the number of years between an ancestral species and a descendant species. Because some species evolve at faster rates than others, these two measures of branch length are not always in direct proportion. The expected number of substitutions per site per year is often indicated with the Greek letter mu (μ).
A model is said to have a strict molecular clock if the expected number of substitutions per year μ is constant regardless of which species' evolution is being examined. An important implication of a strict molecular clock is that the number of expected substitutions between an ancestral species and any of its present-day descendants must be independent of which descendant species is examined.
Note that the assumption of a strict molecular clock is often unrealistic, especially across long periods of evolution. For example, even though rodents are genetically very similar to primates , they have undergone a much higher number of substitutions in the estimated time since divergence in some regions of the genome . [ 19 ] This could be due to their shorter generation time , [ 20 ] higher metabolic rate , increased population structuring, increased rate of speciation , or smaller body size . [ 21 ] [ 22 ] When studying ancient events like the Cambrian explosion under a molecular clock assumption, poor concurrence between cladistic and phylogenetic data is often observed. There has been some work on models allowing variable rate of evolution. [ 23 ] [ 24 ]
Models that can take into account variability of the rate of the molecular clock between different evolutionary lineages in the phylogeny are called “relaxed” in opposition to “strict”. In such models the rate can be assumed to be correlated or not between ancestors and descendants, and rate variation among lineages can be drawn from many distributions, but usually exponential and lognormal distributions are applied. There is a special case, called a “local molecular clock”, in which a phylogeny is divided into at least two partitions (sets of lineages) and a strict molecular clock is applied in each, but with different rates.
Phylogenetic tree topologies are often the parameter of interest; [ 25 ] thus, branch lengths and any other parameters describing the substitution process are often viewed as nuisance parameters . However, biologists are sometimes interested in the other aspects of the model. For example, branch lengths are of interest in their own right, especially when they are combined with information from the fossil record and a model to estimate the timeframe for evolution. [ 26 ] Other model parameters have been used to gain insights into various aspects of the process of evolution. The K a /K s ratio (also called ω in codon substitution models) is a parameter of interest in many studies. The K a /K s ratio can be used to examine the action of natural selection on protein-coding regions, [ 27 ] as it provides information about the relative rates of nucleotide substitutions that change amino acids (non-synonymous substitutions) to those that do not change the encoded amino acid (synonymous substitutions).
The first model of DNA evolution was proposed by Jukes and Cantor [ 7 ] in 1969. The Jukes-Cantor (JC or JC69) model assumes equal substitution rates as well as equal equilibrium frequencies for all bases, and it is the simplest sub-model of the GTR model. In 1980, Motoo Kimura introduced a model with two parameters (K2P or K80 [ 9 ] ): one for the transition and one for the transversion rate. A year later, Kimura introduced a second model (K3ST, K3P, or K81 [ 11 ] ) with three substitution types: one for the transition rate, one for the rate of transversions that conserve the strong/weak properties of nucleotides ( A ↔ T {\displaystyle A\leftrightarrow T} and C ↔ G {\displaystyle C\leftrightarrow G} , designated β {\displaystyle \beta } by Kimura [ 11 ] ), and one for the rate of transversions that conserve the amino/keto properties of nucleotides ( A ↔ C {\displaystyle A\leftrightarrow C} and G ↔ T {\displaystyle G\leftrightarrow T} , designated γ {\displaystyle \gamma } by Kimura [ 11 ] ). In 1981, Joseph Felsenstein proposed a four-parameter model (F81 [ 8 ] ) in which the substitution rate corresponds to the equilibrium frequency of the target nucleotide. Hasegawa, Kishino, and Yano unified the last two models into a five-parameter model (HKY [ 10 ] ). After these pioneering efforts, many additional sub-models of the GTR model were introduced into the literature (and common use) in the 1990s. [ 12 ] [ 13 ] Other models that move beyond the GTR model in specific ways were also developed and refined by several researchers. [ 28 ] [ 29 ]
Almost all DNA substitution models are mechanistic models (as described above). The small number of parameters that one needs to estimate for these models makes it feasible to estimate those parameters from the data. It is also necessary because the patterns of DNA sequence evolution often differ among organisms and among genes within organisms. The latter may reflect optimization by the action of selection for specific purposes (e.g. fast expression or messenger RNA stability) or it might reflect neutral variation in the patterns of substitution. Thus, depending on the organism and the type of gene, it is likely necessary to adjust the model to these circumstances.
An alternative way to analyze DNA sequence data is to recode the nucleotides as purines (R) and pyrimidines (Y); [ 30 ] [ 31 ] this practice is often called RY-coding. [ 32 ] Insertions and deletions in multiple sequence alignments can also be encoded as binary data [ 33 ] and analyzed using a two-state model. [ 34 ] [ 35 ]
The simplest two-state model of sequence evolution is called the Cavender-Farris model or the Cavender-Farris- Neyman (CFN) model; the name of this model reflects the fact that it was described independently in several different publications. [ 36 ] [ 37 ] [ 38 ] The CFN model is identical to the Jukes-Cantor model adapted to two states and it has even been implemented as the "JC2" model in the popular IQ-TREE software package (using this model in IQ-TREE requires coding the data as 0 and 1 rather than R and Y; the popular PAUP* software package can interpret a data matrix comprising only R and Y as data to be analyzed using the CFN model). It is also straightforward to analyze binary data using the phylogenetic Hadamard transform . [ 39 ] The alternative two-state model allows the equilibrium frequency parameters of R and Y (or 0 and 1) to take on values other than 0.5 by adding a single free parameter; this model is variously called CFu [ 30 ] or GTR2 (in IQ-TREE).
For many analyses, particularly for longer evolutionary distances, the evolution is modeled on the amino acid level. Since not all DNA substitutions alter the encoded amino acid, information is lost when looking at amino acids instead of nucleotide bases. However, several advantages speak in favor of using the amino acid information: DNA is much more inclined to show compositional bias than amino acids; not all positions in the DNA evolve at the same speed ( non-synonymous mutations are less likely to become fixed in the population than synonymous ones); and, probably most important, because of those fast evolving positions and the limited alphabet size (only four possible states), the DNA suffers from more back substitutions, making it difficult to accurately estimate longer evolutionary distances.
Unlike the DNA models, amino acid models traditionally are empirical models. They were pioneered in the 1960s and 1970s by Dayhoff and co-workers, who estimated replacement rates from protein alignments with at least 85% identity (originally with very limited data [ 40 ] and ultimately culminating in the Dayhoff PAM model of 1978 [ 41 ] ). This minimized the chances of observing multiple substitutions at a site. From the estimated rate matrix, a series of replacement probability matrices were derived, known under names such as PAM 250. Log-odds matrices based on the Dayhoff PAM model were commonly used to assess the significance of homology search results, although the BLOSUM matrices [ 42 ] have superseded the PAM log-odds matrices in this context because the BLOSUM matrices appear to be more sensitive across a variety of evolutionary distances. [ 43 ]
The Dayhoff PAM matrix was the source of the exchangeability parameters used in one of the first maximum-likelihood analyses of phylogeny that used protein data, [ 44 ] and the PAM model (or an improved version of the PAM model called DCMut [ 45 ] ) continues to be used in phylogenetics. However, the limited number of alignments used to generate the PAM model (reflecting the limited amount of sequence data available in the 1970s) almost certainly inflated the variance of some rate matrix parameters (alternatively, the proteins used to generate the PAM model could have been a non-representative set). Regardless, it is clear that the PAM model seldom fits datasets as well as more modern empirical models (Keane et al. 2006 [ 46 ] tested thousands of vertebrate , bacterial , and archaeal proteins and found that the Dayhoff PAM model was the best fit for fewer than 4% of the proteins).
Starting in the 1990s, the rapid expansion of sequence databases due to improved sequencing technologies led to the estimation of many new empirical matrices (see [ 47 ] for a complete list). The earliest efforts used methods similar to those used by Dayhoff, using large-scale matching of the protein database to generate a new log-odds matrix [ 48 ] and the JTT (Jones-Taylor-Thornton) model. [ 49 ] The rapid increases in compute power during this time (reflecting factors such as Moore's law ) made it feasible to estimate parameters for empirical models using maximum likelihood (e.g., the WAG [ 50 ] and LG [ 51 ] models) and other methods (e.g., the VT [ 52 ] and PMB [ 53 ] models). The IQ-TREE software package allows users to infer their own time-reversible model using QMaker [ 54 ] or their own non-time-reversible model using nQMaker. [ 55 ]
A main difference in evolutionary models is how many parameters are estimated every time for the data set under consideration and how many of them are estimated once on a large data set. Mechanistic models describe all substitutions as a function of a number of parameters which are estimated for every data set analyzed, preferably using maximum likelihood . This has the advantage that the model can be adjusted to the particularities of a specific data set (e.g. different composition biases in DNA). Problems can arise when too many parameters are used, particularly if they can compensate for each other (this can lead to non-identifiability [ 56 ] ). Then it is often the case that the data set is too small to yield enough information to estimate all parameters accurately.
Empirical models are created by estimating many parameters (typically all entries of the rate matrix as well as the character frequencies, see the GTR model above) from a large data set. These parameters are then fixed and will be reused for every data set. This has the advantage that those parameters can be estimated more accurately. Normally, it is not possible to estimate all entries of the substitution matrix from the current data set only. On the downside, the parameters estimated from the training data might be too generic and therefore have a poor fit to any particular dataset. A potential solution for that problem is to estimate some parameters from the data using maximum likelihood (or some other method). In studies of protein evolution the equilibrium amino acid frequencies π → = ( π A , π R , π N , . . . π V ) {\displaystyle {\vec {\pi }}=(\pi _{A},\pi _{R},\pi _{N},...\pi _{V})} (using the one-letter IUPAC codes for amino acids to indicate their equilibrium frequencies) are often estimated from the data [ 50 ] while keeping the exchangeability matrix fixed. Beyond the common practice of estimating amino acid frequencies from the data, methods to estimate exchangeability parameters [ 57 ] or adjust the Q {\displaystyle Q} matrix [ 58 ] for protein evolution in other ways have been proposed.
With the large-scale genome sequencing still producing very large amounts of DNA and protein sequences, there is enough data available to create empirical models with any number of parameters, including empirical codon models. [ 59 ] Because of the problems mentioned above, the two approaches are often combined, by estimating most of the parameters once on large-scale data, while a few remaining parameters are then adjusted to the data set under consideration. The following sections give an overview of the different approaches taken for DNA, protein or codon-based models.
In 1997, Tuffley and Steel [ 60 ] described a model that they named the no common mechanism (NCM) model. The topology of the maximum likelihood tree for a specific dataset given the NCM model is identical to the topology of the optimal tree for the same data given the maximum parsimony criterion. The NCM model assumes all of the data (e.g., homologous nucleotides, amino acids, or morphological characters) are related by a common phylogenetic tree. Then 2 T − 3 {\displaystyle 2T-3} parameters are introduced for each homologous character, where T {\displaystyle T} is the number of sequences. This can be viewed as estimating a separate rate parameter for every character × branch pair in the dataset (note that the number of branches in a fully resolved phylogenetic tree is 2 T − 3 {\displaystyle 2T-3} ). Thus, the number of free parameters in the NCM model always exceeds the number of homologous characters in the data matrix, and the NCM model has been criticized as consistently "over-parameterized." [ 61 ]
Most of the work on substitution models has focused on DNA/ RNA and protein sequence evolution. Models of DNA sequence evolution, where the alphabet corresponds to the four nucleotides (A, C, G, and T), are probably the easiest models to understand. DNA models can also be used to examine RNA virus evolution; this reflects the fact that RNA also has a four nucleotide alphabet (A, C, G, and U). However, substitution models can be used for alphabets of any size; the alphabet is the 20 proteinogenic amino acids for proteins and the sense codons (i.e., the 61 codons that encode amino acids in the standard genetic code ) for aligned protein-coding gene sequences. In fact, substitution models can be developed for any biological characters that can be encoded using a specific alphabet (e.g., amino acid sequences combined with information about the conformation of those amino acids in three-dimensional protein structures [ 62 ] ).
The majority of substitution models used for evolutionary research assume independence among sites (i.e., the probability of observing any specific site pattern is identical regardless of where the site pattern is in the sequence alignment). This simplifies likelihood calculations because it is only necessary to calculate the probability of all site patterns that appear in the alignment and then use those values to calculate the overall likelihood of the alignment (e.g., the probability of three "GGGG" site patterns given some model of DNA sequence evolution is simply the probability of a single "GGGG" site pattern raised to the third power). This means that substitution models can be viewed as implying a specific multinomial distribution for site pattern frequencies. If we consider a multiple sequence alignment of four DNA sequences, there are 256 possible site patterns, so there are 255 degrees of freedom for the site pattern frequencies. However, it is possible to specify the expected site pattern frequencies using only five degrees of freedom if using the Jukes-Cantor model of DNA evolution, [ 7 ] which is a simple substitution model that allows one to calculate the expected site pattern frequencies using only the tree topology and the branch lengths (given four taxa, an unrooted bifurcating tree has five branch lengths).
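To illustrate the independence assumption on a small scale, the hedged sketch below computes a pairwise log-likelihood under JC69, whose transition probabilities have the closed form P ii (t) = 1/4 + (3/4)e −4t/3 and P ij (t) = 1/4 − (1/4)e −4t/3 for i ≠ j (two sequences rather than four to keep it short; the sequences and distance are made up):

```python
import math

def jc69_site_prob(same, t):
    """JC69 probability of one aligned site for two sequences separated
    by distance t (expected substitutions per site): (1/4) * P_ij(t)."""
    e = math.exp(-4.0 * t / 3.0)
    p = 0.25 + 0.75 * e if same else 0.25 - 0.25 * e
    return 0.25 * p

def jc69_log_likelihood(seq1, seq2, t):
    """Independence among sites: the alignment likelihood is the product
    of per-site probabilities, so the log-likelihood is a sum."""
    return sum(math.log(jc69_site_prob(a == b, t)) for a, b in zip(seq1, seq2))

print(jc69_log_likelihood("GATTACA", "GACTATA", t=0.3))
```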
Substitution models also make it possible to simulate sequence data using Monte Carlo methods . Simulated multiple sequence alignments can be used to assess the performance of phylogenetic methods [ 63 ] and generate the null distribution for certain statistical tests in the fields of molecular evolution and molecular phylogenetics. Examples of these tests include tests of model fit [ 64 ] and the "SOWH test" that can be used to examine tree topologies. [ 65 ] [ 66 ]
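A hedged sketch of such a Monte Carlo simulation for a single branch (JC69 again; the seed, branch length, and sequence length are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
BASES = np.array(list("ACGT"))

def simulate_pair(Q, pi, t, n_sites):
    """Simulate an ancestor/descendant pair: ancestral bases are drawn
    from the equilibrium frequencies pi, descendant bases from the
    corresponding row of P(t) = exp(Qt)."""
    P = expm(Q * t)
    anc = rng.choice(4, size=n_sites, p=pi)
    desc = np.array([rng.choice(4, p=P[i]) for i in anc])
    return "".join(BASES[anc]), "".join(BASES[desc])

Q = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q, -1.0)
pi = np.full(4, 0.25)
anc, desc = simulate_pair(Q, pi, t=0.3, n_sites=30)
print(anc)
print(desc)
```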
The fact that substitution models can be used to analyze any biological alphabet has made it possible to develop models of evolution for phenotypic datasets [ 67 ] (e.g., morphological and behavioural traits). Typically, "0" is used to indicate the absence of a trait and "1" is used to indicate the presence of a trait, although it is also possible to score characters using multiple states. Using this framework, we might encode a set of phenotypes as binary strings (this could be generalized to k -state strings for characters with more than two states) before analyses using an appropriate model. This can be illustrated using a "toy" example: we can use a binary alphabet to score the following phenotypic traits "has feathers", "lays eggs", "has fur", "is warm-blooded", and "capable of powered flight". In this toy example hummingbirds would have sequence 11011 (most other birds would have the same string), ostriches would have the sequence 11010, cattle (and most other land mammals ) would have 00110, and bats would have 00111. The likelihood of a phylogenetic tree can then be calculated using those binary sequences and an appropriate substitution model. The existence of these morphological models makes it possible to analyze data matrices with fossil taxa, either using the morphological data alone [ 68 ] or a combination of morphological and molecular data [ 69 ] (with the latter scored as missing data for the fossil taxa).
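The toy phenotype matrix above can be encoded directly; this small sketch computes pairwise character differences, the raw counts that a two-state substitution model or a parsimony analysis would operate on (taxon names and strings taken from the example):

```python
# Characters: feathers, lays eggs, fur, warm-blooded, powered flight.
taxa = {
    "hummingbird": "11011",
    "ostrich":     "11010",
    "cattle":      "00110",
    "bat":         "00111",
}

def hamming(a, b):
    """Number of characters at which two trait strings differ."""
    return sum(x != y for x, y in zip(a, b))

for t1 in taxa:
    for t2 in taxa:
        if t1 < t2:
            print(t1, t2, hamming(taxa[t1], taxa[t2]))
```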
There is an obvious similarity between the use of molecular or phenotypic data in the field of cladistics and analyses of morphological characters using a substitution model. However, there has been a vociferous debate [ a ] in the systematics community regarding the question of whether or not cladistic analyses should be viewed as "model-free". The field of cladistics (defined in the strictest sense) favors the use of the maximum parsimony criterion for phylogenetic inference. [ 70 ] Many cladists reject the position that maximum parsimony is based on a substitution model and (in many cases) they justify the use of parsimony using the philosophy of Karl Popper . [ 71 ] However, the existence of "parsimony-equivalent" models [ 72 ] (i.e., substitution models that yield the maximum parsimony tree when used for analyses) makes it possible to view parsimony as a substitution model. [ 25 ] | https://en.wikipedia.org/wiki/Substitution_model
The substitution of dangerous chemicals in the workplace is the process of replacing or eliminating the use of chemicals that have significant chemical hazards . The goal of the substitution process is to improve occupational health and safety and minimize harmful environmental impacts . [ 1 ] The process can be time-consuming; assessments of dangers, costs, and practicality may be necessary. Substituting hazardous chemicals follows the principles of green chemistry and can result in clean technology . [ 2 ]
Alternatives assessments are used to determine which chemical is fit to be a substitute. [ 3 ] [ 4 ] [ 5 ] A process-based method of substituting chemicals in the workplace involves: [ 1 ]
Safety data sheets contain pertinent information about hazards associated with chemicals, including short- and long-term effects. A process analysis is performed, which studies how and when the chemical is used and what technology, equipment, and chemistry are needed. [ 1 ]
If a risk is not "small", then possible substitutions are considered. A chemical poses a "small" risk to humans if there are no long-term negative effects, the exposure is lower than the threshold limit value (TLV), and there are no risks of disease or other health issues. [ 1 ]
Several factors must be assessed to determine if a chemical is a suitable substitute, including potential hazards, exposure, technical feasibility, and cost. [ 3 ] After substitutes are proposed, the risks of each substitute are compared to one another and tested until a suitable substitution is found.
The potential hazards of a chemical or a substitute candidate must be assessed by noting the toxicity of the chemical to both humans and the environment. An assessment of the chemical should list the dangerous properties of the chemical, such as flammability or corrosivity. [ 3 ] It should also note any carcinogenic , reprotoxic, allergenic, neurotoxic, and other related effects the chemical has on human health. [ 1 ]
If a potential chemical substitute has greater exposure to humans and the environment than the original chemical, the toxicity of increased exposure must be considered. A chemical substitute with less exposure or a similar exposure but lower toxicity is preferred. [ 6 ]
A life-cycle assessment of the chemical considers the long-term effects a chemical will have on human health and the environment, as well as the ethical and social effects of chemical use. Examples include the addition of greenhouse gas emissions from the use of a chemical or carcinogenic effects of a chemical after prolonged usage. An ethical or social effect considered during the assessment could include whether the chemical is ethically sourced or whether its use infringes on the rights of indigenous people. [ 6 ]
A chemical substitute must perform the intended task efficiently. [ 3 ]
The commercial availability of the chemical in the required quantities is noted. [ 3 ] A substitution that is more cost-efficient is ideal, but is not always available.
Enacted in the EU in 2006, REACH requires industries to collect safety information on their chemicals and report them to a database. It also requires the substitution of dangerous chemicals to safer alternatives if they are found. [ 7 ]
The EPA uses the Toxic Substances Control Act (TSCA) to require industries to record and report the production, use, and disposal of specific dangerous chemicals. [ 8 ]
Substitution of hazardous chemicals can be on different levels such as using: | https://en.wikipedia.org/wiki/Substitution_of_dangerous_chemicals |
Substrate-level phosphorylation is a metabolic reaction that results in the production of ATP or GTP, driven by the energy released from the cleavage of another high-energy bond, leading to the phosphorylation of ADP or GDP to ATP or GTP (note that the reaction catalyzed by creatine kinase is not considered substrate-level phosphorylation). This process uses some of the released chemical energy , the Gibbs free energy , to transfer a phosphoryl (PO 3 ) group to ADP or GDP. It occurs in glycolysis and in the citric acid cycle. [ 1 ]
Unlike oxidative phosphorylation , oxidation and phosphorylation are not coupled in the process of substrate-level phosphorylation, and reactive intermediates are most often gained in the course of oxidation processes in catabolism . Most ATP is generated by oxidative phosphorylation in aerobic or anaerobic respiration while substrate-level phosphorylation provides a quicker, less efficient source of ATP, independent of external electron acceptors . This is the case in human erythrocytes , which have no mitochondria , and in oxygen-depleted muscle.
Adenosine triphosphate (ATP) is a major "energy currency" of the cell. [ 2 ] The high energy bonds between the phosphate groups can be broken to power a variety of reactions used in all aspects of cell function. [ 3 ]
Substrate-level phosphorylation occurs in the cytoplasm of cells during glycolysis and in mitochondria either during the Krebs cycle or by MTHFD1L ( EC 6.3.4.3 ), an enzyme interconverting ADP + phosphate + 10-formyltetrahydrofolate to ATP + formate + tetrahydrofolate (reversibly), under both aerobic and anaerobic conditions. In the pay-off phase of glycolysis , a net of 2 ATP are produced by substrate-level phosphorylation.
The first substrate-level phosphorylation occurs after the conversion of 3-phosphoglyceraldehyde, P i , and NAD + to 1,3-bisphosphoglycerate via glyceraldehyde 3-phosphate dehydrogenase . 1,3-bisphosphoglycerate is then dephosphorylated via phosphoglycerate kinase , producing 3-phosphoglycerate and ATP through a substrate-level phosphorylation.
The second substrate-level phosphorylation occurs by dephosphorylating phosphoenolpyruvate , catalyzed by pyruvate kinase , producing pyruvate and ATP.
During the preparatory phase, each 6-carbon glucose molecule is broken into two 3-carbon molecules. Thus, in glycolysis dephosphorylation results in the production of 4 ATP. However, the prior preparatory phase consumes 2 ATP, so the net yield in glycolysis is 2 ATP. 2 molecules of NADH are also produced and can be used in oxidative phosphorylation to generate more ATP.
ATP can be generated by substrate-level phosphorylation in mitochondria in a pathway that is independent from the proton motive force . In the matrix there are three reactions capable of substrate-level phosphorylation, utilizing either phosphoenolpyruvate carboxykinase or succinate-CoA ligase , or monofunctional C1-tetrahydrofolate synthase .
Mitochondrial phosphoenolpyruvate carboxykinase is thought to participate in the transfer of the phosphorylation potential from the matrix to the cytosol and vice versa. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] However, the reaction strongly favors GTP hydrolysis, so it is not really considered an important source of intra-mitochondrial substrate-level phosphorylation.
Succinate-CoA ligase is a heterodimer composed of an invariant α-subunit and a substrate-specific ß-subunit, encoded by either SUCLA2 or SUCLG2. This combination results in either an ADP-forming succinate-CoA ligase (A-SUCL, EC 6.2.1.5) or a GDP-forming succinate-CoA ligase (G-SUCL, EC 6.2.1.4). The ADP-forming succinate-CoA ligase is potentially the only matrix enzyme generating ATP in the absence of a proton motive force, capable of maintaining matrix ATP levels under energy-limited conditions, such as transient hypoxia .
This enzyme is encoded by MTHFD1L and reversibly interconverts ADP + phosphate + 10-formyltetrahydrofolate to ATP + formate + tetrahydrofolate.
In working skeletal muscles and the brain, phosphocreatine is stored as a readily available high-energy phosphate supply, and the enzyme creatine phosphokinase transfers a phosphate from phosphocreatine to ADP to produce ATP. The ATP can then be hydrolyzed to release chemical energy. This is sometimes erroneously considered to be substrate-level phosphorylation, although it is a transphosphorylation .
During anoxia , provision of ATP by substrate-level phosphorylation in the matrix is important not only as a mere means of energy, but also to prevent mitochondria from straining glycolytic ATP reserves by maintaining the adenine nucleotide translocator in ‘forward mode’ carrying ATP towards the cytosol. [ 9 ] [ 10 ] [ 11 ]
An alternative method used to create ATP is through oxidative phosphorylation , which takes place during cellular respiration . This process utilizes the oxidation of NADH to NAD + , yielding 3 ATP, and of FADH 2 to FAD, yielding 2 ATP. The potential energy stored as an electrochemical gradient of protons (H + ) across the inner mitochondrial membrane is required to generate ATP from ADP and P i (inorganic phosphate molecule), a key difference from substrate-level phosphorylation. This gradient is exploited by ATP synthase acting as a pore, allowing H + from the mitochondrial intermembrane space to move down its electrochemical gradient into the matrix and coupling the release of free energy to ATP synthesis. Conversely, electron transfer provides the energy required to actively pump H + out of the matrix. | https://en.wikipedia.org/wiki/Substrate-level_phosphorylation |
Substrate is the earthy material that forms or collects at the bottom of an aquatic habitat. It is made of sediments that may consist of:
Stream substrate can affect the life found within the stream habitat. Muddy streams generally have more sediment in the water, reducing clarity. Clarity is one guide to stream health.
Marine substrate can be classified geologically as well. See Green et al., 1999 for a reference.
Mollusks and clams that live in areas with substrate, and need it to survive, use their silky byssal threads to cling to it. See Ctenoides ales for reference. | https://en.wikipedia.org/wiki/Substrate_(aquatic_environment)
In biology , a substrate is the surface on which an organism (such as a plant , fungus , or animal ) lives. A substrate can include biotic or abiotic materials and animals. For example, encrusting algae that live on a rock (their substrate) can themselves be a substrate for an animal that lives on top of the algae. Inert substrates are used as growing support materials in the hydroponic cultivation of plants. In biology, substrates are often activated by the nanoscopic process of substrate presentation .
Requirements for animal cell and tissue culture are the same as described for plant cell, tissue and organ culture (In Vitro Culture Techniques: The Biotechnological Principles). Desirable requirements are (i) air conditioning of a room, (ii) hot room with temperature recorder, (iii) microscope room for carrying out microscopic work where different types of microscopes should be installed, (iv) dark room , (v) service room, (vi) sterilization room for sterilization of glassware and culture media, and (vii) preparation room for media preparation, etc. In addition, storage areas should be provided where the following can be kept properly: (i) liquids - ambient (4–20 °C), (ii) glassware - shelving, (iii) plastics - shelving, (iv) small items - drawers, (v) specialized equipment - cupboard, slow turnover, (vi) chemicals - sealed containers.
There are many types of vertebrate cells that require support for their growth in vitro, otherwise they will not grow properly. Such cells are called anchorage-dependent cells. Therefore, many substrates, which may be of adhesive (e.g. plastic , glass , palladium , metallic surfaces, etc.) or non-adhesive (e.g. agar , agarose , etc.) types, may be used as discussed below: | https://en.wikipedia.org/wiki/Substrate_(biology)
The word substrate comes from the Latin sub - stratum meaning 'the level below' and refers to any material existing or extracted from beneath the topsoil , including sand , chalk and clay . [ 1 ] The term is also used for materials used in building foundations or else incorporated into plaster , brick , ceramic and concrete components, which are sometimes called 'filler' products.
| https://en.wikipedia.org/wiki/Substrate_(building)
In chemistry , the term substrate is highly context-dependent. [ 1 ] Broadly speaking, it can refer either to a chemical species being observed in a chemical reaction , or to a surface on which other chemical reactions or microscopy are performed.
In the former sense, a reagent is added to the substrate to generate a product through a chemical reaction. The term is used in a similar sense in synthetic and organic chemistry , where the substrate is the chemical of interest that is being modified. In biochemistry , an enzyme substrate is the material upon which an enzyme acts. When referring to Le Chatelier's principle , the substrate is the reagent whose concentration is changed.
In the latter sense, it may refer to a surface on which other chemical reactions are performed or play a supporting role in a variety of spectroscopic and microscopic techniques, as discussed in the first few subsections below. [ 2 ]
In three of the most common nano-scale microscopy techniques, atomic force microscopy (AFM), scanning tunneling microscopy (STM), and transmission electron microscopy (TEM), a substrate is required for sample mounting. Substrates are often thin and relatively free of chemical features or defects. [ 3 ] Typically silver, gold, or silicon wafers are used due to their ease of manufacturing and lack of interference in the microscopy data. Samples are deposited onto the substrate in fine layers where it can act as a solid support of reliable thickness and malleability. [ 2 ] [ 4 ] Smoothness of the substrate is especially important for these types of microscopy because they are sensitive to very small changes in sample height. [ citation needed ]
Various other substrates are used in specific cases to accommodate a wide variety of samples. Thermally-insulating substrates are required for AFM of graphite flakes for instance, [ 5 ] and conductive substrates are required for TEM. In some contexts, the word substrate can be used to refer to the sample itself, rather than the solid support on which it is placed.
Various spectroscopic techniques also require samples to be mounted on substrates, such as powder diffraction . This type of diffraction, which involves directing high-powered X-rays at powder samples to deduce crystal structures, is often performed with an amorphous substrate such that it does not interfere with the resulting data collection. Silicon substrates are also commonly used because of their cost-effective nature and relatively little data interference in X-ray collection. [ 6 ]
Single-crystal substrates are useful in powder diffraction because they can be distinguished from the sample of interest in diffraction patterns by phase. [ 7 ]
In atomic layer deposition , the substrate acts as an initial surface on which reagents can combine to precisely build up chemical structures. [ 8 ] [ 9 ] A wide variety of substrates are used depending on the reaction of interest, but they frequently bind the reagents with some affinity to allow sticking to the substrate. [ citation needed ]
The substrate is exposed to different reagents sequentially and washed in between to remove excess. A substrate is critical in this technique because the first layer needs a place to bind to such that it is not lost when exposed to the second or third set of reagents. [ citation needed ] [ 10 ]
In biochemistry , the substrate is a molecule upon which an enzyme acts. Enzymes catalyze chemical reactions involving the substrate(s). In the case of a single substrate, the substrate binds to the enzyme active site , and an enzyme-substrate complex is formed. The substrate is transformed into one or more products , which are then released from the active site. The active site is then free to accept another substrate molecule. In the case of more than one substrate, these may bind in a particular order to the active site before reacting together to produce products. A substrate is called 'chromogenic' if it gives rise to a coloured product when acted on by an enzyme. In histological enzyme localization studies, the colored product of enzyme action can be viewed under a microscope, in thin sections of biological tissues. Similarly, a substrate is called 'fluorogenic' if it gives rise to a fluorescent product when acted on by an enzyme. [ citation needed ]
For example, curd formation ( rennet coagulation) is a reaction that occurs upon adding the enzyme rennin to milk. In this reaction, the substrate is a milk protein (e.g., casein ) and the enzyme is rennin. The products are two polypeptides that have been formed by the cleavage of the larger peptide substrate. Another example is the chemical decomposition of hydrogen peroxide carried out by the enzyme catalase . As enzymes are catalysts , they are not changed by the reactions they carry out. The substrate(s), however, is/are converted to product(s). Here, hydrogen peroxide is converted to water and oxygen gas.
While the first (binding) and third (unbinding) steps are, in general, reversible , the middle step may be irreversible (as in the rennin and catalase reactions just mentioned) or reversible (e.g. many reactions in the glycolysis metabolic pathway).
By increasing the substrate concentration, the rate of reaction will increase due to the likelihood that the number of enzyme-substrate complexes will increase; this occurs until the enzyme concentration becomes the limiting factor .
Although enzymes are typically highly specific, some are able to perform catalysis on more than one substrate, a property termed enzyme promiscuity . An enzyme may have many native substrates and broad specificity (e.g. oxidation by cytochrome p450s ) or it may have a single native substrate with a set of similar non-native substrates that it can catalyse at some lower rate. The substrates that a given enzyme may react with in vitro , in a laboratory setting, may not necessarily reflect the physiological, endogenous substrates of the enzyme's reactions in vivo . That is to say that enzymes do not necessarily perform all the reactions in the body that may be possible in the laboratory. For example, while fatty acid amide hydrolase (FAAH) can hydrolyze the endocannabinoids 2-arachidonoylglycerol (2-AG) and anandamide at comparable rates in vitro , genetic or pharmacological disruption of FAAH elevates anandamide but not 2-AG, suggesting that 2-AG is not an endogenous, in vivo substrate for FAAH. [ 11 ] In another example, the N -acyl taurines (NATs) are observed to increase dramatically in FAAH-disrupted animals, but are actually poor in vitro FAAH substrates. [ 12 ]
Sensitive substrates , also known as sensitive index substrates , are drugs that demonstrate an increase in AUC of ≥5-fold with strong index inhibitors of a given metabolic pathway in clinical drug-drug interaction (DDI) studies. [ 13 ]
Moderate sensitive substrates are drugs that demonstrate an increase in AUC of ≥2 to <5-fold with strong index inhibitors of a given metabolic pathway in clinical DDI studies. [ 13 ]
Metabolism by the same cytochrome P450 isozyme can result in several clinically significant drug-drug interactions. [ 14 ] | https://en.wikipedia.org/wiki/Substrate_(chemistry) |
In materials science and engineering , a substrate refers to a base material on which processing is conducted. This surface could be used to produce new film or layers of material such as deposited coatings . It could be the base to which paint, adhesives, or adhesive tape is bonded.
A typical substrate might be rigid such as metal , concrete , or glass , onto which a coating might be deposited. Flexible substrates are also used. [ 1 ] Some substrates are anisotropic with surface properties being different depending on the direction: examples include wood and paper products.
With all coating processes, the condition of the surface of the substrate can strongly affect the bond of subsequent layers. This can include cleanliness, smoothness, surface energy , moisture, etc.
Coatings can be applied by a variety of processes.
In optics , glass may be used as a substrate for an optical coating —either an antireflection coating to reduce reflection, or a mirror coating to enhance it. Ceramic substrates are also used in the renewable energy sector to produce inverters for photovoltaic solar systems and concentrators for concentrated photovoltaic systems. [ 4 ]
A substrate may also be an engineered surface on which an unintended or natural process occurs.
| https://en.wikipedia.org/wiki/Substrate_(materials_science) |
Substrate analogs ( substrate state analogues ) are chemical compounds with a chemical structure that resembles the substrate molecule in an enzyme-catalyzed chemical reaction . Substrate analogs can act as competitive inhibitors of an enzymatic reaction. An example is phosphoramidate, a substrate analog for the Tetrahymena group I ribozyme. [ 1 ] Other examples of substrate analogs include 5’-adenylyl-imidodiphosphate, a substrate analog of ATP , and 3-acetylpyridine adenine dinucleotide, a substrate analog of NADH . [ 2 ]
As competitive inhibitors, substrate analogs occupy the same binding site as the substrate they resemble and decrease the intended substrate’s efficiency. [ 3 ] The maximum rate (V max ) remains the same, [ 4 ] while the intended substrate’s affinity (measured by the Michaelis constant K M ) is decreased. [ 5 ] This means that less of the intended substrate will bind to the enzyme, resulting in less product being formed. In addition, the substrate analog may be missing chemical components that allow the enzyme to carry out its reaction, which also causes the amount of product created to decrease.
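A minimal numeric sketch of this effect, using the classical competitive-inhibition form of the Michaelis–Menten rate law (all parameter values below are invented for illustration, not taken from the cited sources):

```python
# Competitive inhibition: Vmax is unchanged, but the apparent Michaelis
# constant grows by the factor (1 + [I]/KI), so affinity appears lower.
Vmax, Km, Ki = 1.0, 2.0, 0.5   # arbitrary units (illustrative values)
S, I = 1.0, 1.0                # substrate and substrate-analog concentrations

Km_apparent = Km * (1 + I / Ki)               # 2.0 * 3 = 6.0
v_without_analog = Vmax * S / (Km + S)        # ~0.333
v_with_analog = Vmax * S / (Km_apparent + S)  # ~0.143: less product formed

print(v_without_analog, v_with_analog)
```

Raising the intended substrate's concentration shrinks the impact of the (1 + [I]/KI) factor, consistent with the reversibility discussed below.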
Substrate analogs usually bind to the binding site reversibly. This means that the binding of the substrate analog to the enzyme’s binding site is non-permanent. The effect of the substrate analog can be nullified by increasing the concentration of the originally intended substrate. [ 6 ] There are also substrate analogs that bind to the binding site of an enzyme irreversibly. If this is the case, the substrate analog is called an inhibitory substrate analog, a suicide substrate, or a Trojan horse substrate. [ 7 ] An example of a substrate analog that is also a suicide substrate/Trojan horse substrate is penicillin , which is an inhibitory substrate analog of peptidoglycan . [ 8 ]
Some substrate analogs can still allow the enzyme to synthesize a product despite the enzyme’s inability to metabolize the substrate analog. These substrate analogs are known as gratuitous inducers. [ 9 ] An example of a substrate analog that is also a gratuitous inducer is IPTG (isopropyl β-D-1-thiogalactopyranoside), a substrate analog and gratuitous inducer of β-galactosidase activity. [ 10 ] | https://en.wikipedia.org/wiki/Substrate_analog |
In an integrated circuit , a signal can couple from one node to another via the substrate. This phenomenon is referred to as substrate coupling or substrate noise coupling .
The push for reduced cost, more compact circuit boards, and added customer features has provided incentives for the inclusion of analog functions on primarily digital MOS integrated circuits (ICs), forming mixed-signal ICs . In these systems, the speed of digital circuits is constantly increasing, chips are becoming more densely packed, interconnect layers are added, and analog resolution is increased. In addition, the recent increase in wireless applications and their growing market are introducing a new set of aggressive design goals for realizing mixed-signal systems.
Here, the designer integrates radio frequency (RF) analog and baseband digital circuitry on a single chip. The goal is to make single-chip radio frequency integrated circuits (RFICs) on silicon, where all the blocks are fabricated on the same chip. One of the advantages of this integration is low power dissipation for portability, due to a reduction in the number of package pins and associated bond wire capacitance. Another reason that an integrated solution offers lower power consumption is that routing high-frequency signals off-chip often requires a 50 Ω impedance match , which can result in higher power dissipation. Other advantages include improved high-frequency performance due to reduced package interconnect parasitics, higher system reliability, smaller package count, and higher integration of RF components with VLSI-compatible digital circuits. In fact, the single-chip transceiver is now a reality.
The design of such systems, however, is a complicated task. There are two main challenges in realizing mixed-signal ICs. The first challenge, specific to RFICs, is to fabricate good on-chip passive elements such as high-Q inductors . The second, applicable to any mixed-signal IC and the subject of this article, is to minimize noise coupling between various parts of the system to avoid any malfunctioning of the system. In other words, for successful system-on-chip integration of mixed-signal systems, the noise coupling caused by nonideal isolation must be minimized so that sensitive analog circuits and noisy digital circuits can effectively coexist, and the system operates correctly.
To elaborate, note that in mixed-signal circuits, both sensitive analog circuits and high-swing, high-frequency noise-injecting digital circuits may be present on the same chip, leading to undesired signal coupling between these two types of circuit via the conductive substrate. The reduced distance between these circuits, which is the result of constant technology scaling (see Moore's law and the International Technology Roadmap for Semiconductors ), exacerbates the coupling. The problem is severe, since signals of different nature and strength interfere, affecting the overall performance of systems that demand higher clock rates and greater analog precision.
The primary mixed-signal noise coupling problem comes from fast-changing digital signals coupling to sensitive analog nodes. Another significant cause of undesired signal coupling is the crosstalk between analog nodes themselves, owing to high-frequency/high-power analog signals.
One of the media through which mixed-signal noise coupling occurs is the substrate. Digital operations cause fluctuations in the underlying substrate voltage, which spread through the common substrate, causing variations in the substrate potential of sensitive devices in the analog section. Similarly, in the case of crosstalk between analog nodes, a signal can couple from one node to another via the substrate. This phenomenon is referred to as substrate coupling or substrate noise coupling .
There is a sizeable literature on substrate and mixed-signal coupling. | https://en.wikipedia.org/wiki/Substrate_coupling |
Substrate inhibition in bioreactors occurs when the concentration of substrate (such as glucose , salts, or phenols [ 1 ] ) exceeds the optimal parameters and reduces the growth rate of the cells within the bioreactor. This is often confused with substrate limitation, which describes environments in which cell growth is limited by low substrate concentration. Limited conditions can be modeled with the Monod equation ; however, the Monod equation is no longer suitable in substrate-inhibiting conditions. A Monod deviation, such as the Haldane (Andrew) equation, is more suitable for substrate-inhibiting conditions. These cell growth models are analogous to equations that describe enzyme kinetics, although, unlike enzyme kinetics parameters, cell growth parameters are generally estimated empirically.
Cell growth in bioreactors depends on a wide range of environmental and physiological conditions such as substrate concentration. With regards to bioreactor cell growth, substrate refers to the nutrients that the cells consume and is contained within the bioreactor medium . Cell growth can either be substrate limited or inhibited depending on whether the substrate concentration is too low or too high, respectively. The Monod equation accurately describes limiting conditions, but substrate inhibition models are more complex.
Substrate inhibition occurs when the rate of microbial growth lessens due to a high concentration of substrate. Inhibition at high substrate concentrations is usually caused by osmotic issues, viscosity , or inefficient oxygen transport. By slowly adding substrate into the medium, fed-batch bioreactor systems can help alleviate substrate inhibition. Substrate inhibition is also closely related to enzyme kinetics, which is commonly modeled by the Michaelis–Menten equation. If an enzyme that is part of a rate-limiting step of microbial growth is substrate inhibited, then cell growth will be inhibited in the same manner. However, the mechanisms are often more complex, and parameters for a model equation need to be estimated from experimental data. [ 1 ] Additionally, information on inhibitory effects caused by mixtures of compounds is limited because most studies have been performed with single-substrate systems. [ 2 ]
One of the best-known equations describing single-substrate enzyme kinetics is the Michaelis-Menten equation. This equation relates the initial rate of reaction to the concentration of substrate present, and deviations of the model can be used to predict competitive inhibition and non-competitive inhibition . The model takes the form of the following equation:
ν = V m [ S ] K M + [ S ] {\displaystyle \nu ={\frac {V_{m}[S]}{K_{M}+[S]}}} ( Michaelis-Menten equation)
Where
K M {\displaystyle K_{M}} is the Michaelis constant
ν {\displaystyle \nu } is the initial reaction rate
V m {\displaystyle V_{m}} is the maximum reaction rate
If the inhibitor is different from the substrate, then competitive inhibition will increase K M while V m remains the same, and non-competitive inhibition will decrease V m while K M remains the same. However, under substrate inhibition, where two molecules of the same substrate bind to the active and inhibitory sites, the reaction rate will reach a peak value before decreasing. The reaction rate will either decrease to zero under complete inhibition or decrease to a non-zero asymptote under partial inhibition. [ 3 ] This can be described by the Haldane (or Andrew) equation, [ 4 ] a common deviation of the Michaelis-Menten equation, which takes the following form:
ν = V m [ S ] K M + [ S ] + [ S ] 2 K I {\displaystyle \nu ={\frac {V_{m}[S]}{K_{M}+[S]+{\frac {[S]^{2}}{K_{I}}}}}} ( Haldane equation for single-substrate inhibition of enzymatic reaction rate )
Where
K I {\displaystyle K_{I}} is the inhibition constant
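As a sketch of the qualitative difference between the two rate laws, the Haldane rate peaks at [S] = √(K M ·K I ) and then declines, while the Michaelis-Menten rate rises monotonically toward V m . The constants below are invented for illustration, not taken from the article:

```python
import math

Vm, Km, Ki = 1.0, 0.5, 8.0  # illustrative values only

def michaelis_menten(s):
    return Vm * s / (Km + s)

def haldane(s):
    return Vm * s / (Km + s + s**2 / Ki)

s_peak = math.sqrt(Km * Ki)  # = 2.0 with these constants
for s in (0.5, s_peak, 10.0, 50.0):
    print(f"S={s:5.1f}  MM={michaelis_menten(s):.3f}  Haldane={haldane(s):.3f}")
# The Haldane column rises to ~0.667 at S=2 and then falls,
# while the Michaelis-Menten column keeps rising toward Vm = 1.0.
```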
Bioreactor cell growth kinetics is analogous to the equations presented in enzyme kinetics. Under non-inhibiting single-substrate conditions, the specific growth rate of biomass can be modeled by the well-known Monod equation. The Monod equation models the growth of organisms during substrate-limiting conditions, and its parameters are determined through experimental observation. The Monod equation is based on a single substrate-consuming enzyme system that follows the Michaelis-Menten equation. [ 1 ] The Monod equation takes the following familiar form:
μ = μ m [ S ] K S + [ S ] {\displaystyle \mu ={\frac {\mu _{m}[S]}{K_{S}+[S]}}} ( Monod equation)
Where:
K S {\displaystyle K_{S}} is the saturation constant
μ {\displaystyle \mu } is the specific growth rate
μ m {\displaystyle \mu _{m}} is the maximum specific growth rate
Under single-substrate inhibiting conditions, the Monod equation is no longer suitable, and the most common Monod derivative is once again in the form of the Haldane equation. [ 2 ] [ 5 ] [ 6 ] As in enzyme kinetics, the growth rate will initially increase as substrate is increased before reaching a peak and decreasing at high substrate concentrations. Reasons for substrate inhibition in bioreactor cell growth include osmotic issues, viscosity, or inefficient oxygen transport due to overly concentrated substrate in the bioreactor medium. [ 1 ] Substrates that are known to cause inhibition include glucose, NaCl, and phenols, among others. [ 1 ] Substrate inhibition is also a concern in wastewater treatment , where among the most studied biodegradation substrates are the toxic phenols. [ 5 ] Due to their toxicity, there is large interest in the bioremediation of phenols, and it is well known that phenol inhibition can be modeled by the following Haldane equation: [ 5 ]
μ = μ m [ S ] K S + [ S ] + [ S ] 2 K I {\displaystyle \mu ={\frac {\mu _{m}[S]}{K_{S}+[S]+{\frac {[S]^{2}}{K_{I}}}}}} ( Haldane equation for single-substrate inhibition of cell growth )
Where:
K I {\displaystyle K_{I}} is the inhibition constant
There are several equations that have been developed to describe substrate inhibition. The two equations listed below are referred to as the non-competitive and competitive substrate inhibition models, respectively, by Shuler and Kargi in Bioprocess Engineering: Basic Concepts. Note that the Haldane equation above is a special case of the following non-competitive substrate inhibition model, in the limit K I >> K S . [ 1 ]
μ = μ m ( 1 + K S [ S ] ) ( 1 + [ S ] K I ) {\displaystyle \mu ={\frac {\mu _{m}}{(1+{\frac {K_{S}}{[S]}})(1+{\frac {[S]}{K_{I}}})}}} ( non-competitive single-substrate inhibition )
μ = μ m [ S ] K S ( 1 + [ S ] K I ) + [ S ] {\displaystyle \mu ={\frac {\mu _{m}[S]}{K_{S}(1+{\frac {[S]}{K_{I}}})+[S]}}} ( competitive single-substrate inhibition )
These equations also have enzymatic counterparts, where they commonly describe the interactions between substrate and inhibitors at the active and inhibitory sites. The concept of competitive and non-competitive substrate inhibition is better defined in enzyme kinetics, but these analogous equations also apply to cell growth models.
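A quick numeric check of the special-case claim above (all values are assumptions for illustration, not from the cited texts):

```python
mu_m, Ks, Ki = 1.0, 0.1, 100.0  # KI >> KS, illustrative values

def noncompetitive(s):
    return mu_m / ((1 + Ks / s) * (1 + s / Ki))

def haldane(s):
    return mu_m * s / (Ks + s + s**2 / Ki)

# Expanding the non-competitive denominator gives
# Ks + S + S^2/KI + (Ks/KI)*S; the extra term vanishes when KI >> KS.
for s in (0.05, 1.0, 10.0):
    print(f"S={s:5.2f}  non-competitive={noncompetitive(s):.4f}"
          f"  Haldane={haldane(s):.4f}")
# With these constants the two columns agree to within about 0.1%.
```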
Substrate inhibition can be characterized by a high substrate concentration and decreased growth rate, resulting in decreased bioreactor outputs. The most common solution is to change the growth from a batch process to a fed-batch process. Other methods to overcome substrate inhibition include the addition of another substrate type in order to develop alternative metabolic pathways , immobilizing the cells or increasing the biomass concentration. [ 2 ] [ 3 ]
A fed-batch process is the most common way to decrease the effects of substrate inhibition. Fed-batch processes are characterized by the continuous addition of bioreactor media (which includes the substrate) into the inoculum (cellular solution). The addition of media increases the overall volume within the reactor along with the substrate and other growth materials. A fed-batch process will also have an output flow of the substrate/cell/product mixture, which can be collected to retrieve the desired product. Fed-batch operation is a good way to overcome substrate inhibition because the amount of substrate can be changed at various points in the growth process. This allows the bioreactor technician to provide the cells with the amount of substrate they need rather than providing them too much or too little. [ 1 ] [ 2 ]
Other methods to overcome substrate inhibition include the use of Two Phase Partitioning Bioreactors, the immobilization of cells, and increasing the biomass concentration in the bioreactor.
Two Phase Partitioning Bioreactors are able to reduce the aqueous-phase substrate concentration by storing substrate in an alternative phase, from which it can be re-released into the biomass based on metabolic demand. The cell immobilization method works by encapsulating the cells in a material that makes the removal of inhibitory compounds easier; the matrix formed around the cells acts as a protective barrier against the inhibitory effects of toxic materials. Increasing the cell concentration is done by supporting the cellular material on a scaffold to create a biofilm . Biofilms allow for extremely high cell concentrations while preventing the overgrowth of inhibitory substrates. [ 2 ]
The impact on product production depends on how the product is created. Substrate inhibition affects products produced by enzymatic reactions differently from growth-associated product formation. In enzymatic product production, substrate inhibition reduces the enzyme's activity, which lowers the reaction rate and the rate of product formation. If a product is instead produced by growing cells, then substrate inhibition reduces product formation by limiting the growth of the cells.
There are multiple relationships that may exist between the rate of product formation, the specific rate of substrate consumption, and specific growth rate. The following equations demonstrate the relationship between cell growth and product production for growth associated production. The parameters q p {\displaystyle q_{p}} and μ {\displaystyle \mu } (specific rate of product formation and specific growth rate respectively) are defined below. [ 1 ]
q p = 1 X d P d t {\displaystyle q_{p}={\frac {1}{X}}{\frac {dP}{dt}}} ( specific rate of product formation )
μ = 1 X d X d t {\displaystyle \mu ={\frac {1}{X}}{\frac {dX}{dt}}} ( specific growth rate )
Where X {\displaystyle X} is the cell concentration, and P {\displaystyle P} is the product concentration.
The product formation and cell growth are both directly linked to the amount of substrate consumed through the yield coefficients, Y P S {\displaystyle Y_{\frac {P}{S}}} and Y X S {\displaystyle Y_{\frac {X}{S}}} respectively. These coefficients can be combined to define a yield coefficient, Y P X {\displaystyle Y_{\frac {P}{X}}} , that relates the product production to cell growth. [ 7 ]
Y P S Y X S = Y P X = Δ P Δ X {\displaystyle {\frac {Y_{\frac {P}{S}}}{Y_{\frac {X}{S}}}}=Y_{\frac {P}{X}}={\frac {\Delta P}{\Delta X}}}
This yield coefficient can be further used to directly relate the rate of change of product to the rate of change of cell growth [ 8 ]
d P d t = Y P X d X d t → d P d t = Y P X X 1 X d X d t {\displaystyle {\frac {dP}{dt}}=Y_{\frac {P}{X}}{\frac {dX}{dt}}\rightarrow {\frac {dP}{dt}}=Y_{\frac {P}{X}}X{\frac {1}{X}}{\frac {dX}{dt}}}
Rearranging this equation gives the following relationship between the specific rate of product formation and the specific growth rate of the cells for growth associated products.
q P = Y P X μ {\displaystyle q_{P}=Y_{\frac {P}{X}}\mu }
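A worked numeric example of this chain of relations, with assumed illustrative yield values (not from the article):

```python
# Assumed yields: 0.3 g product and 0.5 g cells per g substrate consumed.
Y_ps, Y_xs = 0.3, 0.5
Y_px = Y_ps / Y_xs    # 0.6 g product per g cells, from Y_P/S divided by Y_X/S

mu = 0.2              # assumed specific growth rate, 1/h
q_p = Y_px * mu       # specific product formation rate
print(q_p)            # 0.12 g product / (g cells * h)
```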
The above relationships demonstrate that for a growth-associated product, the specific growth rate is directly proportional to the specific rate of product formation. Furthermore, substrate inhibition limits the specific growth rate, which reduces the final biomass concentration. Increasing the substrate concentration may increase the viscosity of the media, lower the rate of oxygen diffusion, and affect the osmolarity of the system. [ 1 ] These effects can be detrimental to cell growth and, by extension, the yield of product. | https://en.wikipedia.org/wiki/Substrate_inhibition_in_bioreactors |
In molecular biology , substrate presentation is a biological process that activates a protein . The protein is sequestered away from its substrate and then activated by release and exposure to its substrate. [ 1 ] [ 2 ] A substrate is typically the substance on which an enzyme acts but can also be a protein surface to which a ligand binds. In the case of an interaction with an enzyme, the protein or organic substrate typically changes chemical form. Substrate presentation differs from allosteric regulation in that the enzyme need not change its conformation to begin catalysis . Substrate presentation is best described for domain partitioning at nanoscopic distances (<100 nm). [ 3 ]
Amyloid precursor protein (APP) is cleaved by beta and gamma secretase to yield a 40-42 amino acid peptide responsible for amyloid plaques associated with Alzheimer's disease . The secretase enzymes are regulated by substrate presentation. [ 4 ] The substrate APP is palmitoylated and moves in and out of GM1 lipid rafts in response to astrocyte cholesterol. Cholesterol delivered by apolipoprotein E (ApoE) drives APP to associate with GM1 lipid rafts. When cholesterol is low, the protein traffics to the disordered region and is cleaved by alpha secretase to produce a non-amylogenic product. The enzymes do not appear to respond to cholesterol, only the substrate moves.
Hydrophobicity drives the partitioning of molecules. In the cell, this gives rise to compartmentalization within the cell and within cell membranes . For lipid rafts, palmitoylation regulates raft affinity for the majority of integral raft proteins. [ 5 ] Raft function is regulated by cholesterol signaling and spatial biology.
Phospholipase D2 ( PLD2 ) is a well-defined example of an enzyme activated by substrate presentation. [ 6 ] The enzyme is palmitoylated, causing it to traffic to GM1 lipid domains or " lipid rafts ". The substrate of phospholipase D is phosphatidylcholine (PC), which is unsaturated and is of low abundance in lipid rafts. PC localizes to the disordered region of the membrane along with the polyunsaturated lipid phosphatidylinositol 4,5-bisphosphate ( PIP2 ). PLD2 has a PIP2 binding domain . When the PIP2 concentration in the membrane increases, PLD2 leaves the GM1 domains and associates with PIP2 domains, where it gains access to its substrate PC and commences catalysis based on substrate presentation. Presumably, the enzyme is capable of catalyzing a reaction in a lipid raft but lacks a substrate for activity.
ADAM17 , also called TACE, is sequestered into lipid rafts away from its substrate, membrane-bound tumor necrosis factor (mTNF). [ 7 ] Cholesterol causes mTNF to cluster with ADAM17 in lipid rafts and shed soluble TNF (sTNF), which is an inflammatory cytokine.
Receptor Tyrosine Kinases are cell surface receptors that bind to various polypeptide growth factors, cytokines, and hormones. Activation of RTKs is driven by palmitoylation and dimerization, a process facilitated by cholesterol within lipid rafts. [ 8 ] [ 9 ] Once dimerized, the receptor undergoes autophosphorylation, which triggers a subsequent phosphorylation cascade. This is a specific case where the substrate and the enzyme are the same molecule.
Protein Kinase C (PKC) is a class of enzymes that phosphorylates proteins. Its substrates are typically on the membrane surface where the enzyme is recruited by the lipid diacylglycerol. Thus a portion of PKC activation is through substrate presentation, i.e., by localization with its substrate on the membrane.
Furin (producing cell, replication): when cells are loaded with cholesterol, furin traffics to GM1 lipid rafts, where it is localized with the palmitoylated spike protein of SARS-CoV-2 and primes it for viral entry. [ 10 ]
ACE2 (target cell, viral entry): the receptor for SARS-CoV-2, ACE2 traffics SARS-CoV-2 to GM1 lipid rafts, where the virus is endocytosed and exposed to cathepsin for cleavage and optimal cell fusion. [ 11 ] [ 12 ] In low cholesterol, ACE2 traffics the virus to TMPRSS2, which also cleaves and allows viral entry, but through a putative surface mechanism that is much less efficient. The sensitivity of ACE2 to cholesterol is thought to contribute to less severe COVID-19 symptoms in children.
Sequestration is the process of moving a molecule to a lipid raft. Within the plasma membrane, sequestration is primarily driven by the packing of saturated lipids with cholesterol or by phase separation at very small distances (< 100 nm). At a macroscopic level, organelles and vesicles can limit an enzyme's access to its substrate.
Sequestration can both elevate and reduce the concentration of a protein in proximity to its substrate. When the substrate is present within a lipid raft, sequestration leads to an increased concentration of the protein near the substrate. Conversely, if the substrate is excluded from a lipid raft, sequestration results in decreased interaction between the protein and the substrate, as seen with PLD2.
Either the substrate or the enzyme can move. Movement is typically the disruption of palmitate-mediated localization or organelle trafficking . For proteins that are both palmitoylated and bind PIP2, increasing the concentration of PIP2 favors trafficking of the enzyme out of lipid rafts to PIP2. PIP2 is primarily polyunsaturated, which causes the lipid to localize away from lipid rafts and allows PIP2 to oppose palmitate-mediated localization. [ 13 ]
Cholesterol and polyunsaturated fatty acids (PUFAs) regulate lipid raft formation, hence the biological function of rafts. When saturated lipids and cholesterol increase in the membrane, lipid rafts increase their affinity for palmitoylated proteins. [ 14 ] PUFAs have the opposite effect, they fluidize the membrane.
PUFAs may also increase the concentration of signaling lipids. Arachidonic acid, a very common PUFA in the brain, incorporates into PC and PIP2. [ 15 ] Arachidonyl PC is a preferred substrate of PLD, likely increasing the amount of phosphatidic acid (PA) in a cell. Regulation of raft function by cholesterol effectively regulates substrate presentation and the many palmitoylated proteins that utilize substrate presentation as a mechanism of activation. While speculative, the profound effect of cholesterol and PUFAs on human health is likely exerted through physiological regulation of lipid raft function in cells.
Mechanical force (shear or swell) can independently disrupt the packing and resultant affinity of palmitate to lipid rafts. This disruption also causes PLD2 to favor trafficking to PIP2 domains. [ 16 ] The mechanosensitive ion channel TREK-1 is released from cholesterol dependent lipid rafts in response to mechanical force. This has the effect of dampening pain. [ 17 ]
Membrane-mediated anesthesia employs substrate presentation. The general anesthetic propofol and the inhaled anesthetics xenon , chloroform , isoflurane , and diethyl ether disrupt lipid raft function and the palmitate-mediated localization of PLD2 to lipid rafts. [ 18 ] [ 19 ] Activation of PLD then activates TREK-1 channels. The membrane-mediated PLD2 activation could be transferred to an anesthetic-insensitive homolog, TRAAK, rendering the channel anesthetic-sensitive. | https://en.wikipedia.org/wiki/Substrate_presentation |
In logic , a substructural logic is a logic lacking one of the usual structural rules (e.g. of classical and intuitionistic logic ), such as weakening , contraction , exchange or associativity. Two of the more significant substructural logics are relevance logic and linear logic .
In a sequent calculus , one writes each line of a proof as

Γ ⊢ Σ.

Here the structural rules are rules for rewriting the LHS of the sequent, denoted Γ, initially conceived of as a string (sequence) of propositions. The standard interpretation of this string is as conjunction : we expect to read

A, B ⊢ C

as the sequent notation for (A and B) implies C.
Here we are taking the RHS Σ to be a single proposition C (which is the intuitionistic style of sequent); but everything applies equally to the general case, since all the manipulations are taking place to the left of the turnstile symbol ⊢ {\displaystyle \vdash } .
Since conjunction is a commutative and associative operation, the formal setting-up of sequent theory normally includes structural rules for rewriting the sequent Γ accordingly; for example, the exchange rule allows deducing

Γ, B, A ⊢ C

from

Γ, A, B ⊢ C.
There are further structural rules corresponding to the idempotent and monotonic properties of conjunction: from

Γ, A, A ⊢ C

we can deduce

Γ, A ⊢ C

(the contraction rule).
Also, from

Γ, A ⊢ C

one can deduce, for any B,

Γ, A, B ⊢ C

(the weakening rule).
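For reference, the same three rules can be written compactly as inference rules; this is a standard presentation, supplied here for clarity:

```latex
% Exchange, contraction and weakening as sequent-calculus inference rules.
\[
\frac{\Gamma, A, B \vdash C}{\Gamma, B, A \vdash C}\;(\text{exchange})
\qquad
\frac{\Gamma, A, A \vdash C}{\Gamma, A \vdash C}\;(\text{contraction})
\qquad
\frac{\Gamma, A \vdash C}{\Gamma, A, B \vdash C}\;(\text{weakening})
\]
```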
Linear logic , in which duplicated hypotheses 'count' differently from single occurrences, leaves out both of these rules, while relevant (or relevance) logics merely leave out the latter rule, on the grounds that B is clearly irrelevant to the conclusion.
The above are basic examples of structural rules. It is not that these rules are contentious, when applied in conventional propositional calculus. They occur naturally in proof theory, and were first noticed there (before receiving a name).
There are numerous ways to compose premises (and in the multiple-conclusion case, conclusions as well). One way is to collect them into a set. But since e.g. {a,a} = {a} we have contraction for free if premises are sets. We also have associativity and permutation (or commutativity) for free as well, among other properties. In substructural logics, typically premises are not composed into sets, but rather they are composed into more fine-grained structures, such as trees or multisets (sets that distinguish multiple occurrences of elements) or sequences of formulae. For example, in linear logic, since contraction fails, the premises must be composed in something at least as fine-grained as multisets.
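A small illustration of how the choice of premise container changes which structural rules come "for free" (a sketch in Python, not taken from the article):

```python
from collections import Counter

# Premises as a set: duplicates collapse, so contraction holds automatically.
assert {"A", "A"} == {"A"}

# Premises as a multiset: multiple occurrences are distinguished,
# so contraction becomes a genuine, optional rule (as in linear logic).
assert Counter(["A", "A"]) != Counter(["A"])

# Premises as a sequence: order matters too, so exchange is also optional.
assert ["A", "B"] != ["B", "A"]
```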
Substructural logics are a relatively young field. The first conference on the topic was held in October 1990 in Tübingen, as "Logics with Restricted Structural Rules". During the conference, Kosta Došen proposed the term "substructural logics", which remains in use today. | https://en.wikipedia.org/wiki/Substructural_logic |
The substructure of a building transfers the load of the building to the ground and isolates it horizontally from the ground. This includes foundations and basement retaining walls . [ 1 ] It is differentiated from the superstructure .
It safeguards the building against forces such as wind, uplift, and soil pressure. It provides a level and firm surface for the construction of the superstructure . It also prevents unequal or differential settlement and ensures the stability of the building against sliding, overturning, and undermining due to floodwater or burrowing animals.
| https://en.wikipedia.org/wiki/Substructure_(engineering) |
In mathematical logic , an ( induced ) substructure or ( induced ) subalgebra is a structure whose domain is a subset of that of a bigger structure, and whose functions and relations are restricted to the substructure's domain. Some examples of subalgebras are subgroups , submonoids , subrings , subfields , subalgebras of algebras over a field , or induced subgraphs . Shifting the point of view, the larger structure is called an extension or a superstructure of its substructure.
In model theory , the term " submodel " is often used as a synonym for substructure, especially when the context suggests a theory of which both structures are models.
In the presence of relations (i.e. for structures such as ordered groups or graphs , whose signature is not functional) it may make sense to relax the conditions on a subalgebra so that the relations on a weak substructure (or weak subalgebra ) are at most those induced from the bigger structure. Subgraphs are an example where the distinction matters, and the term "subgraph" does indeed refer to weak substructures. Ordered groups , on the other hand, have the special property that every substructure of an ordered group which is itself an ordered group, is an induced substructure.
Given two structures A and B of the same signature σ, A is said to be a weak substructure of B , or a weak subalgebra of B , if the domain of A is a subset of the domain of B , every function symbol of σ is interpreted in A as the restriction of its interpretation in B to the domain of A , and R^A ⊆ R^B ∩ A^n for every n-ary relation symbol R of σ.
A is said to be a substructure of B , or a subalgebra of B , if A is a weak subalgebra of B and, moreover, R^A = R^B ∩ A^n for every n-ary relation symbol R of σ.
If A is a substructure of B , then B is called a superstructure of A or, especially if A is an induced substructure, an extension of A .
In the language consisting of the binary operations + and ×, binary relation <, and constants 0 and 1, the structure ( Q , +, ×, <, 0, 1) is a substructure of ( R , +, ×, <, 0, 1). More generally, the substructures of an ordered field (or just a field ) are precisely its subfields. Similarly, in the language (×, ⁻¹, 1) of groups, the substructures of a group are its subgroups . In the language (×, 1) of monoids , however, the substructures of a group are its submonoids . They need not be groups; and even if they are groups, they need not be subgroups.
Subrings are the substructures of rings , and subalgebras are the substructures of algebras over a field .
In the case of graphs (in the signature consisting of one binary relation), the induced substructures of a graph are its induced subgraphs , and its weak substructures are precisely its subgraphs.
For every signature σ, induced substructures of σ-structures are the subobjects in the concrete category of σ-structures and strong homomorphisms (and also in the concrete category of σ-structures and σ- embeddings ). Weak substructures of σ-structures are the subobjects in the concrete category of σ-structures and homomorphisms in the ordinary sense.
In model theory, given a structure M which is a model of a theory T , a submodel of M in a narrower sense is a substructure of M which is also a model of T . For example, if T is the theory of abelian groups in the signature (+, 0), then the submodels of the group of integers ( Z , +, 0) are the substructures which are also abelian groups. Thus the natural numbers ( N , +, 0) form a substructure of ( Z , +, 0) which is not a submodel, while the even numbers (2 Z , +, 0) form a submodel.
In the category of models of a theory and embeddings between them, the submodels of a model are its subobjects . | https://en.wikipedia.org/wiki/Substructure_(mathematics) |
Substructure search ( SSS ) is a method to retrieve from a database only those chemicals matching a pattern of atoms and bonds which a user specifies. It is an application of graph theory , specifically subgraph matching in which the query is a hydrogen-depleted molecular graph . The mathematical foundations for the method were laid in the 1870s, when it was suggested that chemical structure drawings were equivalent to graphs with atoms as vertices and bonds as edges. SSS is now a standard part of cheminformatics and is widely used by pharmaceutical chemists in drug discovery .
There are many commercial systems that provide SSS, typically having a graphical user interface and chemical drawing software. Large publicly-available databases like PubChem and ChemSpider can be searched this way, as can Wikipedia 's articles describing individual chemicals.
Substructure search is used to retrieve from a database of chemicals those which contain the pattern of atoms and bonds specified by a user. It is implemented using a specialist type of query language and in real-world applications the search may be further constrained using logical operators on additional data held in the database. Thus "return all carboxylic acids where a sample of >1 g is available". [ 1 ] [ 2 ] One definition of "substructure" was provided in 2008: "given two chemical structures A and B, if structure A is fully contained in structure B, then A is a substructure of B, while B is a superstructure of A." [ 3 ]
molecular graph : The graph with differently labelled (coloured) vertices (chromatic graph) which represent different kinds of atoms and differently labelled (coloured) edges related to different types of bonds. Within the topological electron distribution theory, a complete network of the bond paths for a given nuclear configuration. [ 4 ]
In this definition, the word "structure" is not synonymous with " compound ". If it were, the structure for ethanol , CH₃CH₂OH, would not be a substructure of propanol , CH₃CH₂CH₂OH, since ethanol's terminal CH₃ has no counterpart in propanol two atoms away from the OH group. Instead the query structure is, formally, a hydrogen-depleted molecular graph . The search is thus for substances which contain three atoms and two single bonds connected as C–C–O. Propanol is a "hit", as is diethyl ether , with C–C–O–C–C. If a user wished to limit the hits to alcohols , then the query structure would have to be drawn with an "explicit hydrogen", as C–C–O–H, and ether would no longer match. [ 1 ] In mathematical terms, finding substructures is an application of graph theory , specifically subgraph matching . [ 5 ]
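The C–C–O example above can be reproduced with an open-source cheminformatics toolkit; the sketch below uses RDKit, a tool chosen here for illustration and not mentioned in the article:

```python
from rdkit import Chem

query = Chem.MolFromSmarts("CCO")          # hydrogen-depleted query C-C-O
alcohols = Chem.MolFromSmarts("CC[OX2H]")  # "explicit hydrogen" on the oxygen

for name, smiles in [("propanol", "CCCO"), ("diethyl ether", "CCOCC")]:
    mol = Chem.MolFromSmiles(smiles)
    print(name,
          mol.HasSubstructMatch(query),     # True for both: C-C-O is present
          mol.HasSubstructMatch(alcohols))  # True only for propanol
```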
Standard conventions used when chemists draw chemical structures [ 6 ] need to be considered when implementing substructure search. Historically, the representation of tautomer [ 7 ] forms and stereochemistry [ 8 ] has posed difficulties. This can be illustrated using histidine . [ 9 ]
The top row shows the standard two-dimensional chemical drawing for (S)-histidine (the natural isomer of this amino acid ), its enantiomer (R)-histidine, and a drawing which conventionally indicates the racemic mixture of equal amounts of the R and S forms. [ 10 ] The bottom row shows the same three compounds with the imidazole ring drawn in its alternative tautomer form. For histidine, it has been experimentally determined by 15 N NMR spectroscopy that the 1-H tautomer is preferred over the 3-H form in samples. [ 11 ] The choice of representation for storage in a database can influence substructure searches. All six drawings are hits for a propanol substructure C–C–C–O, as shown in red. However, only the top row would, apparently, be a hit for the blue substructure of 1-H imidazole-4-methyl, as this is not fully contained in the other three compounds. In fact, each vertical pair is the same chemical substance: tautomers in general cannot be isolated as separate samples. [ 7 ] In modern databases, substances are held in a single canonical form , with checks made for uniqueness. The InChIKey provides one way to do this. [ 9 ] (S)-Histidine's standard key is HNDVDQJCIGZPNO-YFKPBYRVSA-N, [ 12 ] (R)-histidine's key is HNDVDQJCIGZPNO-RXMQYKEDSA-N [ 13 ] and (RS)-histidine's is HNDVDQJCIGZPNO-UHFFFAOYSA-N. [ 14 ] The first block of 14 letters is identical for all these substances, as it encodes the molecular graph. [ 9 ]
Most substructure search systems present the user with a graphical user interface with a chemical structure drawing component. Query structures may contain bonding patterns such as "single/aromatic" or "any" to provide flexibility. Similarly, the vertices which in an actual compound would be a specific atom may be replaced with an atom list in the query. Cis – trans isomerism at double bonds is catered for by giving a choice of retrieving only the E form , the Z form , or both. [ 1 ] [ 15 ]
The algorithms for searching are computationally intensive, often of O(n³) or O(n⁴) time complexity (where n is the number of atoms involved), and the general subgraph-matching problem is known to be NP-complete . [ 16 ] Speedups are achieved by using fragment screening as a first step. This pre-computation typically involves creating bitstrings representing the presence or absence of molecular fragments. Target compounds that do not possess the fragments present in the query cannot be hits and are eliminated. [ 17 ] [ 18 ] Atom-by-atom searching, in which a mapping of the query's atoms and bonds onto the target molecule is sought, is usually done with a variant of the Ullmann algorithm. [ 5 ] [ 19 ]
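A minimal sketch of the fragment-screening prefilter described above; the fragment names and bit assignments here are invented for illustration:

```python
# Each structure is summarised as a bitmask over a fixed fragment dictionary.
FRAGMENTS = {"C-C": 1 << 0, "C-O": 1 << 1, "C=O": 1 << 2, "ring": 1 << 3}

def may_match(query_bits: int, target_bits: int) -> bool:
    # Necessary condition only: every fragment in the query must be present
    # in the target, otherwise the costly atom-by-atom search is skipped.
    return query_bits & target_bits == query_bits

ethanol = FRAGMENTS["C-C"] | FRAGMENTS["C-O"]
acetone = FRAGMENTS["C-C"] | FRAGMENTS["C=O"]
query   = FRAGMENTS["C-C"] | FRAGMENTS["C-O"]   # the C-C-O query

print(may_match(query, ethanol))  # True: passes screen, do subgraph matching
print(may_match(query, acetone))  # False: eliminated without graph matching
```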
As of 2024 [update] , substructure search is a standard feature in chemical databases accessible via the web . Large databases such as PubChem , [ 20 ] [ 15 ] maintained by the National Center for Biotechnology Information and ChemSpider , [ 21 ] maintained by the Royal Society of Chemistry have graphical interfaces for search. The Chemical Abstracts Service , a division of the American Chemical Society , provides tools to search the chemical literature and Reaxys supplied by Elsevier covers both chemicals and reaction information, including that originally held in the Beilstein database . [ 22 ] PATENTSCOPE maintained by the World Intellectual Property Organization makes chemical patents accessible by substructure [ 23 ] and Wikipedia's articles describing individual chemicals can also be searched that way. [ 24 ]
Suppliers of chemicals as synthesis intermediates or for high-throughput screening routinely provide search interfaces. Currently, the largest database that can be freely searched by the public is the ZINC database , which is claimed to contain over 37 billion commercially available molecules. [ 25 ] [ 26 ]
The idea that chemical structures as depicted using drawings of the type introduced by Kekulé were related to what is now called graph theory was suggested by the mathematician J. J. Sylvester in 1878. He was the first to use the word "graph" in the sense of a network . [ 27 ] [ 28 ] Arthur Cayley had already, in 1874, considered how to enumerate chemical isomers , in what was an early approach to molecular graphs , where atoms are at vertices and bonds correspond to edges . [ 29 ] [ 30 ]
structural formula : A formula which gives information about the way the atoms in a molecule are connected and arranged in space. [ 31 ]
In the 20th century, chemists developed standard ways to show structural formula , especially for individual organic compounds that were increasingly being synthesized and tested as potential drugs or agrochemicals, [ 32 ] [ 6 ] By the 1950s, as the number of compounds made and tested grew, the first attempts to create chemical databases were made and the sub-discipline of cheminformatics was established. [ 33 ] As stated in 2012, "searching for substructures in molecules belongs to the most elementary tasks in cheminformatics and is nowadays part of virtually every cheminformatics software". [ 34 ]
The first suggested use for substructure search was in 1957, to reduce the workload of patent examiners . They have to search published literature to decide whether an invention is novel, which for chemical patents often means finding known examples within the generic claims of a Markush structure. [ 35 ] [ 33 ] Before this could become a reality, a number of developments were required. Importantly, the existing literature had to be made searchable, and a way to input a chemical structure query and return the matching results had to be devised. These requirements had been partially met as early as 1881 when Friedrich Konrad Beilstein introduced the Handbuch der organischen Chemie ( Handbook of Organic Chemistry ) which carefully classified known chemicals in a very systematic manner so that all examples containing a given heterocycle would be located together. [ 36 ] [ 37 ]
In 1907, the American Chemical Society set up the Chemical Abstracts Service (CAS). This weekly subscription service included a printed publication with summaries of articles in thousands of scholarly journals and claims in worldwide patents. This had a chemical substance index that, in principle, allowed searching by chemical name or formula. [ 38 ] However, it was only when the CAS records had been fully converted into machine-readable form and the internet was available to connect its database to end-users that comprehensive searching became possible. CAS provided various specialist search services from the 1980s but it was not until 2008 that its "SciFinder" system became available via the web . [ 39 ]
By the 1960s, companies synthesizing and testing new chemicals made significant progress in creating in-house databases. Imperial Chemical Industries stored chemical structures encoded as text strings , using Wiswesser line notation . Its associated CROSSBOW software allowed substructure search using key-based searches followed by more processor-intensive atom-by-atom search. [ 40 ] [ 41 ] It was recognised that research chemists wanted not only to search company collections for existing inventory but also to search third-party databases supplied by vendors of small-molecule intermediates. The latter application evolved as a collaboration involving six companies with pharmaceutical interests and their commercial suppliers. [ 42 ] [ 9 ]
By the 1980s, other line notations were used for commercially-available substructure search systems. SMILES encoding, together with its SMARTS query language, [ 43 ] and SYBYL line notation [ 9 ] [ 44 ] are examples. [ 45 ] A comprehensive survey of then-available chemical information systems was produced for NASA in 1985. [ 46 ]
The need to combine chemistry search with biological data produced by screening compounds at ever-larger scales led to implementation of systems such as MACCS. [ 46 ] : 73–77 [ 47 ] This commercial system from MDL Information Systems made use of an algorithm specifically designed for storage and search within groups of chemicals that differed only in their stereochemistry. [ 48 ] A review of the many systems available by the mid-1980s pointed out that "most in-house developed systems have been replaced with commercially available standardised software for managing chemical structure databases." [ 49 ] The MDL Molfile is now an open file format for storing single-molecule data in the form of a connection table. [ 50 ] [ 9 ]
By the 2000s, personal computers had become powerful enough that storage and search of chemistry within office software such as Microsoft Excel was possible. [ 51 ]
Subsequent developments involved the use of new techniques to allow efficient searches over very large databases and, importantly, the use of a standardised International Chemical Identifier , a type of line notation, to uniquely define a chemical substance. [ 9 ] [ 25 ] [ 52 ] [ 53 ] | https://en.wikipedia.org/wiki/Substructure_search |