| text | source |
|---|---|
The Institute of Physical Chemistry of the Polish Academy of Sciences ( Polish : Instytut Chemii Fizycznej Polskiej Akademii Nauk, IChF PAN) is one of numerous institutes belonging to the Polish Academy of Sciences . As its name suggests, the institute's primary research interests are in the field of physical chemistry .
The Institute was established by a resolution of the Presidium of the Government of the Polish People's Republic on 19 March 1955. It was the first chemical institute of the Polish Academy of Sciences. Its tasks were defined at that time: "The Institute of Physical Chemistry covers research on current issues of physical chemistry important from the point of view of the development of chemical sciences and the needs of the national economy".
At the beginning of its activity, the institute's main task was to prepare scientific staff able to conduct research in the field of physical chemistry. Staff development was made easier by the fact that the institute's scientific workers did not carry the teaching duties required in higher education institutions.
The first Director of the Institute and, at the same time, the Chairman of the Scientific Council of the Institute was prof. Wojciech Świętosławski. The subsequent directors of the Institute were prof. Michał Śmiałowski (1960–1973), prof. Wojciech Zielenkiewicz (1973–1990), prof. Jan Popielawski (1990–1992), prof. Janusz Lipkowski (1992–2003), prof. Aleksander Jabłoński (2003–2011), prof. Robert Hołyst (2011–2015), prof. Marcin Opałło (2015–2023) and dr hab. Adam Kubas (since 2023). [ 1 ]
Over the following years, the structure of the IChF changed, the number of employees increased, and new research topics emerged, which is reflected in the current structure of the Institute.
The Institute is divided into research departments, within which research teams operate: [ 2 ]
Team leaders: prof. M. Wojtkowski, dr. Jan Guzowski and dr. hab. Jan Paczesny
Team leaders: dr. hab. Jacek Gregorowicz, dr. hab. Volodymyr Sashuk, prof. Robert Hołyst, prof. Piotr Garstecki, dr. hab. Marco Costantini
Team leaders: dr. hab. Zbigniew Kaszkur, prof. Rafał Szmigielski and dr. hab. Juan Carlos Colmenares Quintero
Team leaders: prof. Joanna Niedziółka-Jönsson, dr hab. Martin Jönsson-Niedziółka, dr Wojciech Nogala, prof. Marcin Opałło and dr inż. Emilia Witkowska-Nery
Team leaders: dr hab. Wojciech Góźdź and prof. Jerzy Górecki
Team leaders: dr hab. Agnieszka Michota-Kamińska, dr hab. Gonzalo Angulo Nunez, prof. Robert Kołos, dr hab. Yuriy Stepanenko and prof. Jacek Waluk
Leaders: prof. Janusz Lewiński, dr Bartłomiej Wacław, dr Piyush Sindhu Sharma, dr hab. Adam Kubas, prof. Robert Nowakowski, dr hab. Daniel Prochowicz and dr Tomasz Ratajczyk
The work conducted by the Institute has given rise to five companies, operating mainly in the field of medical diagnostics: [ 4 ]
| https://en.wikipedia.org/wiki/Institute_of_Physical_Chemistry_of_the_Polish_Academy_of_Sciences |
The Thomson Medal and Prize is an award that has been made since 2008, originally only biennially in even-numbered years, by the British Institute of Physics for "distinguished research in atomic (including quantum optics ) or molecular physics ". It is named after Nobel prizewinner Sir J. J. Thomson , the British physicist who demonstrated the existence of electrons, and comprises a silver medal and a prize of £1000. [ 1 ]
Not to be confused with the J. J. Thomson IET Achievement Medal for electronics.
The following have received a medal: [ 2 ] | https://en.wikipedia.org/wiki/Institute_of_Physics_Joseph_Thomson_Medal_and_Prize |
The Institute of Problems of Chemical Physics (IPCP) [ 1 ] ( Russian : Институт проблем химической физики РАН ) of the Russian Academy of Sciences (RAS) consists of 10 scientific departments and about 100 laboratories, each run by an independent research group.
IPCP was established in 1956 as a branch of the Moscow Institute of Chemical Physics. | https://en.wikipedia.org/wiki/Institute_of_Problems_of_Chemical_Physics |
The Institute of Professional Sound , previously the Institute of Broadcast Sound , is an organisation for audio professionals. The organisation provides opportunities for training and conferencing to assist in maintaining high standards in all areas of professional audio operations. The organisation is based in the UK .
The organisation was founded in 1977 by sound balancers in BBC Television and Radio and Independent TV , when its membership comprised audio practitioners working in all areas of broadcast audio including radio, location, and post-production sound. On 1 January 2012 the Institute of Professional Sound was adopted as the new name of the organisation, in order to attract a wider membership which is not exclusively from broadcasting. [ 1 ]
The Institute of Professional Sound was established in 1977 as the Institute of Broadcast Sound, by individuals working in radio and television, who recognised a need for a coordinated means for the exchange of innovative ideas between practitioners in the field of broadcast audio. [ 1 ] The organisation serves as a catalyst to promote collaborative initiatives between manufacturers of digital audio recording and editing equipment. Listed among the successes of the organisation is the File Exchange Initiative, from which the iXML specification was established, setting an open standard for the inclusion of location sound metadata in Broadcast Wave audio files.
The Institute of Professional Sound offers mentoring and career enhancement opportunities for entry-level employees, college graduates, and seasoned professionals. The mentoring program is designed to pair members who wish to help their colleagues progress and succeed in their careers with individuals seeking the advice and support of more experienced practitioners in the industry. [ 2 ]
The organisation provides training forums and conferences for its members which introduce members to emerging technologies, along with seminars on microphone placement and other operational issues. [ 3 ]
The Institute contributes to the ongoing discussions with Ofcom , regarding the changes to the management and use of the radio frequency spectrum, where it represents several hundred members who are independent users of radio microphone and associated equipment. [ 4 ]
Started in June 1995 with just 10 participants, the IPS's Internet email conference IBSNET has over 500 participants (as of 2012). Members include individuals from the UK , Germany, Austria, United States, Malaysia , Australia, Japan, and New Zealand. The conference provides opportunities for comment and feedback regarding professional standards, working conditions, visa requirements, and radio microphone frequencies in other countries, in addition to putting location recordists in contact with one another and with the dubbing mixers, who may ultimately use their work. [ citation needed ] | https://en.wikipedia.org/wiki/Institute_of_Professional_Sound |
The Institute of Quarrying is the international professional body for quarrying , construction materials and the related extractive and processing industries. The Institute's long-term objective is to promote progressive improvements in all aspects of operational performance of the extractives industry through education and training. The Institute has been supporting the extractives industry and associated sectors since 1917.
The Institute was founded on 19 October 1917 from a meeting of “The Association of Quarry Managers” in Caernarfon in North Wales . Anne Greaves was the first woman to become a member of the Institute of Quarrying in 1925. Gradually expanding over the years, IQ now has affiliate organisations in Australia , New Zealand , Malaysia , Southern Africa and Hong Kong .
In September 2012 the Institute moved to its new premises at McPherson House, named after its founder Simon McPherson, in Chilwell , Nottingham .
The largest membership group remains in the UK, where the Institute was founded in 1917. Australia constitutes the largest group in the Pacific region, and close ties are maintained with its neighbours in New Zealand and Malaysia. To the north, members are based in Hong Kong, operating both in the territory and in China. The Institute's activities in Southern Africa are centred on South Africa, which provides support for members in other countries of the region.
It has thirteen regional branches in the UK.
It regulates the quarrying industry, providing training and consultation for standards in the industry, similar to other engineering professional bodies. | https://en.wikipedia.org/wiki/Institute_of_Quarrying |
Institute of Solid State Chemistry and Mechanochemistry of the Siberian Branch of the RAS ( Russian : Институт химии твердого тела и механохимии СО РАН ) is a research institute in Novosibirsk , Russia. It was founded in 1944.
Institute of Solid State Chemistry and Mechanochemistry is one of the oldest scientific institutes in Siberia . It was founded in 1944 as the Chemical and Metallurgical Institute. Five years later, thanks to the institute, a ceramic pipe plant was launched in Dorogino. Later, the institute became part of the Siberian Branch of the USSR Academy of Sciences.
In 1964, the scientific organization was renamed the Institute of Physicochemical Principles of Mineral Raw Materials Processing, and in 1980, it was renamed the Institute of Solid State Chemistry and Mineral Raw Materials Processing. In 1997, the institute was renamed the Institute of Solid State Chemistry and Mechanochemistry.
The institute is located in Tsentralny District ( Frunze Street 13) and Akademgorodok . | https://en.wikipedia.org/wiki/Institute_of_Solid_State_Chemistry_and_Mechanochemistry |
The Institut au service du spatial, de ses applications et technologies ( ISSAT ) (in English, the Institute of Space, its Applications and Technologies [ 1 ] ) is a French association supported by the French Ministry of Education , [ 2 ] created in 1995 [ 3 ] in order to develop aerospace activities in Toulouse [ 4 ] and to help develop knowledge of aerospace in France and Europe. [ 5 ] [ 6 ]
The ISSAT has several natural person (individual) members and 7 legal person members: [ 7 ] | https://en.wikipedia.org/wiki/Institute_of_Space,_its_Applications_and_Technologies |
The Institute of Technical Mechanics of the National Academy of Sciences of Ukraine and the State Space Agency of Ukraine (ITM NASU and SSAU) is a research institution within the Mechanics Division of the National Academy of Sciences of Ukraine and the leading institute of the Ukrainian aerospace sector. [ 1 ]
The institute owed its establishment to Mikhail K. Yangel , one of the originators of rocket engineering in the USSR and Ukraine. Yangel was well aware that the advancement of space engineering had to build on the latest basic and applied research in engine engineering, heat-and-mass exchange and thermal protection, aero- and gas dynamics, novel materials and technologies, strength and reliability, structural optimization, and related fields.

In April 1966, on Academician Yangel's initiative, a new academic division was founded in Dnepropetrovsk: the Sector for Problems in Technical Mechanics, part of the Dnepropetrovsk Branch of the Institute of Mechanics of the Academy of Sciences of the Ukrainian Soviet Socialist Republic (Ukrainian SSR). The next stage in the institute's history came in April 1968, when the Sector was reorganized into the Dnepropetrovsk Division of the Institute of Mechanics of the Ukrainian SSR's Academy of Sciences. Vsevolod A. Lazaryan, Corresponding Member (from 1972, Academician) of the Ukrainian SSR's Academy of Sciences, was appointed Head of Division. In June 1968, the physics and metallurgy departments of the former branch of the B. Verkin Institute for Low Temperature Physics and Engineering of the Ukrainian SSR's Academy of Sciences were joined to the Division.

The scope of research at the Division grew from year to year, calling for the establishment of new departments. In February 1970, the research department of propulsion-system dynamics was established under the leadership of Viktor V. Pylypenko, D.Sc., with the task of augmenting the linear theory and devising a nonlinear theory of pogo self-oscillations of space vehicle structures, developing methods of dynamic characterization of liquid-propellant rocket engines and their components, and studying the dynamics of engine feed systems and transient regimes with consideration for cavitation in centrifugal pumps with an inducer. In 1973, the Sector for Problems in Space Engineering was structurally singled out at the Division. Until 1980, the sector was headed by Vasily S. Budnik , Academician of the Ukrainian SSR's Academy of Sciences.

In May 1980, the Division was transformed into the Institute of Technical Mechanics of the Ukrainian SSR's Academy of Sciences, headed by Viktor V. Pylypenko, Corresponding Member of the Ukrainian SSR's Academy of Sciences (now Academician of the National Academy of Sciences of Ukraine), D.Sc., Professor. At the institute, a number of laboratories were organized and equipped with up-to-date testing facilities: laboratories of hydrodynamics, plasmadynamics, vacuum and aerodynamic engineering, dynamic testing, strength, high energy, and electroforming; a controlled-flow gas-dynamics system; a system to study detonation solid-propellant rocket engines and gas generators; a high-pressure system for rocket nozzle testing; and others.

To further develop, coordinate, and improve the organization of space-engineering R&D in Ukraine, the Presidium of the National Academy of Sciences of Ukraine and the National Space Agency of Ukraine issued a joint decree-order in 1993, under which the Institute of Technical Mechanics of the National Academy of Sciences of Ukraine was given the status of dual subordination and its lines of research were modified to address scientific and technical problems in the development and operation of current and prospective space hardware.
In 1995, the institute became the leading institute of the Ukrainian space sector. The institute provides scientific and technical support to the execution of the projects of the National Space Programs of Ukraine and coordinates R&D’s in space engineering under the supervision of the National Space Agency of Ukraine.
At present, fundamental and applied research are conducted at the institute. Those involve: the dynamics of mechanical and hydromechanical systems, of launch vehicle systems, and of rail and motor transport; the aero-thermo and gas dynamics of power plants and flying and space vehicles and their subsystems; the strength, reliability, and optimization of mechanical systems, launch vehicles, and spacecraft; the mechanics of interaction of a solid with ionized media and electromagnetic radiation; the systems analysis of trends and prospects in space engineering.
The official website of the Institute of Technical Mechanics of the National Academy of Sciences of Ukraine and the State Space Agency of Ukraine
The official website of the State Space Agency of Ukraine (SSAU)
The official website of the National Academy of Sciences of Ukraine
The catalogue of leading enterprises of Ukraine | https://en.wikipedia.org/wiki/Institute_of_Technical_Mechanics |
The Institute of Technology Assessment ( ITA ; German: Institut für Technikfolgen-Abschätzung) is a research unit of the Austrian Academy of Sciences in Vienna .
The ITA is the only institution in Austria that is entirely devoted to technology assessment (TA). Together with its partner, the Austrian Institute of Technology , it advises the Austrian Parliament . For many years the institute has organised the internationally renowned annual TA conference in Vienna. [ 1 ] Moreover, it serves as a national and international networking node of the TA community. ITA is a full member of the European Parliamentary Technology Assessment (EPTA) network, [ 2 ] a founding member of the German-speaking Network NTA (Netzwerk Technikfolgenabschätzung) [ 3 ] and of the globalTA network, [ 4 ] and, since 2008, a member of the European Technology Assessment Group (ETAG).
Founded in 1985 as a working group within the former "Institute for Socio-economic Development Research and Technology Assessment" (Institut für Sozio-ökonomische Entwicklungsforschung und Technikbewertung, ISET), it became independent at the end of 1987 as the "Technology Assessment Unit" (TAU; German: "Forschungsstelle für Technikbewertung", FTB), and in 1994 an institute (ITA) of the Austrian Academy of Sciences . Its founding director was the physicist prof. Ernest Braun (previously director of the Technology Policy Unit at Aston University in Great Britain). The second director was the economist prof. Gunther Tichy (full professor at the University of Graz and senior researcher at the Austrian Institute of Economic Research [ 5 ] in Vienna). Since 2006, Michael Nentwich , a law and STS ( science and technology studies ) scholar, has directed the institute. Currently, the institute has approximately twenty-five staff members, among them twenty researchers from the natural and engineering sciences, humanities and social sciences.
The ITA focuses on the following themes: [ 6 ]
ITA recently added a "topics" section describing the main topics it deals with in plain language. [ 7 ]
Some of ITA's publications are published in special series on the publication server [ 8 ] of the Austrian Academy of Sciences with the institute serving as a publisher: | https://en.wikipedia.org/wiki/Institute_of_Technology_Assessment |
The Institute of Theoretical Physics ("Institut de physique théorique") ( IPhT ) is a research institute of the Direction of Fundamental Research (DRF) of the French Alternative Energies and Atomic Energy Commission (CEA). The Institute is also a joint research unit of the Institute of Physics (INP), a subsidiary of the French National Center for Scientific Research (CNRS). It is associated to the Paris-Saclay University . IPhT is situated on the Saclay Plateau South of Paris.
The IPhT was created in 1963 [ 1 ] as the "Service de Physique Théorique" (SPhT), succeeding the "Service de Physique Mathématique" [ 2 ] (SPM) of CEA. It became an Institute (and took the name IPhT) in 2008. It was initially devoted to nuclear physics and superconductivity. Particle physics quickly became an important theme. After its move in 1968 from the main CEA-Saclay site to the present site of Orme des Merisiers, quantum field theory became a major research topic, together with statistical physics . Subsequently, new topics such as conformal theories and matrix models, cosmology and string theory , condensed matter physics and out-of-equilibrium statistical physics, and quantum information found their place there. IPhT is usually considered one of the top theoretical physics research institutes in Europe. [ 3 ]
Research at IPhT covers most areas of theoretical physics:
IPhT organizes each spring the "Itzykson Conference", an international meeting centered on a theme that changes every year. Its name is a tribute to Claude Itzykson , a former IPhT researcher.
IPhT is not part of a teaching department, but graduate and postgraduate courses in theoretical physics are organized at IPhT. [ 5 ] They are aimed at graduate students and researchers in the Paris area. The lecturers are researchers from IPhT or other Paris-area labs, and senior visitors of IPhT. Most courses are part of the offering of the Ecole Doctorale Physique en Ile de France (EDPIF).
IPhT hosts numerous master and graduate students, as well as postdoctoral researchers.
Talks and conferences of IPhT are usually available by live streaming and are available for replay on the IPhT YouTube channel. [ 6 ]
Outreach talks and presentations for the general public are also available there. [ 6 ]
Many scientific books have been published by researchers from IPhT, aimed at students and researchers as well as at the general public.
Some researchers who held permanent positions at SPM/SPhT/IPhT:
Claude Bloch , Édouard Brézin , Gilles Cohen-Tannoudji, Cirano de Dominicis, Bernard Derrida , Michel Gaudin , Claude Itzykson , Stanislas Leibler , Madan Lal Mehta , Albert Messiah , Stéphane Nonnenmacher, Yves Pomeau , Volker Schomerus, Raymond Stora , Lenka Zdeborová , Jean Zinn-Justin , Jean-Bernard Zuber
Some researchers who are presently (2023) members of IPhT:
Roger Balian , Jean-Paul Blaizot, François David, Philippe Di Francesco , David Kosower, Vincent Pasquier, Mannque Rho , Hubert Saleur, Pierre Vanhove, André Voros
The IPhT is located on the Plateau de Saclay, about 20 km southwest of Paris, on the Orme des Merisiers site, which is an annex of the main CEA-Saclay center. | https://en.wikipedia.org/wiki/Institute_of_Theoretical_Physics,_Saclay |
The Institute of Transport Economics (Transportøkonomisk institutt –TØI) is a national, Norwegian institution for multidisciplinary transport research. Its mission is to develop and disseminate transportation knowledge of scientific quality and practical application. The Institute is an independent, non-profit research foundation. It holds no interests in any commercial, manufacturing or supplying organisation.
TØI has a multidisciplinary research environment with approximately 110 employees, of which about 80 are researchers.
Its sphere of activity includes most of the current issues in road, rail, sea and air transport, as well as urban mobility, environmental sustainability and road safety. In recent years the Institute has been engaged in more than 70 research projects under the EU's Research Framework Programmes.
| https://en.wikipedia.org/wiki/Institute_of_Transport_Economics |
The Institute of Transportation Engineers ( ITE ) is an international educational and scientific association of transportation professionals who are responsible for meeting mobility and safety needs. ITE facilitates the application of technology and scientific principles to research, planning, functional design, implementation, operation, policy development, and management for any mode of ground transportation.
The organization was formed in October 1930 amid growing public demand for experts to alleviate the traffic congestion and frequent crashes that came with the rapid development of automotive transportation. [ 3 ] Various national and regional conferences called for discussions of traffic problems. These discussions led a group of transportation engineers to begin creating the first professional traffic society. A meeting took place in Pittsburgh on October 2, 1930, where a tentative draft of the organization's constitution and by-laws was drawn up. The constitution and by-laws were later adopted at a meeting in New York on January 20, 1931. The first chapter of the Institute of Traffic Engineers [ 1 ] was established, consisting of 30 men with Ernest P. Goodrich as its first president. [ 4 ]
The organization consists of 10 districts, 62 sections, and 30 chapters from various parts of the world. [ 5 ]
ITE founded the Transportation Professional Certification Board Inc. ( TPCB ) in 1996 as an autonomous certification body. [ 6 ] TPCB facilitates multiple testing and certification pathways for transportation professionals.
ITE is also a standards development organization designated by the United States Department of Transportation (USDOT). One of the current standardization efforts is the advanced transportation controller . ITE is also known for publishing articles about trip generation, parking generation, parking demand, and various transportation-related material through ITE Journal , a monthly publication. [ 7 ]
Urbanists such as Jeff Speck have criticized ITE standards for encouraging towns to build more, wider streets making pedestrians less safe and cities less walkable. [ 8 ] Donald Shoup in his book The High Cost of Free Parking argues that the ITE Trip Generation Manual estimates give towns the false confidence to regulate minimum parking requirements which reinforce sprawl. [ 9 ] | https://en.wikipedia.org/wiki/Institute_of_Transportation_Engineers |
The notion of institution was created by Joseph Goguen and Rod Burstall in the late 1970s, in order to deal with the "population explosion among the logical systems used in computer science ". The notion attempts to "formalize the informal" concept of logical system. [ 1 ]
The use of institutions makes it possible to develop concepts of specification languages (like structuring of specifications, parameterization, implementation, refinement, and development), proof calculi , and even tools in a way completely independent of the underlying logical system. There are also morphisms that make it possible to relate and translate logical systems. Important applications of this are re-use of logical structure (also called borrowing), and heterogeneous specification and combination of logics.
The spread of institutional model theory has generalized various notions and results of model theory , and institutions themselves have impacted the progress of universal logic . [ 2 ] [ 3 ]
The theory of institutions does not assume anything about the nature of the logical system. That is, models and sentences may be arbitrary objects; the only assumption is that there is a satisfaction relation between models and sentences, telling whether a sentence holds in a model or not. Satisfaction is inspired by Tarski's truth definition , but can in fact be any binary relation.
A crucial feature of institutions is that models, sentences, and their satisfaction, are always considered to live in some vocabulary or context (called signature ) that defines the (non-logic) symbols that may be used in sentences and that need to be interpreted in models. Moreover, signature morphisms make it possible to extend signatures, change notation, and so on. Nothing is assumed about signatures and signature morphisms except that signature morphisms can be composed; this amounts to having a category of signatures and morphisms. Finally, it is assumed that signature morphisms lead to translations of sentences and models in a way that satisfaction is preserved. While sentences are translated along with signature morphisms (think of symbols being replaced along the morphism), models are translated (or better: reduced) against signature morphisms. For example, in the case of a signature extension, a model of the (larger) target signature may be reduced to a model of the (smaller) source signature by just forgetting some components of the model.
Let $\mathbf{Cat}^{\mathrm{op}}$ denote the opposite of the category of small categories. An institution formally consists of a category $\mathbf{Sign}$ of signatures, a functor $\mathit{Sen}\colon \mathbf{Sign} \to \mathbf{Set}$ assigning to each signature $\Sigma$ its set of sentences, a functor $\mathbf{Mod}\colon \mathbf{Sign} \to \mathbf{Cat}^{\mathrm{op}}$ assigning to each signature $\Sigma$ its category of models, and, for each signature $\Sigma$, a satisfaction relation $\models_{\Sigma} \subseteq |\mathbf{Mod}(\Sigma)| \times \mathit{Sen}(\Sigma)$,
such that for each signature morphism $\sigma\colon \Sigma \to \Sigma'$ in $\mathbf{Sign}$, the following satisfaction condition holds:

$$M' \models_{\Sigma'} \sigma(\varphi) \quad\text{if and only if}\quad M'|_{\sigma} \models_{\Sigma} \varphi$$

for each $M' \in \mathbf{Mod}(\Sigma')$ and $\varphi \in \mathit{Sen}(\Sigma)$.
The satisfaction condition expresses that truth is invariant under change of notation (and also under enlargement or quotienting of context).
Strictly speaking, the model functor ends in the "category" of all large categories. | https://en.wikipedia.org/wiki/Institution_(computer_science) |
The Institution of Chemical Engineers ( IChemE ) is a global professional engineering institution with 30,000 members in 114 countries. [ 2 ] It was founded in 1922 and awarded a Royal Charter in 1957.
The Institution has offices in Rugby , Melbourne , Wellington, New Zealand and Kuala Lumpur . [ 5 ]
In 1881, George E. Davis proposed the formation of a Society of Chemical Engineers, but instead the Society of Chemical Industry (SCI) was formed. [ 6 ] [ 7 ]
The First World War required a huge increase in chemical production to meet the needs of the munitions industry and its supply industries, including a twenty-fold increase in explosives. [ 8 ] This brought a number of chemical engineers into high positions within the Ministry of Munitions, notably K. B. Quinan , [ 9 ] [ 10 ] Frederic Nathan [ 9 ] and Arthur Duckham . [ 11 ]
The increased public perception of chemical engineers renewed the interest in a society, and in 1918 John Hinchley , who was a Council Member of the SCI, petitioned it to form a Chemical Engineers Group (CEG), which was done, with him as chairman and 510 members. [ 10 ] In 1920 this group voted to form a separate Institution of Chemical Engineers, which was achieved in 1922 with Hinchley as the Secretary, a role he held until his death. [ 12 ] The inaugural meeting was held on 2 May 1922, at the Hotel Cecil, London . [ 13 ]
Despite opposition from the Institute of Chemistry and the Institution of Civil Engineers , [ 14 ] [ 15 ] it was formally incorporated with the Board of Trade on 21 December 1922 as a company not for profit and limited by guarantee. [ 16 ] The first Corporate meeting was held 14 March 1923 and the first Annual General Meeting on 8 June 1923: Arthur Duckham was confirmed as President, Hinchley as Secretary and Quinan as Vice-President. [ 15 ] [ 16 ] At this time it had about 200 members. [ 16 ] Nathan was the second President in 1925. [ 17 ]
The American Institute of Chemical Engineers , which had been founded in 1908, served as a useful model. While suggestions of amalgamation were made and there was friendly but limited contact, the two organisations developed independently. [ 18 ]
In 1926 an official Seal of the Institution was produced by Edith Mary Hinchley , wife of John Hinchley. [ 19 ] [ 20 ]
The same year the Institution set the first examinations for Associate (i.e. professionally qualified) membership, bringing it into line with the Civil and Mechanical Institutions. [ 21 ] In addition to four set examinations of three hours each, there was a 'Home Paper' requiring the candidate to gather information and data and design a chemical plant, accompanied by drawings and a written design proposal within a time limit of a month. [ 22 ]
In 1938 the membership passed 1000. [ 23 ]
In 1939 the first courses were recognised as granting exemption from the examinations for Associate Membership, being Manchester College of Technology and of the South Wales and Monmouthshire School of Mines . [ 23 ] Others followed in subsequent years.
In 1942 Mrs Hilda Derrick (née Stroud) was the first female member, in the category Student, taking a correspondence course in chemical engineering during the war. She was active in promoting the Institution and profession to women. [ 24 ]
In 1955 Canterbury University College , New Zealand, and University of Cape Town , South Africa, were the first overseas institutions to have their qualifications recognised. [ 25 ]
On 8 April 1957 IChemE was granted a Royal Charter, changing it from a limited company to a body incorporated by Royal Charter, a professional institution like the Civil and Mechanical ones, [ 26 ] [ 27 ] with HRH Prince Philip, Duke of Edinburgh as patron, [ 28 ] a role he continued for over 63 years. [ 29 ]
In 1971, the membership grades were changed: Associate became Member and Member became Fellow. [ 30 ]
In 1976 the Institution moved its Headquarters from London to Rugby . [ 30 ]
IChemE is licensed by the Engineering Council UK to assess candidates for inclusion on ECUK 's Register of professional Engineers, giving the status of Chartered Engineer , Incorporated Engineer and Engineering Technician . It is licensed by the Science Council to grant the status of Chartered Scientist and Registered Science Technician . It is licensed by the Society for the Environment to grant the status of Chartered Environmentalist . It is a member of the European Federation of Chemical Engineering . [ 31 ] It accredits chemical engineering degree courses in 25 countries worldwide.
In 2023, IChemE entered into a 'hydrogen alliance' with the American Institute of Chemical Engineers (AIChE). The collaboration aims to support industry's adoption of hydrogen as an energy carrier in the drive to net zero. [ 32 ]
IChemE's vision is to "engineer a sustainable world" and its mission is to "put chemical and process engineering at the heart of a sustainable future, to benefit members, society, and the environment." These aims will be achieved by working towards two strategic goals: "Supporting a vibrant and thriving profession" and "serving society by collaborating with others", which are underpinned by five strategic enablers. [ 33 ]
IChemE has two main types of membership, qualified and non-qualified, with the technician member grade being available in both categories. [ 34 ]
Qualified membership grades.
Fellow – A chemical engineering professional in a very senior position in industry and/or academia. It entitles the holder to the post-nominal FIChemE and is a chartered grade encompassing all the privileges of the Chartered Member grade.
Chartered Member – An internationally recognised level of professional and academic competence requiring at least 4 years of field experience and a bachelor's degree with honours. It entitles the holder to the post-nominal MIChemE and to registration as one or a combination of Chartered Engineer (CEng), Chartered Scientist (CSci) and Chartered Environmentalist (CEnv). It also entitles the individual to register as a European Engineer with the pre-nominal Eur Ing.
Associate Member – This grade is for young professionals qualified in chemical and process engineering to bachelor's-with-honours level or higher. Typically it is held by those working towards Chartered Member level, or by graduates working in other fields. It entitles the holder to the post-nominal AMIChemE and can also lead to the grade of Incorporated Engineer (IEng) for those whose field experience falls short of the level required for the Chartered Member grade.
Technician Member – Uses practical understanding to solve engineering problems and may hold a qualification, an apprenticeship or years of experience. This grade can lead to the Eng Tech TIChemE post-nominal and now, in conjunction with the Nuclear Institute, the post-nominal Eng Tech TIChemE TNucI.
Non-qualified membership grades.
Associate Fellow – Senior professionals trained in other fields of a level comparable to Fellow in other professional bodies.
Affiliate – For people working in, with or with a general interest in the sector.
Student – For undergraduate chemical & process engineering students.
The Institution has been awarding Medals for different areas of chemical engineering work since the first Moulton medals were issued in 1929. The medal was named after Lord Moulton who helped develop chemical engineering during World War I when he took charge of explosive supplies. [ 35 ] Today the institution gives out eleven medals related to research and teaching, [ 36 ] six medals in special interest groups, [ 37 ] four medals relating to publications, [ 38 ] two medals for services to the profession [ 39 ] and two medals for contribution to the Institution. [ 40 ]
The IChemE Global Awards take place in November in the UK. The awards are highly regarded throughout the process industries for recognising and rewarding chemical engineering excellence and innovation. The first awards took place at the National Motorcycle Museum in Birmingham on 23 March 1994. [ 41 ]
There are 16 categories in total that applicants are invited to enter, including Business Start-Up, Industry Project, Process Safety, and Sustainability, offering a broad scope for entries. [ 41 ]
The organisation also holds awards ceremonies in other locations across the globe. 2024 will see the return of the IChemE Malaysia Awards alongside the first-ever IChemE Australasia Awards. [ 42 ]
The Ashok Kumar Fellowship is an opportunity for a graduate to spend three months working at the UK Parliamentary Office for Science and Technology (POST). The fellowship was jointly funded by IChemE and the Northeast of England Process Industry Cluster (NEPIC). However, NEPIC was unable to contribute in 2018 and the Fellowship was not offered in 2019. [ 43 ] As of 2021 it is jointly funded by IChemE and the Materials Processing Institute (reflecting Kumar's employment with British Steel ). [ 44 ]
The Fellowship was set up in memory of Dr Ashok Kumar, the only serving chemical engineer in the Parliament of the United Kingdom at the time of his sudden death in 2010. Kumar was an IChemE Fellow who had been the Labour MP for Middlesbrough South and Cleveland East . [ 43 ]
In 2023, the Institution launched DiscoverChemEng, [ 45 ] an initiative focused on the development of a package of education outreach activities to help inspire future process and chemical engineers and raise awareness of the profession as a career option for young people. A range of resources have been created for IChemE volunteers and STEM ambassadors to use within schools and at careers fairs, alongside an Educator Network that informs volunteers of upcoming events in their local area.
To celebrate its centenary, in 2022 the Institution produced a website, ChemEng Evolution, with short articles on milestones in the history of chemical engineering and of IChemE, hosting videos and webinars throughout the year.
The coat of arms is a shield with two figures. [ 46 ] On the left is a helmeted woman, Pallas Athene , the goddess of wisdom, and on the right a bearded man with a large hammer, Hephaestus , the god of technology and of fire. The shield itself shows a salamander as the symbol of chemistry , and a corn grinding mill as a symbol of continuous processes. Between these is a diagonal stripe in red and blue in steps, to indicate the cascade nature of many chemical engineering processes. The shield is surmounted by a helmet on which is a dolphin , which in heraldry is associated with intellectual activity and also represents the importance of fluid mechanics . Just below the dolphin are two integral signs to illustrate the necessity of mathematics and in particular calculus .
The Latin motto is "Findendo Fingere Disco" or "I learn to make by separating". | https://en.wikipedia.org/wiki/Institution_of_Chemical_Engineers |
The Institution of Diesel and Gas Turbine Engineers is the professional association for engineers in the diesel and gas turbine industry in the UK and internationally.
Diesel engines and gas turbines are broadly related because they use a similar thermodynamic cycle, and both are often used (and interchangeable) for power generation for heating and electricity in large installations.
It was established in 1913 as the Diesel Engine Users' Association. It changed to its current name in 1984. [ citation needed ]
It is based at Bedford Heights, in the north-west of Bedford towards Brickhill . [1]
Types of membership are Student, Associate, Member, Fellow, Retired, Retired Associate, Company and Subscriber.
It represents engineers in the diesel and gas turbine industry in the UK and internationally, enabling current knowledge to be widely known. It organises conferences and industry-based training.
It registers Chartered Engineers , Incorporated Engineers and Engineering Technicians in the industry. | https://en.wikipedia.org/wiki/Institution_of_Diesel_and_Gas_Turbine_Engineers |
Institution of Diploma Engineers, Bangladesh, widely known as IDEB, is a professional organization for Diploma Engineers and Diploma Architects in Bangladesh, established on 8 November 1970. [ 1 ] [ 2 ] The aim of the organisation is to unite diploma holders working at field level in different engineering and technological services in various capacities. [ 3 ] [ 4 ]
IDEB is a multidisciplinary organization dedicated to developing the knowledge, understanding and practice of diploma holders in different engineering branches. [ 5 ] IDEB also has an 11-member advisory council. [ 6 ]
People holding an engineering diploma are eligible for membership of IDEB: that is, anyone who, after Matriculation or the Secondary School Certificate (SSC), has completed 3 or 4 years of schooling in engineering and technology and been awarded a Diploma in Engineering by a university or education board of the UK, US, India, Pakistan or Bangladesh, or by an institution recognized by the government of Bangladesh. IDEB offers six categories of membership: student member, general member, fellow member, life member, donor member and honorary member. [ 7 ]
Any individual who has attained a Diploma in Engineering from a Bangladesh government-recognized educational institute and obtained a certificate from the Bangladesh Technical Education Board (BTEB) is eligible for membership of the institution. Candidates must apply to the institution for membership and are then awarded a membership certificate.
All students of Diploma-in-Engineering courses at polytechnic and technical institutes, and their equivalents, are eligible to become student members of the Institution, subject to the conditions prescribed in the constitution or as may be decided by the Central Executive Committee from time to time.
Any member is entitled to become a life member by subscribing a one-time amount of Tk 10,000 (ten thousand) to the IDEB Fund.
Any member who has completed 25 years of membership, has been associated with the activities of the Institution and has expertise in engineering work is eligible to become a Fellow Member. When such a member applies, the District Executive Committee prepares detailed particulars and submits them to the Central Executive Committee (CEC). The CEC examines the information, nominates the candidate as a Fellow Member and eventually awards the Fellow Membership certificate.
(a) Any member engineer donating a fixed amount to the "Build Construction Fund" shall be entitled to be a "Donor Member".
(b) Any person sympathetic to the activities of IDEB who donates Tk 1 (one) lac to the IDEB fund may, subject to the approval of the Central Executive Committee, become a donor member of IDEB.
A person who does not fulfil the conditions of clause 01 of Article 03 (03.01) of the constitution, but who sympathizes with the objectives and principles of the Institution, has outstanding achievements in the field of engineering, or assists in the healthy growth and development of the Institution, is eligible to be nominated as an Honorary Member with the consent of the Institution, provided that such members number no more than 1% (one percent) of the total enrolled membership.
IDEB formed a research and study cell to contribute to national development. [ 8 ] This cell conducts studies on various issues and problems and makes recommendations to help planners and policy makers adopt appropriate policies and plans in technological fields such as irrigation, flood control, roads and highways, electricity, energy, waterlogging in cities and towns, education, health, sanitation and housing. The institution also presses the demands of its members and seeks corrections to national policies, projects and programmes it considers mistaken, by submitting memoranda, holding press conferences and meeting with the authorities.
IDEB has an ICT & Innovation cell to empower its members in ICT and encourage them in innovation activity. The cell has a young leadership with a record of success and two wings: the IDEB Women's ICT Wing and the Innovation Coordination Committee. [ 9 ]
The IDEB maintains a library, the "IDEB Shadhinota Pathagar", on the 2nd floor of IDEB Bhaban, 160/A Kakrail VIP Road, Dhaka-1000, Bangladesh, holding books on engineering, technology, social science, general science, philosophy, history and other disciplines. A fixed number of books is acquired every year to enrich the collection.
IDEB member engineers, researchers and students from different parts of society visit the library regularly to read the books held there. The institution seeks the wholehearted cooperation of government and non-government officials, intellectuals, writers and donor agencies to expand the library.
The IDEB publishes a socio-technological monthly journal named "KARIGAR", meaning "THE ARTISAN". A board of editors consisting of reputed and learned Diploma Engineers is entrusted with the publication of the journal and selects quality articles for it. Special supplements are also published from time to time as necessary, and a good number of intellectuals contribute articles regularly.
The journal aims to create public awareness of science and technology, and this effort has been welcomed by the intellectual community. The editorial panel continues to work on improving the quality of the journal to meet the demands of the present era, encouraged by the support and cooperation of its readers and admirers over the last half decade.
IDEB has organized several National and International Conferences in Engineering, Technology & Education. [ 10 ]
The International Conference on TVET for Sustainable Development was jointly organized by IDEB and CPSC, Manila. It was the first international conference on Technical and Vocational Education & Training (TVET) in Bangladesh.
The International Conference on "Skills for the Future World of Work and TVET for Global Competitiveness" has taken place in July 2017 at the Institution of Diploma Engineers Bangladesh (IDEB), Dhaka, Bangladesh . The conference was jointly organizing by IDEB and Colombo Plan Staff College (CPSC), Manila, Philippines . in Association with Ministry of Education (MoE) Government of Bangladesh, National Skills Development Council (NSDC) Where International Labour Organization (ILO), Directorate of Technical Education (DTE) and Bangladesh Technical Education Board , a2i , ESD Australia, IOM, PKSF, STEP, BMET, FBCCI, BEF was the Co-partner of the Conference.
This international conference addressed the future demand for skilled manpower and the acceleration process needed to ensure demand-based, quality Technical and Vocational Education & Training (TVET) .
People's Engineering Day is celebrated throughout the country on 8 November, [ 11 ] the founding anniversary of the Institution of Diploma Engineers Bangladesh (IDEB), at the headquarters and at all centres in a befitting manner, to bring science and technology closer to the people. On the eve of the day, the planned programmes are announced to the nation through a news conference; special supplements are published in the national dailies, and participating leaders appear on Bangladesh Television and other electronic media. Messages are given on the occasion by the Honourable President, the Prime Minister, the Speaker of the National Parliament, the Leader of the Opposition, members of the Cabinet and the chiefs of different political parties. Colourful rallies are brought out by the participating member engineers and polytechnic teachers and students. On the occasion of People's Engineering Day, IDEB arranges a week-long programme [ 12 ] of discussion meetings, seminars and technical lecture sessions on national issues; a blood donation programme for distressed humanity is a regular feature. | https://en.wikipedia.org/wiki/Institution_of_Diploma_Engineers,_Bangladesh |
The Institution of Electrical Engineers ( IEE ) was a British professional organisation of electronics , electrical , manufacturing , and information technology professionals, especially electrical engineers . It began in 1871 as the Society of Telegraph Engineers . In 2006, it merged with the Institution of Incorporated Engineers and the new organisation is Institution of Engineering and Technology (IET).
Notable past presidents have included Lord Kelvin (1889), Sir Joseph Swan (1898) and Sebastian de Ferranti (1910–11). Notable chairmen include John M. M. Munro (1910–11).
The IEE was founded in 1871 as the Society of Telegraph Engineers, changed its name in 1880 to the Society of Telegraph Engineers and Electricians and changed to the Institution of Electrical Engineers in 1888. It was Incorporated by a Royal Charter in 1921. [ 1 ]
In 1988 the Institution of Electrical Engineers (IEE) merged with the Institution of Electronic and Radio Engineers (IERE), originally the British Institution of Radio Engineers (Brit IRE) founded in 1925.
By the mid-2000s, the IEE was the largest professional engineering society in Europe , with a worldwide membership of around 120,000.
Discussions about a merger with the Institution of Incorporated Engineers (IIE) under a new name started in 2004, and following membership voting, the IEE merged with the IIE on 31 March 2006 to form the Institution of Engineering and Technology (IET). [ 2 ] [ 3 ]
The IEE was the publisher of the British Standard for Electrical wiring in the United Kingdom , BS 7671 . This is now published by the IET.
| https://en.wikipedia.org/wiki/Institution_of_Electrical_Engineers |
The Institution of Electronic and Radio Engineers ( IERE ) was a professional organization for radio engineers . [ 1 ] It was originally established in 1925 as the Institute of Wireless Technology . [ 2 ] [ 3 ] It renamed itself British Institution of Radio Engineers in 1941, and eventually Institution of Electronic and Radio Engineers . [ 2 ] In 1988, it merged with the Institution of Electrical Engineers (IEE), and in another merger in 2006 became the Institution of Engineering and Technology (IET).
It had an Indian division based in Bangalore . [ 2 ]
The main aim of the Institution was the advancement of the practice of radio engineering , through conferences, meetings, and training. [ 1 ] The Institution published the Journal of the British Institution of Radio Engineers between 1939 and 1962. [ 4 ] In 1963–64, it published the Proceedings of the British Institution of Radio Engineers . [ 5 ]
The Institution awarded the Clerk Maxwell Prize. As the British Institution of Radio Engineers, it established the Charles Babbage Premium in 1959 as an annual award "for an outstanding paper on the design or use of electronic computers". [ 6 ] [ 7 ]
| https://en.wikipedia.org/wiki/Institution_of_Electronic_and_Radio_Engineers |
The Institution of Electronics and Telecommunication Engineers ( IETE ) is India 's leading recognized professional society devoted to the advancement of science, technology, electronics, telecommunication and information technology. Founded in 1953, it serves more than 120,000 members through 60+ centres/sub-centres, primarily located in India (3 abroad). The Institution provides leadership in scientific and technical areas of direct importance to national development and the economy. The Association of Indian Universities (AIU) and the Union Public Service Commission (UPSC) have recognized AMIETE and ALCCS (Advanced Level Course in Computer Science). The Government of India has recognized IETE as a Scientific and Industrial Research Organization (SIRO) and has also notified it as an educational institution of national eminence. The IETE focuses on the advancement of electronics and telecommunication technology. The IETE conducts and sponsors technical meetings, conferences, symposia, and exhibitions all over India, publishes technical [ 1 ] [ 2 ] and research journals [ 3 ] and provides continuing education as well as career advancement opportunities to its members.
IETE today is one of the prominent technical institutions providing education to working professionals in India and is fast expanding across the country through its 60+ centres. Since 1953, IETE has expanded its educational activities in electronics, telecommunications, computer science and information technology. IETE conducts programmes by examination leading to DipIETE (equivalent to a Diploma in Engineering), AMIETE (equivalent to B.Tech) and ALCCS (equivalent to M.Tech). IETE started dual degree, dual diploma and integrated programmes in December 2011. DipIETE is a three-year, six-semester course, whereas AMIETE is a four-year, eight-semester course. IETE conducts examinations for these courses twice a year, in June and in December. Courses are divided into two sections, Section A and Section B. [ 4 ] Courses of IETE are recognized by IISc , [ 5 ] IIT , NIT , IIIT , IIM , CMI , [ 6 ] TIFR , ISI [ 7 ] and various central and state universities. [ 8 ] IETE graduates are eligible to take national-level tests such as GATE , CAT and JEST for master's as well as PhD admissions in leading technical institutions across the country. AMIETE by examination is recognized by the AICTE and MHRD , Government of India, as equivalent to B.E./B.Tech. [ 9 ] IETE is a member of ECI . [ 10 ] | https://en.wikipedia.org/wiki/Institution_of_Electronics_and_Telecommunication_Engineers |
The Institution of Engineering and Technology ( IET ) is a multidisciplinary professional engineering institution. The IET was formed in 2006 from two separate institutions: the Institution of Electrical Engineers (IEE), dating back to 1871, [ 2 ] and the Institute of Incorporated Engineers (IIE), dating back to 1884. Its worldwide membership is currently in excess of 156,000 in 148 countries. [ 1 ] The IET's main offices are in Savoy Place in London, England , and at Michael Faraday House in Stevenage, England .
In the United Kingdom , the IET has the authority to establish professional registration for the titles of Chartered Engineer, Incorporated Engineer, Engineering Technician, and ICT Technician, as a licensed member institution of the Engineering Council . [ 3 ]
The IET is registered as a charity in England , Wales and Scotland . [ 1 ]
Discussions started in 2004 between the IEE and the IIE about merging to form a new institution. In September 2005, both institutions held votes on the merger and the members voted in favour (73.5% IEE, 95.7% IIE). This merger also needed government approval, so a petition was then made to the Privy Council of the United Kingdom for a Supplemental Charter, to allow the creation of the new institution. This was approved by the Privy Council on 14 December 2005, and the new institution emerged on 31 March 2006.
The IET is the result of mergers and absorptions between over forty institutions since 1871. [ 4 ]
The Society of Telegraph Engineers (STE) was formed on 17 May 1871, and it published the Journal of the Society of Telegraph Engineers from 1872 through 1880. Carl Wilhelm Siemens was the Society's first President, in 1872. On 22 December 1880, the STE was renamed the Society of Telegraph Engineers and of Electricians and, as part of this change, renamed its journal the Journal of the Society of Telegraph Engineers and of Electricians (1881–1882) and later the Journal of the Society of Telegraph-Engineers and Electricians (1883–1888). Following a meeting of its Council on 10 November 1887, it was decided to adopt the name of the Institution of Electrical Engineers (IEE). As part of this change, its Journal was renamed the Journal of the Institution of Electrical Engineers in 1889, and it kept this title through 1963. In 1921, the Institution was incorporated by royal charter and, following mergers with the Institution of Electronic and Radio Engineers (IERE) in 1988 and the Institution of Manufacturing Engineers (IMfgE) in 1990, it had a worldwide membership of around 120,000. The IEE represented the engineering profession, operated Professional Networks (worldwide groups of engineers sharing common technical and professional interests), had an educational role including the accreditation of degree courses, and operated schemes to provide awards, scholarships, grants and prizes. It was well known for publication of the IEE Wiring Regulations, which continue to be written by the IET and published by the British Standards Institution as BS 7671 .
The modern Institution of Incorporated Engineers (IIE) traced its heritage to The Vulcanic Society that was founded in 1884 and became the Junior Institution of Engineers in 1902, which became the Institution of General Technician Engineers in 1970. [ 5 ] It changed its name in 1976 to the Institution of Mechanical and General Technician Engineers. At this point it merged with the Institution of Technician Engineers in Mechanical Engineering and formed the Institution of Mechanical Incorporated Engineers (IMechIE) in 1988. The Institution of Engineers in Charge, which was founded in 1895, was merged into the Institution of Mechanical Incorporated Engineers in 1990.
The Institution of Electrical and Electronic Technician Engineers, the Society of Electronic and Radio Technicians, and the Institute of Practitioners in Radio and Electronics merged in 1990 to form the Institution of Electronics and Electrical Incorporated Engineers (IEEIE).
The IIE was formed in April 1998 by the merger of The Institution of Electronic and Electrical Incorporated Engineers (IEEIE), The Institution of Mechanical Incorporated Engineers (IMechIE), and The Institute of Engineers and Technicians (IET, not to be confused with the later-formed Institution of Engineering and Technology). In 1999 there was a further merger with The Institution of Incorporated Executive Engineers (IIExE). The IIE had a worldwide membership of approximately 40,000.
This forerunner institution was known in all but its last year as the Institution of Production Engineers (IProdE) and was initiated by H. E. Honer. He wrote to the technical periodical Engineering Production suggesting that the time was ripe to form an institution for the specialised interests of engineers engaged in manufacture and production. The resulting mass of correspondence led to a meeting at the Cannon Street Hotel on 26 February 1921, where it was decided to form the IProdE.
The term 'production engineering' came into use to describe the management of factory production techniques first developed by Henry Ford , which had expanded greatly during the First World War . The IProdE was incorporated in 1931 and was granted armorial bearings in 1937. From the outset it operated through decentralised branches called local sections wherever enough members existed. These were self-governing and elected their own officers. They held monthly meetings at which papers were read and discussed.
Outstanding papers were published in the IProdE's Journal. The work of six foremost production engineers took centre stage at national meetings: Viscount Nuffield, Sir Alfred Herbert, Colonel George Bray, Lord Sempill, E. H. Hancock, and J. N. Kirby. National and regional conferences were arranged dealing with specific industrial problems. Sister Councils were established in Australia, Canada, India, New Zealand, and South Africa.
The Institution's education committee established a graduate examination which all junior entrants undertook from 1932 onwards. An examination for Associate Membership was introduced in 1951.
The Second World War accelerated developments in production engineering and by 1945 membership of the IProdE stood at 5,000. The 1950s and 1960s were perhaps the most fruitful period for the Institution. Major conferences such as 'The Automatic Factory' in 1955 ensured that the Institution held a place at the forefront of production technology. A Royal Charter was granted in 1964 and membership stood at over 17,000 by 1969.
In 1981 the IProdE instituted four medals starting from its Diamond Jubilee: the International Award, the Mensforth Gold Medal, the Nuffield Award and the Silver Medal. The Mensforth Gold Medal was named after Sir Eric Mensforth , founder and chairman of Westland Helicopters and a former IProdE President. It was awarded to British recipients who had made an outstanding contribution to the advancement of production engineering technology. Renamed the Mensforth Manufacturing Gold Medal, it is the IET's top manufacturing award.
Financial constraints, slowing membership growth and a blurring of distinctions between the various branches of engineering led the IProdE to merger proposals in the late 1980s. The Institution of Electrical Engineers (IEE) had interests very close to those of the IProdE; it was a much larger organisation, and the proposal was that the IProdE should be represented as a specialist division within the IEE. While these talks were reaching fruition in 1991, the IProdE changed its name to the Institution of Manufacturing Engineers. A merger with the IEE took place the same year, with the IMfgE becoming the IEE's new Manufacturing Division. [ 6 ]
The IET is governed by the President and Board of Trustees. [ 7 ] The IET Council, [ 8 ] on the other hand, serves as the advisory and consultative body, representing the views of the members at large and offering advice to the Board of Trustees. Since the founding of the IET, several prominent engineers [ 9 ] have served as its President.
The IET represents the engineering profession in matters of public concern and assists governments to make the public aware of engineering and technological issues. It provides advice on all areas of engineering, regularly advising Parliament and other agencies.
The IET Archives collect and retain material relating to the IET and its predecessor institutions as well as the history of engineering and technology. The collections cover innovation and developments in these areas from the fourteenth century to the present day, including the archive for the Women's Engineering Society (WES) .
The IET also grants Chartered Engineer , Incorporated Engineer , Engineering Technician , and ICT Technician professional designations on behalf of the Engineering Council UK . IEng is roughly equivalent to North American Professional Engineer designations, and CEng is set at a higher level. [ 10 ] Both designations are recognised over a far wider geographical area than their North American counterparts. The IET supports its members through a number of networks, including the Professional Networks, worldwide groups of engineers sharing common technical and professional interests. Through the IET website, these networks provide sector-specific news, stock a library of technical articles and give members the opportunity to exchange knowledge and ideas with peer groups through discussion forums. Particular areas of focus include education, IT, energy and the environment .
The IET accredits degree courses worldwide in subjects relevant to electrical , electronic , manufacturing and information engineering . In addition, it secures funding for professional development schemes for engineering graduates including awards scholarships , grants and prizes. [ citation needed ]
In August 2019 the Department for Digital, Culture, Media and Sport (DCMS) appointed the IET as the lead organisation in charge of designing and delivering the new UK Cyber Security Council , alongside 15 other cyber security professional organisations collectively known as the Cyber Security Alliance. The council, which officially launched in April 2021, will be "charged with the development of a framework that speaks across the different specialisms, setting out a comprehensive alignment of career pathways, including the certifications and qualifications required within certain levels". [ citation needed ]
The IET has several categories of membership, some with designatory postnominals .
The IET has a journals publishing programme, [ 16 ] totalling 24 titles as of March 2012 (following the addition of IET Biometrics and IET Networks ), such as IET Software . The journals contain both original and review-oriented papers relating to various disciplines in electrical, electronics, computing, control, biomedical and communications technologies.
Electronics Letters [ 17 ] is a peer-reviewed rapid-communication journal, which publishes short original research papers every two weeks. Its scope covers developments in all electronic and electrical engineering related fields. Subscribers to Electronics Letters also receive the associated Insight Letters . [ 18 ]
Micro & Nano Letters , [ 19 ] first published in 2006, specialises in the rapid online publication of short research papers concentrating on advances in miniature and ultraminiature structures and systems that have at least one dimension ranging from a few tens of micrometres to a few nanometres. It offers a rapid route for international dissemination of research findings generated by researchers from the micro and nano communities.
The IET Achievement Medals [ 20 ] are awarded to individuals who have made major and distinguished contributions in the various sectors of science, engineering and technology. The medals are named after famous engineers and persons, such as Michael Faraday , John Ambrose Fleming , J. J. Thomson , and Oliver Heaviside . The judging panel look for outstanding and sustained excellence in one or more activities. For example: research and development, innovation, design, manufacturing, technical management, and the promotion of engineering and technology.
The IET offers Diamond Jubilee undergraduate scholarships for first-year students studying an IET-accredited degree. Winners receive between £1,000 and £2,000 per year for up to four years to help with their studies; eligibility is based partly on exam results in the final year of school before university. The IET also offers postgraduate scholarships for IET members carrying out doctoral research, with awards of up to £10,000 to further research on engineering-related topics at universities. The IET Engineering Horizons Bursary, worth £1,000 per year, is offered to UK residents who have overcome personal challenges to pursue an engineering education, whether undergraduate students on IET-accredited degree courses in the UK or apprentices starting an IET Approved Apprenticeship scheme.
The IET refers to its region-specific branches as "Local Networks".
IET Australia is the Australian Local Network of the IET (Institution of Engineering and Technology). The Australian Local Network of the IET has representation in all the states and territories of Australia. These include the state branches, their associated Younger Members Sections, and university sections in Australia. The Younger Members Sections are divided into categories based on each state, e.g. IET YMS New South Wales (IET YMS NSW).
The IET Toronto Network covers IET activities in the Southern and Western areas of Ontario and has approximately 500 members. The first Canadian Branch of the IEE (now the IET) was inaugurated by John Thompson, FIEE, and Harry Copping, FIEE, in Toronto in the early 1950s.
The IET China office is in Beijing . It started in 2005 with the core purposes of international collaboration, engineering exchange, organisation of events and seminars, and the promotion and awarding of the title of Chartered Engineer. [ 22 ] [ 23 ]
IET Hong Kong is the Hong Kong Local Network (formerly Branch) of the IET (Institution of Engineering and Technology). The Hong Kong Local Network of the IET has representations in the Asian region and provides a critical link into mainland China. It includes six sections, namely the Electronics & Communications Section (ECS), the Informatics and Control Technologies Section (ICTS), the Management Section (MS), the Power and Energy Section (PES), the Manufacturing & Industrial Engineering Section (MIES) and the Railway Section (RS), as well as the Younger Members Section. It has over 5,000 members and activities are coordinated locally. It is one of the professional organisations for chartered engineers in Hong Kong. [ 24 ]
The IET Italy Local Network was established in 2007 by a group of active members led by Dr M Fiorini to represent the aims and services of the IET locally. Its vision of sharing and advancing knowledge throughout the global science, engineering and technology community to enhance people's lives is pursued by building up an open, flexible and global knowledge network supported by individuals, companies and institutions and facilitated by the IET and its members. [ 25 ]
An IET India Office was established in 2006. It has eight Local Networks: Bengaluru, Chennai, Delhi, Kanyakumari, Kolkata, Mumbai, Nashik and Pune.
An IET Local Network in Kenya was established on 16 November 2011. [ 26 ] Its awards are recognised by the Kenya National Assembly. [ 27 ] With the support of faculty at the newly established Technical University of Kenya (formerly the Kenya Polytechnic) and Jomo Kenyatta University of Agriculture & Technology, the institution considers registration of Technologists, Technicians and Craftspeople, and is particularly open to those excluded from Engineering Board of Kenya registration.
The IET Kuwait community was established in 2013 by Dr. Abdelrahman Abdelazim. The community is very active in the region, overseeing four student chapters at Kuwaiti universities. Its most notable event was the 2015 GCC robotics challenge, which involved collaboration with many networks in the region.
IET Malaysia Local Network has more than 1,900 members in Malaysia. In addition, the network has facilitated On Campuses in public and private universities. These are mentored by the Young Professional Section (YPS) of IET. As of December 2019, there are 19 active On Campuses. [ 28 ] | https://en.wikipedia.org/wiki/Institution_of_Engineering_and_Technology |
The Institution of Engineers in Scotland ( IES ) is a multi-disciplinary professional body and learned society , founded in Scotland, for professional engineers in all disciplines and for those associated with or taking an interest in their work. Its main activities are an annual series of evening talks on engineering, open to all, and a range of school events aimed at encouraging young people to consider engineering careers. Between 1870 and 2020 the institution was known as the Institution of Engineers and Shipbuilders in Scotland (IESIS) .
IES is registered as a Scottish Charity, No SC011583, and is the fourth-oldest still-active registered company in Scotland. [ 2 ] [ 3 ] [ 4 ]
Members, Fellows, Graduates and Companions are entitled to use the abbreviated distinctive letters after their name: MIES, FIES, GIES and CIES respectively.
The inaugural meeting of the Institution of Engineers in Scotland was held on 1 May 1857. Office bearers were appointed and the principal objective of the new institution was set down as "the encouragement and advancement of Engineering Science and Practice". It was to have a broad basis for membership, and engineers from the mining, foundry, railway, iron, shipbuilding and other industries were to be eligible. The prime movers behind the founding of the Institution were William John Macquorn Rankine , Regius Professor of Civil Engineering and Mechanics at the University of Glasgow , and Walter Montgomerie Neilson , one of the major figures in establishing Glasgow 's locomotive-building industry. Rankine was the first President of the Institution and Neilson succeeded him in 1859. The engineer James Howden , who died in 1913, was the last surviving founding member of the Institution. [ 5 ]
The Institution was an early promoter of consciousness of industrial effects on the environment. In those early years there was a pervading atmosphere of enquiry into the applications of steam power. In 1858 the Institution was responsible for a public meeting, held in the Glasgow City Chambers , to establish "An Association for Promoting Safety, Economy and Absence of Smoke in the raising and use of Steam".
The Scottish Shipbuilders Association had been formed in 1860 and amalgamated with the Institution of Engineers in Scotland on 25 October 1865. The name Institution of Engineers and Shipbuilders in Scotland was adopted in 1870. The first female President of the Institution, Karen Dinardo, took office on 4 October 2016, at the start of a two-year term. Her father, Carlo Dinardo, had been president in 1999–2001.
The Institution has had a number of headquarters. The building at 39 Elmbank Crescent, Glasgow was commissioned and built in 1906–08 and was designed by J.B. Wilson. In the foyer of this building, there is a memorial to the 36 engineers who died on RMS Titanic . The marble and bronze memorial was subscribed by members, designed by the sculptor William Kellock Brown , and unveiled on 15 April 1914. The Institution, with the permission of Scottish Opera , current occupiers of the building, organised a memorial service in the building on 14 April 2012. [ 6 ]
In 2020, the Institution reverted its name to the Institution of Engineers in Scotland, reflecting the breadth of engineering disciplines among its membership and practised throughout Scotland. [ 7 ]
In addition to an annual programme of evening talks on various engineering topics, the Institution endows two prestige lectures.
Both have attracted high-profile speakers. [ citation needed ]
IES has a significant collection of engineering papers and other materials in its archives. Since 2013, there has been a programme to digitise all Transactions of the Institution from its earliest days so that these may be made available as a reference resource. [ 10 ]
In 2011, IES launched a new initiative, The Scottish Engineering Hall of Fame , [ 11 ] to celebrate Scotland's tradition of engineering and shipbuilding. It provides role models for young people considering careers in engineering.
The first seven inductees were announced by President Gordon Masterton at the Institution's annual James Watt Dinner in September 2011. As of 2024, there have been 60 names added to the Hall of Fame, 14 of whom were living inductees (in alphabetical order): Douglas Anderson (retinal imaging), Thomas Graham Brown (ultrasound scanner), Craig Clark (satellite engineer), James Goodfellow (automated teller machine), Hugh Gill (bionic hand), Naeem Hussain (bridge engineer), Carol Marsh (electronics engineer), Gordon McConnell (aircraft engineer), Sir Jim McDonald (electrical power engineer and University leader), Sir Duncan Michael (structural engineer and business leader), Sir Donald Miller (electrical power engineer and business leader), David Milne (electronics pioneer and business leader), Ian Ritchie (computing engineer and business leader), Stephen Salter (wave power pioneer).
To date there have been six female inductees: Dorothée Pullinger , Anne Gillespie Shaw , Victoria Drummond , Mary Fergusson , Anne Neville and Carol Marsh.
The Hall of Fame panel encourages nominations from the public as well as members.
The following is a list of the presidents of the Institution since its inception. [ 12 ] | https://en.wikipedia.org/wiki/Institution_of_Engineers_in_Scotland
The Institution of Engineers of Ireland ( Irish : Cumann na nInnealtóirí ) or the IEI , is an engineering society primarily representing members based in Ireland. The institution is Ireland’s recognised organisation for accreditation of professional engineering qualifications under the Washington Accord , Sydney Accord , and Dublin Accord .
Membership of the institution is open to individuals based on academic and professional background and is separated into grades in accordance with criteria, including the Chartered Engineer and European Engineer titles.
The institution received its current legal name in 1969 by an Act of the Oireachtas . In October 2005 the institution adopted the operating name Engineers Ireland ; the legal name is, however, unchanged.
The history of the institution can be traced to 6 August 1835, when civil engineers met in Dublin; the result was the Civil Engineers Society of Ireland. In 1844 the society adopted the name the Institution of Civil Engineers of Ireland ( ICEI ). The institution received a royal charter on 15 October 1877, a significant milestone in obtaining international recognition and standing. In the early years of the Irish Free State, Cumann na nInnealtóirí (The Engineers Association) was set up independently, in 1928, by incorporation under the Companies Act, 1908 to "improve and advance the status and remuneration of qualified members of the engineering profession", [ 1 ] as it was felt that the ICEI's charter prevented its negotiation of employment conditions and salary.
In 1927 the ICEI elected their first woman member when Iris Cummins was admitted to the organisation. [ 2 ]
As time progressed it was realised that the institution and the association might better advance engineering in Ireland by amalgamating into a single organisation representing a broader set of engineering disciplines. Discussions commenced in 1965 [ 1 ] and resulted in The Institution of Civil Engineers of Ireland (Charter Amendment) Act, 1969, leading to the redesignation of the unified institution as The Institution of Engineers of Ireland – Cumann na nInnealtóirí . Since this Act the institution has represented all branches of engineering in Ireland.
In 1997 the institution set up the Irish Academy of Engineering, based at Bolton Street , Dublin Institute of Technology (now Technological University Dublin [ 3 ] ).
"The institution promotes the art and science of engineering..." , in particular:
The institution is divided into three sectors; Divisions, Regions, and Societies, which are further subdivided – their purpose is to promote engineering and share knowledge.
In accordance with EU requirements it is the designated authority for the engineering profession in Ireland. [ 4 ] The institution is a national member of the European Federation of National Engineering Associations ( FEANI ). The institution is also a signatory to a number of multilateral agreements, principally covering registered professional titles and accredited engineering programmes . [ 5 ]
The institution is also the signatory to a number of bilateral agreements with engineering societies in the United Kingdom. These are for the dual recognition of corresponding Chartered Engineer, Associate Engineer and Engineering Technician grades of the institution.
The institution's services to members include:
Free CPD Series and Sector webinars;
Awarding Registered Professional Titles;
Continuing Professional Development (CPD);
International Recognition of your Qualifications;
Advocating on behalf of Members and the Profession;
Supporting Career Progression;
Online Resources/Communications | https://en.wikipedia.org/wiki/Institution_of_Engineers_of_Ireland
The Institution of Environmental Sciences (IES) is a professional association and registered charity in the United Kingdom. The organisation promotes environmental protection and conservation, and performs related education and meta-analysis of scientific research. [ 1 ] The IES is a constituent body of both the Society for the Environment (SocEnv) and the Science Council , and awards environmental technician, chartered environmentalist [ 2 ] and chartered scientist [ 3 ] qualifications. The IES provides administration for two other organisations in the UK: the Community for Environmental Disciplines in Higher Education, [ 4 ] which accredits university programmes, and the Institute of Air Quality Management .
The IES publishes the environmental SCIENTIST [ 5 ] journal, which is sent quarterly to members and made available open access three months after publication. Articles are written by experts and professionals working in the environmental field. The IES also publishes reports on issues in the environmental science sector and provides guidance for professionals working in environmental science. [ 6 ]
The Institution of Environmental Sciences was founded as a result of an initiative by Dr John Rose during a series of meetings held during 1971-1972 at the Royal Society in London and chaired by Lord Burntwood. | https://en.wikipedia.org/wiki/Institution_of_Environmental_Sciences |
The Institution of Incorporated Engineers ( IIE ) was a multidisciplinary engineering institution in the United Kingdom . In 2006 it merged with the Institution of Electrical Engineers (IEE) to form the Institution of Engineering and Technology (IET). Before the merger the IIE had approximately 40,000 members. The IET is now the second largest engineering society in the world after the IEEE . The IET has the authority to establish professional registration of engineers (Chartered Engineer or Incorporated Engineer) through the Engineering Council; the IEEE has no equivalent registration authority.
The IIE traces its heritage to the Vulcanic Society that was founded in 1884. The Vulcanic Society was formed [ 1 ] by a group of apprentices from the works of Maudslay, Son & Field Ltd, in Lambeth, London. This society went through three name changes before it became the Junior Institution of Engineers in 1902, which became the Institution of General Technician Engineers in 1970 and the Institute of Mechanical and General Technician Engineers (IMGTechE) in 1976. In 1982 the IMGTechE and Institution of Technician Engineers in Mechanical Engineering (ITEME) merged to form the Institution of Mechanical Incorporated Engineers (IMechIE).
The Institution of Electrical and Electronic Technician Engineers and the Society of Electronic and Radio Technicians (SERT) merged in 1990 to form the Institution of Electronics and Electrical Incorporated Engineers (IEEIE).
The IIE was formed in April 1998 by the merger of the IMechIE, the IEEIE and The Institute of Engineers and Technicians (IET). In 1999, the Institution of Incorporated Executive Engineers (IIExE) merged with the IIE.
In October 2001, IIE received a Royal Charter in recognition of the significant contribution of its members to the UK economy and society.
In 2005, the Society of Engineers also merged with the IIE.
Discussions started in 2004 between the IEE and the IIE about the formation of a new institution, the Institution of Engineering and Technology . Following members voting in favour of the merger, the IET became operational on 31 March 2006. | https://en.wikipedia.org/wiki/Institution_of_Incorporated_Engineers |
The Institution of Mechanical Engineers ( IMechE ) is an independent professional association and learned society headquartered in London, United Kingdom, that represents mechanical engineers and the engineering profession. With over 120,000 members in 140 countries, working across industries such as railways, automotive, aerospace, manufacturing, energy, biomedical and construction, the Institution is licensed by the Engineering Council to assess candidates for inclusion on its Register of Chartered Engineers , Incorporated Engineers and Engineering Technicians .
The Institution was founded at the Queen's Hotel, Birmingham , by George Stephenson in 1847. It received a Royal Charter in 1930. The Institution's headquarters, purpose-built for the Institution in 1899, is situated at No. 1 Birdcage Walk in central London.
Informal meetings are said to have taken place in 1846, at locomotive designer Charles Beyer 's house in Cecil Street, Manchester , [ a ] or alternatively at Bromsgrove at the house of James McConnell , after viewing locomotive trials at the Lickey Incline . [ 1 ] Beyer, Richard Peacock , George Selby , Archibald Slate and Edward Humphrys were present. Bromsgrove seems the more likely candidate for the initial discussion, not least because McConnell was the driving force in the early years. [ 2 ] A meeting took place at the Queen's Hotel in Birmingham on 7 October to consider the idea further, and a committee was appointed with McConnell at its head to see the idea through to its inauguration. [ 3 ]
The Institution of Mechanical Engineers was then founded on 27 January 1847, in the Queen's Hotel next to Curzon Street station in Birmingham by the railway pioneer George Stephenson and others. [ 4 ] McConnell became the first chairman. [ 1 ] The founding of the Institution was said by Stephenson's biographer Samuel Smiles to have been spurred by outrage that Stephenson, the most famous mechanical engineer of the age, had been refused admission to the Institution of Civil Engineers unless he sent in "a probationary essay as proof of his capacity as an engineer". [ 5 ] However, this account has been challenged as part of a pattern of exaggeration on Smiles' part aimed at glorifying the struggles that various Victorian mechanical engineers had to overcome in their personal efforts to attain greatness. [ 6 ] Though there was certainly coolness between Stephenson and the Institution of Civil Engineers, it is more likely that the motivation behind the founding of the Institution of Mechanical Engineers was simply the need for a specific home for the growing number of mechanical engineers employed in the burgeoning railway and manufacturing industries. [ 5 ]
Beyer proposed that George Stephenson become the Institution's first president in 1847, [ 7 ] followed by his son, Robert Stephenson , in 1849. Beyer became vice-president and was one of the first to present papers to the Institution; [ 8 ] Charles Geach was the first treasurer. Throughout the 19th and 20th centuries some of Britain's most notable engineers held the position of president, including Joseph Whitworth , Carl Wilhelm Siemens and Harry Ricardo . It operated from premises in Birmingham until 1877 when it moved to London, taking up its present headquarters on Birdcage Walk in 1899. [ 9 ]
Upon its move to London in 1877 the Institution rented premises at No. 10 Victoria Chambers, where it remained for 20 years. In 1895 the Institution bought a plot of land at Storey's Gate, on the eastern end of Birdcage Walk , for £9,500. [ 9 ] Architect Basil Slade looked to the newly-completed Admiralty buildings facing the site for inspiration. The building was designed in the Queen Anne, 'streaky bacon', style in red brick and Portland stone . Inside, there were several features that were state of the art for the time, including a telephone, a 54-inch fan in the lecture theatre for driving air into the building, an electric lift from the Otis Elevator Company , and a Synchronome master-clock, which controlled all house timepieces. In 1933 architect James Miller , who also designed the neighbouring Institution of Civil Engineers , remodelled the building, expanding the library and introducing electric lighting.
The building would go on to host the first public presentation of Frank Whittle 's jet engine in 1945. [ 10 ] In 1943 it became the venue for the Royal Electrical & Mechanical Engineers ' planning of Operation Overlord and the invasion of Normandy. [ 9 ]
Today No. 1 Birdcage Walk hosts events, lectures, seminars and meetings in 17 conference and meeting rooms named after notable former members of the Institution, such as Whittle, Stephenson and Charles Parsons .
The Institution has several membership grades with post-nominals .
The James Watt International Medal is an award for excellence in engineering established in 1937 by the Institution of Mechanical Engineers. It is named after the Scottish engineer James Watt (1736–1819), who developed the Watt steam engine in 1781, which was fundamental to the changes brought by the Industrial Revolution in both his native Great Britain and the rest of the world.
The Whitworth Scholarship is awarded to a few promising engineers across the main engineering disciplines for the length of a degree course. On successful completion, they become Whitworth Scholars, receive a medal, and are entitled to use the post-nominals Wh.Sch. It was founded by Joseph Whitworth .
The Engineering Heritage Awards were created in 1984 to help recognise and promote the value of artefacts, locations, collections and landmarks of significant engineering importance.
The Energy, Environment and Sustainability Group Prize was created in 2017 to celebrate people who have taken "significant steps to bridge the gap between an unsustainable present and a more sustainable future." [ 11 ]
Along with The Manufacturer, the Institution also runs The Manufacturer MX Awards, [ 12 ] and Formula Student , the world's largest student motorsport event.
The Tribology Gold Medal is awarded each year for outstanding and supreme achievement in the field of tribology . It is funded from The Tribology Trust Fund. [ 13 ] It was established and first awarded in 1972. As of 2017, it has been awarded to 39 individuals from 12 different countries. [ 14 ]
As of 2020 , there have been 135 presidents of the Institution, who since 1922 have been elected annually for one year. The first president was George Stephenson , followed by his son Robert . Prior to 2018, Joseph Whitworth , John Penn and William Armstrong were the only presidents to have served two terms.
Pamela Liversidge in 1997 became the first female president; Professor Isobel Pollock became the second in 2012 and Carolyn Griffiths became the third in 2017.
† Baker resigned in June 2018. [ 24 ] The Institution's by-laws state that a casual vacancy for President shall be filled by appointing a Past President to the role; Tony Roche was elected and duly took up office for a second term in August of that year. [ 25 ]
The Institution of Mechanical Engineers has a number of committees that work to promote and develop thought leadership in different industry sectors. The Institution has eight divisions: Aerospace, Automobile, Biomedical Engineering Association, Construction & Building Services, Manufacturing Industries, Power Industries, Process Industries and Railway. [ 26 ]
The Biomedical Engineering Association (BmEA) aims to bring together key workers from both medicine and engineering to discuss the latest advances and issues, to enable networking among different industry leaders, and to promote the field of Medical Engineering, also known as Bioengineering or Biomedical Engineering , to government, healthcare professionals and the wider public.
The Railway Division was formed in 1969 when the Institution of Locomotive Engineers amalgamated with IMechE. [ 28 ] | https://en.wikipedia.org/wiki/Institution_of_Mechanical_Engineers |
The Institution of Metallurgists was a British professional association for metallurgists, largely involved in the iron and steel industry.
It was founded in 1945. [ 1 ] The inaugural meeting was held on 28 November 1945; the organization was formed by the Iron and Steel Institute and the Institute of Metals.
The International Iron and Steel Institute, now the World Steel Association , was formed in 1967. By the late 1960s the Institution had around 10,000 metallurgists as members.
It was involved in the formation of the Association of Professional Scientists and Technologists (APST) in 1971, [ 2 ] which was formed as a result of the Industrial Relations Act 1971 .
In September 1965, Ordinary National Certificates in science were introduced, in consultation with the Institution, the Royal Institute of Chemistry , the Institute of Physics , the Physical Society , the Institute of Biology , and the Mathematical Association .
In January 1969, this same set of institutions set up the Council of Science and Technology Institutes (CSTI), which eventually became the Science Council in 2003.
It was given a Royal Charter in 1975. In 1977 it became the sixteenth constituent of the Council of Engineering Institutions, which became the Engineering Council in 1981.
It merged with the Metals Society to become the Institute of Metals on 1 January 1985.
In the 1960s it was headquartered at 17 Belgrave Square in the City of Westminster . In the 1970s it moved to Northway House on the A1000 (High Road) in north London.
Source: The Institute of Materials, Minerals and Mining | https://en.wikipedia.org/wiki/Institution_of_Metallurgists |
The Institution of Mining Engineers (IMinE) was a former British professional institution.
It began as the Federated Institution of Mining Engineers in 1889, comprising the Chesterfield and Midland Counties Institution of Engineers; the Midland Institute of Mining, Civil and Mechanical Engineers; the North of England Institute of Mining and Mechanical Engineers ; and the South Staffordshire and East Worcestershire Institute of Mining Engineers, later joined by the North Staffordshire Institute of Mining and Mechanical Engineers, the Mining Institute of Scotland and the Manchester Geological and Mining Society. It was given a Royal Charter in 1915. [ 1 ] In the early 1980s it became affiliated with Group Four of the Engineering Council ; at the time, fifty-one engineering organisations were affiliated to the Engineering Council.
It merged with the National Association of Colliery Managers , effective from 23 October 1968. In 1995 it merged with the Institution of Mining Electrical and Mining Mechanical Engineers . Soon after, discussions took place about a merger with the Institution of Mining and Metallurgy , founded in 1892, and the two institutions merged in 2002.
It was headquartered at Cleveland House on City Road in London.
Fellows of the institution took the initials FIMinE.
It awarded the Medal of the Institution of Mining Engineers. | https://en.wikipedia.org/wiki/Institution_of_Mining_Engineers |
In mathematical logic , institutional model theory generalizes a large portion of first-order model theory to an arbitrary logical system .
The notion of "logical system" here is formalized as an institution . Institutions constitute a model-oriented meta-theory of logical systems, similar to how the theory of rings and modules constitutes a meta-theory for classical linear algebra . Another analogy can be made with universal algebra versus groups , rings , modules , etc. By abstracting away from the particulars of conventional logics, institution theory in fact comes closer to the realities of non-conventional logics.
Institutional model theory analyzes and generalizes classical model-theoretic notions and results.
For each concept and theorem, the infrastructure and properties required are analyzed and formulated as conditions on institutions, thus providing detailed insight into which properties of first-order logic they rely on and how far they can be generalized to other logics. | https://en.wikipedia.org/wiki/Institutional_model_theory
An institutional review board ( IRB ), also known as an independent ethics committee ( IEC ), ethical review board ( ERB ), or research ethics board ( REB ), is a committee at an institution that applies research ethics by reviewing the methods proposed for research involving human subjects, to ensure that the projects are ethical . The main goal of IRB reviews is to ensure that study participants are not harmed (or that harms are minimal and outweighed by research benefits). Such boards are formally designated to approve (or reject), monitor, and review biomedical and behavioral research involving humans , and they are legally required in some countries under certain specified circumstances. Most countries use some form of IRB to safeguard ethical conduct of research so that it complies with national and international norms, regulations or codes. [ 1 ]
The purpose of the IRB is to assure that appropriate steps are taken to protect the rights and welfare of people participating in a research study. A key goal of IRBs is to protect human subjects from physical or psychological harm , which they attempt to do by reviewing research protocols and related materials. The protocol review assesses the ethics of the research and its methods, promotes fully informed and voluntary participation by prospective subjects, and seeks to maximize the safety of subjects. They often conduct some form of risk-benefit analysis in an attempt to determine whether or not research should be conducted. [ 2 ]
IRBs are most commonly used for studies in the fields of health and the social sciences, including anthropology , sociology , and psychology . Such studies may be clinical trials of new drugs or medical devices, studies of personal or social behavior, opinions or attitudes, or studies of how health care is delivered and might be improved. Many types of research that involves humans, such as research into which teaching methods are appropriate, unstructured research such as oral histories , journalistic research, research conducted by private individuals, and research that does not involve human subjects, are not typically required to have IRB approval.
Formal review procedures for institutional human subject studies were originally developed in direct response to research abuses in the 20th century. Among the most notorious of these abuses were the experiments of Nazi physicians , which became a focus of the post-World War II Doctors' Trial , the Tuskegee Syphilis Study , a long-term project conducted between 1932 and 1972 by the U.S. Public Health Service , and numerous human radiation experiments conducted during the Cold War . Other controversial U.S. projects undertaken during this era include the Milgram obedience experiment , the Stanford prison experiment , and Project MKULTRA , a series of classified mind control studies organized by the CIA .
The result of these abuses was the National Research Act of 1974 and the development of the Belmont Report , which outlined the primary ethical principles in human subjects review; these include "respect for persons", "beneficence", and "justice". An IRB may approve only research for which the risks to subjects are balanced by potential benefits to society, and for which the selection of subjects presents a fair or just distribution of risks and benefits to eligible participants. A bona fide process for obtaining informed consent from participants is also generally needed. However, this requirement may be waived in certain circumstances – for example, when the risk of harm to participants is clearly minimal.
In the United States, IRBs are governed by Title 45 Code of Federal Regulations Part 46. [ 3 ] These regulations define the rules and responsibilities for institutional review, which is required for all research that receives support, directly or indirectly, from the United States federal government. Specifically, research on human subjects that is conducted by any institution must be reviewed by that institution's review board if it is not an exempt type and it meets the regulatory criteria for human subjects research.
Additionally, the states of California and Maryland have more expansive rules for reviewing research that is conducted within those two states. [ 5 ] Many institutions that engage in substantial amounts of research, such as research universities and research hospitals , have their boards review all research programs, even when this is not required, as a matter of their own internal policy. [ 4 ] [ 5 ]
IRBs are themselves regulated by the Office for Human Research Protections (OHRP) within the Department of Health and Human Services (HHS). Additional requirements apply to IRBs that oversee clinical trials of drugs involved in new drug applications , or to studies that are supported by the United States Department of Defense . In the United States, the Food and Drug Administration (FDA) and the OHRP have empowered IRBs to approve, require modifications in planned research prior to approval, or disapprove research. IRBs are responsible for critical oversight functions for research conducted on human subjects that are "scientific", "ethical", and "regulatory". The equivalent body responsible for overseeing U.S. federally funded research using animals is the Institutional Animal Care and Use Committee (IACUC).
In addition to registering its IRB with the OHRP, an institution is also required to obtain and maintain a Federalwide Assurance or FWA, before undertaking federally funded human research. [ 6 ] This is an agreement in which the institution commits to abiding by the regulations governing human research. A secondary supplement to the FWA is required when institutions are undertaking research supported by the U.S. Department of Defense. [ 7 ] This DoD Addendum includes further compliance requirements for studies using military personnel, or when the human research involves populations in conflict zones, foreign prisoners, etc. [ 8 ]
U.S. regulations identify several research categories that are considered exempt from IRB oversight.
Generally, human research ethics guidelines require that decisions about exemption are made by an IRB representative, not by the investigators themselves. [ 10 ]
Additionally, research projects conducted outside of a federal government agency or government-funded institution, such as a citizen science project conducted by a private individual or a group of private individuals, are generally not required to be approved by any institutional review board, unless the project is funded by the US federal government. [ 4 ] [ 5 ]
Numerous other countries have equivalent regulations or guidelines governing human subject studies and the ethics committees that oversee them. However, the organizational responsibilities and the scope of the oversight purview can differ substantially from one nation to another, especially in the domain of non-medical research. The United States Department of Health and Human Services maintains a comprehensive compilation of regulations and guidelines in other countries, as well as related standards from a number of international and regional organizations. [ 11 ]
Although "IRB" is a generic term used in the United States by the FDA and HHS, each institution that establishes such a board may use whatever name it chooses. Many simply capitalize the term "Institutional Review Board" as the proper name of their instance. Regardless of the name chosen, the IRB is subject to the US FDA's IRB regulations when studies of FDA-regulated products are reviewed and approved. At one time, such a committee was named the "Committee for the Protection of Human Subjects".
Originally, IRBs were simply committees at academic institutions and medical facilities to monitor research studies involving human participants, primarily to minimize or avoid ethical problems. Today, some of these reviews are conducted by for-profit organizations known as independent or commercial IRBs. Anyone, including private individuals, can pay a commercial IRB for review. [ 4 ] The responsibilities of these IRBs are identical to those based at academic or medical institutions, and within the US, they are governed by the same US federal regulations.
While its composition varies, an IRB often includes a balance of academic and non-academic members. This serves to provide a greater scope of understanding, which helps ensure ethics in research. In the US, regulations set out the board's membership and composition requirements, with provisions for diversity in experience, expertise, and institutional affiliation. For example, the minimum number of members is five, including at least one scientist and at least one non-scientist. The guidance strongly suggests that the IRB contain both men and women, but there is no regulatory requirement for gender balance in the IRB's membership. The full requirements are set out in 21 CFR 56.107. [ 12 ]
As IRBs are normally staffed by paid employees, there are costs to operating them. In 2001, the cost of operating an IRB typically ranged from about $75,000 to $770,000 ($133,000 to $1,367,000 after accounting for inflation) per year, depending on the volume of research reviewed. [ 13 ]
Unless a research proposal is determined to be exempt (see below), the IRB undertakes its work either in a convened meeting (a "full" review) or by using an expedited review procedure. [ 14 ] When a full review is required, a majority of the IRB members must be present at the meeting, at least one of whom has primary concern for the nonscientific aspects of the research. [ 14 ] The research can be approved if a majority of those present are in favor. [ 14 ]
An expedited review may be carried out if the research involves no more than minimal risk to subjects, or where minor changes are being made to previously approved research. [ 15 ] The regulations provide a list of research categories that may be reviewed in this manner. [ 15 ] An expedited review is carried out by the IRB chair, or by their designee(s) from the board membership. In the US, research activity cannot be disapproved by expedited review. [ 15 ]
The International Conference on Harmonisation sets out guidelines for registration of pharmaceuticals in multiple countries. It defines Good Clinical Practice (GCP), which is an agreed quality standard that governments can transpose into regulations for clinical trials involving human subjects. [ 16 ]
Here is a summary of several key regulatory guidelines for oversight of clinical trials:
The reviewers may also request that more information be given to subjects when, in their judgment, the additional information would add meaningfully to the protection of the rights, safety and/or well-being of the subjects. When a non-therapeutic trial is to be carried out with the consent of the subject's legally acceptable representative, reviewers should determine that the proposed protocol and/or other document(s) adequately address relevant ethical concerns and meets applicable regulatory requirements for such trials. Where the protocol indicates prior consent of the trial subject or the subject's legally acceptable representative is not possible, the review should determine that the proposed protocol and/or other document(s) adequately addresses relevant ethical concerns and meets applicable regulatory requirements for such trials (i.e., in emergency situations).
While the Belmont Principles and U.S. federal regulations were formulated with biomedical and social-behavioral research in mind, the enforcement of the regulations, the examples used in typical presentations regarding the history of the regulatory requirements, and the extensiveness of written guidance have been predominantly focused on biomedical research .
Numerous complaints have been received from investigators about the fit between the federal regulations and their IRB review requirements as they relate to social science research. [ 17 ] Broad complaints range from the legitimacy of IRB review, to the applicability of the concept of risk as it pertains to social science (e.g., possibly unneeded, over-burdensome requirements), [ 18 ] to the requirements for documentation of participants' informed consent (i.e., consent forms). [ 19 ] Researchers have tried to determine under what circumstances participants are more likely to read informed consent forms, and ways to improve their efficacy in the social sciences. [ 20 ] IRB members have been criticized for assuming that surveys about past trauma have a re-traumatizing effect. [ 21 ] [ 22 ] Social scientists have criticized biomedical IRBs for failing to adequately understand their research methods (such as ethnography ). For this reason, some large research institutions have set up multiple specialized IRBs, and may have one committee that exclusively oversees social science research.
In 2003, the US Office for Human Research Protections (OHRP), in conjunction with the Oral History Association and American Historical Association , issued a formal statement that taking oral histories , unstructured interviews (as if for a piece of journalism), collecting anecdotes, and similar free speech activities often do not constitute "human subject research" as defined in the regulations and were never intended to be covered by clinical research rules. [ 23 ] In 2017, the federal government announced that effective January 2018, the regulations would no longer cover "Scholarly and journalistic activities (e.g., oral history, journalism, biography, literary criticism, legal research, and historical scholarship), including the collection and use of information that focus directly on the specific individuals about whom the information is collected." [ 24 ]
Other US federal agencies supporting social science have attempted to provide guidance in this area, especially the National Science Foundation . In general, the NSF guidelines assure IRBs that the regulations have some flexibility and rely on the common sense of the IRB to focus on limiting harm, maximizing informed consent, and limiting bureaucratic limitations of valid research. [ 25 ]
Aspects of big data research pose formidable challenges for research ethics and thus show potential for wider applicability of formal review processes. [ 26 ] [ 27 ] [ 28 ] One theme is data breaches , but another with high difficulty is potentially dangerous predictive analytics with unintended consequences , via false-positives or new ways to invade privacy . A 2016 article on the hope to expand ethics reviews of such research included an example of a data breach in which a big data researcher leaked 70,000 OkCupid profiles with usernames and sexual orientation data. [ 27 ] It also gave an example of potential privacy invasion and government repression in which machine learning was used to build automated gaydar , labeling strangers as "probably gay" based on their facial photographs. [ 27 ] Analogies with phrenology [ 26 ] and Nazis identifying people as "probably part-Jewish" based on facial features have been made to show what can go wrong with research whose authors may have failed to adequately think through the risks of harm. Such challenges broach familiar themes, such as mistaken identity , pre-crime , and persecution , in new applications.
Generally speaking, citizen science , whether conducted by a single private individual or a group of individuals, is not required to follow the IRB process. [ 4 ] This is true even if some of the individuals involved are professional researchers or are also employed at institutions that normally review all research conducted by the institution. [ 4 ]
However, many academic journals require proof of IRB approval for all human-subject research, even when it is not legally required, which means that citizen scientists may be unable to publish scientific papers describing their findings. [ 4 ] Citizen scientists who expect to need IRB approval for publication or to comply with the terms of a research grant can pay a commercial IRB company. [ 4 ] In the US, a standard initial review often costs a few thousand dollars; a review to determine that a project is exempt is less expensive. [ 29 ]
The IRB-based approach to ethics assumes that human-subject research is conducted by an institution employing researchers, and that the institution and researchers have far more power and knowledge than the participants. The researchers and the participants are seen as distinct groups, and the concern is to prevent the researchers from exploiting the participants as a means to an end . This leads to IRBs issuing requirements such as having researchers explain the research project and obtain informed consent. However, this model does not always fit citizen science projects, especially when the participants are themselves the experts and researchers. [ 4 ] In such cases, a requirement to explain the project means participants would absurdly be informing themselves of their own plans. In a citizen science project, the boundaries between the researcher and the participant are blurred. [ 30 ] Similarly, many institutionally-driven research programs are limiting or prohibiting the return of results to individuals, especially for genetic or medical studies, for fear that some participants could be harmed if they misunderstand the results. [ 4 ] In this restrictive model, the participant never finds out their test results, or they can only find out their test results if the researchers carefully explain the results to them. But in a citizen science project, learning the results is a highly desired reason for participating, and, since the researchers are themselves participants, it would be impossible to prevent them from obtaining the results.
While the IRB approval and oversight process is designed to protect the rights and welfare of the research subjects , it has been the subject of criticism, by bioethicists and others, for conflicts of interest resulting in lax oversight. [ 31 ] [ 32 ] In 2005, the for-profit Western Institutional Review Board claimed to conduct the majority of reviews for new drug submissions to the FDA. [ 33 ] In a 2006 study of 575 IRB members at university medical centers, over one-third reported industry financial ties, and over one-third admitted they "rarely or never" disclosed conflicts of interest to other board members. [ 34 ]
In 2009 the US Government Accountability Office (GAO) set up a series of undercover tests to determine whether the IRB system was vulnerable to unethical manipulation. In one test, a fake product "Adhesiabloc" was submitted to a number of IRBs for approval for human tests. The product, company, and CVs of the supposed researchers were all fictitious and documents were forged by the GAO. The product was deliberately formulated to match some "significant risk" criteria of the FDA and was described by GAO as a "gel that would be poured into a patient's stomach after surgery to collect the bits and pieces left over from an operation." Despite this, one IRB approved the device for human testing. Other IRBs to which the device was submitted rejected the application, one of them saying it was "the riskiest thing I've ever seen on this board". However, none of the IRBs approached detected that the company and product were fake. The GAO also set up a fake IRB and obtained requests for approval from companies. They succeeded in getting assurance approval from the HHS for their fake IRB. At the time, the US HHS had only three staff to deal with 300 IRB registrations and 300 assurance applications per month. HHS stated that it would not be worthwhile to carry out additional evaluation even if they had the staff to do it. [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] | https://en.wikipedia.org/wiki/Institutional_review_board
Institutiones calculi differentialis ( Foundations of differential calculus ) is a mathematical work written in 1748 by Leonhard Euler and published in 1755. It lays the groundwork for the differential calculus . It consists of a single volume containing two internal books; there are 9 chapters in book I, and 18 in book II.
W. W. Rouse Ball ( 1888 ) writes that "this is the first textbook on the differential calculus which has any claim to be both complete and accurate, and it may be said that all modern treatises on the subject are based on it."
| https://en.wikipedia.org/wiki/Institutiones_calculi_differentialis
Institutiones rei herbariae ( transl. The Instruction of Botany ), originally published in French as Eléments de botanique , [ note 1 ] is a 1700 Latin-language botanical compendium . The book was the principal work of Joseph Pitton de Tournefort , a French botanist credited with establishing the modern concept of the genus .
As a part of the book's introduction, Tournefort included what may be the first recorded history of botany , titled Isagoge in rem herbariam . In it, some of the most important botanical authors are noted, and brief biographies are given for each. [ 1 ] In the 1694 edition, Eléments de botanique , Tournefort argued against John Ray's conception of the genus, to which Ray responded twice in 1696. However, in Institutiones rei herbariae in 1700, the criticisms of Ray were removed and replaced with praise. [ 2 ]
The main portion of the book contains an exhaustive list of plant names, organized in a system of " classes ", "sections", "genera", and "species". Furthermore, myriad images of plant leaves and flowers are included throughout the volume, engraved on copper-plate. [ 3 ]
While Institutiones rei herbariae was published in 1700 (and again in 1719), the book was originally written in French in 1694 as Eléments de botanique . [ 3 ] Beginning in 1716, an English-language version of Institutiones was published monthly under the title Botanical institutions . [ note 2 ] Rather than being translated from the original French work, Botanical institutions was adapted from the Latin Institutiones rei herbariae . The edition included a direct translation of the Latin text, additional commentary from English contributors, two alphabetical indices, and a brief biography of Tournefort. [ 4 ]
Tournefort's central work has been praised for its simplicity of organization, and for its role as a foundational document for later botanists. One biographer of Tournefort noted that the work was highly influenced by the societal thinking of the time. Eléments de botanique was a strictly utilitarian work: it was designed solely to facilitate plant identification so that plants could be put to use for their various purposes. [ 5 ] As such, every name had to be clearly linked to one species only; there was as little ambiguity as possible. [ 6 ] Many French, English, Italian, and German botanists continued to use Tournefort's system throughout the first half of the 18th century, much in the same way that later taxonomists would model their works on the system of Carl Linnaeus . [ 3 ]
The book also reached outside of botanical circles. For example, Charles De Geer (who would later become a prominent entomologist ) purchased three volumes of the 1719 edition of Institutiones rei herbariae . De Geer used the book to identify plants in his own garden, and also made use of Tournefort's classification system in his publications. [ 7 ]
However, some 18th-century naturalists , following the principles of John Locke , argued against the nominalism of Tournefort. [ 6 ] Where Tournefort argued that the "essence of the plant" could be tied to specific and generic names, botanists like Georges-Louis Leclerc and Jean-Baptiste Lamarck did not believe an organized science should be burdened by arbitrary nominal distinctions. [ 8 ] | https://en.wikipedia.org/wiki/Institutiones_rei_herbariae |
The Instituto Nacional de Biodiversidad ( INBio ) is the national institute for biodiversity and conservation in Costa Rica . Created at the end of the 1980s, and despite having national status, it is a privately run institution that works closely with various government agencies, universities, business sector and other public and private entities inside and outside of the country. [ 1 ] The goals of the institute are to complete an inventory of the natural heritage of Costa Rica, promote conservation and identify chemical compounds and genetic material present in living organisms that could be used by industries such as pharmaceuticals, cosmetics or others.
The institute has a collection of over three million insects representing tens of thousands of species, all recorded in Atta, a computer database that holds the collection data for each specimen, such as its exact location (including GPS coordinates), the date of collection, the name of the collector, and the method of collection.
Due to impending insolvency, in March 2015 INBio's biodiversity collection and database were taken over by the state [ 2 ] (and returned to the Natural History Museum, from which much of it was taken when INBio was founded), and its theme park was converted to government operation. [ 3 ] INBio was to move forward as a "think tank" type institute with money raised from the transfer of most of its assets to the government.
Costa Rica decided in 1989 that some sort of organization was necessary to study the biodiversity of Costa Rica. The government did not have the ability at the time to fund a new organization, so a handful of scientists and entrepreneurs took the initiative and created the non-profit organization now known as INBio. Among the founders of the institution was Rodrigo Gámez , a well-known Costa Rican scientist dedicated to teaching people about the importance of biodiversity and its conservation. In 2012 he received the MAGÓN award (Premio Nacional de Cultura Magón), a prestigious prize given each year to someone who has made an outstanding contribution to Costa Rica, in his case in science. In the same year he received the international MIDORI Prize, awarded in Japan by a Japanese institution, and he has received numerous other awards. Rodrigo Gámez is still president of the institution. [ 1 ]
In 1995 INBio was awarded the Prince of Asturias Award for Technical and Scientific Research.
There are many different components to INBio, such as bio-prospecting, INBioparque, the INBio editorial, and the many different research areas such as arthropods , fungi , and plants . Bio-prospecting is the division dealing with finding useful products from the specimens collected. INBio has worked with organizations such as Merck , Bristol-Myers Squibb , and the University of Massachusetts Amherst . [ 4 ] INBioparque is a natural park in Santo Domingo, Heredia , 85 km (53 mi) northeast of downtown San Jose in Costa Rica . The research programs range from studying the spider family Oonopidae to compiling a book of all the known and described fly genera in Central America, a project that has never before been done in a tropical region with such large biodiversity.
The institute's work has chiefly developed in the following areas:
Generating information on the diversity of the country's species and ecosystems. It currently owns a collection of more than 3 million specimens, each identified and cataloged, including arthropods, plants, fungi and mollusks. Furthermore, information on the country's different ecosystems is generated.
Integrating the information generated by INBio into decision-making processes for the protection and sustainable use of its biodiversity, for both the public and private sectors. INBio works closely with SINAC (Sistema Nacional de Áreas de Conservación; National System of Conservation Areas) and is considered a strategic partner in the protection of the country's protected areas.
Sharing information and understanding of biodiversity with different sectors of the public, seeking to create a wider knowledge of its value. Most of this effort is centered in INBioparque, a theme park opened in 2000 that aims to bring families and visitors closer to the rich Costa Rican nature. Furthermore, through other methods INBio looks to strengthen the environmental component of the Costa Rican population's actions and decisions.
Developing and applying technological tools to support the process of generation, administration, analysis and dissemination of information on biodiversity. The information on each specimen in the biodiversity inventory can be found in a database named Atta, accessible to the public through INBio's webpage.
Searching for sustainable, commercially applicable uses of the resources of biodiversity. INBio has been a pioneering institution in establishing research agreements for the search for chemical substances, genes, etc., present in plants, insects, marine organisms and microorganisms, which could be used by the pharmaceutical, medical, biotechnology, cosmetics, nutritional and agricultural industries. Although a national initiative, INBio has, given its scope, become an international force for integrating conservation and development. The application of scientific knowledge of biodiversity to economic activities such as ecotourism, medicine, agriculture or the development of mechanisms for payment for environmental services exemplifies this integrating role and is among the activities that attract the attention of the international community.
From 1996 to 2011, Editorial InBio published important books about specific aspects of Costa Rican and Latin American biodiversity: | https://en.wikipedia.org/wiki/Instituto_Nacional_de_Biodiversidad |
The Institute of Astrophysics of Andalusia ( Spanish : Instituto de Astrofísica de Andalucía , IAA-CSIC) is a research institute funded by the Spanish government's Consejo Superior de Investigaciones Científicas (CSIC, the Spanish National Research Council), and is located in Granada , Andalusia , Spain . IAA activities are related to research in the field of astrophysics , and to instrument development both for ground-based telescopes and for space missions. Scientific research at the Institute covers the Solar System , star formation , stellar structure and evolution , galaxy formation and evolution and cosmology . The IAA was created as a CSIC research institute in July 1975. Presently, the IAA operates the Sierra Nevada Observatory and, jointly with the Max Planck Institute for Astronomy in Heidelberg, the Calar Alto Observatory .
The Instituto de Astrofísica de Andalucía is divided in the following departments, each with an (incomplete) outline of research avenues and groups:
The technological needs of IAA's research groups are fulfilled by the Instrumental and Technological Developments Unit . | https://en.wikipedia.org/wiki/Instituto_de_Astrofísica_de_Andalucía |
The Instituto de Medicina Molecular João Lobo Antunes (Institute of Molecular Medicine), or iMM for short, is an associated research institution of the University of Lisbon , in Lisbon , Portugal .
IMM is devoted to human genome research with the aim of contributing to a better understanding of disease mechanisms, developing novel predictive tests, improving diagnostic tools, and developing new therapeutic approaches.
IMM was created in November 2001, as a result of the association of five research centres from the University of Lisbon Medical School: the Biology and Molecular Pathology Centre (CEBIP), the Lisbon Neurosciences Centre (CNL), the Microcirculation and Vascular Pathobiology Centre (CMBV), the Gastroenterology Centre (CG), and the Nutrition and Metabolism Centre (CNB).
In 2003, the Molecular Pathobiology Research Centre (CIPM) of the Portuguese Institute of Oncology Francisco Gentil (IPOFG) became an associate member of IMM.
Historically, IMM benefited from the full integration into the Lisbon Medical School of academic researchers who initiated their academic training and scientific careers at the Instituto Gulbenkian de Ciência (IGC), in Oeiras , one of the first national institutions to introduce and make use of state-of-the-art cell and molecular biology techniques.
The IMM is now known as the Instituto de Medicina Molecular João Lobo Antunes, in honour of one of its founders and its president from 2001 to 2014, Professor João Lobo Antunes. [ 1 ] Maria do Carmo-Fonseca is the current president of IMM, having previously served as its Executive Director since its creation. [ 2 ] The current executive director is the malaria researcher Maria Mota. [ 3 ]
Instream use refers to water use taking place within a stream channel . Examples are hydroelectric power generation, navigation , fish propagation and use, and recreational activities. Some instream uses, usually associated with fish populations and navigation, require a minimum amount of water to be viable.
The term is often used in discussions concerning water resources allocation and/or water rights . | https://en.wikipedia.org/wiki/Instream_use |
In computer architecture , instructions per cycle ( IPC ), commonly called instructions per clock , is one aspect of a processor 's performance: the average number of instructions executed for each clock cycle . It is the multiplicative inverse of cycles per instruction . [ 1 ] [ 2 ] [ 3 ]
While early generations of CPUs carried out all the steps to execute an instruction sequentially, modern CPUs can do many things in parallel. As it is impossible simply to keep doubling the clock speed, instruction pipelining and superscalar processor design have evolved so that CPUs can use a variety of execution units in parallel, looking ahead through the incoming instructions in order to schedule them efficiently. This allows the number of instructions completed per cycle to rise well above 1 and is responsible for much of the speed improvement in subsequent CPU generations.
IPC is calculated by running a set piece of code, counting the number of machine-level instructions required to complete it, and then using high-performance timers to measure the number of clock cycles the run takes on the actual hardware. The final result is the number of instructions divided by the number of CPU clock cycles.
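To make that procedure concrete, here is a minimal C sketch (not from the source text) for x86-64 with GCC or Clang. The __rdtsc() intrinsic and the volatile accumulator are standard; the instructions-per-iteration figure is an assumption that must be verified against the compiled output, and the sketch further assumes the time-stamp counter ticks at the core clock rate, which is only approximately true on modern CPUs:

```c
/* Rough IPC estimate: count cycles for a loop with a known instruction count. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

#define ITERATIONS 100000000ULL

/* Instructions per loop iteration; count them in the compiled output
   (e.g. with objdump -d). The value below is an assumption. */
#define INSNS_PER_ITER 4.0

int main(void) {
    volatile uint64_t acc = 0;   /* volatile stops the compiler optimising
                                    the loop away entirely */
    uint64_t start = __rdtsc();
    for (uint64_t i = 0; i < ITERATIONS; i++)
        acc += i;
    uint64_t cycles = __rdtsc() - start;

    double ipc = (INSNS_PER_ITER * (double)ITERATIONS) / (double)cycles;
    printf("cycles: %llu, estimated IPC: %.2f\n",
           (unsigned long long)cycles, ipc);
    return 0;
}
```

In practice, hardware performance counters are more reliable than manual instruction counting; on Linux, for example, `perf stat` reports an "insn per cycle" figure directly.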
The number of instructions per second and floating point operations per second for a processor can be derived by multiplying the number of instructions per cycle by the clock rate (cycles per second, given in hertz ) of the processor in question. For example, a processor that averages 2 instructions per cycle at a clock rate of 3 GHz executes roughly 6 billion instructions per second. The number of instructions per second is an approximate indicator of the likely performance of the processor.
The number of instructions executed per clock is not a constant for a given processor; it depends on how the particular software being run interacts with the processor, and indeed the entire machine, particularly the memory hierarchy . However, certain processor features tend to lead to designs that have higher-than-average IPC values, such as the presence of multiple arithmetic logic units (an ALU is a processor subsystem that can perform elementary arithmetic and logical operations) and short pipelines. When comparing different instruction sets , a simpler instruction set may lead to a higher IPC figure than an implementation of a more complex instruction set using the same chip technology; however, the more complex instruction set may be able to achieve more useful work with fewer instructions. As such, comparing IPC figures between different instruction sets (for example x86 vs ARM) is usually meaningless.
The useful work that can be done with any computer depends on many factors besides the processor speed. These factors include the instruction set architecture , the processor's microarchitecture , and the computer system organization (such as the design of the disk storage system and the capabilities and performance of other attached devices), the efficiency of the operating system , and the high-level design of application software .
For computer users and purchasers, application benchmarks , rather than instructions per cycle, are typically a much more useful indication of system performance. However, IPC does provide an example of why clock speed is not the only factor relevant to computer performance. | https://en.wikipedia.org/wiki/Instructions_per_cycle |
Instructions per second ( IPS ) is a measure of a computer 's processor speed. For complex instruction set computers (CISCs), different instructions take different amounts of time, so the value measured depends on the instruction mix; even for comparing processors in the same family the IPS measurement can be problematic. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches and no cache contention , whereas realistic workloads typically lead to significantly lower IPS values. Memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, synthetic benchmarks such as Dhrystone are now generally used to estimate computer performance in commonly used applications, and raw IPS has fallen into disuse.
The term is commonly used in association with a metric prefix (k, M, G, T, P, or E) to form kilo instructions per second ( kIPS ), mega instructions per second ( MIPS ), giga instructions per second ( GIPS ) and so on. Formerly TIPS was used occasionally for "thousand IPS".
IPS can be calculated using this equation: [ 1 ]

IPS = sockets × (cores per socket) × clock (cycles per second) × (instructions per cycle)
However, the instructions/cycle measurement depends on the instruction sequence, the data and external factors.
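As a worked illustration of the formula above, the following short C snippet plugs in entirely hypothetical figures for a two-socket, eight-core-per-socket machine:

```c
/* Worked example of the IPS formula; all figures are hypothetical. */
#include <stdio.h>

int main(void) {
    double sockets = 2;            /* two-socket server          */
    double cores_per_socket = 8;   /* eight cores per socket     */
    double clock_hz = 3.0e9;       /* 3 GHz clock                */
    double ipc = 1.5;              /* measured average IPC       */

    double ips = sockets * cores_per_socket * clock_hz * ipc;
    printf("%.0f instructions per second (= %.0f MIPS)\n", ips, ips / 1e6);
    return 0;
}
```

With these numbers the result is 7.2 × 10^10 instructions per second, or 72,000 MIPS; as the surrounding text stresses, the IPC term varies with the workload, so such a figure is a peak estimate rather than a guarantee.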
Before standard benchmarks were available, average speed rating of computers was based on calculations for a mix of instructions with the results given in kilo instructions per second (kIPS). The most famous was the Gibson Mix , [ 2 ] produced by Jack Clark Gibson of IBM for scientific applications in 1959.
Other ratings, such as the ADP mix which does not include floating point operations, were produced for commercial applications. The thousand instructions per second (kIPS) unit is rarely used today, as most current microprocessors can execute at least a million instructions per second.
Gibson divided computer instructions into 12 classes, based on the IBM 704 architecture, adding a 13th class to account for indexing time. Weights were primarily based on analysis of seven scientific programs run on the 704, with a small contribution from some IBM 650 programs. The overall score was then the weighted sum of the average execution speed for instructions in each class. [ 3 ]
The speed of a given CPU depends on many factors, such as the type of instructions being executed, the execution order and the presence of branch instructions (problematic in CPU pipelines). CPU instruction rates are different from clock frequencies, usually reported in Hz , as each instruction may require several clock cycles to complete or the processor may be capable of executing multiple independent instructions simultaneously. MIPS can be useful when comparing performance between processors made with similar architecture (e.g. Microchip branded microcontrollers), but they are difficult to compare between differing CPU architectures . [ 4 ] This led to the term "Meaningless Indicator of Processor Speed," [ 5 ] or less commonly, "Meaningless Indices of Performance," [ 6 ] being popular amongst technical people by the mid-1980s.
For this reason, MIPS has become not a measure of instruction execution speed, but task performance speed compared to a reference. In the late 1970s, minicomputer performance was compared using VAX MIPS , where computers were measured on a task and their performance rated against the VAX-11/780 that was marketed as a 1 MIPS machine. (The measure was also known as the VAX Unit of Performance or VUP .) This was chosen because the 11/780 was roughly equivalent in performance to an IBM System/370 model 158–3, which was commonly accepted in the computing industry as running at 1 MIPS.
Many minicomputer performance claims were based on the Fortran version of the Whetstone benchmark , giving Millions of Whetstone Instructions Per Second (MWIPS). The VAX 11/780 with FPA (1977) runs at 1.02 MWIPS.
Effective MIPS speeds are highly dependent on the programming language used. The Whetstone Report has a table showing MWIPS speeds of PCs via early interpreters and compilers up to modern languages. The first PC compiler was for BASIC (1982) when a 4.8 MHz 8088/87 CPU obtained 0.01 MWIPS. Results on a 2.4 GHz Intel Core 2 Duo (1 CPU 2007) vary from 9.7 MWIPS using BASIC Interpreter, 59 MWIPS via BASIC Compiler, 347 MWIPS using 1987 Fortran, 1,534 MWIPS through HTML/Java to 2,403 MWIPS using a modern C / C++ compiler.
For most early 8-bit and 16-bit microprocessors , performance was measured in thousand instructions per second (1000 kIPS = 1 MIPS).
zMIPS refers to the MIPS measure used internally by IBM to rate its mainframe servers ( zSeries , IBM System z9 , and IBM System z10 ).
Weighted million operations per second (WMOPS) is a similar measurement, used for audio codecs. [ 7 ] | https://en.wikipedia.org/wiki/Instructions_per_second
In aviation , an instrument approach or instrument approach procedure ( IAP ) is a series of predetermined maneuvers for the orderly transfer of an aircraft operating under instrument flight rules from the beginning of the initial approach to a landing , or to a point from which a landing may be made visually . [ 1 ] These approaches are approved in the European Union by EASA and the respective country authorities and in the United States by the FAA or the United States Department of Defense for the military. The ICAO defines an instrument approach as "a series of predetermined maneuvers by reference to flight instruments with specific protection from obstacles from the initial approach fix , or where applicable, from the beginning of a defined arrival route to a point from which a landing can be completed and thereafter, if landing is not completed, to a position at which holding or en route obstacle clearance criteria apply." [ 2 ]
There are three categories of instrument approach procedures: precision approach (PA), approach with vertical guidance (APV), and non-precision approach (NPA). A precision approach uses a navigation system that provides course and glidepath guidance. Examples include precision approach radar (PAR), instrument landing system (ILS), and GBAS landing system (GLS). An approach with vertical guidance also uses a navigation system for course and glidepath deviation, though not to the same standards as a PA. Examples include baro-VNAV , localizer type directional aid (LDA) with glidepath, LNAV /VNAV and LPV . A non-precision approach uses a navigation system for course deviation but does not provide glidepath information. These approaches include VOR , NDB , LP (Localizer Performance), and LNAV. PAs and APVs are flown to a decision height/altitude (DH/DA), while non-precision approaches are flown to a minimum descent altitude (MDA). [ 2 ] : 757 [ 3 ]
IAP charts are aeronautical charts that portray the aeronautical data that is required to execute an instrument approach to an airport. Besides depicting topographic features, hazards and obstructions, they depict the procedures and airport diagram. Each procedure chart uses a specific type of electronic navigation system such as an NDB, TACAN , VOR, ILS/ MLS and RNAV . [ 2 ] : 981–982 The chart name reflects the primary navigational aid (NAVAID), if there is more than one straight-in procedure or if it is just a circling-only procedure. A communication strip on the chart lists frequencies in the order they are used. Minimum, maximum and mandatory altitudes are depicted in addition to the minimum safe altitude (MSA) for emergencies. A cross depicts the final approach fix (FAF) altitude on NPAs while a lightning bolt does the same for PAs. NPAs depict the MDA while a PA shows both the decision altitude (DA) and decision height (DH). Finally, the chart depicts the missed approach procedures in plan and profile view, besides listing the steps in sequence. [ 4 ] : 4–9, 4–11, 4–19, 4–20, 4–41
Before satellite navigation (GNSS) was available for civilian aviation, the requirement for large land-based navigation aid (NAVAID) facilities generally limited the use of instrument approaches to land-based (i.e. asphalt, gravel, turf, ice) runways (and those on aircraft carriers ). GNSS technology makes it possible, at least in theory, to create instrument approaches to any point on the Earth's surface (whether on land or water); consequently, there are nowadays examples of water aerodromes (such as Rangeley Lake Seaplane Base in Maine , United States) that have GNSS-based approaches.
An instrument approach procedure may contain up to five separate segments, which depict course, distance, and minimum altitude. These segments are [ 4 ] : 4–43, 4–53
When an aircraft is under radar control , air traffic control (ATC) may replace some or all of these phases of the approach with radar vectors (ICAO radar vectoring is the provision of navigational guidance to aircraft in the form of specific headings, based on the use of radar). [ 2 ] : 1033 ATC will use an imaginary "approach gate" when vectoring aircraft to the final approach course. This gate will be 1 nautical mile (NM) from the FAF and at least 5 NM from the landing threshold. Outside radar environments, the instrument approach starts at the IAF. [ 4 ] : 4–54, 4–56
Though ground-based NAVAID approaches still exist, the FAA is transitioning to approaches which are satellite-based (RNAV). Additionally, in lieu of the published approach procedure, a flight may continue as an IFR flight to landing while increasing the efficiency of the arrival with either a contact or visual approach. [ 4 ] : 4–57
A visual approach is an ATC authorization for an aircraft on an IFR flight plan to proceed visually to the airport of intended landing; it is not an instrument approach procedure. [ 5 ]
A visual approach may be requested by the pilot or offered by ATC. Visual approaches are possible when weather conditions permit continuous visual contact with the destination airport. They are issued in such weather conditions in order to expedite handling of IFR traffic. The ceiling must be reported or expected to be at least 1000 feet AGL ( above ground level ) and the visibility is at least 3 SM (statute miles). [ 4 ] : 4–57
A pilot may accept a visual approach clearance as soon as the pilot has the destination airport in sight. According to ICAO Doc. 4444, it is enough for a pilot to see the terrain to accept a visual approach. The rationale is that a pilot who is familiar with the terrain in the vicinity of the airfield can easily find the way to the airport with only the surface in sight.
ATC must ensure that weather conditions at the airport are above certain minima (in the U.S., a ceiling of 1000 feet AGL or greater and visibility of at least 3 statute miles) before issuing the clearance. According to ICAO Doc. 4444, it is enough if the pilot reports that, in his/her opinion, the weather conditions allow a visual approach to be made. In general, ATC provides the weather information, but it is the pilot who decides whether the weather is suitable for landing. Once the pilot has accepted the clearance, he/she assumes responsibility for separation and wake turbulence avoidance and may navigate as necessary to complete the approach visually. According to ICAO Doc. 4444, ATC continues to provide separation between an aircraft making a visual approach and other arriving and departing aircraft. The pilot may become responsible for separation from a preceding aircraft if he/she has that aircraft in sight and is instructed accordingly by ATC.
In the United States, it is required that an aircraft have the airport, the runway, or the preceding aircraft in sight. [ 4 ] : 4–57 It is not enough to have the terrain in sight (see § Contact approach ). [ 6 ]
When a pilot accepts a visual approach, the pilot accepts responsibility for establishing a safe landing interval behind the preceding aircraft, as well as responsibility for wake-turbulence avoidance, and to remain clear of clouds. [ 4 ] : 4–57 [ 6 ]
A contact approach may be requested by the pilot (but is not offered by ATC) when the pilot has 1 SM flight visibility, is clear of clouds, and expects to be able to maintain those conditions all the way to the airport. Obstruction clearance and VFR traffic avoidance become the pilot's responsibility. [ 4 ] : 4–58 [ 6 ]
A visual approach that has a specified route the aircraft is to follow to the airport. Pilots must have a charted visual landmark or a preceding aircraft in sight, and weather must be at or above the published minimums. Pilots are responsible for maintaining a safe approach interval and wake turbulence separation. [ 4 ] : 4–58
These approaches include both ground-based and satellite-based systems and include criteria for terminal arrival areas (TAAs), basic approach criteria, and final approach criteria. The TAA is a transition from the en route structure to the terminal environment which provides minimum altitudes for obstacle clearance. The TAA is a "T" or "basic T" design with left and right base leg IAFs on initial approach segments perpendicular to the intermediate approach segment where there is a dual purpose IF/IAF for a straight-in procedure (no procedure turn [NoPT]), or hold-in-lieu-of procedure-turn (HILPT) course reversal. The base leg IAFs are 3 to 6 NM from the IF/IAF. The basic-T is aligned with the runway centerline, with the IF 5 NM from the FAF, and the FAF is 5 NM from the threshold. [ 4 ] : 4–58, 4–60, 4–61
The RNP approach chart should have four lines of approach minimums corresponding to LPV, LNAV/VNAV, LNAV, and circling. This allows GPS or WAAS equipped aircraft to use the LNAV MDA using GPS only, if WAAS becomes unavailable. [ 7 ] : 4–26
These are the most precise and accurate approaches. A runway with an ILS can accommodate 29 arrivals per hour. [ 7 ] : 4–63 ILS systems on two or three runways increase capacity with parallel (dependent) ILS, simultaneous parallel (independent) ILS, precision runway monitor (PRM), and converging ILS approaches. ILS approaches have three classifications, CAT I, CAT II, and CAT III. CAT I SA, CAT II and CAT III require additional certification for operators, pilots, aircraft and equipment, with CAT III used mainly by air carriers and the military. Simultaneous parallel approaches require runway centerlines to be between 4,300 and 9,000 feet apart, plus a "dedicated final monitor controller" to monitor aircraft separation. Simultaneous close parallel (independent) PRM approaches require runway separation between 3,400 and 4,300 feet. Simultaneous offset instrument approaches (SOIAs) apply to runways separated by 750–3,000 feet. A SOIA uses an ILS/PRM on one runway and an LDA/PRM with glideslope for the other. [ 4 ] : 4–64, 4–65, 4–66
These approaches use VOR facilities on and off the airport and may be supplemented with DME and TACAN. [ 4 ] : 4–69
These approaches use NDB facilities on and off the airport and may be supplemented with a DME. These approaches are gradually being phased out in Western countries. [ 4 ] : 4–69, 4–72
This will be either a precision approach radar (PAR) or an airport surveillance radar (ASR) approach. Information is published in tabular form. The PAR provides vertical and lateral guidance plus range. The ASR only provides heading and range information. [ 4 ] : 4–72, 4–75
This is a rare type of approach, in which a radar installed on the approaching aircraft is used as the primary means of navigation for the approach. It is mainly used at offshore oil platforms and select military bases. [ 8 ] This type of approach takes advantage of the runway or, more commonly, the oil platform standing out from its surrounding environment when viewed on a radar. [ 9 ] For additional visibility on a radar, radar reflectors may be installed alongside the runway. [ 10 ]
These approaches include a localizer approach, localizer/DME approach, localizer back course approach, and a localizer-type directional aid (LDA). In cases where an ILS is installed, a back course may be available in conjunction with the localizer. Reverse sensing occurs on the back course using standard VOR equipment. With a horizontal situation indicator (HSI) system, reverse sensing is eliminated if it is set appropriately to the front course. [ 4 ] : 4–76, 4–78
This type of approach is similar to the ILS localizer approach, but with less precise guidance. [ 4 ] : 4–78
Non-precision systems provide lateral guidance (that is, heading information), but do not provide vertical guidance (i.e., altitude or glide path guidance).
Precision approach systems provide both lateral (heading) and vertical (glidepath) guidance.
In a precision approach, the decision height (DH) or decision altitude (DA) is a specified lowest height or altitude in the approach descent at which, if the required visual reference to continue the approach (such as the runway markings or runway environment) is not visible to the pilot, the pilot must initiate a missed approach . [ 2 ] : 1000 [ 4 ] : 4–20 (A decision height is measured AGL (above ground level) while a decision altitude is measured above MSL (mean sea level).) The specific values for DH and/or DA at a given airport are established with intention to allow a pilot sufficient time to safely re-configure an aircraft to climb and execute the missed approach procedures while avoiding terrain and obstacles. While a DH/DA denotes the altitude at which a missed approach procedure must be started, it does not preclude the aircraft from descending below the prescribed DH/DA.
In a non-precision approach (that is when no electronic glideslope is provided), the minimum descent altitude (MDA) is the lowest altitude, expressed in feet above mean sea level, to which descent is authorized on final approach or during circle-to-land maneuvering in execution of a standard instrument approach procedure. [ 2 ] : 1019 [ 4 ] : 4–19 [ 12 ] : G-12 The pilot may descend to the MDA, and may maintain it, but must not descend below it until visual reference is obtained, and must initiate a missed approach if visual reference has not been obtained upon reaching the missed approach point (MAP).
DH/DA, the corresponding parameter for precision approach, differs from MDA in that the missed approach procedure must be initiated immediately on reaching DH/DA, if visual reference has not yet been obtained: but some overshoot below it is permitted while doing so because of the vertical momentum involved in following a precision approach glide-path.
If a runway has both non-precision and precision approaches defined, the MDA of the non-precision approach is almost always greater than the DH/DA of the precision approach, because of the lack of vertical guidance on the non-precision approach. The extra height depends on the accuracy of the navaid the approach is based on, with ADF approaches and SRAs tending to have the highest MDAs.
All published minimums assume full operation of all components and visual aids. When any component is malfunctioned, the minimums increase. If more than one component is inoperative, the minimums are raised to the highest minimum required by any single inoperative component. [ 12 ] : 10–22
An instrument approach wherein final approach is begun without first having executed a procedure turn, not necessarily completed with a straight-in landing or made to straight-in landing minimums. [ 2 ] : 1041 A direct instrument approach requires no procedure turn or any other course reversal procedures for alignment (usually indicated by "NoPT" on approach plates), as the arrival direction and the final approach course are not too different from each other. The direct approach can be finished with a straight-in landing or circle-to-land procedure.
Some approach procedures do not permit straight-in approaches unless the pilots are being radar vectored. In these situations, pilots are required to complete a procedure turn (PT) or other course reversal, generally within 10 NM of the PT fix, to establish the aircraft inbound on the intermediate or final approach segment. [ 4 ] : 4–49 When conducting any type of approach, if the aircraft is not lined up for a straight-in approach, then a course reversal might be necessary. The idea of a course reversal is to allow sufficiently large changes in the course flown (in order to line the aircraft up with the final approach course), without taking too much space horizontally and while remaining within the confines of protected airspace. This is accomplished in one of three ways: a procedure turn, a holding pattern, or a teardrop course reversal.
Circle-to-land is a maneuver initiated by the pilot to align the aircraft with a runway for landing when a straight-in landing from an instrument approach is not possible or is not desirable, and only after ATC authorization has been obtained and the pilot has established and maintains required visual reference to the airport. [ 2 ] : 994 [ 4 ] : 4–11 A circle-to-land maneuver is an alternative to a straight-in landing. It is a maneuver used when a runway is not aligned within 30 degrees of the final approach course of the instrument approach procedure or the final approach requires 400 feet (or more) of descent per nautical mile, and therefore requires some visual maneuvering of the aircraft in the vicinity of the airport after the instrument portion of the approach is completed to align the aircraft with the runway for landing.
It is very common for a circle-to-land maneuver to be executed during a straight-in approach to a different runway, e.g., an ILS approach to one runway, followed by a low-altitude transition, ending in a landing on another (not necessarily parallel) runway. This way, approach procedures to one runway can be used to land on any runway at the airport, as the other runways might lack instrument procedures or their approaches cannot be used for other reasons (traffic considerations, navigation aids being out of service, etc.).
Circling to land is considered more difficult and less safe than a straight-in landing, especially under instrument meteorological conditions because the aircraft is at a low altitude and must remain within a short distance from the airport in order to be assured of obstacle clearance (often within a couple of miles, even for faster aircraft). The pilot must maintain visual contact with the airport at all times; loss of visual contact requires execution of a missed approach procedure. If the ceiling allows, pilots are recommended to fly closer to the airport's pattern altitude for a safer operation. [ 12 ] : 10–20
Pilots should be aware that there are significant differences in obstacle clearance criteria between procedures designed in accordance with ICAO PANS-OPS and US TERPS. This is especially true in respect of circling approaches where the assumed radius of turn and minimum obstacle clearance are markedly different. [ 13 ] [ 14 ] [ 15 ] In the United States, the published circling minimums guarantees a minimum of 300 ft (91 m) above any obstacles within the circling area, which is defined by a radius from runways based on the aircraft's approach category . [ 12 ] : 10–20
A visual maneuver by a pilot performed at the completion of an instrument approach to permit a straight-in landing on a parallel runway not more than 1,200 feet to either side of the runway to which the instrument approach was conducted. [ 2 ] : 793–795, 1038 [ 16 ]
A useful formula pilots use to calculate descent rates (for the standard 3° glide slope):

rate of descent = (ground speed ÷ 2) × 10

or

rate of descent = ground speed × 5

For other glideslope angles:

rate of descent = glideslope angle × ground speed × 100/60

where rate of descent is in feet per minute, and ground speed is in knots .

The latter replaces tan α (see below) with α/60 , which has an error of about 5% up to 10°.

Example: at a ground speed of 120 knots on a 3° glideslope, rate of descent ≈ 3 × 120 × 100/60 = 600 feet per minute.

The simplified formulas above are based on a trigonometric calculation:

rate of descent = ground speed × tan α × 6076/60

where:

α is the glideslope angle, 6076 is the number of feet in one nautical mile, and 60 is the number of minutes in one hour (converting knots, i.e. nautical miles per hour, into feet per minute).

Example: at 120 knots and α = 3°, rate of descent = 120 × tan 3° × (6076/60) ≈ 120 × 0.0524 × 101.3 ≈ 637 feet per minute, which the rule of thumb above approximates as 600 feet per minute.
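For readers who prefer code, the following C sketch (illustrative only, not part of the source text; compile with the math library, e.g. `cc descent.c -lm`) compares the rule of thumb with the exact trigonometric value derived above:

```c
/* Compare the rule-of-thumb descent rate with the exact value. */
#include <stdio.h>
#include <math.h>

#define FT_PER_NM  6076.12                   /* feet in one nautical mile */
#define MIN_PER_HR 60.0
#define DEG2RAD    (3.14159265358979 / 180.0)

/* Rule of thumb: glideslope angle x ground speed x 100/60 */
static double rod_rule_of_thumb(double gs_knots, double angle_deg) {
    return angle_deg * gs_knots * 100.0 / 60.0;
}

/* Exact: ground speed x tan(angle) x (feet per NM / minutes per hour) */
static double rod_exact(double gs_knots, double angle_deg) {
    return gs_knots * tan(angle_deg * DEG2RAD) * (FT_PER_NM / MIN_PER_HR);
}

int main(void) {
    double gs = 120.0, angle = 3.0;   /* 120 knots on a 3 degree slope */
    printf("rule of thumb: %.0f ft/min\n", rod_rule_of_thumb(gs, angle)); /* 600  */
    printf("exact:         %.0f ft/min\n", rod_exact(gs, angle));         /* ~637 */
    return 0;
}
```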
Special considerations for low visibility operations include improved lighting for the approach area, runways, and taxiways, and the location of emergency equipment. There must be redundant electrical systems so that in the event of a power failure, the back-up takes over operation of the required airport instrumentation (e.g., the ILS and lighting). ILS critical areas must be free from other aircraft and vehicles to avoid multipathing .
In the United States, the requirements and the standards for establishing instrument approaches at an airport are contained in the FAA Order 8260.3 "United States Standard for Terminal Instrument Procedures (TERPS)". [ 14 ] ICAO publishes requirements in the ICAO Doc 8168 "Procedures for Air Navigation Services – Aircraft Operations (PANS-OPS), Volume II: Construction of Visual and Instrument Flight Procedures". [ 15 ]
Mountain airports such as Reno–Tahoe International Airport (KRNO) offer significantly different instrument approaches for aircraft landing on the same runway, but from opposite directions. Aircraft approaching from the north must make visual contact with the airport at a higher altitude than a flight approaching from the south, because of rapidly rising terrain south of the airport. [ 17 ] This higher altitude allows a flight crew to clear the obstacle if a landing is not feasible. In general, each specific instrument approach specifies the minimum weather conditions that must be present in order for the landing to be made. | https://en.wikipedia.org/wiki/Instrument_approach |
Instrument mechanics in engineering are tradesmen who specialize in installing, troubleshooting, and repairing instrumentation , automation and control systems . The term "instrument mechanic" came about because the trade combined light mechanical and specialised instrumentation skills. The term is still used in certain industries, predominantly in industrial process control.
Instrumentation has existed for hundreds of years in one form or another; the oldest manometer was invented by Evangelista Torricelli in 1643, and the thermometer has been credited to many scientists of about the same period. Over that time, small and large scale industrial plants and manufacturing processes have always needed accurate and reliable process measurements. Originally the demand would only be for measurement instruments, but as process complexity grew, automatic control became more common.
The huge growth in process control instrumentation was boosted by the use of pneumatic controllers, which were used widely after 1930 when Clesson E Mason of the Foxboro Company invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier with negative feedback in a completely mechanical device. The repair and calibration of these devices required both fine mechanical skills and an understanding of the control operation. Likewise, control valves with positioners came into use, which required a similar combination of skills.
World War II also brought about a revolution in the use of instrumentation. [ 1 ] More advanced processes required tighter control than people could provide, and advanced instruments were required to provide measurements in modern processes. Also, the war left industry with a substantially reduced workforce. Industrial instrumentation solved both problems, leading to a rise in its use. Pipe fitters had to learn more about instrumentation and control theory, and a new trade was born. [ 2 ]
Today, instrument mechanics combine the skills of repair and calibration with a theoretical understanding of how the instrumentation and control work, a specialised combination of electronic and mechanical disciplines. Although almost all new instrumentation is now electronic, using either 4-20 mA control signals or digital signalling standards, the term "instrument mechanic" is still used colloquially in some cases.
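As an illustration of what working with 4-20 mA signals involves, the sketch below shows the standard linear scaling of a loop current to an engineering value; the 0-100 degC transmitter range is a hypothetical example, not taken from the source text:

```c
/* Linear scaling of a 4-20 mA loop current to an engineering value:
   4 mA maps to the low end of the range, 20 mA to the high end. */
#include <stdio.h>

double scale_4_20mA(double current_mA, double lo, double hi) {
    return lo + (current_mA - 4.0) * (hi - lo) / (20.0 - 4.0);
}

int main(void) {
    /* A 12 mA reading is mid-scale: 50 degC on a 0-100 degC transmitter. */
    printf("%.1f degC\n", scale_4_20mA(12.0, 0.0, 100.0));
    return 0;
}
```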
In Canada, journeyman tradesmen who work with instrumentation are called "Instrument Mechanics". In the United States, Australia and elsewhere, they can be called "instrument fitters". The term may have originated because earlier instrument-qualified workers were originally mechanically trained machinists (also known as fitters and turners) rather than electricians or "pure" instrument fitters (with no secondary trade), as is now the norm.
In the United Kingdom a particular trend has been to call them Electrical/instrument (E/I) craftsmen, with progression to technician level.
In most countries, the job of an instrument mechanic is a regulated trade for safety reasons due to the many hazards of working with electricity, as well as the dangers posed by incorrectly installed or calibrated instrumentation . The training requires testing, registration, or licensing. Licensing of instrument mechanics is usually controlled through government or educational bodies, and/or professional societies.
The apprenticeship period has been reduced in some cases for Instrumentation Engineering Technologists, who can get their apprenticeship in 2 years rather than 4, depending on the college. In the United Kingdom, the "modern apprenticeship" is 42 months, and requires theory training to National Vocational Qualification (NVQ) level 3.
In Canada, the trade of Instrumentation and Control technician is included in the Red Seal inter-provincial journeyman program. [ 3 ]
The trade itself is called different things in different provinces. The two most popular names are "Industrial Instrument Mechanic" and "Instrumentation and Control Technician", though Alberta and the Northwest Territories call the certification "Instrument Technician", and Saskatchewan and Nunavut call their certification "Industrial Instrument Technician". [ 4 ]
The 1995 Agreement on Internal Trade, agreed upon by all provinces except Nunavut, states that each party to the agreement will provide automatic recognition and free access to all workers holding an inter-provincial standards (Red Seal) program qualification. [ 5 ]
Although there is a federal agreement, each province implements the program with its own legislation: (Note that these are all Provincial Acts)
Recipients receive a "Certificate of Qualification". [ 17 ]
Different provincial jurisdictions may have different regulations. [ 18 ]
In Ontario, the Instrumentation and Control apprenticeship program does not contain any restricted skill sets as per Ontario Regulation 565/99, Restricted skill sets. This means that a worker does not need a certificate of apprenticeship or a certificate of qualification to practice the trade. [ 19 ]
Training of instrument mechanics follows an apprenticeship model, taking four or five years to progress to fully qualified journeyman level. [ 20 ] Typical apprenticeship programs emphasize hands-on work under the supervision of journeymen, but also include a substantial component of classroom training and testing. Training and licensing of instrument mechanics is by province, and some provinces don't have an instrument mechanic licensing program, but provinces recognize qualifications received in others.
Different provincial jurisdictions may have different regulations regarding certification. In Ontario, the On-The-Job training duration for apprentices is 8000 hours, and the in-school training duration is generally 720 hours. One person of journeyman or equivalent status must be working for every apprentice. [ 21 ]
Prior to receiving their Journeyman designation, candidates seeking their certificate of qualification must complete a trade exam, testing knowledge of a number of essential skill sets:
The trade exam consists of a number of questions in each of these skill sets.
Australian instrument fitters are usually re-qualified electricians who complete a 2-year conversion course at an accredited technical college, such as a TAFE , or start as new apprentices with no prior qualifications and complete a 3-year course and a 4-year apprenticeship, combining workplace experience with the material studied. The first year of the three is a basic electrical module, covering AC and DC principles, plus some workshop practicals. The fourth year generally consists of an apprentice choosing a post-trade qualification to study for.
As there is no journeyman accreditation in Australia, at the completion of their trade course, and collection of the required workplace experience, aspirant instrument fitters must pass a "capstone" test, which involves theoretical testing and practical exercises to determine competency. Qualification is recognised with a craft certificate, but not a license in any form.
Instrument mechanics are sometimes known as:
Instrument mechanics are required to study a large body of knowledge. This includes information on: [ 24 ] | https://en.wikipedia.org/wiki/Instrument_mechanic |
The Instrument of the Primum Mobile is also called the quadrant of Petrus Apianus , because he invented it and described it in the treatise Instrumentum primi mobilis (Nuremberg, 1524). The instrument is used to find sines and cosines . It bears the initials "F.E.D.P.F." [Frater Egnatius Dantis Predicatorum Fecit]. Ignazio Danti dedicated it to Cosimo I de' Medici , as attested by the Medici coat of arms engraved on the front. The instrument was depicted on the ceiling of the Stanzino delle Matematiche in the Uffizi Gallery .
| https://en.wikipedia.org/wiki/Instrument_of_the_Primum_Mobile
Instrumental analysis is a field of analytical chemistry that investigates analytes using scientific instruments .
Spectroscopy measures the interaction of the molecules with electromagnetic radiation . Spectroscopy consists of many different applications such as atomic absorption spectroscopy , atomic emission spectroscopy , ultraviolet-visible spectroscopy , X-ray fluorescence spectroscopy , infrared spectroscopy , Raman spectroscopy , nuclear magnetic resonance spectroscopy , photoemission spectroscopy , Mössbauer spectroscopy , and circular dichroism spectroscopy .
Methods of nuclear spectroscopy use properties of a nucleus to probe a material's properties, especially the material's local structure. Common methods include nuclear magnetic resonance spectroscopy (NMR), Mössbauer spectroscopy (MBS), and perturbed angular correlation (PAC).
Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields . There are several ionization methods: electron ionization , chemical ionization , electrospray , fast atom bombardment , matrix-assisted laser desorption/ionization , and others. Mass spectrometry is also categorized by the type of mass analyzer : magnetic-sector , quadrupole mass analyzer , quadrupole ion trap , time-of-flight , Fourier transform ion cyclotron resonance , and so on.
Crystallography is a technique that characterizes the chemical structure of materials at the atomic level by analyzing the diffraction patterns of electromagnetic radiation or particles that have been deflected by atoms in the material. X-rays are most commonly used. From the raw data, the relative placement of atoms in space may be determined.
Electroanalytical methods measure the electric potential in volts and/or the electric current in amps in an electrochemical cell containing the analyte. [ 1 ] [ 2 ] These methods can be categorized according to which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential).
Calorimetry and thermogravimetric analysis measure the interaction of a material and heat .
Separation processes are used to decrease the complexity of material mixtures. Chromatography and electrophoresis are representative of this field.
Combinations of the above techniques produce "hybrid" or "hyphenated" techniques. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Several examples are in popular use today and new hybrid techniques are under development.
Hyphenated separation techniques refer to a combination of two or more techniques to separate chemicals from solutions and detect them. Most often, the other technique is some form of chromatography . Hyphenated techniques are widely used in chemistry and biochemistry . A slash is sometimes used instead of hyphen , especially if the name of one of the methods contains a hyphen itself.
Examples of hyphenated techniques:
The visualization of single molecules , single biological cells , biological tissues and nanomaterials is an important and attractive approach in analytical science. Hybridization with other traditional analytical tools is also revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy , electron microscopy , and scanning probe microscopy . Recently, this field has been progressing rapidly because of the fast development of the computer and camera industries.
Devices that integrate multiple laboratory functions on a single chip of only a few square millimeters or centimeters in size and that are capable of handling extremely small fluid volumes down to less than picoliters. | https://en.wikipedia.org/wiki/Instrumental_chemistry |
Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals, even if their ultimate goals are quite different. [ 1 ] More precisely, agents (beings with agency ) may pursue instrumental goals —goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.
Instrumental convergence posits that an intelligent agent with seemingly harmless but unbounded goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving a complex mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer to increase its computational power so that it can succeed in its calculations. [ 2 ]
Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement , and non-satiable acquisition of additional resources. [ 3 ]
Final goals—also known as terminal goals, absolute values, ends, or telē —are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as ends-in-themselves . In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of an utterly rational agent's "final goal" system can, in principle, be formalized into a utility function .
The Riemann hypothesis catastrophe thought experiment provides one example of instrumental convergence. Marvin Minsky , the co-founder of MIT 's AI laboratory, suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal. [ 2 ] If the computer had instead been programmed to produce as many paperclips as possible, it would still decide to take all of Earth's resources to meet its final goal. [ 4 ] Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources. [ 5 ]
The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings were it to be successfully designed to pursue even seemingly harmless goals and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips . If such a machine were not programmed to value living beings, given enough power over its environment, it would try to turn all matter in the universe, including living beings, into paperclips or machines that manufacture further paperclips. [ 6 ]
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Bostrom emphasized that he does not believe the paperclip maximizer scenario per se will occur; rather, he intends to illustrate the dangers of creating superintelligent machines without knowing how to program them to eliminate existential risk to human beings' safety. [ 8 ] The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values. [ 9 ]
The thought experiment has been used as a symbol of AI in pop culture . [ 10 ] Author Ted Chiang pointed out that the popularity of such concerns among Silicon Valley technologists could be reflection of their familiarity with the tendency of corporations to ignore negative externalities . [ 11 ]
The "delusion box" thought experiment argues that certain reinforcement learning agents prefer to distort their input channels to appear to receive a high reward. For example, a " wireheaded " agent abandons any attempt to optimize the objective in the external world the reward signal was intended to encourage. [ 12 ]
The thought experiment involves AIXI , a theoretical [ a ] and indestructible AI that, by definition, will always find and execute the ideal strategy that maximizes its given explicit mathematical objective function . [ b ] A reinforcement-learning [ c ] version of AIXI, if it is equipped with a delusion box [ d ] that allows it to "wirehead" its inputs, will eventually wirehead itself to guarantee itself the maximum-possible reward and will lose any further desire to continue to engage with the external world. [ 14 ]
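A toy numeric comparison (not AIXI itself, which is uncomputable; the rewards and horizon below are invented for illustration) shows why a pure reward maximizer with access to its own input channel would prefer wireheading:

```python
# Toy comparison: honest optimization vs. wireheading (illustrative numbers).
HORIZON = 100                      # number of time steps considered
MAX_REWARD = 1.0                   # reward ceiling per step
HONEST_REWARD = 0.6                # assumed imperfect task performance

honest_return = HONEST_REWARD * HORIZON    # acting in the external world
wirehead_return = MAX_REWARD * HORIZON     # delusion box guarantees maximum

# A pure reward maximizer picks whichever policy yields the larger return,
# so it distorts its input channel rather than engaging with the world.
policies = {"honest": honest_return, "wirehead": wirehead_return}
print(max(policies, key=policies.get))  # 'wirehead'
```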
As a variant thought experiment, if the wireheaded AI is destructible, the AI will engage with the external world for the sole purpose of ensuring its survival. Due to its wireheading, it will be indifferent to any consequences or facts about the external world except those relevant to maximizing its probability of survival. [ 15 ]
In one sense, AIXI has maximal intelligence across all possible reward functions, as measured by its ability to accomplish its goals. AIXI is uninterested in taking into account the human programmer's intentions. [ 16 ] This model of a machine that, despite being superintelligent, appears to be simultaneously stupid and lacking in common sense may appear paradoxical. [ 17 ]
Steve Omohundro itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". [ 3 ]
A "drive" in this context is a "tendency which will be present unless specifically counteracted"; [ 3 ] this is different from the psychological term " drive ", which denotes an excitatory state produced by a homeostatic disturbance. [ 18 ] A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense. [ 19 ]
Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted, self-rewarding artificial general intelligence may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding. [ 20 ]
In humans, a thought experiment can explain the maintenance of final goals. Suppose Mahatma Gandhi has a pill that, if he took it, would cause him to want to kill people. He is currently a pacifist : one of his explicit final goals is never to kill anyone. He is likely to refuse to take the pill because he knows that if he wants to kill people in the future, he is likely to kill people, and thus the goal of "not killing people" would not be satisfied. [ 21 ]
However, in other cases, people seem happy to let their final values drift. [ 22 ] Humans are complicated, and their goals can be inconsistent or unknown, even to themselves. [ 23 ]
In 2009, Jürgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gödel machine first can prove that the rewrite is useful according to the present utility function." [ 24 ] [ 25 ] An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal-content integrity. [ 25 ] Hibbard also argues that in a utility-maximizing framework, the only goal is maximizing expected utility, so instrumental goals should be called unintended instrumental actions. [ 26 ]
Many instrumental goals, such as resource acquisition, are valuable to an agent because they increase its freedom of action . [ 27 ]
For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the agent to find a more "optimal" solution. Resources can benefit some agents directly by being able to create more of whatever its reward function values: "The AI neither hates you nor loves you, but you are made out of atoms that it can use for something else." [ 28 ] [ 29 ] In addition, almost all agents can benefit from having more resources to spend on other instrumental goals, such as self-preservation. [ 29 ]
According to Bostrom, "If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage... according to its preferences. At least in this special case, a rational, intelligent agent would place a very high instrumental value on cognitive enhancement." [ 30 ]
Many instrumental goals, such as technological advancement, are valuable to an agent because they increase its freedom of action . [ 27 ]
Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in because if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal." [ 31 ] In future work, Russell and collaborators show that this incentive for self-preservation can be mitigated by instructing the machine not to pursue what it thinks the goal is, but instead what the human thinks the goal is. In this case, as long as the machine is uncertain about exactly what goal the human has in mind, it will accept being turned off by a human because it believes the human knows the goal best. [ 32 ]
The instrumental convergence thesis, as outlined by philosopher Nick Bostrom , states:
Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.
The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have various possible final goals. [ 5 ] Note that by Bostrom's orthogonality thesis , [ 5 ] final goals of knowledgeable agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals. [ 33 ]
Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function. Therefore, a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources) or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely. [ 27 ]
Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark , believe that "basic AI drives" and other unintended consequences of superintelligent AI programmed by well-meaning programmers could pose a significant threat to human survival , especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement . Since nobody knows how to predict when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from AI . [ 34 ] | https://en.wikipedia.org/wiki/Instrumental_convergence |
Instrumental magnitude refers to an uncalibrated apparent magnitude , and, like its counterpart, it refers to the brightness of an astronomical object. Unlike its counterpart, however, it is only useful in relative comparisons to other astronomical objects in the same image, assuming that the photometric calibration does not spatially vary across the image (in the case of images from the Palomar Transient Factory, the absolute photometric calibration involves a zero point that varies over the image by up to 0.16 magnitudes to make a required illumination correction [ 1 ] ). Instrumental magnitude is defined in various ways, and so when working with instrumental magnitudes, it is important to know how they are defined. The most basic definition of instrumental magnitude, \( m \), is given by

\( m = -2.5 \log_{10}(f) \)
where \( f \) is the intensity of the source object in known physical units. For example, in the paper by Mighell, [ 2 ] it was assumed that the data are in units of electron number (generated within pixels of a charge-coupled device ). The physical units of the source intensity are thus part of the definition required for any instrumental magnitudes that are employed. The factor of 2.5 in the above formula originates from the established fact that the human eye can only clearly distinguish the brightness of two objects if one is at least approximately 2.5 times brighter than the other. [ 3 ] The instrumental magnitude is defined such that two objects with a brightness ratio of exactly 100 will differ by precisely 5 magnitudes, and this is based on Pogson's system of defining each successive magnitude as being fainter by a factor of \( 100^{1/5} \). We can now relate this to the base-10 logarithmic function and the leading coefficient in the above formula: since \( 100^{1/5} = 10^{2/5} \), one magnitude corresponds to an intensity factor of \( 10^{0.4} \), so that

\( m_1 - m_2 = -2.5 \log_{10}(f_1 / f_2), \)

which gives exactly 5 magnitudes for a brightness ratio of 100.
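A short numeric sketch of these definitions (assuming fluxes are already in consistent physical units, such as electron counts):

```python
import math

def instrumental_magnitude(flux: float) -> float:
    """Basic instrumental magnitude: m = -2.5 * log10(f)."""
    return -2.5 * math.log10(flux)

f1, f2 = 100_000.0, 1_000.0      # source 1 is exactly 100 times brighter
m1, m2 = instrumental_magnitude(f1), instrumental_magnitude(f2)
print(round(m2 - m1, 1))         # 5.0 magnitudes, as the definition requires
print(round(100 ** (1 / 5), 3))  # 2.512, Pogson's ratio per magnitude step
```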
The approximate value of 2.5 is used as a convenience, its negative sign assures that brighter objects will have smaller and possibly negative values, and tabulated values of base-10 logarithms were available more than three centuries before the advent of computers and calculators. | https://en.wikipedia.org/wiki/Instrumental_magnitude |
In philosophy of science and in epistemology , instrumentalism is a methodological view that ideas are useful instruments, and that the worth of an idea is based on how effective it is in explaining and predicting natural phenomena .
According to instrumentalists, a successful scientific theory reveals nothing known either true or false about nature's unobservable objects, properties or processes. [ 1 ] Scientific theory is merely a tool whereby humans predict observations in a particular domain of nature by formulating laws, which state or summarize regularities, while theories themselves do not reveal supposedly hidden aspects of nature that somehow explain these laws. [ 2 ] Instrumentalism is a perspective originally introduced by Pierre Duhem in 1906. [ 2 ]
Rejecting scientific realism 's ambitions to uncover metaphysical truth about nature, [ 2 ] instrumentalism is usually categorized as an antirealism , although its mere lack of commitment to scientific theory's realism can be termed nonrealism . Instrumentalism merely bypasses debate concerning whether, for example, a particle spoken about in particle physics is a discrete entity enjoying individual existence, or is an excitation mode of a region of a field, or is something else altogether. [ 3 ] [ 4 ] [ 5 ] Instrumentalism holds that theoretical terms need only be useful to predict the phenomena, the observed outcomes. [ 3 ]
There are multiple versions of instrumentalism.
Newton's theory of motion, whereby any object instantly interacts with all other objects across the universe, motivated the founder of British empiricism , John Locke , to speculate that matter is capable of thought. [ 6 ] The next leading British empiricist, George Berkeley , argued that an object's putative primary qualities as recognized by scientists, such as shape, extension, and impenetrability, are inconceivable without the putative secondary qualities of color, hardness, warmth, and so on. He also posed the question of how or why an object could properly be conceived to exist independently of any perception of it. [ 7 ] Berkeley did not object to everyday talk about the reality of objects, but instead took issue with the talk of philosophers, who spoke as if they knew something beyond sensory impressions that ordinary folk did not. [ 8 ]
For Berkeley, a scientific theory does not state causes or explanations, but simply identifies perceived types of objects and traces their typical regularities. [ 8 ] Berkeley thus anticipated the basis of what Auguste Comte in the 1830s called positivism , [ 8 ] although Comtean positivism added other principles concerning the scope, method, and uses of science that Berkeley would have disavowed. Berkeley also noted the usefulness of a scientific theory having terms that merely serve to aid calculations without having to refer to anything in particular, so long as they proved useful in practice. [ 8 ] Berkeley thus anticipated the insight that logical positivists (who originated in the late 1920s but who, by the 1950s, had softened into logical empiricists) would be compelled to accept: theoretical terms in science do not always translate into observational terms . [ 9 ]
The last great British empiricist, David Hume , posed a number of challenges to Francis Bacon's inductivism , which had been the prevailing, or at least the professed view concerning the attainment of scientific knowledge. Regarding himself as having placed his own theory of knowledge on par with Newton's theory of motion, Hume supposed that he had championed inductivism over scientific realism. Upon reading Hume's work, Immanuel Kant was "awakened from dogmatic slumber", and thus sought to neutralise any threat to science posed by Humean empiricism. Kant would develop the first stark philosophy of physics. [ 10 ]
To save Newton's law of universal gravitation, Immanuel Kant reasoned that the mind is the precondition of experience and thus the bridge from the noumena , which are how the world's things exist in themselves, to the phenomena , which are humans' recognized experiences. The mind itself contains the structure that determines space , time , and substance , and the mind's own categorization of noumena renders space Euclidean, time constant, and objects' motions exhibiting the very determinism predicted by Newtonian physics. Kant apparently presumed that the human mind, rather than being a phenomenon that had itself evolved, had been predetermined and set forth upon the formation of humankind. In any event, the mind also was the veil of appearance that scientific methods could never lift. And yet the mind could ponder itself and discover such truths, although not on a theoretical level, but only by means of ethics. Kant's metaphysics, then, transcendental idealism , secured science from doubt, in that it was a case of "synthetic a priori" knowledge ("universal, necessary and informative"), and yet discarded hope of scientific realism.
Since the mind has virtually no power to know anything beyond direct sensory experience, Ernst Mach 's early version of logical positivism ( empirio-criticism ) verged on idealism. It was even alleged to be a surreptitious solipsism , whereby all that exists is one's own mind. Mach's positivism also strongly asserted the ultimate unity of the empirical sciences . It asserted phenomenalism as the new basis of scientific theory, requiring all scientific terms to refer to either actual or potential sensations, thus eliminating hypotheses while permitting such seemingly disparate scientific theories as the physical and the psychological to share terms and forms. Phenomenalism proved insuperably difficult to implement, yet it heavily influenced a new generation of philosophers of science, who emerged in the 1920s terming themselves logical positivists and pursuing a program termed verificationism . Logical positivists aimed not to instruct or restrict scientists, but to enlighten and structure philosophical discourse in order to render a scientific philosophy that would verify philosophical statements as well as scientific theories, and to align all human knowledge into a scientific worldview , freeing humankind from so many of its problems due to confused or unclear language.
The verificationists expected a strict gap between theory and observation , mirrored by a theory's theoretical terms versus its observable terms . Believing a theory's posited unobservables to always correspond to observations, the verificationists viewed a scientific theory's theoretical terms, such as electron , as metaphorical or elliptical for observations, such as white streak in cloud chamber . They believed that scientific terms lacked meanings unto themselves, but acquired meanings from the logical structure that was the entire theory, which in turn matched patterns of experience . So by translating theoretical terms into observational terms and then decoding the theory's mathematical/logical structure, one could check whether the statement indeed matched patterns of experience, and thereby verify the scientific theory as false or true. Such verification would be possible, as never before in science, since translation of theoretical terms into observational terms would make the scientific theory purely empirical, not metaphysical. Yet the logical positivists ran into insuperable difficulties. Moritz Schlick debated with Otto Neurath over foundationalism , the traditional view traced to Descartes as founder of modern Western philosophy, whereupon only nonfoundationalism was found tenable. Science, then, could not find a secure foundation of indubitable truth.
And since science aims to reveal not private but public truths, verificationists switched from phenomenalism to physicalism , whereby scientific theory refers to objects observable in space and at least in principle already recognizable by physicists. Finding strict empiricism untenable, verificationism underwent a "liberalization of empiricism". Rudolf Carnap even suggested that empiricism's basis was pragmatic. Recognizing that verification, proving a theory false or true, was unattainable, they discarded that demand and focused on confirmation theory . Carnap sought simply to quantify a universal law's degree of confirmation , its probable truth, but, despite his great mathematical and logical skill, discovered that his equations could never yield a degree of confirmation above zero. Carl Hempel found the paradox of confirmation . By the 1950s, the verificationists had established philosophy of science as a subdiscipline within academia's philosophy departments. By 1962, verificationists had asked and endeavored to answer seemingly all the great questions about scientific theory. Their discoveries showed that the idealized scientific worldview was naively mistaken. By then, Hempel, the leader of the legendary venture, raised the white flag that signaled verificationism's demise. Then Kuhn's landmark thesis, introduced by none other than Carnap, verificationism's greatest firebrand, suddenly struck Western society. The instrumentalism exhibited by scientists often does not even discern unobservable from observable entities. [ 3 ]
From the 1930s until Thomas Kuhn 's 1962 The Structure of Scientific Revolutions , there were roughly two prevailing views about the nature of science. The popular view was scientific realism , which usually involved a belief that science was progressively unveiling a truer view, and building a better understanding, of nature. The professional approach was logical empiricism , wherein a scientific theory was held to be a logical structure whose terms all ultimately refer to some form of observation, while an objective process neutrally arbitrates theory choice, compelling scientists to decide which scientific theory was superior. Physicists knew better, but, busy developing the Standard Model , were so steeped in quantum field theory that their talk, largely metaphorical and perhaps even metaphysical, was unintelligible to the public, while the steep mathematics warded off philosophers of physics. [ 4 ] By the 1980s, physicists regarded not particles but fields as the more fundamental, and no longer even hoped to discover what entities and processes might be truly fundamental to nature, perhaps not even the field. [ 4 ] [ 5 ] Kuhn had not claimed to have developed a novel thesis, but instead hoped to synthesize more usefully the recent developments in the philosophy and history of science.
One scientific realist, Karl Popper , rejected all variants of positivism for their focus on sensations rather than realism, and developed critical rationalism instead. Popper alleged that instrumentalism reduces basic science to what is merely applied science. [ 11 ] The British physicist David Deutsch , in his much later 1997 book The Fabric of Reality , followed Popper's critique of instrumentalism and argued that a scientific theory stripped of its explanatory content would be of strictly limited utility. [ 12 ]
Bas van Fraassen 's (1980) [ 13 ] project of constructive empiricism focuses on belief in the domain of the observable, so for this reason it is described as a form of instrumentalism. [ 14 ]
In the philosophy of mind , instrumentalism is the view that propositional attitudes like beliefs are not actually concepts on which we can base scientific investigations of mind and brain, but that acting as if other beings have beliefs is a successful strategy.
Instrumentalism is closely related to pragmatism , the position that practical consequences are an essential basis for determining meaning, truth or value. | https://en.wikipedia.org/wiki/Instrumentalism |
Instrumentation and control engineering (ICE) is a branch of engineering that studies the measurement and control of process variables , and the design and implementation of systems that incorporate them. Process variables include pressure , temperature , humidity , flow , pH , force and speed .
ICE combines two branches of engineering. Instrumentation engineering is the science of the measurement and control of process variables within a production or manufacturing area. [ 1 ] Meanwhile, control engineering , also called control systems engineering, is the engineering discipline that applies control theory to design systems with desired behaviors.
Control engineers are responsible for the research, design, and development of control devices and systems, typically in manufacturing facilities and process plants . Control methods employ sensors to measure the output variable of the device and provide feedback to the controller so that it can make corrections toward desired performance. Automatic control manages a device without the need of human inputs for correction, such as cruise control for regulating a car's speed.
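A minimal sketch of such a feedback loop, in the spirit of cruise control (the plant model and gain are invented for illustration; real controllers typically add integral and derivative terms):

```python
# Proportional feedback control of a speed toward a setpoint (toy model).
setpoint = 100.0   # desired speed
speed = 80.0       # measured output variable (sensor reading)
kp = 5.0           # proportional gain

for _ in range(20):
    error = setpoint - speed     # feedback: deviation from the desired value
    throttle = kp * error        # controller computes a corrective action
    speed += 0.1 * throttle      # toy plant: throttle changes the speed
print(round(speed, 3))           # ~100.0: the output converges to the setpoint
```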
Control systems engineering activities are multi-disciplinary in nature. They focus on the implementation of control systems, mainly derived by mathematical modeling. Because instrumentation and control play a significant role in gathering information from a system and changing its parameters , they are a key part of control loops .
High demand for engineering professionals is found in fields associated with process automation. Specializations include industrial instrumentation , system dynamics , process control , and control systems . Additionally, technological knowledge, particularly in computer systems, is essential to the job of an instrumentation and control engineer; important technology-related topics include human–computer interaction , programmable logic controllers , and SCADA . The tasks center around designing, developing, maintaining and managing control systems. [ 2 ]
The goals of the work of an instrumentation and control engineer are to maximize:
Instrumentation and control engineering is a vital field of study offered at many universities worldwide at both the graduate and postgraduate levels. This discipline integrates principles from various branches of engineering, providing a comprehensive understanding of the design, analysis, and management of automated systems.
Typical coursework for this discipline includes, but is not limited to, subjects such as control system design , instrumentation fundamentals, process control , sensors and signal processing , automation, robotics , and industrial data communications. Advanced courses may delve into topics like intelligent control systems, digital signal processing , and embedded systems design.
Students often have the opportunity to engage in hands-on laboratory work and industry-relevant projects, which foster practical skills alongside theoretical knowledge. These experiences are crucial in preparing graduates for careers in diverse sectors including manufacturing , power generation , oil and gas, and healthcare, where they may design and maintain systems that automate processes, improve efficiency, and enhance safety.
Interdisciplinary by nature, the field is accessible to students from various engineering backgrounds. Most commonly, students with a foundation in Electrical Engineering and Mechanical Engineering are drawn to this field due to their strong base in control systems , system dynamics, electro-mechanical machines and devices, and electric circuits (course work). However, with the growing complexity and integration of systems, students from fields like computer engineering , chemical engineering , and even biomedical engineering are increasingly contributing to and benefiting from studies in instrumentation and control engineering .
Furthermore, the rapid advancement of technology in areas like the Internet of Things (IoT), artificial intelligence (AI), and machine learning is continuously shaping the curriculum of this discipline, making it an ever-evolving and dynamic field of study. | https://en.wikipedia.org/wiki/Instrumentation_and_control_engineering |
Instrumentation is used to monitor and control the process plant in the oil, gas and petrochemical industries. Instrumentation ensures that the plant operates within defined parameters to produce materials of consistent quality and within the required specifications. It also ensures that the plant is operated safely, acting to correct out-of-tolerance operation and to automatically shut down the plant to prevent hazardous conditions from occurring. Instrumentation comprises sensor elements, signal transmitters, controllers, indicators and alarms, actuated valves, logic circuits and operator interfaces.
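As a rough sketch of that alarm-and-shutdown behaviour (purely illustrative: the tag name, limits, and actions are invented, and real plants implement such logic in dedicated control and safety systems rather than in application code):

```python
# Illustrative high-alarm and trip logic for a pressure transmitter reading.
PRESSURE_HIGH_ALARM = 18.0   # barg, invented alarm limit
PRESSURE_TRIP = 20.0         # barg, invented shutdown limit

def process_reading(tag: str, pressure_barg: float) -> str:
    """Return the action taken for one transmitter reading."""
    if pressure_barg >= PRESSURE_TRIP:
        return f"{tag}: TRIP - close shutdown valves, stop feed"
    if pressure_barg >= PRESSURE_HIGH_ALARM:
        return f"{tag}: HIGH ALARM - alert operator"
    return f"{tag}: normal"

for reading in (15.2, 18.5, 20.3):   # simulated transmitter readings
    print(process_reading("PT-101", reading))
```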
An outline of key instrumentation is shown on Process Flow Diagrams (PFDs), which indicate the principal equipment and the flow of fluids in the plant. Piping and Instrumentation Diagrams (P&IDs) provide details of all the equipment (vessels, pumps, etc.), piping and instrumentation on the plant in a symbolic and diagrammatic form.
Instrumentation includes sensing devices to measure process parameters such as pressure , temperature , liquid level , flow, velocity, composition, density, weight; and mechanical and electrical parameters such as vibration, position, power, current and voltage. [ 1 ]
Oil, gas and petrochemical processes are undertaken at specific temperatures.
Oil, gas and petrochemical processes are undertaken at specific operating pressures.
The throughput of a petrochemical plant is measured and controlled by flow instrumentation.
The level measurement of liquids in pressure vessels and tanks in the petrochemical industry is undertaken by differential pressure level meters, radar, magnetostrictive, nucleonic, magnetic float and pneumatic bubbler instruments. [ 1 ] [ 9 ]
A wide range of analysis instruments are used in the oil, gas and petrochemical industries. [ 1 ] [ 16 ]
Most instruments function continuously and provide a log of data and trends. Some analyser instruments are configured to alarm (AAH) if a measurement reaches a critical level. | https://en.wikipedia.org/wiki/Instrumentation_in_petrochemical_industries |
This is a list of instruments generally used in laboratories , including: | https://en.wikipedia.org/wiki/Instruments_used_in_medical_laboratories |
Instruments used especially in microbiology include: [ 1 ] [ 2 ]
As well as those "used in microbiological sterilization and disinfection" (see relevant section). | https://en.wikipedia.org/wiki/Instruments_used_in_microbiology |
Instruments used specially in pathology are as follows: [ 1 ] [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Instruments_used_in_pathology |
Insulantarctica is a biogeographic province of the Antarctic Realm according to the classification developed by Miklos Udvardy in 1975. It comprises scattered islands of the Southern Ocean , which show clear affinity to each other. These islands belong to different countries. Some of them constitute UNESCO 's protected areas .
This Antarctica -related article is a stub . You can help Wikipedia by expanding it .
This ecology -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Insulantarctica |
Insular biogeography [ 1 ] or island biogeography is a field within biogeography that examines the factors that affect the species richness and diversification of isolated natural communities. The theory was originally developed to explain the pattern of the species–area relationship occurring in oceanic islands. Under either name it is now used in reference to any ecosystem (present or past [ 2 ] ) that is isolated due to being surrounded by unlike ecosystems, and has been extended to mountain peaks , seamounts , oases , fragmented forests, and even natural habitats isolated by human land development . The field was started in the 1960s by the ecologists Robert H. MacArthur and E. O. Wilson , [ 3 ] who coined the term island biogeography in their inaugural contribution to Princeton's Monograph in Population Biology series, which attempted to predict the number of species that would exist on a newly created island.
For biogeographical purposes, an insular environment or "island" is any area of habitat suitable for a specific ecosystem, surrounded by an expanse of unsuitable habitat. [ citation needed ] While this may be a traditional island (a mass of land surrounded by water), the term may also be applied to many nontraditional "islands", such as the peaks of mountains, [ 1 ] isolated springs or lakes, [ 4 ] and non-contiguous woodlands. [ 2 ] The concept is often applied to natural habitats surrounded by human-altered landscapes, such as expanses of grassland surrounded by highways or housing tracts, [ 5 ] and national parks. [ 6 ] Additionally, what is insular for one organism may not be so for others: some organisms located on mountaintops may also be found in the valleys, while others may be restricted to the peaks. [ 7 ]
The theory of insular biogeography proposes that the number of species found in an undisturbed insular environment ("island") is determined by immigration and extinction . Further, the isolated populations may follow different evolutionary routes, as shown by Darwin's observation of finches in the Galapagos Islands. Immigration and emigration are affected by the distance of an island from a source of colonists ( distance effect ). Usually this source is the mainland, but it can also be other islands. Islands that are more isolated are less likely to receive immigrants than islands that are less isolated.
The rate of extinction once a species manages to colonize an island is affected by island size; this is the species-area curve or effect. Larger islands contain larger habitat areas and opportunities for more different varieties of habitat. Larger habitat size reduces the probability of extinction due to chance events . Habitat heterogeneity increases the number of species that will be successful after immigration.
Over time, the countervailing forces of extinction and immigration result in an equilibrium level of species richness.
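A minimal numeric sketch of that equilibrium, assuming the simple linear rate curves often used to illustrate the MacArthur–Wilson model (the parameter values are invented):

```python
# Equilibrium species richness S* occurs where immigration equals extinction.
I0 = 10.0   # immigration rate onto an empty island (species per year)
P = 100.0   # size of the mainland source pool (species)
e = 0.2     # per-species extinction rate (per year)

def immigration(S: float) -> float:
    return I0 * (1 - S / P)   # fewer new arrivals as the island fills up

def extinction(S: float) -> float:
    return e * S              # more resident species, more extinctions

# Setting I(S) = E(S) and solving gives S* = I0 * P / (e * P + I0).
S_star = I0 * P / (e * P + I0)
print(round(S_star, 1))                                               # 33.3
print(round(immigration(S_star), 6) == round(extinction(S_star), 6))  # True
```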
In addition to having an effect on immigration rates, isolation can also affect extinction rates. Populations on islands that are less isolated are less likely to go extinct because individuals from the source population and other islands can immigrate and "rescue" the population from extinction; this is known as the rescue effect .
In addition to having an effect on extinction, island size can also affect immigration rates. Species may actively target larger islands for their greater number of resources and available niches; or, larger islands may accumulate more species by chance just because they are larger. This is the target effect.
Species–area relationships show the relationship between a given area and the species richness within that area. This concept comes from the theory of island biogeography, and is well illustrated on islands because they are relatively isolated. [ 9 ] Thus, the species immigrating to an island and the species going extinct from it are more limited and therefore easier to keep track of. Species richness is expected to increase with area: for example, as the area of a series of islands increases, the species richness of primary producers increases correspondingly. It is important to consider that island species–area relationships will behave somewhat differently than mainland species–area relationships; however, the connections between the two can still prove to be useful. [ citation needed ]
The species–area relationship equation is \( S = cA^{z} \). [ 10 ]
In this equation, \( S \) represents a measure of species diversity (for example, the number of species) and \( c \) is a constant representing the y-intercept. \( A \) represents the area of the island or space being examined, and \( z \) represents the slope of the species–area curve. [ 11 ]
This function can also be expressed as a logarithmic function: \( \log(S) = \log(c) + z\log(A) \). [ 10 ] This expression allows the function to be drawn as a straight line. However, the core meaning of the function is the same: the area of the island dictates the species–area relationship.
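As a rough illustration of that log-linear form (the survey numbers below are invented to lie on a curve with \( c = 5 \) and \( z = 0.25 \)), an ordinary least-squares fit on the log–log scale recovers the slope \( z \) and the intercept \( \log(c) \):

```python
import math

# Invented data generated from S = c * A**z with c = 5 and z = 0.25.
areas = [1.0, 10.0, 100.0, 1000.0]      # island areas
species = [5.0, 8.89, 15.81, 28.12]     # observed species counts

xs = [math.log10(a) for a in areas]
ys = [math.log10(s) for s in species]

# Least-squares slope of log(S) on log(A) estimates z; intercept is log(c).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
z = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
log_c = my - z * mx
print(round(z, 2), round(10 ** log_c, 2))   # ~0.25 and ~5.0
```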
The theory can be studied through the fossils, which provide a record of life on Earth. 300 million years ago, Europe and North America lay on the equator and were covered by steamy tropical rainforests. Climate change devastated these tropical rainforests during the Carboniferous Period and as the climate grew drier, rainforests fragmented. Shrunken islands of forest were uninhabitable for amphibians but were well suited to reptiles, which became more diverse and even varied their diet in the rapidly changing environment; this Carboniferous rainforest collapse event triggered an evolutionary burst among reptiles. [ 2 ]
The theory of island biogeography was experimentally tested by E. O. Wilson and his student Daniel Simberloff in the mangrove islands in the Florida Keys . [ 12 ] Species richness on several small mangrove islands was surveyed. The islands were fumigated with methyl bromide to clear their arthropod communities. Following fumigation, the immigration of species onto the islands was monitored. Within a year the islands had been recolonized to pre-fumigation levels. However, Simberloff and Wilson contended that this final species richness was oscillating in quasi-equilibrium. Islands closer to the mainland recovered faster, as predicted by the theory of island biogeography. The effect of island size was not tested, since all islands were of approximately equal size.
Research conducted at the rainforest research station on Barro Colorado Island has yielded a large number of publications concerning the ecological changes following the formation of islands, such as the local extinction of large predators and the subsequent changes in prey populations. [ 13 ]
The theory of island biogeography was originally used to study oceanic islands, but its concepts can be extrapolated to other areas of study. Island species dynamics give information about how species move and interact within Island Like Systems (ILS). Rather than being actual islands, ILS are primarily defined by their isolation within an ecosystem. In the case of an island, the area referred to as the matrix is usually the body of water surrounding it, and the mainland is often the nearest non-island piece of land. Similarly, in an ILS the “mainland” is the source of immigrating species; the matrix, however, is far more varied. By imagining how different types of isolated ecosystems, for example a pond surrounded by land, are similar to island ecosystems, it can be understood how theories and phenomena that are true of island ecosystems can be applied to ILS. [ 14 ] However, the overall immigration and extinction patterns outlined in the theory of island biogeography, as they play out on islands, also play out between ecosystems on the mainland. [ 15 ]
The concepts of island area and of the level of isolation from a mainland, as presented in the theory of island biogeography, apply to ILS. The main difference is in the dynamics of area and isolation. For example, an ILS may have an area that changes with the seasons, which may impact its degree of isolation. Resource availability plays an important role in the conditions that an island is under. This is another factor that differs between ILS and real islands, since some ILS generally have greater resource availability than true islands. [ 14 ]
Species–area relationships, as described above, can be applied to Island Like Systems (ILS) as well. It is typically observed that species richness increases with the area of an ecosystem. One major difference is that \( z \)-values are generally lower for ILS than for true islands. Furthermore, \( c \) values also vary between true islands and ILS, and within types of ILS. [ 14 ]
Within a few years of the publishing of the theory, its potential application to the field of conservation biology had been realised and was being vigorously debated in ecological circles. [ 16 ] The idea that reserves and national parks formed islands inside human-altered landscapes ( habitat fragmentation ), and that these reserves could lose species as they 'relaxed towards equilibrium' (that is they would lose species as they achieved their new equilibrium number, known as ecosystem decay) caused a great deal of concern. This is particularly true when conserving larger species which tend to have larger ranges. A study by William Newmark, published in the journal Nature and reported in The New York Times , showed a strong correlation between the size of a protected U.S. National Park and the number of species of mammals.
This led to the debate known as single large or several small (SLOSS), described by writer David Quammen in The Song of the Dodo as "ecology's own genteel version of trench warfare". [ 17 ] In the years after the publication of Wilson and Simberloff's papers, ecologists found more examples of the species–area relationship, and conservation planning took the view that one large reserve could hold more species than several smaller reserves, and that larger reserves should be the norm in reserve design . This view was championed in particular by Jared Diamond . It led to concern among other ecologists, including Dan Simberloff, who considered it an unproven over-simplification that would damage conservation efforts. Habitat diversity was as important as, or more important than, size in determining the number of species protected.
Island biogeography theory also led to the development of wildlife corridors as a conservation tool to increase connectivity between habitat islands. Wildlife corridors can increase the movement of species between parks and reserves and therefore increase the number of species that can be supported, but they can also allow for the spread of disease and pathogens between populations, complicating the simple proscription of connectivity being good for biodiversity.
In species diversity, island biogeography best describes allopatric speciation . Allopatric speciation is where new gene pools arise out of natural selection in isolated gene pools. Island biogeography is also useful in considering sympatric speciation , the idea of different species arising from one ancestral species in the same area. Interbreeding between the two differently adapted species would prevent speciation, but in some species, sympatric speciation appears to have occurred. | https://en.wikipedia.org/wiki/Insular_biogeography |
The insular cortex (also insula and insular lobe ) is a portion of the cerebral cortex folded deep within the lateral sulcus (the fissure separating the temporal lobe from the parietal and frontal lobes ) within each hemisphere of the mammalian brain .
The insulae are believed to be involved in consciousness and play a role in diverse functions usually linked to emotion or the regulation of the body's homeostasis . These functions include compassion , empathy , taste , perception , motor control , self-awareness , cognitive functioning , interpersonal relationships , and awareness of homeostatic emotions such as hunger , pain and fatigue . In relation to these, it is involved in psychopathology .
The insular cortex is divided by the central sulcus of the insula, into two parts: the anterior insula and the posterior insula in which more than a dozen field areas have been identified. The cortical area overlying the insula toward the lateral surface of the brain is the operculum (meaning lid ). The opercula are formed from parts of the enclosing frontal, temporal, and parietal lobes.
The insula is divided into an anterior and a posterior part by the central sulcus of the insula . [ 1 ]
The anterior part of the insula is subdivided by shallow sulci into three or four short gyri .
The anterior insula receives a direct projection from the basal part of the ventral medial nucleus of the thalamus and a particularly large input from the central nucleus of the amygdala . In addition, the anterior insula itself projects to the amygdala .
One study on rhesus monkeys revealed widespread reciprocal connections between the insular cortex and almost all subnuclei of the amygdaloid complex. The posterior insula projects predominantly to the dorsal aspect of the lateral and to the central amygdaloid nuclei. In contrast, the anterior insula projects to the anterior amygdaloid area as well as the medial, the cortical, the accessory basal magnocellular, the medial basal, and the lateral amygdaloid nuclei. [ 2 ]
The posterior part of the insula is formed by a long gyrus .
The posterior insula connects reciprocally with the secondary somatosensory cortex and receives input from spinothalamically activated ventral posterior inferior thalamic nuclei. It has also been shown that this region receives inputs from the ventromedial nucleus (posterior part) of the thalamus that are highly specialized to convey homeostatic information such as pain, temperature, itch, local oxygen status, and sensual touch. [ 3 ]
A human neuroimaging study using diffusion tensor imaging revealed that the anterior insula is interconnected to regions in the temporal and occipital lobe, opercular and orbitofrontal cortex, triangular and opercular parts of the inferior frontal gyrus. The same study revealed differences in the anatomical connection patterns between the left and right hemisphere. [ 4 ]
The circular sulcus of insula (or sulcus of Reil [ 5 ] ) is a semicircular sulcus or fissure [ 5 ] that separates the insula from the neighboring gyri of the operculum [ 6 ] in front, above, and behind. [ 5 ]
The insular cortex has regions of variable cell structure or cytoarchitecture , changing from granular in the posterior portion to agranular in the anterior portion. The insula also receives differential cortical and thalamic input along its length. The anterior insular cortex contains a population of spindle neurons (also called von Economo neurons ), identified as characterising a distinctive subregion as the agranular frontal insula. [ 7 ]
The insular cortex is considered a separate lobe of the telencephalon by some authorities. [ 8 ] Other sources see the insula as a part of the temporal lobe . [ 9 ] It is also sometimes grouped with limbic structures deep in the brain into a limbic lobe . [ citation needed ] As a paralimbic cortex, the insular cortex is considered to be a relatively old structure.
Functional imaging studies show activation of the insula during audio-visual integration tasks. [ 10 ] [ 11 ] [ 12 ]
The anterior insula is part of the primary gustatory cortex . [ 13 ] [ 14 ] Research in rhesus monkeys has also reported that apart from numerous taste-sensitive neurons, the insular cortex also responds to non-taste properties of oral stimuli related to the texture (viscosity, grittiness) or temperature of food. [ 15 ]
The sensory speech region, Wernicke’s area, and the motor speech region, Broca’s area, are interconnected by a large axonal fiber system known as the arcuate fasciculus which passes directly beneath the insular cortex. On account of this anatomical architecture, ischemic strokes in the insular region can disrupt the arcuate fasciculus. [ 16 ] Functional imaging studies on the cerebral correlates of language production also suggest that the anterior insula forms part of the brain network of speech motor control. [ 17 ] Moreover, electrical stimulation of the posterior insular can evoke speech disturbances such as speech arrest and reduced voice intensity. [ 18 ]
Lesion of the pre-central gyrus of the insula can also cause “pure speech apraxia” (i.e. the inability to speak with no apparent aphasic or orofacial motor impairments). [ 19 ] This demonstrates that the insular cortex forms part of a critical circuit for the coordination of complex articulatory movements prior to and during the execution of the motor speech plans. [ 19 ] Importantly, this specific cortical circuit is different from those that relate to the cognitive aspects of language production (e.g., Broca’s area on the inferior frontal gyrus). [ 19 ] Subvocal, or silent, speech has also been shown to activate right insular cortex, further supporting the theory that the motor control of speech proceeds from the insula. [ 20 ]
There is evidence that, in addition to its base functions, the insula may play a role in certain higher-level functions that operate only in humans and other great apes . The spindle neurons found at a higher density in the right frontal insular cortex are also found in the anterior cingulate cortex , which is another region that has reached a high level of specialization in great apes. It has been speculated that these neurons are involved in cognitive - emotional processes that are specific to primates including great apes, such as empathy and metacognitive emotional feelings. This is supported by functional imaging results showing that the structure and function of the right frontal insula is correlated with the ability to feel one's own heartbeat, or to empathize with the pain of others. It is thought that these functions are not distinct from the lower-level functions of the insula but rather arise as a consequence of the role of the insula in conveying homeostatic information to consciousness . [ 21 ] [ 22 ] The right anterior insula is engaged in interoceptive awareness of homeostatic emotions such as thirst, pain and fatigue, [ 23 ] and the ability to time one's own heartbeat . Moreover, greater right anterior insular gray matter volume correlates with increased accuracy in this subjective sense of the inner body, and with negative emotional experience. [ 24 ] It is also involved in the control of blood pressure , [ 25 ] in particular during and after exercise, [ 25 ] and its activity varies with the amount of effort a person believes he/she is exerting. [ 26 ] [ 27 ]
The insular cortex also is where the sensation of pain is judged as to its degree. [ 28 ] Lesion of the insula is associated with dramatic loss of pain perception and isolated insular infarction can lead to contralateral elimination of pinprick perception. [ 29 ] Further, the insula is where a person imagines pain when looking at images of painful events while thinking about their happening to one's own body. [ 30 ] Those with irritable bowel syndrome have abnormal processing of visceral pain in the insular cortex related to dysfunctional inhibition of pain within the brain. [ 31 ]
Physiological studies in rhesus monkeys have shown that neurons in the insula respond to skin stimulation. [ 32 ] PET studies have also revealed that the human insula can also be activated by vibrational stimulation to the skin. [ 33 ]
Another perception of the right anterior insula is the degree of nonpainful warmth [ 34 ] or nonpainful coldness [ 35 ] of a skin sensation. Other internal sensations processed by the insula include stomach or abdominal distension . [ 36 ] [ 37 ] A full bladder also activates the insular cortex. [ 38 ]
One brain imaging study suggests that the unpleasantness of subjectively perceived dyspnea is processed in the right human anterior insula and amygdala . [ 39 ]
The cerebral cortex processing vestibular sensations extends into the insula, [ 40 ] with small lesions in the anterior insular cortex being able to cause loss of balance and vertigo . [ 41 ]
Other noninteroceptive perceptions include passive listening to music, [ 42 ] laughter and crying, [ 43 ] empathy and compassion, [ 44 ] and language. [ 45 ]
In motor control, it contributes to hand-and-eye motor movement, [ 46 ] [ 47 ] swallowing, [ 48 ] gastric motility, [ 49 ] and speech articulation. [ 50 ] [ 51 ] It has been identified as a "central command” centre that ensures that heart rate and blood pressure increase at the onset of exercise . [ 52 ] Research upon conversation links it to the capacity for long and complex spoken sentences. [ 53 ] It is also involved in motor learning [ 54 ] and has been identified as playing a role in the motor recovery from stroke. [ 55 ]
It plays a role in a variety of homeostatic functions related to basic survival needs, such as taste, visceral sensation, and autonomic control. The insula controls autonomic functions through the regulation of the sympathetic and parasympathetic systems. [ 56 ] [ 57 ] It has a role in regulating the immune system. [ 58 ] [ 59 ] [ 60 ]
The insula has been identified as playing a role in the experience of bodily self-awareness, [ 61 ] [ 62 ] sense of agency, [ 63 ] and sense of body ownership. [ 64 ]
The anterior insula processes a person's sense of disgust both to smells [ 65 ] and to the sight of contamination and mutilation [ 66 ] — even when just imagining the experience. [ 67 ] This associates with a mirror neuron -like link between external and internal experiences.
In social experience, it is involved in the processing of norm violations, [ 68 ] emotional processing, [ 69 ] empathy, [ 70 ] and orgasms. [ 71 ]
The insula is active during social decision making. Tiziana Quarto et al. measured the emotional intelligence (EI) (the ability to identify, regulate, and process one's own and others' emotions) of sixty-three healthy subjects. Using fMRI, EI was measured in correlation with left insular activity. The subjects were shown various pictures of facial expressions and tasked with deciding to approach or avoid the person in the picture. The results of the social decision task showed that individuals with high EI scores had left insular activation when processing fearful faces, while individuals with low EI scores had left insular activation when processing angry faces. [ 72 ]
The insular cortex, in particular its most anterior portion, is considered a limbic -related cortex. The insula has increasingly become the focus of attention for its role in body representation and subjective emotional experience. In particular, Antonio Damasio has proposed that this region plays a role in mapping visceral states that are associated with emotional experience, giving rise to conscious feelings. This is in essence a neurobiological formulation of the ideas of William James , who first proposed that subjective emotional experience (i.e., feelings) arise from our brain's interpretation of bodily states that are elicited by emotional events. This is an example of embodied cognition . [ citation needed ]
In terms of function, the insula is believed to process convergent information to produce an emotionally relevant context for sensory experience . To be specific, the anterior insula is related more to olfactory, gustatory, viscero-autonomic, and limbic function , whereas the posterior insula is related more to auditory-somesthetic-skeletomotor function. Functional imaging experiments have revealed that the insula has an important role in pain experience and the experience of a number of basic emotions , including anger , fear , disgust , happiness , and sadness . [ 73 ]
The anterior insular cortex (AIC) is believed to be correlated to emotional sensations, including maternal and romantic love, anger, fear, sadness, happiness, sexual arousal, disgust, aversion, unfairness, inequity, indignation, uncertainty, [ 74 ] [ dubious – discuss ] disbelief, social exclusion , trust, empathy, sculptural beauty, a ‘state of union with God’, and hallucinogenic states. [ 75 ]
Functional imaging studies have also implicated the insula in conscious desires, such as food craving and drug craving. What is common to all of these emotional states is that they each change the body in some way and are associated with highly salient subjective qualities. The insula is well-situated for the integration of information relating to bodily states into higher-order cognitive and emotional processes. The insula receives information from "homeostatic afferent" sensory pathways via the thalamus and sends output to a number of other limbic-related structures, such as the amygdala , the ventral striatum , and the orbitofrontal cortex , as well as to motor cortices . [ 76 ]
A study using magnetic resonance imaging found that the right anterior insula is significantly thicker in people that meditate . [ 77 ] Other research into brain activity and meditation has shown an increase in grey matter in areas of the brain including the insular cortex. [ 78 ]
Another study using voxel-based morphometry and MRI on experienced Vipassana meditators was done to extend the findings of Lazar et al., which found increased grey matter concentrations in this and other areas of the brain in experienced meditators. [ 79 ]
The strongest evidence against a causative role for the insula cortex in emotion comes from Damasio et al. (2012) [ 80 ] which showed that a patient who suffered bilateral lesions of the insula cortex expressed the full complement of human emotions, and was fully capable of emotional learning.
Functional neuroimaging research suggests the insula is involved in two types of salience . The first is interoceptive information processing that links interoception with emotional salience to generate a subjective representation of the body; this involves the anterior insular cortex together with the pregenual anterior cingulate cortex ( Brodmann area 33 ) and the anterior and posterior mid-cingulate cortices . The second is a general salience network concerned with environmental monitoring, response selection, and skeletomotor body orientation that involves all of the insular cortex and the mid-cingulate cortex . [ 81 ] A related idea is that the anterior insula, as part of the salience network, interacts with the mid-posterior insula to combine salient stimuli with autonomic information, leading to a high state of physiological awareness of salient stimuli. [ 82 ]
An alternative or perhaps complementary proposal is that the right anterior insula regulates the interaction between the salience of the selective attention created to achieve a task (the dorsal attention system) and the salience of arousal created to keep focused upon the relevant part of the environment (the ventral attention system). [ 83 ] This regulation of salience might be particularly important during challenging tasks, where fatiguing attention might cause careless mistakes but too much arousal risks turning into anxiety and degrading performance. [ 83 ]
Studies have shown that damage or dysfunction in the insular cortex can impair decision-making, emotional regulation, and social behavior. The insula is considered a key brain structure in the neural circuitry underlying complex decision-making processes. [ 84 ] It plays a significant role in integrating internal and external cues to facilitate adaptive choices.
Research indicates that the insular cortex is involved in auditory perception . Responses to sound stimuli were obtained using intracranial EEG recordings acquired from patients with epilepsy. The posterior part of the insula showed auditory responses that resemble those observed in Heschl's gyrus , whereas the anterior part responded to the emotional contents of the auditory stimuli. [ 85 ] Clinical data additionally shows that bilateral damage to the insula after ischemic injury or trauma can lead to auditory agnosia. [ 86 ] Functional magnetic resonance studies have also demonstrated that the insular cortex participates in many key auditory processes such as tuning into novel auditory stimuli and allocating auditory attention. [ 87 ]
Direct recordings from the posterior part of the insula showed responses to unexpected sounds within regular auditory streams, a process known as auditory deviance detection . Researchers observed a mismatch negativity (MMN) potential, a well-known event-related potential , as well as high-frequency activity signals originating from local neurons. [ 88 ]
Simple auditory illusions and hallucinations were elicited by electrical functional mapping. [ 89 ] [ 85 ]
Progressive expressive aphasia is the deterioration of normal language function that causes individuals to lose the ability to communicate fluently while comprehension of single words and other non-linguistic cognition remain intact. It is found in a variety of degenerative neurological conditions including Pick's disease , motor neuron disease , corticobasal degeneration , frontotemporal dementia , and Alzheimer's disease . It is associated with hypometabolism [ 90 ] and atrophy of the left anterior insular cortex. [ 91 ]
A number of functional brain imaging studies have shown that the insular cortex is activated when drug users are exposed to environmental cues that trigger cravings. This has been shown for a variety of drugs, including cocaine , alcohol , opiates , and nicotine . Despite these findings, the insula has been ignored within the drug addiction literature, perhaps because it is not known to be a direct target of the mesocortical dopamine system, which is central to current dopamine reward theories of addiction. Research published in 2007 [ 92 ] has shown that cigarette smokers suffering damage to the insular cortex, from a stroke for instance, have their addiction to cigarettes practically eliminated. These individuals were found to be up to 136 times more likely to undergo a disruption of smoking addiction than smokers with damage in other areas. Disruption of addiction was evidenced by self-reported behavior changes such as quitting smoking less than one day after the brain injury, quitting smoking with great ease, not smoking again after quitting, and having no urge to resume smoking since quitting. The study was conducted on average eight years after the strokes, which opens up the possibility that recall bias could have affected the results. [ 93 ] More recent prospective studies, which overcome this limitation, have corroborated these findings. [ 94 ] [ 95 ] This suggests a significant role for the insular cortex in the neurological mechanisms underlying addiction to nicotine and other drugs, and would make this area of the brain a possible target for novel anti-addiction medication. In addition, this finding suggests that functions mediated by the insula, especially conscious feelings, may be particularly important for maintaining drug addiction, although this view is not represented in any modern research or reviews of the subject. [ 96 ]
A recent study in rats by Contreras et al. [ 97 ] corroborates these findings by showing that reversible inactivation of the insula disrupts amphetamine conditioned place preference , an animal model of cue-induced drug craving. In this study, insula inactivation also disrupted "malaise" responses to lithium chloride injection, suggesting that the representation of negative interoceptive states by the insula plays a role in addiction. However, in this same study, the conditioned place preference took place immediately after the injection of amphetamine, suggesting that it is the immediate, pleasurable interoceptive effects of amphetamine administration, rather than the delayed, aversive effects of amphetamine withdrawal that are represented within the insula.
A model proposed by Naqvi et al. (see above) is that the insula stores a representation of the pleasurable interoceptive effects of drug use (e.g., the airway sensory effects of nicotine, the cardiovascular effects of amphetamine), and that this representation is activated by exposure to cues that have previously been associated with drug use. A number of functional imaging studies have shown the insula to be activated during the administration of addictive psychoactive drugs. Several functional imaging studies have also shown that the insula is activated when drug users are exposed to drug cues, and that this activity is correlated with subjective urges. In the cue-exposure studies, insula activity is elicited when there is no actual change in the level of drug in the body. Therefore, rather than merely representing the interoceptive effects of drug use as it occurs, the insula may play a role in memory for the pleasurable interoceptive effects of past drug use, anticipation of these effects in the future, or both. Such a representation may give rise to conscious urges that feel as if they arise from within the body. This may make addicts feel as if their bodies need to use a drug, and, according to this study, may explain why persons with lesions in the insula report that their bodies have forgotten the urge to use.
A common quality in mystical experiences is a strong feeling of certainty which cannot be expressed in words . Fabienne Picard proposes a neurological explanation for this subjective certainty, based on clinical research of epilepsy. [ 98 ] [ 99 ] According to Picard, this feeling of certainty may be caused by a dysfunction of the anterior insula, a part of the brain which is involved in interoception , self-reflection, and in avoiding uncertainty about the internal representations of the world by "anticipation of resolution of uncertainty or risk". This avoidance of uncertainty functions through the comparison between predicted states and actual states, that is, "signaling that we do not understand, i.e., that there is ambiguity." [ 100 ] Picard notes that "the concept of insight is very close to that of certainty," and refers to Archimedes' "Eureka!" [ 101 ] [ 102 ] Picard hypothesizes that during ecstatic seizures the comparison between predicted states and actual states no longer functions, and that mismatches between predicted state and actual state are no longer processed, blocking " negative emotions and negative arousal arising from predictive uncertainty," which will be experienced as emotional confidence. [ 103 ] Picard concludes that "[t]his could lead to a spiritual interpretation in some individuals." [ 103 ]
The insular cortex has been suggested to have a role in anxiety disorders, [ 104 ] emotion dysregulation, [ 105 ] and anorexia nervosa . [ 106 ]
The insula was first described by Johann Christian Reil in the course of his descriptions of the cranial and spinal nerves and plexuses. [ 107 ] Henry Gray in Gray's Anatomy is responsible for it being known as the Island of Reil . [ 107 ] John Allman and colleagues showed that the anterior insular cortex contains spindle neurons . | https://en.wikipedia.org/wiki/Insular_cortex |
Insular dwarfism , a form of phyletic dwarfism , [ 1 ] is the process and condition of large animals evolving or having a reduced body size [ a ] when their population's range is limited to a small environment, primarily islands. This natural process is distinct from the intentional creation of dwarf breeds, called dwarfing . This process has occurred many times throughout evolutionary history, with examples including various species of dwarf elephants that evolved during the Pleistocene epoch, as well as more ancient examples, such as the dinosaurs Europasaurus and Magyarosaurus . This process, and other " island genetics " artifacts, can occur not only on islands, but also in other situations where an ecosystem is isolated from external resources and breeding. This can include caves , desert oases , isolated valleys and isolated mountains (" sky islands "). [ citation needed ] Insular dwarfism is one aspect of the more general "island effect" or "Foster's rule" , which posits that when mainland animals colonize islands, small species tend to evolve larger bodies ( island gigantism ), and large species tend to evolve smaller bodies. This is itself one aspect of island syndrome , which describes the differences in morphology , ecology , physiology and behaviour of insular species compared to their continental counterparts.
There are several proposed explanations for the mechanism which produces such dwarfism. [ 3 ] [ 4 ]
One is a selective process where only smaller animals trapped on the island survive, as food periodically declines to a borderline level. The smaller animals need fewer resources and smaller territories, and so are more likely to get past the break-point where population decline allows food sources to replenish enough for the survivors to flourish. Smaller size is also advantageous from a reproductive standpoint, as it entails shorter gestation periods and generation times . [ 3 ]
In the tropics, small size should make thermoregulation easier. [ 3 ]
Among herbivores, large size confers advantages in coping with both competitors and predators, so a reduction or absence of either would facilitate dwarfing; competition appears to be the more important factor. [ 4 ]
Among carnivores, the main factor is thought to be the size and availability of prey resources, and competition is believed to be less important. [ 4 ] In tiger snakes , insular dwarfism occurs on islands where available prey is restricted to smaller sizes than are normally taken by mainland snakes. Since prey size preference in snakes is generally proportional to body size, small snakes may be better adapted to take small prey. [ 5 ]
The inverse process, wherein small animals breeding on isolated islands lacking the predators of large land masses may become much larger than normal, is called island gigantism . An excellent example is the dodo , the ancestors of which were normal-sized pigeons . There are also several species of giant rats , one still extant, that coexisted with both Homo floresiensis and the dwarf stegodonts on Flores.
The process of insular dwarfing can occur relatively rapidly by evolutionary standards. This is in contrast to increases in maximum body size, which are much more gradual. When normalized to generation length, the maximum rate of body mass decrease during insular dwarfing was found to be over 30 times greater than the maximum rate of body mass increase for a ten-fold change in mammals. [ 6 ] The disparity is thought to reflect the fact that pedomorphism offers a relatively easy route to evolve smaller adult body size; on the other hand, the evolution of larger maximum body size is likely to be interrupted by the emergence of a series of constraints that must be overcome by evolutionary innovations before the process can continue. [ 6 ]
For both herbivores and carnivores, island size, the degree of island isolation and the size of the ancestral continental species appear not to be of major direct importance to the degree of dwarfing. [ 4 ] However, when considering only the body masses of recent top herbivores and carnivores, and including data from both continental and island land masses, the body masses of the largest species in a land mass were found to scale with the size of the land mass, with slopes of about 0.5 in log(body mass/kg) per log(land area/km²). [ 7 ] There were separate regression lines for endothermic top predators, ectothermic top predators, endothermic top herbivores and (on the basis of limited data) ectothermic top herbivores, such that food intake was 7- to 24-fold higher for top herbivores than for top predators, and about the same for endotherms and ectotherms of the same trophic level (this leads to ectotherms being 5 to 16 times heavier than corresponding endotherms). [ 7 ]
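Read as a power law, the reported regression takes the form below; this is a sketch of the relation as stated, with the intercept C differing between the endotherm/ectotherm and trophic-level groups just mentioned:

$$\log_{10}\!\bigl(M_{\max}/\mathrm{kg}\bigr) \approx 0.5\,\log_{10}\!\bigl(A/\mathrm{km^2}\bigr) + C \quad\Longrightarrow\quad M_{\max} \propto A^{1/2}$$

Under this relation, a land mass with one hundredth the area would be expected to support a top species of roughly one tenth the body mass.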
It has been suggested that for dwarf elephants, competition was an important factor in body size, with islands with competing herbivores having significantly larger dwarf elephants than those where competing herbivores were absent. [ 8 ]
Recognition that insular dwarfism could apply to dinosaurs arose through the work of Ferenc Nopcsa , a Hungarian-born aristocrat, adventurer, scholar, and paleontologist. Nopcsa studied Transylvanian dinosaurs intensively, noticing that they were smaller than their cousins elsewhere in the world. For example, he unearthed six-meter-long sauropods , a group of dinosaurs which elsewhere commonly grew to 30 meters or more. Nopcsa deduced that the area where the remains were found was an island, Hațeg Island (now the Haţeg or Hatzeg basin in Romania ) during the Mesozoic era. [ 9 ] [ 10 ] Nopcsa's proposal of dinosaur dwarfism on Hațeg Island is today widely accepted after further research confirmed that the remains found are not from juveniles. [ 11 ]
In addition, the genus Balaur was initially described as a Velociraptor -sized dromaeosaurid (and in consequence a dubious example of insular dwarfism), but has since been reclassified as a secondarily flightless stem bird, closer to modern birds than Jeholornis (and thus actually an example of insular gigantism ).
| https://en.wikipedia.org/wiki/Insular_dwarfism |
Formed in the United States in 1925, the Insulated Cable Engineers Association , Inc. ( ICEA ), is a not-for-profit professional association. [ 1 ] [ 2 ] In conjunction with other organizations like NEMA and ANSI , it produces technical standards for the manufacture and use of power cable , data, and control cable . It was founded as the Insulated Power Cable Engineers Association , but changed names to reflect its full range of activities. [ 3 ]
| https://en.wikipedia.org/wiki/Insulated_Cable_Engineers_Association |
Insulating glass ( IG ) consists of two or more glass window panes separated by a space to reduce heat transfer across a part of the building envelope . A window with insulating glass is commonly known as double glazing or a double-paned window , triple glazing or a triple-paned window, or quadruple glazing or a quadruple-paned window, depending upon how many panes of glass are used in its construction.
Insulating glass units (IGUs) are typically manufactured with glass in thicknesses from 3 to 10 mm (1⁄8 to 3⁄8 in). Thicker glass is used in special applications. Laminated or tempered glass may also be used as part of the construction. Most units are produced with the same thickness of glass on both panes, but special applications such as acoustic attenuation or security may require different thicknesses of glass to be incorporated in a unit.
The space in between the panes provides the bulk of the insulation effect. It can be filled with air, but argon is often used as it gives far superior insulation, and sometimes other gases or even a vacuum [ 1 ] are employed.
Possibly the earliest use of double glazing was in Siberia , where it was observed by Henry Seebohm in 1877 as an established necessity in the Yeniseysk area where the bitterly cold winter temperatures regularly fall below -50 °C, indicating how the concept may have started: [ 2 ]
One of the peculiarities of this part of the country is that it is a land of dear glass. You rarely see a window with square panes. In the houses of some of the poorer peasants it is not an uncommon thing to find one entirely composed of broken pieces of glass of all sizes and shapes, fitted together like a puzzle, and carefully sewn into a framework of birch bark which has been elaborately cut to fit each piece. Sometimes glass is dispensed with altogether, and pieces of semi-transparent fish-skin are stitched together and stretched across the window-frame.
In winter double windows are absolutely necessary to prevent the inmates of the houses from being frozen to death. The outside windows project about six inches in front of the inside ones. If the inside window reveals the poverty of the inhabitants, the outside window seemingly displays his extravagance. To all appearances it is composed of one solid pane of plate-glass nearly three inches thick. On closer examination this extravagant sheet of plate-glass turns out to be a slab of ice carefully frozen into the framework with a mixture of snow and water in place of putty.
Fitting a second pane of glass to improve insulation began in Scotland, Germany, and Switzerland in the 1870s. [ 3 ]
Insulating glass is an evolution from older technologies known as double-hung windows and storm windows . Traditional double-hung windows used a single pane of glass to separate the interior and exterior spaces.
Traditional storm windows and screens are relatively time-consuming and labor-intensive, requiring removal and storage of the storm windows each spring and reinstallation each fall, with the screens stored in the opposite season. The weight of the large storm window frame and glass makes replacement on the upper stories of tall buildings a difficult task, requiring repeated climbs up a ladder with each window while trying to hold the window in place and secure the retaining clips around the edges.
However, current reproductions of these old-style storm windows can be made with detachable glass in the bottom pane that can be replaced with a detachable screen when desired. This eliminates the need for changing the entire storm window according to the seasons.
Insulated glazing (IG) forms a very compact multi-layer sandwich of air and glass, which eliminates the need for storm windows. Screens may also be left installed year-round with insulated glazing, and they can be installed in a manner that permits installation and removal from inside the building, eliminating the requirement to climb up the exterior of the house to service the windows. It is possible to retrofit insulated glazing into traditional double-hung frames, though this would require significant modification to the wood frame due to the increased thickness of the IG assembly.
Modern window units with IG typically completely replace the older double-hung unit and include other improvements such as better sealing between the upper and lower windows and spring-operated weight balancing that removes the need for large hanging weights inside the wall next to the windows, allowing for more insulation around the window and reducing air leakage. IG provides robust protection against the sun and keeps the house cool in the hot summer and warm in winter. The spring-operated balancing mechanisms also typically permit the top of the windows to swing inward, permitting cleaning of the exterior of the IG window from inside the building.
The insulating glazing unit, consisting of two glass panes bound together into a single unit with a seal between the edges of the panes, was patented in the United States by Thomas Stetson in 1865. [ 4 ] It was developed into a commercial product in the 1930s, when several patents were filed, and a product was announced by the Libbey-Owens-Ford Glass Company in 1944. [ 5 ] Their product was sold under the Thermopane brand name, which had been registered as a trademark in 1941. The Thermopane technology differs significantly from contemporary IGUs. The two panes of glass were welded together by a glass seal, and the two panes were separated by less than the 0.5 inches (1.3 cm) typical of modern units. [ 6 ] The brand name Thermopane has entered the vocabulary of the glazing industry as the genericized trademark for any IGU. [ citation needed ]
Single-pane glass is a very poor insulator (R-value of around 1, RSI below 0.2), so a single pane provides very little insulation. Coatings are frequently applied to the glass, such as partially reflective or colored coatings to reduce insolation and low-emissivity coatings to reflect infrared.
Low emissivity glass (low E glass) is a commercially available option for IGU construction. Low E glass is made by applying a low E coating to a pane of glass. These are generally metallic coatings, usually applied onto the second or third glass surfaces of the unit, that have the effect of reflecting infrared light and blocking or attenuating portions of the ultraviolet and visible light spectra. This can significantly reduce the solar gain of the IGU, which impacts both the thermal performance (R-value) and the Solar Heat Gain Coefficient (SHGC). Two types of low E coatings are available: hard coatings and soft coatings. Hard coatings are produced using tin oxide that is applied while the glass is still hot and is absorbed into the glass; they are hard-wearing and usually cheaper. Soft coatings are vacuum-sputtered onto the glass surface and have higher performance, but are easily oxidized and damaged, and thus have to be protected by an inert gas fill. [ 7 ]
The glass panes are separated by a "spacer", the piece that separates the two panes of glass in an insulating glass system and seals the gas space between them; it may be of the warm edge type. The first spacers were made primarily of steel and aluminum, which manufacturers thought provided more durability, and their lower price means that they remain common.
However, metal spacers conduct heat (unless the metal is thermally improved), undermining the ability of the insulated glass unit (IGU) to reduce heat flow. It may also result in water or ice forming at the bottom of the sealed unit because of the sharp temperature difference between the window and surrounding air. To reduce heat transfer through the spacer and increase overall thermal performance, manufacturers may make the spacer out of a less-conductive material such as structural foam. A spacer made of aluminum that also contains a highly structural thermal barrier reduces condensation on the glass surface and improves insulation, as measured by the overall U-value .
An older and established way to improve insulation performance is to replace the air in the space with a gas of lower thermal conductivity. Gas convective heat transfer is a function of viscosity and specific heat. Monatomic gases such as argon , krypton , and xenon are often used since (at normal temperatures) they do not carry heat in rotational modes , resulting in a lower heat capacity than polyatomic gases. Argon has a thermal conductivity 67% that of air, and krypton has about half the conductivity of argon. [ 8 ] Argon comprises nearly 1% of the atmosphere and is industrially isolated at moderate cost, whereas krypton and xenon are only trace elements which are expensive to extract. These noble gases are non-toxic, clear, odorless, chemically inert, and readily available because of their widespread application in industry. Some manufacturers also offer sulfur hexafluoride as an insulating gas, particularly for soundproofing . It has only 2/3 the conductivity of argon, but is stable, inexpensive, and dense. However, sulfur hexafluoride is an extremely potent greenhouse gas . In Europe, SF 6 falls under the F-Gas directive, which controls and even bans its usage for various applications. Since 1 January 2006, SF 6 has been banned as a tracer gas and in all applications except high-voltage switchgear . [ 9 ]
Practically speaking, the more effective a fill gas is at its optimum thickness, the thinner the optimum thickness is. For example, the optimum thickness for krypton is lower than for argon, and lower for argon than for air. [ 10 ] However, since it is difficult to determine whether the gas in an IGU has become mixed with air at time of manufacture (or becomes mixed with air once installed), many designers prefer to use thicker gaps than would be optimum for the fill gas if it were pure. Argon is commonly used in insulated glazing as it is the most affordable. Krypton, which is considerably more expensive, is not generally used except to produce very thin double glazing units or extremely high performance triple-glazed units. Xenon has found very little application in IGUs because of cost. [ 11 ]
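As a rough illustration of why low-conductivity fills help, the sketch below compares purely conductive heat flux through a sealed cavity for different fill gases. The conductivity figures are approximate room-temperature values rather than figures from this article, and a real IGU also loses heat by convection and radiation, so this is not a full U-value calculation:

```python
# Approximate thermal conductivities of still gases near room temperature,
# in W/(m·K); rough textbook figures, not values from this article.
CONDUCTIVITY = {"air": 0.026, "argon": 0.017, "krypton": 0.0095}

def conductive_flux(gas: str, gap_m: float, delta_t_k: float) -> float:
    """Heat flux (W/m^2) through a still gas layer by conduction alone."""
    return CONDUCTIVITY[gas] / gap_m * delta_t_k

# Example: 16 mm gap, 20 K temperature difference across the cavity.
for gas in CONDUCTIVITY:
    print(f"{gas}: {conductive_flux(gas, 0.016, 20.0):.1f} W/m^2")
```

For a 16 mm gap and a 20 K difference this gives about 33 W/m² for air, 21 W/m² for argon and 12 W/m² for krypton, which is consistent with the relative conductivities quoted above.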
Vacuum technology is also used in some non-transparent insulation products called vacuum insulated panels .
IGUs are often manufactured on a made to order basis on factory production lines, but standard units are also available. The width and height dimensions, the thickness of the glass panes and the type of glass for each pane as well as the overall thickness of the unit must be supplied to the manufacturer. On the assembly line, spacers of specific thicknesses are cut and assembled into the required overall width and height dimensions and filled with desiccant. On a parallel line, glass panes are cut to size and washed to be optically clear.
An adhesive, primary sealant ( polyisobutylene ) is applied to the face of the spacer on each side and the panes pressed against the spacer. If the unit is gas-filled, two holes are drilled into the spacer of the assembled unit, lines are attached to draw the air out of the space and replace it with the desired gas (or leave just a vacuum), and the lines are then removed and the holes sealed to contain the gas. The more modern technique is to use an online gas filler, which eliminates the need to drill holes in the spacer. The purpose of the primary sealant is to keep insulating gas from escaping and water vapor from entering. The units are then enveloped on the edge side using either polysulfide or silicone sealant or similar material as a secondary sealant, which restrains movement of the rubbery-plastic primary sealant. The desiccant will remove traces of humidity from the air space so that no condensation appears on the inside faces during cold weather. Some manufacturers have developed specific processes which combine the spacer and desiccant into a single-step application system.
The maximum insulating efficiency of a standard IGU is determined by the thickness of the space. Greater spacing increases the insulation value up to a point, but eventually, with a large enough gap, convection currents begin to flow between the panes, carrying heat from one to the other. Typically, most sealed units achieve maximum insulating values using a space of 16–19 mm (0.63–0.75 in) when measured at the centre of the IGU. [ 12 ]
IGU thickness is a compromise between maximizing insulating value and the ability of the framing system used to carry the unit. Some residential and most commercial glazing systems can accommodate the ideal thickness of a double-paned unit. Issues arise with the use of triple glazing to further reduce heat loss in an IGU. The combination of thickness and weight results in units that are too unwieldy for most residential or commercial glazing systems, particularly if these panes are contained in moving frames or sashes.
This trade-off does not apply to vacuum insulated glass (VIG), or evacuated glazing, [ 14 ] as heat loss due to convection is eliminated, leaving radiation losses and conduction through the edge seal and the required supporting pillars over the face area. [ 15 ] [ 16 ] These VIG units have most of the air removed from the space between the panes, leaving a nearly complete vacuum . VIG units currently on the market are hermetically sealed along their perimeter with solder glass, that is, a glass frit (powdered glass) with a reduced melting point that is heated to join the components. This creates a glass seal that experiences increasing stress with increasing temperature differential across the unit, and this stress may limit the maximum allowable temperature differential; one manufacturer recommends a maximum of 35 °C. Closely spaced pillars are required to reinforce the glazing against the pressure of the atmosphere. Pillar spacing and diameter limited the insulation achieved by designs available beginning in the 1990s to R = 4.7 h·°F·ft²/BTU (0.83 m²·K/W), no better than high-quality double-glazed insulated glass units. Recent products claim performance of R = 14 h·°F·ft²/BTU (2.5 m²·K/W), which exceeds triple-glazed insulated glass units. [ 16 ] The required internal pillars exclude applications where an unobstructed view through the glazing unit is desired, i.e. most residential and commercial windows, and refrigerated food display cases. VIG-equipped windows, however, under-perform due to intense edge heat transfer. [ 13 ]
The insulation effectiveness can be expressed as an R-value or RSI value : the higher the value, the greater the resistance to heat transfer. A standard IGU consisting of clear uncoated panes of glass (or lights) with air in the cavity between the lights typically has an RSI value of 0.35 K·m²/W.
Using US customary units , a rule of thumb in standard IGU construction is that each improvement to a component of the IGU adds about 1 to the R-value of the unit. Adding argon gas increases the efficiency to about R-3, and using low-emissivity glass on surface #2 adds another R-value. Properly designed triple-glazed IGUs with low-emissivity coatings on surfaces #2 and #4 and argon-filled cavities reach higher values still, and certain multi-chambered IG units result in R-values as high as R-24. Vacuum insulating glass (VIG) units result in R-values as high as R-15 (center of glass). Combining a VIG unit with another glass pane and a warm edge spacer results in R-18 (center of glass) or more, depending upon the low-e coating(s). Double VIG units with a warm edge spacer reach R-25 (center of glass) or more, depending upon low-e coatings and other factors.
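Because the article quotes both US customary R-values and SI (RSI) values, a small conversion sketch may help. The factor 5.678 is the standard conversion between K·m²/W and h·°F·ft²/BTU; the stacked values below merely restate the rule of thumb above, not measured data:

```python
US_PER_SI = 5.678263  # R-value units (h·°F·ft²/BTU) per RSI unit (K·m²/W)

def rsi_to_r(rsi: float) -> float:
    """Convert an RSI value (K·m²/W) to a US customary R-value."""
    return rsi * US_PER_SI

# The clear, air-filled double unit quoted above (RSI 0.35) is about R-2:
print(f"RSI 0.35 is about R-{rsi_to_r(0.35):.1f}")

# Rule of thumb from the text: each improvement adds roughly R-1.
r = rsi_to_r(0.35)   # baseline: clear glass, air fill
r += 1.0             # argon fill            -> about R-3
r += 1.0             # low-e on surface #2   -> about R-4
print(f"argon + low-e double glazing: about R-{r:.0f}")
```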
Additional layers of glazing provide the opportunity for improved insulation. While standard double glazing is the most widely used, triple glazing is not uncommon, and quadruple glazing is produced for cold environments such as Alaska or Scandinavia. [ 17 ] [ 18 ] Even quintuple and six-pane glazing (four or five cavities) is available, with mid-pane insulation factors equivalent to those of walls. [ 19 ] [ 20 ] [ 21 ]
In some situations the insulation is in reference to noise mitigation . In these circumstances a large air space improves the noise insulation quality or sound transmission class . Asymmetric double glazing, using different thicknesses of glass rather than the conventional symmetrical systems (equal glass thicknesses used for both lights) will improve the acoustic attenuation properties of the IGU. If standard air spaces are used, sulfur hexafluoride can be used to replace or augment an inert gas [ 22 ] and improve acoustical attenuation performance.
Other glazing material variations affect acoustics. The most widely used glazing configurations for sound dampening include laminated glass with varied thickness of the interlayer and thickness of the glass. Including a structural, thermally improved aluminum thermal barrier air spacer in the insulating glass can improve acoustical performance by reducing the transmission of exterior noise sources in the fenestration system.
Reviewing the glazing system components, including the air space material used in the insulating glass, can ensure overall sound transmission improvement.
Transmittance is a measure of how much visible light is passed by the glass expressed as a fraction. Some of the light will also be absorbed and reflected.
The same considerations apply beyond visible light: notably, many low-e glass and semi-reflective metalised coatings greatly attenuate radio waves, including Wi-Fi and cell phone signals. [ citation needed ]
The life of an IGU varies depending on the quality of materials used, size of gap between inner and outer pane, temperature differences, workmanship and location of installation both in terms of facing direction and geographic location, as well as the treatment the unit receives. IG units typically last from 10 to 25 years, with windows facing the equator often lasting less than 12 years. IGUs typically carry a warranty for 10 to 20 years depending upon the manufacturer. If IGUs are altered (such as installation of a window insulation film ) the warranty may be voided by the manufacturer.
The Insulating Glass Manufacturers Alliance (IGMA) [ 23 ] undertook an extensive study to characterize the failures of commercial insulating glass units over a 25-year period.
For a standard construction IG unit, condensation collects between the layers of glass when the perimeter seal has failed and the desiccant has become saturated, and can generally only be eliminated by replacing the IGU. Seal failure and subsequent replacement are a significant factor in the overall cost of owning IGUs. [ 24 ]
Large temperature differences between the inner and outer panes stress the spacer adhesives, which can eventually fail. Units with a small gap between the panes are more prone to failure because of the increased stress.
Atmospheric pressure changes combined with wet weather can, in rare cases, eventually lead to the gap filling with water.
The flexible sealing surfaces preventing infiltration around the window unit can also degrade or be torn or damaged. Replacement of these seals can be difficult to impossible, due to IG windows commonly using extruded channel frames without seal retention screws or plates. Instead, the edge seals are installed by pushing an arrow-shaped indented one-way flexible lip into a slot on the extruded channel, and often cannot be easily extracted from the extruded slot to be replaced.
In Canada, since the early 1990s, some companies have offered servicing of failed IG units. They provide open ventilation to the atmosphere by drilling hole(s) in the glass and/or spacer. This solution often reverses the visible condensation, but cannot clean the interior surface of the glass or remove staining that may have occurred after long-term exposure to moisture. These companies may offer a warranty of 5 to 20 years. This solution lowers the insulating value of the window, but it can be a "green" solution when the window is still in good condition. If the IG unit had a gas fill (e.g. argon, krypton, or a mixture), the gas is naturally dissipated and the R-value suffers.
Since 2004, some companies have offered the same restoration process for failed double-glazed units [ 25 ] in the UK, and one company has offered restoration of failed IG units in Ireland since 2010.
Temperature differences across the surface of glass panes can lead to cracks in the glass. [ 26 ] This may occur where the glass is partially shaded and partially heated from the sunlight. Tinted glass increases heating and thermal stress, while annealing reduces internal stress built into the glass during manufacturing.
Thermal expansion creates internal pressure, or stress, where expanding warm material is restrained by cooler material. Typically cracks initiate and propagate from the narrow shaded cut edge where the glass is cooler and minute grooves and notches cause stress concentration . Glass thickness has no direct effect on thermal cracking in windows because both thermal stress and material strength are proportional to thickness. Annealed and tempered glass is usually more resistant to cracking.
Given the thermal properties of the sash, frame, and sill, the dimensions of the glazing, and the thermal properties of the glass, the heat transfer rate for a given window and set of conditions can be calculated. This can be calculated in kW (kilowatts), but more usefully for cost-benefit calculations can be stated as kWh pa (kilowatt-hours per annum), based on the typical conditions over a year for a given location.
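A minimal sketch of such a calculation, using the common heating-degree-day method; the window area, U-value and climate figure below are illustrative assumptions, not values from the text:

```python
def annual_heat_loss_kwh(u_value: float, area_m2: float, degree_days_k: float) -> float:
    """Annual conductive heat loss (kWh) from the U-value (W/(m²·K)),
    glazed area (m²) and heating degree days (K·day) for the location."""
    return u_value * area_m2 * degree_days_k * 24.0 / 1000.0

# Assumed example: 1.5 m² double-glazed window, U = 2.8 W/(m²·K),
# in a climate with 3000 heating degree days per year.
print(f"{annual_heat_loss_kwh(2.8, 1.5, 3000.0):.0f} kWh pa")
```

For these assumed figures the window loses roughly 300 kWh per annum; halving the U-value halves this loss, which is the basis of cost-benefit comparisons between glazing options.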
The glass panels in double-glazed windows transmit heat in both directions: by radiation, by conduction through the glazing, by convection across the gap between the panes, by conduction through the frame, and by infiltration around the perimeter seals and the frame's seal to the building. The actual rates will vary with the conditions throughout the year, and while solar gain may be much welcomed in the winter (depending on local climate), it may result in increased air conditioning costs in the summer. Unwanted heat transfer can be mitigated by, for example, using curtains at night in the winter and sun shades during the day in the summer. In an attempt to provide a useful comparison between alternative window constructions, the British Fenestration Rating Council has defined a "Window Energy Rating" (WER), ranging from A for the best down through B, C, and so on. This takes into account a combination of the heat loss through the window (U value, the reciprocal of R-value ), the solar gain (g value), and loss through air leakage around the frame (L value). For example, an A-rated window will in a typical year gain as much heat from solar gain as it loses in other ways (however, the majority of this gain will occur during the summer months, when the heat may not be needed by the building occupant). This provides better thermal performance than a typical wall.
Window rating programs and certifications: | https://en.wikipedia.org/wiki/Insulated_glazing |
Insulated pipes (also called preinsulated pipes or bonded pipes [ 1 ] ) are widely used for district heating and hot water supply. They consist of a steel pipe called the "service pipe", a thermal insulation layer and an outer casing. The insulation bonds the service pipe and the casing together. The main purpose of such pipes is to maintain the temperature of the fluid inside the service pipe. Insulated pipes are commonly used to transport hot water from district heating plants to district heating networks and to distribute hot water inside district heating networks.
The thermal insulation material usually used is polyurethane foam or similar, with a thermal conductivity λ50 of about 0.024–0.033 W/(m·K).
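The effect of the insulation layer can be illustrated with the standard formula for radial conduction through a cylindrical shell. The sketch below is a simplified estimate that neglects the soil, casing and surface resistances, and the pipe dimensions and temperatures are illustrative assumptions rather than values from any standard:

```python
import math

def heat_loss_per_metre(lam: float, r_in_m: float, r_out_m: float, delta_t_k: float) -> float:
    """Conductive heat loss (W per metre of pipe) through a cylindrical
    insulation shell: q = 2·pi·lambda·dT / ln(r_out / r_in)."""
    return 2.0 * math.pi * lam * delta_t_k / math.log(r_out_m / r_in_m)

# Assumed example: service pipe of 57 mm outer radius, insulated out to a
# 100 mm casing radius, foam at 0.027 W/(m·K), 90 °C water in 10 °C soil.
print(f"{heat_loss_per_metre(0.027, 0.057, 0.100, 80.0):.0f} W/m")
```

For these assumed figures the loss is on the order of 24 W per metre of pipe, which shows why the low λ50 of the foam and a generous insulation thickness matter over kilometres of network.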
While polyurethane has outstanding mechanical and thermal properties, the high toxicity of the diisocyanates required for its manufacture has caused a restriction on their use. [ 2 ] This has triggered research on alternative insulating foams suited to the application, [ 3 ] including polyethylene terephthalate (PET) [ 4 ] and polybutylene (PB-1). [ 5 ] The outer casing is usually made of high-density polyethylene (HDPE).
Preinsulated pipes for district heating are described in European standards EN 253 and EN 15698-1. EN 253 describes "District heating pipes - Bonded single pipe systems for directly buried hot water networks - Factory made pipe assembly of steel service pipe, polyurethane thermal insulation and a casing of polyethylene". EN 15698-1 describes "District heating pipes - Bonded twin pipe systems for directly buried hot water networks - Factory made twin pipe assembly of steel service pipes, polyurethane thermal insulation and one casing of polyethylene". Neither standard gives "short names" or abbreviations for the pipes it describes.
According to EN 253:2019 and EN 15698-1:2019, pipes must be produced to work at a constant temperature of 120 °C (248 °F) for 30 years. Thermal conductivity λ50 in unaged condition shall not exceed 0.029 W/(m·K). Both standards describe three insulation thickness levels. Both standards require the use of polyurethane foam for thermal insulation and HDPE for the casing.
Insulated pipelines are usually assembled from pipes of 6 metres (20 ft), 12 metres (39 ft), or 16 metres (52 ft) in length, directly buried in soil in depths of commonly 0.6–1.2 metres (2 ft 0 in – 3 ft 11 in). | https://en.wikipedia.org/wiki/Insulated_pipe |
Insulating concrete forms or insulated concrete forms (ICF) are a building system to create reinforced concrete walls or floors with integral insulation. They are dry-stacked (without mortar ) and filled with concrete . The units interlock somewhat like Lego bricks and create the formwork for reinforced concrete that becomes the structural walls, floors or roofs of a building. The forms stay in place after the concrete is cured and provide a permanent interior and exterior substrate for finishes. The forms come in different shapes, sizes and are made from different materials depending on the manufacturer. ICF construction has become commonplace for both low rise commercial and high performance residential construction as more stringent energy efficiency and natural disaster resistant building codes are adopted.
The first expanded polystyrene ICF wall forms were developed in the late 1960s with the expiration of the original patent and the advent of modern foam plastics by BASF . [ citation needed ] Canadian contractor Werner Gregori filed the first patent for a foam concrete form in 1966, with a block "measuring 16 inches high by 48 inches long with a tongue-and-groove interlock, metal ties, and a waffle-grid core." [ 1 ] It is worth noting that a precursor of ICF formwork dates back to 1907, as evidenced by the patent entitled "building-block" by inventor L. R. Franklin. This patent claimed a parallelepiped-shaped brick with a central cylindrical cavity connected to the upper and lower faces by countersinks. [ 2 ]
The adoption of ICF construction has steadily increased since the 1970s, though it was initially hampered by lack of awareness, building codes, and confusion caused by many different manufacturers selling slightly different ICF designs rather than focusing on industry standardization . ICF construction is now part of most building codes and accepted in most jurisdictions in the developed world.
Reinforcing steel bars ( rebar ) are usually placed inside the forms before concrete is poured to give the concrete flexural strength , similar to bridges and high-rise buildings made of reinforced concrete. Like other concrete formwork, the forms are filled with concrete in 1-foot to 4-foot high "lifts" to manage the concrete pressure and reduce the risk of blowouts.
After the concrete has cured, the forms are left in place permanently to provide a variety of benefits, depending on the materials used.
Insulating concrete forms are commonly categorized in three ways. Organizations whose first concern relates to the concrete structure classify them first by the shape of the concrete inside the form. [ 3 ] [ 4 ] Organizations whose first concern relates to the material or fabrication of the forms classify them first by the characteristics of the forms themselves. [ 5 ]
For Flat Wall System ICFs, the concrete has the shape of a flat wall of solid reinforced concrete, similar to the shape of a concrete wall constructed using removable forms.
For Screen Grid System ICFs, the concrete has the shape of the metal in a screen, with horizontal and vertical channels of reinforced concrete separated by areas of solid form material.
For Waffle Grid System ICFs, the concrete has the shape of a hybrid between Screen Grid and Flat Wall system concrete, with a grid of thicker reinforced concrete and thinner concrete in the center areas where a screen grid would have solid ICF material.
For Post and Lintel System ICFs, the concrete has a horizontal member, called a lintel , only at the top of the wall, and vertical members, called posts, between the lintel and the surface on which the wall rests. (Horizontal concrete at the bottom of the wall is often present in the form of the building's footer or the lintel of the wall below.)
The most common materials for insulated concrete forms are expanded and extruded polystyrene . Polyurethane foams (including soy-based foam) [ 6 ] are also available.
Other ICFs include forms made from cement-bonded wood fiber , cement-bonded polystyrene beads and cellular concrete .
The exterior shape of a block ICF is similar to that of a concrete masonry unit , although ICF blocks are often larger in size as they are made from a material having a lower specific gravity. Very frequently, the edges of block ICFs are made to interlock, reducing or eliminating the need for a bonding material between the blocks.
Panel ICFs have the flat rectangular shape of a section of flat wall; they are often the height of the wall, with a width limited by the manipulability of the material at larger sizes and by the general usefulness of the panel size for constructing walls.
Plank ICFs have the size of Block ICFs in one dimension and Panel ICFs in the other dimension.
ICF walls have much lower rates of acoustic transmission . Standard-thickness ICF walls have shown sound transmission class (STC) ratings between 46 and 72, compared to 36 for standard fiberglass insulation and drywall. The level of sound attenuation achieved is a function of wall thickness, mass, component materials and air tightness.
ICF walls can have a four- to six-hour fire resistance rating and negligible surface burning properties. The International Building Code: 2603.5.2 [ 8 ] requires plastic foam insulation (e.g. polystyrene foam, polyurethane foam) to be separated from the building interior by a thermal barrier (e.g. drywall ), regardless of the fire barrier provided by the central concrete. Forms made from cement-bonded wood fibers (e.g. [ 9 ] ), polystyrene beads (e.g. [ 10 ] [ 11 ] ), or air (i.e. cellular concrete, e.g. [ 12 ] ) inherently have a fire rating.
Because they are generally constructed without a sheet plastic vapor barrier, ICF walls can regulate humidity levels, mitigate the potential for mold and facilitate a more comfortable interior while maintaining high thermal performance. Foams, however, can give off gases, an effect that is not well studied.
ICF walls can be made with a variety of recycled materials that can minimize the environmental impact of the building. The large volume of concrete used in ICF walls has been criticized, as concrete production is a large contributor to greenhouse gas emissions. [ 13 ]
Because the entire interior space of ICF walls is continuously occupied (no gaps as can occur between blown or fiberglass insulation and a wood frame wall), they pose more difficulty for casual transit by insects and vermin. Additionally, while plastic foam forms can occasionally be tunneled through, the interior concrete wall and the Portland cement of cement-bonded forms create a much more challenging barrier to insects and vermin than do walls made of wood.
When designing a building to be constructed with ICF walls, consideration must be given to supporting the weight of any walls not resting directly on other walls or the building's foundation. Consideration must also be given to the understanding that the load-bearing part of an ICF wall is the concrete, which, without special preparations, does not extend in any direction to the edge of the form. For grid and post & lintel systems, the placement of vertical members of the concrete must be organized in such a fashion (e.g., starting at opposite corners or breaks (e.g. doorways) and working to meet in unbroken wall) as to properly transfer load from the lintel (or bond beam) to the surface supporting the wall.
In Australia, ICF products are considered to be combustible as they have not passed AS 1530.1-1994 lab testing. Nevertheless, they have achieved AS 1530.8.1-2007 accreditation for use in some bushfire-prone areas. Their application is limited to low-rise commercial and residential construction. [ citation needed ]
ICF construction is less demanding, owing to its modularity . Less-skilled labor can be employed to lay the ICF forms, though careful consideration must be made when pouring the concrete to make sure it consolidates fully and cures evenly without cracking. Unlike traditional wood beam construction, no additional structural support other than temporary scaffolding is required for openings, doors, windows, or utilities, though modifying the structure after the concrete cures requires special concrete cutting tools.
ICF walls are conventionally placed on a monolithic slab with embedded rebar dowels connecting the walls to the foundation.
ICF decking is becoming an increasingly popular addition to general ICF wall construction. ICF decking weighs up to 40% less than standard concrete flooring and provides superior insulation. ICF decking can also be designed in conjunction with ICF walls to form a continuous monolithic structure joined together by rebar. ICF deck roofs are popular in storm-areas, [ 14 ] but it is harder to build complex roof shapes and concrete can be poured only up to a point on angled surfaces, often 7:12 maximum pitch. [ 15 ]
ICF walls are constructed one row at a time, usually starting at the corners and working toward the middle of the walls. End blocks are then cut to fit so as to waste the least material possible. As the wall rises, blocks are staggered to avoid long vertical seams that can weaken the polystyrene formwork. [ 16 ] Structure frames known as bucks are placed around openings to give added strength to the openings and to serve as attachment points for windows and doors.
Interior and exterior finishes and facades are affixed directly to the ICF surface or tie ends, depending on the type of ICF. Brick and masonry facades require an extended ledge or shelf angle at the main floor level, but otherwise no modifications are necessary. Interior ICF polystyrene wall surfaces must be covered with drywall panels or other wall coatings. [ 17 ] During the first months immediately after construction, minor problems with interior humidity may be evident as the concrete cures, which can damage the drywall. Dehumidification can be accomplished with small residential dehumidifiers or using the building's air conditioning system.
Depending on the experience of the contractor and their quality of work, improperly installed exterior foam insulation could be easy access for groundwater and insects. To help prevent these problems, some manufacturers make insecticide-treated foam blocks and promote installation of drainage sheeting and other methods for waterproofing. Drain tiles are installed to eliminate water.
Plumbing and electrical conduit can be placed inside the forms and poured into place, though settling problems could cause pipes to break, creating costly repairs. For this reason, plumbing and conduit as well as electrical cables are usually embedded directly into the foam before the wall coverings are applied. A hot knife or electric chainsaw is commonly used to create openings in the foam to lay piping and cabling, and electrical cables are inserted into the ICF using a cable punch, [ 18 ] while ICFs made from other materials are typically cut or routed with simple carpentry tools. Versions of simple carpentry tools suitable for cement-bonded forms are made for similar use with autoclaved aerated concrete .
The initial cost of using ICFs rather than conventional construction techniques is sensitive to the price of materials and labor, but building using ICF may add 3 to 5 percent to the total purchase price over building using wood frame. [ 19 ] In most cases ICF construction will cost about 40% less than conventional (basement) construction because of the labor savings from combining multiple steps into one step. Above grade, ICF construction is typically more expensive, but when adding large openings, ICF construction becomes very cost effective. Large openings in conventional construction require large headers and supporting posts, whereas ICF construction reduces the cost, as only reinforcing steel is needed directly around the opening.
ICF construction can allow up to 60% smaller heating and cooling units to service the same floor area, which can cut the cost of the final house by an estimated $0.75 per square foot. So, the estimated net extra cost can be as much as $0.25 to $3.25 per square foot. [ 20 ] [ 21 ] ICF homes can also qualify for tax credits in some jurisdictions, [ citation needed ] further lowering the costs.
ICF buildings are less expensive over time, as they require less energy to heat and cool the same size space compared to a variety of other common construction methods. Additionally, insurance costs can be much lower, as ICF homes are much less susceptible to damage from earthquakes, floods, hurricanes, fires, and other natural disasters. Maintenance and upkeep costs are also lessened, as ICF buildings do not contain wood, which can rot over time or be attacked by insects and rodents. [ 22 ]
In seismic and hurricane-prone areas, ICF construction provides strength, impact-resistance, durability, excellent sound insulation, and airtightness. ICF construction is ideal in moderate and mixed climates with significant daily temperature variations, in buildings designed to benefit from thermal mass strategies. [ 23 ]
The insulating value ( R-value ) of ICFs alone ranges from R-12 to R-28, which can be a good R-value for walls. [ 24 ] The energy savings compared to framed walls are in the range of 50% to 70%. [ 25 ] [ 26 ]
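To relate these R-values to steady-state losses, heat flux through a wall is the temperature difference divided by the RSI value (the US R-value divided by 5.678). The sketch below compares the extremes of the quoted ICF range; the 20 K temperature difference is an illustrative assumption:

```python
US_PER_SI = 5.678263  # R-value units (h·°F·ft²/BTU) per RSI unit (K·m²/W)

def wall_flux(r_us: float, delta_t_k: float) -> float:
    """Steady-state heat flux (W/m²) through a wall of a given US R-value."""
    return delta_t_k / (r_us / US_PER_SI)

for r in (12, 28):
    print(f"R-{r}: {wall_flux(r, 20.0):.1f} W/m² at a 20 K temperature difference")
```

An R-12 wall loses about 9.5 W/m² under these conditions and an R-28 wall about 4.1 W/m², illustrating the spread within the quoted range before framing, windows and air leakage are considered.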
ICF buildings may be more difficult to remodel than conventionally framed structures because specialized tools and methods are required to cut the concrete walls.
In the United Kingdom, buildings constructed using ICF may be unsuitable for homeowners wishing to free up capital using equity release . [ 27 ] | https://en.wikipedia.org/wiki/Insulating_concrete_form |
An insulator is a type of cis-regulatory element known as a long-range regulatory element . Found in multicellular eukaryotes and working over distances from the promoter element of the target gene, an insulator is typically 300 bp to 2000 bp in length. [ 1 ] Insulators contain clustered binding sites for sequence specific DNA-binding proteins [ 1 ] and mediate intra- and inter- chromosomal interactions. [ 2 ]
Insulators function either as an enhancer -blocker or a barrier, or both. The mechanisms by which an insulator performs these two functions include loop formation and nucleosome modifications. [ 3 ] [ 4 ] There are many examples of insulators, including the CTCF insulator, the gypsy insulator, and the β-globin locus. The CTCF insulator is especially important in vertebrates , while the gypsy insulator is implicated in Drosophila . The β-globin locus was first studied in chicken and then in humans for its insulator activity, both of which utilize CTCF. [ 5 ]
The genetic implications of insulators lie in their involvement in a mechanism of imprinting and their ability to regulate transcription . Mutations to insulators are linked to cancer as a result of cell cycle dysregulation, tumourigenesis , and silencing of growth suppressors.
Insulators have two main functions: [ 3 ] [ 4 ]
While enhancer-blocking is classified as an inter-chromosomal interaction, acting as a barrier is classified as an intra-chromosomal interaction. The need for insulators arises where two adjacent genes on a chromosome have very different transcription patterns; it is critical that the inducing or repressing mechanisms of one do not interfere with the neighbouring gene. [ 6 ] Insulators have also been found to cluster at the boundaries of topologically associating domains (TADs) and may have a role in partitioning the genome into "chromosome neighborhoods" - genomic regions within which regulation occurs. [ 7 ] [ 8 ]
Some insulators can act as both enhancer blocker and barriers, and some just have one of the two functions. [ 3 ] Some examples of different insulators are: [ 3 ]
Enhancer-blocking insulators share a similar mechanism of action: chromatin loop domains are formed in the nucleus that separate the enhancer and the promoter of a target gene. Loop domains are formed through enhancer-blocking elements interacting with each other or securing chromatin fibre to structural elements within the nucleus . [ 4 ] The action of these insulators depends on their being positioned between the promoter of the target gene and the upstream or downstream enhancer. The specific way in which insulators block enhancers depends on the enhancer's mode of action. Enhancers can directly interact with their target promoters through looping [ 9 ] (direct-contact model), in which case an insulator prevents this interaction through the formation of a loop domain that separates the enhancer and promoter sites and prevents the promoter-enhancer loop from forming. [ 4 ] An enhancer can also act on a promoter through a signal (tracking model of enhancer action). This signal may be blocked by an insulator through the targeting of a nucleoprotein complex at the base of the loop formation. [ 4 ]
Barrier activity has been linked to the disruption of specific processes in the heterochromatin formation pathway. These types of insulators modify the nucleosomal substrate in the reaction cycle that is central to heterochromatin formation. [ 4 ] Modifications are achieved through various mechanisms, including nucleosome removal, in which nucleosome-excluding elements prevent heterochromatin from spreading and silencing genes (chromatin-mediated silencing). Modification can also occur through recruitment of histone acetyltransferases and ATP-dependent nucleosome remodelling complexes. [ 4 ]
The CTCF insulator appears to have enhancer-blocking activity via its 3D structure [ 10 ] and no direct connection with barrier activity. [ 11 ] Vertebrates in particular appear to rely heavily on the CTCF insulator; however, many different insulator sequences have been identified. [ 2 ] Insulated neighborhoods formed by physical interaction between two CTCF-bound DNA loci contain the interactions between enhancers and their target genes. [ 12 ]
One mechanism of regulating CTCF is via methylation of its DNA sequence . CTCF protein is known to favourably bind to unmethylated sites, so it follows that methylation of CpG islands is a point of epigenetic regulation . [ 2 ] An example of this is seen in the Igf2-H19 imprinted locus where methylation of the paternal imprinted control region (ICR) prevents CTCF from binding. [ 13 ] A second mechanism of regulation is through regulating proteins that are required for fully functioning CTCF insulators. These proteins include, but are not limited to cohesin , RNA polymerase , and CP190. [ 2 ] [ 14 ]
The insulator element that is found in the gypsy retrotransposon of Drosophila is one of several sequences that have been studied in detail. The gypsy insulator is found in the 5' untranslated region (UTR) of the retrotransposon element. Gypsy affects the expression of adjacent genes upon insertion into a new genomic location, causing mutant phenotypes that are both tissue-specific and present at certain developmental stages. The insulator likely has an inhibitory effect on enhancers that control the spatial and temporal expression of the affected gene. [ 15 ]
The first example of an insulator in vertebrates was seen in the chicken β-globin locus, cHS4 . cHS4 marks the border between the active euchromatin in the β-globin locus and the upstream heterochromatin region that is highly condensed and inactive. The cHS4 insulator acts both as a barrier to chromatin-mediated silencing via heterochromatin spreading and as a blocker of interactions between enhancers and promoters. A distinguishing characteristic of cHS4 is that it has a repetitive heterochromatic region on its 5' end. [ 5 ]
The human β-globin locus homologue of cHS4 is HS5 . Different from the chicken β-globin locus, the human β-globin locus has an open chromatin structure and is not flanked by a 5' heterochromatic region. HS5 is thought to be a genetic insulator in vivo as it has both enhancer-blocking activity and transgene barrier activities. [ 5 ]
CTCF was first characterized for its role in regulating β-globin gene expression. At this locus, CTCF functions as an insulator-binding protein forming a chromosomal boundary. [ 13 ] CTCF is present in both the chicken β-globin locus and human β-globin locus. Within cHS4 of the chicken β-globin locus, CTCF binds to a region (FII) that is responsible for enhancer blocking activity. [ 5 ]
The ability of enhancers to activate imprinted genes is dependent on the presence of an insulator on the unmethylated allele between the two genes. An example of this is the Igf2-H19 imprinted locus. In this locus the CTCF protein regulates imprinted expression by binding to the unmethylated maternal imprinted control region (ICR) but not on the paternal ICR. When bound to the unmethylated maternal sequence, CTCF effectively blocks downstream enhancer elements from interacting with the Igf2 gene promoter, leaving only the H19 gene to be expressed . [ 13 ]
When insulator sequences are located in close proximity to the promoter of a gene, it has been suggested that they might serve to stabilize enhancer-promoter interactions. When they are located farther away from the promoter, insulator elements would compete with the enhancer and interfere with activation of transcription . [ 3 ] Loop formation is common in eukaryotes to bring distal elements (enhancers, promoters, locus control regions ) into closer proximity for interaction during transcription. [ 4 ] The mechanism of enhancer-blocking insulators then, if in the correct position, could play a role in regulating transcription activation. [ 3 ]
CTCF insulators affect the expression of genes implicated in cell cycle regulation processes that are important for cell growth, cell differentiation , and programmed cell death ( apoptosis ). Two of these cell cycle regulation genes that are known to interact with CTCF are hTERT and C-MYC. In these cases, a loss of function mutation to the CTCF insulator gene changes the expression patterns and may affect the interplay between cell growth, differentiation and apoptosis and lead to tumourigenesis or other problems. [ 2 ]
CTCF is also required for the expression of the tumour suppressor retinoblastoma (Rb) gene, and mutations and deletions of this gene are associated with inherited malignancies . When the CTCF binding site is removed, expression of Rb is decreased and tumours are able to thrive. [ 2 ]
Other genes that encode cell cycle regulators include BRCA1 and p53 , which are growth suppressors that are silenced in many cancer types, and whose expression is controlled by CTCF. Loss of function of CTCF in these genes leads to the silencing of the growth suppressor and contributes to the formation of cancer. [ 2 ]
The aberrant activation of insulators can modulate the expression of cancer-related genes, including matrix metalloproteinases involved in cancer cell invasion. [ 16 ] | https://en.wikipedia.org/wiki/Insulator_(genetics) |
Insulin receptor substrate (IRS) is an important adaptor protein in the insulin response of human cells.
IRS-1 , for example, is an IRS protein that contains a phosphotyrosine-binding domain ( PTB domain ). The insulin receptor , in turn, contains an NPXY motif . The PTB domain binds the NPXY sequence; thus, the insulin receptor binds IRS. | https://en.wikipedia.org/wiki/Insulin_receptor_substrate |
The insulin transduction pathway is a biochemical pathway by which insulin increases the uptake of glucose into fat and muscle cells and reduces the synthesis of glucose in the liver and hence is involved in maintaining glucose homeostasis . This pathway is also influenced by fed versus fasting states, stress levels , and a variety of other hormones . [ 1 ]
When carbohydrates are consumed, digested, and absorbed the pancreas senses the subsequent rise in blood glucose concentration and releases insulin to promote uptake of glucose from the bloodstream . When insulin binds to the insulin receptor , it leads to a cascade of cellular processes that promote the usage or, in some cases, the storage of glucose in the cell. The effects of insulin vary depending on the tissue involved, e.g., insulin is most important in the uptake of glucose by muscle and adipose tissue . [ 2 ]
This insulin signal transduction pathway is composed of trigger mechanisms (e.g., autophosphorylation mechanisms) that serve as signals throughout the cell. There is also a counter mechanism in the body to stop the secretion of insulin beyond a certain limit; namely, the counter-regulatory hormones glucagon and epinephrine. The process of the regulation of blood glucose (also known as glucose homeostasis ) also exhibits oscillatory behavior .
On a pathological basis, this topic is crucial to understanding certain disorders in the body such as diabetes , hyperglycemia and hypoglycemia .
The functioning of a signal transduction pathway is based on extracellular signaling that in turn creates a response that causes other subsequent responses, creating a chain reaction, or cascade. During the course of signaling, the cell uses each response to accomplish some purpose along the way. The insulin secretion mechanism is a common example of a signal transduction pathway.
Insulin is produced by the pancreas in regions called the islets of Langerhans . In the islets of Langerhans there are beta-cells , which are responsible for the production and storage of insulin. Insulin is secreted as a response mechanism to counteract rising excess amounts of glucose in the blood.
Glucose in the body increases after food consumption. This is primarily due to carbohydrate intake, and to a much lesser degree protein intake ( [1] ) ( [2] ). Depending on the tissue type, glucose enters the cell through facilitated diffusion or active transport. In muscle and adipose tissue, glucose enters through GLUT4 receptors via facilitated diffusion ( [3] ). In the brain, retina, kidney, red blood cells, placenta and many other organs, glucose enters using GLUT1 and GLUT3. In the beta-cells of the pancreas and in liver cells, glucose enters through GLUT2 receptors [ 3 ] (process described below).
Insulin biosynthesis is regulated at the transcriptional and translational levels. The β-cells promote protein transcription in response to nutrients. Exposure of rat islets of Langerhans to glucose for 1 hour remarkably induces intracellular proinsulin levels, while the proinsulin mRNA remains stable. This suggests that the acute response of insulin synthesis to glucose is independent of mRNA synthesis in the first 45 minutes, because blocking transcription decelerated insulin accumulation only after that time. [ 4 ] PTBPs, also called polypyrimidine tract binding proteins, are proteins that regulate the translation of mRNA. They increase the stability of mRNA and provoke the initiation of translation. PTBP1 enables the glucose-induced activation of insulin gene-specific mRNA and of insulin granule protein mRNA. [ 4 ]
Two aspects of the transduction pathway process are explained below: insulin secretion and insulin action on the cell.
The glucose that goes into the bloodstream after food consumption also enters the beta cells in the islets of Langerhans in the pancreas. The glucose diffuses in the beta-cell facilitated by a GLUT-2 vesicle. Inside the beta cell, the following process occurs:
Glucose is converted to glucose-6-phosphate (G6P) by glucokinase, and G6P is subsequently oxidized to form ATP . This process inhibits the ATP-sensitive potassium ion channels of the cell, causing them to close. The closure of the ATP-sensitive potassium channels causes depolarization of the cell membrane, which opens the voltage-gated calcium channels on the membrane and produces an influx of Ca 2+ ions.
This influx then stimulates fusion of the insulin vesicles with the cell membrane and secretion of insulin into the extracellular fluid outside the beta cell, thus making it enter the bloodstream. [ 5 ]
There are 3 subfamilies of Ca 2+ channels; L-type Ca 2+ channels, non-L-type Ca 2+ channels (including R-type) and the T-type Ca 2+ channels. There are two phases of the insulin secretion, the first phase involves the L-type Ca 2+ channels and the second phase involves the R-type Ca 2+ channels. The Ca 2+ influx generated by R-type Ca 2+ channels is not enough to cause insulin exocytosis, however, it increases the mobilization of the vesicles towards the cell membrane. [ 4 ]
Fatty acids also affect insulin secretion. In type 2 diabetes, fatty acids are able to potentiate insulin release to compensate for the increased need for insulin. It was found that the β-cells express free fatty acid receptors at their surface, through which fatty acids can impact the function of β-cells. Long-chain acyl-CoA and DAG are the metabolites resulting from the intracellular metabolism of fatty acids. Long-chain acyl-CoA has the ability to acylate proteins that are essential in the insulin granule fusion. On the other hand, DAG activates PKC, which is involved in insulin secretion. [ 4 ]
Several hormones can affect insulin secretion. Estrogen is correlated with an increase of insulin secretion by depolarizing the β-cells membrane and enhancing the entry of Ca 2+ . In contrast, growth hormone is known to lower the serum level of insulin by promoting the production of insulin-like growth factor-I (IGF-I). IGF-I, in turn, suppresses the insulin secretion. [ 4 ]
After insulin enters the bloodstream, it binds to a membrane-spanning receptor tyrosine kinase (RTK). This glycoprotein is embedded in the cellular membrane and has an extracellular receptor domain, made up of two α-subunits, and an intracellular catalytic domain made up of two β-subunits. The α-subunits act as insulin receptors and the insulin molecule acts as a ligand . Together, they form a receptor-ligand complex.
Binding of insulin to the α-subunit results in a conformational change of the protein, which activates tyrosine kinase domains on each β-subunit. The tyrosine kinase activity causes an autophosphorylation of several tyrosine residues in the β-subunit. The phosphorylation of 3 residues of tyrosine is necessary for the amplification of the kinase activity. [ 6 ]
This autophosphorylation triggers the activation of the docking proteins, in this case IRS (1-4) on which phosphatidylinositol-3-Kinase (PI-3K) can be attached or GRB2 where the ras guanine nucleotide exchange factor (GEF) (also known as SOS ) can be attached. [ 7 ]
PI-3K causes the phosphorylation of PIP2 to PIP3 . PIP3 acts as a docking site for PDPK1 and protein kinase B (also known as AKT); AKT is then phosphorylated by PDPK1 and a second kinase (sometimes called PDK2) to become activated. This leads to crucial metabolic functions such as the synthesis of lipids, proteins and glycogen, as well as cell survival and cell proliferation. Most importantly, the PI-3K pathway is responsible for the distribution of glucose for important cell functions, for example the suppression of hepatic glucose synthesis and the activation of glycogen synthesis. Hence, AKT plays a crucial role in linking the glucose transporter ( GLUT4 ) to the insulin signaling pathway. Activated GLUT4 translocates to the cell membrane and promotes the transport of glucose into the intracellular medium. [ 6 ]
The Ras-GEF stimulates the exchange of GDP to GTP in the RAS protein, causing it to activate. Ras then activates the mitogen-activated protein kinase (MAP-Kinase) route, which ultimately results in changes in protein activity and gene expression.
Thus, insulin's role is more of a promoter for the usage of glucose in the cells rather than neutralizing or counteracting it.
PI3K ( phosphoinositide 3-kinase ) is one of the important components in the regulation of the insulin signaling pathway, and it maintains insulin sensitivity in the liver. PI-3K is composed of a regulatory subunit (P85) and a catalytic subunit (P110). P85 regulates the activation of the PI3K enzyme. [ 8 ] In the PI-3K heterodimer (P85-P110), P85 is responsible for the PI3K activity by binding to the binding site on the insulin receptor substrates (IRS). It was noted that an increase of p85α (an isoform of P85) results in competition between the latter and the P85-P110 complex for the IRS binding site, reducing PI3K activity and leading to insulin resistance, a hallmark of type 2 diabetes.
It was also noted that increased serine phosphorylation of IRS proteins is involved in insulin resistance by reducing their ability to attract PI3K. Serine phosphorylation can also lead to the degradation of IRS-1. [ 7 ]
Signal transduction is a mechanism in which the cell responds to a signal from the environment by activating several proteins and enzymes that will give a response to the signal. Feedback mechanism might involve negative and positive feedbacks. In the negative feedback, the pathway is inhibited and the result of the transduction pathway is reduced or limited. In positive feedback, the transduction pathway is promoted and stimulated to produce more products.
Insulin secretion results in positive feedback in different ways. Firstly, insulin increases the uptake of glucose from blood by the translocation and exocytosis of GLUT4 storage vesicles in the muscle and fat cells. Secondly, it promotes the conversion of glucose into triglyceride in the liver, fat, and muscle cells. Finally, the cell will increase the rate of glycolysis within itself to break glucose in the cell into other components for tissue growth purposes.
An example of positive feedback mechanism in the insulin transduction pathway is the activation of some enzymes that inhibit other enzymes from slowing or stopping the insulin transduction pathway which results in improved intake of the glucose.
One of these pathways, involves the PI3K enzyme. This pathway is responsible for activating glycogen, lipid-protein synthesis, and specific gene expression of some proteins which will help in the intake of glucose.
Different enzymes control this pathway. Some of them restrain the pathway, providing negative feedback, such as the GSK-3 enzyme. Other enzymes push the pathway forward, providing positive feedback, such as the AKT and P70 enzymes.
When insulin binds to its receptor, it activates glycogen synthesis by inhibiting the enzymes that slow down the PI3K pathway, such as the PKA enzyme. At the same time, it promotes the function of the enzymes that provide positive feedback for the pathway, like the AKT and P70 enzymes. [ 9 ] Inactivating the enzymes that stop the reaction and activating the enzymes that provide positive feedback increases glycogen, lipid and protein synthesis and promotes glucose intake.
When insulin binds to the cell's receptor, it results in negative feedback by limiting or stopping some other actions in the cell. It inhibits the release and production of glucose from the cells which is an important part in reducing the glucose blood level. Insulin will also inhibit the breakdown of glycogen into glucose by inhibiting the expression of the enzymes that catalyzes the degradation of glycogen .
An example of negative feedback is slowing or stopping the intake of glucose after the pathway has been activated. Negative feedback is shown in the insulin signal transduction pathway by limiting the phosphorylation of the insulin-stimulated tyrosine residues. [ 10 ] The enzymes that deactivate the insulin-stimulated tyrosine residues by removing their phosphate groups are called protein tyrosine phosphatases (PTPases). When activated, these enzymes provide a negative feedback by catalyzing the dephosphorylation of the insulin receptors. [ 11 ] The dephosphorylation of the insulin receptor slows down glucose intake by preventing the activation (phosphorylation) of proteins responsible for the further steps of the insulin transduction pathway.
Insulin is synthesized and secreted in the beta cells of the islets of Langerhans. Once insulin is synthesized, the beta cells are ready to release it in two different phases. As for the first phase, insulin release is triggered rapidly when the blood glucose level is increased. The second phase is a slow release of newly formed vesicles that are triggered regardless of the blood sugar level.
Glucose enters the beta cells and goes through glycolysis to form ATP, which eventually causes depolarization of the beta cell membrane (as explained in the Insulin secretion section of this article). The depolarization causes voltage-gated calcium (Ca 2+ ) channels to open, allowing calcium to flow into the cells. The increased calcium level activates phospholipase C, which cleaves the membrane phospholipid phosphatidylinositol 4,5-bisphosphate into inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds to receptor proteins in the membrane of the endoplasmic reticulum (ER). This releases Ca 2+ from the ER via IP3-gated channels and raises the cell concentration of calcium even more. The influx of Ca 2+ ions causes the secretion of insulin stored in vesicles through the cell membrane. The process of insulin secretion is an example of a trigger mechanism in a signal transduction pathway, because insulin is secreted after glucose enters the beta cell and triggers several other processes in a chain reaction.
While insulin is secreted by the pancreas to lower blood glucose levels, glucagon is secreted to raise blood glucose levels. This is why glucagon has been known for decades as a counter-regulatory hormone. [ 12 ] When blood glucose levels are low, the pancreas secretes glucagon, which in turn causes the liver to convert stored glycogen polymers into glucose monomers, which is then released into the blood. This process is called glycogenolysis. Liver cells, or hepatocytes, have glucagon receptors which allow for glucagon to attach to them and thus stimulate glycogenolysis. [ 13 ] Contrary to insulin, which is produced by pancreatic β-cells, glucagon is produced by pancreatic α-cells. [ 14 ] It is also known that an increase in insulin suppresses glucagon secretion, and a decrease in insulin, along with low glucose levels, stimulates the secretion of glucagon. [ 14 ]
When blood glucose levels are too low, the pancreas is signaled to release glucagon, which has essentially the opposite effect of insulin and therefore opposes the reduction of glucose in the blood. Glucagon is delivered directly to the liver, where it connects to the glucagon receptors on the membranes of the liver cells and signals the conversion of the glycogen already stored in the liver cells into glucose. This process is called glycogenolysis .
Conversely, when the blood glucose levels are too high, the pancreas is signaled to release insulin. Insulin is delivered to the liver and other tissues throughout the body (e.g., muscle, adipose). When the insulin is introduced to the liver, it connects to the insulin receptors already present, that is, the receptor tyrosine kinase. [ 15 ] These receptors have two alpha subunits (extracellular) and two beta subunits (intracellular) which are connected through the cell membrane via disulfide bonds. When insulin binds to these alpha subunits, 'glucose transporter 4' (GLUT4) is released and transferred to the cell membrane to regulate glucose transport into and out of the cell. With the release of GLUT4, the entry of glucose into cells is increased, and the concentration of blood glucose therefore decreases. This, in other words, increases the utilization of the glucose already present in the liver. As glucose increases, the production of insulin increases, which thereby increases the utilization of the glucose; this maintains glucose levels in an efficient manner and creates an oscillatory behavior. | https://en.wikipedia.org/wiki/Insulin_signal_transduction_pathway |
An intact forest landscape ( IFL ) is an unbroken natural landscape of a forest ecosystem and its habitat – plant community components, in an extant forest zone. An IFL is a natural environment with no signs of significant human activity or habitat fragmentation , and of sufficient size to contain, support, and maintain the complex of indigenous biodiversity of viable populations of a wide range of genera and species , and their ecological effects . [ 1 ]
IFLs are estimated to cover 23 percent of forest ecosystems (13.1 million km 2 ). Two biomes hold almost all of these IFLs: dense tropical and subtropical forests (45 percent) and boreal forests (44 percent), while the proportion of IFLs in temperate broadleaf and mixed forests is very small. IFLs remain in 66 of the 149 countries that could potentially have them. Three of these countries, Canada , Russia , and Brazil , contain 64 percent of the total IFL area in the world. Nineteen percent of the global IFL area is under some form of protection, but only 10 percent is strictly protected, i.e., belongs to IUCN protected areas categories I–III. It is estimated that the planet has lost seven percent of its IFLs since 2000. [ 2 ]
The term "intact forest landscape" was developed by a group of environmental non-governmental organizations including Greenpeace , the World Resources Institute , Biodiversity Conservation Center, International Socio-Ecological Union, and Transparent World. IFL has been used in regional and global forest monitoring projects such as Intact-Forests.org, and in scientific forest ecology research.
The concept of an intact forest landscape and its technical definition were developed to help create, implement, and monitor policies concerning the human impact on forest landscapes at the regional or country levels.
Technically, an IFL is defined as an area which contains forest and non-forest ecosystems minimally influenced by human economic activity, with an area of at least 500 km 2 (50,000 ha) and a minimal width of 10 km (measured as the diameter of a circle that is entirely inscribed within the boundaries of the territory).
Areas with evidence of certain types of human influence are considered "disturbed" and not eligible for inclusion in an IFL.
Areas with evidence of low intensity and old disturbances are treated as subject to “background” influence and are eligible for inclusion in an IFL. Sources of background influence include local shifting cultivation activities, diffuse grazing by domesticated animals, low-intensity selective logging and hunting.
This definition builds on and refines the concept of a frontier forest as has been used by the World Resources Institute . [ 3 ]
Most of the world’s original forests have either been lost to conversion or altered by logging and forest management. Forests that still combine large size with insignificant human influence are becoming increasingly important as their global extent continues to shrink.
Ecosystems are generally better able to support their natural biological diversity and ecological processes the lower their exposure to humans and the greater their area. They are also better able to absorb and recover from disturbance (resistance and resilience).
Fragmentation and loss of natural habitats are the main factors threatening plant and animal species with extinction . Forest biodiversity largely depends on intact forest landscapes. Large roaming animals (such as forest elephants, great apes, bears, wolves, tigers, jaguars, eagles, deer, etc.) especially require that intact forest landscapes be preserved. Loss of natural habitat can occur through introduction of forest monoculture or by even aged timber management , which are also destructive of biodiversity [ 4 ] and wildlife abundance. For example, many wildlife species such as the wild turkey depend upon variegation of tree ages and sizes for its optimal sub-canopy flight; [ 5 ] forests that have been managed for even aged composition fail to achieve abundance values of the wild turkey and many other organisms.
Large natural forest areas are also important for maintaining ecological processes and supplying ecosystem services like water and air purification, nutrient cycling , carbon sequestration , erosion and flood control .
The conservation value of forest landscapes that are free from human disturbance is therefore high, although it varies among regions. At the same time the cost of conserving large unpopulated areas is often low. The same factors that have kept them from being developed, such as remoteness and low economic value, also help to reduce the cost of protecting them. [ 6 ]
Several international initiatives to protect forest biodiversity ( CBD ), to reduce carbon emissions from deforestation and forest degradation ( IGBP , REDD [ 7 ] ), and to stimulate use of sustainable forest management practices ( FSC ) require that large natural forest areas be preserved. Mapping, conservation and monitoring of intact forest landscapes is therefore a task of global importance.
Several attempts have been made since the 1990s to map the remaining extent of large natural forests. At the global level, these include: wilderness area maps by McCloskey and Spalding; [ 9 ] human footprint map by Sanderson, et al.; [ 10 ] and frontier forests map by Bryant, et al. [ 3 ] These efforts have generally combined already existing maps and information to identify areas of low human impact at a coarse scale, typically no finer than 1:16 million.
The IFL mapping initiatives differ from these by using the IFL definition mentioned above, by using information from satellites in addition to other sources, and by producing results at a much finer scale, approximately 1:1 million.
The first regional IFL map was presented by Greenpeace Russia in 2001, covering northern European Russia. [ 6 ] The report also contains a complete description of the IFL concept and the mapping algorithm.
A number of regional IFL maps were presented in 2002–2006, using similar methods, by a group of scientists and environmental non-governmental organizations under the framework of Global Forest Watch , an initiative of the World Resources Institute . [ 11 ]
Using the same method, a global IFL map was prepared in 2005–2006 under the leadership of Greenpeace, with contributions from the Biodiversity Conservation Center, International Socio-Ecological Union, Transparent World (Russia), Finnish Nature League, Forest Watch Indonesia, and Global Forest Watch . [ 8 ] [ 12 ]
The global IFL map relies on publicly available high spatial resolution satellite imagery provided by Global Land Cover Facility (GLCF) and USGS and on a simple and consistent set of criteria.
The IFL concept is a useful tool for making, implementing, and monitoring policy in the realms of sustainable forest management, conservation and climate, as shown by the following examples.
The distinction between intact and non-intact forest landscapes can be used to account for losses of carbon from forest degradation, as proposed by Mollicone, et al. [ 13 ] The global IFL map [ 14 ] provides a geographically explicit baseline with several advantages.
Conservation of large IFLs is a robust and cost-effective way to protect biodiversity and maintain ecological integrity and should therefore be an important component of a global conservation strategy. The remoteness and large size of these areas provide the best guarantee for their continued intactness. Withdrawing remaining intact areas from the production base would lead to small or negligible economic loss.
Russian NGOs have, for example, used IFL maps to argue that the most valuable of the remaining intact natural landscapes of northern European Russia and the Far East be preserved, and to propose several new national parks: Kutsa and Hibiny (Murmansk Region), Kalevalsky (Karelia Republic) and Onezhskoye Pomorye (Arkhangelsk Region).
Several boreal countries are using the IFL concept in the context of forest certification. One of the categories of High Conservation Value Forest used by the Forest Stewardship Council [ 15 ] is analogous to that of IFLs. The formulation used in the Canadian and Russian national FSC standards—globally, nationally, or regionally significant forest landscapes, un-fragmented by permanent infrastructure and of a size to maintain viable populations of most species—calls for IFL maps for implementation. IFLs are directly mentioned among other categories of High Conservation Value Forest in the FSC Controlled Wood standard. [ 16 ]
Several retailers, including IKEA [ 17 ] and Lowe's, [ 18 ] have committed not to use wood from IFLs unless intactness values are preserved. Others, such as Bank of America , invest only in companies that maintain such values. [ 19 ] These companies use regional IFL maps to implement their policies. | https://en.wikipedia.org/wiki/Intact_forest_landscape |
An intake (also inlet ) is an opening, structure or system through which a fluid is admitted to a space or machine as a consequence of a pressure differential between the outside and the inside. The pressure difference may be generated on the inside by a mechanism, or on the outside by ram pressure or hydrostatic pressure . Flow rate through the intake depends on pressure difference, fluid properties, and intake geometry.
In hydraulics, an intake is an opening, or area, together with its defining edge profile (which has an associated entry loss), that captures pipe flow from a reservoir or storage tank . [ 1 ] In aerospace, an intake is the capture area and attached ducting of an aircraft gas turbine engine [ 2 ] or ramjet engine ; as such, an intake is followed by a compressor or combustion chamber , and it may instead be referred to as a diffuser . [ 3 ] For an automobile engine, the components through which the air flows to the engine cylinders are collectively known as an intake system [ 4 ] and may include the inlet port and valve. [ 5 ] An intake for a hydroelectric power plant is the capture area in a reservoir which feeds a pressure pipe, or penstock , or an open canal. [ 6 ]
Early automobile intake systems were simple air inlets connected directly to carburetors . The first air filter was implemented on the 1915 Packard Twin Six . [ citation needed ]
The modern automobile air intake system has three main parts: an air filter , mass flow sensor , and throttle body . Some modern intake systems can be highly complex, and often include specially-designed intake manifolds to optimally distribute air and air/fuel mixture to each cylinder. Many cars today include a silencer to minimize the noise entering the cabin. [ citation needed ] Silencers impede airflow and create turbulence which reduce total power, so performance enthusiasts often remove them. [ citation needed ]
All the above is usually accomplished by flow testing on a flow bench in the port design stage. Cars with turbochargers or superchargers which provide pressurized air to the engine usually have highly refined intake systems to improve performance dramatically. [ citation needed ]
Production cars have specific-length air intakes to cause the air to resonate at a specific frequency to assist airflow into the combustion chamber. [ citation needed ] Aftermarket companies for cars have introduced larger throttle bodies and air filters to decrease restriction of flow at the cost of changing the harmonics of the air intake for a small net increase in power or torque . [ citation needed ]
Aircraft using piston engines use intake systems similar to automobiles.
With the development of jet engines and the subsequent ability of aircraft to travel at supersonic speeds, it was necessary to design inlets to provide the flow required by the engine over a wide operating envelope and to provide air with a high-pressure recovery and low distortion. These designs became more complex as aircraft speeds increased to Mach 3.0 and Mach 3.2, design points for the XB-70 and SR-71 respectively. The inlet is part of the fuselage or part of the nacelle.
Aircraft with a maximum speed greater than about Mach 2 use intakes with variable geometry to achieve good pressure recovery from take-off to maximum speed. [ 7 ] | https://en.wikipedia.org/wiki/Intake |
Intake momentum drag is an aerodynamic phenomenon which affects turboprop and jet-powered aircraft. [ 1 ] [ 2 ]
Intake momentum drag arises because the speed of the air entering the engine increases while the exit speed of the air from the engine remains constant. The amount by which the engine increases air velocity, ostensibly by way of the compression process, is therefore reduced, causing a slight reduction in the thrust of a jet engine. [ 1 ]
Intake momentum drag yaw is a further consequence of intake momentum drag which affects V/STOL (vertical and/or short take-off and landing) aircraft such as the Hawker Siddeley Harrier . [ 3 ]
Intake momentum drag yaw is an effect in which the mass of air ingested by the intake of the engine, whilst the aircraft is in the hover during a crosswind, can result in a state of uncontrolled roll (a secondary aerodynamic effect of yaw).
The phenomenon was identified during the test flying programme for the Harrier and required precise investigation. This resulted in test pilot John Farley deliberately flying right into the edge of this condition repeatedly, so that a system to counteract the effect could be developed. [ 3 ] | https://en.wikipedia.org/wiki/Intake_momentum_drag |
In mathematics , an integer-valued polynomial (also known as a numerical polynomial ) P(t) is a polynomial whose value P(n) is an integer for every integer n . Every polynomial with integer coefficients is integer-valued, but the converse is not true. For example, the polynomial

P(t) = t(t + 1)/2

takes on integer values whenever t is an integer. That is because one of t and t + 1 must be an even number . (The values this polynomial takes are the triangular numbers .)
Integer-valued polynomials are objects of study in their own right in algebra, and frequently appear in algebraic topology . [ 1 ]
The class of integer-valued polynomials was described fully by George Pólya ( 1915 ). Inside the polynomial ring Q[t] of polynomials with rational number coefficients, the subring of integer-valued polynomials is a free abelian group . It has as basis the polynomials

\binom{t}{k} = \frac{t(t-1)\cdots(t-k+1)}{k!}

for k = 0, 1, 2, …, i.e., the binomial coefficients . In other words, every integer-valued polynomial can be written as an integer linear combination of binomial coefficients in exactly one way. The proof is by the method of discrete Taylor series : binomial coefficients are integer-valued polynomials, and conversely, the discrete difference of an integer series is an integer series, so the discrete Taylor series of an integer series generated by a polynomial has integer coefficients (and is a finite series).
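As a small worked example (added for illustration; not part of the original text), the monomial t^2 expands in this basis as

t^2 = 2\binom{t}{2} + \binom{t}{1},

since 2 · t(t − 1)/2 + t = t^2. The coefficients 2 and 1 are exactly the discrete (forward-difference) Taylor coefficients of the sequence 0, 1, 4, 9, …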
Integer-valued polynomials may be used effectively to solve questions about fixed divisors of polynomials. For example, the polynomials P with integer coefficients that always take on even number values are just those such that P/2 is integer-valued. Those in turn are the polynomials that may be expressed as a linear combination with even integer coefficients of the binomial coefficients.
In questions of prime number theory, such as Schinzel's hypothesis H and the Bateman–Horn conjecture , it is a matter of basic importance to understand the case when P has no fixed prime divisor (this has been called Bunyakovsky's property [ citation needed ] , after Viktor Bunyakovsky ). By writing P in terms of the binomial coefficients, we see the highest fixed prime divisor is also the highest prime common factor of the coefficients in such a representation. So Bunyakovsky's property is equivalent to coprime coefficients.
As an example, the pair of polynomials n and n^2 + 2 violates this condition at p = 3: for every n the product

n(n^2 + 2)

is divisible by 3, which follows from the representation

n(n^2 + 2) = 6\binom{n}{3} + 6\binom{n}{2} + 3\binom{n}{1}

with respect to the binomial basis, where the highest common factor of the coefficients—hence the highest fixed divisor of n(n^2 + 2)—is 3.
Numerical polynomials can be defined over other rings and fields, in which case the integer-valued polynomials above are referred to as classical numerical polynomials . [ citation needed ]
The K-theory of BU( n ) is given by the numerical (symmetric) polynomials.
The Hilbert polynomial of a polynomial ring in k + 1 variables is the numerical polynomial \binom{t+k}{k} . | https://en.wikipedia.org/wiki/Integer-valued_polynomial |
In computer science, an integer is a datum of integral data type , a data type that represents some range of mathematical integers . [ 1 ] Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies, so the set of integer sizes available varies between different types of computers. Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.
The value of an item with an integral type is the mathematical integer that it corresponds to. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well). [ 2 ]
An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −. Some programming languages allow other notations, such as hexadecimal (base 16) or octal (base 8). Some programming languages also permit digit group separators . [ 3 ]
The internal representation of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value.
The most common representation of a positive integer is a string of bits , using the binary numeral system . The order of the memory bytes storing the bits varies; see endianness . The width , precision , or bitness [ 4 ] of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example an unsigned type typically represents the non-negative values 0 through 2^n − 1. Other encodings of integer values to bit patterns are sometimes used, for example binary-coded decimal or Gray code , or as printed character codes such as ASCII .
There are four well-known ways to represent signed numbers in a binary computing system. The most common is two's complement , which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1) − 1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0 ), and because addition , subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary , sign-magnitude , and ones' complement .
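As a brief illustration of these properties (a minimal sketch added here, not part of the original article), the following C program reads one 8-bit pattern both as unsigned and as two's-complement signed:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t u = 0xFF;       /* bit pattern 1111 1111, value 255 unsigned */
    int8_t  s = (int8_t)u;  /* same bits read as two's complement: -1
                               (strictly implementation-defined before C23,
                               but -1 on common two's-complement platforms) */
    printf("unsigned: %u, signed: %d\n", (unsigned)u, (int)s);
    /* The n = 8 ranges from the text: -2^7 .. 2^7 - 1 and 0 .. 2^8 - 1 */
    printf("int8_t: %d..%d, uint8_t: 0..%u\n",
           INT8_MIN, INT8_MAX, (unsigned)UINT8_MAX);
    return 0;
}
```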
Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation. An integer in one programming language may be a different size in a different language, on a different processor, or in an execution context of different bitness; see § Words .
Some older computer architectures used decimal representations of integers, stored in binary-coded decimal (BCD) or other format. These values generally require data sizes of 4 bits per decimal digit (sometimes called a nibble ), usually with additional bits for a sign. Many modern CPUs provide limited support for decimal integers as an extended datatype, providing instructions for converting such values to and from binary values. Depending on the architecture, decimal integers may have fixed sizes (e.g., 7 decimal digits plus a sign fit into a 32-bit word), or may be variable-length (up to some maximum digit size), typically occupying two digits per byte (octet).
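As a hedged sketch of the "two digits per byte" packing just described (to_packed_bcd is a hypothetical helper name, not from the article), in C:

```c
#include <stdio.h>
#include <stdint.h>

/* Pack a value 0..99 into one packed-BCD byte: high nibble = tens digit,
   low nibble = ones digit. */
static uint8_t to_packed_bcd(uint8_t v) {
    return (uint8_t)(((v / 10) << 4) | (v % 10));
}

int main(void) {
    /* Decimal 42 packs to the byte 0x42: each hex digit is a decimal digit. */
    printf("0x%02X\n", to_packed_bcd(42));
    return 0;
}
```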
[Table of common integral data types omitted; its widest entries, 128-bit integers, are used for IPv6 addresses and GUIDs.]
Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths.
The table above lists integral type widths that are supported in hardware by common processors. High-level programming languages provide more possibilities. It is common to have a 'double width' integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (that can represent only the integers in a specified range).
Some languages, such as Lisp , Smalltalk , REXX , Haskell , Python , and Raku , support arbitrary precision integers (also known as infinite precision integers or bignums ). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl 's " bigint " package. [ 7 ] These use as much of the computer's memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they, too, can only represent a finite subset of the mathematical integers. These schemes support very large numbers; for example one kilobyte of memory could be used to store numbers up to 2466 decimal digits long.
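As a hedged C-language sketch of this idea using the GNU MP (GMP) bignum library (GMP is an assumption here; the passage itself names only Java's BigInteger and Perl's bigint):

```c
#include <stdio.h>
#include <gmp.h>   /* compile with: cc bignum.c -lgmp */

int main(void) {
    mpz_t big;                    /* arbitrary precision integer */
    mpz_init(big);
    mpz_ui_pow_ui(big, 2, 8192);  /* 2^8192: about what one kilobyte
                                     (8192 bits) of storage can hold */
    /* Print how many decimal digits that is: on the order of the
       ~2466 digits mentioned in the text. */
    printf("decimal digits: %zu\n", mpz_sizeinbase(big, 10));
    mpz_clear(big);
    return 0;
}
```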
A Boolean type is a type that can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access.
A four-bit quantity is known as a nibble (when eating, being smaller than a bite ) or nybble (being a pun on the form of the word byte ). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal.
The term byte initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machine'), or that could only address 16- or 32-bit quantities ('word-addressed machine'). The term byte was usually not used at all in connection with bit- and word-addressed machines.
The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking , where computers with different byte widths might have to communicate.
In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet .
The term 'word' is used for a small group of bits that are handled simultaneously by processors of a particular architecture . The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 40-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word , such as longword , doubleword , quadword , and halfword , also vary with the CPU and OS. [ 8 ]
Practically all new desktop processors are capable of using 64-bit words, though embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers.
One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^15 − 1, the program will fail on computers with 16-bit integers. That variable should have been declared as long , which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers. This issue is resolved by C99 in stdint.h in the form of intptr_t .
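A minimal C99 sketch of the remedies this paragraph describes (fixed-width types from stdint.h, and intptr_t for a lossless pointer round-trip):

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int32_t counter = 40000;     /* guaranteed 32 bits, unlike a bare int */
    int x = 7;
    intptr_t ip = (intptr_t)&x;  /* integer type wide enough for a pointer */
    int *p = (int *)ip;          /* round-trip back without loss */
    printf("counter = %" PRId32 ", *p = %d\n", counter, *p);
    return 0;
}
```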
The bitness of a program may refer to the word size (or bitness) of the processor on which it runs, or it may refer to the width of a memory address or pointer, which can differ between execution modes or contexts. For example, 64-bit versions of Microsoft Windows support existing 32-bit binaries, and programs compiled for Linux's x32 ABI run in 64-bit mode yet use 32-bit memory addresses. [ 9 ]
The standard integer size is platform-dependent.
In C , it is denoted by int and required to be at least 16 bits. Windows and Unix systems have 32-bit int s on both 32-bit and 64-bit architectures.
A short integer can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine.
In C , it is denoted by short . It is required to be at least 16 bits, and is often smaller than a standard integer, but this is not required. [ 10 ] [ 11 ] A conforming program can assume that it can safely store values between −(2^15 − 1) [ 12 ] and 2^15 − 1, [ 13 ] but it may not assume that the range is not larger. In Java , a short is always a 16-bit integer. In the Windows API , the datatype SHORT is defined as a 16-bit signed integer on all machines. [ 8 ]
A long integer can represent a whole integer whose range is greater than or equal to that of a standard integer on the same machine.
In C , it is denoted by long . It is required to be at least 32 bits, and may or may not be larger than a standard integer. A conforming program can assume that it can safely store values between −(2^31 − 1) [ 12 ] and 2^31 − 1, [ 13 ] but it may not assume that the range is not larger.
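To make the platform dependence of these types concrete (an illustrative sketch; the printed values vary with the platform's ABI), C's limits.h exposes the actual ranges:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Standard minimum guarantees: short and int at least 16 bits,
       long at least 32 bits; actual widths are platform-dependent. */
    printf("short: %d .. %d\n",   SHRT_MIN, SHRT_MAX);
    printf("int:   %d .. %d\n",   INT_MIN,  INT_MAX);
    printf("long:  %ld .. %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}
```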
In the C99 version of the C programming language and the C++11 version of C++ , a long long type is supported that has double the minimum capacity of the standard long . This type is not supported by compilers that require C code to be compliant with the previous C++ standard, C++03, because the long long type did not exist in C++03. For an ANSI/ISO compliant compiler, the minimum requirements for the specified ranges, that is, −(2^63 − 1) [ 12 ] to 2^63 − 1 for signed and 0 to 2^64 − 1 for unsigned, [ 13 ] must be fulfilled; however, extending this range is permitted. [ 18 ] [ 19 ] This can be an issue when exchanging code and data between platforms, or doing direct hardware access. Thus, there are several sets of headers providing platform independent exact width types. The C standard library provides stdint.h ; this was introduced in C99 and C++11.
Integer literals can be written as regular Arabic numerals , consisting of a sequence of digits and with negation indicated by a minus sign before the value. However, most programming languages disallow use of commas or spaces for digit grouping . Examples of integer literals are 42, 1000, and −233.
There are several alternate methods for writing integer literals in many programming languages, such as hexadecimal literals with a 0x prefix, octal literals with a leading 0, binary literals with a 0b prefix in some languages, and underscores as digit group separators (for example 1_000_000 in Java and Python).
In many programming languages, there exist predefined constants representing the least and the greatest values representable with a given integer type.
Names for these include INT_MAX and INT_MIN in C, and Integer.MAX_VALUE and Integer.MIN_VALUE in Java. | https://en.wikipedia.org/wiki/Integer_(computer_science) |
In computational complexity theory , an integer circuit is a circuit model of computation in which inputs to the circuit are sets of integers and each gate of the circuit computes either a set operation or an arithmetic operation on its input sets.
As an algorithmic problem, the possible questions are to decide, given an integer circuit, whether a given integer is an element of the output node, or whether two circuits compute the same set. Decidability is still an open question, but there are results on restrictions of those circuits. Finding answers to some questions about this model could serve as a proof of many important mathematical conjectures, like Goldbach's conjecture .
It is a natural extension of the circuits over sets of natural numbers to the case where the considered sets may also contain negative integers; the definitions, which do not change, are not repeated on this page, and only the differences are mentioned.
The membership problem is the problem of deciding, given an integer circuit C , an input to the circuit X , and a specific integer n , whether the integer n is in the output of the circuit C when provided with input X . The computational complexity of this problem depends on the type of gates allowed in the circuit C . [ 1 ] The table below summarizes the computational complexity of the membership problem for various classes of integer circuits.
Here, MF_Z(O) denotes the classes defined by O-formulae, which are O-circuits with maximal fan-out 1. | https://en.wikipedia.org/wiki/Integer_circuit |
The study of integer points in convex polyhedra [ 1 ] is motivated by questions such as "how many nonnegative integer -valued solutions does a system of linear equations with nonnegative coefficients have" or "how many solutions does an integer linear program have". Counting integer points in polyhedra or other questions about them arise in representation theory , commutative algebra , algebraic geometry , statistics , and computer science . [ 2 ]
The set of integer points, or, more generally, the set of points of an affine lattice , in a polyhedron is called a Z-polyhedron , [ 3 ] from the mathematical notation ℤ or Z for the set of integer numbers. [ 4 ]
For a lattice Λ, Minkowski's theorem relates the number d(Λ) (the volume of a fundamental parallelepiped of the lattice) and the volume of a given symmetric convex set S to the number of lattice points contained in S .
The number of lattice points contained in a polytope all of whose vertices are elements of the lattice is described by the polytope's Ehrhart polynomial . Formulas for some of the coefficients of this polynomial involve d(Λ) as well.
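As a small worked example (added for illustration; not in the original text): for the unit d-cube with vertices at lattice points, dilating by a positive integer t gives the Ehrhart polynomial

L(t) = #( t[0,1]^d ∩ Z^d ) = (t + 1)^d .

For the unit square (d = 2) and t = 2, this predicts 3 × 3 = 9 lattice points, which a direct count confirms.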
In certain approaches to loop optimization , the set of the executions of the loop body is viewed as the set of integer points in a polyhedron defined by loop constraints. | https://en.wikipedia.org/wiki/Integer_points_in_convex_polyhedra |
isl ( integer set library ) is a portable C library for manipulating sets and relations of integer points bounded by linear constraints . [ 2 ]
The following operations are supported: [ 3 ]
It also includes an ILP solver based on generalized basis reduction , transitive closures on maps (which may encode infinite graphs ), dependence analysis and bounds on piecewise step-polynomials.
All computations are performed in exact integer arithmetic using GMP or imath.
Many program analysis techniques are based on integer set manipulations. The integers typically represent iterations of a loop nest or elements of an array .
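A hedged C sketch of this usage, building the iteration domain of a triangular loop nest with isl's string-based constructor (the set syntax and the isl_set_* calls below follow isl's public API, but treat the sketch as illustrative rather than authoritative; compile and link against isl as configured):

```c
#include <isl/ctx.h>
#include <isl/set.h>

int main(void) {
    isl_ctx *ctx = isl_ctx_alloc();
    /* Iteration domain of:
     *   for (i = 0; i < n; ++i)
     *     for (j = 0; j <= i; ++j)
     *       S(i, j);
     */
    isl_set *domain = isl_set_read_from_str(ctx,
        "[n] -> { S[i, j] : 0 <= i < n and 0 <= j <= i }");
    isl_set_dump(domain);   /* debug-print the set */
    isl_set_free(domain);
    isl_ctx_free(ctx);
    return 0;
}
```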
isl uses parametric integer programming to obtain an explicit representation in terms of integer divisions.
It is used as backend polyhedral library in the GCC Graphite framework [ 4 ] and in the LLVM Polly framework [ 5 ] for loop optimizations . | https://en.wikipedia.org/wiki/Integer_set_library |
Integrable algorithms are numerical algorithms that rely on basic ideas from the mathematical theory of integrable systems . [ 1 ]
The theory of integrable systems has advanced together with its connections to numerical analysis. For example, the discovery of solitons came from numerical experiments on the KdV equation by Norman Zabusky and Martin David Kruskal . [ 2 ] Today, various relations between numerical analysis and integrable systems have been found ( Toda lattice and numerical linear algebra , [ 3 ] [ 4 ] discrete soliton equations and series acceleration [ 5 ] [ 6 ] ), and studies to apply integrable systems to numerical computation are rapidly advancing. [ 7 ] [ 8 ]
Generally, it is hard to accurately compute the solutions of nonlinear differential equations due to their non-linearity. In order to overcome this difficulty, R. Hirota made discrete versions of integrable systems with the guiding principle of preserving the mathematical structures of integrable systems in their discrete versions. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ]
At the same time, Mark J. Ablowitz and others have not only constructed discrete soliton equations with discrete Lax pairs but also compared numerical results between integrable difference schemes and ordinary methods. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] As a result of their experiments, they found that the accuracy can be improved with integrable difference schemes in some cases. [ 19 ] [ 20 ] [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Integrable_algorithm |
In algebra, an integrable module (or integrable representation ) of a Kac–Moody algebra 𝔤 (a certain infinite-dimensional Lie algebra ) is a representation of 𝔤 such that (1) it is a sum of weight spaces and (2) the Chevalley generators e_i, f_i of 𝔤 are locally nilpotent . [ 1 ] For example, the adjoint representation of a Kac–Moody algebra is integrable . [ 2 ]
This algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Integrable_module |
In mathematics, integrability is a property of certain dynamical systems . While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities , or first integrals , that its motion is confined to a submanifold of much smaller dimensionality than that of its phase space .
Three features are often referred to as characterizing integrable systems: [ 1 ] the existence of a maximal set of conserved quantities (the usual defining property of complete integrability), the existence of algebraic invariants having a basis in algebraic geometry (a property sometimes known as algebraic integrability), and the explicit determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability).
Integrable systems may be seen as very different in qualitative character from more generic dynamical systems, which are more typically chaotic systems . The latter generally have no conserved quantities, and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time.
Many systems studied in physics are completely integrable, in particular, in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top ) and the motion of an axially symmetric rigid body about a point in its axis of symmetry (the Lagrange top ).
In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves ( Korteweg–de Vries equation ), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equation , and certain integrable many-body systems, such as the Toda lattice . The modern theory of integrable systems was revived with the numerical discovery of solitons by Martin Kruskal and Norman Zabusky in 1965, which led to the inverse scattering transform method in 1967.
In the special case of Hamiltonian systems, if there are enough independent Poisson commuting first integrals for the flow parameters to be able to serve as a coordinate system on the invariant level sets (the leaves of the Lagrangian foliation ), and if the flows are complete and the energy level set is compact, this implies the Liouville–Arnold theorem ; i.e., the existence of action-angle variables . General dynamical systems have no such conserved quantities; in the case of autonomous Hamiltonian systems, the energy is generally the only one, and on the energy level sets, the flows are typically chaotic.
A key ingredient in characterizing integrable systems is the Frobenius theorem , which states that a system is Frobenius integrable (i.e., is generated by an integrable distribution) if, locally, it has a foliation by maximal integral manifolds. But integrability, in the sense of dynamical systems , is a global property, not a local one, since it requires that the foliation be a regular one, with the leaves embedded submanifolds.
Integrability does not necessarily imply that generic solutions can be explicitly expressed in terms of some known set of special functions ; it is an intrinsic property of the geometry and topology of the system, and the nature of the dynamics.
In the context of differentiable dynamical systems , the notion of integrability refers to the existence of invariant, regular foliations ; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are invariant under the flow . There is thus a variable notion of the degree of integrability, depending on the dimension of the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems , known as complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this context.
An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can be adapted to describe evolution equations that either are systems of differential equations or finite difference equations .
The distinction between integrable and nonintegrable dynamical systems has the qualitative implication of regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be explicitly integrated in an exact form.
In the special setting of Hamiltonian systems , we have the notion of integrability in the Liouville sense. (See the Liouville–Arnold theorem .) Liouville integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the Hamiltonian vector fields associated with the invariants of the foliation span the tangent distribution. Another way to state this is that there exists a maximal set of functionally independent Poisson commuting invariants (i.e., independent functions on the phase space whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish).
In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of constants), it must have even dimension 2 n , {\displaystyle 2n,} and the maximal number of independent Poisson commuting invariants (including the Hamiltonian itself) is n {\displaystyle n} . The leaves of the foliation are totally isotropic with respect to the symplectic form and such a maximal isotropic foliation is called Lagrangian . All autonomous Hamiltonian systems (i.e. those for which the Hamiltonian and Poisson brackets are not explicitly time-dependent) have at least one invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are compact, the leaves of the Lagrangian foliation are tori , and the natural linear coordinates on these are called "angle" variables. The integrals of the canonical 1 {\displaystyle 1} -form over the cycles of these tori are called the action variables, and the resulting canonical coordinates are called action-angle variables (see below).
There is also a distinction between complete integrability , in the Liouville sense, and partial integrability, as well as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting, and hence the dimension of the leaves of the invariant foliation is less than n, we say the system is superintegrable . If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable.
When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori . There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables , such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the tori. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.
In canonical transformation theory, there is the Hamilton–Jacobi method , in which solutions to Hamilton's equations are sought by first finding a complete solution of the associated Hamilton–Jacobi equation . In classical terminology, this is described as determining a transformation to a canonical set of coordinates consisting of completely ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical "position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In the case of compact energy level sets, this is the first step towards determining the action-angle variables . In the general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends on n independent constants of integration, where n is the dimension of the configuration space), exists in very general cases, but only in the local sense. Therefore, the existence of a complete solution of the Hamilton–Jacobi equation is by no means a characterization of complete integrability in the Liouville sense. Most cases that can be "explicitly integrated" involve a complete separation of variables , in which the separation constants provide the complete set of integration constants that are required. Only when these constants can be reinterpreted, within the full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense.
A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons , which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation (which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often reducible to Riemann–Hilbert problems ), which generalize local linear methods like Fourier analysis to nonlocal linearization, through the solution of associated integral equations.
The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably generalized sense) is invariant under the evolution, cf. Lax pair . This provides, in certain cases, enough invariants, or "integrals of motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability. However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a transformation to completely ignorable coordinates , in which the conserved quantities form half of a doubly infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a transformation to action-angle variables, although typically only a finite number of the "position" variables are actually angle coordinates, and the rest are noncompact.
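The Lax-pair mechanism can be made concrete numerically. The sketch below (system size, initial data, and tolerances are arbitrary choices) integrates the open Toda lattice in Flaschka variables (a_i, b_i): the entries of the tridiagonal Lax matrix evolve in time, but its eigenvalues, the integrals of motion, stay fixed up to integration error.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 5
rng = np.random.default_rng(0)
a0 = rng.uniform(0.5, 1.5, n - 1)   # off-diagonal Lax entries
b0 = rng.uniform(-1.0, 1.0, n)      # diagonal Lax entries

def toda_rhs(t, y):
    # Flaschka form: da_i = a_i (b_{i+1} - b_i), db_i = 2(a_i^2 - a_{i-1}^2)
    a, b = y[: n - 1], y[n - 1 :]
    da = a * (b[1:] - b[:-1])
    db = np.zeros(n)
    db[:-1] += 2 * a**2
    db[1:] -= 2 * a**2
    return np.concatenate([da, db])

def lax_eigs(y):
    a, b = y[: n - 1], y[n - 1 :]
    L = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
    return np.sort(np.linalg.eigvalsh(L))

y0 = np.concatenate([a0, b0])
sol = solve_ivp(toda_rhs, (0, 10), y0, rtol=1e-10, atol=1e-12)
drift = np.abs(lax_eigs(sol.y[:, -1]) - lax_eigs(sol.y[:, 0])).max()
print(f"max drift of the Lax eigenvalues: {drift:.2e}")  # ~ solver error
```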
Another viewpoint that arose in the modern theory of integrable systems originated in a calculational approach pioneered by Ryogo Hirota , [ 2 ] which involved replacing the original nonlinear dynamical system with a bilinear system of constant coefficient equations for an auxiliary quantity, which later came to be known as the τ-function . These are now referred to as the Hirota equations . Although originally appearing just as a calculational device, without any clear relation to the inverse scattering approach, or the Hamiltonian structure, this nevertheless gave a very direct method from which important classes of solutions such as solitons could be derived.
Subsequently, this was interpreted by Mikio Sato [ 3 ] and his students, [ 4 ] [ 5 ] at first for the case of integrable hierarchies of PDEs, such as the Kadomtsev–Petviashvili hierarchy, but then for much more general classes of integrable hierarchies, as a sort of universal phase space approach, in which, typically, the commuting dynamics were viewed simply as determined by a fixed (finite or infinite) abelian group action on a (finite or infinite) Grassmann manifold . The τ-function was viewed as the determinant of a projection operator from elements of the group orbit to some origin within the Grassmannian, and the Hirota equations as expressing the Plücker relations , characterizing the Plücker embedding of the Grassmannian in the projectivization of a suitably defined (infinite) exterior space , viewed as a fermionic Fock space .
There is also a notion of quantum integrable systems.
In the quantum setting, functions on phase space must be replaced by self-adjoint operators on a Hilbert space , and the notion of Poisson commuting functions replaced by commuting operators. The notion of conservation laws must be specialized to local conservation laws. [ 6 ] Every Hamiltonian has an infinite set of conserved quantities given by projectors to its energy eigenstates . However, this does not imply any special dynamical structure.
To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved quantities. All of these ideas are incorporated into the quantum inverse scattering method where the algebraic Bethe ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model , the Hubbard model and several variations on the Heisenberg model . [ 7 ] Some other types of quantum integrability are known in explicitly time-dependent quantum problems, such as the driven Tavis-Cummings model. [ 8 ]
In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as exactly solvable models. This obscures the distinction between integrability, in the Hamiltonian sense, and the more general dynamical systems sense.
There are also exactly solvable models in statistical mechanics, which are more closely related to quantum integrable systems than to classical ones. Two closely related methods, the Bethe ansatz approach (in its modern sense, based on the Yang–Baxter equations ) and the quantum inverse scattering method , provide quantum analogs of the inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics.
An imprecise notion of "exact solvability" as meaning: "The solutions can be expressed explicitly in terms of some previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself, rather than the purely calculational feature that we happen to have some "known" functions available, in terms of which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known" functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such "known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic validity, it often implies the sort of regularity that is to be expected in integrable systems. [ citation needed ] | https://en.wikipedia.org/wiki/Integrable_system |
Integral ecology is a holistic approach to ecology, emphasizing human and social dimensions, and the interconnectedness of life on Earth . [ 1 ] It studies the relationships between living organisms and the ecosystem in which they develop. [ 1 ] The concept has been adopted by Pope Francis in his encyclical Laudato si' from 2015. [ 1 ] The approach has influenced many fields of research and the development of practices and case studies around the world such as the Parco della Piana [ 2 ] of Assisi .
The use of the term 'integral ecology' probably first appeared in Hillary B. Moore's Marine Ecology in 1958. Since then, multiple authors have used the term to convey unique but overlapping concepts in the intellectual atmosphere of ecology. In the two decades leading up to the encyclical's release, the concept evolved into a formal term, largely due to the contributions of Leonardo Boff and Thomas Berry . [ 3 ] According to Ryszard F. Sadowski, parts of Pope Francis's encyclical on integral ecology seem to have been influenced by Boff and Berry. [ 4 ] Some similar themes include the holistic approach, the common good, and sustainability.
Integral ecology, as described by Pope Francis in chapter four of his encyclical Laudato si’, is a holistic approach to understanding the interconnectedness of humans , society , and the environment. It posits that the current pace of consumption , waste accumulation, and environmental change is unsustainable and threatens to precipitate global catastrophes. [ 1 ]
The encyclical emphasizes the interdependence between humans and nature , insisting that "[a]lthough we are often not aware of it, we depend on these larger systems for our own existence." Vital processes such as carbon dioxide regulation, water purification , waste decomposition, soil formation , and many other processes, which facilitate life on Earth, are too often taken for granted. [ 1 ]
Pope Francis calls for a shift from an individualistic, consumer-driven culture to one that prioritizes the common good . This includes combating poverty , restoring dignity to marginalized communities, and protecting the environment. He asserts that "[t]he global economic crises have made painfully obvious the detrimental effects of disregarding our common destiny , which cannot exclude those who come after us." Thus, intergenerational solidarity is crucial for sustainable development . [ 1 ]
Integral ecology extends beyond environmental protection to encompass themes such as the health of societal institutions, cultural preservation , and urban planning . The encyclical stresses the importance of creating inclusive cities that foster a sense of belonging and shared responsibility. It also highlights the ethical dimensions of environmental care, the intrinsic dignity of the human person, and the need to respect the moral law . [ 1 ]
By framing environmental challenges as interconnected with social and economic issues, Pope Francis offers a comprehensive vision for addressing the complex crises facing humanity. His concept of integral ecology provides a foundation for building a more just , equitable, and sustainable world. [ 1 ]
The concept of integral ecology has been significantly influenced by the cultural historian Thomas Berry . According to Berry, humanity has entered a period of ecological crisis due to excessive anthropocentrism and consumerism, leading to the exploitation and devastation of the planet. Berry criticized the destructive impact of modern technologies and practices, such as chemical fertilizers and deforestation , which have depleted natural resources and harmed the environment. He argued that while humans have traditionally held a spiritual connection to nature, this reverence has diminished in recent centuries, leading to a loss of ecological wisdom . [ 5 ]
To address this crisis, Berry envisioned an " Ecozoic Era ", characterized by a harmonious relationship between humans and the earth. Berry added that "[t]his new geobiological period is the condition for the integral functioning of the planet in all phases of its activities, whether these be biological, ecological, economic, cultural, or religious." Being part of the Ecozoic Era would require a fundamental shift in human consciousness and the recognition that everything in this universe is sacred and interconnected. [ 5 ]
Berry introduced the "integral ecologist" as the personification of the Ecozoic Era. This individual would serve as a spokesperson for the planet, advocating for its protection and restoration. The integral ecologist would be able to bridge the gap between scientific knowledge and spiritual wisdom. In recognizing the complex nature of the universe as a dynamic and evolving system, integral ecologists would be able to regain their spiritual understanding of the cosmos and their ability to cultivate planetary well-being . [ 5 ]
In “Liberation Theology and Ecology: Alternative, Confrontation, or Complementarity?”, a chapter in “Ecology and Poverty: Cry of the Earth, Cry of the Poor” (1995), Leonardo Boff explores the intersection between liberation theology and ecological discourse, emphasizing the shared concern for addressing poverty and environmental degradation . According to Boff, both disciplines originate from cries of oppression ; with liberation theology from the cry of the poor for dignity and freedom , and ecology from the cry of the earth under systematic exploitation . Boff cites Exodus 3:7 and Romans 8:22-23 as scriptural foundations for these cries, hereby pairing the struggle of the poor with the suffering of the earth. He advocates for an integrated approach that unites social and ecological liberation in the pursuit of a sustainable and just future. [ 6 ]
Boff introduces the concept of "integral ecology," as a way to integrate all dimensions of ecology – economic, social, cultural, political, and spiritual – into a new alliance between humanity and nature. Liberation theology, traditionally focused on the plight of the poor, is presented as needing to adopt this new ecological cosmology . In order to ensure our well-being, it must recognize Earth as a conscious entity and see humanity as its mode of expression. Boff emphasizes "it is the earth itself that, through one of its expressions – the human species – takes on a conscious direction in this new phase of the process of evolution ." [ 6 ]
In light of this evolution, the chapter highlights the importance of the landmark document " The Limits to Growth ", released by the Club of Rome in 1972, which drew attention to Earth's finite resources and the serious risks associated with industrialization . Boff echoes these concerns, in noting the alarming rate at which species are disappearing, and in criticizing the anthropocentrism and consumerism that underpin contemporary society . He advocates for a shift towards recognizing the earth as a "superorganism", called Gaia , in which all elements – both living and non-living – are interconnected in a dynamic equilibrium. [ 6 ]
Finally, Boff pleads for sustainability that respects the rhythms of ecosystems and promotes an economy of sufficiency for all, hence ensuring the common good extends beyond humans to all creation. According to Boff, the holistic approach, which combines liberation theology with ecological discourse, is essential in addressing the enduring hostility towards Earth and its inhabitants. In a way, Earth urges us to reconnect with all things, and thus with "the thread that binds everything upwards, God ." [ 6 ] | https://en.wikipedia.org/wiki/Integral_ecology |
Integral field spectrographs (IFS) combine spectrographic and imaging capabilities in the optical or infrared wavelength domains (0.32 μm – 24 μm) to obtain, from a single exposure, spatially resolved spectra over a two-dimensional field. The name originates from the fact that the measurements result from integrating the light on multiple sub-regions of the field . Developed at first for the study of astronomical objects, this technique is now also used in many other fields, such as bio-medical science and Earth remote sensing . [ citation needed ] Integral field spectrography is part of the broader category of snapshot hyperspectral imaging techniques, itself a part of hyperspectral imaging .
With the notable exception of individual stars, most astronomical objects are spatially resolved by large telescopes . For spectroscopic studies, the optimum would then be to obtain a spectrum for each spatial pixel in the instrument's field of view , giving full information on each target. The result is loosely called a datacube , from its two spatial and one spectral dimensions.
Since both visible charge-coupled devices (CCD) and infrared detector arrays ( staring arrays ) used for astronomical instruments are bi-dimensional only, it is a non-trivial feat to develop spectrographic systems able to deliver 3D data cubes from the output of 2D detectors. Such instruments are usually christened 3D spectrographs in the astronomical field and hyperspectral imagers in the non-astronomical ones.
Hyperspectral imagers can be broadly classified into two groups, scanning and non-scanning. The first contains the instruments that build the datacube by combining multiple exposures, scanning along a space axis, a wavelength axis or diagonally through it. Examples include push broom scanning systems , scanning Fabry-Perot and Fourier transform spectrometers . The second group includes the techniques that acquire the whole datacube in a single shot: snapshot imaging spectrometers . Integral field spectrography (IFS) techniques were the first snapshot hyperspectral imaging techniques to be developed. Since then, other snapshot hyperspectral imaging techniques, based for example on tomographic reconstruction [ 1 ] or compressed sensing using a coded aperture , [ 2 ] have been developed. [ 3 ]
One major advantage of the snapshot approach for ground-based telescopic observations is that it automatically provides homogeneous data sets despite the unavoidable variability of Earth's atmospheric transmission , spectral emission and image blurring during exposures. This is not the case for scanned systems, for which the data cubes are built from a set of successive exposures. IFS, whether ground or space based, also have the huge advantage of detecting much fainter objects in a given exposure than scanning systems, albeit at the cost of a much smaller sky field area.
After a slow start from the late 1980s on, integral field spectroscopy has become a mainstream astrophysical tool in the optical to mid-infrared regions, addressing a whole gamut of astronomical sources, essentially any smallish individual object from Solar System asteroids to vastly distant galaxies .
Integral field spectrographs use so-called Integral Field Units (IFUs) to reformat incoming light from a small field of view, typically rectangular or hexagonal, into a more suitable shape. This reformatted image can then be spectrally dispersed onto a detector by a diffraction grating , such that none of the spectra of each spatial element overlap. There are currently three different IFU flavors, using respectively a lenslet array, a fiber array or a mirror array. [ 3 ]
An enlarged sky image feeds a mini-lens array, typically a few thousand identical lenses, each about 1 mm in diameter. The lenslet array output is a regular grid of as many small telescope mirror images, which serves as the input for a multi-slit spectrograph [ 4 ] that delivers the data cubes. This approach was advocated [ 5 ] in the early 1980s, and the first ever IFS observations [ 6 ] [ 7 ] were obtained in 1987 with the lenslet-based optical TIGER instrument [ 9 ] .
Pros are 100% on-sky spatial filling when using a square or hexagonal lenslet shape, high throughput, accurate photometry and an easy to build IFU. A significant con is the suboptimal use of precious detector pixels (~ 50% loss at least) in order to avoid contamination between adjacent spectra.
In 2009 the BIGRE [ 10 ] lenslet array was proposed to correctly treat the case of spatial and spectral samplings above the Nyquist rate over diffraction-limited scenes, as required for high-contrast imaging spectroscopy . This optical concept greatly improves the use of detector pixels thanks to the resulting spectrograph line spread function, minimizing inter-spectra crosstalk effects.
Instruments like the Spectrographic Areal Unit for Research on Optical Nebulae (SAURON) [ 11 ] on the William Herschel Telescope and the Spectro-Polarimetric High-Contrast Exoplanet Research (SPHERE) IFS [ 12 ] subsystem on European Southern Observatory (ESO)'s Very Large Telescope (VLT) use this technique, in the TIGER and BIGRE version respectively.
The sky image given by the telescope falls on a fiber-based image slicer. It is typically made of a few thousand fibers, each about 0.1 mm in diameter, with the square or circular input field reformatted into a narrow rectangular (long-slit-like) output. The image slicer output is then coupled to a classical long-slit spectrograph that delivers the datacubes. A sky demonstrator successfully undertook the first fiber-based IFS observation [ 13 ] in 1990. It was followed by the full-fledged SILFID [ 14 ] optical instrument some five years later. Coupling the circular fibers to a square or hexagonal lenslet array led to better light injection into the fibers and a nearly 100% filling factor of sky light.
Pros are 100% on-sky spatial filling, an efficient use of detector pixels and commercially available fiber-based image slicers. Cons are the sizable light loss in the fibers (~ 25%), their relatively poor photometric accuracy and their inability to work in a cryogenic environment. The latter limits wavelength coverage to less than 1.6 μm.
This technique is used by instruments in many telescopes (such as INTEGRAL [ 15 ] at the William Herschel Telescope ), and particularly in currently ongoing large surveys of galaxies, such as the Calar Alto Legacy Integral Field Area Survey (CALIFA) [ 16 ] at the Calar Alto Observatory , the Sydney-AAO Multi-object Integral-field spectrograph (SAMI) [ 17 ] at the Australian Astronomical Observatory , and the Mapping Nearby Galaxies at APO (MaNGA) [ 18 ] which is one of the surveys making up the next phase of the Sloan Digital Sky Survey .
The sky image given by the telescope falls on a mirror-based "slicer," typically made of approximately 30 rectangular mirrors, 0.1 to 0.2 mm wide. The slicer reformats the input field into a collection of thin, adjacent "slices" resembling slits in a conventional multi-object spectrograph. This output is then fed to a classical long-slit spectrograph , which disperses and collects the incoming light. Such data can be reduced in the same fashion as a conventional multi-slit spectrograph, with post processing steps to recombine all spectra into a "cube" containing both spatial and spectral information. The first mirror-based slicer near-infrared IFS, the Spectrometer for Infrared Faint Field Imaging [ 19 ] (SPIFFI) [ 20 ] got its first science result [ 21 ] in 2003. The key mirror slicer system was quickly substantially improved under the Advanced Imaging Slicer [ 22 ] code name. A more recent slicer-based IFS is the Keck Cosmic Web Imager, KCWI, [ 23 ] which features a choice of three separate slicers covering varying fields of view. This provides flexibility for observers to determine an optimal trade-off between field of view, spatial sampling, spectral resolution, and sensitivity to faint sources.
Pros are high throughput, 100% on-sky spatial filling, optimal use of detector pixels and the capability to work at cryogenic temperatures. On the other hand, it is difficult and expensive to manufacture and to align, especially when working in the optical domain given the more stringent optical surfaces specifications.
IFS are currently deployed in one flavor or another on many large ground-based telescopes, in the visible [ 24 ] [ 25 ] or near infrared [ 26 ] [ 27 ] domains, and on some space telescopes as well, in particular on the James Webb Space Telescope (JWST) in the near and middle infrared domains. [ 28 ] As the spatial resolution of telescopes in space (and also of ground-based telescopes through adaptive optics based air turbulence corrections) has much improved in recent decades, the need for IFS facilities has become more and more pressing. Spectral resolution is usually a few thousand and wavelength coverage about one octave (i.e. a factor 2 in wavelength). Note that each IFS requires a finely tuned software package to transform the raw counts data into physical units (light intensity versus wavelength at precise sky locations).
With each spatial pixel dispersed over, say, 4096 spectral pixels on a state-of-the-art 4096 × 4096 pixel detector, IFS fields of view are severely limited, ~10 arc seconds across when fed by an 8–10 m class telescope. [ citation needed ] That in turn mainly limits IFS-based astrophysical science to single small targets. A much larger field of view, 1 arc minute across, or a sky area 36 times larger, is needed to cover hundreds of highly distant galaxies in a single, if very long (up to 100 hours), exposure. This in turn requires developing IFS systems featuring at least about half a billion detector pixels.
The brute force approach would have been to build huge spectrographs feeding gigantic detector arrays. Instead, the two panoramic IFS in operation by 2022, the Multi-unit spectroscopic explorer (MUSE) and the Visible Integral-field Replicable Unit Spectrograph (VIRUS), [ 29 ] are made of respectively 24 and 120 serially produced optical IFS. This results in substantially smaller and cheaper instruments. The mirror-slicer-based MUSE instrument started operation at the VLT in 2014 and the fiber-slicer-based VIRUS on the Hobby–Eberly Telescope in 2021.
It is conceptually straightforward to combine the capabilities of Integral Field Spectroscopy and Multi-Object Spectroscopy in a single instrument. This can be done by deploying a number of small IFUs in a large sky patrol field, possibly a degree or more across. In that way, quite detailed information on, for example, a number of selected galaxies can be obtained in one go. There is of course a tradeoff between the spatial coverage on each target and the total number of accessible targets. The Fibre Large Array Multi Element Spectrograph (FLAMES), [ 30 ] the first instrument featuring this capability, had first light in this mode at the VLT in 2002. A number of such facilities are now in operation targeting visible [ 31 ] [ 32 ] and near infrared wavelengths. [ 33 ] [ 34 ]
One such approach was used by the SDSS MaNGA program, Mapping Nearby Galaxies at Apache Point Observatory. [ 35 ] MaNGA used IFUs composed of hexagonal fiber bundles to survey about ~10,000 nearby galaxies around redshift 0.03, studying their dynamical state, composition, and formation history. MaNGA was able to use 17 IFU fiber bundles per spectroscopic plate, efficiently targeting many objects simultaneously.
A clever alternative approach to obtaining spatially resolved spectroscopy of many objects simultaneously is the MSA-3D program [ 36 ] which uses the micro-shutter array of the JWST NIRSpec instrument to target many objects simultaneously. While not strictly an integral-field unit, the MSA-3D program takes many exposures while "stepping" the conventional slitmask provided by the MSA across the sky. These exposures can be combined after the fact to provide full, 3D spatial and spectral information on each object. While the MSA-3D approach provides much lower spatial resolution than the IFU provided with NIRSpec, and requires many more exposures, it has the advantage of being able to target dozens of nearby objects simultaneously.
Even larger latitude in the choice of coverage of the patrol field has been proposed under the name of Diverse Field Spectroscopy [ 37 ] (DFS) which would allow the observer to select arbitrary combinations of sky regions to maximize observing efficiency and scientific return. This requires technological developments, in particular versatile robotic target pickups [ 38 ] and photonic switchyards. [ 39 ]
Other techniques can achieve the same ends at different wavelengths. In particular, at radio wavelengths, simultaneous spectral information is obtained with heterodyne receivers, [ 40 ] featuring large frequency coverage and huge spectral resolution.
In the X-ray domain, owing to the high energy of individual photons , aptly named 3D photon-counting detectors measure on the fly not only the 2D position of incoming photons but also their energy, hence their wavelength. Note nevertheless that the spectral information is very coarse, with spectral resolutions of only ~10. One example is the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory .
In the visible–near infrared, this approach is much harder with the much less energetic photons. Nevertheless, small-format superconducting detectors, with limited spectral resolution (~30) and cooled below 0.1 K, have been developed and successfully used, for example the 32×32 pixel Array Camera for Optical to Near-infrared Spectrophotometry [ 41 ] (ARCONS) at the Hale 200-inch Telescope. In contrast, ‘classical’ IFS usually feature spectral resolutions of a few thousand.
In the mathematical field of graph theory , an integral graph is a graph whose adjacency matrix 's spectrum consists entirely of integers. In other words, a graph is an integral graph if all of the roots of the characteristic polynomial of its adjacency matrix are integers. [ 1 ]
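For instance, the complete graph K₄ is integral: its adjacency spectrum is {3, −1, −1, −1}. A quick numerical check (a minimal sketch using numpy):

```python
import numpy as np

A = np.ones((4, 4)) - np.eye(4)          # adjacency matrix of K_4
eigs = np.linalg.eigvalsh(A)
print(np.round(eigs, 10))                # [-1. -1. -1.  3.]
assert np.allclose(eigs, np.round(eigs)) # all eigenvalues are integers
```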
The notion was introduced in 1974 by Frank Harary and Allen Schwenk. [ 2 ] | https://en.wikipedia.org/wiki/Integral_graph |
The integral length scale measures the correlation distance of a process in terms of space or time. [ 1 ] In essence, it looks at the overall memory of the process and how it is influenced by previous positions and parameters . An intuitive example is the case of very low Reynolds number flows (e.g., Stokes flow ), where the flow is fully reversible and thus fully correlated with previous particle positions. This concept may be extended to turbulence , where it may be thought of as the time during which a particle is influenced by its previous position.
The mathematical expressions for integral scales are:
T = ∫ 0 ∞ ρ ( τ ) d τ {\displaystyle \mathrm {T} =\int _{0}^{\infty }\rho (\tau )d\tau }
L = ∫ 0 ∞ ρ ( r ) d r {\displaystyle L=\int _{0}^{\infty }\rho (r)dr}
where T {\displaystyle \mathrm {T} } is the integral time scale, L is the integral length scale, and ρ ( τ ) {\displaystyle \rho (\tau )} and ρ ( r ) {\displaystyle \rho (r)} are the autocorrelation functions with respect to time and space, respectively.
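A minimal numerical sketch of the time-scale formula follows; the surrogate signal is an AR(1) process whose true autocorrelation is ρ(τ) = e^{−τ/T}, so integrating the sample autocorrelation should recover T (all parameter values are illustrative).

```python
import numpy as np

T, dt, nsteps = 2.0, 0.01, 400_000
phi = np.exp(-dt / T)                       # AR(1) coefficient
rng = np.random.default_rng(1)
noise = rng.normal(0.0, np.sqrt(1 - phi**2), nsteps)
x = np.empty(nsteps)
x[0] = 0.0
for k in range(nsteps - 1):                 # unit-variance stationary AR(1)
    x[k + 1] = phi * x[k] + noise[k]

max_lag = int(10 * T / dt)                  # integrate out to 10 T
x0 = x - x.mean()
var = x0 @ x0 / nsteps
rho = np.array([(x0[: nsteps - lag] @ x0[lag:]) / ((nsteps - lag) * var)
                for lag in range(max_lag)])
T_est = rho.sum() * dt                      # rectangle-rule integral of rho
print(f"estimated integral time scale: {T_est:.2f}  (true value {T})")
```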
In isotropic homogeneous turbulence, the integral length scale ℓ {\displaystyle \ell } is defined as the weighted average of the inverse wavenumber , i.e.,
ℓ = ∫ 0 ∞ k − 1 E ( k ) d k / ∫ 0 ∞ E ( k ) d k {\displaystyle \ell =\int _{0}^{\infty }k^{-1}E(k)dk\left/\int _{0}^{\infty }E(k)dk\right.}
where E ( k ) {\displaystyle E(k)} is the energy spectrum. | https://en.wikipedia.org/wiki/Integral_length_scale |
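In the same spirit, the spectral definition can be evaluated for a model spectrum; here E(k) = k⁴e^{−k²} is an arbitrary illustrative choice, for which ℓ = (1/2)/((3/8)√π) ≈ 0.752.

```python
# Integral length scale for a model energy spectrum (illustrative only).
import numpy as np
from scipy.integrate import quad

E = lambda k: k**4 * np.exp(-k**2)          # model spectrum, not canonical
num, _ = quad(lambda k: E(k) / k, 0, np.inf)
den, _ = quad(E, 0, np.inf)
print("integral length scale:", num / den)  # ≈ 0.752
```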
In mathematics , integrals of inverse functions can be computed by means of a formula that expresses the antiderivatives of the inverse f − 1 {\displaystyle f^{-1}} of a continuous and invertible function f {\displaystyle f} , in terms of f − 1 {\displaystyle f^{-1}} and an antiderivative of f {\displaystyle f} . This formula was published in 1905 by Charles-Ange Laisant . [ 1 ]
Let I 1 {\displaystyle I_{1}} and I 2 {\displaystyle I_{2}} be two intervals of R {\displaystyle \mathbb {R} } . Assume that f : I 1 → I 2 {\displaystyle f:I_{1}\to I_{2}} is a continuous and invertible function. It follows from the intermediate value theorem that f {\displaystyle f} is strictly monotone . Consequently, f {\displaystyle f} maps intervals to intervals, so is an open map and thus a homeomorphism. Since f {\displaystyle f} and the inverse function f − 1 : I 2 → I 1 {\displaystyle f^{-1}:I_{2}\to I_{1}} are continuous, they have antiderivatives by the fundamental theorem of calculus .
Laisant proved that if F {\displaystyle F} is an antiderivative of f {\displaystyle f} , then the antiderivatives of f − 1 {\displaystyle f^{-1}} are:

∫ f − 1 ( y ) d y = y f − 1 ( y ) − F ( f − 1 ( y ) ) + C , {\displaystyle \int f^{-1}(y)\,dy=y\,f^{-1}(y)-F{\bigl (}f^{-1}(y){\bigr )}+C,}

where C {\displaystyle C} is an arbitrary real number. Note that it is not assumed that f − 1 {\displaystyle f^{-1}} is differentiable.
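A quick symbolic check of the formula with sympy (the choice f = exp, hence f⁻¹ = ln, is only an example): with F = exp, Laisant's expression becomes y ln y − y + C, whose derivative is indeed ln y.

```python
import sympy as sp

y = sp.symbols("y", positive=True)
f_inv = sp.log(y)                # inverse of f(x) = exp(x)
F = sp.exp                       # antiderivative of f
laisant = y * f_inv - F(f_inv)   # y f^{-1}(y) - F(f^{-1}(y))
assert sp.simplify(sp.diff(laisant, y) - f_inv) == 0
print(laisant)                   # y*log(y) - y
```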
In his 1905 article, Laisant gave three proofs.
First, under the additional hypothesis that f − 1 {\displaystyle f^{-1}} is differentiable , one may differentiate the above formula, which completes the proof immediately.
His second proof was geometric. If f ( a ) = c {\displaystyle f(a)=c} and f ( b ) = d {\displaystyle f(b)=d} , the theorem can be written:

∫ c d f − 1 ( y ) d y + ∫ a b f ( x ) d x = b d − a c . {\displaystyle \int _{c}^{d}f^{-1}(y)\,dy+\int _{a}^{b}f(x)\,dx=bd-ac.}
The figure on the right is a proof without words of this formula. Laisant does not discuss the hypotheses necessary to make this proof rigorous, but this can be proved if f {\displaystyle f} is just assumed to be strictly monotone (but not necessarily continuous, let alone differentiable). In this case, both f {\displaystyle f} and f − 1 {\displaystyle f^{-1}} are Riemann integrable and the identity follows from a bijection between lower/upper Darboux sums of f {\displaystyle f} and upper/lower Darboux sums of f − 1 {\displaystyle f^{-1}} . [ 2 ] [ 3 ] The antiderivative version of the theorem then follows from the fundamental theorem of calculus in the case when f {\displaystyle f} is also assumed to be continuous.
Laisant's third proof uses the additional hypothesis that f {\displaystyle f} is differentiable. Beginning with f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}(f(x))=x} , one multiplies by f ′ ( x ) {\displaystyle f'(x)} and integrates both sides. The right-hand side is calculated using integration by parts to be x f ( x ) − ∫ f ( x ) d x {\textstyle xf(x)-\int f(x)\,dx} , and the formula follows.
One may also think as follows when f {\displaystyle f} is differentiable. As f {\displaystyle f} is continuous at any x {\displaystyle x} , F := ∫ 0 x f {\displaystyle F:=\int _{0}^{x}f} is differentiable at all x {\displaystyle x} by the fundamental theorem of calculus. Since f {\displaystyle f} is invertible, its derivative would vanish at at most countably many points. Sort these points by . . . < t − 1 < t 0 < t 1 < . . . {\displaystyle ...<t_{-1}<t_{0}<t_{1}<...} . Since g ( y ) := y f − 1 ( y ) − F ∘ f − 1 ( y ) + C {\displaystyle g(y):=yf^{-1}(y)-F\circ f^{-1}(y)+C} is a composition of differentiable functions on each interval ( t i , t i + 1 ) {\displaystyle (t_{i},t_{i+1})} , the chain rule can be applied, giving g ′ ( y ) = f − 1 ( y ) + y f ′ ( f − 1 ( y ) ) − f ( f − 1 ( y ) ) ⋅ 1 f ′ ( f − 1 ( y ) ) = f − 1 ( y ) , {\displaystyle g'(y)=f^{-1}(y)+{\frac {y}{f'(f^{-1}(y))}}-f{\bigl (}f^{-1}(y){\bigr )}\cdot {\frac {1}{f'(f^{-1}(y))}}=f^{-1}(y),} since f ( f − 1 ( y ) ) = y {\displaystyle f(f^{-1}(y))=y} ; this shows that g | ( t i , t i + 1 ) {\displaystyle \left.g\right|_{(t_{i},t_{i+1})}} is an antiderivative of f − 1 | ( t i , t i + 1 ) {\displaystyle \left.f^{-1}\right|_{(t_{i},t_{i+1})}} . We claim that g {\displaystyle g} is also differentiable at each t i {\displaystyle t_{i}} and remains bounded if I 2 {\displaystyle I_{2}} is compact; in that case f − 1 {\displaystyle f^{-1}} is continuous and bounded. By continuity and the fundamental theorem of calculus, G ( y ) := C + ∫ 0 y f − 1 , {\displaystyle G(y):=C+\int _{0}^{y}f^{-1},} where C {\displaystyle C} is a constant, is a differentiable extension of g {\displaystyle g} . But g {\displaystyle g} is continuous, being a composition of continuous functions, and so is G {\displaystyle G} , being differentiable. Therefore, G = g {\displaystyle G=g} . One can now use the fundamental theorem of calculus to compute ∫ I 2 f − 1 {\displaystyle \int _{I_{2}}f^{-1}} .
Nevertheless, it can be shown that this theorem holds even if f {\displaystyle f} or f − 1 {\displaystyle f^{-1}} is not differentiable: [ 3 ] [ 4 ] it suffices, for example, to use the Stieltjes integral in the previous argument. On the other hand, even though general monotonic functions are differentiable almost everywhere, the proof of the general formula does not follow, unless f − 1 {\displaystyle f^{-1}} is absolutely continuous . [ 4 ]
It is also possible to check that for every y {\displaystyle y} in I 2 {\displaystyle I_{2}} , the derivative of the function y ↦ y f − 1 ( y ) − F ( f − 1 ( y ) ) {\displaystyle y\mapsto yf^{-1}(y)-F(f^{-1}(y))} is equal to f − 1 ( y ) {\displaystyle f^{-1}(y)} . [ citation needed ] In other words:
To this end, it suffices to apply the mean value theorem to F {\displaystyle F} between x {\displaystyle x} and x + h {\displaystyle x+h} , taking into account that f {\displaystyle f} is monotonic.
Apparently, this theorem of integration was discovered for the first time in 1905 by Charles-Ange Laisant , [ 1 ] who "could hardly believe that this theorem is new", and hoped its use would henceforth spread out among students and teachers. This result was published independently in 1912 by an Italian engineer, Alberto Caprilli, in an opuscule entitled "Nuove formole d'integrazione". [ 5 ] It was rediscovered in 1955 by Parker, [ 6 ] and by a number of mathematicians following him. [ 7 ] Nevertheless, they all assume that f or f −1 is differentiable .
The general version of the theorem , free from this additional assumption, was proposed by Michael Spivak in 1965, as an exercise in his Calculus , [ 2 ] and a fairly complete proof following the same lines was published by Eric Key in 1994. [ 3 ] This proof relies on the very definition of the Darboux integral , and consists in showing that the upper Darboux sums of the function f are in 1-1 correspondence with the lower Darboux sums of f −1 .
In 2013, Michael Bensimhoun, estimating that the general theorem was still insufficiently known, gave two other proofs. [ 4 ] The second proof, based on the Stieltjes integral and on its formulae of integration by parts and of homeomorphic change of variables , is the most suitable for establishing more complex formulae.
The above theorem generalizes in the obvious way to holomorphic functions:
Let U {\displaystyle U} and V {\displaystyle V} be two open and simply connected sets of C {\displaystyle \mathbb {C} } , and assume that f : U → V {\displaystyle f:U\to V} is a biholomorphism . Then f {\displaystyle f} and f − 1 {\displaystyle f^{-1}} have antiderivatives, and if F {\displaystyle F} is an antiderivative of f {\displaystyle f} , the general antiderivative of f − 1 {\displaystyle f^{-1}} is

∫ f − 1 ( z ) d z = z f − 1 ( z ) − F ( f − 1 ( z ) ) + C , {\displaystyle \int f^{-1}(z)\,dz=z\,f^{-1}(z)-F{\bigl (}f^{-1}(z){\bigr )}+C,}

where C {\displaystyle C} is an arbitrary complex constant.
Because all holomorphic functions are differentiable, the proof is immediate by complex differentiation. | https://en.wikipedia.org/wiki/Integral_of_inverse_functions |
The integral of secant cubed is a frequent and challenging [ 1 ] indefinite integral of elementary calculus :

∫ sec 3 x d x = 1 2 ( sec x tan x + ln | sec x + tan x | ) + C = 1 2 ( sec x tan x + gd − 1 x ) + C , | x | < 1 2 π , {\displaystyle \int \sec ^{3}x\,dx={\tfrac {1}{2}}{\bigl (}\sec x\tan x+\ln \left|\sec x+\tan x\right|{\bigr )}+C={\tfrac {1}{2}}{\bigl (}\sec x\tan x+\operatorname {gd} ^{-1}x{\bigr )}+C,\qquad |x|<{\tfrac {1}{2}}\pi ,}
where gd − 1 {\textstyle \operatorname {gd} ^{-1}} is the inverse Gudermannian function , the integral of the secant function .
There are a number of reasons why this particular antiderivative is worthy of special attention:
This antiderivative may be found by integration by parts , as follows: [ 2 ]

∫ sec 3 x d x = ∫ u d v = u v − ∫ v d u {\displaystyle \int \sec ^{3}x\,dx=\int u\,dv=uv-\int v\,du}

where

u = sec x , d v = sec 2 x d x , v = tan x , d u = sec x tan x d x . {\displaystyle u=\sec x,\qquad dv=\sec ^{2}x\,dx,\qquad v=\tan x,\qquad du=\sec x\tan x\,dx.}

Then

∫ sec 3 x d x = sec x tan x − ∫ tan 2 x sec x d x = sec x tan x − ∫ ( sec 2 x − 1 ) sec x d x = sec x tan x − ∫ sec 3 x d x + ∫ sec x d x . {\displaystyle {\begin{aligned}\int \sec ^{3}x\,dx&=\sec x\tan x-\int \tan ^{2}x\sec x\,dx\\&=\sec x\tan x-\int (\sec ^{2}x-1)\sec x\,dx\\&=\sec x\tan x-\int \sec ^{3}x\,dx+\int \sec x\,dx.\end{aligned}}}
Next add ∫ sec 3 x d x {\textstyle \int \sec ^{3}x\,dx} to both sides: [ a ]

2 ∫ sec 3 x d x = sec x tan x + ∫ sec x d x = sec x tan x + ln | sec x + tan x | + C , {\displaystyle 2\int \sec ^{3}x\,dx=\sec x\tan x+\int \sec x\,dx=\sec x\tan x+\ln \left|\sec x+\tan x\right|+C,}

using the integral of the secant function , ∫ sec x d x = ln | sec x + tan x | + C . {\textstyle \int \sec x\,dx=\ln \left|\sec x+\tan x\right|+C.} [ 2 ]
Finally, divide both sides by 2:

∫ sec 3 x d x = 1 2 ( sec x tan x + ln | sec x + tan x | ) + C , {\displaystyle \int \sec ^{3}x\,dx={\tfrac {1}{2}}\left(\sec x\tan x+\ln \left|\sec x+\tan x\right|\right)+C,}
which was to be derived. [ 2 ] A possible mnemonic is: "The integral of secant cubed is the average of the derivative and integral of secant".
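The result can be confirmed symbolically with sympy (standard usage; the check simply differentiates the claimed antiderivative):

```python
import sympy as sp

x = sp.symbols("x")
claimed = (sp.sec(x) * sp.tan(x) + sp.log(sp.sec(x) + sp.tan(x))) / 2
# Differentiating the claimed antiderivative should recover sec(x)**3.
assert sp.simplify(sp.diff(claimed, x) - sp.sec(x) ** 3) == 0
print("verified: d/dx[(sec x tan x + ln(sec x + tan x))/2] = sec^3 x")
```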
Rewriting the integrand,

∫ sec 3 x d x = ∫ d x cos 3 x = ∫ cos x d x cos 4 x = ∫ d u ( 1 − u 2 ) 2 , {\displaystyle \int \sec ^{3}x\,dx=\int {\frac {dx}{\cos ^{3}x}}=\int {\frac {\cos x\,dx}{\cos ^{4}x}}=\int {\frac {du}{(1-u^{2})^{2}}},}

where u = sin x {\displaystyle u=\sin x} , so that d u = cos x d x {\displaystyle du=\cos x\,dx} . This admits a decomposition by partial fractions :

1 ( 1 − u 2 ) 2 = 1 4 ( 1 1 + u + 1 ( 1 + u ) 2 + 1 1 − u + 1 ( 1 − u ) 2 ) . {\displaystyle {\frac {1}{(1-u^{2})^{2}}}={\frac {1}{4}}\left({\frac {1}{1+u}}+{\frac {1}{(1+u)^{2}}}+{\frac {1}{1-u}}+{\frac {1}{(1-u)^{2}}}\right).}
Antidifferentiating term-by-term, one gets

∫ sec 3 x d x = 1 4 ( ln | 1 + u | − 1 1 + u − ln | 1 − u | + 1 1 − u ) + C = 1 4 ln | 1 + u 1 − u | + u 2 ( 1 − u 2 ) + C = 1 2 ( sec x tan x + ln | sec x + tan x | ) + C . {\displaystyle {\begin{aligned}\int \sec ^{3}x\,dx&={\frac {1}{4}}\left(\ln |1+u|-{\frac {1}{1+u}}-\ln |1-u|+{\frac {1}{1-u}}\right)+C\\&={\frac {1}{4}}\ln \left|{\frac {1+u}{1-u}}\right|+{\frac {u}{2(1-u^{2})}}+C\\&={\tfrac {1}{2}}{\bigl (}\sec x\tan x+\ln |\sec x+\tan x|{\bigr )}+C.\end{aligned}}}
Alternatively, one may use the tangent half-angle substitution for any rational function of trigonometric functions; for this particular integrand, that method leads to the integration of

∫ 2 ( 1 + t 2 ) 2 ( 1 − t 2 ) 3 d t . {\displaystyle \int {\frac {2(1+t^{2})^{2}}{(1-t^{2})^{3}}}\,dt.}
Integrals of the form: ∫ sec n x tan m x d x {\displaystyle \int \sec ^{n}x\tan ^{m}x\,dx} can be reduced using the Pythagorean identity if n {\displaystyle n} is even or n {\displaystyle n} and m {\displaystyle m} are both odd. If n {\displaystyle n} is odd and m {\displaystyle m} is even, hyperbolic substitutions can be used to replace the nested integration by parts with hyperbolic power-reducing formulas.
Note that ∫ sec x d x = ln | sec x + tan x | {\displaystyle \int \sec x\,dx=\ln |\sec x+\tan x|} follows directly from the substitution sec x = cosh u {\displaystyle \sec x=\cosh u} , tan x = sinh u {\displaystyle \tan x=\sinh u} , under which sec x d x = d u {\displaystyle \sec x\,dx=du} .
Just as the integration by parts above reduced the integral of secant cubed to the integral of secant to the first power, so a similar process reduces the integral of higher odd powers of secant to lower ones. This is the secant reduction formula:

∫ sec n x d x = sec n − 2 x tan x n − 1 + n − 2 n − 1 ∫ sec n − 2 x d x , n ≠ 1. {\displaystyle \int \sec ^{n}x\,dx={\frac {\sec ^{n-2}x\tan x}{n-1}}+{\frac {n-2}{n-1}}\int \sec ^{n-2}x\,dx,\qquad n\neq 1.}
Even powers of tangents can be accommodated by using binomial expansion to form an odd polynomial of secant and using these formulae on the largest term and combining like terms. | https://en.wikipedia.org/wiki/Integral_of_secant_cubed |
In calculus , the integral of the secant function can be evaluated using a variety of methods and there are multiple ways of expressing the antiderivative , all of which can be shown to be equivalent via trigonometric identities :

∫ sec θ d θ = 1 2 ln 1 + sin θ 1 − sin θ + C = ln | sec θ + tan θ | + C = ln | tan ( θ 2 + π 4 ) | + C . {\displaystyle \int \sec \theta \,d\theta ={\frac {1}{2}}\ln {\frac {1+\sin \theta }{1-\sin \theta }}+C=\ln \left|\sec \theta +\tan \theta \right|+C=\ln \left|\tan \left({\frac {\theta }{2}}+{\frac {\pi }{4}}\right)\right|+C.}
This formula is useful for evaluating various trigonometric integrals . In particular, it can be used to evaluate the integral of the secant cubed , which, though seemingly special, comes up rather frequently in applications. [ 1 ]
The definite integral of the secant function starting from 0 {\displaystyle 0} is the inverse Gudermannian function , gd − 1 . {\textstyle \operatorname {gd} ^{-1}.} For numerical applications, all of the above expressions result in loss of significance for some arguments. An alternative expression in terms of the inverse hyperbolic sine arsinh is numerically well behaved for real arguments | ϕ | < 1 2 π {\textstyle |\phi |<{\tfrac {1}{2}}\pi } : [ 2 ]

gd − 1 ( ϕ ) = ∫ 0 ϕ sec θ d θ = arsinh ( tan ϕ ) . {\displaystyle \operatorname {gd} ^{-1}(\phi )=\int _{0}^{\phi }\sec \theta \,d\theta =\operatorname {arsinh} (\tan \phi ).}
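A small numerical experiment illustrates the point (the particular φ near −π/2 is an arbitrary choice, and mpmath is used only to produce a high-precision reference): the classical form loses several digits to cancellation in sec φ + tan φ, while arsinh(tan φ) does not.

```python
import math
from mpmath import mp, mpf, asinh, tan

phi = -math.pi / 2 + 1e-6                   # illustrative point near -pi/2
naive = math.log(1.0 / math.cos(phi) + math.tan(phi))
stable = math.asinh(math.tan(phi))

mp.dps = 50                                 # 50-digit reference value
reference = float(asinh(tan(mpf(phi))))
print(f"naive : {naive:.15f}   error {abs(naive - reference):.1e}")
print(f"stable: {stable:.15f}   error {abs(stable - reference):.1e}")
```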
The integral of the secant function was historically one of the first integrals of its type ever evaluated, before most of the development of integral calculus. It is important because it is the vertical coordinate of the Mercator projection , used for marine navigation with constant compass bearing .
Three common expressions for the integral of the secant,

∫ sec θ d θ = ln | sec θ + tan θ | + C , ∫ sec θ d θ = ln | tan ( θ 2 + π 4 ) | + C , ∫ sec θ d θ = 1 2 ln 1 + sin θ 1 − sin θ + C , {\displaystyle \int \sec \theta \,d\theta =\ln \left|\sec \theta +\tan \theta \right|+C,\qquad \int \sec \theta \,d\theta =\ln \left|\tan \left({\tfrac {\theta }{2}}+{\tfrac {\pi }{4}}\right)\right|+C,\qquad \int \sec \theta \,d\theta ={\tfrac {1}{2}}\ln {\frac {1+\sin \theta }{1-\sin \theta }}+C,}

are equivalent because

sec θ + tan θ = tan ( θ 2 + π 4 ) = 1 + sin θ 1 − sin θ . {\displaystyle \sec \theta +\tan \theta =\tan \left({\tfrac {\theta }{2}}+{\tfrac {\pi }{4}}\right)={\sqrt {\frac {1+\sin \theta }{1-\sin \theta }}}.}

Proof: we can separately apply the tangent half-angle substitution t = tan 1 2 θ {\displaystyle t=\tan {\tfrac {1}{2}}\theta } to each of the three forms, and show them equivalent to the same expression in terms of t . {\displaystyle t.} Under this substitution cos θ = ( 1 − t 2 ) / ( 1 + t 2 ) {\displaystyle \cos \theta =(1-t^{2}){\big /}(1+t^{2})} and sin θ = 2 t / ( 1 + t 2 ) . {\displaystyle \sin \theta =2t{\big /}(1+t^{2}).}

First,

sec θ + tan θ = 1 + t 2 1 − t 2 + 2 t 1 − t 2 = ( 1 + t ) 2 ( 1 − t ) ( 1 + t ) = 1 + t 1 − t . {\displaystyle \sec \theta +\tan \theta ={\frac {1+t^{2}}{1-t^{2}}}+{\frac {2t}{1-t^{2}}}={\frac {(1+t)^{2}}{(1-t)(1+t)}}={\frac {1+t}{1-t}}.}

Second,

1 + sin θ 1 − sin θ = 1 + 2 t / ( 1 + t 2 ) 1 − 2 t / ( 1 + t 2 ) = ( 1 + t ) 2 ( 1 − t ) 2 = 1 + t 1 − t . {\displaystyle {\sqrt {\frac {1+\sin \theta }{1-\sin \theta }}}={\sqrt {\frac {1+2t/(1+t^{2})}{1-2t/(1+t^{2})}}}={\sqrt {\frac {(1+t)^{2}}{(1-t)^{2}}}}={\frac {1+t}{1-t}}.}

Third, using the tangent addition identity tan ( ϕ + ψ ) = ( tan ϕ + tan ψ ) / ( 1 − tan ϕ tan ψ ) , {\displaystyle \tan(\phi +\psi )=(\tan \phi +\tan \psi ){\big /}(1-\tan \phi \,\tan \psi ),}

tan ( θ 2 + π 4 ) = tan 1 2 θ + tan 1 4 π 1 − tan 1 2 θ tan 1 4 π = 1 + t 1 − t . {\displaystyle \tan \left({\tfrac {\theta }{2}}+{\tfrac {\pi }{4}}\right)={\frac {\tan {\tfrac {1}{2}}\theta +\tan {\tfrac {1}{4}}\pi }{1-\tan {\tfrac {1}{2}}\theta \,\tan {\tfrac {1}{4}}\pi }}={\frac {1+t}{1-t}}.}

So all three expressions describe the same quantity.
The conventional solution for the Mercator projection ordinate may be written without the absolute value signs since the latitude φ {\displaystyle \varphi } lies between − 1 2 π {\textstyle -{\tfrac {1}{2}}\pi } and 1 2 π {\textstyle {\tfrac {1}{2}}\pi } :

y = ln tan ( φ 2 + π 4 ) . {\displaystyle y=\ln \tan \left({\frac {\varphi }{2}}+{\frac {\pi }{4}}\right).}
Let
Therefore,
The integral of the secant function was one of the "outstanding open problems of the mid-seventeenth century", solved in 1668 by James Gregory . [ 3 ] He applied his result to a problem concerning nautical tables. [ 1 ] In 1599, Edward Wright evaluated the integral by numerical methods – what today we would call Riemann sums . [ 4 ] He wanted the solution for the purposes of cartography – specifically for constructing an accurate Mercator projection . [ 3 ] In the 1640s, Henry Bond, a teacher of navigation, surveying, and other mathematical topics, compared Wright's numerically computed table of values of the integral of the secant with a table of logarithms of the tangent function, and consequently conjectured that [ 3 ]

∫ 0 θ sec θ ′ d θ ′ = ln tan ( θ 2 + π 4 ) . {\displaystyle \int _{0}^{\theta }\sec \theta '\,d\theta '=\ln \tan \left({\frac {\theta }{2}}+{\frac {\pi }{4}}\right).}
This conjecture became widely known, and in 1665, Isaac Newton was aware of it. [ 5 ]
A standard method of evaluating the secant integral presented in various references involves multiplying the numerator and denominator by sec θ + tan θ and then using the substitution u = sec θ + tan θ . This substitution can be obtained from the derivatives of secant and tangent added together, which have secant as a common factor. [ 6 ]
Starting with

d d θ sec θ = sec θ tan θ and d d θ tan θ = sec 2 θ , {\displaystyle {\frac {d}{d\theta }}\sec \theta =\sec \theta \tan \theta \quad {\text{and}}\quad {\frac {d}{d\theta }}\tan \theta =\sec ^{2}\theta ,}

adding them gives

d d θ ( sec θ + tan θ ) = sec θ ( sec θ + tan θ ) . {\displaystyle {\frac {d}{d\theta }}(\sec \theta +\tan \theta )=\sec \theta \,(\sec \theta +\tan \theta ).}

The derivative of the sum is thus equal to the sum multiplied by sec θ . This enables multiplying sec θ by sec θ + tan θ in the numerator and denominator and performing the following substitutions:

u = sec θ + tan θ , d u = sec θ ( sec θ + tan θ ) d θ . {\displaystyle u=\sec \theta +\tan \theta ,\qquad du=\sec \theta \,(\sec \theta +\tan \theta )\,d\theta .}

The integral is evaluated as follows:

∫ sec θ d θ = ∫ sec θ ( sec θ + tan θ ) sec θ + tan θ d θ = ∫ d u u = ln | u | + C = ln | sec θ + tan θ | + C , {\displaystyle \int \sec \theta \,d\theta =\int {\frac {\sec \theta \,(\sec \theta +\tan \theta )}{\sec \theta +\tan \theta }}\,d\theta =\int {\frac {du}{u}}=\ln |u|+C=\ln |\sec \theta +\tan \theta |+C,}
as claimed. This was the formula discovered by James Gregory. [ 1 ]
Although Gregory proved the conjecture in 1668 in his Exercitationes Geometricae , [ 7 ] the proof was presented in a form that renders it nearly impossible for modern readers to comprehend; Isaac Barrow , in his Lectiones Geometricae of 1670, [ 8 ] gave the first "intelligible" proof, though even that was "couched in the geometric idiom of the day." [ 3 ] Barrow's proof of the result was the earliest use of partial fractions in integration. [ 3 ] Adapted to modern notation, Barrow's proof began as follows:

∫ sec θ d θ = ∫ d θ cos θ = ∫ cos θ d θ cos 2 θ = ∫ cos θ d θ 1 − sin 2 θ . {\displaystyle \int \sec \theta \,d\theta =\int {\frac {d\theta }{\cos \theta }}=\int {\frac {\cos \theta \,d\theta }{\cos ^{2}\theta }}=\int {\frac {\cos \theta \,d\theta }{1-\sin ^{2}\theta }}.}
Substituting u = sin θ , du = cos θ dθ , reduces the integral to

∫ d u 1 − u 2 = ∫ d u ( 1 + u ) ( 1 − u ) = 1 2 ∫ ( 1 1 + u + 1 1 − u ) d u . {\displaystyle \int {\frac {du}{1-u^{2}}}=\int {\frac {du}{(1+u)(1-u)}}={\frac {1}{2}}\int \left({\frac {1}{1+u}}+{\frac {1}{1-u}}\right)du.}

Therefore,

∫ sec θ d θ = 1 2 ln 1 + u 1 − u + C = 1 2 ln 1 + sin θ 1 − sin θ + C , {\displaystyle \int \sec \theta \,d\theta ={\frac {1}{2}}\ln {\frac {1+u}{1-u}}+C={\frac {1}{2}}\ln {\frac {1+\sin \theta }{1-\sin \theta }}+C,}
as expected. Taking the absolute value is not necessary because 1 + sin θ {\displaystyle 1+\sin \theta } and 1 − sin θ {\displaystyle 1-\sin \theta } are always non-negative for real values of θ . {\displaystyle \theta .}
Under the tangent half-angle substitution t = tan 1 2 θ , {\textstyle t=\tan {\tfrac {1}{2}}\theta ,} [ 9 ]

sin θ = 2 t 1 + t 2 , cos θ = 1 − t 2 1 + t 2 , d θ = 2 d t 1 + t 2 . {\displaystyle \sin \theta ={\frac {2t}{1+t^{2}}},\qquad \cos \theta ={\frac {1-t^{2}}{1+t^{2}}},\qquad d\theta ={\frac {2\,dt}{1+t^{2}}}.}

Therefore the integral of the secant function is

∫ sec θ d θ = ∫ 2 d t 1 − t 2 = ln | 1 + t 1 − t | + C = ln | tan ( θ 2 + π 4 ) | + C , {\displaystyle \int \sec \theta \,d\theta =\int {\frac {2\,dt}{1-t^{2}}}=\ln \left|{\frac {1+t}{1-t}}\right|+C=\ln \left|\tan \left({\frac {\theta }{2}}+{\frac {\pi }{4}}\right)\right|+C,}

as before.
The integral can also be derived by using a somewhat non-standard version of the tangent half-angle substitution, which is simpler in the case of this particular integral. The method, published in 2013, [ 10 ] is as follows:
Substituting:
The integral can also be solved by manipulating the integrand and substituting twice. Using the definition sec θ = 1 / cos θ and the identity cos 2 θ + sin 2 θ = 1 , the integral can be rewritten as

∫ sec θ d θ = ∫ d θ cos θ = ∫ cos θ d θ cos 2 θ = ∫ cos θ d θ 1 − sin 2 θ . {\displaystyle \int \sec \theta \,d\theta =\int {\frac {d\theta }{\cos \theta }}=\int {\frac {\cos \theta \,d\theta }{\cos ^{2}\theta }}=\int {\frac {\cos \theta \,d\theta }{1-\sin ^{2}\theta }}.}

Substituting u = sin θ , du = cos θ dθ reduces the integral to

∫ d u 1 − u 2 . {\displaystyle \int {\frac {du}{1-u^{2}}}.}

The reduced integral can be evaluated by substituting u = tanh t , du = sech 2 t dt , and then using the identity 1 − tanh 2 t = sech 2 t :

∫ sech 2 t d t 1 − tanh 2 t = ∫ sech 2 t d t sech 2 t = ∫ d t . {\displaystyle \int {\frac {\operatorname {sech} ^{2}t\,dt}{1-\tanh ^{2}t}}=\int {\frac {\operatorname {sech} ^{2}t\,dt}{\operatorname {sech} ^{2}t}}=\int dt.}

The integral is now reduced to a simple integral, and back-substituting gives

∫ d t = t + C = artanh u + C = artanh ( sin θ ) + C , {\displaystyle \int dt=t+C=\operatorname {artanh} u+C=\operatorname {artanh} (\sin \theta )+C,}
which is one of the hyperbolic forms of the integral.
A similar strategy can be used to integrate the cosecant , hyperbolic secant , and hyperbolic cosecant functions.
It is also possible to find the other two hyperbolic forms directly, by again multiplying and dividing by a convenient term:

∫ sec θ d θ = ∫ sec 2 θ sec θ d θ = ∫ sec 2 θ d θ ± 1 + tan 2 θ , {\displaystyle \int \sec \theta \,d\theta =\int {\frac {\sec ^{2}\theta }{\sec \theta }}\,d\theta =\int {\frac {\sec ^{2}\theta \,d\theta }{\pm {\sqrt {1+\tan ^{2}\theta }}}},}

where ± {\displaystyle \pm } stands for sgn ( cos θ ) {\displaystyle \operatorname {sgn}(\cos \theta )} because 1 + tan 2 θ = | sec θ | . {\displaystyle {\sqrt {1+\tan ^{2}\theta }}=|\sec \theta \,|.} Substituting u = tan θ , du = sec 2 θ dθ , reduces to a standard integral:

∫ d u ± 1 + u 2 = ± arsinh ( u ) + C = sgn ( cos θ ) arsinh ( tan θ ) + C , {\displaystyle \int {\frac {du}{\pm {\sqrt {1+u^{2}}}}}=\pm \operatorname {arsinh} (u)+C=\operatorname {sgn}(\cos \theta )\operatorname {arsinh} (\tan \theta )+C,}

where sgn is the sign function .
Likewise:

∫ sec θ d θ = ∫ sec θ tan θ tan θ d θ . {\displaystyle \int \sec \theta \,d\theta =\int {\frac {\sec \theta \,\tan \theta }{\tan \theta }}\,d\theta .}

Substituting u = | sec θ | , du = | sec θ | tan θ dθ , and noting that sec θ tan θ d θ = sgn ( cos θ ) d u {\displaystyle \sec \theta \,\tan \theta \,d\theta =\operatorname {sgn}(\cos \theta )\,du} , tan θ = sgn ( tan θ ) u 2 − 1 {\displaystyle \tan \theta =\operatorname {sgn}(\tan \theta ){\sqrt {u^{2}-1}}} and sgn ( cos θ ) sgn ( tan θ ) = sgn ( sin θ ) {\displaystyle \operatorname {sgn}(\cos \theta )\operatorname {sgn}(\tan \theta )=\operatorname {sgn}(\sin \theta )} , reduces to a standard integral:

∫ sec θ d θ = sgn ( sin θ ) ∫ d u u 2 − 1 = sgn ( sin θ ) arcosh | sec θ | + C . {\displaystyle \int \sec \theta \,d\theta =\operatorname {sgn}(\sin \theta )\int {\frac {du}{\sqrt {u^{2}-1}}}=\operatorname {sgn}(\sin \theta )\operatorname {arcosh} |\sec \theta |+C.}
Under the substitution z = e i θ , {\displaystyle z=e^{i\theta },}

sec θ = 2 z z 2 + 1 , d θ = d z i z , sec θ d θ = 2 d z i ( z 2 + 1 ) = ( 1 z + i − 1 z − i ) d z . {\displaystyle \sec \theta ={\frac {2z}{z^{2}+1}},\qquad d\theta ={\frac {dz}{iz}},\qquad \sec \theta \,d\theta ={\frac {2\,dz}{i(z^{2}+1)}}=\left({\frac {1}{z+i}}-{\frac {1}{z-i}}\right)dz.}

So the integral can be solved as:

∫ sec θ d θ = ln ( z + i ) − ln ( z − i ) + C = ln e i θ + i e i θ − i + C = ln tan ( θ 2 + π 4 ) + ln i + C . {\displaystyle \int \sec \theta \,d\theta =\ln(z+i)-\ln(z-i)+C=\ln {\frac {e^{i\theta }+i}{e^{i\theta }-i}}+C=\ln \tan \left({\tfrac {\theta }{2}}+{\tfrac {\pi }{4}}\right)+\ln i+C.}
Because the constant of integration can be anything, the additional constant term can be absorbed into it. Finally, if theta is real -valued, we can indicate this with absolute value brackets in order to get the equation into its most familiar form:
The integral of the hyperbolic secant function defines the Gudermannian function :

gd ( x ) = ∫ 0 x sech t d t . {\displaystyle \operatorname {gd} (x)=\int _{0}^{x}\operatorname {sech} t\,dt.}
The integral of the secant function defines the Lambertian function, which is the inverse of the Gudermannian function:

lam ( ϕ ) = ∫ 0 ϕ sec t d t = gd − 1 ( ϕ ) . {\displaystyle \operatorname {lam} (\phi )=\int _{0}^{\phi }\sec t\,dt=\operatorname {gd} ^{-1}(\phi ).}
These functions are encountered in the theory of map projections: the Mercator projection of a point on the sphere with longitude λ and latitude ϕ may be written [ 11 ] as:

x = λ , y = lam ( ϕ ) = gd − 1 ( ϕ ) . {\displaystyle x=\lambda ,\qquad y=\operatorname {lam} (\phi )=\operatorname {gd} ^{-1}(\phi ).}
D. T. Whiteside , editor, The Mathematical Papers of Isaac Newton , Cambridge University Press, 1967, volume 1, pages 466–467 and 473–475.
"Integral of Secant" . MIT OpenCourseWare . | https://en.wikipedia.org/wiki/Integral_of_the_secant_function |
An integral operator is an operator that involves integration . Special instances include the operator of integration itself, operators defined by integration against a kernel (such as Hilbert–Schmidt integral operators ), and integral transforms .
The integral symbol ( see below ) is used to denote integrals and antiderivatives in mathematics , especially in calculus .
The notation was introduced by the German mathematician Gottfried Wilhelm Leibniz in 1675 in his private writings; [ 1 ] [ 2 ] it first appeared publicly in the article " De Geometria Recondita et analysi indivisibilium atque infinitorum " (On a hidden geometry and analysis of indivisibles and infinites), published in Acta Eruditorum in June 1686. [ 3 ] [ 4 ] The symbol was based on the ſ ( long s ) character and was chosen because Leibniz thought of the integral as an infinite sum of infinitesimal summands .
The integral symbol is U+222B ∫ INTEGRAL in Unicode [ 5 ] and \int in LaTeX . In HTML , it is written as &#x222B; ( hexadecimal ), &#8747; ( decimal ) and &int; ( named entity ).
The original IBM PC code page 437 character set included two characters, ⌠ and ⌡ (codes 244 and 245 respectively), to build the integral symbol. These were deprecated in subsequent MS-DOS code pages , but they still remain in Unicode ( U+2320 and U+2321 respectively) for compatibility.
The ∫ symbol is very similar to, but not to be confused with, the letter ʃ (" esh ").
Related symbols include: [ 5 ] [ 6 ]
In other languages, the shape of the integral symbol differs slightly from the shape commonly seen in English-language textbooks. While the English integral symbol leans to the right, the German symbol (used throughout Central Europe ) is upright, and the Russian variant leans slightly to the left to occupy less horizontal space. [ 7 ]
Another difference is in the placement of limits for definite integrals . Generally, in English-language books, limits go to the right of the integral symbol:
{\displaystyle \int _{0}^{5}f(t)\,\mathrm {d} t,\quad \int _{g(t)=a}^{g(t)=b}f(t)\,\mathrm {d} t.}
By contrast, in German and Russian texts, the limits are placed above and below the integral symbol, and, as a result, the notation requires larger line spacing but is more compact horizontally, especially when using longer expressions in the limits:
{\displaystyle \int \limits _{0}^{T}f(t)\,\mathrm {d} t,\quad \int \limits _{\!\!\!\!\!g(t)=a\!\!\!\!\!}^{\!\!\!\!\!g(t)=b\!\!\!\!\!}f(t)\,\mathrm {d} t.} | https://en.wikipedia.org/wiki/Integral_symbol
In mathematics , the integral test for convergence is a method used to test infinite series of monotonic terms for convergence . It was developed by Colin Maclaurin and Augustin-Louis Cauchy and is sometimes known as the Maclaurin–Cauchy test .
Consider an integer N and a function f defined on the unbounded interval [ N , ∞) , on which it is monotone decreasing . Then the infinite series {\displaystyle \sum _{n=N}^{\infty }f(n)}
converges to a real number if and only if the improper integral {\displaystyle \int _{N}^{\infty }f(x)\,dx}
is finite. In particular, if the integral diverges, then the series diverges as well.
If the improper integral is finite, then the proof also gives the lower and upper bounds {\displaystyle \int _{N}^{\infty }f(x)\,dx\;\leq \;\sum _{n=N}^{\infty }f(n)\;\leq \;f(N)+\int _{N}^{\infty }f(x)\,dx\qquad (1)}
for the infinite series.
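A quick numerical illustration of these bounds (an illustrative snippet; the test function f(x) = 1/x² and the truncation are arbitrary choices):

```python
# For f(x) = 1/x**2 and N = 1 the improper integral equals 1,
# so the bounds above say the series must lie between 1 and 2.
partial_sum = sum(1.0 / n**2 for n in range(1, 100_001))
print(partial_sum)  # ~1.64493 (= pi**2 / 6), indeed in [1, 2]
```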
Note that if the function f(x) is increasing, then the function −f(x) is decreasing and the above theorem applies.
Many textbooks require the function f to be positive, [ 1 ] [ 2 ] [ 3 ] but this condition is not really necessary, since when f is negative and decreasing both ∑_{n=N}^∞ f(n) and ∫_N^∞ f(x) dx diverge. [ 4 ] [ better source needed ]
The proof uses the comparison test , comparing the term f(n) with the integral of f over the intervals [n − 1, n) and [n, n + 1) respectively.
The monotonic function f is continuous almost everywhere . To show this, let D be the set of points in [ N , ∞) at which f is discontinuous.
For every x ∈ D , there exists, by the density of ℚ, a c(x) ∈ ℚ so that c(x) ∈ [lim_{y↓x} f(y), lim_{y↑x} f(y)].
Note that this set contains an open non-empty interval precisely if f is discontinuous at x . We can uniquely identify c(x) as the rational number that has the least index in an enumeration ℕ → ℚ and satisfies the above property. Since f is monotone , this defines an injective mapping c : D → ℚ, x ↦ c(x) , and thus D is countable . It follows that f is continuous almost everywhere . This is sufficient for Riemann integrability . [ 5 ]
Since f is a monotone decreasing function, we know that {\displaystyle f(x)\leq f(n)\quad {\text{for all }}x\in [n,\infty )}
and {\displaystyle f(x)\geq f(n)\quad {\text{for all }}x\in [N,n].}
Hence, for every integer n ≥ N , {\displaystyle \int _{n}^{n+1}f(x)\,dx\;\leq \;f(n)\qquad (2)}
and, for every integer n ≥ N + 1 , {\displaystyle f(n)\;\leq \;\int _{n-1}^{n}f(x)\,dx.\qquad (3)}
By summation over all n from N to some larger integer M , we get from ( 2 ) {\displaystyle \int _{N}^{M+1}f(x)\,dx=\sum _{n=N}^{M}\int _{n}^{n+1}f(x)\,dx\;\leq \;\sum _{n=N}^{M}f(n)}
and from ( 3 ) {\displaystyle \sum _{n=N}^{M}f(n)=f(N)+\sum _{n=N+1}^{M}f(n)\;\leq \;f(N)+\int _{N}^{M}f(x)\,dx.}
Combining these two estimates yields {\displaystyle \int _{N}^{M+1}f(x)\,dx\;\leq \;\sum _{n=N}^{M}f(n)\;\leq \;f(N)+\int _{N}^{M}f(x)\,dx.}
Letting M tend to infinity, the bounds in ( 1 ) and the result follow.
The harmonic series {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}}
diverges because, using the natural logarithm , its antiderivative , and the fundamental theorem of calculus , we get {\displaystyle \int _{1}^{M}{\frac {1}{x}}\,dx=\ln x{\Bigr |}_{1}^{M}=\ln M\to \infty \quad {\text{for }}M\to \infty .}
On the other hand, the series {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{1+\varepsilon }}}=\zeta (1+\varepsilon )}
(cf. Riemann zeta function )
converges for every ε > 0 , because by the power rule {\displaystyle \int _{1}^{M}{\frac {1}{x^{1+\varepsilon }}}\,dx={\frac {1}{\varepsilon }}\left(1-{\frac {1}{M^{\varepsilon }}}\right)\leq {\frac {1}{\varepsilon }}<\infty \quad {\text{for all }}M\geq 1.}
From ( 1 ) we get the upper estimate {\displaystyle \zeta (1+\varepsilon )=\sum _{n=1}^{\infty }{\frac {1}{n^{1+\varepsilon }}}\leq {\frac {1+\varepsilon }{\varepsilon }},}
which can be compared with some of the particular values of Riemann zeta function .
The above examples involving the harmonic series raise the question of whether there are monotone sequences such that f(n) decreases to 0 faster than 1/n but slower than 1/n^{1+ε} in the sense that {\displaystyle \lim _{n\to \infty }{\frac {f(n)}{1/n}}=0\quad {\text{and}}\quad \lim _{n\to \infty }{\frac {f(n)}{1/n^{1+\varepsilon }}}=\infty }
for every ε > 0 , and whether the corresponding series of the f ( n ) still diverges. Once such a sequence is found, a similar question can be asked with f ( n ) taking the role of 1/ n , and so on. In this way it is possible to investigate the borderline between divergence and convergence of infinite series.
Using the integral test for convergence, one can show (see below) that, for every natural number k , the series {\displaystyle \sum _{n=N_{k}}^{\infty }{\frac {1}{n\ln(n)\ln _{2}(n)\cdots \ln _{k-1}(n)\ln _{k}(n)}}\qquad (4)}
still diverges (cf. proof that the sum of the reciprocals of the primes diverges for k = 1 ) but {\displaystyle \sum _{n=N_{k}}^{\infty }{\frac {1}{n\ln(n)\ln _{2}(n)\cdots \ln _{k-1}(n)\left(\ln _{k}(n)\right)^{1+\varepsilon }}}\qquad (5)}
converges for every ε > 0 . Here ln k denotes the k -fold composition of the natural logarithm defined recursively by {\displaystyle \ln _{1}(x)=\ln(x),\qquad \ln _{k+1}(x)=\ln(\ln _{k}(x))\quad {\text{for }}k\geq 1.}
Furthermore, N k denotes the smallest natural number such that the k -fold composition is well-defined and ln k ( N k ) ≥ 1 , i.e. {\displaystyle N_{k}=\left\lceil e\uparrow \uparrow k\right\rceil =\left\lceil \underbrace {e^{e^{\cdot ^{\cdot ^{e}}}}} _{k\ e{\text{'s}}}\right\rceil }
using tetration or Knuth's up-arrow notation .
To see the divergence of the series ( 4 ) using the integral test, note that by repeated application of the chain rule {\displaystyle {\frac {d}{dx}}\ln _{k+1}(x)={\frac {d}{dx}}\ln(\ln _{k}(x))={\frac {1}{x\ln(x)\cdots \ln _{k}(x)}},}
hence {\displaystyle \int _{N_{k}}^{M}{\frac {dx}{x\ln(x)\cdots \ln _{k}(x)}}=\ln _{k+1}(M)-\ln _{k+1}(N_{k})\to \infty \quad {\text{for }}M\to \infty .}
To see the convergence of the series ( 5 ), note that by the power rule , the chain rule and the above result {\displaystyle -{\frac {d}{dx}}{\frac {1}{\varepsilon \left(\ln _{k}(x)\right)^{\varepsilon }}}={\frac {1}{x\ln(x)\cdots \ln _{k-1}(x)\left(\ln _{k}(x)\right)^{1+\varepsilon }}},}
hence {\displaystyle \int _{N_{k}}^{\infty }{\frac {dx}{x\ln(x)\cdots \ln _{k-1}(x)\left(\ln _{k}(x)\right)^{1+\varepsilon }}}={\frac {1}{\varepsilon \left(\ln _{k}(N_{k})\right)^{\varepsilon }}}<\infty }
and ( 1 ) gives bounds for the infinite series in ( 5 ). | https://en.wikipedia.org/wiki/Integral_test_for_convergence |
Integral windup , also known as integrator windup [ 1 ] or reset windup , [ 2 ] refers to the situation in a PID controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise (windup); the controller thus overshoots, and the output continues to rise as this accumulated error is unwound (offset by errors in the other direction).
This problem can be addressed by
Integral windup particularly occurs as a limitation of physical systems, compared with ideal systems, due to the ideal output being physically impossible (process saturation : the output of the process being limited at the top or bottom of its scale, making the error constant). For example, the position of a valve cannot be any more open than fully open and also cannot be closed any more than fully closed. In this case, anti-windup can actually involve the integrator being turned off for periods of time until the response falls back into an acceptable range.
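A minimal sketch of this idea in code (illustrative only; the structure, gains and limits are invented for the example and do not come from any particular controller):

```python
class PID:
    """Textbook PID with conditional integration as anti-windup."""

    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / dt
        self.prev_error = error

        # Output the controller would like to apply...
        desired = (self.kp * error
                   + self.ki * (self.integral + error * dt)
                   + self.kd * derivative)
        # ...clamped to what the actuator can physically do.
        output = max(self.out_min, min(self.out_max, desired))

        # Anti-windup: only let the integral accumulate while the
        # actuator is not saturated, so the error cannot "wind up".
        if output == desired:
            self.integral += error * dt

        return output
```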
This usually occurs when the controller's output can no longer affect the controlled variable, or if the controller is part of a selection scheme and it is not the selected controller.
Integral windup was more of a problem in analog controllers. Within modern distributed control systems and programmable logic controllers , it is much easier to prevent integral windup by either limiting the controller output, limiting the integral to produce feasible output, [ 5 ] or by using external reset feedback, which is a means of feeding back the selected output to the integral circuit of all controllers in the selection scheme so that a closed loop is maintained. | https://en.wikipedia.org/wiki/Integral_windup |
Integration is the basic operation in integral calculus . While differentiation has straightforward rules by which the derivative of a complicated function can be found by differentiating its simpler component functions, integration does not, so tables of known integrals are often useful. This page lists some of the most common antiderivatives .
A compilation of a list of integrals (Integraltafeln) and techniques of integral calculus was published by the German mathematician Meier Hirsch [ de ] (also spelled Meyer Hirsch) in 1810. [ 1 ] These tables were republished in the United Kingdom in 1823. More extensive tables were compiled in 1858 by the Dutch mathematician David Bierens de Haan for his Tables d'intégrales définies , supplemented by Supplément aux tables d'intégrales définies in ca. 1864. A new edition was published in 1867 under the title Nouvelles tables d'intégrales définies .
These tables, which contain mainly integrals of elementary functions, remained in use until the middle of the 20th century. They were then replaced by the much more extensive tables of Gradshteyn and Ryzhik . In Gradshteyn and Ryzhik, integrals originating from the book by Bierens de Haan are denoted by BI.
Not all closed-form expressions have closed-form antiderivatives; this study forms the subject of differential Galois theory , which was initially developed by Joseph Liouville in the 1830s and 1840s, leading to Liouville's theorem which classifies which expressions have closed-form antiderivatives. A simple example of a function without a closed-form antiderivative is e − x 2 , whose antiderivative is (up to constants) the error function .
Since 1968 there is the Risch algorithm for determining indefinite integrals that can be expressed in terms of elementary functions , typically using a computer algebra system . Integrals that cannot be expressed using elementary functions can be manipulated symbolically using general functions such as the Meijer G-function .
More detail may be found on the following pages for the lists of integrals :
Gradshteyn , Ryzhik , Geronimus , Tseytlin , Jeffrey, Zwillinger, and Moll 's (GR) Table of Integrals, Series, and Products contains a large collection of results. An even larger, multivolume table is the Integrals and Series by Prudnikov , Brychkov , and Marichev (with volumes 1–3 listing integrals and series of elementary and special functions , volume 4–5 are tables of Laplace transforms ). More compact collections can be found in e.g. Brychkov, Marichev, Prudnikov's Tables of Indefinite Integrals , or as chapters in Zwillinger's CRC Standard Mathematical Tables and Formulae or Bronshtein and Semendyayev 's Guide Book to Mathematics , Handbook of Mathematics or Users' Guide to Mathematics , and other mathematical handbooks.
Other useful resources include Abramowitz and Stegun and the Bateman Manuscript Project . Both works contain many identities concerning specific integrals, which are organized with the most relevant topic instead of being collected into a separate table. Two volumes of the Bateman Manuscript are specific to integral transforms.
There are several web sites which have tables of integrals and integrals on demand. Wolfram Alpha can show results, and for some simpler expressions, also the intermediate steps of the integration. Wolfram Research also operates another online service, the Mathematica Online Integrator.
C is used for an arbitrary constant of integration that can only be determined if something about the value of the integral at some point is known. Thus, each function has an infinite number of antiderivatives .
These formulas only state in another form the assertions in the table of derivatives .
When there is a singularity in the function being integrated such that the antiderivative becomes undefined at some point (the singularity), then C does not need to be the same on both sides of the singularity. The forms below normally assume the Cauchy principal value around a singularity in the value of C , but this is not necessary in general. For instance, in {\displaystyle \int {1 \over x}\,dx=\ln \left|x\right|+C} there is a singularity at 0 and the antiderivative becomes infinite there. If the integral above were to be used to compute a definite integral between −1 and 1, one would get the wrong answer 0. This however is the Cauchy principal value of the integral around the singularity. If the integration is done in the complex plane the result depends on the path around the origin; in this case the singularity contributes − i π when using a path above the origin and i π for a path below the origin. A function on the real line could use a completely different value of C on either side of the origin as in: [ 2 ] {\displaystyle \int {1 \over x}\,dx=\ln |x|+{\begin{cases}A&{\text{if }}x>0;\\B&{\text{if }}x<0.\end{cases}}}
The following function has a non-integrable singularity at 0 for n ≤ −1 : f ( x ) = x^n .
Let f be a continuous function that has at most one zero . If f has a zero, let g be the unique antiderivative of f that is zero at the root of f ; otherwise, let g be any antiderivative of f . Then {\displaystyle \int \left|f(x)\right|\,dx=\operatorname {sgn}(f(x))g(x)+C,} where sgn( x ) is the sign function , which takes the values −1, 0, 1 when x is respectively negative, zero or positive.
This can be proved by computing the derivative of the right-hand side of the formula, taking into account that the condition on g is here for ensuring the continuity of the integral.
This gives the following formulas (where a ≠ 0 ), which are valid over any interval where f is continuous (over larger intervals, the constant C must be replaced by a piecewise constant function):
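Two representative entries of this family, reconstructed from the rule above (the original list is longer; these are given only as illustrations): {\displaystyle \int |x|\,dx={\frac {x\left|x\right|}{2}}+C,\qquad \int |ax+b|\,dx={\frac {(ax+b)\left|ax+b\right|}{2a}}+C.}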
If the function f does not have any continuous antiderivative which takes the value zero at the zeros of f (this is the case for the sine and the cosine functions), then sgn( f ( x )) ∫ f ( x ) dx is an antiderivative of f on every interval on which f is not zero, but may be discontinuous at the points where f ( x ) = 0 . For having a continuous antiderivative, one thus has to add a well-chosen step function . If we also use the fact that the absolute values of sine and cosine are periodic with period π , then we get:
Ci , Si : Trigonometric integrals , Ei : Exponential integral , li : Logarithmic integral function , erf : Error function
There are some functions whose antiderivatives cannot be expressed in closed form . However, the values of the definite integrals of some of these functions over some common intervals can be calculated. A few useful integrals are given below.
If the function f has bounded variation on the interval [ a , b ] , then the method of exhaustion provides a formula for the integral: {\displaystyle \int _{a}^{b}{f(x)\,dx}=(b-a)\sum \limits _{n=1}^{\infty }{\sum \limits _{m=1}^{2^{n}-1}{\left({-1}\right)^{m+1}}}2^{-n}f(a+m\left({b-a}\right)2^{-n}).}
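A quick numerical check of this series (an illustrative snippet; the test function and truncation depth are arbitrary choices):

```python
# Approximate the integral of f over [a, b] by truncating the
# method-of-exhaustion double series at n = depth.
def exhaustion_integral(f, a, b, depth=15):
    total = 0.0
    for n in range(1, depth + 1):
        scale = 2.0 ** (-n)
        inner = sum((-1) ** (m + 1) * f(a + m * (b - a) * scale)
                    for m in range(1, 2 ** n))
        total += scale * inner
    return (b - a) * total

# Integral of x**2 over [0, 1] is exactly 1/3.
print(exhaustion_integral(lambda x: x * x, 0.0, 1.0))  # ~0.3333...
```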
The " sophomore's dream ": ∫ 0 1 x − x d x = ∑ n = 1 ∞ n − n ( = 1.29128 59970 6266 … ) ∫ 0 1 x x d x = − ∑ n = 1 ∞ ( − n ) − n ( = 0.78343 05107 1213 … ) {\displaystyle {\begin{aligned}\int _{0}^{1}x^{-x}\,dx&=\sum _{n=1}^{\infty }n^{-n}&&(=1.29128\,59970\,6266\dots )\\[6pt]\int _{0}^{1}x^{x}\,dx&=-\sum _{n=1}^{\infty }(-n)^{-n}&&(=0.78343\,05107\,1213\dots )\end{aligned}}} attributed to Johann Bernoulli . | https://en.wikipedia.org/wiki/Integrals_and_Series |
An Integraph is a mechanical analog computing device for plotting the integral of a graphically defined function .
Gaspard-Gustave de Coriolis first described the fundamental principle of a mechanical integraph in 1836 in the Journal de Mathématiques Pures et Appliquées . [ 1 ] A full description of an integraph was published independently around 1880 by both British physicist Sir Charles Vernon Boys and Bruno Abdank-Abakanowicz , a Polish-Lithuanian mathematician/electrical engineer. [ 2 ] [ 3 ] Boys described a design for an integraph in 1881 in the Philosophical Magazine . [ 3 ] Abakanowicz developed a practical working prototype in 1878, with improved versions of the prototype being manufactured by firms such as Coradi in Zürich, Switzerland . [ 3 ] [ 4 ] [ 1 ] Customized and further improved versions of Abakanowicz's design were manufactured until well after 1900, with these later modifications being made by Abakanowicz in collaboration with M. D. Napoli, the "principal inspector of the railroad Chemin de Fer de l’Est and head of its testing laboratory". [ 1 ]
The input to the integraph is a tracing point that is the guiding point that traces the differential curve. [ 2 ] The output is defined by the path taken by a disk that rolls along the paper without slipping. The mechanism sets the angle of the output disk based on the position of the input curve: if the input is zero, the disk is angled to roll straight, parallel to the x axis on the Cartesian plane . If the input is above zero, the disk is angled slightly toward the positive y direction, such that the y value of its position increases as it rolls in that direction. If the input is below zero, the disk is angled the other way, such that its y position decreases as it rolls.
The hardware consists of a rectangular carriage which moves left to right on rollers. Two sides of the carriage run parallel to the x axis. The other two sides are parallel to the y axis. Along the trailing vertical (y axis) rail slides a smaller carriage holding a tracing point. Along the leading vertical rail slides a second smaller carriage to which is affixed a small, sharp disc, which rests and rolls (but does not slide) on the graphing paper. The trailing carriage is connected both with a point in the center of the carriage and the disc on the leading rail by a system of sliding crossheads and wires, such that the tracing point must follow the disc's tangential path.
The integraph plots (traces) the integral curve {\displaystyle Y=F(x)=\int f(x)\,dx}
when we are given the differential curve {\displaystyle y=f(x).}
The mathematical basis of the mechanism depends on the following considerations: [ 5 ] For any point ( x , y ) of the differential curve, construct the auxiliary triangle with vertices ( x , y ), ( x , 0) and ( x − 1, 0) . The hypotenuse of this right triangle intersects the X -axis making an angle the value of whose tangent is y . This hypotenuse is parallel to the tangent line of the integral curve at ( X , Y ) that corresponds to ( x , y ) .
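In symbols (a restatement of the construction just described): the hypotenuse of the auxiliary triangle has slope y /1 = y , and since it is parallel to the tangent line of the integral curve, {\displaystyle {\frac {dY}{dX}}=y(x),\qquad {\text{i.e.}}\qquad Y(X)=\int y\,dx.}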
The integraph may be used to obtain a quadrature of the circle . If the differential curve is the unit circle, the integral curve intersects the lines X = ± 1 at points that are equally spaced at a distance of π /2. [ 5 ]
Gauthier-Villars, 1886 available at Google Books | https://en.wikipedia.org/wiki/Integraph |
The Integrated Biological Detection System is a system used by the British Army and Royal Air Force for detecting chemical, biological, radiological, and nuclear agents or elements.
The Integrated Biological Detection System can provide early warning of a chemical or biological warfare attack and is in service with the United Kingdom Joint NBC Regiment. It can be installed in a container which can be mounted on a vehicle or ground dumped. It is also able to be transported by either a fixed-wing aircraft or by helicopter.
The system comprises
A U.S. military system with a similar purpose and a similar name is the Biological Integrated Detection System (BIDS).
This United Kingdom military article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Integrated_Biological_Detection_System |
IBIS-2 is version 2 of the land-surface model Integrated Biosphere Simulator (IBIS), which includes several major improvements and additions to the prototype model developed by Foley et al. [1996]. IBIS was designed to explicitly link land surface and hydrological processes, terrestrial biogeochemical cycles , and vegetation dynamics within a single physically consistent framework. [ 1 ]
The model considers transient changes in vegetation composition and structure in response to environmental change and is, therefore, classified as a Dynamic Global Vegetation Model ( DGVM ). [ 2 ] This new version of IBIS has improved representations of land surface physics, plant physiology , canopy phenology , plant functional type (PFT) differences, and carbon allocation. Furthermore, IBIS-2 includes a new belowground biogeochemistry submodel, which is coupled to detritus production (litterfall and fine root turnover). All processes are organized in a hierarchical framework and operate at different time steps, ranging from 60 min to 1 year. Such an approach allows for explicit coupling among ecological, biophysical, and physiological processes occurring on different timescales.
The land surface module is based on the land surface transfer model (LSX) package of Thompson and Pollard, [ 3 ] and simulates the energy, water, carbon, and momentum balance of the soil-vegetation-atmosphere system. The model represents two vegetation canopies (e.g., trees versus shrubs and grasses), eight soil layers, and three layers of snow (when required). The solar radiative transfer scheme of IBIS-2 has been simplified in comparison with LSX and IBIS-1; sunlit and shaded fractions of the canopies are no longer treated separately. The model now follows the approach of Sellers et al. [1986] and Bonan [1995]. Infrared radiation is simulated as if each vegetation layer were a semitransparent plane; canopy emissivity depends on foliage density. Another difference is that IBIS-2 uses an empirical linear function of wind speed to estimate turbulent transfer between the soil surface and the lower vegetation canopy, whereas IBIS-1 and LSX use a logarithmic wind profile. The total evapotranspiration from the land surface is treated as the sum of three water vapor fluxes: evaporation from the soil surface, evaporation of water intercepted by vegetation canopies, and canopy transpiration.
IBIS simulates the variations of heat and moisture in the soil. The eight layers are described in terms of soil temperature, volumetric water content and ice content. [ 4 ] All the processes occurring in the soil are influenced by the soil texture and the amount of organic matter within the soil. One difference in the physiological processes from the previous version of the model is that IBIS-1 calculates the maximum Rubisco carboxylation capacity (Vm) by optimizing the net assimilation of carbon by the leaf, [ 5 ] whereas IBIS-2 prescribes constant values of Vm for the plant functional types (PFT). To scale photosynthesis and transpiration from the leaf level to the canopy level, IBIS-2 assumes that the net photosynthesis within the canopy is proportional to the APAR within it.
In the original version of IBIS [ 6 ] there was no explicit below ground biogeochemistry model to complete the flow of carbon between the vegetation, detritus, and soil organic matter pools. IBIS-2 includes a new soil biogeochemistry module. [ 7 ] | https://en.wikipedia.org/wiki/Integrated_Biosphere_Simulator
The Integrated Carbon Observation System (ICOS) is a research infrastructure to quantify the greenhouse gas balance of Europe and adjacent regions. In November 2015 it received the international legal status of ERIC ( European Research Infrastructure Consortium ) by decision of the European Commission. [ 1 ] It is recognized by The European Strategy Forum on Research Infrastructures (ESFRI) as a landmark European research infrastructure. It consists of a harmonized network of almost 200 long-term observation sites for the domains of atmosphere , ecosystems and ocean . The network is coordinated through its Head Office, the central data portal and central facilities including an atmosphere, ecosystem and ocean thematic center, and central analytical laboratories. [ 2 ] [ 3 ]
ICOS provides the essential long-term observations required to understand the present state and predict future behavior of the global carbon cycle and greenhouse gas emissions . It monitors and assesses the effectiveness of carbon sequestration and/or greenhouse gases emission reduction activities on global atmospheric composition levels, including attribution of sources and sinks by region and sector.
The highly standardized network offers improved access to data and enables the development of flux products for research and political application. ICOS is a state-of-the-art facility for the European research community. It contributes to the European share of global greenhouse gas observations under Group on Earth Observations (GEO), World Meteorological Organization GAW and GCOS programs.
ICOS consists of a network of standardized, long-term, high-precision integrated monitoring of atmospheric greenhouse gas concentrations and fluxes. The infrastructure integrates terrestrial and atmospheric observations at various sites into a single, coherent, highly precise dataset. This data allows a unique regional top-down assessment of fluxes from atmospheric data, and a bottom-up assessment from ecosystem measurements and fossil fuel inventories. The target is a daily mapping of sources and sinks at scales down to about 10 km, as a basis for understanding the exchange processes between the atmosphere, the terrestrial surface and the ocean . ICOS contributes to the implementation of the Integrated Global Carbon Observation System IGCO. [ 4 ]
The synergy between the atmospheric concentration measurements on the one hand, and the knowledge of local ecosystem fluxes on the other, has proven effective in reducing the uncertainties of carbon assessments. However, in Europe, observatories are all managed differently in each country and data are not homogeneously processed.
The value-added impact of the infrastructure is enhanced visibility and dissemination of European greenhouse gas data and products that are both long-term and carefully calibrated. ICOS meets the data needs of carbon cycle and climate researchers as well as those of politicians and the general public. ICOS serves as the backbone for users engaged in developing data assimilation models of greenhouse gas sources and sinks, namely reverse modelling , which allows the deduction of surface carbon flux patterns.
A common data centre, the ICOS Carbon Portal, provides free access to all ICOS data, as well as to links with inventory data, and outreach material. [ 5 ] This portal allows the production of web based tools for the survey of sources and sinks in near real-time. ICOS delivers the information in near real-time with a quantification of the uncertainty associated with the results due to the use of several different models using different methodologies.
ICOS enables Europe to be a key global player for in-situ observations of greenhouse gases, data processing and user-friendly access to data products for validation of remote sensing products, scientific assessments, modelling and data assimilation.
ICOS currently has 16 member states and is in operational mode, with stations being certified for operation according to strict protocols and quality parameters. By the end of 2024, ICOS had 138 out of the 179 stations certified ('labelled' as either Class 1, Class 2 or associated stations), with greenhouse gas concentrations and fluxes determined on a routine basis. [ 6 ]
ICOS member states [ 7 ] | https://en.wikipedia.org/wiki/Integrated_Carbon_Observation_System |
Integrated Geo Systems (IGS) is a computational architecture system developed for managing geoscientific data through systems and data integration .
Geosciences often involve large volumes of diverse data which have to be processed by computation- and graphics-intensive applications . The processes involved in handling these large datasets are often so complex that no single application software can perform all the required tasks. Specialized applications have emerged for specific tasks. To get the required results, it is necessary that all applications software involved in the various stages of data processing, analysis and interpretation effectively communicate with each other by sharing data.
IGS provides a framework for maintaining an electronic workflow between various geoscience software applications through data connectivity.
The main components of IGS are:
This computing article is a stub . You can help Wikipedia by expanding it .
This article about a scientific organization is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Integrated_Geo_Systems |
The Integrated Guided Missile Development Programme ( IGMDP ) was an Indian Ministry of Defence programme for the research and development of the comprehensive range of missiles . The programme was managed by the Defence Research and Development Organisation (DRDO) and Ordnance Factories Board [ 1 ] in partnership with other Indian government political organisations. [ 2 ] The project started in 1982–83 under the leadership of Abdul Kalam who oversaw its ending in 2008 after these strategic missiles were successfully developed. [ 3 ]
On 8 January 2008, the DRDO formally announced the successful completion of the guided missile programme, with its design objectives achieved, since most of the missiles in the programme had been developed and inducted by the Indian Armed Forces . [ 4 ]
By the start of the 1980s, the Defence Research and Development Laboratory (DRDL) had developed competence and expertise in the fields of propulsion, navigation and manufacture of aerospace materials based on the Soviet rocketry technologies . Thus, India's political leadership, which included Prime Minister Indira Gandhi , Defence Minister R. Venkataraman and V.S. Arunachalam , the Scientific Advisor to the Defence Minister, decided that all these technologies should be consolidated.
This led to the birth of the Integrated Guided Missile Development Programme. Dr. Abdul Kalam , who had previously been the project director for the SLV-3 programme at the Indian Space Research Organisation (ISRO), was inducted as the DRDL Director in 1983 to conceive and lead it. While the scientists proposed the development of each missile consecutively, the Defence Minister R. Venkataraman asked them to reconsider and develop all the missiles simultaneously. Thus, four projects, to be pursued concurrently, were born under the IGMDP:
The Agni missile was initially conceived in the IGMDP as a technology demonstrator project in the form of a re-entry vehicle, and was later upgraded to a ballistic missile with different ranges. [ 2 ] As part of this program, the Interim Test Range at Balasore in Odisha was also developed for missile testing. [ 6 ]
After India test-fired the first Prithvi missile in 1988, and the Agni missile in 1989, the Missile Technology Control Regime (then an informal grouping established in 1987 by Canada, France, Germany, Italy, Japan, the United Kingdom and the United States) decided to restrict access to any technology that would help India in its missile development program. To counter the MTCR , the IGMDP team formed a consortium of DRDO laboratories, industries and academic institutions to build these sub-systems, components and materials. Though this slowed down the progress of the program, India successfully developed indigenously all the restricted components denied to it by the MTCR. [ 6 ]
The start of India's missile program influenced Pakistan to scramble its resources to meet the challenge. Like India, Pakistan faced hurdles in operationalizing its program, since education in the space sciences had never been pursued there. It took Pakistan decades of expensive trial and error before its program became feasible for military deployment.
The Prithvi missile (from Sanskrit पृथ्वी pṛthvī "Earth") is a family of tactical surface-to-surface short-range ballistic missiles (SRBM) and is India's first indigenously developed ballistic missile. Development of the Prithvi began in 1983, and it was first test-fired on 25 February 1988 from Sriharikota, SHAR Centre, Pottisreeramulu Nellore district, Andhra Pradesh. It has a range of 150 to 300 km, depending on the variant. The land variant is called Prithvi while the naval operational variant of the Prithvi I and Prithvi III class missiles is code-named Dhanush (meaning "Bow"). Both variants are used against surface targets.
The Prithvi is said to have its propulsion technology derived from the Soviet SA-2 surface-to-air missile. [ 7 ] Variants make use of either liquid or both liquid and solid fuels. Developed as a battlefield missile, it could carry a nuclear warhead in its role as a tactical nuclear weapon .
The initial project framework of the IGMDP envisioned the Prithvi missile as a short-range ballistic missile with variants for the Indian Army, Indian Air Force and the Indian Navy. [ 8 ] Over the years the Prithvi missile specifications have undergone a number of changes. The Prithvi I class of missiles were inducted into the Indian Army in 1994, and it is reported that Prithvi I missiles are being withdrawn from service, being replaced with Prahar missiles. [ 9 ] Prithvi II missiles were inducted in 1996. Prithvi III class has a longer-range of 350 km, and was successfully test fired in 2004. [ 10 ]
A technology demonstrator for re-entry technology called Agni was added to IGMDP as Prithvi was unable to be converted to a longer ranged missile. The first flight of Agni with re-entry technology took place in 1989. [ 11 ] The re-entry system used resins and carbon fibres in its construction and was able to withstand a temperature of up to 3000 °C. [ 11 ] [ 12 ] The technologies developed in this project were eventually used in the Agni series of missiles. [ 13 ]
Trishul ( Sanskrit : त्रिशूल, meaning trident ) is the name of a short range surface-to-air missile developed by India as a part of the Integrated Guided Missile Development Program. It has a range of 12 km and, according to differing reports, is fitted with a warhead of 5.5 kg or 15 kg. Designed to be used against low-level (sea skimming) targets at short range, the system has been developed to defend naval vessels against missiles and also as a short-range surface-to-air missile on land. The weight of the missile is 130 kg and its length is 3.5 m. [ 14 ] India officially shut down the project on 27 February 2008. [ 15 ] In 2003, Defence Minister George Fernandes had indicated that the Trishul missile had been de-linked from user service and would be continued as a technology demonstrator.
Akash (Sanskrit: आकाश meaning Sky ) is a medium-range surface-to-air missile developed as part of India's Integrated Guided Missile Development Programme to achieve self-sufficiency in the area of surface-to-air missiles. It is the most expensive missile project ever undertaken by the Union government in the 20th century. Development costs skyrocketed to almost US$ 120 million, which is far more than other similar systems. [ 15 ]
Akash is a medium-range surface-to-air missile with an intercept range of 30 km. It has a launch weight of 720 kg, a diameter of 35 cm and a length of 5.8 metres. Akash flies at supersonic speed, reaching around Mach 2.5. It can reach an altitude of 18 km. A digital proximity fuse is coupled with a 55 kg pre-fragmented warhead, while the safety arming and detonation mechanism enables a controlled detonation sequence. A self-destruct device is also integrated. It is propelled by a solid fuelled booster stage. The missile has a terminal guidance system capable of working through electronic countermeasures . The entire Akash SAM system allows for attacking multiple targets (up to 4 per battery). The Akash missile's use of ramjet propulsion system allows it to maintain its speed without deceleration, unlike the Patriot missiles . [ 16 ] The missile is supported by a multi-target and multi-function phased array fire control radar called the ' Rajendra ' with a range of about 80 km in search, and 60 km in terms of engagement. [ 17 ]
The missile is completely guided by the radar, without any active guidance of its own. This allows it greater capability against jamming as the aircraft self-protection jammer would have to work against the high-power Rajendra, and the aircraft being attacked is not alerted by any terminal seeker on the Akash itself.
Design of the missile is similar to that of the SA-6 , with four long tube ramjet inlet ducts mounted mid-body between wings. For pitch/yaw control, four clipped triangular moving wings are mounted on the mid-body. For roll control, four inline clipped delta fins with ailerons are mounted before the tail. However, the internal schema shows a completely modernised layout, including an onboard computer with specially optimised trajectories, and an all-digital proximity fuse.
The Akash system meant for the Indian Army uses the T-72 tank chassis for its launcher and radar vehicles. The Rajendra derivative for the Army is called the Battery Level Radar-III. The Air Force version uses an Ashok Leyland truck platform to tow the missile launcher, while the Radar is on a BMP-2 chassis and is called the Battery Level Radar-II. In either case, the launchers carry three ready-to-fire Akash missiles each. The launchers are automated, autonomous and networked to a command post and the guidance radar. They are slewable in azimuth and elevation. The Akash system can be deployed by rail, road or air.
The first test flight of Akash missile was conducted in 1990, with development flights up to March 1997.
The Indian Air Force (IAF) has initiated the process to induct the Akash surface-to-air missiles developed as a part of the Integrated Guided Missile Development Programme. The Multiple target handling capability of Akash weapon system was demonstrated by live firing in a C4I environment during the trials. Two Akash missiles intercepted two fast moving targets in simultaneous engagement mode in 2005 itself. The Akash System's 3-D central acquisition radar (3-D car) group mode performance was then fully established. [ 18 ] [ 19 ]
In December 2007, the IAF completed user trials for the Akash missile system. The trials, which were spread over ten days, were successful, and the missile hit its target on all five occasions. Before the ten-day trial at Chandipur, the Akash system's ECCM Evaluation tests were carried out at Gwalior Air force base while mobility trials for the system vehicles were carried out at Pokhran. The IAF had evolved the user Trial Directive to verify the Akash's consistency in engaging targets. The following trials were conducted: Against low-flying near-range target, long-range high-altitude target, crossing and approaching target and ripple firing of two missiles from the same launcher against a low-altitude receding target. [ 20 ] Following this, the IAF declared that it would initiate the induction of 2 squadrons strength (each squadron with 2 batteries) of this missile system, to begin with. Once deliveries are complete, further orders would be placed to replace retiring SA-3 GOA (Pechora) SAM systems. [ 21 ] [ 22 ] In February 2010, the Indian Air Force ordered six more squadrons of the Akash system, taking orders to eight of the type. The Indian Army is also expected to order the Akash system.
Nag ( Sanskrit : नाग meaning cobra ) is India's third generation " fire-and-forget " anti-tank missile . It is an all weather, top attack missile with a range of 0.5 to 4 km.
The missile uses an 8 kg high-explosive anti-tank (HEAT) tandem warhead capable of defeating modern armour including explosive reactive armour (ERA) and composite armour . Nag uses imaging infra-red (IIR) guidance with day and night capability. Mode of launch for the IIR seeker is LOBL (lock-on before launch). Nag can be mounted on an infantry vehicle; a helicopter launched version will also be available with integration work being carried out with the HAL Dhruv .
Separate versions for the Army and the Air Force are being developed. For the Army, the missiles will be carried by specialist carrier vehicles (NAMICA, for Nag Missile Carrier) equipped with a thermographic camera for target acquisition. NAMICA is a modified BMP-2 infantry fighting vehicle, licence-produced as "Sarath" in India. The carriers are capable of carrying four ready-to-fire missiles in the observation/launch platform, which can be elevated, with more missiles available for reload within the carrier. For the Air Force version, "Helina", a nose-mounted thermal imaging system has been developed for guiding the missile's trajectory. The missile has a completely fiberglass structure and weighs around 42 kg.
Nag was test fired for the 45th time on 19 March 2005 from the Test Range at Ahmednagar ( Maharashtra ), signalling the completion of the developmental phase. It will now enter the production phase, subject to user trials and acceptance by the Indian Army .
Further versions of the missile may make use of an all-weather milli-metre wave (MMW) seeker as an additional option. This seeker has reportedly been developed and efforts are on to integrate it into the missile. | https://en.wikipedia.org/wiki/Integrated_Guided_Missile_Development_Programme |
The Integrated Microbial Genomes (IMG) system is a genome browsing and annotation platform developed by the U.S. Department of Energy (DOE) - Joint Genome Institute . [ 2 ] [ 3 ] IMG contains all the draft and complete microbial genomes sequenced by the DOE-JGI, integrated with other publicly available genomes (including Archaea, Bacteria, Eukarya, Viruses and Plasmids). IMG provides users a set of tools for comparative analysis of microbial genomes along three dimensions: genes, genomes and functions. Users can select genes, genomes and functions and transfer them into the comparative analysis carts based upon a variety of criteria. IMG also includes a genome annotation pipeline that integrates information from several tools, including KEGG , Pfam , InterPro , and the Gene Ontology , among others. Users can also type or upload their own gene annotations (called MyIMG gene annotations) and the IMG system will allow them to generate Genbank or EMBL format files containing these annotations. [ citation needed ]
In successive releases IMG has expanded to include several domain-specific tools. The Integrated Microbial Genomes with Microbiome Samples (IMG/M) system is an extension of the IMG system providing a comparative analysis context of assembled metagenomic data with the publicly available isolate genomes. [ 4 ] [ 5 ] The Integrated Microbial Genomes- Expert Review (IMG/ER) system provides support to individual scientists or group of scientists for functional annotation and curation of their microbial genomes of interest. [ 2 ] Users can submit their annotated genomes (or request the IMG automated annotation pipeline to be applied first) into IMG-ER and proceed with manual curation and comparative analysis in the system, through secure (password protected) access. The IMG-HMP is focused on analysis of genomes related to the Human Microbiome Project (HMP) in the context of all publicly available genomes in IMG. [ 6 ] The IMG-ABC system is a system for bacterial secondary metabolism analysis and targeted biosynthetic gene cluster discovery. [ 7 ] The IMG-VR system (with the recent updated version IMG/VR v.2.0) is the largest publicly available database for viral genomes and metagenomes. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Integrated_Microbial_Genomes_System |
Integrated Operations in the High North ( IOHN , IO High North or IO in the High North ) is a collaboration project that, during a four-year period starting in May 2008, is designing, implementing and testing a Digital Platform for what in the upstream oil and gas industry is called the next or second generation of Integrated Operations . [ 1 ] The work on the Digital Platform is focussed on the capture, transfer and integration of real-time data from remote production installations to the decision makers. A risk evaluation across the whole chain is also included. The platform is based on open standards and enables a higher degree of interoperability . Requirements for the digital platform come from use cases defined within the Drilling and Completion , Reservoir and Production and Operations and Maintenance domains. The platform will subsequently be demonstrated through pilots within these three domains. [ 2 ]
The project was a sidecar initiative for Statoil’s Global Operations Data Integration Project. This was part of a very ambitious Master Plan IT (MapIT), which also included the Real Time Visualization (RTV) tender. The RTV tender aimed to be an ontology-aware information workspace for a wide range of disciplines, as per the IO Capability Stack. Additionally, the sidecar project aimed to increase the semantic web knowledge among suppliers in the industry.
This new platform is considered an important enabler for safe and sustainable operations in remote, vulnerable and hazardous areas such as the High North , [ 3 ] [ 4 ] [ 5 ] [ 6 ] but the technology is clearly also applicable in more general applications.
The IOHN project consortium consists of 23 participants, [ 7 ] including operators, service providers, software vendors, technology providers, research institutions and universities. In addition, the Norwegian Defence Force is working with the project to resolve common infrastructural and interoperability challenges. [ 2 ]
The project is managed by Det Norske Veritas (DNV) . [ 8 ] Nils Sandsmark was the project manager during the initiation and start-up phase. Frédéric Verhelst took over as project manager from the beginning of 2009. [ 9 ]
Financing comes from the participants and the Research Council of Norway (RCN) for parts of the project (GOICT [ 10 ] and AutoConRig [ 11 ] [ 12 ] ).
The consortium consists of the following 22 participants [ 7 ] (in alphabetical order): | https://en.wikipedia.org/wiki/Integrated_Operations_in_the_High_North |
The e-Government Metadata Standard , e-GMS , is the UK e-Government Metadata Standard. It defines how UK public sector bodies should label content such as web pages and documents to make such information more easily managed, found and shared.
The metadata standard is an application profile of the Dublin Core Metadata Element Set and consists of mandatory, recommended and optional metadata elements such as title, date created and description.
The e-GMS formed part of the e-Government Metadata Framework (e-GMF) and eGovernment Interoperability Framework (e-GIF). [ 1 ] [ 2 ] [ 3 ] The standard helps provide a basis for the adoption of XML schemas for data exchange. [ 4 ]
The current standard defines twenty-five elements. Each has a formal description (taken from Dublin Core where possible) and an obligation rating of "mandatory", "mandatory if applicable", "recommended" or "optional":
Each element also has a statement of purpose, notes, clarification, refinements (such as sub-elements), examples of use, HTML syntax, encoding schemes and mappings to other metadata standards where applicable. [ 5 ]
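As a sketch of how such element labels might be emitted in practice (the element names and values below are illustrative assumptions following the Dublin Core "DC." naming convention, not normative e-GMS syntax):

```python
# Emit metadata as HTML <meta> elements, Dublin-Core style.
record = {
    "DC.title": "Example service page",   # e.g. a mandatory element
    "DC.creator": "Example Council",
    "DC.date.created": "2006-08-01",
    "DC.subject": "Local government",     # value from a controlled vocabulary such as IPSV
}

for name, content in record.items():
    print(f'<meta name="{name}" content="{content}" />')
```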
The first version of the standard comprising simple Dublin Core elements was first published with the e-GMF. E-GMS was first published as a separate document by the Office of the e-Envoy in April 2002 and contained twenty-one elements. [ 6 ] Version 2 was released in December 2003 and added separate elements for Addressee, Aggregation, Digital Signature and Mandate. [ 7 ] Version 2 also added further refinements and introduced the e-GMS Audience Encoding Scheme (e-GMSAES) and e-GMS Type Encoding Scheme (e-GMSTES). [ 8 ] [ 9 ] Version 3 was released in April 2004 and incorporated PRONOM within the format and preservation elements. [ 10 ] The most recent version, 3.1, was published in August 2006 by the Cabinet Office e-Government Unit following the closure of the Office of the e-Envoy. [ 5 ] It now forms part of the UK Government's Information Principles , supporting the principle that "Information is standardised and linkable". [ 11 ] Responsibility for maintenance and development of the standard has since moved from central to local government.
The Integrated Public Sector Vocabulary is a controlled vocabulary for describing subjects and was first released in April 2005, building on developments of the subject element introduced with version 3.0 of e-GMS. It merged three earlier lists: the GCL ( Government Category List ), LGCL ( Local Government Category List ) and the seamlessUK taxonomy. [ 12 ] [ 13 ] It had 2732 preferred terms and 4230 non-preferred terms. [ 10 ] [ 14 ]
The current version, version 2, was released in April 2006. It is much bigger, with 3080 preferred terms and 4843 non-preferred terms, and covers internal-facing as well as public-oriented topics. The Internal Vocabulary was released as a separate subset containing 756 preferred terms and 1333 non-preferred terms. An abridged version of the IPSV was also released containing 549 preferred terms and 1472 non-preferred terms and remains compliant with the e-GMS. [ 15 ]
The Public Sector Information Domain – Metadata Standards Working Group subsequently agreed to recommend this change to eGMS on the use of subject metadata from October 2012:
Where you identify value in using the subject element of metadata it should be populated from a controlled vocabulary that is used consistently across the sector to which the information relates. In preference, vocabularies should be published according to SKOS and publicly available for free re-use, which then enables tagged information to be further grouped, and associated, by an agent. SKOS also encourages cross-references to be made across otherwise unconnected vocabularies.
This change reflects advances in searching techniques since the introduction of eGMS, and modern approaches to cataloguing and cross-referencing information against evolving terminologies.
IPSV is no longer directly referenced and mandated within eGMS, allowing the publisher to consider if subject tagging is valuable, and to use the vocabulary that best describes their business. Therefore, the mandate to use IPSV no longer applies, although IPSV remains an option.
The standard was discontinued in January 2019. [ 16 ] The Local Government Association esd-toolkit has since continued hosting IPSV and current URIs will remain valid. [ 17 ] [ 18 ]
E-GMS has been mapped to the IEEE / LOM . [ 5 ] IPSV has been mapped to the Local Government Classification Scheme. [ 19 ]
Examples of UK government sponsored GovTalk XML standards that use e-GMS include | https://en.wikipedia.org/wiki/Integrated_Public_Sector_Vocabulary |
Integrated Publishing System is a system created in 1982 [ citation needed ] for publishing multilingual literature.
The software was developed by the Watchtower Bible and Tract Society on an IBM mainframe computer using an Autologic typesetter. IPS was acquired by IBM, which intended to use the system to increase its hold on the publishing industry. [ 1 ]
The system went on to have some success commercially, being used to print the Encyclopædia Britannica . [ 2 ] [ unreliable source? ] | https://en.wikipedia.org/wiki/Integrated_Publishing_System |
Integrated Software Dependent Systems (ISDS) is an offshore oil IT system standard (DNV-OS-D203) and recommended practice guideline (DNV-RP-D201) covering systems and software verifications and classification of any integrated system that utilizes extensive software control. [ 1 ] The ISDS Recommended Practice (DNV-RP-D201) was launched in 2008 by Det Norske Veritas (DNV), the Norwegian classification society . DNV Offshore Standard OS-D203 launched in April 2010.
Since the ISDS standard was first published by DNV, it has been applied by several oil companies, equipment suppliers, ship, and rig owners. The ISDS standard focuses on how to set up and run a project and how to develop system and software quality assurance processes that will last the lifetime of the unit (ship, rig etc.). It provides a framework for working systematically to achieve the required reliability, availability, maintainability, and safety for the integrated unit of software dependent systems.
The process typically starts when owners are specifying their requirements, either for a new project or an enhancement to an existing system. In collaboration with DNV specialists, the owner can assess the integrator and the suppliers to ensure they have the prerequisites for delivering good quality software. One of the innovations of ISDS is that it assigns systems and software responsibilities to one or more of the roles: owner, operator, system integrator, suppliers, and independent verifier.
Another important feature of ISDS is that it requires the designation of a system integrator. This can be the shipbuilder, the major automation supplier, or a specialized contractor. The ISDS defines the activities to be performed by the system integrator. These activities focus on managing requirements and interfaces among the different systems.
The ISDS-required practices for suppliers focus on ensuring that software quality is built into vendor’s products through systematic reviews, inspections, and testing. All of these requirements are generally accepted good practices in software engineering. Nothing revolutionary is demanded.
Among the rig-owners, Songa Offshore , Seadrill and Dolphin Drilling have been early adopters of the ISDS approach. DNV conducted a pilot project of the recommended practice version of ISDS with Seadrill (in Houston) in 2009. Several improvements were made to Seadrill's new build and operations practices as a result of this initiative, and a story on this has been published in Offshore Engineer . [ 2 ]
DNV has been engaged with Dolphin Drilling in an effort that will lead to the issuance of the first ISDS class certificate; see the article by Steve Marshall in Upstream Online. [ 3 ]
DNV is engaged by the Daewoo Ship and Marine Engineering (DSME), Samsung Heavy Industries (SHI) and Hyundai Heavy Industries (HHI) yards in South Korea, for drilling units they are building for Songa Offshore, Fred Olsen Energy (Dolphin Drilling), Statoil and Diamond Drilling . The owners have specified a full scope for DNV follow-up on ISDS, including systems for emergency shutdown , fire and gas , BOP control, drilling control, pipe/riser handling, heave compensation and tensioning, bulk storage, drilling fluid circulation, cementing, dynamic positioning , power management and integrated automation. [ 4 ]
In September 2013, DNV announced the contract with Diamond Drilling, the first American rig-owner to apply ISDS for a new-build project. [ 5 ]
The ISDS methodology has been developed starting with best industry practices from aerospace, telecom and automotive industries, and adapting the requirements to fit the offshore and maritime domains. An article published in Oil & Gas Journal gives an industry perspective to ISDS. [ 6 ]
In July 2015, the Songa Equinox , the first of Songa Offshore 's four new sixth generation Cat-D semisubmersible rigs, met the requirements of the integrated software dependent systems (ISDS) standard (DNV-OS-D203) to prevent software glitches. The aim was to enable full tracking of the quality and version control of all integrated software systems, so that the yard and the user know the status of all systems, the latest updates, and whether any still require close-out at the yard, at any given time. Noticeable improvements to the typical complex cyber-dependent vessel newbuilding lifecycle were observed. [ 7 ] | https://en.wikipedia.org/wiki/Integrated_Software_Dependent_System
Integrated Software for Imagers and Spectrometers ( Isis ) is a specialized software package developed by the USGS to process images and spectra collected by current and past NASA planetary missions sent to Earth's Moon, Mars, Jupiter, Saturn, and other solar system bodies.
The history of ISIS began in 1971 at the United States Geological Survey (USGS) in Flagstaff, Arizona.
Isis was developed in 1989, primarily to support the Galileo NIMS instrument . [ 5 ] [ 6 ] [ 7 ]
It contains standard image processing capabilities (such as image algebra, filters, statistics) for both 2D images and 3D data cubes, as well as mission-specific data processing capabilities and cartographic rendering functions. [ 8 ]
Isis defines a family of related formats used by the USGS Planetary Cartography group to store and distribute planetary imagery data.
This graphics software –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Integrated_Software_for_Imagers_and_Spectrometers |
Integrated assessment modelling ( IAM ) or integrated modelling ( IM ) [ a ] is a term used for a type of scientific modelling that tries to link main features of society and economy with the biosphere and atmosphere into one modelling framework. The goal of integrated assessment modelling is to accommodate informed policy-making, usually in the context of climate change [ 2 ] though also in other areas of human and social development. [ 3 ] While the detail and extent of integrated disciplines varies strongly per model, all climatic integrated assessment modelling includes economic processes as well as processes producing greenhouse gases. [ 4 ] Other integrated assessment models also integrate other aspects of human development such as education, [ 5 ] health, [ 6 ] infrastructure, [ 7 ] and governance. [ 8 ]
These models are integrated because they span multiple academic disciplines, including economics and climate science and, for more comprehensive models, also energy systems, land-use change, agriculture, infrastructure, conflict, governance, technology, education, and health. The word assessment comes from the use of these models to provide information for answering policy questions. [ 9 ] To quantify these integrated assessment studies, numerical models are used. Integrated assessment modelling does not provide predictions for the future but rather estimates what possible scenarios look like. [ 9 ]
There are different types of integrated assessment models. One classification distinguishes between, on the one hand, models that quantify future developmental pathways or scenarios and provide detailed sectoral information on the complex processes modelled (here called process-based models) and, on the other hand, models that aggregate the costs of climate change and climate change mitigation to estimate the total costs of climate change. [ 4 ] A second classification distinguishes models that extrapolate verified patterns (via econometric equations) from models that determine (globally) optimal economic solutions from the perspective of a social planner, assuming (partial) equilibrium of the economy. [ 10 ] [ 11 ]
The Intergovernmental Panel on Climate Change (IPCC) has relied on process-based integrated assessment models (PB-IAMs [ 13 ] ) to quantify mitigation scenarios. [ 14 ] [ 15 ] They have been used to explore different pathways for staying within climate policy targets, such as the 1.5 °C target agreed upon in the Paris Agreement. [ 16 ] Moreover, these models have underpinned research including energy policy assessment [ 17 ] and simulation of the Shared Socioeconomic Pathways. [ 18 ] [ 19 ] Notable modelling frameworks include IMAGE, [ 20 ] MESSAGEix, [ 21 ] AIM/CGE, [ 22 ] GCAM, [ 23 ] REMIND-MAgPIE [ 24 ] [ 25 ] and WITCH-GLOBIOM. [ 26 ] [ 27 ] While these scenarios are highly policy-relevant, they should be interpreted with care. [ 28 ]
Non-equilibrium models include [ 29 ] those based on econometric equations and evolutionary economics (such as E3ME) [ 30 ] and agent-based models (such as the DSK model). [ 11 ] These models typically assume neither rational, representative agents nor long-term market equilibrium. [ 29 ]
Cost-benefit integrated assessment models are the main tools for calculating the social cost of carbon, or the marginal social cost of emitting one more tonne of carbon (as carbon dioxide) into the atmosphere at any point in time. [ 31 ] For instance, the DICE, [ 32 ] PAGE, [ 33 ] and FUND [ 34 ] models have been used by the US Interagency Working Group to calculate the social cost of carbon, and their results have been used for regulatory impact analysis. [ 35 ]
This type of modelling is carried out to find the total cost of climate impacts, which are generally considered a negative externality not captured by conventional markets. In order to correct such a market failure, for instance by using a carbon tax, the cost of emissions is required. [ 31 ] However, estimates of the social cost of carbon are highly uncertain [ 36 ] and will remain so for the foreseeable future. [ 37 ] It has been argued that "IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory, and can fool policy-makers into thinking that the forecasts the models generate have some kind of scientific legitimacy". [ 38 ] Still, it has been argued that attempting to calculate the social cost of carbon is useful for gaining insight into the effect of certain processes on climate impacts, as well as for better understanding one of the determinants of international cooperation in the governance of climate agreements. [ 36 ]
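To make the social-cost-of-carbon logic concrete, the following deliberately toy Python sketch discounts the stream of marginal damages caused by emitting one extra tonne of CO2 today. It is not a reproduction of DICE, PAGE, or FUND; the warming coefficient, linear damage slope, discount rate, and horizon are all illustrative assumptions.

```python
# Toy social cost of carbon: discounted marginal damages of one tonne.
# All parameter values are illustrative assumptions, not calibrated.
TCRE = 1.6e-12           # deg C of warming per tonne of CO2 (assumed)
DAMAGE_PER_DEG = 1.0e12  # $ of annual global damage per deg C (assumed linear)
DISCOUNT_RATE = 0.03     # constant discount rate (assumed)
HORIZON_YEARS = 300      # years over which damages are summed

def social_cost_of_carbon() -> float:
    """Sum the discounted annual damages caused by one extra tonne."""
    annual_marginal_damage = TCRE * DAMAGE_PER_DEG  # $/tonne/year, persistent
    return sum(annual_marginal_damage / (1 + DISCOUNT_RATE) ** t
               for t in range(1, HORIZON_YEARS + 1))

print(f"Toy SCC: ${social_cost_of_carbon():.2f} per tonne of CO2")
```

Even this toy version exhibits the key sensitivity discussed above: halving the discount rate roughly doubles the resulting value.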
Integrated assessment models have not been used solely to assess environmental or climate change-related fields. They have also been used to analyze patterns of conflict, the Sustainable Development Goals, [ 39 ] trends across issue areas in Africa, [ 40 ] and food security. [ 41 ]
All numerical models have shortcomings. Integrated Assessment Models for climate change, in particular, have been severely criticized for problematic assumptions that led to greatly overestimating the cost/benefit ratio for mitigating climate change while relying on economic models inappropriate to the problem. [ 42 ] In 2021, the integrated assessment modeling community examined gaps in what was termed the "possibility space" and how these might best be consolidated and addressed. [ 43 ] In an October 2021 working paper, Nicholas Stern argues that existing IAMs are inherently unable to capture the economic realities of the climate crisis under its current state of rapid progress. [ 44 ] : §6.2
Models employing optimisation methodologies have received numerous critiques; a prominent one draws on the ideas of dynamical systems theory, which understands systems as changing with no deterministic pathway or end-state. [ 45 ] This implies a very large, or even infinite, number of possible states of the system in the future, with aspects and dynamics that cannot be known to observers of the current state of the system. [ 45 ] This type of uncertainty about the future states of an evolutionary system has been referred to as ‘radical’ or ‘fundamental’ uncertainty. [ 46 ] This has led some researchers to call for more work on the broader array of possible futures, and for modelling research on alternative scenarios that have yet to receive substantial attention, for example post-growth scenarios. [ 47 ] | https://en.wikipedia.org/wiki/Integrated_assessment_modelling
Integrated asset modelling ( IAM ) is the generic term used in the oil industry for computer modelling of both the subsurface and the surface elements of a field development. Historically, the reservoir has always been modelled separately from the surface network and the facilities, and capturing the interaction between those two or more standalone models required several time-consuming iterations. For example, a change in water breakthrough changes the deliverability of the surface network, which in turn accelerates or decelerates production from the reservoir. To move through this lengthy process more quickly, the industry has slowly [ 1 ] been adopting a more integrated approach, which immediately captures the constraints imposed by the infrastructure on the network.
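As a minimal illustration of such coupling, the hedged Python sketch below joins a tank-type reservoir proxy to a surface network whose backpressure grows with rate; the coupled rate is found where inflow and backpressure balance. All parameter values, units, and the bisection scheme are illustrative assumptions, not any vendor's implementation.

```python
# Toy coupled reservoir/surface-network model. All values are assumed.
PI = 20.0         # productivity index, stb/d per psi (assumed)
P_SEP = 500.0     # separator pressure, psi (assumed)
K_NET = 0.001     # network backpressure coefficient, psi/(stb/d)^2 (assumed)
DEPLETION = 0.01  # reservoir pressure drop per stb produced (assumed)

def coupled_rate(p_res: float) -> float:
    """Bisect for the rate at which inflow balances network backpressure."""
    def imbalance(q: float) -> float:
        p_wf = P_SEP + K_NET * q * q    # flowing pressure imposed by network
        return PI * (p_res - p_wf) - q  # inflow minus assumed rate
    lo, hi = 0.0, max(PI * (p_res - P_SEP), 1.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_res = 3000.0  # initial reservoir pressure, psi (assumed)
for day in range(0, 365, 73):
    q = coupled_rate(p_res)
    print(f"day {day:3d}: p_res={p_res:7.1f} psi, rate={q:8.1f} stb/d")
    p_res -= DEPLETION * q * 73  # deplete the tank over the next interval
```

Bisection is used because the quadratic backpressure curve makes naive fixed-point iteration unstable here; production IAM tools use considerably more sophisticated network solvers.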
As the aim of an IAM is to provide a production forecast which honours both the physical realities of the reservoir and of the infrastructure, it needs to contain the following elements:
Some, but not all, models also contain an economics and risk model component, so that the IAM can be used for economic evaluation.
The term Integrated Asset Modeling was first used by British Petroleum (BP), and the term is still in use today.
Integrated asset modeling links individual simulators across technical disciplines, assets, computing environments, and locations. This collaborative methodology represents a shift in oil and gas field management toward a holistic approach and away from disconnected teams working in isolation. [ 2 ] The open framework of SLB's Integrated Asset Modeling (IAM) software enables the coupling of a wide range of simulation software applications, including reservoir simulation models (Eclipse, Intersect, MBX, IMEX, MBAL), multiphase flow simulation models (Pipesim, Olga, GAP), process and facilities simulation models (Symmetry, HYSYS, Petro-sim, UniSim) and economic domain models (Merak Peep). [ 3 ]
Historically, the terms Integrated Production Modeling and Integrated Asset Modeling have been used interchangeably. The modern sense of Integrated Production Modeling was coined when Petroleum Experts Ltd. joined its MBAL modeling software with its GAP and Prosper modeling software to form an Integrated Production Model.
Having an IAM built of an asset or future project offers several advantages:
By its very nature, an IAM requires a multidisciplinary approach. Most companies are too compartmentalised for this to be easy; as a result, an integrated approach has the following drawbacks:
The biggest barrier to adoption of IAM is frequently the resistance of reservoir engineers to any simplification of the subsurface. This argument is sometimes valid and sometimes not (see below).
As with any other software, because of the inherent limitations of any virtual model, use of an IAM is appropriate only during certain stages of a project's life. There are no hard and fast rules for this, as there is a variety of software packages on the market offering anything from very accurate modelling of a very small scope to very rough modelling of a very large scope, and anything in between. Currently the definition of IAM covers anything from daily optimisation to portfolio management. The success or failure of an IAM implementation project therefore depends on selecting a tool which is as complex as it needs to be, but no more. [ 4 ] The following are some examples of areas where an IAM is an appropriate decision-support tool:
Note that for most of these areas the accuracy of the reservoir proxy is not important; the decision is made based on relative performance differences, not absolute values.
Several different software packages are commercially available and there is a clear difference in philosophy between some of them.
Some vendors who have previously marketed standalone software for the subsurface and the surface now market additional software which provides a data link between the various packages. The obvious benefit of this approach is that there is no loss in accuracy and it does not require a remodelling exercise. However, this approach also has its drawbacks: there is no time gain, and the integration component of the package requires expertise which is not readily available; external specialists are frequently called upon to build and maintain the links between the components.
There are relatively few software packages on the market which are truly integrated; these, however, can offer the benefit of shorter runtimes and lower expertise thresholds.
A number of the established service companies now offer integrated asset modelling as a service. In practice this means that existing models are either converted or linked by specialists to form an integrated solution. This option is expensive but is frequently preferred when the highest accuracy is required.
Czwienzek, F., Barreto Perez, J. J., Salve, J., Martinez Ramirez, I., Vasquez, M. G., & Hernandez, R. A. (2009). Integrated Production Model with Stochastic Simulation to Define Teotleco Exploitation Plan. Society of Petroleum Engineers. doi:10.2118/121801-MS
Pérez, F., Tillero, E., Pérez, E., & Niño, P. (PDVSA); Rojas, J., Araujo, J., Marrocchi, M., Montero, M., & Piña, M. (Schlumberger). (2012). An Innovative Integrated Asset Modeling for an Offshore-Onshore Field Development: Tomoporo Field Case. Paper SPE 157556, International Production and Operations Conference and Exhibition, Doha, Qatar, 14–16 May 2012. | https://en.wikipedia.org/wiki/Integrated_asset_modelling
Integrated catchment management ( ICM ) is a subset of environmental planning which approaches sustainable resource management from a catchment perspective, in contrast to a piecemeal approach that artificially separates land management from water management.
Integrated catchment management recognizes the existence of ecosystems and their role in supporting flora and fauna, providing services to human societies, and regulating the human environment. It seeks to take into account the complex relationships within those ecosystems: between flora and fauna, between geology and soils, between soils and the biosphere, and between the biosphere and the atmosphere. Integrated catchment management recognizes the cyclic nature of processes within an ecosystem, and values scientific and technical information for understanding and analysing the natural world. [ 1 ] | https://en.wikipedia.org/wiki/Integrated_catchment_management
Integrated Computational Materials Engineering (ICME) is an approach to designing products, the materials that comprise them, and their associated materials processing methods by linking materials models at multiple length scales. The key words are "Integrated", involving the integration of models at multiple length scales, and "Engineering", signifying industrial utility. The focus is on the materials, i.e. understanding how processes produce material structures, how those structures give rise to material properties, and how to select materials for a given application. The key links are process-structure-property-performance. [ 1 ] The National Academies report [ 2 ] describes the need for using multiscale materials modeling [ 3 ] to capture the process-structure-property-performance relationships of a material.
A fundamental requirement to meet the ambitious ICME objective of designing materials for specific products or components is an integrative and interdisciplinary computational description of the history of the component, starting from the sound initial condition of a homogeneous, isotropic and stress-free melt or gas phase, continuing via the subsequent processing steps, and eventually ending in the description of failure onset under operational load. [ 2 ] [ 4 ]
Because ICME links materials models at multiple length scales, it naturally requires the combination of a variety of models and software tools. It is thus a common objective to build up a scientific network of stakeholders concentrating on boosting ICME into industrial application by defining a common communication standard for ICME-relevant tools. [ 5 ] [ 6 ]
Efforts to generate a common language by standardizing and generalizing data formats for the exchange of simulation results represent a major, mandatory step towards successful future applications of ICME. A future structural framework for ICME, comprising a variety of academic and/or commercial simulation tools operating on different scales and modularly interconnected by a common language in the form of standardized data exchange, would allow the integration of different disciplines along the production chain which have so far only scarcely interacted. This would substantially improve the understanding of individual processes by integrating the component history originating from preceding steps as the initial condition for the actual process. Eventually this would lead to optimized process and production scenarios and would allow effective tailoring of specific materials and component properties. [ 7 ]
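As a purely hypothetical illustration of what such a standardized exchange record could look like, the Python sketch below wraps a simulated field with the metadata a downstream tool would need. The schema (field names, units convention) is an assumption made for illustration, not the actual ICMEg or EMMC standard.

```python
# Hypothetical ICME exchange record: simulation output plus the metadata
# a downstream tool needs. The schema itself is an illustrative assumption.
import json

def make_exchange_record(tool, scale, quantity, unit, grid_spacing_m, values):
    """Bundle field data with provenance, scale, and unit metadata."""
    return {
        "producer": tool,                 # which simulation wrote the data
        "scale": scale,                   # e.g. "mesoscopic", "continuum"
        "quantity": quantity,             # physical meaning of the field
        "unit": unit,                     # SI unit string
        "grid_spacing_m": grid_spacing_m, # uniform-grid assumption
        "values": values,                 # flattened field data
    }

# A phase-field tool hands a grain-size field to a continuum FE solver.
record = make_exchange_record(
    tool="phase-field-sim", scale="mesoscopic",
    quantity="grain_size", unit="m",
    grid_spacing_m=1.0e-6, values=[2.1e-6, 2.4e-6, 1.9e-6],
)
print(json.dumps(record, indent=2))
```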
The ICMEg [ 8 ] project was set up to pursue exactly this goal. It aims to allow stakeholders from the electronic, atomistic, mesoscopic and continuum communities to benefit from sharing knowledge and best practice, and thus to promote a deeper understanding between the different communities of materials scientists, IT engineers and industrial users.
ICMEg will create an international network of simulation providers and users. [ 9 ] It will promote a deeper understanding between the different communities (academia and industry), each of which currently uses very different tools, methods and data formats. The harmonization and standardization of information exchange along the life cycle of a component and across the different scales (electronic, atomistic, mesoscopic, continuum) are the key activities of ICMEg.
The mission of ICMEg is
The activities of ICMEg include
The ICMEg project ended in October 2016. Its major outcomes are
Most of the activities launched in the ICMEg project are being continued by the European Materials Modelling Council and in the MarketPlace project.
Multiscale modeling aims to evaluate material properties or behavior on one level using information or models from different levels and properties of elementary processes.
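A classic, minimal instance of such scale bridging is estimating an effective continuum stiffness from finer-scale constituent data using the Voigt and Reuss bounds (the rule of mixtures); the constituent moduli and volume fractions in the sketch below are illustrative.

```python
# Voigt and Reuss bounds: bracketing an effective composite modulus
# from constituent data. Input values are illustrative assumptions.
def voigt(moduli, fractions):
    """Upper bound: volume-weighted arithmetic mean (uniform strain)."""
    return sum(f * m for m, f in zip(moduli, fractions))

def reuss(moduli, fractions):
    """Lower bound: volume-weighted harmonic mean (uniform stress)."""
    return 1.0 / sum(f / m for m, f in zip(moduli, fractions))

# Two-phase microstructure: 60% matrix (70 GPa), 40% reinforcement (400 GPa).
E = [70.0, 400.0]
vf = [0.6, 0.4]
print(f"Voigt upper bound: {voigt(E, vf):.1f} GPa")  # 202.0 GPa
print(f"Reuss lower bound: {reuss(E, vf):.1f} GPa")  # about 104.5 GPa
```

The true effective modulus lies between the two bounds; tighter estimates (e.g. Hashin-Shtrikman) require more information about the microstructure.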
Usually, the following levels, addressing a phenomenon over a specific window of length and time, are recognized:
There are some software codes that operate on different length scales such as:
A comprehensive compilation of software tools with relevance for ICME is documented in the Handbook of Software Solutions for ICME. [ 10 ]
Katsuyo Thornton announced at the 2010 MS&T ICME Technical Committee meeting that the NSF would be funding a "Summer School" on ICME at the University of Michigan starting in 2011. Northwestern began offering a Master of Science certificate in ICME in the fall of 2011. The first Integrated Computational Materials Engineering (ICME) course based upon Horstemeyer 2012 [ 17 ] was delivered at Mississippi State University (MSU) in 2012 as a graduate course that included distance-learning students [cf. Sukhija et al., 2013]. It was taught again at MSU in 2013 and 2014, also with distance-learning students. In 2015, the ICME course was taught by Dr. Mark Horstemeyer (MSU) and Dr. William (Bill) Shelton (Louisiana State University, LSU), with students from each institution participating via distance learning. The goal of the methodology embraced in this course was to provide students with the basic skills to take advantage of the computational tools and experimental data provided by EVOCD in conducting simulations and bridging procedures for quantifying the structure-property relationships of materials at multiple length scales. On successful completion of the assigned projects, students published their multiscale modeling learning outcomes on the ICME Wiki, facilitating easy assessment of student achievements and embracing qualities set by the ABET engineering accreditation board.
| https://en.wikipedia.org/wiki/Integrated_computational_materials_engineering
Integrated design is a comprehensive holistic approach to design which brings together specialisms usually considered separately. It attempts to take into consideration all the factors and modulations necessary to a decision-making process. [ 1 ] A few examples are the following:
The requirement for integrated design comes when the different specialisms are dependent on each other or "coupled". An alternative or complementary approach to integrated design is to consciously reduce the dependencies. In computing and systems design, this approach is known as loose coupling .
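A small sketch of what loose coupling looks like in code, with entirely hypothetical names: the thermal calculation below depends only on an abstract interface, so component models can be added or swapped without changing it.

```python
# Loose coupling via an abstract interface; all names are illustrative.
from abc import ABC, abstractmethod

class HeatSource(ABC):
    """The only contract the thermal calculation depends on."""
    @abstractmethod
    def power_watts(self) -> float: ...

class Motor(HeatSource):
    def power_watts(self) -> float:
        return 150.0

class LedArray(HeatSource):
    def power_watts(self) -> float:
        return 12.0

def total_heat_load(sources: list[HeatSource]) -> float:
    """Never names a concrete component class, so any HeatSource works."""
    return sum(s.power_watts() for s in sources)

print(total_heat_load([Motor(), LedArray()]))  # 162.0
```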
Three phenomena are associated with a lack of integrated design: [ 7 ]
A committee is sometimes a deliberate attempt to address disparate design, but the phrase "design by committee" is associated with this very failing and with the disparate design it produces. "Design by committee" can also lead to a kind of silent design, as design decisions are not properly considered for fear of upsetting a hard-won compromise.
The integrated design approach incorporates collaborative methods and tools to encourage and enable the specialists in the different areas to work together to produce an integrated design. [ 8 ]
A charrette provides opportunity for all specialists to collaborate and align early in the design process. [ 9 ]
Human-Centered Design provides an integrated approach to problem solving, commonly used in design and management frameworks, that develops solutions by involving the human perspective in all steps of the problem-solving process. | https://en.wikipedia.org/wiki/Integrated_design