| text | source |
|---|---|
Titanium tetraiodide is an inorganic compound with the formula TiI4. It is a black volatile solid, first reported by Rudolph Weber in 1863. [ 2 ] It is an intermediate in the van Arkel–de Boer process for the purification of titanium.
TiI4 is a rare molecular binary metal iodide, consisting of isolated molecules of tetrahedral Ti(IV) centers. The Ti-I distances are 261 pm. [ 3 ] Reflecting its molecular character, TiI4 can be distilled without decomposition at one atmosphere; this property is the basis of its use in the van Arkel–de Boer process . The difference in melting point between TiCl4 (m.p. -24 °C) and TiI4 (m.p. 150 °C) is comparable to the difference between the melting points of CCl4 (m.p. -23 °C) and CI4 (m.p. 168 °C), reflecting the stronger intermolecular van der Waals bonding in the iodides.
Two polymorphs of TiI4 exist, one of which is highly soluble in organic solvents. In the less soluble cubic form, the Ti-I distances are 261 pm. [ 3 ]
Three methods are well known:
1) From the elements, typically using a tube furnace at 425 °C: [ 4 ] Ti + 2 I2 → TiI4
This reaction can be reversed to produce highly pure films of Ti metal. [ 5 ]
2) Exchange reaction from titanium tetrachloride and HI.
3) Oxide-iodide exchange from aluminium iodide .
Like TiCl4 and TiBr4, TiI4 forms adducts with Lewis bases, and it can also be reduced. When the reduction is conducted in the presence of Ti metal, one obtains polymeric Ti(III) and Ti(II) derivatives such as CsTi2I7 and the chain CsTiI3, respectively. [ 6 ]
TiI4 exhibits extensive reactivity toward alkenes and alkynes, resulting in organoiodine derivatives. It also effects pinacol couplings and other C-C bond-forming reactions. [ 7 ] | https://en.wikipedia.org/wiki/I4Ti |
An IA5String is a restricted character string type in ASN.1 notation .
It is used to represent ISO 646 ( IA5 ) characters.
According to ITU-T Rec. X.680 (ASN.1 Specification of basic notation), the entire character set contains precisely 128 characters.
Those characters are generally equivalent to the first 128 characters of the ASCII alphabet .
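Because the permitted repertoire is exactly the 128 IA5/ASCII code points, checking a value reduces to a code-point test. The following minimal Python sketch illustrates this; the function name and example strings are illustrative assumptions, not part of any particular ASN.1 library:

```python
def is_valid_ia5string(value: str) -> bool:
    """Return True if every character falls within the 128-character
    IA5 (ISO 646 / ASCII) repertoire permitted for an ASN.1 IA5String."""
    return all(ord(ch) < 128 for ch in value)

# "hello@example.com" uses only IA5 characters; "héllo" does not (é is outside IA5).
assert is_valid_ia5string("hello@example.com")
assert not is_valid_ia5string("héllo")
```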
| https://en.wikipedia.org/wiki/IA5STRING |
The International Association for the Properties of Water and Steam (IAPWS) is an international non-profit association of national organizations concerned with the properties of water and steam , [ 1 ] particularly thermophysical properties and other aspects of high-temperature steam, water and aqueous mixtures that are relevant to thermal power cycles and other industrial applications. [ 2 ]
The organization publishes a range of 'releases', which relate to the thermal and expansion properties of steam.
Both free software and commercial software implementations of the IAPWS correlations are available.
| https://en.wikipedia.org/wiki/IAPWS |
The IAQVEC (Indoor Air Quality, Ventilation and Energy Conservation in Buildings) is an international scientific organisation whose mission is to provide technical support, guidance and technical publications to industry and research organizations for the optimization of indoor air quality, ventilation technology and energy conservation through annual conferences and workshops. [ 2 ] The conferences cover a wide range of key research areas with the goal of simultaneously improving indoor environmental quality (IEQ) and energy efficiency, enhancing wellbeing and sustainability. The association was established in 2016. [ 3 ] [ 4 ]
Indoor Air Quality, Ventilation and Energy Conservation in Buildings (IAQVEC) was founded by Fariborz Haghighat and Francis Allard in 1992. [ 5 ] [ 1 ] [ 6 ] [ 7 ] The first IAQVEC conference was held October 7–9, 1992 at the 5th International Jacques Cartier Conference in Montreal , and an annual meeting has been held since 1992. [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ]
Past and future IAQVEC conferences include:
The objectives of the association are: [ 23 ] | https://en.wikipedia.org/wiki/IAQVEC |
IARC group 1 carcinogens are substances, chemical mixtures , and exposure circumstances which have been classified as carcinogenic to humans by the International Agency for Research on Cancer (IARC). [ 1 ] This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent (or mixture) may be placed in this category when evidence of carcinogenicity in humans is less than sufficient, but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent (mixture) acts through a relevant mechanism of carcinogenicity.
This list focuses on the hazard linked to the agents: it indicates that they are capable of causing cancer, but does not take their risk into account, which is the probability of causing a cancer at a given level of exposure to the carcinogen. [ 2 ] The list is up to date as of January 2024. [ 3 ] | https://en.wikipedia.org/wiki/IARC_group_1 |
IARC group 2A agents are substances and exposure circumstances that have been classified as probable carcinogens by the International Agency for Research on Cancer (IARC). [ 1 ] This designation is applied when there is limited evidence of carcinogenicity in humans, as well as sufficient evidence of carcinogenicity in experimental animals . In some cases, an agent may be classified in this group when there is inadequate evidence of carcinogenicity in humans along with sufficient evidence of carcinogenicity in experimental animals and strong evidence that the carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent may be classified in this group solely on the basis of limited evidence of carcinogenicity in humans.
This list focuses on the hazard linked to the agents: they are capable of causing cancer, but this does not take their risk into account, which is the probability of causing a cancer given the level of exposure to the carcinogenic agent. [ 2 ] The list is up to date as of January 2024. [ 3 ] | https://en.wikipedia.org/wiki/IARC_group_2A |
IARC group 2B substances, mixtures and exposure circumstances are those that have been classified as "possibly carcinogenic to humans" by the International Agency for Research on Cancer (IARC). [ 1 ] This category is used when there is limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals . It may also be used when there is insufficient evidence of carcinogenicity in humans but sufficient evidence in experimental animals. In some cases, an agent, mixture, or exposure circumstance with inadequate evidence of carcinogenicity in humans but limited evidence in experimental animals, combined with supporting evidence from other relevant data, may be included in this group.
This list focuses on the hazard linked to the agents: they are capable of causing cancer, but this does not take their risk into account, which is the probability of causing a cancer given the level of exposure to the carcinogenic agent. [ 2 ] The list is up to date as of January 2024. [ 3 ] | https://en.wikipedia.org/wiki/IARC_group_2B |
The IAS 11 standard of International Accounting Standards set out requirements for the accounting treatment of the revenue and costs associated with long-term construction contracts . By their nature, construction activities and contracts are long-term projects, often beginning and ending in different accounting periods . Until its replacement with IFRS 15 in January 2018, IAS 11 helped accountants measure the extent to which costs, revenue and any profit or loss on a project were incurred in each accounting period. [ 1 ]
This is a timeline of IAS 11: [ 2 ]
How accounting revenue and costs are to be recognized depends first on whether the stage of completion of a project can be reliably measured. If this is the case, cost and revenue (including profit, if any) can be recognized up to the percentage of completion during the current accounting period. If the stage of completion of a project cannot be reliably measured, revenue can only be recognized up to the costs that have been incurred, and any profit is only recognized at the end of the last accounting period. If a company expects to make a loss on the contract, that loss is recognized immediately in the current accounting period. [ 4 ] | https://en.wikipedia.org/wiki/IAS_11 |
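The percentage-of-completion mechanics described in the IAS 11 entry above can be illustrated with a small worked sketch. The figures, the cost-to-cost basis for measuring the stage of completion, and the function name are illustrative assumptions, not prescriptions from the standard:

```python
def ias11_recognition(contract_revenue, total_contract_cost, cost_to_date,
                      stage_measurable):
    """Sketch of IAS 11-style recognition at one reporting date.

    If the stage of completion is reliably measurable, revenue is recognized
    by percentage of completion (cost-to-cost basis here); otherwise revenue
    is recognized only up to the costs incurred. An expected overall loss on
    the contract is recognized immediately in full.
    """
    expected_loss = max(total_contract_cost - contract_revenue, 0)
    if stage_measurable:
        stage = cost_to_date / total_contract_cost      # stage of completion
        revenue = contract_revenue * stage
    else:
        revenue = cost_to_date                          # only recoverable costs
    cost = cost_to_date
    profit = revenue - cost - expected_loss
    return revenue, cost, profit

# Hypothetical contract: revenue 1,000, estimated total cost 800, 400 spent so far.
# Stage of completion is 50%, so 500 revenue and 400 cost -> 100 profit recognized.
print(ias11_recognition(1000, 800, 400, stage_measurable=True))
```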
The International Astronomical Union, at its 16th General Assembly in Grenoble in 1976, accepted Resolution No. 1 [ 1 ] regarding a "New System of Astronomical Constants" [ 2 ] recommended for the reduction of astronomical observations and for the computation of ephemerides . It superseded the IAU's previous recommendations of 1964 (see IAU (1964) System of Astronomical Constants ), came into effect in the Astronomical Almanac from 1984 onward, and remained in use until the introduction of the IAU (2009) System of Astronomical Constants . In 1994 [ 3 ] the IAU recognized that the parameters had become outdated, but retained the 1976 set for the sake of continuity and also recommended maintaining a set of "current best estimates". [ 4 ] This "sub group for numerical standards" published a list which included new constants (like those for relativistic time scales). [ 5 ]
The system of constants was prepared [ 6 ] by Commission 4 on ephemerides led by P. Kenneth Seidelmann after whom asteroid 3217 Seidelmann is named.
At the time, a new standard epoch ( J2000.0 ) was accepted; it was followed later [ 7 ] [ 8 ] by a new reference system with fundamental catalogue ( FK5 ) and expressions for precession of the equinoxes , in 1979 by new expressions for the relation between Universal Time and sidereal time , [ 9 ] [ 10 ] [ 11 ] and in 1979 and 1980 by a theory of nutation . [ 12 ] [ 13 ] There were no reliable rotation elements for most planets, [ 2 ] [ 6 ] but a joint working group on Cartographic Coordinates and Rotational Elements was established to compile recommended values. [ 14 ] [ 15 ]
The IAU(1976) system is based on the astronomical system of units :
| https://en.wikipedia.org/wiki/IAU_(1976)_System_of_Astronomical_Constants |
The International Astronomical Union Circulars ( IAUCs ) are notices that give information about astronomical phenomena. IAUCs are issued by the International Astronomical Union 's Central Bureau for Astronomical Telegrams (CBAT) at irregular intervals for the discovery and follow-up information regarding such objects as planetary satellites, novae , supernovae , and comets .
The first series of IAUCs was published at Uccle during 1920–1922, when the IAU's first CBAT was located there; the first IAUC of the present series was published in 1922 at Copenhagen Observatory after the transfer of the CBAT from Uccle to Copenhagen . [ 1 ]
At the end of 1964, the CBAT moved from Copenhagen to the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts , where it remains, on the grounds of the Harvard College Observatory (HCO). [ 2 ] HCO had maintained a Central Bureau for the Western hemisphere from 1883 until the end of 1964, when its staff took on the IAU's CBAT; HCO had published its own Announcement Cards that paralleled the IAUCs from 1926 until the end of 1964, but the Announcement Cards ceased publication when the IAUCs began to be issued from the same building. [ 1 ]
The IAUCs are delivered via the United States Postal Service , e-mail, and through the Central Bureau for Astronomical Telegrams / Minor Planet Center Computer Service. Most of the announcement circulars published at Cambridge, Copenhagen, and Uccle from 1895 to the present day are available for viewing via the CBAT website. | https://en.wikipedia.org/wiki/IAU_Circular |
The International Astronomical Union (IAU) established a Working Group on Star Names ( WGSN ) in May 2016 to catalog and standardize proper names for stars for the international astronomical community. [ 1 ] It operates under Division C – Education, Outreach and Heritage. [ 2 ]
The IAU states [ 3 ] that it is keen to make a distinction between the terms name and designation . To the IAU, name refers to the (usually colloquial) term used for a star in everyday conversation, while designation is solely alphanumerical, and used almost exclusively in official catalogues and for professional astronomy . (The WGSN notes that transliterated Bayer designations (e.g., Tau Ceti ) are considered a special historical case and are treated as designations. [ 4 ] )
The terms of reference for the WGSN [ 5 ] for the period 2016–2018 were approved by the IAU Executive Committee at its meeting on 6 May 2016. [ 6 ] In summary, these are to:
While initially the WGSN would focus on incorporating 'past' names from history and culture, in the future it would be responsible for defining the rules and enabling the process by which new names can be proposed by members of the international astronomical community.
The WGSN adopted preliminary guidelines for unique star names. [ 5 ] In summary, these are:
The WGSN explicitly recognized the names of exoplanets and their host stars approved by the Executive Committee Working Group Public Naming of Planets and Planetary Satellites, including the names of stars adopted during the 2015 NameExoWorlds campaign. [ 7 ]
The WGSN decided to attribute proper names to individual stars rather than entire multiple systems . For example, the name Fomalhaut specifically refers to the bright A component of a three-star system. The informal names often attributed to other components in a physical multiple (e.g., Fomalhaut B) are treated as unofficial (albeit described as "useful nicknames"), and are not included in the List of IAU-approved Star Names . In the List, the components are clearly identified by their identifiers in the Washington Double Star Catalog . [ 4 ] Where a component letter is not explicitly listed, the WGSN says that the name should be understood to be attributed to the visually brightest component. [ 8 ]
General guidelines for Chinese star names were adopted during 2017. [ 4 ] In summary, these are:
The WGSN decided to focus during the rest of 2016 on standardizing common names and spellings for the brightest few hundred stars with published names, and on compiling cultural names, with names for faint stars to be discussed in the future (it regarded 'bright stars' as those with designations in the Bright Star Catalogue and any physical companions; 'faint stars' as any other Galactic stars, substellar objects , and stellar remnants ). [ 5 ]
The main aim for the next few years is to delve into worldwide astronomical history and culture, looking to determine the best-known stellar appellations to use as the officially recognised names. Beyond this point, once the names of many of the bright stars in the sky have been officially approved and catalogued, the WGSN will turn its focus towards establishing a format and template for the rules, criteria and process by which proposals for stellar names can be accepted from professional astronomers, as well as from the general public. [ 3 ]
The WGSN's first bulletin dated July 2016 [ 5 ] included a table of 125 stars comprising the first two batches of names approved by the WGSN (on 30 June and 20 July 2016) together with names of stars (including five traditional star names: Ain , Edasich , Errai , Fomalhaut , and Pollux ) reviewed and adopted by the IAU Executive Committee Working Group on Public Naming of Planets and Planetary Satellites during the 2015 NameExoWorlds campaign [ 9 ] and recognized by the WGSN.
Further batches of names were approved on 21 August, 12 September, 5 October and 6 November 2016. These were listed in a table of 102 stars included in the WGSN's second bulletin in November 2016. [ 8 ] The next additions were done on 1 February 2017 (13 new star names), 30 June 2017 (29), 5 September 2017 (41), 19 November 2017 (3) and 6 June 2018 (17). All 330 names are included in the current List of IAU-approved Star Names , last updated on 1 June 2018. [ 3 ]
The first list includes two stars given names of individuals during the NameExoWorlds process: "Cervantes" for the star μ Arae (honoring the writer Miguel de Cervantes Saavedra ) and "Copernicus" for the star 55 Cancri A (honoring the astronomer Nicolaus Copernicus ). [ 10 ] The WGSN approved the historical name Cor Caroli ( Latin for 'heart of Charles') for the star α Canum Venaticorum , so named in honour of King Charles I of England by Sir Charles Scarborough , his physician. [ 11 ] [ 12 ] [ 13 ] The 1 February 2017 update included the approval of the historical name for Barnard's Star , named after the American astronomer E.E. Barnard . | https://en.wikipedia.org/wiki/IAU_Working_Group_on_Star_Names |
In contemporary astronomy , 88 constellations are recognized by the International Astronomical Union (IAU). [ 1 ] Each constellation is a region of the sky bordered by arcs of right ascension and declination , together covering the entire celestial sphere . Their boundaries were officially adopted by the International Astronomical Union in 1928 and published in 1930. [ 2 ]
The ancient Mesopotamians and later the Greeks established most of the northern constellations in international use today, listed by the Roman-Egyptian astronomer Ptolemy . The constellations along the ecliptic are called the zodiac . When explorers mapped the stars of the southern skies, European astronomers proposed new constellations for that region, as well as ones to fill gaps between the traditional constellations. Because of their Roman and European origins, every constellation has a Latin name. In 1922, the International Astronomical Union adopted three-letter abbreviations for 89 constellations, the modern list of 88 plus Argo . After this, Eugène Joseph Delporte drew up boundaries for each of the 88 constellations so that every point in the sky belonged to one constellation. [ 1 ] [ 2 ] When astronomers say that an object lies in a particular constellation, they mean that it is positioned within these specified boundaries.
Some constellations are no longer recognized by the IAU, but may appear in older star charts and other references. Most notable is Argo Navis , which was one of Ptolemy's original 48 constellations. In the 1750s the French astronomer Nicolas Louis de Lacaille divided this into three separate constellations: Carina , Puppis , and Vela . [ 3 ]
The 88 constellations depict 42 animals, 29 inanimate objects, and 17 humans or mythological characters.
Each IAU constellation has an official three-letter abbreviation based on the genitive form of the constellation name. As the genitive is similar to the base name, the majority of the abbreviations are just the first three letters of the constellation name: Ori for Orion/Orionis , Ara for Ara/Arae , and Com for Coma Berenices/Comae Berenices . In some cases, the abbreviation contains letters from the genitive not appearing in the base name (as in Hyi for Hydrus/Hydri , to avoid confusion with Hydra , abbreviated Hya ; and Sge for Sagitta/Sagittae , to avoid confusion with Sagittarius , abbreviated Sgr ). Some abbreviations use letters beyond the initial three to unambiguously identify the constellation (for example when the name and its genitive differ in the first three letters): Aps for Apus/Apodis , CrA for Corona Australis , CrB for Corona Borealis , Crv for Corvus . ( Crater is abbreviated Crt to prevent confusion with CrA .) When letters are taken from the second word of a two-word name, the first letter from the second word is capitalised: CMa for Canis Major , CMi for Canis Minor . Two cases are ambiguous: Leo for the constellation Leo could be mistaken for Leo Minor (abbreviated LMi ), and Tri for Triangulum could be mistaken for Triangulum Australe (abbreviated TrA ). [ 4 ]
In addition to the three-letter abbreviations used today, the IAU also introduced four-letter abbreviations in 1932. The four-letter abbreviations were repealed in 1955 and are now obsolete, but were included in the NASA Dictionary of Technical Terms for Aerospace Use (NASA SP-7) published in 1965. [ 5 ] These are labeled "NASA" in the table below and are included here for reference only.
For help with the literary English pronunciations, see the pronunciation key . There is considerable diversity in how Latinate names are pronounced in English. For traditions closer to the original, see Latin spelling and pronunciation .
Various other unofficial patterns exist alongside the constellations. These are known as "asterisms". Some are part of one larger constellation, while others consist of stars in two adjoining constellations. Examples include the Big Dipper /Plough in Ursa Major ; the Teapot in Sagittarius ; the Square of Pegasus in Pegasus and Andromeda ; and the False Cross in Carina and Vela . | https://en.wikipedia.org/wiki/IAU_designated_constellations |
The International Astronomical Union (IAU) designates 88 constellations of stars. In the table below, they are ranked by the solid angle that they subtend in the sky, measured in square degrees and millisteradians .
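The two units in the table are related by a fixed conversion: one steradian equals (180/π)² ≈ 3282.8 square degrees, so the conversion can be sketched as below (the Hydra figure used in the example is approximate):

```python
import math

DEG2_PER_STERADIAN = (180.0 / math.pi) ** 2   # ≈ 3282.81 square degrees per steradian

def square_degrees_to_millisteradians(area_deg2: float) -> float:
    """Convert a constellation's area from square degrees to millisteradians."""
    return area_deg2 / DEG2_PER_STERADIAN * 1000.0

print(square_degrees_to_millisteradians(1303))    # Hydra, the largest: ≈ 396.9 msr
print(square_degrees_to_millisteradians(41253))   # whole sky ≈ 4*pi*1000 ≈ 12566 msr
```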
These solid angles depend on arbitrary boundaries between the constellations : the list below is based on constellation boundaries drawn up by Eugène Delporte in 1930 on behalf of the IAU and published in Délimitation scientifique des constellations (Cambridge University Press). Before Delporte's work, there was no standard list of the boundaries of each constellation.
Delporte drew the boundaries along vertical and horizontal lines of right ascension and declination ; however, he did so for the epoch B1875.0 , which means that due to precession of the equinoxes, the borders on a modern star map (e.g., for epoch J2000 ) are already somewhat skewed and no longer perfectly vertical or horizontal. This skew will increase over the centuries to come. However, this does not change the solid angle of any constellation. | https://en.wikipedia.org/wiki/IAU_designated_constellations_by_area |
The International Astronomical Union (IAU) designates 88 constellations . [ 1 ] In the table below, they are listed by geographical visibility according to latitude as seen from Earth, as well as the best months for viewing the constellations at 21:00 (9 p.m.). | https://en.wikipedia.org/wiki/IAU_designated_constellations_by_geographical_visibility |
IAV GmbH Ingenieurgesellschaft Auto und Verkehr (literally "Engineering Company for Automobiles and Traffic"), abbreviated to IAV GmbH , is an engineering company in the automotive industry , designing products for powertrain , electronics and vehicle development . Founded in Berlin in 1983 by Prof. Dr. Hermann Appel as a university-affiliated research institute, the company employs over 8,000 members of staff, [ 1 ] and supplies automobile manufacturers and component suppliers . In addition to development centres in Berlin, Chemnitz and Gifhorn , IAV operates at sites in France, the United Kingdom, Spain, Sweden, China, Japan, South Korea , Brazil and the United States.
Clients include the Volkswagen Group , BMW , Stellantis , Ford , GM , Porsche , Toyota , Claas and Liebherr . [ 2 ] Component manufacturer clients include Bosch , Aptiv , Continental , ZF Group , ETAS , Forvia , Freudenberg , Glatt , MWM , Schaeffler and Sonplas. [ 2 ]
As of 2023 the shareholders of IAV GmbH were:
IAV has the following worldwide subsidiary companies: [ 3 ]
In 2009 the company filed a patent application for an electric vehicle recharger that is built into the road. The technology would allow electric vehicles to be charged as they drive over roads embedded with a recessed wireless recharging strip, using electromagnetic induction . [ 4 ] | https://en.wikipedia.org/wiki/IAV_GmbH |
The IBEX ribbon is a narrow, arc-shaped structure of enhanced energetic neutral atom (ENA) emissions discovered by NASA’s Interstellar Boundary Explorer (IBEX) mission in 2009. The ribbon is a significant feature at the boundary of the heliosphere , the region of space dominated by the Sun’s influence.
The IBEX spacecraft, launched in 2008, was designed to map the flux of ENAs originating from the heliosphere’s boundary. In its first all-sky map, IBEX detected an unexpected, bright, and narrow structure of ENA emissions, now known as the IBEX ribbon. This feature was not predicted by prior models and indicated a complex interaction between the solar wind and the local interstellar magnetic field. [ 1 ]
Multiple hypotheses have been proposed to explain the IBEX ribbon. [ 2 ]
The first hypothesis, proposed by McComas et al. (2009) and Schwadron et al. (2009), suggests that the ribbon forms through a chain of charge-exchange processes. In this model, a ring distribution of pick-up ions in the local interstellar magnetic field leads to the production of ribbon ENAs, assuming inefficient isotropization. [ 2 ]
A second theory, by Schwadron & McComas (2013), proposes that the ribbon results from temporary containment of newly ionized atoms in a "retention" region in the local interstellar medium. These ions primarily originate from neutralized solar wind and pick-up ions from beyond the solar wind termination shock. [ 2 ] [ 3 ]
A third theory, by Fahr et al. (2011) and Siewert et al. (2013), places the ion sources inside the heliosphere. Specifically, they propose that the ribbon ENAs come from pick-up ions related to adiabatically cooled anomalous cosmic rays upstream of the solar wind termination shock, as well as shock-accelerated pick-up ions beyond it. [ 2 ]
A fourth theory by Grzedzielski et al. (2010) suggests that the ENAs are produced through charge exchange between neutral hydrogen atoms at the edge of the local interstellar cloud and hot protons in the Local Bubble . [ 2 ]
A fifth theory by Fichtner et al. (2014) proposes that the ribbon is a consequence of inhomogeneities in the local interstellar medium itself, particularly related to the propagation of neutral density enhancements ("H-waves") along the interstellar magnetic field. This model suggests that when these density enhancements intersect with the heliopause, they create regions of increased ENA production that appear as the ribbon. [ 2 ]
As of 2014, none of the theories has gained universal acceptance, and each faces certain challenges in fully accounting for all the observed characteristics. [ 2 ]
Xu and Liu (2023) propose that magnetohydrodynamic (MHD) turbulence in the very local interstellar medium plays a key role in forming the IBEX ribbon through a mechanism called mirror diffusion. In this model, compressible modes of MHD turbulence create magnetic mirrors that interact with pickup ions near regions where the magnetic field is perpendicular to the line of sight. These turbulent magnetic mirrors, rather than the mean magnetic field, dominate the mirroring effect due to their strong magnetic field gradients at small scales. The mirroring effect is most effective for pickup ions with pitch angles below a critical value. [ 4 ]
The width of the IBEX ribbon (approximately 20° at 1 keV) is determined by two factors: the range of pitch angles where turbulent mirroring is effective and the wandering of magnetic field lines caused by Alfvénic modes. The authors found that for the ribbon structure to remain coherent across the sky, the injection scale of turbulence in the very local interstellar medium must be less than ~500 astronomical units. The model explains how pickup ions maintain their initial pitch angles through mirror diffusion, allowing them to return to the heliosphere as energetic neutral atoms after neutralization, creating the observed ribbon feature. [ 4 ]
IBEX observations over more than a decade indicate that the intensity and structure of the ribbon evolve over the solar cycle . [ 6 ] The ribbon and globally distributed flux (GDF) both respond to solar cycle changes, but with different time delays. The GDF shows a smaller temporal latency than the ribbon by a few years. [ 5 ] [ 7 ]
A study comparing IBEX observations from 2009 and 2019, two epochs that fell in similar phases of solar cycles 23 and 24, revealed key temporal changes in the ribbon structure. For energies below 1.7 keV, the ribbon's intensity recovered near the nose direction and up to 25° southward, but not at mid and high ecliptic latitudes. The ribbon's width showed significant variability depending on the viewing angle around the map center, with different patterns between 2009 and 2019. Despite these changes, circularity analysis indicated that the ribbon's radius remained statistically consistent between the two periods. The partial recovery aligned with models suggesting the heliosphere's closest point lies southward of the nose region. The variable width patterns potentially indicated small-scale processes occurring within the ribbon's source region. [ 5 ]
At low latitudes where solar wind speeds average below 500 km/s, the majority of observed energetic neutral atoms have energies below 2 keV, explaining why significant intensity changes occurred primarily at these lower energies. The recovery was first observed in the southern regions where the heliosphere's boundaries are closest to the Sun, consistent with models predicting different response times based on the distance traveled by the particles. [ 5 ]
The 2013 study found the ribbon to be "extraordinarily circular" with its center at ecliptic coordinates (219.2° ± 1.3°, 39.9° ± 2.3°). This center lies 50° from the heliospheric nose direction and is thought to align with the local interstellar magnetic field direction. The ribbon demonstrated exceptional spatial coherence across all observed energies (0.7-4.3 keV), with a spatial coherence parameter of δC ≤ 0.014, suggesting it forms in a region where the interstellar magnetic field structure is highly uniform over large distances. [ 8 ]
The analysis revealed subtle structural details: a slight systematic elongation of the ribbon (eccentricity ~0.3) generally perpendicular to the vector between the ribbon center and heliospheric nose, and an asymmetric intensity profile skewed toward the ribbon's interior. At higher energies (4.3 keV), the ribbon appeared slightly larger and displaced relative to lower energies. These characteristics provide key constraints for models of the ribbon's formation and its relationship to the broader heliosphere-interstellar medium interaction, though the exact physical mechanisms involved are still uncertain. [ 8 ]
With IBEX continuing its mission, future studies aim to compare its results with data from NASA’s Interstellar Mapping and Acceleration Probe (IMAP), which is expected to provide higher-resolution ENA measurements. [ 9 ] It is expected to be launched in 2025. | https://en.wikipedia.org/wiki/IBEX_ribbon |
IBM in atoms was a demonstration by IBM scientists in 1989 of a technology capable of manipulating individual atoms . [ 1 ] A scanning tunneling microscope was used to arrange 35 individual xenon atoms on a substrate of chilled nickel crystal to spell out the three-letter company initialism . It was the first time that atoms had been precisely positioned on a flat surface. [ 2 ] [ 3 ]
Donald Eigler and Erhard Schweizer of the IBM Almaden Research Center in San Jose, California , discovered the ability to use a scanning tunneling microscope (STM) to move atoms about a surface. [ 4 ] In the demonstration, in which the microscope was operated at low temperature, [ 5 ] they positioned 35 individual xenon atoms on a substrate of chilled nickel crystal to form the initialism "IBM". [ 1 ] The pattern they created was 5 nm tall and 17 nm wide. They also assembled chains of xenon atoms similar in form to molecules. [ 1 ] The demonstrated capability showed the potential of fabricating rudimentary structures and allowed insights as to the extent of device miniaturization. [ 5 ]
| https://en.wikipedia.org/wiki/IBM_(atoms) |
The IBM 1015 [ 1 ] [ 2 ] is a display terminal for the IBM System/360 . [ 3 ] [ 4 ] [ 5 ] IBM suggested that it be used for phone-based customer support. [ 6 ] : p. 6
It was exhibited during the 1964 introduction of the IBM System/360 and included in the official System Summary . [ 7 ] Other display devices introduced and co-marketed by IBM were the IBM 2250 [ 6 ] : p. 6 and the IBM 2260 . [ 3 ]
The screen was round, [ 7 ] and it sat forward and above a keyboard. The display area could hold 30 lines, each with up to 40 characters, selected from A–Z, 0–9, and 26 special characters. Output was 650 characters per second. [ 1 ] It came with a desk. [ 3 ] Up to ten 1015s could be connected to the IBM 1016 Control Unit or the IBM 1414 Input/Output Synchronizer. [ 5 ] [ 6 ]
| https://en.wikipedia.org/wiki/IBM_1015_(terminal) |
The IBM 1017 is a table-top paper tape reader from IBM introduced in 1968. [ 1 ]
The 1017 reads 5, 6, 7, and 8-track paper or polyester tape at 120 characters per second (cps). Two models were available: the model 1 reads strips of tape, while the model 2 has supply and take-up reels and can read either strips or reels.
The IBM 1018 is a paper tape punch from IBM introduced in 1968. [ 1 ]
The 1018 punches paper or polyester tape at 120 cps.
The 1017 and 1018 can attach to the multiplexor channel of an IBM System/360 Model 25 , Model 30 , Model 40 , or Model 50 , via an IBM 2826 control unit. [ 1 ] They can also attach to a 2770 Data Communication System . [ 2 ]
The 1017 and 1018 are supported by DOS/360 . | https://en.wikipedia.org/wiki/IBM_1017 |
The IBM 1017 is a table-top paper tape reader from IBM introduced in 1968. [ 1 ]
The 1017 reads 5, 6, 7, and 8-track paper or polyester tape at 120 characters per second (cps). Two models were available: the model 1 reads strips of tape, while the model 2 has supply and take-up reels and can read either strips or reels.
The IBM 1018 is a paper tape punch from IBM introduced in 1968. [ 1 ]
The 1018 punches paper or polyester tape at 120 cps.
The 1017 and 1018 can attach to the multiplexor channel of an IBM System/360 Model 25 , Model 30 , Model 40 , or Model 50 , via an IBM 2826 control unit. [ 1 ] They can also attach to a 2770 Data Communication System . [ 2 ]
The 1017 and 1018 are supported by DOS/360 . | https://en.wikipedia.org/wiki/IBM_1018 |
The IBM 1030 Data Collection System was a remote terminal system created by IBM in Endicott, New York in 1963, intended to transmit data from remote locations to a central computer system. [ 1 ] [ 2 ]
The system consisted of the following components: [ 3 ]
The 1030 had limited editing capabilities, which consisted of checking that all required data was entered before transmitting a transaction.
The 1030 originally attached to an IBM 1440 computer through a 1448 Transmission Control Unit. Later it could be attached to an IBM System/360 .
| https://en.wikipedia.org/wiki/IBM_1030 |
The IBM 601 Multiplying Punch was a unit record machine that could read two numbers from a punched card and punch their product in a blank field on the same card. The factors could be up to eight decimal digits long. [ 1 ] The 601 was introduced in 1931 and was the first IBM machine that could do multiplication. [ 2 ] [ 3 ]
In 1936 W. J. Eckert connected a modified 601 to a 285 tabulator and an 016 duplicating punch through a custom switch he designed, and used the combined setup to perform scientific calculations. [ 4 ]
| https://en.wikipedia.org/wiki/IBM_601 |
The IBM 603 Electronic Multiplier was the first mass-produced commercial electronic calculating device; it used full-size vacuum tubes to perform multiplication and addition. [ 1 ] (The earlier IBM 601 and the IBM 602 , released in the same year, used relay logic .) The IBM 603 was adapted as the arithmetic unit in the IBM Selective Sequence Electronic Calculator . It was designed by James W. Bryce , [ 2 ] and included circuits patented by A. Halsey Dickenson in 1937. [ 3 ] The IBM 603 was developed in Endicott, New York , and announced on September 27, 1946. [ 4 ]
IBM's CEO Thomas J. Watson was doubtful of the product, but his son Thomas J. Watson Jr. pushed for its commercialization. [ 5 ] Only about 20 were built, since the bulky tubes made it hard to manufacture, but the demand showed that the product was filling a need. [ 6 ] Ralph Palmer and Jerrier Haddad were hired to develop a more refined and versatile version of the 603, which became the IBM 604 Electronic Calculating Punch . [ 1 ] The 604 used miniature tubes and a patented design for pluggable modules, which made the product easier to manufacture and service. [ 7 ] Over the following 10 years IBM would build and lease 5,600 units of the IBM 604. [ 1 ]
| https://en.wikipedia.org/wiki/IBM_603 |
The IBM 6400 family of line matrix printers [ 1 ] were modern high-speed business computer printers introduced by IBM in 1995. These printers were designed for use on a variety of IBM systems including mainframes , servers , and PCs.
The 6400 was available in a choice of open pedestal (to minimize floor size requirements) or an enclosed cabinet (for quiet operation). [ 1 ] Three models existed, with print speeds of 500, 1000 or 1500 lines/minute. [ 2 ]
When configured with the appropriate graphics option, it could print mailing bar codes certified by the U.S. Postal Service. [ 1 ] Twelve configurations were commonly sold by IBM. [ 3 ]
These printers were manufactured by Printronix Corp. and rebranded for IBM; all internal parts had the Printronix logo and/or artwork. Although it once did, IBM no longer manufactures printers. One of its old printer divisions became Lexmark ; the other became the IBM Printing Systems Division, which was subsequently sold to Ricoh in 2007. [ 4 ]
| https://en.wikipedia.org/wiki/IBM_6400 |
The ACS-1 and ACS-360 are two related supercomputers designed by IBM as part of the Advanced Computing Systems project from 1965 to 1969. Although the designs were never finished and no models ever went into production, the project spawned a number of organizational techniques and architectural innovations that have since become incorporated into nearly all high-performance computers in existence today. Many of the ideas resulting from the project directly influenced the development of the IBM RS/6000 and, more recently, have contributed to the Explicitly Parallel Instruction Computing (EPIC) computing paradigm used by Intel and HP in the Itanium processors.
After the ACS project folded, the engineers were given the choice to rejoin other divisions of IBM. Many declined, as it would have required them to return to the east coast from California. A number formed MASCOR in 1970, but this was short-lived as they were unable to raise capital. Gene Amdahl took the opportunity to start his own company, building IBM-compatible mainframe computers using the ECL designs worked on for ACS. Amdahl Corporation 's 470V/6 was both faster and less expensive than IBM's own high-end designs.
IBM introduced its first supercomputer , the IBM 7030 Stretch , in May 1961. They had to withdraw it from the market when tests at the launch customer, Los Alamos Scientific Laboratory , demonstrated it had very poor real-world performance. Almost immediately, IBM organized two development projects, Project X at the IBM Poughkeepsie Laboratory and Project Y at the IBM Thomas J. Watson Research Center . Project X was tasked with designing a machine that would run 10 to 20 times as fast as Stretch, while Y was to be 100 times faster. [ 1 ]
In the spring of 1962, Control Data Corporation (CDC) announced that they had installed two computers at Lawrence Radiation Laboratory and had received a contract for a third, a much more powerful design. That new machine was officially announced in August 1963 as the CDC 6600 , causing IBM CEO Thomas J. Watson Jr. to write a now-famous memo [ 2 ] asking how it was that this small company could produce machines that outperformed those from IBM. [ 1 ]
At a meeting in September 1963, IBM decided to shore up the high-end of what was then known as the New Product Line, or NPL. Project X was directed to implement the NPL instruction set , becoming a high-end machine in that lineup. When NPL was launched in 1964 as the System/360 , Project X became the Model 92, later renamed Model 91. Eventually, about a dozen machines in the Model 90 series would be sold. [ 1 ]
Project Y was never directed to use NPL, as it was a longer-term project aimed purely at the scientific market. Development was assigned to Jack Bertram and his Experimental Computers and Programming Group and started in earnest in late 1963. Bertram brought in John Cocke , Frances Allen , Brian Randell , Herb Schorr, and Edward H. Sussenguth , among others. Schorr developed the initial instruction set and recruited his former student, Lynn Conway , to work on a system simulator. [ 1 ]
The System/360 was an immediate runaway success, but production line problems plagued deliveries and much of the company was dedicated to fixing them. Meanwhile, CDC announced they would be introducing a new machine that was 10 times the performance of the 6600. Watson was convinced that the 360 instruction set would not be suitable for the new design and was worried that development would be slowed by the turmoil at the labs due to the 360 problems. In the spring of 1965, he approved the creation of a new division in California that would be closer to their customers at the weapons labs. A building in Sunnyvale, California was purchased in 1965 and set up as the IBM Advanced Computing Systems. Max Paley would be the lab director. [ 3 ]
At a steering meeting in August 1965, Paley, Bertram, and Schorr gave presentations on the design so far. The machine would use a 48-bit word length, as that was the standard for scientific computing. The machine would have a clock cycle time of 10 nanoseconds, about 10 times faster than the 6600, with six or seven internal cycles per clock. The arithmetic logic units (ALUs) that performed most of the mathematics would be pipelined , as in the 6600, and it would dispatch multiple instructions per cycle. Branching performance would be improved with a buffer that would begin executing both sides of the branch. [ 3 ]
Harwood Kolsky gave a presentation on the various competing designs, while Gene Amdahl and Chen Tze-chiang talked about their work on the high-end 360 Model 92. Kolsky had worked at Los Alamos for seven years before joining the Stretch project, while Amdahl had left IBM after being passed over to lead Stretch development but returned to IBM Research in 1960 and joined the Project X effort. [ 3 ] In late 1964, Amdahl took a teaching position at Stanford University , wanting to return to the west coast. In January 1965 he was named an IBM Fellow for his work on the Model 92. As a Fellow, Amdahl was entitled to work at any IBM facility of his choosing and decided to join ACS at the invitation of Bob Evans. [ 4 ] [ 5 ]
Even at this early meeting, Amdahl made the argument that it would make much more sense to make the ACS compatible with the 360, as had been the case with Project X. While it might run marginally slower than the ACS, due largely to its having sixteen 32-bit registers instead of the thirty-two 48-bit ones in the new concept, it would offer customers of the Model 92 an upgrade path to much higher performance and leverage all of the software for the 360, especially the compiler technology developed for that machine. [ 3 ]
In early 1966 the Project Y design was finalized as ACS-1, with the only major change being the removal of the 192-bit extended floating-point format. In 1966, a new building with 38,000 square feet (3,500 m²) was built at 2800 Sand Hill Road in Menlo Park, California , near the Stanford Linear Accelerator, and the project moved there late in the year. A significant change to the design occurred during this period. Originally, the compiler was responsible for moving instructions out of a large core memory or thin film memory store into a smaller cache of static RAM (although that term was not in use at the time) inside the CPU. Reviewing the system, Schorr and Dick Arnold concluded it would not work, and decided to reimplement it as a single-level store with hardware caching of 32 or 64 kWords. [ 6 ]
Another concept developed for the ACS was dynamic instruction scheduling, or DIS. The ALU and indexing units, which calculated addresses, both had six-slot buffers from which it could select two instructions to execute out-of-order. This allowed the system to execute queued instructions while earlier instructions were waiting for data from memory or previous calculations. The outputs from these calculations being executed out of order would then be placed back in memory at the correct time, giving the illusion that everything had been executed in the order it was found in the machine code . Lynn Conway, who had been hired to develop a software simulation of the ACS, developed a system that used a bit-matrix to track which instructions were ready to be executed and which were waiting. [ 6 ]
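The flavour of this dynamic instruction scheduling can be conveyed with a toy issue loop: an instruction becomes ready once its source registers have been produced, and up to two ready instructions issue per cycle, possibly out of program order. This is a deliberately simplified Python sketch of the general technique, not a reconstruction of the ACS-1 hardware or of Conway's bit-matrix; the instruction list, latencies, and two-per-cycle limit are illustrative assumptions:

```python
instructions = [
    # (name, destination register, source registers, latency in cycles)
    ("LOAD  r1, A",      "r1", [],           5),
    ("LOAD  r2, B",      "r2", [],           5),
    ("ADD   r3, r1, r2", "r3", ["r1", "r2"], 1),
    ("MUL   r4, r0, r0", "r4", [],           2),   # independent: may issue early
    ("STORE r3, C",      None, ["r3"],       1),
]

ready_regs = {"r0"}                 # register values already available
completed, in_flight, cycle = set(), [], 0

while len(completed) < len(instructions):
    cycle += 1
    # retire instructions whose latency has elapsed, making results available
    for name, dst, finish in list(in_flight):
        if cycle >= finish:
            in_flight.remove((name, dst, finish))
            completed.add(name)
            if dst:
                ready_regs.add(dst)
    # issue up to two ready, not-yet-issued instructions (possibly out of order)
    issued = 0
    for name, dst, srcs, latency in instructions:
        already = name in completed or any(name == f[0] for f in in_flight)
        if not already and issued < 2 and all(s in ready_regs for s in srcs):
            in_flight.append((name, dst, cycle + latency))
            issued += 1
    print(f"cycle {cycle}: issued {issued}, in flight {[f[0] for f in in_flight]}")
```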
Using the simulator, Conway benchmarked a number of high-performance computing workloads against the IBM 7090 , CDC 6600 and S/360 Model 91 . In comparison to the 7090, IBM's older scientific offering, ACS-1 would perform the Lagrangian Hydrodynamics Calculation (LHC) 2,500 times faster. On the more complex Neutron Diffusion (ND) code, it outperformed the 7090 by almost 1,300 times, and was about 60 times as fast as the 6600. [ 7 ]
Allen, Cocke, and Jim Beatty led the development of the compilers for the machine. This represented a significant effort as the system was to be highly advanced and aggressively optimize code. Among its features was the ability to unwind loops, schedule instructions around the basic block concept, and separate those optimizations that were code-based vs. platform-based. The compiler would be used by both a PL/1 front-end as well as an expanded version of Fortran IV . [ 5 ]
In a November 1967 project review, Herb Schorr outlined a delivery plan that would ship the first machine in 1971. [ 8 ] The plan estimated that over 100,000 lines of Fortran and assembly code would be needed for the operating system and nearly 70,000 lines for the compilers, assembler, and library routines. He estimated the cost of development to be $15 million ($141 million in 2024) for the software alone. [ 5 ]
Amdahl continued to agitate for a 360-compatible version of the machine. In January 1967, Ralph L. Palmer asked John Backus , Robert Creasy , and Harwood Kolsky to review the project and Amdahl's concept. Kolsky concluded that the 360-compatible version would be too difficult, and pointed out that the ACS was aimed at the CDC 6600 market, not the 360's, so if the customer was interested in compatibility, 6600 compatibility would seem more useful. The next month, Amdahl once again argued for 360 compatibility for marketing reasons. [ 5 ]
Amdahl's continued arguments for 360 compatibility placed him increasingly at odds with Bertram. Bertram responded by "quarantining" him and making sure that no one was allowed to talk to him. Whenever someone would visit, within minutes someone else would arrive and call the first visitor into a meeting. [ 9 ] Around the same time, another ACS team member, circuit designer John Earle, was being removed from the main team due to his working style which was causing friction in the team. Earle had been beaten up in a fight in Philadelphia, [ 9 ] and when he returned from the hospital Bertram assigned Earle to Amdahl, apparently as a form of punishment. [ 5 ]
This backfired badly, as over the next month Amdahl was able to convince Earle that a 360-compatible version was possible, and Earle went ahead and designed it. The result was the Amdahl-Earle Computer, or AEC/360. Using many of the concepts in ACS-1 they produced a design that was slightly slower than it, but cost perhaps 75% as much to build, with only 90,000 gates instead of 270,000 (a gate requires about five transistors using the ECL logic of the era). Much of the reduction was due to the fewer and smaller registers, which accounted for half of the gates in the ACS-1. The loss of performance due to fewer registers was to be made up by a faster 8 nanosecond clock, possible due to a streamlined internal design. [ 5 ]
In December 1967, Kolsky was sent to meet with Amdahl to get a more detailed description of the proposed design. [ 8 ] Around the same time, Amdahl began calling people within IBM to tell them about the new design. As word of the concept spread around the System Development Division in New York, the division's vice president Erich Bloch began to organize an internal review. The ACS team responded with a "frantic" redesign that reduced the number of gates from 270,000 to 200,000 with little effect on performance, which strongly suggested it was overdesigned. [ 10 ]
Bloch selected Carl Conti from IBM Poughkeepsie to handle the review, which occurred in March 1968. Amdahl presented performance estimates based on hand-calculated cycle counts. Conti accepted Amdahl's arguments that on integer benchmarks the AEC/360 would be up to five times as fast as the ACS-1, that it would be up to 2.5 times slower on floating-point, and that the complex branching system of ACS seemed to offer 10 to 20% at best and could be adapted to the AEC if desired. But a key point made by Conti was that if the ACS system was so reliant on the compilers for its performance, moving that code to some other machine could result in far different outcomes, and that could be considered a disadvantage. [ 11 ] He also concluded that while the AEC would be closer to 108,000 gates, it was still half as complex as the ACS. [ 10 ]
A final review was performed in April, but this was brief and seemingly already decided. In May, IBM announced the ACS-1 would be cancelled and the AEC/360, to be known as the ACS-360 from that point, would move forward. Although Amdahl's competing design had much to do with this, it was not the only reason. Amdahl had also argued that the $15 million would better be spent on improving the operating systems on the 360, which would improve the entire lineup, not just the AEC. But perhaps the most serious blow to the ACS was the continued success of the 360. In January 1968, NASA had taken delivery of a 360 Model 95, which IBM described as "the fastest, most powerful computer now in user operation." [ 11 ] Although the ACS would have outperformed the Model 95 by a wide margin, by this time Watson Jr. was considering withdrawing from the supercomputer market entirely. [ 12 ]
Many of the retrospective articles on the ACS project note that the original machine would have been a world leader. Conway notes that "In hindsight, it is now recognized that had the ACS-1 been successfully built, it would have been the premier supercomputer of the era." [ 8 ] The decision to cancel the original design rested mostly on the cycle counts, which had not been tested because the simulator she had developed had not been modified to use the new instruction set. [ 8 ] Likewise, Amdahl's claim of an 8 nanosecond cycle was accepted by the Conti review, although Mark Smotherman suggests it is not realistic. [ 11 ]
Most of the ACS upper management team left, and Amdahl was placed in command. The AEC/360 continued development along the proposed lines, with the only major change being the introduction of generalized register renaming as part of the out-of-order system and changes to the branch prediction system to work with the 360 instruction set. [ 11 ]
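Register renaming of the kind mentioned above removes false (write-after-write and write-after-read) dependences by mapping each architectural destination register onto a fresh register from a larger physical pool. The sketch below is a generic illustration of that idea, not the ACS-360's actual mechanism; the register counts and instruction format are assumptions:

```python
free_physical = [f"p{i}" for i in range(16)]   # pool of physical registers
rename_table = {}                              # architectural name -> physical name

def rename(instr):
    """instr = (opcode, dest, src1, src2). Sources are translated through the
    current mapping; the destination is given a freshly allocated register."""
    op, dst, s1, s2 = instr
    ps1 = rename_table.get(s1, s1)
    ps2 = rename_table.get(s2, s2)
    pdst = free_physical.pop(0)
    rename_table[dst] = pdst
    return (op, pdst, ps1, ps2)

program = [
    ("add", "r1", "r2", "r3"),
    ("mul", "r4", "r1", "r1"),
    ("add", "r1", "r5", "r6"),   # reuses r1, but renaming removes the false hazard
]
for instr in program:
    print(instr, "->", rename(instr))
```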
While calculating the cost of the machine, Amdahl concluded that there was no way its sales could turn a profit. This was a serious risk to the company, as introducing a high-end machine that was guaranteed to lose money could be seen as anti-competitive behaviour, an attempt to take the market away from companies like CDC. IBM faced a similar problem with Stretch, but over time it was shown that the R&D in that project had been widely used in the company and, if it was billed out, the result was slightly positive. [ 13 ] To allow ACS/360 to more clearly turn a profit, Amdahl suggested producing three models of the same basic system: the original ACS/360, a smaller model with one-third the performance, and an even smaller version with one-ninth the performance, which would still make it the fastest machine in IBM's lineup. [ 11 ] This proposal was rejected. [ 13 ]
In May 1969, IBM upper management instead decided to cancel the entire project, [ 11 ] apparently at Amdahl's suggestion. [ 13 ] What had initially been intended to be a project to compete with the fast-moving CDC had now stretched on for the better part of a decade and showed little evidence that it would release a machine in the short term. Amdahl later claimed its cancellation was due primarily to it upsetting IBM's carefully planned pricing structure. The company as a whole had an understanding that machines above a certain performance level would always lose money and that introducing a machine that was as fast as the ACS/360 would require it to be priced in a way that would force their other machines to be reduced in price. [ 4 ] He has also claimed to have heard rumors that it had been deliberately set up to fail so that the technology could be used in other projects and the R&D cost written off on taxes. [ 13 ]
Shortly after the announcement of the project's cancellation, in August 1969, IBM announced the IBM System/360 Model 195 , a re-implementation of the Model 91 using integrated circuits that made it twice as fast as the Model 85 , which at that time was the fastest machine in the lineup. To address the high-end market, a vector processing task force was started in Poughkeepsie. [ 14 ]
When the ACS project was cancelled, many of the engineers were not interested in returning to the main IBM research campus in New York and wished to remain in California. Some ended up at IBM's hard drive research facility in San Jose, California , while many others left to form a new company, Multi Access System Corp, or MASCOR. This failed to raise capital and folded after only a few months. [ 14 ] Amdahl resigned in September 1970 and formed his own company to build 360-compatible machines, introducing the Amdahl 470/6 in 1975. Amdahl Corporation would become a major vendor of IBM-compatible systems into the 1980s, with a 20% or better market share through the 1970s and 80s. [ 15 ]
Although neither the ACS-1 nor the ACS-360 was ever manufactured, the IBM Advanced Computing Systems group responsible for their design developed architectural innovations and pioneered a number of RISC CPU design techniques that would become fundamental to the design of modern computer architectures and systems: [ 16 ] | https://en.wikipedia.org/wiki/IBM_Advanced_Computer_Systems_project |
CP-40 was a research precursor to CP-67 , which in turn was part of IBM's then-revolutionary CP[-67]/CMS – a virtual machine / virtual memory time-sharing operating system for the IBM System/360 Model 67 , and the parent of IBM's VM family . CP-40 ran multiple instances of client operating systems – particularly CMS , the Cambridge Monitor System , [ 1 ] built as part of the same effort. Like CP-67, CP-40 and the first version of CMS were developed by IBM's Cambridge Scientific Center (CSC) staff, working closely with MIT researchers at Project MAC and Lincoln Laboratory . CP-40/CMS production use began in January 1967. CP-40 ran on a unique, specially modified IBM System/360 Model 40 .
CP-40 was a one-off research system. Its declared goals were:
However, there was also an important unofficial mission: To demonstrate IBM's commitment to and capability for supporting time-sharing users like MIT. CP-40 (and its successor) achieved its goals from technical and social standpoints – they helped to prove the viability of virtual machines, to establish a culture of time-sharing users, and to launch a remote computer services industry. The project became embroiled in an internal IBM political war over time-sharing versus batch processing; and it failed to win the hearts and minds of the academic computer science community, which ultimately turned away from IBM to systems like Multics , UNIX , TENEX , and various DEC operating systems. Ultimately the virtualization concepts developed in the CP-40 project bore fruit in diverse areas, and remain important today.
CP-40 was the first operating system that implemented complete virtualization, i.e. it provided a virtual machine environment supporting all aspects of its target computer system (an S/360-40), such that other S/360 operating systems could be installed, tested, and used as if on a stand-alone machine. CP-40 supported fourteen simultaneous virtual machines. Each virtual machine ran in "problem state" – privileged instructions such as I/O operations caused exceptions, which were then caught by the control program and simulated. Similarly, references to virtual memory locations not present in main memory caused page faults , which again were handled by the control program rather than reflected to the virtual machine. Further details on this implementation are found in CP/CMS (architecture) .
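The trap-and-simulate cycle just described can be illustrated with a small sketch. This is purely illustrative, written in Python with invented names (it is not IBM code and does not model real S/360 instructions): a guest runs in problem state, a privileged operation raises a trap, and the control program simulates its effect against that guest's private state before resuming it.

```python
# Illustrative sketch of trap-and-simulate, loosely modelled on the CP-40 approach.
# All names are hypothetical; this is not IBM code.

class PrivilegedOpTrap(Exception):
    """Raised when a guest in problem state issues a privileged instruction."""
    def __init__(self, opcode, operands):
        self.opcode, self.operands = opcode, operands

class VirtualMachine:
    def __init__(self, name):
        self.name = name
        self.state = {}          # the guest's saved registers, PSW, etc.
        self.devices = {}        # simulated I/O devices belonging to this guest

def control_program(vm, guest_step):
    """Run one step of a guest; if it traps, simulate the privileged operation."""
    try:
        guest_step(vm)                        # runs directly, in problem state
    except PrivilegedOpTrap as trap:          # exception caught by the control program
        if trap.opcode == "SIO":              # start I/O: redirect to a virtual device
            device, data = trap.operands
            vm.devices.setdefault(device, []).append(data)
        elif trap.opcode == "LPSW":           # load PSW: update the guest's saved state
            vm.state["PSW"] = trap.operands
        # ...other privileged instructions would be simulated the same way...

# Example: a guest "program" that issues a privileged start-I/O instruction.
def guest_step(vm):
    raise PrivilegedOpTrap("SIO", ("printer", "HELLO"))

vm = VirtualMachine("CMS-1")
control_program(vm, guest_step)               # the trap is caught and simulated
```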
The basic architecture and user interface of CP-40 were carried forward into CP-67/CMS , which evolved to become IBM's current VM product line.
A Model 67 was not available for building CP-40, so a custom virtual memory device based on associative memory (the "CAT box" [ 2 ] ) was designed and built for CSC. It involved both hardware and microcode changes to a specially modified System/360 Model 40. These changes gave the unit the technology needed for full virtualization of the System/360 hardware. This modified Model 40 influenced the design of the forthcoming Model 67, which was intended to meet the needs of the same community of time-sharing users (notably MIT's Project MAC and Bell Laboratories – though both of these sites became notable IBM sales failures).
Three distinct virtual memory systems were implemented by IBM during this period:
These systems were all different, but bore a family resemblance. CP-40's CAT box was a key milestone. Pugh et al. [ 3 ] cite an IEEE paper [ 4 ] about the CP-40 virtual memory hardware, and state that it was "unique in that it included a parallel-search register bank to speed dynamic address translation. With funds supplied by Cambridge, IBM engineer[s]... built a 64-register associative memory and integrated it into a 360/40. The one-of-a-kind result was shipped to Cambridge early in 1966."
Although virtualization support was an explicit goal for CSC's modified Model 40, this was not apparently the case for the original Model 67 design. The fact that virtualization capabilities were ultimately implemented in the -67, and thus enabled the success of CP-67/CMS , speaks to the tenacity and persuasiveness of the CSC team.
CMS was first built in 1964 at CSC to run as a "client" operating system under CP-40. The CMS project leader was John Harmon. Although any S/360 operating system could run in a CP-40 virtual machine, it was decided that a new, simple, single-user interactive operating system would be best for supporting interactive time-sharing users. This would avoid the complexity and overhead of running a multi-user system like CTSS . (Contrast this with IBM's OS/MVT-TSO and its successors – essentially a time-sharing operating system running as a single task under an IBM batch operating system. With CMS, each interactive user gets a private virtual machine.)
By September 1965, many important CMS design decisions had already been made:
These were radical departures from the difficult file naming, job control (via JCL), and other requirements of IBM's "real" operating systems. [ 5 ] (Some of these concepts had been goals for operating systems from other vendors, such as Control Data Corporation and DEC .)
The CMS file system design, with its flat directory structure, was kept deliberately simple. Creasy notes: "This structure of multiple disks, each with a single directory, was chosen to be simple but useful. Multi-level linked directories, with files stored in common areas, had been the design trend when we began. We simplified the design of this and other components of CMS to reduce implementation complexity." [ 6 ]
Application programs running under CMS executed within the same address space. They accessed system services, such as the CMS file system, through a simple programming interface to the CMS nucleus , which resided in low memory within the CMS virtual machine. A variety of system calls were provided, most of which would be familiar to current CMS programmers. (Since applications ran in the CMS virtual machine, they could potentially misbehave, by overwriting CMS data, using privileged instructions, or taking other actions that could take over or crash the virtual machine. Of course, doing so could not affect other virtual machines, which were all mutually isolated; nor could it damage the underlying control program. Unlike in most operating systems, CP crashes rarely stemmed from application errors – and were thus themselves relatively rare.)
The following notes provide brief quotes, primarily from Pugh, Varian, and Creasy [see references], illustrating the development context of CP-40. Direct quotes rather than paraphrases are provided here, because the authors' perspectives color their interpretations. Also see History of CP/CMS for additional context. | https://en.wikipedia.org/wiki/IBM_CP-40 |
IBM Condor is a 1,121- qubit quantum processor created by IBM , unveiled during the IBM Quantum Summit 2023, which occurred on December 4, 2023. It is the second-largest quantum processor (in terms of qubits), just shy of the 1,125-qubit quantum processor created by Atom Computing in October 2023. [ 1 ] [ 2 ]
It has a similar performance to its predecessor, the IBM Osprey . [ 1 ]
It has a 50% increase in qubit density compared to the IBM Osprey, and over a mile of high-density cryogenic flex IO wiring. [ 1 ]
It is not as fast as the IBM Heron , unveiled during the same event. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/IBM_Condor |
IBM Distributed Office Support System , or DISOSS is a centralized document distribution and filing application for IBM 's mainframe computers running the MVS and VSE operating systems. DISOSS runs under both the CICS transaction processing system and the IMS/DS transaction processing system, and later versions use the SNADS architecture of peer to peer communication for distributed services.
Heterogeneous office systems connect through DISOSS to the OfficeVision/MVS series. The IBM systems are OV/MVS, OV/VM, OV/400, PS/CICS, PS/TSO, PS/PC, PROFS, and other mail systems supporting SNADS and DIA. "Only a single copy of DISOSS needs to be installed somewhere in the network to accomplish the connection." [ 1 ] A number of other vendors such as Digital Equipment Corporation , Hewlett-Packard , and Data General provided links to DISOSS. [ 2 ]
DISOSS provides document library function with search and retrieval controlled by security based on user ID, along with document translation based on Document Interchange Architecture (DIA) and Document Content Architecture (DCA). The different systems that use DISOSS for document exchange and distribution vary in their implementation of DCA and thus the end results of some combinations are only final form (FFT) documents rather than revisable form text (RFT).
It supports document exchange between various IBM and non-IBM office devices including the IBM Displaywriter System , the IBM 5520 , the IBM 8100/DOSF , IBM Scanmaster, and personal computers and word processors. [ 3 ] It offers format transformation and printing services, provides a rich application programming interface (API), and interfaces with other office products such as IBM OfficeVision .
DISOSS was announced in 1980, [ 4 ] and "was designated a strategic IBM product in 1982." [ 2 ] It was a key part of IBM Systems Application Architecture (SAA), but suffered from a reputation as "difficult to understand" and "a resource hog." [ 5 ] DISOSS continues to be actively marketed and supported as of 2012. [ 6 ] [ 7 ]
Version 1 of DISOSS was introduced in June 1980; Colgate-Palmolive was one of the first sites to implement DISOSS version 1, and reported dissatisfaction with the poor quality of the documentation and with software bugs. [ 8 ] IBM released version 2 in 1982, in which IBM claimed to resolve the issues which version 1 users had experienced. [ 8 ]
DISOSS was implemented by the city government of Long Beach, California during 1983–1984. [ 9 ]
IBM Corporation: Document Interchange with DISOSS Version 3 (1983)
This software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IBM_DISOSS |
IBM Eagle is a 127- qubit quantum processor . [ 1 ] [ 2 ] IBM claims that it cannot be simulated by any classical computer. [ 3 ] [ 4 ] It is two times bigger than China's Jiuzhang 2. [ 5 ] It was revealed on November 16, 2021, and was claimed to be the most powerful quantum processor ever made until November 2022, when the IBM Osprey overtook it with 433 qubits. [ 6 ] [ 7 ] [ 8 ] It has almost twice as many qubits as IBM's previous processor, the ' Hummingbird ', which had 65 quantum bits and was created in 2020. [ 6 ] IBM believes that the processes used in creating the 'Eagle' will be the backbone for their future processors. [ 6 ]
This computer hardware article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IBM_Eagle |
IBM Enterprise Systems Architecture is an instruction set architecture introduced by IBM as Enterprise Systems Architecture/370 (ESA/370) in 1988. It is based on the IBM System/370-XA architecture.
It extended the dual-address-space mechanism introduced in later IBM System/370 models by adding a new mode in which general-purpose registers 1–15 are each associated with an access register referring to an address space; instruction operands whose addresses are computed with a given general-purpose register as a base register are in the address space referred to by the corresponding access register.
The later Enterprise Systems Architecture/390 (ESA/390), introduced in 1990, added a facility to allow device descriptions to be read using channel commands and, in later models, added instructions to perform IEEE 754 floating-point operations and increased the number of floating-point registers from 4 to 16.
Enterprise Systems Architecture is essentially a 32-bit architecture; as with System/360, System/370, and 370-XA, the general-purpose registers are 32 bits long, and the arithmetic instructions support 32-bit arithmetic. Only the addressing of byte-addressable real memory (Central Storage) and Virtual Storage is limited to 31 bits, as is the case with 370-XA. (IBM reserved the most significant bit to easily support applications expecting 24-bit addressing, as well as to sidestep a problem with extending two instructions to handle 32-bit unsigned addresses.) It maintains problem state backward compatibility dating back to 1964 with the 24-bit-address/32-bit-data ( System/360 and System/370 ) and subsequent 24/31-bit-address/32-bit-data architecture ( System/370-XA ). However, the I/O subsystem is based on System/370 Extended Architecture (S/370-XA), not on the original S/370 I/O instructions.
On February 15, 1988, IBM announced [ 6 ] [ 7 ] Enterprise Systems Architecture/370 (ESA/370) for 3090 enhanced ("E") models and for 4381 model groups 91E and 92E.
In addition to the primary-space and secondary-space addressing modes that later System/370 models and System/370 Extended Architecture (S/370-XA) models support, ESA has an access register mode in which each use of general registers 1–15 as a base register uses an associated access register to select an address space. [ 8 ] In addition to the normal address spaces that machines with the dual-address-space facility support, ESA also allows data spaces, which contain no executable code.
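As a rough illustration of how access-register mode changes address resolution, the following sketch pairs each base register with an access register that selects an address space. This is a simplified toy model with invented names; the real architecture involves access-list-entry tokens, authority checks and translation details that are omitted here.

```python
# Toy model of ESA access-register addressing; not the real architecture.
NUM_REGS = 16

general_regs = [0] * NUM_REGS          # 32-bit general-purpose registers
access_regs = [0] * NUM_REGS           # access registers paired with GR 1-15
address_spaces = {                     # index -> address space contents
    0: bytearray(4096),                # the primary address space
    1: bytearray(4096),                # a data space containing no code
}

def resolve(base_reg, displacement):
    """Return (address space, offset) for a base+displacement operand."""
    offset = (general_regs[base_reg] + displacement) & 0x7FFFFFFF   # 31-bit address
    if base_reg == 0:                  # GR 0 as base does not go through an AR
        return address_spaces[0], offset
    space = address_spaces[access_regs[base_reg]]   # the AR picks the space
    return space, offset

# Example: operands based on GR 5 are resolved in data space 1.
general_regs[5] = 0x100
access_regs[5] = 1
space, offset = resolve(5, 0x20)       # -> address space 1, offset 0x120
```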
A machine may be divided into Logical Partitions ( LPARs ), each with its own virtual system memory so that multiple operating systems may run concurrently on one machine.
An important capability to form a Parallel Sysplex was added to the architecture in 1994.
ESA/390 also extends the Sense ID command to provide additional information about a device, and additional device-dependent channel commands, the command codes for which are provided in the Sense ID information, to allow device description information to be fetched from a device. [ 11 ] [ 12 ]
Starting with the System/390 G5 , [ 9 ] [ 10 ] IBM introduced: [ 13 ]
Some PC-based IBM-compatible mainframes which provide ESA/390 processors in smaller machines have been released over time, but are only intended for software development.
ESA/390 adds the following [ 14 ] facilities
The following channel commands [ f ] are new, or have their functionality changed, in ESA/390: [ 12 ] | https://en.wikipedia.org/wiki/IBM_Enterprise_Systems_Architecture |
The Future Systems project ( FS ) was a research and development project undertaken in IBM in the early 1970s to develop a revolutionary line of computer products, including new software models which would simplify software development by exploiting modern powerful hardware . The new systems were intended to replace the System/370 in the market some time in the late 1970s.
There were two key components to FS. The first was the use of a single-level store that allows data stored on secondary storage like disk drives to be referred to within a program as if it was data stored in main memory ; variables in the code could point to objects in storage and they would invisibly be loaded into memory, eliminating the need to write code for file handling. The second was to include instructions corresponding to the statements in high-level programming languages , allowing the system to directly run programs without the need for a compiler to convert from the language to machine code . One could, for instance, write a program in a text editor and the machine would be able to run that directly.
Combining the two concepts in a single system in a single step proved to be an impossible task. This concern was pointed out from the start by the engineers, but it was ignored by management and project leaders for many reasons. Officially started in the fall of 1971, by 1974 the project was moribund, and formally cancelled in February 1975. The single-level store was implemented in the System/38 in 1978 and moved to other systems in the lineup after that, but the concept of a machine that directly ran high-level languages has never appeared in an IBM product.
The System/360 was announced in April 1964. Only six months later, IBM began a study project on what trends were taking place in the market and how these should be used in a series of machines that would replace the 360 in the future. One significant change was the introduction of useful integrated circuits (ICs), which would allow the many individual components of the 360 to be replaced with a smaller number of ICs. This would allow a more powerful machine to be built for the same price as existing models. [ 1 ]
By the mid-1960s, the 360 had become a massive best-seller. This influenced the design of the new machines, as it led to demands that the machines have complete backward compatibility with the 360 series. When the machines were announced in 1970, now known as the System/370 , they were essentially 360s using small-scale ICs for logic, much larger amounts of internal memory and other relatively minor changes. [ 2 ] A few new instructions were added and others cleaned up, but the system was largely identical from the programmer's point of view. [ 3 ]
The recession of 1969–1970 led to slowing sales in the 1970-71 time period and much smaller orders for the 370 compared to the rapid uptake of the 360 five years earlier. [ 4 ] For the first time in decades, IBM's growth stalled. While some in the company began efforts to introduce useful improvements to the 370 as soon as possible to make them more attractive, others felt nothing short of a complete reimagining of the system would work in the long term. [ 3 ]
Two months before the announcement of the 370s, the company once again started considering changes in the market and how that would influence future designs. [ 3 ] In 1965, Gordon Moore predicted that integrated circuits would see exponential growth in the number of circuits they supported, today known as Moore's Law . IBM's Jerrier A. Haddad wrote a memo on the topic, suggesting that the cost of logic and memory was going to zero faster than it could be measured. [ 3 ]
An internal Corporate Technology Committee (CTC) study concluded a 30-fold reduction in the price of memory would take place in the next five years, and another 30 in the five after that. If IBM was going to maintain its sales figures, it was going to have to sell 30 times as much memory in five years, and 900 times as much five years later. Similarly, hard disk cost was expected to fall ten times in the next ten years. To maintain their traditional 15% year-over-year growth, by 1980 they would have to be selling 40 times as much disk space and 3600 times as much memory. [ 4 ]
In terms of the computer itself, if one followed the progression from the 360 to the 370 and onto some hypothetical System/380, the new machines would be based on large-scale integration and would be dramatically reduced in complexity and cost. There was no way they could sell such a machine at their current pricing; if they tried, another company would introduce far less expensive systems. They could instead produce much more powerful machines at the same price points, but their customers were already underutilizing their existing systems. To provide a reasonable argument to buy a new high-end machine, IBM had to come up with reasons for their customers to need this extra power. [ 5 ] [ 6 ]
Another strategic issue was that while the cost of computing was steadily going down, the costs of programming and operations, being made of personnel costs, were steadily going up. Therefore, the part of the customer's IT budget available for hardware vendors would be significantly reduced in the coming years, and with it the base for IBM revenue. It was imperative that IBM, by addressing the cost of application development and operations in its future products, would at the same time reduce the total cost of IT to the customers and capture a larger portion of that cost. [ 6 ]
In 1969, Bob O. Evans , president of the IBM System Development Division which developed their largest mainframes , asked Erich Bloch of the IBM Poughkeepsie Lab to consider how the company might use these much cheaper components to build machines that would still retain the company's profits. Bloch, in turn, asked Carl Conti to outline such systems. Having seen the term "future systems" being used, Evans referred to the group as Advanced Future Systems. The group met roughly biweekly.
Among the many developments initially studied under AFS, one concept stood out. At the time, the first systems with virtual memory (VM) were emerging, and the seminal Multics project had expanded on this concept as the basis for a single-level store . In this concept, all data in the system is treated as if it is in main memory , and if the data is physically located on secondary storage , the VM system automatically loads it into memory when a program calls for it. Instead of writing code to read and write data in files, the programmer simply told the operating system they would be using certain data, which then appeared as objects in the program's memory and could be manipulated like any other variable . The VM system would ensure that the data was synchronized with storage when needed. [ 7 ]
This was seen as a particularly useful concept at the time, as the emergence of bubble memory suggested that future systems would not have separate core memory and disk drives ; instead, everything would be stored in a large amount of bubble memory. [ 7 ] Physically, systems would be single-level stores, so the idea of having another layer for "files" which represented separate storage made no sense, and having pointers into a single large memory would not only mean one could simply refer to any data as if it were local, but also eliminate the need for separate application programming interfaces (APIs) for the same data depending on whether it was loaded or not. [ 7 ]
Evans also asked John McPherson at IBM's Armonk headquarters to chair another group to consider how IBM would offer these new designs across their many divisions. A group of twelve participants spread across three divisions produced the "Higher Level System Report", or HLS, which was delivered on 25 February 1970. A key component of HLS was the idea that programming was more expensive than hardware. If a system could greatly reduce the cost of development, then that system could be sold for more money, as the overall cost of operation would still be lower than the competition. [ 8 ]
The basic concept of the System/360 series was that a single instruction set architecture (ISA) would be defined that offered every possible instruction the assembly language programmer might desire. Whereas previous systems might be dedicated to scientific programming or currency calculations and had instructions for that sort of data, the 360 offered instructions for both of these and practically every other task. Individual machines were then designed that targeted particular workloads and ran those instructions directly in hardware and implemented the others in microcode . This meant any machine in the 360 family could run programs from any other, just faster or slower depending on the task. This proved enormously successful, as a customer could buy a low-end machine and always upgrade to a faster one in the future, knowing all their applications would continue to run.
Although the 360's instruction set was large, those instructions were still low-level, representing single operations that the central processing unit (CPU) would perform, like "add two numbers" or "compare this number to zero". Programming languages and their links to the operating system allowed users to type in programs using high-level concepts like "open file" or "add these arrays". The compilers would convert these higher-level abstractions into a series of machine code instructions.
For HLS, the instructions would instead represent those higher-level tasks directly. That is, there would be instructions in the machine code for "open file". If a program called this instruction, there was no need to convert it into lower-level code; the machine would carry it out internally, in microcode or even in a direct hardware implementation. [ 8 ] This worked hand-in-hand with the single-level store; to implement HLS, every bit of data in the system was paired with a descriptor , a record that contained the type of the data, its location in memory, and its precision and size. As descriptors could point to arrays and record structures as well, this allowed the machine language to process these as atomic objects. [ 8 ]
By representing these much higher-level objects directly in the system, user programs would be much smaller and simpler. For instance, to add two arrays of numbers held in files in traditional languages, one would generally open the two files, read one item from each, add them, and then store the value to a third file. In the HLS approach, one would simply open the files and call add. The underlying operating system would map these into memory, create descriptors showing them both to be arrays, and then the add instruction would see they were arrays and add all the values together. Assigning that value into a newly created array would have the effect of writing it back to storage. A program that might take a page or so of code was now reduced to a few lines. Moreover, as this was the natural language of the machine, the command shell was itself programmable in the same way; there would be no need to "write a program" for a simple task like this, as it could be entered as a command. [ 8 ]
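The contrast being drawn can be sketched loosely in modern terms. The code below is not HLS notation; the descriptor fields and the "store" mapping are invented for illustration. The first function spells out file handling and an element-by-element loop, while the second simply names the two arrays and adds them in one step, letting their descriptors say what they are.

```python
# Loose illustration of the HLS idea; the descriptor layout is invented.

# Conventional style: explicit file handling and an element-by-element loop.
def add_files_conventional(path_a, path_b, path_out):
    with open(path_a) as fa, open(path_b) as fb, open(path_out, "w") as fo:
        for line_a, line_b in zip(fa, fb):
            fo.write(f"{float(line_a) + float(line_b)}\n")

# HLS style: objects carry descriptors (type, location, size), and one
# high-level "add" operates on whole arrays fetched from the store.
def add_arrays_hls(store, name_a, name_b, name_out):
    a, b = store[name_a], store[name_b]            # appear as in-memory objects
    assert a["descriptor"]["dtype"] == "array" and b["descriptor"]["dtype"] == "array"
    result = [x + y for x, y in zip(a["data"], b["data"])]
    store[name_out] = {                            # assignment persists the result
        "descriptor": {"dtype": "array", "size": len(result)},
        "data": result,
    }
```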
The report concluded:
The user and IBM should both gain substantially from the easier coding and debugging of concise programs. We expect to sharply reduce the cost of programming and the size of complex programs, as both program quality and programmer productivity are enhanced. [ 8 ]
Until the end of the 1960s, IBM had been making most of its profit on hardware, bundling support software and services along with its systems to make them more attractive. Only hardware carried a price tag, but those prices included an allocation for software and services. [ 7 ]
Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives , at a price significantly lower than IBM, thus shrinking the possible base for recovering the cost of software and services. IBM responded by refusing to service machines with these third-party add-ons, which led almost immediately to sweeping anti-trust investigations and many subsequent legal remedies. In 1969, the company was forced to end its bundling arrangements and announced they would sell software products separately. [ 9 ]
Gene Amdahl saw an opportunity to sell compatible machines without software; the customer could purchase a machine from Amdahl and the operating system and other software from IBM. If IBM refused to sell it to them, they would be breaching their legal obligations. In early 1970, Amdahl quit IBM and announced his intention to introduce System/370 compatible machines that would be faster than IBM's high-end offerings but cost less to purchase and operate. [ 10 ]
At first, IBM was unconcerned. They made most of their money on software and support, and that money would still be going to them. But to be sure, in early 1971 an internal IBM task force, Project Counterpoint, was formed to study the concept. They concluded that the compatible mainframe business was indeed viable and that the basis for charging for software and services as part of the hardware price would quickly vanish. These events created a desire within the company to find some solution that would once again force the customers to purchase everything from IBM but in a way that would not violate antitrust laws. [ 7 ]
If IBM followed the suggestions of the HLS report, this would mean that other vendors would have to copy the microcode implementing the huge number of instructions. As this microcode was software, any company that did so would be committing copyright infringement. [ 7 ] At this point, the AFS/HLS concepts gained new currency within the company.
In May–June 1971, an international task force convened in Armonk under John Opel , then a vice-president of IBM. Its assignment was to investigate the feasibility of a new line of computers which would take advantage of IBM's technological advantages in order to render obsolete all previous computers: not only the compatible offerings of other vendors, but also IBM's own products. The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order-of-magnitude reduction in the costs of developing, operating and maintaining application software.
The major objectives of the FS project were consequently stated as follows:
It was hoped that a new architecture making heavier use of hardware resources, the cost of which was going down, could significantly simplify software development and reduce costs for both IBM and customers.
One design principle of FS was a " single-level store " which extended the idea of virtual memory (VM) to cover persistent data. In traditional designs, programs allocate memory to hold values that represent data. This data would normally disappear if the machine is turned off, or the user logs out. In order to have this data available in the future, additional code is needed to write it to permanent storage like a hard drive , and then read it back in the future. To ease these common operations, a number of database engines emerged in the 1960s that allowed programs to hand data to the engine which would then save it and retrieve it again on demand.
Another emerging technology at the time was the concept of virtual memory. In early systems, the amount of memory available to a program to allocate for data was limited by the amount of main memory in the system, which might vary based on factors such as the program being moved from one machine to another, or other programs allocating memory of their own. Virtual memory systems addressed this problem by defining a maximum amount of memory available to all programs, typically some very large number, much more than the physical memory in the machine. If a program asks to allocate memory that is not physically available, a block of main memory is written out to disk, and that space is used for the new allocation. If the program requests data from that offloaded ("paged" or "spooled") memory area, it is invisibly loaded back into main memory again. [ 11 ]
A single-level store is essentially an expansion of virtual memory to all memory, internal or external. VM systems invisibly write memory to a disk, which is the same task the file system performs, so there is no reason the VM cannot also serve as the file system. Instead of programs allocating memory from "main memory" which is then perhaps sent to some other backing store by VM, all memory is immediately allocated by the VM. This means there is no need to save and load data; simply allocating it in memory has that effect, as the VM system writes it out. When the user logs back in, that data, and the programs that were running it (as they are also in the same unified memory), are immediately available in the same state they were before. The entire concept of loading and saving is removed; programs, and entire systems, pick up where they were even after a machine restart.
This concept had been explored in the Multics system, where it proved to be very slow, but that was a side effect of the available hardware: the main memory was implemented in core, with a far slower backing store in the form of a hard drive or drum . With the introduction of new forms of non-volatile memory , most notably bubble memory , [ 7 ] that worked at speeds similar to core but had memory density similar to a hard disk, it appeared a single-level store would no longer have any performance downside.
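A very loose modern analogy for this behaviour, in which ordinary memory operations quietly stand in for explicit file I/O, is a memory-mapped file. The sketch below uses an arbitrary file name and size and is only an analogy; FS envisioned the mechanism for the entire machine, not a single file.

```python
# Rough analogy to a single-level store using a memory-mapped file.
# The program issues no explicit read() or write(); paging does the work.
import mmap

PATH, SIZE = "store.bin", 4096

# Make sure the backing "storage" exists and has the right size.
with open(PATH, "a+b") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    store = mmap.mmap(f.fileno(), SIZE)   # looks like ordinary memory to the program
    store[0:5] = b"hello"                 # just an assignment, no write() call
    greeting = bytes(store[0:5])          # just a read of "memory"
    store.flush()                         # the OS would also write it back lazily
    store.close()
```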
Future Systems planned on making the single-level store the key concept in its new operating systems. Instead of having a separate database engine that programmers would call, there would simply be calls in the system's application programming interface (API) to retrieve memory. And those API calls would be based on particular hardware or microcode implementations, which would only be available on IBM systems, thereby achieving IBM's goal of tightly tying the hardware to the programs that ran on it. [ 7 ]
Another principle was the use of very high-level complex instructions to be implemented in microcode . As an example, one of the instructions, CreateEncapsulatedModule , was a complete linkage editor. Other instructions were designed to support the internal data structures and operations of programming languages such as FORTRAN , COBOL , and PL/I . In effect, FS was designed to be the ultimate complex instruction set computer ( CISC ). [ 7 ]
Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, operating system software, data base software and more would now be considered as making up one integrated system, with each and every elementary function implemented in one of many layers including circuitry, microcode , and conventional software . More than one layer of microcode and code were contemplated, sometimes referred to as picocode or millicode .
Depending on the people one was talking to, the very notion of a "machine" therefore ranged between those functions which were implemented as circuitry (for the hardware specialists) to the complete set of functions offered to users, irrespective of their implementation (for the systems architects).
The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC).
Meanwhile, John Cocke , one of the chief designers of early IBM computers, began a research project to design the first reduced instruction set computer ( RISC ). [ citation needed ] In the long run, the IBM 801 RISC architecture, which eventually evolved into IBM's POWER , PowerPC , and Power architectures, proved to be vastly cheaper to implement and capable of achieving much higher clock rate.
The FS project was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of 1971. In the course of time, several other research projects in various IBM locations merged into the FS project or became associated with it.
During its entire life, the FS project was conducted under tight security provisions. The project was broken down into many subprojects assigned to different teams. The documentation was similarly broken down into many pieces, and access to each document was subject to verification of the need-to-know by the project office. Documents were tracked and could be called back at any time.
In Sowa's memo (see External Links, below) he noted, "The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved."
As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution. Some teams were even working on FS without knowing it. This explains why, when asked to define FS, most people gave a very partial answer, limited to the intersection of FS with their field of competence.
Four implementations of the FS architecture were planned: the top-of-line model was being designed in Poughkeepsie, NY , where IBM's largest and fastest computers were built; the next model down was being designed in Endicott, NY , which had responsibility for the mid-range computers; the model below that was being designed in Böblingen, Germany ; and the smallest model was being designed in Hursley, UK . [ 12 ]
A continuous range of performance could be offered by varying the number of processors in a system at each of the four implementation levels.
In early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated in the Mohansic ASDD laboratory (halfway between the Armonk/White Plains headquarters and Poughkeepsie).
The FS project was terminated in 1975. The reasons given for terminating the project depend on the person asked, each of whom puts forward the issues related to the domain with which they were familiar. In reality, the success of the project was dependent on a large number of breakthroughs in all areas from circuit design and manufacturing to marketing and maintenance. Although each single issue, taken in isolation, might have been resolved, the probability that they could all be resolved in time and in mutually compatible ways was practically zero.
One symptom was the poor performance of its largest implementation, but the project was also marred by protracted internal arguments about various technical aspects, including internal IBM debates about the merits of RISC vs. CISC designs. The complexity of the instruction set was another obstacle; it was considered "incomprehensible" by IBM's own engineers and there were strong indications that the system wide single-level store could not be backed up in part, [ clarification needed ] foretelling the IBM AS/400's partitioning of the System/38's single-level store. [ 13 ] [ clarification needed ] Moreover, simulations showed that the execution of native FS instructions on the high-end machine was slower than the System/370 emulator on the same machine. [ 14 ]
The FS project was finally terminated when IBM realized that customer acceptance would be much more limited than originally predicted because there was no reasonable application migration path for 360 architecture customers. In order to leave maximum freedom to design a truly revolutionary system, ease of application migration was not one of the primary design goals for the FS project, but was to be addressed by software migration aids taking the new architecture as a given. In the end, it appeared that the cost of migrating the mass of user investments in COBOL and assembly language based applications to FS was in many cases likely to be greater than the cost of acquiring a new system.
Although the FS project as a whole was terminated, a simplified version of the architecture for the smallest of the three machines continued to be developed in Rochester. It was finally released as the IBM System/38 , which proved to be a good design for ease of programming, but it was woefully underpowered. The AS/400 inherited the same architecture, but with performance improvements. In both machines, the high-level instruction set generated by compilers is not interpreted, but translated into a lower-level machine instruction set and executed; the original lower-level instruction set was a CISC instruction set with some similarities to the System/360 instruction set. [ 15 ] In later machines the lower-level instruction set was an extended version of the PowerPC instruction set, which evolved from John Cocke's RISC machine. The dedicated hardware platform was replaced in 2008 by the IBM Power Systems platform running the IBM i operating system.
Besides System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated in the following parts of IBM's product line: | https://en.wikipedia.org/wiki/IBM_Future_Systems_project |
IBM Heron is a 156- qubit tunable-coupler quantum processor created by IBM , originally unveiled during the IBM Quantum Summit 2023, which occurred on December 4, 2023, and is the highest performance quantum processor IBM has ever built. [ 1 ] [ 2 ]
It is currently in use on the IBM Quantum System Two , unveiled during the same event. [ 1 ]
IBM claims that this processor eliminates cross-talk errors that emerged in their previous quantum processors, and that this processor is being made available for users via the cloud . [ 1 ] [ 2 ]
The first version is reportedly five times faster than IBM's previous record holder, the IBM Eagle . [ 1 ] [ 2 ]
During the IBM Quantum Developer Conference, a second revision (called r2) was released, which increased the qubit count from 133 to 156, and introduced a two-level system mitigation to reduce the impact of an important source of noise. [ 3 ]
This computer hardware article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IBM_Heron |
IBM Mobile is a portfolio of mobile solutions [ buzzword ] for businesses offered by the information technology company IBM that includes software, cloud services, and partnerships. [ 1 ]
In 2013, IBM launched IBM MobileFirst, a mobile strategy that enables clients to streamline and accelerate mobile adoption. In 2016, IBM incorporated its mobile capabilities into its IBM Cloud portfolio, and the MobileFirst naming was discontinued. IBM has more than 4,300 patents in mobile, social and security, which have been incorporated into IBM Mobile solutions [ 2 ] that address the mobile challenges of industries such as Banking, Insurance, Retail, Transport, Telecom, Government, Healthcare and Automotive. [ 3 ]
Products in the IBM Mobile portfolio include:
In July 2014, IBM and Apple announced a partnership to transform enterprise mobility through a new class of industry-specific business apps for iPhone and iPad . As part of the agreement, IBM, under the brand IBM MobileFirst for iOS, will create exclusive industry applications for iOS and use its services to bring iPads and iPhones to enterprises and corporations. [ 7 ] Apple, on its end, introduced a special AppleCare program that provides 24/7 hardware support for devices for enterprises. [ 8 ] The partnership offers Apple access to IBM’s customers and analytics capabilities to power enterprise apps for their devices. [ 9 ] On December 10, 2014, Apple and IBM, in a joint statement, introduced 10 mobile apps for business. On December 16, 2015, the two announced the availability of over 100 enterprise apps. [ 10 ] The apps provide solutions in a number of different industries such as banking , retail , insurance , financial services , telecommunications and government . Clients include enterprises such as Air Canada , Banorte , Citi and Sprint . [ 11 ] | https://en.wikipedia.org/wiki/IBM_Mobile |
IBM Osprey is a 433- qubit quantum processor created by IBM , revealed during the IBM Quantum Summit 2022, which occurred on November 9, 2022, in New York, United States. [ 1 ]
It is 3 times larger than its predecessor, the IBM Eagle . [ 2 ] [ better source needed ]
It needs to be cooled down to a temperature of ~0.02 K (-273.13 °C ).
This computer hardware article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IBM_Osprey |
The IBM PC Network was the IBM PC 's first LAN system. [ 1 ] [ 2 ] It consisted of network cards , cables , and a small device driver known as NetBIOS (Network Basic Input/Output System). It used a data rate of 2 Mbit/s and carrier-sense multiple access with collision detection .
NetBIOS was developed by Sytek Inc as an API for software communication over this IBM PC Network LAN technology; with Sytek networking protocols being used for communication over the wire. IBM's later Token Ring network emulated the NetBIOS application programming interface , and it lived on in many later systems.
The original broadband version in 1984 communicated over 75 Ω cable television compatible co-axial cable with each card connecting via a single F connector . [ 1 ] Separate transmit and receive frequencies were used. Cards could be ordered that used different frequencies so multiple cards could transmit simultaneously, at 2 Mbit/s each. [ 3 ] A Sytek head-end device was required to translate from each card's transmit frequency to the destination card's receive frequency. Frequency-division multiplexing allowed the cable to be shared with other voice, video, and data traffic.
Later, in 1987, a much cheaper baseband version, also running at 2 Mbit/s, connected computers in daisy-chain style using twisted-pair cables with 6P2C modular telephone connectors. [ 4 ] Interface cards had two 6P2C sockets for connecting to the left and right neighbor nodes. The unused sockets at the ends of the network segment had to be fitted with a terminator on one end of the chain and a wrap plug on the other. A hybrid star topology was possible using a hub. [ 5 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IBM_PC_Network |
IBM Quantum System Two is the first modular utility-scale quantum computer system, unveiled by IBM on December 4, 2023. [ 1 ]
It is a successor to the IBM Quantum System One .
It contains three IBM Quantum Heron processors; because the system is modular and fully upgradeable, it can be scaled up and later fitted with newer QPUs. [ 1 ] [ 2 ]
For maximum efficiency, it has to be cooled down to a temperature of a few hundredths of a degree above absolute zero (10–20 mK ), [ 3 ] using dilution refrigeration technology .
IBM has stated that their clients and partners are using their 100+ qubit systems to advance science. [ 1 ]
IBM has stated that their quantum coupling technology will allow multiple IBM Quantum System Two units to connect together, to create systems capable of running 100 million operations in a single quantum circuit, and later a billion operations, by 2033. [ 1 ]
This computer hardware article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IBM_Q_System_Two |
IBM Quantum Platform (previously known as IBM Quantum Experience ) is an online platform allowing public and premium access to cloud-based quantum computing services provided by IBM . This includes access to a set of IBM's prototype quantum processors, a set of tutorials on quantum computation, and access to an interactive textbook. As of February 2021, there are over 20 devices on the service, six of which are freely available for the public. This service can be used to run algorithms and experiments , and explore tutorials and simulations around what might be possible with quantum computing .
IBM's quantum processors are made up of superconducting transmon qubits , located in dilution refrigerators at the IBM Research headquarters at the Thomas J. Watson Research Center . Users interact with a quantum processor through the quantum circuit model of computation. Circuits can be created either graphically with the Quantum Composer, or programmatically with the Jupyter notebooks of the Quantum Lab. Circuits are created using Qiskit and can be compiled down to OpenQASM for execution on real quantum systems.
The Quantum Composer is a graphical user interface (GUI) designed by IBM to allow users to construct various quantum algorithms or run other quantum experiments. Users may see the results of their quantum algorithms by either running them on a real quantum processor or by using a simulator. Algorithms developed in the Quantum Composer are referred to as a "quantum score", in reference to the Quantum Composer resembling a musical sheet. [ 8 ]
The composer can also be used in scripting mode, where the user can write programs in the OpenQASM language instead. Below is an example of a very small program, built for IBM's 5- qubit computer. The program instructs the computer to generate the quantum state |Ψ⟩ = (1/√2)(|000⟩ + |111⟩), a 3-qubit GHZ state , which can be thought of as a variant of the Bell state , but with three qubits instead of two. It then measures the state, forcing it to collapse to one of the two possible outcomes, |000⟩ or |111⟩.
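The original OpenQASM listing is not reproduced above. As an illustration, an equivalent circuit can be written with Qiskit in Python (a minimal sketch assuming a standard Qiskit installation, not the original program); the platform can compile such a circuit down to OpenQASM for execution on a real device or a simulator.

```python
# Minimal Qiskit sketch of the GHZ preparation and measurement described above.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 3)           # three qubits, three classical bits
qc.h(0)                             # qubit 0 -> (|0> + |1>)/sqrt(2)
qc.cx(0, 1)                         # entangle qubit 1 with qubit 0
qc.cx(0, 2)                         # entangle qubit 2: state is (|000> + |111>)/sqrt(2)
qc.measure([0, 1, 2], [0, 1, 2])    # collapses the state to 000 or 111

print(qc.draw())                    # text drawing of the "quantum score"
```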
Every instruction in the QASM language is the application of a quantum gate , the initialization of the chip's registers to zero, or the measurement of these registers. | https://en.wikipedia.org/wiki/IBM_Quantum_Platform
The IBM System/360 ( S/360 ) is a family of mainframe computer systems announced by IBM on April 7, 1964, [ 1 ] and delivered between 1965 and 1978. [ 2 ] System/360 was the first family of computers designed to cover both commercial and scientific applications and a complete range of applications from small to large. The design distinguished between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices. All but the only partially compatible Model 44 and the most expensive systems use microcode to implement the instruction set, featuring 8-bit byte addressing and fixed-point binary, fixed-point decimal and hexadecimal floating-point calculations. The System/360 family introduced IBM's Solid Logic Technology (SLT), which packed more transistors onto a circuit card, allowing more powerful but smaller computers. [ 3 ]
System/360's chief architect was Gene Amdahl , and the project was managed by Fred Brooks , responsible to Chairman Thomas J. Watson Jr. [ 4 ] The commercial release was piloted by another of Watson's lieutenants, John R. Opel , who managed the launch of IBM's System/360 mainframe family in 1964. [ 5 ] The slowest System/360 model announced in 1964, the Model 30 , could perform up to 34,500 instructions per second, with memory from 8 to 64 KB . [ 6 ] High-performance models came later. The 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second . [ 7 ] The larger 360 models could have up to 8 MB of main memory , [ 4 ] though that much memory was unusual; a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB or 1024 KB was more common. Up to 8 megabytes of slower (8 microsecond) Large Capacity Storage (LCS) was also available for some models.
The IBM 360 was extremely successful, allowing customers to purchase a smaller system knowing they could expand it, if their needs grew, without reprogramming application software or replacing peripheral devices. It influenced computer design for years to come; many consider it one of history's most successful computers. Application-level compatibility (with some restrictions) for System/360 software is maintained to the present day with the System z mainframe servers.
By the early 1960s, IBM was struggling with the load of supporting and upgrading five separate lines of computers. These were aimed at different market segments and were entirely different from each other. A customer who had purchased a machine to handle accounting, such as the IBM 1401 , and was now looking for a machine for engineering calculations, such as the IBM 7040 , had no reason to select IBM – the 7040 was incompatible with the 1401 and the two might as well have been from different companies. Customers were frustrated that major investments, often entirely new machines and programs, were required when seemingly small performance improvements were needed. [ 8 ]
In 1961, IBM assembled a task force to chart their developments for the 1960s, known as SPREAD, for Systems Programming, Research, Engineering and Development. In meetings at the New Englander Motor Hotel in Greenwich, Connecticut , SPREAD developed a new concept for the next generation of IBM machines. At the time, new technologies were coming onto the market, including the replacement of individual transistors with small-scale integrated circuits and the move from the former 6-bit-oriented words to an 8-bit byte . These were going to lead to a new generation of machines, today known as the third generation, from all of the existing vendors. [ 8 ]
Where SPREAD differed significantly from previous concepts was what features would be supported. Instead of machines aimed at different market niches, the new concept was effectively the union of all of these designs. A single instruction set architecture (ISA) included instructions for binary , floating-point , and decimal arithmetic, string processing, conversion between character sets (a major issue before the widespread use of ASCII ) and extensive support for file handling, among many other features. [ 8 ]
This would mean IBM would be introducing yet another line of machines, once again incompatible with their earlier machines. But the new systems would be able to run all of the programs that formerly required different machines. There was a risk that their customers, facing the purchase of yet another new and incompatible platform, would simply choose some other vendor. Yet the concept steadily gained support, and six months after the task force was formed, the company decided to implement the SPREAD concept. [ 8 ]
A new team was organized under the direction of Bob Evans , who personally persuaded CEO Thomas J. Watson Jr. to develop the new system. Gene Amdahl was the chief architect of the computers themselves, while Fred Brooks was the project lead for the software and Erich Bloch led the development of IBM's hybrid integrated circuit designs, Solid Logic Technology . [ 9 ]
Producing a single system design with support for all of these features, at a price acceptable to low-end customers and with a performance level acceptable to high-end customers, would border on impossible. Instead, the SPREAD concept was based on the separation of the defined feature set from its internal operation, with a family of machines with different performance and different internal designs.
Specifically, depending on the machine, some components might not be directly implemented in hardware, and would instead be completed using small programs referred to as microcode or microprograms. These small programs, or subprograms, would be stored in read only memory (ROM) [ NB 1 ] inside the machine. Some models [ NB 2 ] use microcode in the central processing unit (CPU) to implement instructions while others [ NB 3 ] use only hardware. Some models [ NB 4 ] use cycle-stealing microcode in the CPU to implement I/O channels while others [ NB 5 ] use only hardware in separate [ NB 6 ] units. Today this approach is known as microcode . [ 10 ]
This meant that a single lineup could have machines tailored to match the price and performance niches that formerly demanded entirely separate computer systems, where software was specific to each system. This flexibility greatly lowered barriers to entry. With most other vendors, customers had to choose between machines they might outgrow and machines that were potentially too powerful and thus too costly. In practice, this meant that many companies simply did not buy computers. Now, a customer could purchase a machine that solved a particular requirement, knowing they could switch models as their needs changed, without losing support for the programs they were already running. [ 8 ]
For instance, in the case of a firm that purchased an accounting system and was now looking to expand their computer support into engineering, this meant they could develop and test their engineering program on the machine they already used. If they ever needed more performance, they could purchase a machine with floating-point hardware, knowing that nothing else would change; it would simply get faster. Even the same peripherals could be used, allowing, for instance, data from the engineering system to be written to tape and then printed using a high-speed line printer already connected to their accounting system. Or they might replace the accounting system outright with a system with the performance to run both tasks. [ 8 ]
The idea that a single design could address all the myriad ways that the machines could be used gave rise to the name: "360" is a reference to the 360 degrees in a circle, and circles of machines and components featured prominently in IBM's advertising. [ 8 ]
IBM initially announced a series of six computers and forty common peripherals. IBM eventually delivered fourteen models, including rare one-off models for NASA . The least expensive model was the Model 20 with as little as 4096 bytes of core memory , eight 16-bit registers instead of the sixteen 32-bit registers of other System/360 models, and an instruction set that was a subset of that used by the rest of the range.
The initial announcement in 1964 included Models 30 , 40 , 50 , 60, 62, and 70. The first three were low- to middle-range systems aimed at the IBM 1400 series market. All three first shipped in mid-1965. The last three, intended to replace the 7000 series machines, never shipped and were replaced with the 65 and 75 , which were first delivered in November 1965, and January 1966, respectively.
Later additions to the low-end included models 20 (1966, mentioned above), 22 (1971), and 25 (1968). The Model 20 had several sub-models; sub-model 5 was at the higher end of the range. The Model 22 was a recycled Model 30 with minor limitations: a smaller maximum memory configuration, and slower I/O channels, which limited it to slower and lower-capacity disk and tape devices than on the 30.
The Model 44 (1966) was a specialized model, designed for scientific computing and for real-time computing and process control, featuring some additional instructions, and with all storage-to-storage instructions and five other complex instructions eliminated.
A succession of high-end machines included the Model 67 (1966, mentioned below, briefly anticipated as the 64 and 66 [ 11 ] ), 85 (1969), 91 (1967, anticipated as the 92), 95 (1968), and 195 (1971). The 85 design was intermediate between the System/360 line and the follow-on System/370 and was the basis for the 370/165. There was a System/370 version of the 195, but it did not include Dynamic Address Translation.
The implementations differed substantially, with different native data path widths and the presence or absence of microcode, yet they were extremely compatible. Except where specifically documented, the models were architecturally compatible. The 91 , for example, was designed for scientific computing and provided out-of-order instruction execution (and could yield "imprecise interrupts" if a program trap occurred while several instructions were being read), but lacked the decimal instruction set used in commercial applications. New features could be added without violating architectural definitions: the 65 had a dual-processor version (M65MP) with extensions for inter-CPU signalling; the 85 introduced cache memory. Models 44, 75, 91, 95, and 195 were implemented with hardwired logic, rather than microcoded as all other models were.
The Model 67 , announced in August 1965, was the first production IBM system to offer dynamic address translation (virtual memory) hardware to support time-sharing . "DAT" is now more commonly referred to as an MMU . An experimental one-off unit was built based on a model 40. Before the 67, IBM had announced models 64 and 66, DAT versions of the 60 and 62, but they were almost immediately replaced with the 67 at the same time that the 60 and 62 were replaced with the 65. DAT hardware would reappear in the S/370 series in 1972, though it was initially absent from the series. Like its close relative, the 65, the 67 also offered dual CPUs.
IBM stopped marketing all System/360 models by the end of 1977. [ 12 ]
IBM's existing customers had a large investment in software that ran on second-generation machines . Several System/360 models had the option of emulating the customer's existing computer using special hardware [ 13 ] and microcode , and an emulation program that enabled existing programs to run on the new machine.
Customers initially had to halt the computer and load the emulation program. [ 14 ] IBM later added features and modified emulator programs to allow emulation of the 1401, 1440, 1460, 1410 and 7010 under the control of an operating system.
The Model 85 and later System/370 maintained the precedent, retaining emulation options and allowing emulators to run under OS control alongside native programs. [ 15 ] [ 16 ]
System/360 (excepting the Models 20, 44 [ NB 7 ] and 67 [ NB 8 ] ) was replaced with the compatible System/370 range in 1970 and Model 20 users were targeted to move to the IBM System/3 . (The idea of a major breakthrough with FS technology was dropped in the mid-1970s for cost-effectiveness and continuity reasons.) Later compatible IBM systems include the 4300 family , the 308x family , the 3090 , the ES/9000 and 9672 families ( System/390 family), and the IBM Z series.
Computers that were mostly identical or compatible in terms of the machine code or architecture of the System/360 included Amdahl 's 470 family (and its successors), Hitachi mainframes, the UNIVAC 9000 series , [ 17 ] Fujitsu mainframes marketed as the Facom, the RCA Spectra 70 series, [ NB 9 ] and the English Electric System 4 . [ NB 10 ] The System 4 machines were built under license from RCA. RCA sold the Spectra series to what was then UNIVAC , where they became the UNIVAC Series 70. UNIVAC also developed the UNIVAC Series 90 as successors to the 9000 series and Series 70. [ 17 ] The Soviet Union produced a System/360 clone named the ES EVM . [ 18 ]
The IBM 5100 portable computer, introduced in 1975, offered an option to execute the System/360's APL.SV programming language through a hardware emulator. IBM used this approach to avoid the costs and delay of creating a 5100-specific version of APL.
Special radiation-hardened and otherwise somewhat modified System/360s, in the form of the System/4 Pi avionics computer, are used in several fighter and bomber jet aircraft. In the complete 32-bit AP-101 version, 4 Pi machines were used as the replicated computing nodes of the fault-tolerant Space Shuttle computer system (in five nodes). The U.S. Federal Aviation Administration operated the IBM 9020 , a special cluster of modified System/360s for air traffic control, from 1970 until the 1990s. (Some 9020 software is apparently still used via emulation on newer hardware. [ citation needed ] )
The System/360 introduced a number of industry standards to the marketplace, such as:
The System/360 series computer architecture specification makes no assumptions on the implementation itself, but rather describes the interfaces and expected behavior of an implementation. [ 38 ] [ 39 ] [ 40 ] The architecture describes mandatory interfaces that must be available on all implementations, and optional interfaces. Some aspects of this architecture are:
Some of the optional features are:
All models of System/360, except for the Model 20 and Model 44, implemented that specification.
Binary arithmetic and logical operations are performed as register-to-register and as memory-to-register/register-to-memory as a standard feature. If the Commercial Instruction Set option was installed, packed decimal arithmetic could be performed as memory-to-memory with some memory-to-register operations. The Scientific Instruction Set feature, if installed, provided access to four floating-point registers that could be programmed for either 32-bit or 64-bit floating-point operations. The Models 85 and 195 could also operate on 128-bit extended-precision floating-point numbers stored in pairs of floating-point registers, and software provided emulation in other models. The System/360 used an 8-bit byte, 32-bit word, 64-bit double-word, and 4-bit nibble . Machine instructions had operators with operands, which could contain register numbers or memory addresses. This complex combination of instruction options resulted in a variety of instruction lengths and formats.
Memory addressing was accomplished using a base-plus-displacement scheme, with registers 1 through F (15). A displacement was encoded in 12 bits, thus allowing a 4096-byte displacement (0–4095), as the offset from the address put in a base register.
Register 0 could not be used as a base register, an index register, or a branch address register, because "0" in those fields was reserved to indicate an address in the first 4 KB of memory. That is, if register 0 was specified as a base or index register, the value 0x00000000 was implicitly used in the effective address calculation in place of whatever value register 0 actually contained; if register 0 was specified as a branch address register, no branch was taken and the content of register 0 was ignored, but any side effect of the instruction was still performed.
This specific behavior permitted the initial execution of interrupt routines, since base registers would not necessarily be set during the first few instruction cycles of an interrupt routine. It is not needed for IPL ("Initial Program Load", or boot), as one can always clear a register without the need to save it.
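The base-plus-displacement calculation, together with the register-0 convention just described, can be modeled in a few lines of Python. This is only an illustrative sketch, not IBM code; the register contents and field values shown are hypothetical.

    def effective_address(regs, base, displacement, index=0):
        """Compute a System/360 effective address (illustrative model).

        regs         -- list of 16 general-register values
        base, index  -- register numbers 0-15; 0 means "no register" and
                        contributes 0 regardless of what register 0 holds
        displacement -- 12-bit value, 0-4095
        """
        assert 0 <= displacement <= 0xFFF
        b = regs[base] if base != 0 else 0
        x = regs[index] if index != 0 else 0
        return (b + x + displacement) & 0xFFFFFF  # addresses are 24 bits wide

    # Hypothetical register contents: base register 7 points at 0x012000.
    regs = [0] * 16
    regs[7] = 0x012000
    print(hex(effective_address(regs, base=7, displacement=0x0A0)))  # 0x120a0
    print(hex(effective_address(regs, base=0, displacement=0x050)))  # 0x50, in the first 4 KB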
With the exception of the Model 67, [ 29 ] all addresses were real memory addresses. Virtual memory was not available in most IBM mainframes until the System/370 series. The Model 67 introduced a virtual memory architecture, which MTS , CP-67 , and TSS/360 used—but not IBM's mainline System/360 operating systems.
The System/360 machine-code instructions are 2 bytes long (no memory operands), 4 bytes long (one operand), or 6 bytes long (two operands). Instructions are always situated on 2-byte boundaries.
Operations like MVC (Move-Characters) (Hex: D2) can move at most 256 bytes of information. Moving more than 256 bytes of data required multiple MVC operations. (The System/370 series introduced a family of more powerful instructions such as the MVCL "Move-Characters-Long" instruction, which supports moving up to 16 MB as a single block.)
An operand is two bytes long, typically representing an address as a 4-bit nibble denoting a base register and a 12-bit displacement relative to the contents of that register, in the range 000–FFF (shown here as hexadecimal numbers). The address corresponding to that operand is the contents of the specified general-purpose register plus the displacement. For example, an MVC instruction that moves 256 bytes (with length code 255 in hexadecimal as FF ) from base register 7, plus displacement 000 , to base register 8, plus displacement 001 , would be coded as the 6-byte instruction " D2FF 8001 7000 " (operator/length/address1/address2).
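The six-byte MVC encoding in the example above can be reproduced with a short Python sketch. The helper name and checks are illustrative only; the byte layout (opcode, length code, then the two base/displacement operands) follows the description in this section.

    def assemble_mvc(length, b1, d1, b2, d2):
        # length: number of bytes to move, 1-256 (encoded as length-1)
        # b1, d1: destination base register and 12-bit displacement
        # b2, d2: source base register and 12-bit displacement
        assert 1 <= length <= 256 and d1 <= 0xFFF and d2 <= 0xFFF
        return bytes([
            0xD2,                               # MVC opcode
            length - 1,                         # length code: 0xFF means 256 bytes
            (b1 << 4) | (d1 >> 8), d1 & 0xFF,   # first (destination) operand
            (b2 << 4) | (d2 >> 8), d2 & 0xFF,   # second (source) operand
        ])

    # Reproduces the example from the text: 256 bytes from base 7 + 000 to base 8 + 001.
    print(assemble_mvc(256, b1=8, d1=0x001, b2=7, d2=0x000).hex().upper())  # D2FF80017000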
The System/360 was designed to separate the system state from the problem state . This provided a basic level of security and recoverability from programming errors. Problem (user) programs could not modify data or program storage associated with the system state. Addressing, data, or operation exception errors made the machine enter the system state through a controlled routine so the operating system could try to correct or terminate the program in error. Similarly, it could recover certain processor hardware errors through the machine check routines.
Peripherals interfaced to the system via channels . A channel is a specialized processor with the instruction set optimized for transferring data between a peripheral and main memory. In modern terms, this could be compared to direct memory access (DMA). The S/360 connects channels to control units with bus and tag cables; IBM eventually replaced these with Enterprise Systems Connection (ESCON) and Fibre Connection (FICON) channels, but well after the S/360 era.
There were initially two types of channels: byte-multiplexer channels (known at the time simply as "multiplexor channels"), for connecting "slow speed" devices such as card readers and punches, line printers , and communications controllers, and selector channels for connecting high speed devices, such as disk drives , tape drives , data cells and drums . Every System/360 (except for the Model 20, which was not a standard 360) has a byte-multiplexer channel and one or more selector channels, though the model 25 has just one channel, which can be either a byte-multiplexor or selector channel. The smaller models (up to the model 50) have integrated channels, while for the larger models (model 65 and above) the channels are large separate units in separate cabinets: the IBM 2870 provides the byte-multiplexor channel with up to four selector sub-channels, and the IBM 2860 provides up to three selector channels.
The byte-multiplexer channel is able to handle I/O to/from several devices simultaneously at the device's highest rated speeds, hence the name, as it multiplexed I/O from those devices onto a single data path to main memory. Devices connected to a byte-multiplexer channel are configured to operate in 1-byte, 2-byte, 4-byte, or "burst" mode. The larger "blocks" of data are used to handle progressively faster devices. For example, a 2501 card reader operating at 600 cards per minute would be in 1-byte mode, while a 1403-N1 printer would be in burst mode. Also, the byte-multiplexer channels on larger models have an optional selector subchannel section that would accommodate tape drives. The byte-multiplexor's channel address was typically "0" and the selector subchannel addresses were from "C0" to "FF." Thus, tape drives on System/360 were commonly addressed at 0C0–0C7. Other common byte-multiplexer addresses are: 00A: 2501 Card Reader, 00C/00D: 2540 Reader/Punch, 00E/00F: 1403-N1 Printers, 010–013: 3211 Printers, 020–0BF: 2701/2703 Telecommunications Units. These addresses are still commonly used in z/VM virtual machines.
System/360 models 40 and 50 have an integrated 1052-7 console that is usually addressed as 01F; however, it was not connected to the byte-multiplexer channel but rather had a direct internal connection to the mainframe. The model 30 attached a different model of 1052 through a 1051 control unit. The models 60 through 75 also use the 1052–7.
Selector channels enabled I/O to high speed devices. These storage devices were attached to a control unit and then to the channel. The control unit let clusters of devices be attached to the channels. On higher speed models, multiple selector channels, which could operate simultaneously or in parallel, improved overall performance.
Control units are connected to the channels with "bus and tag" cable pairs. The bus cables carried the address and data information and the tag cables identified what data was on the bus. The general configuration of a channel is to connect the devices in a chain, like this: Mainframe—Control Unit X—Control Unit Y—Control Unit Z. Each control unit is assigned a "capture range" of addresses that it services. For example, control unit X might capture addresses 40–4F, control unit Y: C0–DF, and control unit Z: 80–9F. Capture ranges had to be a multiple of 8, 16, 32, 64, or 128 devices and be aligned on appropriate boundaries. Each control unit in turn has one or more devices attached to it. For example, control unit Y might have six disks attached, which would be addressed as C0–C5.
There are three general types of bus-and-tag cables produced by IBM. The first is the standard gray bus-and-tag cable, followed by the blue bus-and-tag cable, and finally the tan bus-and-tag cable. Generally, newer cable revisions are capable of higher speeds or longer distances, and some peripherals specified minimum cable revisions both upstream and downstream.
The cable ordering of the control units on the channel is also significant. Each control unit is "strapped" as High or Low priority. When a device selection was sent out on a mainframe's channel, the selection was sent from X->Y->Z->Y->X. If the control unit was "high" then the selection was checked in the outbound direction, if "low" then the inbound direction. Thus, control unit X was either 1st or 5th, Y was either 2nd or 4th, and Z was 3rd in line. It is also possible to have multiple channels attached to a control unit from the same or multiple mainframes, thus providing a rich high-performance, multiple-access, and backup capability.
Typically the total cable length of a channel is limited to 200 feet, less being preferred. Each control unit accounts for about 10 "feet" of the 200-foot limit.
IBM first introduced a new type of I/O channel on the Model 85 and Model 195, the 2880 block multiplexer channel, and then made them standard on the System/370 . This channel allowed a device to suspend a channel program, pending the completion of an I/O operation and thus to free the channel for use by another device. A block multiplexer channel can support either standard 1.5 MB/s connections or, with the 2-byte interface feature, 3 MB/s; the latter use one tag cable and two bus cables. On the S/370 there is an option for a 3.0 MB/s data streaming [ 41 ] channel with one bus cable and one tag cable.
The initial use for this was the 2305 fixed-head disk, which has 8 "exposures" (alias addresses) and rotational position sensing (RPS).
Block multiplexer channels can operate as a selector channel to allow compatible attachment of legacy subsystems. [ 42 ]
Being uncertain of the reliability and availability of the then new monolithic integrated circuits , IBM chose instead to design and manufacture its own custom hybrid integrated circuits . These were built on 11 mm square ceramic substrates. Resistors were silk screened on and discrete glass encapsulated transistors and diodes were added. The substrate was then covered with a metal lid or encapsulated in plastic to create a " Solid Logic Technology " (SLT) module.
A number of these SLT modules were then flip chip mounted onto a small multi-layer printed circuit "SLT card". Each card had one or two sockets on one edge that plugged onto pins on one of the computer's "SLT boards" (also referred to as a backplane). This was the reverse of how most other companies' cards were mounted, where the cards had pins or printed contact areas and plugged into sockets on the computer's boards.
Up to twenty SLT boards could be assembled side-by-side (vertically and horizontally, max 4 high by 5 wide) to form a "logic gate". Several gates mounted together constituted a box-shaped "logic frame". The outer gates were generally hinged along one vertical edge so they could be swung open to provide access to the fixed inner gates. The larger machines could have more than one frame bolted together to produce the final unit, such as a multi-frame Central Processing Unit (CPU).
The smaller System/360 models used the Basic Operating System/360 ( BOS/360 ), Tape Operating System (TOS/360), or Disk Operating System/360 ( DOS/360 , which evolved into DOS/VS, DOS/VSE, VSE/AF, VSE/SP, VSE/ESA, and then z/VSE ).
The larger models used Operating System/360 (OS/360). IBM developed several levels of OS/360, with increasingly powerful features: Primary Control Program (PCP), Multiprogramming with a Fixed number of Tasks (MFT), and Multiprogramming with a Variable number of Tasks (MVT). MVT took a long time to develop into a usable system, and the less ambitious MFT was widely used. PCP was used on intermediate machines too small to run MFT well, and on larger machines before MFT was available; the final releases of OS/360 included only MFT and MVT. For the System/370 and later machines, MFT evolved into OS/VS1 , while MVT evolved into OS/VS2 (SVS) (Single Virtual Storage), then various versions of MVS (Multiple Virtual Storage) culminating in the current z/OS .
When it announced the Model 67 in August 1965, IBM also announced TSS/360 (Time-Sharing System) for delivery at the same time as the 67. TSS/360, a response to Multics , was an ambitious project that included many advanced features. It had performance problems, was delayed, canceled, reinstated, and finally canceled [ NB 14 ] again in 1971. Customers migrated to CP-67 , MTS ( Michigan Terminal System ), TSO ( Time Sharing Option for OS/360), or one of several other time-sharing systems.
CP-67, the original virtual machine system, was also known as CP/CMS . CP/67 was developed outside the IBM mainstream at IBM's Cambridge Scientific Center , in cooperation with MIT researchers. CP/CMS eventually won wide acceptance, and led to the development of VM/370 (Virtual Machine) which had a primary interactive "sub" operating system known as VM/CMS (Conversational Monitoring System). This evolved into today's z/VM .
The Model 20 offered a simplified and rarely used tape-based system called TPS (Tape Processing System), and DPS (Disk Processing System) that provided support for the 2311 disk drive. TPS could run on a machine with 8 KB of memory; DPS required 12 KB, a large configuration for a Model 20. Many customers ran satisfactorily with 4 KB and CPS (Card Processing System). With TPS and DPS, the card reader was used to read the Job Control Language cards that defined the stack of jobs to run and to read in transaction data such as customer payments. The operating system was held on tape or disk, and results could also be stored on the tapes or hard drives. Stacked job processing thus became practical even for small installations.
A little-known and little-used suite of 80-column punched-card utility programs known as Basic Programming Support (BPS) (jocularly: Barely Programming Support), a precursor of TOS, was available for smaller systems.
IBM created a new naming system for the new components created for System/360, although well-known old names, like IBM 1403 and IBM 1052 , were retained. In this new naming system, components were given four-digit numbers starting with 2. The second digit described the type of component, as follows:
IBM developed a new family of peripheral equipment for System/360, carrying over a few from its older 1400 series. Interfaces were standardized, allowing greater flexibility to mix and match processors, controllers and peripherals than in the earlier product lines.
In addition, System/360 computers could use certain peripherals that were originally developed for earlier computers. These earlier peripherals used a different numbering system, such as the IBM 1403 chain printer. The 1403, an extremely reliable device that had already earned a reputation as a workhorse, was sold as the 1403-N1 when adapted for the System/360.
Also available were the optical character recognition (OCR) readers IBM 1287 and IBM 1288, which could read alphanumeric (A/N) and numeric hand-printed (NHP/NHW) characters from documents ranging from cashier's rolls of tape to full legal-size pages. At the time this was done with very large optical/logic readers, as software-based recognition was too slow and expensive.
Models 65 and below were sold with an IBM 1052–7 as the console typewriter. The 360/85 with feature 5450 uses a display console that was not compatible with anything else in the line; [ 43 ] [ 44 ] the later 3066 console for the 370/165 and 370/168 uses the same basic display design as the 360/85.
The IBM System/360 models 91 and 195 use a graphical display similar to the IBM 2250 as their primary console.
Additional operator consoles were also available. Certain high-end machines could optionally be purchased with a 2250 graphical display, costing upwards of US$100,000; smaller machines could use the less expensive 2260 display or later the 3270 .
The first disk drives for System/360 were IBM 2302s [ 45 ] : 60–65 and IBM 2311s . [ 45 ] : 54–58 The first drum for System/360 was the IBM 7320 . [ 46 ] [ 47 ] : 41
The 156 kbit/s 2302 was based on the earlier 1302 and was available as a model 3 with two 112.79 MB modules [ 45 ] : 60 or as a model 4 with four such modules. [ 45 ] : 60
The 2311, with a removable 1316 disk pack , was based on the IBM 1311 and had a theoretical capacity of 7.2 MB, although actual capacity varied with record design. [ 47 ] : 31 (When used with a 360/20, the 1316 pack was formatted into fixed-length 270 byte sectors , giving a maximum capacity of 5.4MB.)
In 1966, the first 2314s shipped. This device had up to eight usable disk drives with an integral control unit; there were nine drives, but one was reserved as a spare. Each drive used a removable 2316 disk pack with a capacity of nearly 28 MB. The disk packs for the 2311 and 2314 were physically large by today's standards — e.g., the 1316 disk pack was about 14 in (36 cm) in diameter and had six platters stacked on a central spindle. The top and bottom outside platters did not store data. Data were recorded on the inner sides of the top and bottom platters and both sides of the inner platters, providing 10 recording surfaces. The 10 read/write heads moved together across the surfaces of the platters, which were formatted with 203 concentric tracks. To reduce the amount of head movement (seeking), data was written in a virtual cylinder from inside top platter down to inside bottom platter. These disks were not usually formatted with fixed-sized sectors as are today's hard drives (though this was done with CP/CMS ). Rather, most System/360 I/O software could customize the length of the data record (variable-length records), as was the case with magnetic tapes.
Some of the most powerful early System/360s used high-speed head-per-track drum storage devices. The 3,500 RPM 2301, [ 48 ] which replaced the 7320, was part of the original System/360 announcement, with a capacity of 4 MB. The 303.8 kbit/s IBM 2303 [ 45 ] : 74–76 was announced on January 31, 1966, with a capacity of 3.913 MB. These were the only drums announced for System/360 and System/370, and their niche was later filled by fixed-head disks.
The 6,000 RPM 2305 appeared in 1970, with capacities of 5 MB (2305–1) or 11 MB (2305–2) per module. [ 49 ] [ 50 ] Although these devices did not have large capacity, their speed and transfer rates made them attractive for high-performance needs. A typical use was overlay linkage (e.g. for OS and application subroutines) for program sections written to alternate in the same memory regions. Fixed-head disks and drums were particularly effective as paging devices on the early virtual memory systems. The 2305, although often called a "drum" was actually a head-per-track disk device, with 12 recording surfaces and a data transfer rate up to 3 MB/s.
Rarely seen was the IBM 2321 Data Cell , [ 51 ] a mechanically complex device that contained multiple magnetic strips to hold data; strips could be randomly accessed, placed upon a cylinder-shaped drum for read/write operations, then returned to an internal storage cartridge. The IBM Data Cell (informally nicknamed the "noodle picker") was among several IBM trademarked "speedy" mass online direct-access storage peripherals (reincarnated in recent years as "virtual tape" and automated tape librarian peripherals). The 2321 file had a capacity of 400 MB, at a time when the 2311 disk drive held only 7.2 MB. The IBM Data Cell was intended to fill the cost/capacity/speed gap between magnetic tapes—which had high capacity with relatively low cost per stored byte—and disks, which had a higher cost per byte. Some installations also found the electromechanical operation less dependable and opted for less mechanical forms of direct-access storage.
The Model 44 was unique in offering an integrated single-disk drive as a standard feature. This drive used the 2315 "ramkit" cartridge and provided 1,171,200 bytes of storage. [ 30 ] : 11
The 2400-series of 1/2" magnetic tape units consisted of the 2401 and 2402 Models 1-6 Magnetic Tape Units, the 2403 Models 1-6 Magnetic Tape Unit and Control, the 2404 Models 1-3 Magnetic Tape Unit and Control, and the 2803/2804 Models 1 and 2 Tape Control Units. [ 52 ] The later 2415 Magnetic Tape Unit and Control, introduced in 1967, contained two, four, or six tape drives and a control in a single unit, and was slower and cheaper. [ 53 ] The 2415 drives and control were not marketed separately. [ 54 ] With System/360, IBM switched from IBM 7-track to 9-track tape format. Some 2400-series drives could be purchased that read and wrote 7-track tapes for compatibility with the older IBM 729 tape drives. In 1968, the IBM 2420 tape system was released, offering much higher data rates, self-threading tape operation and 1600 bpi packing density. [ 55 ] It remained in the product line until 1979. [ 56 ]
Despite having been sold or leased in very large numbers for a mainframe system of its era, only a few System/360 computers remain—mainly as non-operating property of museums or collectors. Examples of existing systems include:
A running list of remaining System/360s that are more than just 'front panels' can be found at World Inventory of remaining System/360 CPUs .
This gallery shows the operator's console , with register value lamps, toggle switches (middle of pictures), and " emergency pull " switch (upper right of pictures) of the various models. | https://en.wikipedia.org/wiki/IBM_System/360 |
The IBM System/360 Model 195 is a discontinued IBM computer introduced on August 20, 1969. The Model 195 was a reimplementation of the IBM System/360 Model 91 design using monolithic integrated circuits . [ 1 ] It offers "an internal processing speed about twice as fast as the Model 85 , the next most powerful System/360". [ 2 ] The Model 195 was discontinued on February 9, 1977, the same date as the System/370 Model 195.
About 20 Model 195 systems were produced. [ 3 ] [ 4 ]
The basic CPU cycle time is 54 nanoseconds (ns). The system has a high degree of parallelism and can process up to seven operations at a time.
The system can be configured with 1, 2, or 4 MB of magnetic core memory (models 195J, 195K, and 195L) with a cycle time of 756 ns. A 32 KB cache , called a buffer memory in the IBM announcement, is standard. Memory blocks are brought into cache in units of 64 bytes. [ 2 ]
The normal operating system for the Model 195 is OS/360 Multiprogramming with a Variable Number of Tasks (MVT) .
The 360/195 has the following components: [ 5 ]
The Model 195 was later updated as the IBM System/370 Model 195 with the new System/370 instructions and the 370 time-of-day clock and control registers, but without the virtual memory hardware. [ 6 ] | https://en.wikipedia.org/wiki/IBM_System/360_Model_195 |
The IBM System/360 Model 67 ( S/360-67 ) was an important IBM mainframe model in the late 1960s. [ 1 ] Unlike the rest of the S/360 series, it included features to facilitate time-sharing applications, notably a Dynamic Address Translation unit , the "DAT box", to support virtual memory , 32-bit addressing and the 2846 Channel Controller to allow sharing channels between processors. The S/360-67 was otherwise compatible with the rest of the S/360 series.
The S/360-67 was intended to satisfy the needs of key time-sharing customers, notably MIT (where Project MAC had become a notorious IBM sales failure), the University of Michigan , General Motors , Bell Labs , Princeton University , the Carnegie Institute of Technology (later Carnegie Mellon University ), [ 2 ] and the Naval Postgraduate School . [ 3 ]
In the mid-1960s a number of organizations were interested in offering interactive computing services using time-sharing . [ 4 ] At that time the work that computers could perform was limited by their lack of real memory storage capacity. When IBM introduced its System/360 family of computers in the mid-1960s, it did not provide a solution for this limitation and within IBM there were conflicting views about the importance of time-sharing and the need to support it.
A paper titled Program and Addressing Structure in a Time-Sharing Environment by Bruce Arden , Bernard Galler , Frank Westervelt (all associate directors at the University of Michigan's academic Computing Center), and Tom O'Brian building upon some basic ideas developed at the Massachusetts Institute of Technology (MIT) was published in January 1966. [ 5 ] The paper outlined a virtual memory architecture using dynamic address translation (DAT) that could be used to implement time-sharing.
After a year of negotiations and design studies, IBM agreed to make a one-of-a-kind version of its S/360-65 mainframe computer for the University of Michigan. The S/360-65M [ 4 ] would include dynamic address translation (DAT) features that would support virtual memory and allow support for time-sharing. Initially IBM decided not to supply a time-sharing operating system for the new machine.
As other organizations heard about the project they were intrigued by the time-sharing idea and expressed interest in ordering the modified IBM S/360 series machines. With this demonstrated interest IBM changed the computer's model number to S/360-67 and made it a supported product. When IBM realized there was a market for time-sharing, it agreed to develop a new time-sharing operating system called IBM Time Sharing System (TSS/360) for delivery at roughly the same time as the first model S/360-67.
The first S/360-67 was shipped in May 1966. The S/360-67 was withdrawn on March 15, 1977. [ 6 ]
Before the announcement of the Model 67, IBM had announced models 64 and 66, DAT versions of its 60 and 62 models, but they were almost immediately replaced by the 67 at the same time that the 60 and 62 were replaced by the 65. [ 7 ]
IBM announced the S/360-67 in its August 16, 1965 "blue letters" (a standard mechanism used by IBM to make product announcements). IBM stated that: [ 8 ]
The S/360-67 design added a component for implementing virtual memory, the "DAT box" (Dynamic Address Translation box). DAT on the 360/67 was based on the architecture outlined in a 1966 JACM paper by Arden, Galler, Westervelt, and O'Brien [ 5 ] and included both segment and page tables. The Model 67's virtual memory support was very similar to the virtual memory support that eventually became standard on the entire System/370 line.
The S/360-67 provided a 24- or 32-bit address space [ 1 ] – unlike the strictly 24-bit address space of other S/360 and early S/370 systems, and the 31-bit address space of S/370-XA available on later S/370s. The S/360-67 virtual address space was divided into pages (of 4096 bytes) [ 1 ] grouped into segments (of 1 million bytes); pages were dynamically mapped onto the processor's real memory. These S/360-67 features plus reference and change bits as part of the storage key enabled operating systems to implement demand paging : referencing a page that was not in memory caused a page fault , which in turn could be intercepted and processed by an operating system interrupt handler .
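A simplified sketch of that translation path, written in Python, may help make the mechanism concrete. The dictionaries stand in for the real in-storage segment and page tables, and the sizes follow the figures above (4096-byte pages, 1-million-byte segments); everything else is a hypothetical illustration.

    PAGE_SIZE     = 4096   # bytes per page
    PAGES_PER_SEG = 256    # a 1 MB segment holds 256 such pages

    def translate(virtual_addr, segment_table):
        """Translate a virtual address roughly the way a DAT unit does.

        segment_table: {segment_index: page_table}; each page table is
        {page_index: real_page_frame_address}.
        """
        offset        = virtual_addr & 0xFFF          # low 12 bits
        page_index    = (virtual_addr >> 12) & 0xFF   # next 8 bits
        segment_index = virtual_addr >> 20            # remaining high bits

        page_table = segment_table.get(segment_index)
        if page_table is None or page_index not in page_table:
            raise LookupError("page fault")           # the OS would page in and retry
        return page_table[page_index] + offset

    # Hypothetical mapping: page 2 of segment 1 resides in the real frame at 0x054000.
    segtab = {1: {2: 0x054000}}
    print(hex(translate(0x102345, segtab)))   # segment 1, page 2, offset 0x345 -> 0x54345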
The S/360-67's virtual memory system was capable of meeting three distinct goals:
The first goal removed (for decades, at least) a crushing limitation of earlier machines: running out of physical storage. The second enabled substantial improvements in security and reliability. The third enabled the implementation of true virtual machines . Contemporary documents make it clear that full hardware virtualization and virtual machines were not original design goals for the S/360-67.
The S/360-67 included the following extensions in addition to the standard and optional features available on all S/360 systems: [ 1 ]
The S/360-67 operated with a basic internal cycle time of 200 nanoseconds and a basic 750 nanosecond magnetic core storage cycle, the same as the S/360-65. [ 1 ] The 200 ns cycle time put the S/360-67 in the middle of the S/360 line, between the Model 30 at the low end and the Model 195 at the high end. From 1 to 8 bytes (8 data bits and 1 parity bit per byte) could be read or written to processor storage in a single cycle. A 60-bit parallel adder facilitated handling of long fractions in floating-point operations. An 8-bit serial adder enabled simultaneous execution of floating point exponent arithmetic, and also handled decimal arithmetic and variable field length (VFL) instructions.
Four new components were part of the S/360-67:
These components, together with the 2365 Processor Storage Model 2, 2860 Selector Channel, 2870 Multiplexer Channel, and other System/360 control units and devices were available for use with the S/360-67.
Note that while Carnegie Tech had a 360/67 with an IBM 2361 LCS, that option was not listed in the price book and may not have worked in a duplex configuration.
Three basic configurations were available for the IBM System/360 model 67:
A half-duplex system could be upgraded in the field to a duplex system by adding one IBM 2067-2 processor and the third IBM 2365-12 Processor Storage, unless the half-duplex system already had three or more. The half-duplex and duplex configurations were called the IBM System/360 model 67–2.
When the S/360-67 was announced in August 1965, IBM also announced TSS/360 , a time-sharing operating system project that was canceled in 1971 (having also been canceled in 1968, but reprieved in 1969). IBM subsequently modified TSS/360 and offered the TSS/370 PRPQ [ 11 ] for three releases before cancelling it.
IBM's failure to deliver TSS/360 as promised opened the door for others to develop operating systems that would use the unique features of the S/360-67.
MTS, the Michigan Terminal System , was the time-sharing operating system developed at the University of Michigan and first used on the Model 67 in January 1967. Virtual memory support was added to MTS in October 1967. Multi-processor support for a duplex S/360-67 was added in October 1968. [ 12 ]
CP/CMS was the first virtual machine operating system. Developed at IBM's Cambridge Scientific Center (CSC) near MIT, CP/CMS was essentially an unsupported research system, built away from IBM's mainstream product organizations, with active involvement of outside researchers. Over time it evolved into a fully supported IBM operating system ( VM/370 and today's z/VM ). VP/CSS , based upon CP/CMS, was developed by National CSS to provide commercial time-sharing services.
The S/360-67 had an important legacy. After the failure of TSS/360 , IBM was surprised by the blossoming of a time-sharing community on the S/360-67 platform ( CP/CMS , MTS , MUSIC ). A large number of commercial, academic, and service bureau sites installed the system. By taking advantage of IBM's lukewarm support for time-sharing, and by sharing information and resources (including source code modifications), they built and supported a generation of time-sharing centers.
The unique features of the S/360-67 were initially not carried into IBM's next product series, the System/370 , although the 370/145 had an associative memory that appeared more useful for paging than for its ostensible purpose. [ 13 ] This was largely fallout from a bitter and highly visible political battle within IBM over the merits of time-sharing versus batch processing . Initially at least, time-sharing lost.
However, IBM faced increasing customer demand for time-sharing and virtual memory capabilities. IBM also could not ignore the large number of S/360-67 time-sharing installations – including the new industry of time-sharing vendors, such as National CSS [ 14 ] [ 15 ] and Interactive Data Corporation (IDC), [ 16 ] that were quickly achieving commercial success.
In 1972, IBM added virtual memory features to the S/370 series, a move seen by many as a vindication of work done on the S/360-67 project; the microcode in the 370/145 was updated to use the associative memory for virtual address translation. [ 17 ] The survival and success of IBM's VM family, and of virtualization technology in general, also owe much to the S/360-67.
In 2010, in the technical description of its latest mainframe, the z196 , IBM stated that its software virtualization started with the System/360 model 67. [ 18 ] | https://en.wikipedia.org/wiki/IBM_System/360_Model_67 |
The IBM System/360 Model 91 was announced in 1964 as a competitor to the CDC 6600 . [ 1 ] Functionally, the Model 91 ran like any other large-scale System/360 , but the internal organization was the most advanced of the System/360 line, and it was the first IBM computer to support out-of-order instruction execution . [ 2 ] It ran OS/360 as its operating system. It was designed to handle high-speed data processing for scientific applications. This included space exploration , theoretical astronomy , sub-atomic physics and global weather forecasting . [ 3 ]
The first Model 91 was used at the NASA Goddard Space Flight Center in 1968 and at the time was the most powerful computer in user operation. It was capable of executing up to 16.6 million instructions per second, [ 3 ] making it roughly equivalent to an Intel 80486SX-20 MHz CPU or AMD 80386DX-40 MHz CPU in MIPS performance.
The CPU consisted of five autonomous units: instruction, floating-point, fixed-point, and two storage controllers for the overlapping memory units and the I/O data channels. The floating-point unit made heavy use of instruction pipelining [ 4 ] and was the first implementation of Tomasulo's algorithm . [ citation needed ] It was also one of the first computers to utilize multi-channel memory architecture .
Castells-Rufas et al. reported that the 360/91 used 74 kW of power. [ 5 ]
There were four models of the IBM System/360 Model 91. [ 6 ] They differed by their main memory configuration, all using IBM's 2395 Processor Storage .
The 91K had 2 MB, using one 2395 Model 1.
Both the 91KK and the 91L came with 4 MB of main memory: the former used a pair of 2395 Model 1s, the latter a single 2395 Model 2.
The 6 MB 91KL was equipped with one IBM 2395 Model 1 and one Model 2.
There were only 15 Model 91s ever produced, four of which were for IBM's internal use. [ 7 ] After quoting from Pugh et al, William H. Blair says "Many disagree on the number of 360/91s that IBM built or sold. I have read and heard it authoritatively stated that the number was 10, 11, 12, 14, 15, or 20." As for those delivered to customers, "a 360/85 was delivered from when a 91 was ordered until it was ready." [ 8 ] [ 9 ]
Because of the emphasis on speed, there were some minor differences in the system's behaviour: [ 10 ]
IBM had a long history with NASA including the use of IBM components on crewed space flights such as the IBM ASC-15 on Saturn 1 , the IBM ASC-15B on the Titan Family , IBM GDC on Gemini , IBM LVDC on Saturn 1B/5 , IBM System/4 Pi -EP on the MOL , and the IBM System/4 Pi-TC 1 on the Apollo Telescope Mount and Skylab . [ 11 ]
The Model 91 was shipped 9 months late to the Goddard Space Flight Center in October 1967 and did not begin regular operations until January 1968 after it passed the federal government operations testing. [ 7 ]
The Model 95 was a variant of the Model 91 with 1 megabyte of thin-film memory and 4 megabytes of core memory. [ 12 ] [ 7 ] NASA acquired the only two 360/95s ever built. [ 12 ] [ 13 ] [ 7 ]
The console of the Model 95, for which no Functional Characteristics manuals exist, was identical to that of the 360/91. [ 9 ]
In 1971, UCLA used an IBM 360/91 to provide "production computing services" to ARPANET . The services it provided included job submittal, a "mailbox" system and FTP . [ 14 ]
There is a Model 91 Panel that is currently on display at the Living Computer Museum in Seattle, Washington that was borrowed and featured in the movie Tomorrowland (2015) . | https://en.wikipedia.org/wiki/IBM_System/360_Model_91 |
The IBM System/360 architecture is the model independent architecture for the entire S/360 line of mainframe computers , including but not limited to the instruction set architecture . The elements of the architecture are documented in the IBM System/360 Principles of Operation [ 1 ] [ 2 ] and the IBM System/360 I/O Interface Channel to Control Unit Original Equipment Manufacturers' Information manuals. [ 3 ]
The System/360 architecture provides the following features:
Memory ( storage ) in System/360 is addressed in terms of 8-bit bytes. Various instructions operate on larger units called halfword (2 bytes), fullword (4 bytes), doubleword (8 bytes), quad word (16 bytes) and 2048 byte storage block, specifying the leftmost (lowest address) of the unit. Within a halfword, fullword, doubleword or quadword, low numbered bytes are more significant than high numbered bytes; this is sometimes referred to as big-endian . Many uses for these units require aligning them on the corresponding boundaries. Within this article the unqualified term word refers to a fullword .
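The big-endian layout can be seen in a few lines of Python; the value used is arbitrary.

    value = 0x12345678
    word  = value.to_bytes(4, "big")            # a fullword as stored: high-order byte first
    print(word.hex())                           # 12345678
    print(int.from_bytes(word, "big") == value) # True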
The original architecture of System/360 provided for up to 2^24 = 16,777,216 bytes of memory. The later Model 67 extended the architecture to allow up to 2^32 = 4,294,967,296 [ a ] bytes of virtual memory.
System/360 uses truncated addressing similar to that of the UNIVAC III . [ 8 ] That means that instructions do not contain complete addresses, but rather specify a base register and a positive offset from the addresses in the base registers. In the case of System/360 the base address is contained in one of 15 [ b ] general registers. In some instructions, for example shifts, the same computations are performed for 32-bit quantities that are not addresses.
The S/360 architecture defines formats for characters, integers, decimal integers and hexadecimal floating point numbers. Character and integer instructions are mandatory, but decimal and floating point instructions are part of the Decimal arithmetic and Floating-point arithmetic features.
Instructions in the S/360 are two, four or six bytes in length, with the opcode in byte 0. Instructions have one of the following formats:
Instructions must be on a two-byte boundary in memory; hence the low-order bit of the instruction address is always 0.
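The instruction length follows from the two high-order bits of the opcode (00 for two-byte RR instructions, 01 or 10 for four-byte instructions, 11 for six-byte SS instructions); this is the commonly documented rule, and the small Python sketch below simply restates it. The opcodes in the comments are well-known examples.

    def instruction_length(opcode):
        # 00 -> 2 bytes (RR), 01 or 10 -> 4 bytes (RX/RS/SI), 11 -> 6 bytes (SS)
        return {0b00: 2, 0b01: 4, 0b10: 4, 0b11: 6}[opcode >> 6]

    print(instruction_length(0x1A))  # AR, add register-to-register    -> 2
    print(instruction_length(0x5A))  # A, add from storage (RX format) -> 4
    print(instruction_length(0xD2))  # MVC, move characters (SS)       -> 6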
The Program Status Word ( PSW ) [ 2 ] : 71–72 contains a variety of controls for the currently operating program. The 64-bit PSW describes (among other things) the address of the current instruction being executed, condition code and interrupt masks.
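One commonly documented layout for those 64 bits is: system mask (bits 0-7), protection key (8-11), AMWP flags (12-15), interruption code (16-31), instruction length code (32-33), condition code (34-35), program mask (36-39), and instruction address (40-63). The Python decoder below assumes that layout and is purely illustrative.

    def decode_psw(psw):
        # IBM bit numbering: bit 0 is the most significant of the 64.
        return {
            "system_mask":    (psw >> 56) & 0xFF,    # bits 0-7
            "key":            (psw >> 52) & 0xF,     # bits 8-11
            "amwp":           (psw >> 48) & 0xF,     # bits 12-15 (ASCII, machine check, wait, problem)
            "interrupt_code": (psw >> 32) & 0xFFFF,  # bits 16-31
            "ilc":            (psw >> 30) & 0x3,     # bits 32-33
            "condition_code": (psw >> 28) & 0x3,     # bits 34-35
            "program_mask":   (psw >> 24) & 0xF,     # bits 36-39
            "address":         psw & 0xFFFFFF,       # bits 40-63
        }

    # A hypothetical PSW: problem state, condition code 2, next instruction at 0x012A00.
    psw = (0xFF << 56) | (0x1 << 48) | (2 << 28) | 0x012A00
    print(decode_psw(psw))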
Load Program Status Word ( LPSW ) is a privileged instruction that loads the Program Status Word (PSW), including the program mode, protection key, and the address of the next instruction to be executed. LPSW is most often used to "return" from an interruption by loading the "old" PSW which is associated with the interruption class. Other privileged instructions (e.g., SSM, STNSM, STOSM, SPKA, etcetera) are available for manipulating subsets of the PSW without causing an interruption or loading a PSW; and one non-privileged instruction (SPM) is available for manipulating the program mask.
The architecture [ 2 ] : 77–83 defines 5 classes of interruption . An interruption is a mechanism for automatically changing the program state; it is used for both synchronous [ e ] and asynchronous events.
There are two storage fields assigned to each class of interruption on the S/360; an old PSW double-word and a new PSW double-word. The processor stores the PSW, with an interruption code inserted, into the old PSW location and then loads the PSW from the new PSW location. This generally replaces the instruction address, thereby effecting a branch, and (optionally) sets and/or resets other fields within the PSW, thereby effecting a mode change.
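The exchange can be modeled in a few lines of Python. The storage addresses are passed in as parameters and the values used below are hypothetical; the sketch only shows the store-old, load-new sequence described above.

    def take_interruption(storage, psw, old_psw_addr, new_psw_addr, code):
        # Store the current PSW with the interruption code placed in bits 16-31 ...
        storage[old_psw_addr] = (psw & ~(0xFFFF << 32)) | ((code & 0xFFFF) << 32)
        # ... then resume with the new PSW assigned to this interruption class.
        return storage[new_psw_addr]

    storage = {0x60: 0x0008000000004000}   # hypothetical "new" PSW: handler entry at 0x4000
    running = 0x00000000000012A0           # hypothetical PSW of the interrupted program
    running = take_interruption(storage, running, old_psw_addr=0x20,
                                new_psw_addr=0x60, code=0x000A)
    print(hex(running))                    # the handler's PSW is now current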
The S/360 architecture defines a priority to each interruption class, but it is only relevant when two interruptions occur simultaneously; an interruption routine can be interrupted by any other enabled interruption, including another occurrence of the initial interruption. For this reason, it is normal practice to specify all of the mask bits, with the exception of machine-check mask bit, as 0 for the "first-level" interruption handlers. "Second-level" interruption handlers are generally designed for stacked interruptions (multiple occurrences of interruptions of the same interruption class).
An I/O interruption [ 15 ] occurs at the completion of a channel program, after fetching a CCW with the PCI bit set and also for asynchronous events detected by the device, control unit or channel, e.g., completion of a mechanical movement. The system stores the device address into the interruption code and stores channel status into the CSW at location 64 ('40'X).
A Program interruption [ 2 ] : 16, 79–80.1 occurs when an instruction encounters one [ f ] of 15 [ g ] exceptions; however, if the Program Mask bit corresponding to an exception is 0 then there is no interruption for that exception.
On 360/65, [ 21 ] : 12 360/67 [ 11 ] : 46 and 360/85 [ 9 ] : 12 the Protection Exception and Addressing Exception interruptions can be imprecise, in which case they store an Instruction Length Code of 0.
In addition to the standard exception codes, the interruption code may be one of the following model-dependent codes:
Imprecise interruption [ f ] on 360/91, [ 20 ] : 15 360/95 or 360/195 [ 10 ] : 14
Segment Translation [ 11 ] : 17 [ g ]
Page Translation [ 11 ] : 17 [ g ]
SSM Exception [ 21 ] [ g ]
A Supervisor Call interruption [ 17 ] occurs as the result of a Supervisor Call instruction ; the system stores bits 8-15 of the SVC instruction as the Interruption Code.
An External [ 26 ] [ k ] interruption occurs as the result of certain asynchronous events. Bits 16-23 of the External Old PSW are set to 0 and one or more of bits 24-31 is set to 1, identifying the source of the interruption.
A Machine Check interruption [ 19 ] occurs to report unusual conditions associated with the channel or CPU that cannot be reported by another class of interruption. The most important class of conditions causing a Machine Check is a hardware error such as a parity error found in registers or storage, but some models may use it to report less serious conditions. Both the interruption code and the data stored in the scanout area at '80'x (128 decimal) are model dependent.
This article describes I/O from the CPU perspective. It does not discuss the channel cable or connectors, which have a separate article ; there is a summary elsewhere and details can be found in the IBM literature [ 3 ] and in FIPS PUB 60-2. [ 27 ]
I/O is carried out by a conceptually separate processor called a channel. Channels have their own instruction set, and access memory independently of the program running on the CPU. On the smaller models (through 360/50 ) a single microcode engine runs both the CPU program and the channel program. On the larger models the channels are in separate cabinets and have their own interfaces to memory. A channel may contain multiple subchannels, each containing the status of an individual channel program. A subchannel associated with multiple devices that cannot concurrently have channel programs is referred to as shared ; a subchannel representing a single device is referred to as unshared .
There are three types of channels on the S/360:
Conceptually peripheral equipment is attached to a S/360 through control units , which in turn are attached through channels. However, the architecture does not require that control units be physically distinct, and in practice they are sometimes integrated with the devices that they control. Similarly, the architecture does not require the channels to be physically distinct from the processor, and the smaller S/360 models (through 360/50) have integrated channels that steal cycles from the processor.
Peripheral devices are addressed with 16-bit [ l ] addresses, [ 2 ] : 89 referred to as cua or cuu ; this article will use the term cuu . The high 8 bits identify a channel, numbered from 0 to 6, [ c ] while the low 8 bits identify a device on that channel. A device may have multiple cuu addresses.
Control units are assigned an address "capture" range. For example, a CU might be assigned range 20-2F or 40-7F. The purpose of this is to assist with the connection and prioritization of multiple control units to a channel. For example, a channel might have three disk control units at 20-2F, 50-5F, and 80-8F. Not all of the captured addresses need to have an assigned physical device. Each control unit is also marked as High or Low priority on the channel.
Device selection progresses from the channel to each control unit in the order they are physically attached to their channel. At the end of the chain the selection process continues in reverse back towards the channel. If the selection returns to the channel then no control unit accepted the command and SIO returns Condition Code 3. Control units marked as High Priority check the outbound CUU to be within their range. If so, then the I/O is processed. If not, then the selection is passed to the next outbound CU. Control units marked as Low Priority check for the inbound (returning) CUU to be within their range. If so, then the I/O is processed. If not, then the selection is passed to the next inbound CU (or the channel). The connection of three control units to a channel might be physically -A-B-C and, if all are marked as High, the priority would be ABC. If all are marked Low, the priority would be CBA. If B was marked High and A and C Low, the order would be BCA. Extending this line of reasoning, the first of N controllers would be priority 1 (High) or 2N-1 (Low), the second priority 2 or 2N-2, the third priority 3 or 2N-3, etc. The last physically attached would always be priority N.
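The out-and-back selection order can be simulated with a short Python sketch; the control-unit names, priorities, and address ranges below are hypothetical (the ranges reuse examples given earlier).

    def select_device(control_units, cuu):
        """Return the name of the control unit that captures the address, or None.

        control_units is the physically attached chain in cable order from the
        channel outward; each entry is (name, priority, (low, high)).
        """
        for direction, units in (("out", control_units), ("in", list(reversed(control_units)))):
            for name, priority, (lo, hi) in units:
                checks_now = (priority == "high") == (direction == "out")
                if checks_now and lo <= cuu <= hi:
                    return name
        return None   # selection returned to the channel: SIO ends with condition code 3

    # The -A-B-C chain from the text, with B strapped High and A, C strapped Low:
    chain = [("A", "low", (0x20, 0x2F)), ("B", "high", (0x50, 0x5F)), ("C", "low", (0x80, 0x8F))]
    print(select_device(chain, 0x52))   # B: captured on the outbound pass
    print(select_device(chain, 0x85))   # C: captured on the inbound pass
    print(select_device(chain, 0x10))   # None: no control unit responded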
There are three storage fields reserved for I/O; a double word I/O old PSW, a doubleword I/O new PSW and a fullword Channel Address Word ( CAW ). Performing an I/O normally requires the following:
A channel program consists of a sequence of Channel Control Words ( CCW s) chained together (see below.) Normally the channel fetches CCW s from consecutive doublewords, but a control unit can direct the channel to skip a CCW and a Transfer In Channel ( TIC ) CCW can direct the channel to start fetching CCW s from a new location.
There are several defined ways for a channel command to complete. Some of these allow the channel to continue fetching CCWs, while others terminate the channel program. In general, if the CCW does not have the chain-command bit set and is not a TIC, then the channel will terminate the I/O operation and cause an I/O interruption when the command completes. Certain status bits from the control unit suppress chaining.
The most common ways for a command to complete are for the count to be exhausted when chain-data is not set and for the control unit to signal that no more data transfers should be made. If Suppress-Length-Indication (SLI) is not set and one of those occurs without the other, chaining is not allowed. The most common situations that suppress chaining are unit-exception and unit-check. However, the combination of unit-check and status-modifier does not suppress chaining; rather, it causes the channel to do a command retry, reprocessing the same CCW.
In addition to the interruption signal sent to the CPU when an I/O operation is complete, a channel can also send a Program-Controlled interruption (PCI) to the CPU while the channel program is running, without terminating the operation, and a delayed device-end interruption after the I/O completion interruption.
These conditions are detected by the channel and indicated in the CSW . [ 28 ]
These conditions are presented to the channel by the control unit or device. [ 33 ] In some cases they are handled by the channel and in other cases they are indicated in the CSW . There is no distinction between conditions detected by the control unit and conditions detected by the device.
The fullword Channel Address Word [ 2 ] : 99 (CAW) contains a 4-bit storage protection key and a 24-bit address of the channel program to be started.
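Packing a CAW is a single shift-and-or; the sketch below assumes the four bits between the key and the address are zero, and the values used are hypothetical.

    def make_caw(key, ccw_address):
        # 4-bit protection key, (assumed) 4 zero bits, 24-bit address of the first CCW
        assert 0 <= key <= 0xF and 0 <= ccw_address < 0x1000000
        return (key << 28) | ccw_address

    print(hex(make_caw(key=0x3, ccw_address=0x004800)))   # 0x30004800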
A Channel Command Word is a doubleword containing the following:
The low order 2 or 4 bits of the command code determine which of six types of operations the channel performs. [ 2 ] : 100, 105
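The bit patterns commonly documented for the six command types can be expressed as a small Python classifier; treat it as an illustration rather than a definitive table, and note that the modifier bits (discussed next) are ignored here.

    def ccw_command_type(command_code):
        low2 = command_code & 0b11
        if low2 == 0b01:
            return "write"
        if low2 == 0b10:
            return "read"
        if low2 == 0b11:
            return "control"
        # low two bits are 00: look at the low four bits
        return {0b1000: "transfer in channel",
                0b0100: "sense",
                0b1100: "read backward"}.get(command_code & 0b1111, "invalid")

    print(ccw_command_type(0x02))  # read
    print(ccw_command_type(0x07))  # control (a device-dependent control command)
    print(ccw_command_type(0x04))  # sense
    print(ccw_command_type(0x08))  # transfer in channel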
The meaning of the high order six or four bits, the modifier bits, M in the table above, depends upon the type of I/O device attached, see e.g., DASD CKD CCWs . All eight bits are sent to and interpreted in the associated control unit (or its functional equivalent).
Control is used to cause a state change in a device or control unit, often associated with mechanical motion, e.g., rewind, seek.
Sense is used to read data describing the status of the device. The most important case is that when a command terminates with unit check, the specific cause can only be determined by doing a Sense and examining the data returned. A Sense command with the modifier bits all zero is always valid.
A noteworthy deviation from the architecture is that DASD use Sense command codes for Reserve and Release, instead of using Control.
The flags in a CCW affect how it executes and terminates.
The Channel Status Word (CSW) [ 2 ] : 113–121 provides data associated with an I/O interruption.
The S/360 has four [ 56 ] I/O instructions: Start I/O (SIO), Test I/O (TIO), Halt I/O (HIO) and Test Channel (TCH). All four are privileged and thus will cause a privileged operation program interruption if used in problem state. The B1 (base) and D1 (displacement) fields are used to calculate the cuu (channel and device number); bits 8-15 of the instructions are unused and should be zero for compatibility with the S/370.
SIO [ 57 ] attempts to start the channel program pointed to by the CAW , using the storage protection key in the CAW.
TIO [ 58 ] tests the status of a channel and device. It may also store a CSW , in which case it completes with condition code 1.
HIO [ 59 ] attempts to terminate an active channel program. It may also store a CSW , in which case it completes with condition code 1.
TCH [ 60 ] tests the status of a channel. It does not affect the status of an active channel program and does not store a CSW.
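A toy driver loop illustrates how software typically reacts to the SIO condition codes; the ToyChannel class, the device states, the CAW location, and the condition-code meanings used here (0 started, 1 CSW stored, 2 busy, 3 not operational) are stated as commonly documented conventions and assumptions, not taken from this article.

    class ToyChannel:
        # A stand-in for a channel: it only knows which device addresses exist or are busy.
        def __init__(self, devices):          # devices: {cuu: "ready" or "busy"}
            self.devices = devices
        def sio(self, cuu):
            if cuu not in self.devices:
                return 3                      # not operational
            return 2 if self.devices[cuu] == "busy" else 0

    def start_io(channel, cuu, caw, storage):
        storage[0x48] = caw                   # CAW location (72 decimal) - an assumed detail
        cc = channel.sio(cuu)
        return {0: "started; completion arrives as an I/O interruption",
                1: "CSW stored; examine it for the reason",
                2: "busy; retry later",
                3: "not operational"}[cc]

    ch = ToyChannel({0x0C0: "ready", 0x190: "busy"})
    print(start_io(ch, 0x0C0, 0x30004800, {}))   # started
    print(start_io(ch, 0x191, 0x30004800, {}))   # not operational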
The architecture of System/360 specified the existence of several common functions, but did not specify their means of implementation. This allowed IBM to use different physical means, e.g., dial, keyboard, pushbutton, roller, image or text on a CRT, for selecting the functions and values on different processors. Any reference to key or switch should be read as applying to, e.g., a light-pen selection, an equivalent keyboard sequence.
On some models, e.g., the S/360-85 , [ 9 ] the alignment requirements for some problem-state instructions were relaxed. There is no mechanism to turn off this feature, and programs depending on receiving a program check type 6 (alignment) on those instructions must be modified.
The decimal arithmetic feature provides instructions that operate on packed decimal data. A packed decimal number has 1-31 decimal digits followed by a 4-bit sign. All of the decimal arithmetic instructions except PACK and UNPACK generate a Data exception if a digit is not in the range 0-9 or a sign is not in the range A-F.
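A short Python encoder shows the digit-and-sign packing; the use of C for plus and D for minus follows the preferred sign codes, and the function itself is only an illustration.

    def to_packed_decimal(value):
        digits = str(abs(value))
        assert len(digits) <= 31
        if len(digits) % 2 == 0:          # the digit count must be odd so that
            digits = "0" + digits         # digits plus the sign fill whole bytes
        sign = 0xD if value < 0 else 0xC
        nibbles = [int(d) for d in digits] + [sign]
        return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2))

    print(to_packed_decimal(1234).hex())    # 01234c
    print(to_packed_decimal(-567).hex())    # 567d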
The Direct Control [ 66 ] feature provides six external signal lines and an 8-bit data path to/from storage. [ 67 ]
The floating-point arithmetic feature provides four 64-bit floating point registers and instructions to operate on 32-bit and 64-bit hexadecimal floating point numbers. The 360/85 and 360/195 also support 128-bit extended precision floating point numbers.
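The short (32-bit) hexadecimal floating-point format can be decoded with a few lines of Python; the layout assumed here is the standard one (sign bit, 7-bit excess-64 exponent of 16, 24-bit fraction treated as a hexadecimal fraction), and the decoder is illustrative only.

    def decode_short_hfp(word):
        sign     = -1.0 if word >> 31 else 1.0
        exponent = ((word >> 24) & 0x7F) - 64
        fraction = (word & 0xFFFFFF) / float(16 ** 6)
        return sign * fraction * 16.0 ** exponent

    # 0x42640000: exponent 2, fraction 0x640000 / 16**6 = 0.390625, so the value is 100.0
    print(decode_short_hfp(0x42640000))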
If the interval timer feature [ 2 ] : 17.1 is installed, the processor decrements the word at location 80 ('50'X) at regular intervals; the architecture does not specify the interval, but it does require that the value subtracted make it appear as though 1 were subtracted from bit position 23 at a rate of 300 times per second. The smaller models decremented at the same frequency (50 Hz or 60 Hz) as the AC power supply, but larger models had a high resolution timer feature. The processor causes an External interruption when the timer goes to zero.
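Since bit 23 of a 32-bit word has the weight 2^(31-23) = 256, the word at location 80 effectively loses 300 x 256 = 76,800 units per second. A small sketch of the arithmetic for setting an interval (illustrative only):

    UNITS_PER_SECOND = 300 * 256           # how fast the fullword at location 80 counts down

    def timer_value_for(seconds):
        # fullword to store for an external interruption roughly `seconds` later
        return int(seconds * UNITS_PER_SECOND)

    print(timer_value_for(1))      # 76800
    print(timer_value_for(0.01))   # 768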
Multi-system operation [ 68 ] is a set of features to support multi-processor systems, e.g., Direct Control , direct address relocation (prefixing).
If the storage protection feature [ 2 ] : 17-17.1 is installed, then there is a 4-bit storage key associated with every 2,048-byte block of storage and that key is checked when storing into any address in that block by either a CPU or an I/O channel. A CPU or channel key of 0 disables the check; a nonzero CPU or channel key allows data to be stored only in a block with the matching key.
Storage Protection was used to prevent a defective application from writing over storage belonging to the operating system or another application. This permitted testing to be performed along with production. Because the key was only four bits in length, the maximum number of different applications that could be run simultaneously was 15.
An additional option available on some models was fetch protection. It allowed the operating system to specify that blocks were protected from fetching as well as from storing.
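A minimal sketch of the store- and fetch-protection checks described above, assuming a 4-bit key per 2,048-byte block, key 0 bypassing the check, and a per-block fetch-protection flag; the class and method names are illustrative, not IBM terminology:

```python
BLOCK = 2048

class Storage:
    def __init__(self, size: int):
        self.keys = [0] * (size // BLOCK)              # 4-bit storage key per block
        self.fetch_protected = [False] * (size // BLOCK)

    def check_store(self, access_key: int, address: int) -> bool:
        """Stores are allowed with key 0 or a key matching the block's key."""
        return access_key == 0 or access_key == self.keys[address // BLOCK]

    def check_fetch(self, access_key: int, address: int) -> bool:
        """With the fetch-protection option, fetches are checked the same way;
        otherwise fetches are always allowed."""
        block = address // BLOCK
        if not self.fetch_protected[block]:
            return True
        return access_key == 0 or access_key == self.keys[block]

mem = Storage(64 * 1024)
mem.keys[1] = 5                                        # block 2048..4095 belongs to key 5
print(mem.check_store(access_key=5, address=3000))     # True  (matching key)
print(mem.check_store(access_key=3, address=3000))     # False (mismatch -> protection exception)
print(mem.check_store(access_key=0, address=3000))     # True  (key 0 bypasses the check)
```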
The System/360 Model 20 is radically different and should not be considered to be a S/360.
The System/360 Model 44 is missing certain instructions, but a feature allowed the missing instructions to be simulated in hidden memory thus allowing the use of standard S/360 operating systems and applications.
Some models have features that extend the architecture, e.g., emulation instructions and paging, and some models deviate from the architecture in minor ways. Examples include:
Some deviations served as prototypes for features of the S/370 architecture. | https://en.wikipedia.org/wiki/IBM_System/360_architecture |
The IBM System/370 ( S/370 ) is a range of IBM mainframe computers announced as the successors to the System/360 family on June 30, 1970. The series mostly [ a ] maintains backward compatibility with the S/360, allowing an easy migration path for customers; this, plus improved performance, were the dominant themes of the product announcement.
Early 370 systems differed from the 360 largely in their internal circuitry, moving from the Solid Logic Technology hybrid integrated circuits containing separate transistors to more modern monolithic integrated circuits containing multiple transistors per integrated circuit, which IBM referred to as Monolithic System Technology, or MST. The higher density packaging allowed several formerly optional features from the 360 line to be included as standard features of the machines, floating-point support for instance. The 370 also added a small number of new instructions.
At the time of its introduction, the development of virtual memory systems had become a major theme in the computer market, and the 370 was considered highly controversial as it lacked this feature. This was addressed in 1972 with the System/370 Advanced Function and its associated dynamic address translation (DAT) hardware. All future machines in the lineup received this option, along with several new operating systems that supported it. Smaller additions were made throughout the lifetime of the line, which led to a profusion of models that were generally referred to by the processor number. One of the last major additions to the line, in 1988, was the ESA/370 extensions that allowed a machine to have multiple virtual address spaces and easily switch among them.
The 370 was IBM's primary large mainframe offering from the 1970s through the 1980s. In September 1990, the System/370 line was replaced with the System/390 . The 390, based on the new ESA/390 architecture, expanded the multiple-address-space concept and included hardware virtualization support, which had antecedents on earlier models, allowing it to run multiple operating systems at the same time.
The original System/370 line was announced on June 30, 1970, with first customer shipment of the Models 155 and 165 planned for February 1971 and April 1971 respectively. [ 1 ] The 155 first shipped in January 1971. [ 2 ] : 643 System/370 underwent several architectural improvements during its roughly 20-year lifetime. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]
The following features mentioned in the 11th edition of the System/370 Principles of Operation [ 3 ] are either optional on S/360 but standard on S/370, introduced with S/370 or added to S/370 after announcement.
When the first System/370 machines, the Model 155 and the Model 165 , were introduced, the System/370 architecture was described as an extension, but not a redesign, of IBM's System/360 architecture which was introduced in 1964. [ 11 ] The System/370 architecture incorporated only a small number of changes to the System/360 architecture. These changes included: [ 12 ]
These models had core memory and did not include support for virtual storage , as they lacked a DAT (Dynamic Address Translation) box.
All models of the System/370 used IBM's form of monolithic integrated circuits called MST (Monolithic System Technology) making them third generation computers. MST provided System/370 with four to eight times the circuit density and over ten times the reliability when compared to the previous second generation SLT technology of the System/360. [ 2 ] : 440
On September 23, 1970, IBM announced the Model 145 , a third model of the System/370, which was the first model to feature semiconductor main memory made from monolithic integrated circuits and was scheduled for delivery in the late summer of 1971. All subsequent S/370 models used such memory.
In 1972, a very significant change was made when support for virtual storage was introduced with IBM's "System/370 Advanced Function" announcement. IBM had initially (and controversially) chosen to exclude virtual storage from the S/370 line. [ 2 ] : 479–484 [ 23 ] The August 2, 1972 announcement included:
Virtual storage had in fact been delivered on S/370 hardware before this announcement:
Shortly after the August 2, 1972 announcement, DAT box (address relocation hardware) upgrades for the S/370-155 and S/370-165 were quietly announced, but were available only for purchase by customers who already owned a Model 155 or 165. [ 27 ] After installation, these models were known as the S/370-155-II and S/370-165-II. IBM wanted customers to upgrade their 155 and 165 systems to the widely sold S/370-158 and -168. [ 28 ] These upgrades were surprisingly expensive ($200,000 and $400,000, respectively) and had long lead times after being ordered; consequently, they were never popular with customers, the majority of whom leased their systems via a third-party leasing company. [ 27 ] This led to the original S/370-155 and S/370-165 models being described as "boat anchors". The upgrade, required to run OS/VS1 or OS/VS2, was not cost-effective for most customers by the time IBM could actually deliver and install it, so many customers were stuck running MVT on these machines until their leases ended, in some cases for another four, five or even six years. This turned out to be a significant factor [ 29 ] in the slow adoption of OS/VS2 MVS, not only by customers in general but also at many internal IBM sites.
Later architectural changes primarily involved expansions in memory (central storage) – both physical memory and virtual address space – to enable larger workloads and meet client demands for more storage. This was the inevitable trend as Moore's Law eroded the unit cost of memory. As with all IBM mainframe development, preserving backward compatibility was paramount. [ citation needed ]
In 1981, IBM added the dual-address-space facility to System/370. [ 30 ] This allows a program to have two address spaces; Control Register 1 contains the segment table origin (STO) for the primary address space and CR7 contains the STO for the secondary address space. The processor can run in primary-space mode or secondary-space mode. When in primary-space mode, instructions and data are fetched from the primary address space. When in secondary-space mode, operands whose addresses are defined to be logical are fetched from the secondary address space; it is unpredictable whether instructions will be fetched from the primary or the secondary address space, so code must be mapped at the same address ranges in both address spaces. The program can switch between primary-space and secondary-space mode with the SET ADDRESS SPACE CONTROL instruction; there are also MOVE TO PRIMARY and MOVE TO SECONDARY instructions that copy a range of bytes from an address range in one address space to an address range in the other address space. [ 34 ]
Address spaces are identified by an address-space number (ASN). The ASN contains indices into a two-level table, structured similarly to a two-level page table, with entries containing a presence bit, various fields indicating permissions granted for access to the address space, the starting address and length of the segment table for the address space, and other information. The SET SECONDARY ASN instruction makes the address space identified by a given ASN value the current secondary address space. [ 34 ]
The initial System/370 architecture has a 24-bit limit on physical addresses, limiting physical memory to 16 MB. Page table entries have 12 bits of page frame address with 4 KB pages and 13 bits of page frame address with 2 KB pages, so combining a 12-bit page frame address with a 12-bit offset within the page or a 13-bit page frame address with an 11-bit offset within the page produces a 24-bit physical address. [ 35 ]
The extended real addressing feature in System/370 raises this limit to 26 bits, increasing the physical memory limit to 64 MB. Two reserved bits in the page table entry for 4 KB pages were used to extend the page frame address. The extended real addressing is only available with address translation enabled and with 4 KB pages. [ 35 ]
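To make the address arithmetic concrete, the following sketch assembles a real address from a page-frame number and a byte offset under the three cases described above (a simplification that ignores the rest of the page-table-entry layout):

```python
def real_address(frame: int, offset: int, frame_bits: int, offset_bits: int) -> int:
    """Combine a page-frame number with a byte offset into a real address."""
    assert frame < (1 << frame_bits) and offset < (1 << offset_bits)
    return (frame << offset_bits) | offset

# Base System/370: 12-bit frame number + 12-bit offset (4 KB pages) = 24-bit address
print(f"{real_address(0xFFF, 0xFFF, 12, 12):#x}")    # 0xffffff -> 16 MB limit

# Base System/370: 13-bit frame number + 11-bit offset (2 KB pages) = 24-bit address
print(f"{real_address(0x1FFF, 0x7FF, 13, 11):#x}")   # 0xffffff

# Extended real addressing: two extra frame bits give 14 + 12 = 26-bit addresses
print(f"{real_address(0x3FFF, 0xFFF, 14, 12):#x}")   # 0x3ffffff -> 64 MB limit
```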
The following table summarizes the major S/370 series and models. The second column lists the principal architecture associated with each series. Many models implemented more than one architecture; thus, 308x processors initially shipped as S/370 architecture, but later offered XA; and many processors, such as the 4381, had microcode that allowed customer selection between S/370 or XA (later, ESA) operation.
Note also the confusing term "System/370-compatible", which appeared in IBM source documents to describe certain products. Outside IBM, this term would more often describe systems from Amdahl Corporation , Hitachi , and others, that could run the same S/370 software. This choice of terminology by IBM may have been a deliberate attempt to ignore the existence of those plug compatible manufacturers (PCMs), because they competed aggressively against IBM hardware dominance.
IBM used the name System/370 to announce the following eleven (three-digit) offerings:
The IBM System/370 Model 115 was announced March 13, 1973 [ 39 ] as "an ideal System/370 entry system for users of IBM's System/3 , 1130 computing system and System/360 Models 20 , 22 and 25 ."
It was delivered with "a minimum of two (of IBM's newly announced) directly attached IBM 3340 disk drives." [ 39 ] Up to four 3340s could be attached.
The CPU could be configured with 65,536 (64K) or 98,304 (96K) bytes of main memory. An optional 360/20 emulator was available.
The 115 was withdrawn on March 9, 1981.
The IBM System/370 Model 125 was announced Oct 4, 1972. [ 40 ]
Two, three or four directly attached IBM 3333 disk storage units provided "up to 400 million bytes online."
Main memory was either 98,304 (96K) or 131,072 (128K) bytes.
The 125 was withdrawn on March 9, 1981.
The IBM System/370 Model 135 was announced Mar 8, 1971. [ 41 ] Options for the 370/135 included a choice of four main memory sizes; IBM 1400 series (1401, 1440 and 1460) emulation was also offered.
A "reading device located in the Model 135 console" allowed updates and adding features to the Model 135's microcode.
The 135 was withdrawn on October 16, 1979.
The IBM System/370 Model 138 which was announced Jun 30, 1976 was offered with either 524,288 (512K) or 1,048,576 (1 MB) of memory. The latter was "double the maximum capacity of the Model 135," which "can be upgraded to the new computer's internal performance levels at customer locations." [ 42 ]
The 138 was withdrawn on November 1, 1983.
The IBM System/370 Model 145 was announced Sep 23, 1970, three months after the 155 and 165 models. [ 36 ] It first shipped in June 1971. [ 2 ] : 643
The first System/370 to use monolithic main memory, the Model 145 was offered in six memory sizes. A portion of the main memory, the "Reloadable Control Storage" (RCS) was loaded from a prewritten disk cartridge containing microcode to implement, for example, all needed instructions, I/O channels, and optional instructions to enable the system to emulate earlier IBM machines. [ 36 ]
The 145 was withdrawn on October 16, 1979.
The IBM System/370 Model 148 had the same announcement and withdrawal dates as the Model 138. [ 43 ]
As with the option to field-upgrade a 135, a 370/145 could be field-upgraded "at customer locations" to 148-level performance. The upgraded 135 and 145 systems were "designated the Models 135-3 and 145-3."
The IBM System/370 Model 155 and the Model 165 were announced Jun 30, 1970, the first of the 370s introduced. [ 44 ] Neither had a DAT box; they were limited to running the same non-virtual-memory operating systems available for the System/360 . The 155 first shipped in January 1971. [ 2 ] : 643
The OS/DOS [ 45 ] (DOS/360 programs under OS/360), 1401/1440/1460 and 1410/7010 [ 46 ] [ 47 ] and 7070/7074 [ 48 ] compatibility features were included, and the supporting integrated emulator programs could operate concurrently with standard System/370 workloads.
In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 155 II , which added a DAT box.
Both the 155 and the 165 were withdrawn on December 23, 1977.
The IBM System/370 Model 158 and the 370/168 were announced Aug 2, 1972. [ 49 ]
It included dynamic address translation (DAT) hardware, a prerequisite for the new virtual memory operating systems (DOS/VS, OS/VS1, OS/VS2).
A tightly coupled multiprocessor (MP) model was available, as was the ability to loosely couple this system to another 360 or 370 via an optional channel-to-channel adapter.
The 158 and 168 were withdrawn on September 15, 1980.
The IBM System/370 Model 165 was described by IBM as "more powerful" [ 11 ] compared to the "medium-scale" 370/155. It first shipped in April 1971. [ 2 ] : 643
Compatibility features included emulation for 7070/7074, 7080, and 709/7090/7094/7094 II .
Some have described the 360/85 's use of microcoded vs hardwired as a bridge to the 370/165. [ 50 ]
In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 165 II which added a DAT box.
The 165 was withdrawn on December 23, 1977.
The IBM System/370 Model 168 included "up to eight megabytes" [ 51 ] of main memory, double the maximum of 4 megabytes on the 370/158. [ 49 ]
It included dynamic address translation (DAT) hardware, a prerequisite for the new virtual memory operating systems.
Although the 168 served as IBM's "flagship" system, [ 52 ] a 1975 newsbrief said that IBM boosted the power of the 370/168 again "in the wake of the Amdahl challenge... only 10 months after it introduced the improved 168-3 processor." [ 53 ]
The 370/168 was not withdrawn until September 1980.
The IBM System/370 Model 195 was announced Jun 30, 1970 and, at that time, it was "IBM's most powerful computing system." [ 54 ]
Its introduction came about 14 months after the announcement of its direct predecessor, the 360/195 . Both 195 machines were withdrawn Feb. 9, 1977. [ 55 ] [ 54 ]
Beginning in 1977, IBM began to introduce new systems, using the description "A compatible member of the System/370 family." [ 56 ] [ 57 ]
The first of the initial high end machines, IBM's 3033 , was announced March 25, 1977 [ 58 ] and was delivered the following March, at which time a multiprocessor version of the 3033 was announced. [ 59 ] IBM described it as "The Big One." [ 60 ]
IBM noted about the 3033, looking back, that "When it was rolled out on March 25, 1977, the 3033 eclipsed the internal operating speed of the company's previous flagship the System/370 Model 168-3 ..." [ 52 ]
The IBM 3031 and IBM 3032 were announced Oct. 7, 1977 and withdrawn Feb. 8, 1985. [ 56 ] [ 61 ]
Three systems comprised the next series of high end machines, IBM's 308X systems:
Despite the numbering, the least powerful was the 3083, which could be field-upgraded to a 3081; [ 63 ] the 3084 was the top of the line. [ 64 ]
These models introduced the 31-bit addressing capability of IBM's Extended Architecture [ 65 ] and a set of backward-compatible MVS/Extended Architecture (MVS/XA) software replacing previous products and part of OS/VS2 R3.8:
All three 308x systems were withdrawn on August 4, 1987.
The next series of high-end machines, the IBM 3090 , began with models [ j ] 200 and 400. [ 68 ] They were announced Feb. 12, 1985, and were configured with two or four CPUs respectively. IBM subsequently announced models 120, 150, 180, 300, 500 and 600 with lower, intermediate and higher capacities; the first digit of the model number gives the number of central processors.
Starting with the E [ 69 ] models, and continuing with the J and S models, IBM offered Enterprise Systems Architecture/370 [ 70 ] (ESA/370), Processor Resource/System Manager (PR/SM) and a set of backward compatible MVS/Enterprise System Architecture (MVS/ESA) software replacing previous products:
IBM's offering of an optional vector facility (VF) extension for the 3090 came at a time when vector processing / array processing was associated with names like Cray and Control Data Corporation (CDC). [ 72 ] [ 73 ]
The 200 and 400 were withdrawn on May 5, 1989.
The first pair of IBM 4300 processors were mid/low-end systems announced Jan 30, 1979 [ 74 ] [ 75 ] as "compact (and)... compatible with System/370."
The 4331 was subsequently withdrawn on November 18, 1981, and the 4341 on February 11, 1986.
Other models were the 4321, [ 76 ] 4361 [ 77 ] and 4381. [ 78 ]
The 4361 has "Programmable Power-Off -- enables the user to turn off the processor under program control"; [ 77 ] "Unit power off" is (also) part of the 4381 feature list. [ 78 ]
IBM offered many Model Groups and models of the 4300 family, [ k ] ranging from the entry level 4331 to the 4381, described as "one of the most powerful and versatile intermediate system processors ever produced by IBM." [ l ]
The 4381 Model Group 3 was dual-CPU.
This low-end system, the IBM 9370, announced October 7, 1986, [ 79 ] was "designed to satisfy the computing requirements of IBM customers who value System/370 affinity" and "small enough and quiet enough to operate in an office environment."
IBM also noted its sensitivity to "entry software prices, substantial reductions in support and training requirements, and modest power consumption and maintenance costs."
Furthermore, it stated its awareness of the needs of small-to-medium size businesses to be able to respond, as "computing requirements grow," adding that "the IBM 9370 system can be easily expanded by adding additional features and racks to accommodate..."
This came at a time when Digital Equipment Corporation (DEC) and its VAX systems were strong competitors in both hardware and software; [ 80 ] the media of the day carried IBM's alleged "VAX Killer" phrase, albeit often skeptically. [ 81 ]
In the 360 era, a number of manufacturers had already standardized upon the IBM/360 instruction set and, to a degree, 360 architecture. Notable computer makers included Univac with the UNIVAC 9000 series , RCA with the RCA Spectra 70 series, English Electric with the English Electric System 4 , and the Soviet ES EVM . These computers were not perfectly compatible, nor (except for the Russian efforts) [ 82 ] [ 83 ] were they intended to be.
That changed in the 1970s with the introduction of the IBM/370 and Gene Amdahl 's launch of his own company. About the same time, Japanese giants began eyeing the lucrative mainframe market both at home and abroad. One Japanese consortium focused upon IBM, and two others upon members of the BUNCH (Burroughs, Univac, NCR, Control Data, Honeywell) group of IBM's competitors. [ 84 ] The latter efforts were abandoned and eventually all Japanese efforts focused on the IBM mainframe lines.
Some of the era's clones included:
IBM documentation numbers the bits from high order to low order; the most significant (leftmost) bit is designated as bit number 0.
S/370 also refers to a computer system architecture specification, [ 91 ] and is a direct and mostly backward compatible evolution of the System/360 architecture [ 92 ] from which it retains most aspects. This specification does not make any assumptions on the implementation itself, but rather describes the interfaces and the expected behavior of an implementation. The architecture describes mandatory interfaces that must be available on all implementations and optional interfaces which may or may not be implemented.
Some of the aspects of this architecture are:
Some of the optional features are:
IBM took great care to ensure that changes to the architecture would remain compatible for unprivileged (problem state) programs; some new interfaces did not break the initial interface contract for privileged (supervisor mode) programs. Some examples are:
Other changes were compatible only for unprivileged programs, although the changes for privileged programs were of limited scope and well defined. Some examples are:
Great care was taken in order to ensure that further modifications to the architecture would remain compatible, at least as far as non-privileged programs were concerned. This philosophy predates the definition of the S/370 architecture and started with the S/360 architecture. If certain rules are adhered to, a program written for this architecture will run with the intended results on the successors of this architecture.
One example is that the S/370 architecture specifies that bit number 32 of the 64-bit PSW register has to be set to 0 and that doing otherwise leads to an exception. Subsequently, when the S/370-XA architecture was defined, it was stated that this bit would indicate whether the program was a program expecting a 24-bit address architecture or a 31-bit address architecture. Thus, most programs that ran on the 24-bit architecture can still run on 31-bit systems; the 64-bit z/Architecture has an additional mode bit for 64-bit addresses, so that those programs, and programs that ran on the 31-bit architecture, can still run on 64-bit systems.
However, not all of the interfaces can remain compatible. Emphasis was put on having non-control programs (called problem state programs) remain compatible. [ 96 ] Thus, operating systems have to be ported to the new architecture because the control interfaces can be (and were) redefined in an incompatible way. For example, the I/O interface was redesigned in S/370-XA, making S/370 programs issuing I/O operations unusable as-is.
IBM replaced the System/370 line with the System/390 in the 1990s, and similarly extended the architecture from ESA/370 to ESA/390. This was a minor architectural change, and was upwards compatible.
In 2000, the System/390 was replaced with the zSeries (now called IBM Z). The zSeries mainframes introduced the 64-bit z/Architecture , the most significant design improvement since the 31-bit transition. [ citation needed ] All have retained essential backward compatibility with the original S/360 architecture and instruction set.
The GNU Compiler Collection (GCC) had a back end for S/370, but it became obsolete over time and was finally replaced with the S/390 backend. Although the S/370 and S/390 instruction sets are essentially the same (and have been consistent since the introduction of the S/360), GCC operability on older systems has been abandoned. [ 97 ] GCC currently works on machines that have the full instruction set of System/390 Generation 5 (G5), the hardware platform for the initial release of Linux/390 . However, a separately maintained version of GCC 3.2.3 that works for the S/370 is available, known as GCCMVS. [ 98 ]
The block multiplexer channel, previously available only on the 360/85 and 360/195, was a standard part of the architecture. For compatibility it could operate as a selector channel. [ 99 ] Block multiplexer channels were available in single byte (1.5 MB/s) and double byte (3.0 MB/s) versions.
As part of the DAT announcement, IBM upgraded channels to have Indirect Data Address Lists (IDALs), a form of I/O MMU.
Data streaming channels had a speed of 3.0 MB/s over a single byte interface, later upgraded to 4.5 MB/s.
Channel set switching allowed one processor in a multiprocessor configuration to take over the I/O workload from the other processor if it failed or was taken offline for maintenance.
System/370-XA introduced a channel subsystem that performed I/O queuing previously done by the operating system.
The System/390 introduced the ESCON channel, an optical fiber , half-duplex , serial channel with a maximum distance of 43 kilometers. Originally operating at 10 Mbyte/s, it was subsequently increased to 17 Mbyte/s.
Subsequently, FICON became the standard IBM mainframe channel; FIbre CONnection (FICON) is the IBM proprietary name for the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel (FC) protocol used to map both IBM's antecedent (either ESCON or parallel Bus and Tag) channel-to-control-unit cabling infrastructure and protocol onto standard FC services and infrastructure at data rates up to 16 Gigabits/sec at distances up to 100 km. Fibre Channel Protocol (FCP) allows attaching SCSI devices using the same infrastructure as FICON. | https://en.wikipedia.org/wiki/IBM_System/370 |
IBM System/370-XA is an instruction set architecture introduced by IBM in 1983 for the IBM 308X processors. [ 2 ] : 198 It extends the IBM System/370 architecture to support 31-bit virtual and physical addresses, and includes a redesigned I/O architecture.
In the System/360 , other than the 360/67 , and System/370 architectures, the general-purpose registers were 32 bits wide, the machine did 32-bit arithmetic operations, and addresses were always stored in 32-bit words, so the architecture was considered 32-bit , but the machines ignored the top 8 bits of the address, resulting in 24-bit addressing. Much of System/360's and System/370's large installed code base relied on a 24-bit logical address ; in particular, a heavily used machine instruction, LA , Load Address, explicitly cleared the top eight bits of the address being placed in a register. If the 24-bit limit were removed, this would create migration problems for existing software.
This was addressed by adding an addressing mode bit to the Program Status Word controlling whether the program runs in 24-bit mode, in which the top eight bits of virtual addresses are ignored, or 31-bit mode, in which only the uppermost bit of virtual addresses is ignored. [ 2 ] : 201-202 [ 1 ] : 1-2 Several reasons were given for the choice of 31 bits instead of 32 bits:
Certain machine instructions in this 31-bit addressing mode alter the addressing mode bit. For example, the original subroutine call instruction BAL , Branch and Link, and its register-register equivalent, BALR , Branch and Link Register, store certain status information, the instruction length code, [ a ] the condition code and the program mask, in the top byte of the return address. A BAS , Branch and Save, instruction was added to allow 31-bit return addresses. BAS and its register-register equivalent, BASR , Branch and Save Register, were part of the instruction set of the 360/67, which was the only System/360 model to allow addresses longer than 24 bits. These instructions were maintained, but were modified and extended for 31-bit addressing.
Additional instructions in support of allowing calls between 24-bit-addressing and 31-bit-addressing code include two new register-register call/return instructions which also effect an addressing mode change, BASSM , Branch and Save and Set Mode, [ 5 ] the 24/31-bit version of a call where the linkage address including the mode is saved and a branch is taken to an address in a possibly different mode, and BSM , Branch and Set Mode, the 24/31 bit version of a return, where the return is directly to the previously saved linkage address and in its previous mode. Taken together, BASSM and BSM allow 24-bit calls to 31-bit (and return to 24-bit), 31-bit calls to 24-bit (and return to 31-bit), 24-bit calls to 24-bit (and return to 24-bit) and 31-bit calls to 31-bit (and return to 31-bit). [ 2 ] : 202
Like BALR 14,15 (the 24-bit-only form of a call), BASSM is used as BASSM 14,15 , where the linkage address and mode are saved in register 14, and a branch is taken to the subroutine address and mode specified in register 15. Somewhat similarly to BCR 15,14 (the 24-bit-only form of an unconditional return), BSM is used as BSM 0,14 , where 0 indicates that the current mode is not saved (the program is leaving the subroutine, anyway), and a return to the caller at the address and mode specified in register 14 is to be taken. [ 6 ]
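A simplified sketch, in Python rather than assembler, of why the addressing mode matters for code that keeps flags or status information in the top byte of an address register, as BAL-style linkage did; the masks reflect the 24-bit and 31-bit modes described above, and real PSW and instruction behaviour is more involved:

```python
ADDR24 = 0x00FFFFFF          # 24-bit mode: the top 8 bits of the address are ignored
ADDR31 = 0x7FFFFFFF          # 31-bit mode: only the top bit is ignored

def effective_address(reg: int, mode31: bool) -> int:
    """Address actually used, given a 32-bit register value (a simplification)."""
    return reg & (ADDR31 if mode31 else ADDR24)

# A 24-bit-era program that keeps status bits in the high byte of an address register:
reg = 0x7A_01F000                                  # 0x7A of "flags" + 24-bit address 0x01F000
print(hex(effective_address(reg, mode31=False)))   # 0x1f000    - works as intended
print(hex(effective_address(reg, mode31=True)))    # 0x7a01f000 - the flags now corrupt the address

def load_address_24(value: int) -> int:
    """LA in 24-bit mode explicitly clears the high byte."""
    return value & ADDR24
```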
System/370 initially supported only 24-bit physical addresses; the extended real address feature extended this to 26-bit addresses. [ 7 ]
System/370-XA changed the page table entry format to support 19 bits of page frame address; pages are 4 KB in 370-XA, so combining a 19-bit page frame address with a 12-bit offset within the page produces a 31-bit physical address. [ 1 ] : 3-25 Channel command words can be in one of two formats, with format 0 being the System/370 format, with a 24-bit data address, and format 1 being an additional format, with a 31-bit data address. [ 2 ] : 202 [ 1 ] : 1-3
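Continuing the same arithmetic for 370-XA (illustrative only, using the field widths stated above):

```python
# 370-XA page-table entries carry a 19-bit page-frame address; pages are 4 KB,
# so the frame number and the 12-bit byte offset combine into a 31-bit real address.
frame_bits, offset_bits = 19, 12
print(frame_bits + offset_bits)                    # 31
print(hex((1 << (frame_bits + offset_bits)) - 1))  # 0x7fffffff, i.e. up to 2 GB of real storage

# Data-address widths of the two channel-command-word formats:
print(hex((1 << 24) - 1))   # format 0: 24-bit data address, as on System/370
print(hex((1 << 31) - 1))   # format 1: 31-bit data address
```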
System/370-XA introduced a channel subsystem that performed I/O queuing previously done by the operating system. | https://en.wikipedia.org/wiki/IBM_System/370-XA |
The IBM System/390 is a discontinued mainframe product family implementing ESA/390 , the fifth generation of the System/360 instruction set architecture . The first computers to use the ESA/390 were the Enterprise System/9000 (ES/9000) family, which were introduced in 1990. These were followed by the 9672, Multiprise , and Integrated Server families of System/390 in 1994–1999, using CMOS microprocessors. The ESA/390 succeeded ESA/370 , used in the Enhanced 3090 and 4381 "E" models, and the System/370 architecture last used in the IBM 9370 low-end mainframe. ESA/390 was succeeded by the 64-bit z/Architecture in 2000.
On September 5, 1990, IBM published a group of hardware and software announcements, two [ 2 ] [ 3 ] of which included overviews of three announcements:
Although IBM mentioned the 9000 family first in some of the day's announcements, it was clear "by the end of the day" that it was "for System/390"; [ 5 ] it was the shortened name, S/390 , that was placed on some of the actual "boxes" later shipped. [ 9 ] [ a ]
The ES/9000 include rack-mounted models, free standing air cooled models and water cooled models. The low end models were substantially less expensive than the 3090 or 4381 previously needed to run MVS/ESA , and could also run VM/ESA and VSE/ESA , which IBM announced at the same time.
IBM periodically added named features to ESA/390 in conjunction with new processors; the ESA/390 Principles of Operation manual identifies them only by name, not by the processors supporting them.
Machines supporting the architecture were sold under the brand System/390 (S/390) from September 1990. The 9672 implementations of System/390 were the first high-end IBM mainframes implemented with CMOS CPU electronics rather than the traditional bipolar logic.
The IBM z13 was the last z Systems server to support running an operating system in ESA/390 architecture mode. [ 10 ] However, all 24-bit and 31-bit problem-state application programs originally written to run on the ESA/390 architecture readily run unaffected by this change.
Eighteen models [ b ] were announced [ 11 ] on September 5, 1990 for the ES/9000 in three form factors: the water-cooled 9021 to succeed the IBM 3090 , and the air-cooled standalone 9121 and rack-mounted 9221 to succeed the IBM 4381 and 9370 respectively. The largest announced model had 100 times the performance of the smallest model, and the clock frequency ranged from 67-111 MHz (15-9 ns ) in the 9021 and 67 MHz in the 9121 to 26-33 MHz (38-30 ns) in the 9221. The 9221 models 120, 130 and 150 were initially available only with the "System/370 Base Option"; the "ESA Option" shipped in July 1991. The 9221 processors were made of VLSI CMOS chips designed in Böblingen , Germany, whence the 9672 line later originated.
The lower 6 of the 8 water-cooled models (codenamed H0) were immediately available, but used the same processor as the 3090-J, still at the 69 MHz (14.5 ns) maximum frequency and thus with unchanged performance. Those models' main difference from the 3090-J was the optional addition of ESCON , Sysplex and Integrated Cryptographic Feature. Only the models 900 and 820 had an all-new design (codenamed H2), [ c ] featuring private split I+D 128+128 KB L1 caches and a shared 4 MB L2 cache (2 MB per side) with 11-cycle latency, more direct interconnects between the processors, multi-level TLBs , branch target buffer and 111 MHz (9 ns ) clock frequency. These were the first models with out-of-order execution since the System/370-195 of 1973. However unlike the old S/360-91 -derived systems, the models 900 and 820 had full out-of-order execution for both integer and floating-point units, with precise exception handling , and a fully superscalar pipeline. Models 820 and 900 shipped to customers in September 1991, a year later than the models with older technology. Later these new technologies were used in models 520, 640, 660, 740 and 860. [ 14 ] [ 15 ] [ 16 ] [ 13 ] [ 17 ]
All three lines got additions and upgrades until 1993–1994. In February 1993 an 8-processor 141 MHz (7.1 ns ) model 982 became available, with models 972, 962, 952, 942, 941, 831, 822, 821 and 711 following in March. These models, codenamed H5, had double the L2 cache and 30% higher per-processor performance than the H2 line, and added hardware data compression . [ 18 ] [ 19 ] The compression was also included in the new, 50% faster models of the 9121. [ 20 ] In April 1994, alongside the CMOS -based new 9672 series and improved 9221 models (with 40% faster cycle time and data compression), [ 21 ] IBM also announced their ultimate bipolar model, the 10-processor model 9X2 rated at 468 MIPS, [ d ] to become available in October. [ 37 ] [ 38 ] [ 39 ]
Previously available only on the IBM 3090 , Logical Partitions (LPARs) are a standard feature of the ES/9000 processors, whereby IBM's Processor Resource/Systems Manager (PR/SM) hypervisor allows different operating systems to run concurrently in separate logical partitions (LPARs), with a high degree of isolation. Initially 7 partitions per disconnected side were supported. [ 6 ] [ 52 ] In December 1992 the LPAR capacity of the H2 (520-based) models was increased to 10 per disconnected side. For example, a two-processor model 660 could now support up to 20 partitions instead of 14, if the two sides (each with one processor) are electrically isolated. [ 53 ]
This was introduced as part of IBM's moving towards "lights-out" operation and increased control of multiple system configurations.
Launched in 1994 first as the "Parallel Transaction Server" (alongside the 9673 "Parallel Query Server"), [ 54 ] subsumed by the "Parallel Enterprise Server" launched later in the year, [ 55 ] the six generations of the IBM 9672 machines transitioned IBM's mainframes fully to CMOS microprocessors, since, by a strategic decision, no more ES/9000 (bipolar-based, except the 9221) models would be released after 1994. The initial generations of 9672 were slower than the largest ES/9000 sold in parallel, but the fifth and sixth generations were the most powerful and capable ESA/390 machines built by IBM. [ 56 ]
In the course of the generations, CPUs added more instructions and increased performance. The first three generations (G1 to G3) focused on low cost. [ 58 ] The 4th generation was aimed at matching the performance of the last bipolar model, the 9021-9X2. It was decided to be accomplished by pursuing high clock frequencies. The G4 could reach 70% higher frequency than the G3 at silicon process parity, but it suffered a 23% IPC reduction from the G3. [ 58 ] The initial G4-based models became available in June 1997, [ 59 ] but it wasn't until the 370 MHz model RY5 (with a "Modular Cooling Unit") became available at the end of the year that a 9672 would almost match the 141 MHz model 9X2's performance. [ 62 ] At 370 MHz it was the second-highest clocked microprocessor at the time, after the Alpha 21164 of DEC . The execution units in each G4 processor are duplicated for the purpose of error detection and correction. [ 63 ] Arriving in late September 1998, [ 64 ] the G5 more than doubled the performance over any previous IBM mainframe, [ 60 ] [ 61 ] and restored IBM's performance lead that had been lost to Hitachi 's Skyline mainframes in 1995. [ 65 ] [ 66 ] The G5 operated at up to 500 MHz, again second only to the DEC Alphas into early 1999. The G5 also added support for the IEEE 754 floating-point formats. [ 67 ] [ 68 ] The thousandth G5 system shipped less than 100 days after the manufacturing began; the greatest ramping of production in S/390's history. [ 69 ] In late May 1999 the G6 arrived featuring copper interconnects , raising the frequency to 637 MHz, higher than the fastest DEC machines at the time.
In September 1996 IBM launched the S/390 Multiprise 2000, positioned below the 9672. [ 70 ] [ 71 ] [ 72 ] It used the same technology as the 9672 G3, but it fit half as many processors (up to five) and its off-chip caches were smaller. The 9672 G3 and the Multiprise 2000 were the last versions to support pre-XA System/370 mode. In October 1997 models of Multiprise 2000 with an 11% higher performance were launched. [ 73 ] The Multiprise 3000 , based on the 9672 G5, became available in September 1999, featuring PCI buses. [ 74 ] [ 75 ]
The S/390 Integrated Server , an even lower-end S/390 system than Multiprise, shipped by the end of 1998. It emerged from a line of S/390-compatibility/coprocessor cards for PCs, but is a true S/390 system capable of server duties, having relegated the Pentium II to the role of an I/O coprocessor. It was the first S/390 server to support PCI. It had the same performance and 256 MB maximum memory capacity as the 7 years older low-end 9221 model 170. [ 76 ] [ 77 ]
From 1997 IBM also offered a "S/390 Application StarterPak", intended as a software development kit for developing and testing mainframe software. [ 78 ] | https://en.wikipedia.org/wiki/IBM_System/390 |
IBM SystemT is a declarative information extraction system. It was first built in 2005, as a research project at IBM 's IBM Almaden Research Center . Its name is partially inspired by System R , a seminal project from the same research center.
SystemT [ 1 ] comprises three main components: (1) AQL, a declarative rule language with syntax similar to SQL; (2) an optimizer, which accepts AQL statements as input and generates high-performance algebraic execution plans; and (3) an execution engine, which executes the plan generated by the optimizer and performs information extraction over input documents.
SystemT is available as part of IBM BigInsights, [ 2 ] and has also been taught in multiple universities around the globe. A version of SystemT was available (starting in September 2016) as a companion to a sequence of online courses in Text Analytics. [ 3 ] [ 4 ]
This software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IBM_SystemT |
IBM WebSphere refers to a brand of proprietary computer software products in the genre of enterprise software known as "application and integration middleware ". These software products are used by end-users to create and integrate applications with other applications. IBM WebSphere has been available to the general market since 1998.
In June 1998, IBM introduced the first product in this brand, IBM WebSphere Performance Pack . [ 1 ] As of 2012 this first component formed a part of IBM WebSphere Application Server Network Deployment.
The following products have been produced by IBM within the WebSphere brand: [ 2 ] | https://en.wikipedia.org/wiki/IBM_WebSphere |
The IBTS ("Integrated Biotectural System") greenhouse is a biotectural, urban development project suited for hot arid deserts . [ 1 ] [ 2 ] It was part of the Egyptian strategy for the afforestation of desert lands from 2011 until spring of 2015, when geopolitical changes such as the presence of the Islamic State of Iraq and the Levant – Sinai Province in Egypt forced the project to a halt. [ 3 ] The project began in spring 2007 as an academic study in urban development and desert greening . It was further developed by Nicol-André Berdellé and Daniel Voelker as a private project until 2011. Afterwards the LivingDesert Group, including Prof. Abdel Ghany El Gindy and Dr. Mosaad Kotb from the Central Laboratory for Agricultural Climate in Egypt, forestry scientist Hany El Kateb, agroecologist Wil van Eijsden and permaculturist Sepp Holzer, was created to introduce the finished project in Egypt. [ 4 ]
The IBTS Greenhouse, together with the programme for the afforestation of desert lands in Egypt, [ 5 ] [ 6 ] became part of relocation strategies. These play a role in Egypt because urbanization of the Nile Delta is a problem for the agricultural sector and because of infrastructural problems such as traffic congestion in Cairo. [ 7 ] [ 8 ] [ 9 ]
The IBTS features sea-water farming, but inside a large greenhouse, so all of the evaporated water can be harvested. Generating liquid water from the atmosphere inside the IBTS requires large amounts of cooling power. This is provided by the incoming sea-water, so the cooling requirement and the cooling power are always balanced.
The IBTS relies on a new quality of systems integration including architectural, technological and natural elements. [ 10 ] It combines food production and residence, as well as desalination of sea water , or brackish groundwater . [ 11 ] A CAE demonstration project using real weather, soil and economic conditions proved feasibility under hyperarid conditions.
The relevance of the IBTS lies in its capacity for water desalination with an efficiency of 0.45 kWh per cubic metre of distillate. This matters because operational costs of desalination utilities far outweigh initial building costs over time, and because the energy requirements of desalination plants reach into the gigawatt region. The dependence on large amounts of fossil energy leaves water provision from industrial plants insecure.
Through this high efficiency, desalination becomes financially and ecologically viable for large-scale agriculture, forestry and aquaculture .
Another point of relevance is the creation of a bio-diverse landscape and many jobs instead of smoking chimneys and factories along the valuable waterfront.
The concept is also applicable inland, although that would exclude the high desalination capacity.
The building has its roots in construction engineering and construction physics, in contrast to most greenhouses, which are rooted in food production. It is fundamentally different from the seawater greenhouses , [ 12 ] differing above all in its desalination performance. Alternative desalination technologies, air-to-water utilities and desalination greenhouses under testing require a multiple of the energy for fresh-water production.
The significance of the term integration lies in the efficiency that systems integration can achieve by imitating natural systems, especially closed cycles . The establishment of closed water cycles is the most crucial of all, because of the increasing severity of the global water crisis, particularly in hot desert climates .
This kind of industrial-scale desalination is bound to hot climates because it requires large amounts of solar thermal power. It has turned out to be suitable for mitigating the sinking of water tables in agricultural areas of the MENA region and beyond. Future versions of the IBTS could be deployed in cold climates using additional heat sources such as compact fusion or small modular reactors .
The IBTS can be charged with seawater, which is turned into freshwater by evaporation. This is the primary type of charging: seawater is effectively unlimited, so the IBTS can produce excess water for sale.
At the beginning of the saltwater charging lies the seawater farming operation inside the IBTS Greenhouse. This only requires small amounts of seawater. Most of the water flows through the food-production system and is then processed in the full-desalination utility.
The IBTS can also be charged by a continuous inflow of organic matter for the workers, animals, and later residents. The organic matter, which is food and drink first, is regained through waste treatment. [ 4 ] The waste-water treatment is part of the ordinary water cycle. The organic matter is partly infiltrated underground into the root zones of the plants and partly processed in septic tanks and then applied as topsoil in the forestry. This concept has been implemented inside residential homes (A common type is an Earthship ).
In general, it is possible to build the IBTS as solids and liquids waste treatment sites for settlements, hotels, or cities. [ 1 ]
The water cycle can also be charged by a single rain event, which does occur in the desert and can be counted on. Lastly, it is possible to charge the water cycle by pumping saline or contaminated groundwater and to some extent by atmospheric water generation.
The volume of water inside the water cycle is not critical, as it is a quasi-closed cycle: evaporation from the soil and moisture exhaled by people are captured under the roof.
Losses occur due to the export of food and in case of leaks in the roof, which would occur frequently under normal conditions. The Skyroof is therefore maintained with a special refurbishment and replacement system that can deal with harsh weather and objects landing on the thin foil.
The nutrient cycle is connected to the watercycle. Charging it mainly means the practice of building up soil fertility and soil organic matter . This can entail import of biomass through organic waste , but mainly by biowaste from the production of food inside the IBTS.
In sea-water systems the biomass is created from salt-tolerant plants called halophytes . Biomass yields of up to 52 tons per hectare per year have been recorded. [ 13 ] Moreover, the biomass generated by roots is important for carbon sequestration ; this adds up to 35 t/ha per year. [ 14 ] The IBTS-Greenhouse is a Blue Carbon project. [ 15 ] A third source of biomass is external seawater farms, which do not require the pricey space under the roof of the IBTS. These can be on land or in the sea. Most noteworthy are seaweed farms. [ 16 ]
Just as the nutrient cycle has to be charged with biomass, there is an option to charge the atmosphere inside the IBTS, or the seaweed water-ponds, with CO2. This would increase the biomass yield. The process has certain limits; one is the availability of trace elements such as phosphorus required by any organism. [ 17 ] As the best source for charging with additional CO2 would be industrial waste CO2, this is another way in which the IBTS can function as a waste treatment site.
The energy of operation is 0.45 kWh per cubic metre of distilled water in the full-scale version. [ 3 ] This performance requires less than a tenth of the energy of the record-setting desalination plants in Dubai and Perth, according to official numbers given by the respective authorities. [ 18 ] The IBTS is based on a modular concept, with a core size of 1 hectare. This is the minimum size for the construction and for self-sufficiency , but the circular, architectural modules can be built 10 hectares large, or more. Each module is based on sub-modules allowing for immediate commencement of operation and generation of profit (like a re-afforestation site generating profit in its early stages). Best efficiency and full capacity can be provided with a superstructure approximately 100 modules large. 10 km² have the capacity of an industrial desalination plant, which is 0.5 million cubic meters of water per day.
Since the first version of the IBTS, the atmospheric water generation has evolved through a series of hygrothermal models and can now be operated at 0.45 kWh/m³ according to the developer. [ 19 ] The IBTS works with natural processes in closed cycles, hosted in a building. Therefore, it never hits the natural or physical limits to growth that the desalination technology in the Persian Gulf has already reached because of brine discharge and temperature rise. [ 20 ] [ 21 ]
The IBTS is operated with electrical and thermal energy produced from windpower and concentrated solar power , on-site (in a proprietary process). This means that the energy requirement and the use of primary energy can be considered the same, which is not the case for common desalination plants. [ 22 ]
Common desalination plants are dependent on power plants using fossil fuels. Accounting for energy loss during the energy transformation in the power plant, common desalination plants use 2-3 times the energy stated in the usual performance data. These are common factors for energy-conversion losses for the combustion engines used in the desalination industry.
Taking this into account, the IBTS uses less than 5% of the energy of the current efficiency world record. This industrial record is about 3.5 kWh/m³, plus ca. 1.0 kWh/m³ for seawater pumping and other factors not accounted for; multiplied by the efficiency of primary energy use, this comes to 9-14 kWh/m³.
The term primary energy should be combined with energy quality for a realistic understanding. In the context of desalination, energy quality gives a different picture of the overall efficiency, not only of the physical process of desalination but also of the overall economic efficiency of the IBTS using proprietary renewable energy. [ 23 ]
The maximum of 500 m³ of freshwater production per day and hectare multiplies to 0.5 million m³ per day on 1,000 ha, equaling the output of the largest industrial desalination plants in the world. It is reached by heat recovery from the hot fresh water. This recovered energy is used to heat the brine leaving the mariculture in the IBTS, doubling the daily evaporation of 100 m³ and generating salt for sale. The recovered energy is also used to preheat incoming salt water for the mariculture. The chosen breed of fish needs warm water, and that warm water also increases the natural evaporation inside the greenhouse. The design points arose out of computational engineering of the physical model together with the financial plan in an iterative process.
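A back-of-the-envelope check of the figures quoted in this section, using only the numbers stated above; the fuel-conversion factors of 2 and 3 are the rough range given earlier for combustion-engine losses:

```python
# Scale: 500 m^3 of fresh water per day and hectare, over 1,000 ha (10 km^2)
output_m3_per_day = 500 * 1000
print(output_m3_per_day)                       # 500,000 m^3/day, the stated plant-scale capacity

# Energy per cubic metre: claimed IBTS figure vs. the quoted industrial record
ibts_kwh_per_m3 = 0.45
record_kwh_per_m3 = 3.5 + 1.0                  # desalination plus pumping and other factors
for fuel_factor in (2, 3):                     # rough primary-energy conversion losses
    primary = record_kwh_per_m3 * fuel_factor
    print(fuel_factor, primary, round(ibts_kwh_per_m3 / primary * 100, 1), "%")
# -> 9.0 and 13.5 kWh/m^3 of primary energy; 0.45 kWh/m^3 is 5.0% and 3.3% of that
```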
Because of its independence from primary energy and material resources, the efficiency of water production and the scalable, modular design, the IBTS Greenhouse is sustainable. A strategic, national infrastructure project like the IBTS allows for a successful energy transition into a sustainable economy. [ 24 ] [ 25 ]
This can be understood by a comparison of GDP growth, the generation of real values and a weighted GDP.
An example of the infrastructure services of the IBTS Greenhouse is water purification. Wastewater is percolated into the ground and provides water and nutrients for the growth of trees. This is not as easy with food crops, for hygienic reasons. Thus the IBTS provides sewage treatment in countries or areas that lack treatment plants. [ 26 ]
The IBTS Greenhouse is an open concept compatible with most other technologies and practices for water, energy and food production. It is plugin-ready for upcoming technologies like nuclear power from compact fusion, the traveling wave reactor , or breeder reactors . When these energy sources become available they can be integrated into existing IBTS infrastructure and generate even more fresh water without brine discharge into natural water bodies and the attendant environmental problems. For infrastructure developments that take decades to roll out and scale up, it is crucial to design for future readiness, a key engineering principle.
The manufacturing process of the IBTS is designed for automation , which requires more electricity than common construction sites or manufacturing processes. This platform design is also future-ready for more available energy. An example is the large roof of the IBTS, which needs to be observed and cleaned continuously and refurbished several times over the lifecycle of the IBTS. At the scale for which the IBTS was developed, as a national desert greening strategy for reclaiming and regreening entire regions, this can only be done by special bots or drones.
The most famous example is Biosphere 2 , a research project and demonstration site integrating residential areas into a new type of greenhouse. It was designed to be self-sufficient, including food production in an ecosystemic context. Another example of biotecture, foremost a residential home, is an Earthship . Earthships incorporate water purification and reuse on multiple levels.
Since 2010, urban developments labeled Forest Cities, drawing from the IBTS and other pioneer projects, have been created. Gardens by the Bay , which uses the core design elements of the TSPC Forest City from 2008, such as artificial trees with spherical buildings on top, is an outstanding example. The Liuzhou Forest City is one of many examples of green architecture and green urban development: new cities with a lot of green areas, including on the facades of buildings.
The international efforts to create Forest Cities take this development to another level. China is going forward with the introduction of several hundred designated Forest Cities. [ 27 ] One of the latest examples is Shenzhen. [ 28 ] | https://en.wikipedia.org/wiki/IBTS_Greenhouse
iC3b is a protein fragment that is part of the complement system , a component of the vertebrate immune system . iC3b is produced when complement factor I cleaves C3b . [ 1 ] Complement receptors on white blood cells are able to bind iC3b, so iC3b functions as an opsonin . Unlike intact C3b, iC3b cannot associate with factor B , thus preventing amplification of the complement cascade through the alternative pathway . Complement factor I can further cleave iC3b into a protein fragment known as C3d.
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IC3b |
The ICARUS Initiative , short for International Cooperation for Animal Research Using Space, is an international effort to track the migratory patterns of small flying animals using radio transmitters. The project began in 2002 and the tracking system was installed on the International Space Station (ISS) in August 2018, [ 1 ] switched on in July 2019, [ 2 ] and began operations in September 2020. The director for the ICARUS project is Martin Wikelski, director of the Max Planck Institute of Animal Behavior in Radolfzell , Germany.
Since the late 1980s animal tracking via satellite has been accomplished through the use of the Argos system , which was historically limited to larger animals and with which ICARUS hopes to compete. [ 3 ] One major hurdle to tracking the movements of birds and especially insects is creating a transmitter small enough to place on individual animals. The ICARUS project currently implements 5 g radio transmitters that include a GPS receiver , but has plans to use devices weighing less than 1 g in the future. Wikelski believes that within about five years there will be transmitters light enough to attach to a roughly 120 mg honeybee. Since the ISS is only 320 km from the Earth's surface instead of 850 km like the Argos satellites, the ICARUS trackers do not have to produce as strong a radio signal and can therefore be smaller. [ 4 ] The transmitter tags are solar-powered and only activate when a satellite passes over them. About 5,000 to 10,000 tags were expected to be in use at the time of the 2015 launch. [ 5 ] [ 6 ]
After some delays, the installation of the necessary hardware on the International Space Station was completed in 2018. [ 7 ] However, a defect in the ICARUS computer system meant it had to be returned to Earth, fixed, and transported back to the station in 2019. Testing for the monitoring system began in March 2020 [ 8 ] and scientific operations officially started in September. [ 9 ] Data transmissions from the ISS were terminated on March 3, 2022. [ 10 ]
The primary purpose of the ICARUS Initiative is to greatly expand available data on animal migrations for the sake of conservation, although a variety of other fields of study may be advanced by the project's information gathering. Studying the movements of birds and insects may further scientists' understanding of how natural hazards and human interactions affect animal populations. Another application for the data collected by ICARUS is to investigate a possible link between unusual animal movements and impending earthquakes. It has long been hypothesized that some birds and bats can predict earthquakes because of their ability to detect shifts in magnetic fields, but so far the only evidence to support this has been anecdotal. [ 11 ] The project's migratory data may also provide greater insight into the propagation of animal-borne diseases like SARS , bird flu and West Nile virus . [ 5 ] | https://en.wikipedia.org/wiki/ICARUS_Initiative |
iCLIP [ 1 ] [ 2 ] [ 3 ] (individual-nucleotide resolution crossLinking and immunoprecipitation) is a variant of the original CLIP method used for identifying protein-RNA interactions, [ 4 ] which uses UV light to covalently bind proteins and RNA molecules to identify RNA binding sites of proteins. This crosslinking step has generally less background than standard RNA immunoprecipitation (RIP) protocols, because the covalent bond formed by UV light allows RNA to be fragmented, followed by stringent purification, and this also enables CLIP to identify the positions of protein-RNA interactions. [ 5 ] As with all CLIP methods, iCLIP allows for a very stringent purification of the linked protein-RNA complexes by stringent washing during immunoprecipitation followed by SDS-PAGE and transfer to nitrocellulose. The labelled protein-RNA complexes are then visualised for quality control, excised from nitrocellulose, and treated with proteinase to release the RNA, leaving only a few amino acids at the crosslink site of the RNA. [ 6 ]
The RNA is then reverse transcribed, causing most cDNAs to truncate at the crosslink site, and the key innovation and unique feature in the development of iCLIP was to enable such truncated cDNAs to be PCR amplified and sequenced using a next-generation sequencing platform. iCLIP also added a random sequence (unique molecular identifier, UMI ) along with experimental barcodes to the primer used for reverse transcription, thereby barcoding unique cDNAs to minimise any errors or quantitative biases of PCR, and thus improving the quantification of binding events. Enabling amplification of truncated cDNAs led to identification of the sites of RNA-protein interactions at high resolution by analysing the starting position of truncated cDNAs, as well as their precise quantification using UMIs with software called " iCount ". [ 1 ] All these innovations of iCLIP were adopted by later variants of CLIP [ 6 ] such as eCLIP [ 7 ] and irCLIP. [ 8 ] An additional approach to identify protein-RNA crosslink sites is the mutational analysis of read-through cDNAs, such as nucleotide transitions in PAR-CLIP , [ 9 ] or other types of errors that can be introduced by reverse transcriptase when it reads through the crosslink site in standard HITS-CLIP method with the Crosslink induced mutation site (CIMS) analysis. [ 10 ]
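The combination of truncated cDNA starts and UMIs lends itself to a simple counting scheme. The sketch below, which assumes reads have already been demultiplexed and mapped, shows how PCR duplicates can be collapsed by UMI and how a crosslink position can be inferred from the cDNA start; it is a simplified illustration (with simplified strand handling), not the iCount implementation, and the field names are placeholders.

```python
from collections import defaultdict

def count_crosslink_sites(reads):
    """Count unique cDNAs per crosslink site, collapsing PCR duplicates by UMI.

    `reads` is an iterable of (chromosome, strand, read_start, umi) tuples for
    mapped, truncated cDNAs. In iCLIP the crosslink site is taken to be the
    position immediately preceding the cDNA start (the truncation position).
    """
    umis_per_site = defaultdict(set)
    for chrom, strand, read_start, umi in reads:
        # The nucleotide before the cDNA start marks the crosslink site
        # (simplified here: plus strand only is handled exactly).
        site = read_start - 1 if strand == "+" else read_start + 1
        umis_per_site[(chrom, strand, site)].add(umi)

    # Each distinct UMI at a site is counted once, regardless of PCR copies.
    return {site: len(umis) for site, umis in umis_per_site.items()}

# Example: three reads, two of which are PCR duplicates (same start, same UMI).
reads = [
    ("chr1", "+", 1000, "ACGTT"),
    ("chr1", "+", 1000, "ACGTT"),   # PCR duplicate, collapsed
    ("chr1", "+", 1000, "TTGCA"),   # independent cDNA at the same site
]
print(count_crosslink_sites(reads))  # {('chr1', '+', 999): 2}
```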
The quantitative nature of iCLIP enabled pioneering comparison across samples at the level of full RNAs, [ 11 ] or to study competitive binding of multiple RNA-binding proteins [ 12 ] or subtle changes in binding of a mutant protein at the level of binding peaks. [ 13 ] An improved variant of iCLIP (iiCLIP) was recently developed to improve the efficiency and convenience of cDNA library preparation, for example by enzymatically removing adaptor after ligation to minimise artefacts caused by adaptor carry-over, introducing the non-radioactive visualisation of the protein-RNA complex (as done originally by irCLIP [ 8 ] ), increasing efficiency of ligation, proteinase and reverse transcription reactions, and enabling bead-based purification of cDNAs. [ 14 ]
Analysis of CLIP sequencing data benefits from use of customised computational software, much of which is available as part of the Nextflow pipeline for CLIP analysis , and specialised software is available for rapid demultiplexing of complex multiplexed libraries, [ 15 ] comparative visualisation of crosslinking profiles across RNAs, [ 16 ] identification of the peaks of clustered protein-RNA crosslink sites, and identification of sequence motifs enriched around prominent crosslinks. [ 17 ] Moreover, iMaps provides a free CLIP analysis web platform and well-curated community database to facilitate studies of RNA regulatory networks across organisms, with a backend based on the Nextflow pipeline. It is applicable to the many variant protocols of CLIP (such as iCLIP, eCLIP, etc), and can be used to analyse unpublished data in a secure manner, or to obtain public CLIP data in a well-annotated format, along with various forms of quality control, visualisation and comparison. Questions on the experimental and computational challenges are collated on the Q&A CLIP Forum . | https://en.wikipedia.org/wiki/ICLIP |
The ICL 2900 Series was a range of mainframe computer systems announced by the British manufacturer International Computers Limited on 9 October 1974. The company had started development under the name "New Range" immediately on its formation in 1968. The range was not designed to be compatible with any previous machines produced by the company, nor for compatibility with any competitor's machines: rather, it was conceived as a synthetic option , combining the best ideas available from a variety of sources.
In marketing terms, the 2900 Series was superseded by Series 39 in the mid-1980s; however, Series 39 was essentially a new set of machines implementing the 2900 Series architecture, as were subsequent ICL machines branded "Trimetra".
When ICL was formed in 1968 as a result of the merger of International Computers and Tabulators (ICT) with English Electric Leo Marconi and Elliott Automation , the company considered several options for its future product line. These included enhancements to either ICT's 1900 Series or the English Electric System 4 , and a development based on J. K. Iliffe's Basic Language Machine . The option finally selected was the so-called Synthetic Option : a new design conceptualized from scratch.
As the name implies, the design was influenced by many sources, including earlier ICL machines. The design of Burroughs mainframes was influential, although ICL rejected the concept of optimising the design for one high-level language. The Multics system provided other ideas, notably in the area of protection. However, the biggest single outside influence was probably the MU5 machine developed at Manchester University .
The 2900 Series architecture uses the concept of a virtual machine as the set of resources available to a program. The concept of a virtual machine in the 2900 Series architecture differs from the term as used in other environments . Because each program runs in its own virtual machine, the concept may be likened to a process in other operating systems , while the 2900 Series process is more like a thread .
The most obvious resource in a virtual machine is the virtual store (memory). Other resources include peripherals, files, and network connections.
In a virtual machine, code can run in any of sixteen layers of protection, called access levels (or ACR levels, after the Access Control Register which controls the mechanism). The most-privileged levels of operating system code (the kernel ) operate in the same virtual machine as the user application, as do intermediate levels such as the subsystems that implement filestore access and networking. System calls thus involve a change of protection level, but not an expensive call to invoke code in a different virtual machine. Every code module executes at a particular access level, and can invoke the functions offered by lower-level (more privileged) code, but does not have direct access to memory or other resources at that level. The architecture thus offers a built-in encapsulation mechanism to ensure system integrity.
Segments of memory can be shared between virtual machines. There are two kinds of shared memory: public segments used by the operating system (which are present in all virtual machines), and global segments used for application-level shared data: this latter mechanism is used only when there is an application requirement for two virtual machines to communicate. For example, global memory segments are used for database lock tables. Hardware semaphore instructions are available to synchronise access to such segments. A minor curiosity is that two virtual machines sharing a global segment use different virtual addresses for the same memory locations, which means that virtual addresses cannot safely be passed from one VM to another.
The term used in the ICL 2900 Series and ICL Series 39 machines for central processing unit (CPU) is "Order Code Processor" (OCP).
The 2900 architecture supports a hardware-based call stack , providing an efficient vehicle for executing high-level language programs, especially those allowing recursive function calls . This was a forward-looking decision at the time, because it was expected that the dominant programming languages would initially be COBOL and FORTRAN . The architecture provides built-in mechanisms for making procedure calls using the stack, and special-purpose registers for addressing the top of the stack and the base of the current stack frame.
Off-stack data is typically addressed via a descriptor . This is a 64-bit structure containing a 32-bit virtual address and 32 bits of control information. The control information identifies whether the area being addressed is code or data; in the case of data, the size of the items addressed (1, 8, 32, 64, or 128 bits); a flag to indicate whether hardware array-bound-checking is required; and various other refinements.
The 32-bit virtual address comprises a 14-bit segment number and an 18-bit displacement within the segment.
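As a rough illustration of the addressing scheme described above, the following sketch splits a 32-bit virtual address into its 14-bit segment number and 18-bit displacement, and packs a 64-bit descriptor from a virtual address and a 32-bit control word. The exact bit positions (segment number in the high-order bits, control word in the upper half of the descriptor) are assumptions made for the example rather than documented facts about the 2900 hardware.

```python
def split_virtual_address(va: int):
    """Split a 2900-style 32-bit virtual address into its two fields.

    The architecture described above uses a 14-bit segment number and an
    18-bit displacement; placing the segment number in the top bits is an
    assumption made here for illustration.
    """
    assert 0 <= va < 2**32
    segment = (va >> 18) & 0x3FFF        # top 14 bits
    displacement = va & 0x3FFFF          # low 18 bits
    return segment, displacement

def make_descriptor(va: int, control: int) -> int:
    """Pack a 64-bit descriptor: 32 control bits plus a 32-bit virtual address.

    Which half carries the control information, and the meaning of individual
    control bits (item size, bound checking, code/data), is assumed here purely
    to show the overall 64-bit structure.
    """
    return ((control & 0xFFFFFFFF) << 32) | (va & 0xFFFFFFFF)

seg, disp = split_virtual_address(0x0004_0010)
print(seg, disp)                 # 1 16 under the assumed field layout
print(hex(make_descriptor(0x0004_0010, 0x8000_0001)))
```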
The order code is not strictly part of the 2900 architecture. This fact has been exploited to emulate other machines by microcoding their instruction sets . However, in practice, all machines in the 2900 series implement a common order code or instruction set, known as the PLI (Primitive Level Interface). This is designed primarily as a target for high-level language compilers. The most powerful machines, such as the 2980 and 2988, implemented all instructions in hardware, whereas the others used microcoded firmware.
There are several registers, each designed for a special purpose. An accumulator register (ACC) is available for general-purpose use, and may be 32, 64, or 128 bits in size. The B register is used for indexing into arrays; the LNB (Local Name Base) register points to the base of the current stack frame, with the SF (Stack Front) register pointing to the movable 'top' of the stack; the DR register is used for holding descriptors for addressing into the heap, and so on. There are also two 32-bit pointers to off-stack data; XNB (eXtra Name Base) and LTB (Linkage Table Base).
Data formats recognized by the PLI instructions include 32-bit unsigned integers ; 32-bit and 64-bit twos-complement integers; 32-bit, 64-bit and 128-bit floating point; and 32-bit, 64-bit, and 128-bit packed decimal . Contrary to C and UNIX convention, the Boolean value true is represented as zero and false is represented as minus one. Strings are stored as arrays of 8-bit characters, conventionally encoded in EBCDIC (although ICL's EBCDIC has minor variations from IBM's version). It is possible to use ISO (essentially ASCII ) instead of EBCDIC by setting a control bit in a privileged register; among other things, this affects certain decimal conversion instructions.
Because some of the PLI instructions, notably those for procedure calling, are very powerful (especially system calls), instruction rates on the 2900 Series are not always directly comparable with those on competitors' hardware. ICL marketing literature tended to use the concept of "IBM equivalent MIPS", being the MIPS rating of an IBM mainframe that achieved the same throughput in application benchmarks. The efficiencies achieved by the 2900 architecture, notably the avoidance of system call overheads, compensated for relatively slow raw hardware performance.
The first machines announced in the 2900 Series were the 2980 and 2970. The 2980 allowed one or two order code processors (OCPs), each operating at up to 3 million instructions per second, with real memory configurable up to 8 megabytes, with a 500 nanosecond access time.
The 2980 was initially the most powerful of ICL's New Range mainframe computers. In addition to the OCPs, it consisted of a store multiple access controller (SMAC) and one or more store access controllers (SAC), a general peripheral controller (GPC), one or more disc file controllers (DFC) and a communications link controller (CLC), together with disc drives (a typical configuration would have eight EDS 200 drives), tape decks, an operating station (OPER), line printers, and card readers. It could run the ICL VME (VME/B, VME/K) or the Edinburgh Multiple Access System (EMAS) operating system. A typical 2980 configuration would cost about £2 million (equivalent to £16 million in 2023).
Unlike the 2980, the 2970 and the subsequent 2960 were microcoded, and thus allowed emulation of instruction sets such as that of the older 1900 Series or the System 4.
A 2900 Series machine was constructed from a number of functional modules, each contained in a separate cabinet. Peripheral devices were connected using ICL's Primitive Interface (Socket/Plug and cable set) to a Port Adapter on the SMAC. Logical addressing was employed and used a group scheme to identify system components in terms of Ports, Trunks, and Streams.
A Trunk was a hardware address within a Port to which a peripheral controller would be assigned; the term also served as the generic name for a controller of a number of Stream devices. A Stream was the generic name for the channel through which individual peripheral devices could be referenced.
The boot process for the 2960 Series merits special mention: the OCP contained a mini OPER terminal and a cassette deck. At boot, the OCP would perform its Initial Program Load (IPL) from the nominated IPL device. The IPL code provided the means for the OCP to discover the system's hardware configuration by enquiring down the Stream(s), Trunk(s), and Port(s) to find the default or manually elected boot device for the microcode set and/or Operating System to be booted. This process was called a GROPE or General Reconnaissance Of Peripheral Equipment. The cassette load method also allowed engineering staff to load and execute diagnostic software.
The first machines were subsequently replaced by a family of machines based on the 2966 mid-range design, which was less costly to build and used serial rather than parallel interconnections . The 2966 was extended upward in performance to the 2988 and downward to the 2958, augmented by dual processor versions, to cover the entire performance range. [ 2 ] | https://en.wikipedia.org/wiki/ICL_2900_Series |
The ICL Series 39 was a range of mainframe and minicomputer computer systems released by the UK manufacturer ICL in 1985. The original Series 39 introduced the "S3L" (whose corrupt pronunciation resulted in the name "Estriel" [ 1 ] : 341 ) processors and microcodes , and a nodal architecture, which is a form of Non-Uniform Memory Access .
The Series 39 range was based upon the New Range concept and the VME operating system from the company's ICL 2900 line, and was introduced as two ranges: a smaller range built around the DM1 node and a larger range built around the Estriel ("S3L") node.
Alongside the "S3L" processors and microcodes, the original Series 39 introduced a nodal architecture (see ICL VME ), a form of Non-Uniform Memory Access which allowed nodes to be up to 1,000 metres (3,300 ft) apart.
The Series 39 range introduced Nodal Architecture, a novel implementation of distributed shared memory that can be seen as a hybrid of a multiprocessor system and a cluster design. Each machine consists of a number of nodes , and each node contains its own order-code processor and main memory. Virtual machines are typically located (at any one time) on one node, but have the capability to run on any node and to be relocated from one node to another. Discs and other peripherals are shared between nodes. Nodes are connected using a high-speed optical bus (Macrolan) using multiple fibre optic cables, which is used to provide applications with a virtual shared memory. Memory segments that are marked as shared (public or global segments) are replicated to each node, with updates being broadcast over the inter-node network. Processes which use unshared memory segments (nodal or local) run in complete isolation from other nodes and processes. [ 2 ]
The semaphore instructions prove their worth by controlling access to the shared writable memory segments while allowing the contents to be moved around efficiently.
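The combination of replicated global segments and semaphore-guarded updates can be pictured with a small model. The sketch below uses an ordinary thread lock in place of the hardware semaphore instructions and a Python dictionary in place of a replicated memory segment; the class and method names are purely illustrative and do not correspond to any VME or Series 39 interface.

```python
import threading

class GlobalSegment:
    """Toy model of a shared writable segment, e.g. a database lock table.

    On Series 39 the segment would be replicated to every node and updates
    broadcast over the inter-node network; here a plain lock stands in for
    the hardware semaphore instructions that serialise access.
    """
    def __init__(self):
        self._semaphore = threading.Lock()
        self._lock_table = {}   # resource id -> owning virtual machine

    def claim(self, resource, vm_id):
        with self._semaphore:            # stands in for a semaphore instruction
            if resource in self._lock_table:
                return False             # already held by another VM
            self._lock_table[resource] = vm_id
            return True

    def release(self, resource, vm_id):
        with self._semaphore:
            if self._lock_table.get(resource) == vm_id:
                del self._lock_table[resource]

segment = GlobalSegment()
print(segment.claim("record-42", vm_id="VM-A"))   # True
print(segment.claim("record-42", vm_id="VM-B"))   # False, VM-A holds it
```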
Overall, a well configured Series 39 with VME had an architecture which can provide a significant degree of proofing against disasters, a nod to the abortive VME/T ideas of the previous decade.
All Series 39 machines were supported by a set of waist-height peripheral 'Cabinets' (connected by fibre optic cables through one or more Multi Port Switch Units, or MPSUs) providing disk storage capabilities.
Cabinet 1 was the name given to the DM1 Series 39 Level 30 (and 20/15/25/35 variants) core system.
All Series 39 machines also featured a Node Support Computer (NSC) hosted on their Storage Motherboards. This was based on the x86 architecture and acted much like today's iLO or DRAC cards on HP and Dell servers, allowing support staff to manage the Nodes remotely, including the ability to completely stop and restart the main Nodes.
In the mid-1980s the Series 39 Level 30 was supplemented by a Level 20 variant, which was a forcibly underclocked Level 30 (using wire links on a daughterboard). In the late 1980s these were both replaced by Level 15, 25 and 35 variants, which were likewise differentiated by clock speed but featured more memory than their predecessors and could also be fitted with dual OCP and IOC motherboards for even more computing and I/O capability.
The early 1990s saw upgrades to the Series 39 range. DX System products were introduced to replace the DM1 systems, appearing in product line-ups as early as late 1991. [ 3 ] : 84 The Essex project led to the introduction of the SX System products in 1990 to replace the Estriel ("S3L") systems. [ 4 ] These machines featured a new "very sophisticated pipelined processor" design that provided support for the ICL 2900 order code by employing a low-level "implementation order code" known as Picode. Picode is comparable to microcode but operates at a much higher level than microcode from earlier machines and at a slightly lower level than ICL 2900 instructions, operating within similar constraints to those applying to conventional machine instructions. Picode instruction sequences are fed into instruction pipelines and provide atomic results, being uninterruptable. [ 5 ]
The Series 39 SX and DX products were replaced by the SY and DY products respectively, these comprising the Trimetra range along with LY products. The SY node architecture abandoned ECL in favour of CMOS technology, introduced support for symmetric multiprocessing involving up to four instruction processors per node, refined the instruction processing architecture, and provided cheaper multi-node connectivity. [ 6 ]
In contrast, the Trimetra DY system sought to use commodity hardware to provide OpenVME support through the use of emulation techniques. ICL's Millennium vision, as realised by Trimetra, entailed the provision of OpenVME in the form of an OpenVME Subsystem (OVS) alongside Microsoft Windows NT or SCO UnixWare running in a UnixWare/NT Subsystem (UNS). Whereas Trimetra SY and LY (a reduced footprint product based on SY) employed dedicated hardware to provide OVS functionality, alongside a Fujitsu-supplied Intel processor module providing UNS functionality, Trimetra DY offered an approach that supported either OVS or UNS functionality running entirely on an Intel processor system. To provide OVS, an emulator for the SY instruction set, together with input/output functionality and a platform abstraction layer, were deployed on the VxWorks operating system. [ 7 ]
With ICL having identified markets seeking higher-performance Unix or NT systems without a need for OpenVME compatibility, introducing the Trimetra Xtraserver product featuring from four to twelve 200 MHz Pentium Pro processors, [ 8 ] Trimetra in turn was replaced by Fujitsu's mainframe platform, Nova , providing the Trimetra architecture on generic Unisys ES7000 Intel -based server hardware.
Nova itself was phased out in 2007 and replaced with SuperNova , which runs OpenVME on top of Windows Server or Linux, using as few as two CPUs on generic Wintel server hardware.
The transition of the "ICL mainframe" to a pure software product was thus complete, enabling Fujitsu to concentrate on VME support and development without having to keep up with hardware technology. | https://en.wikipedia.org/wiki/ICL_Series_39 |
Integrated computational materials engineering (ICME) involves the integration of experimental results, design models, simulations, and other computational data related to a variety of materials used in multiscale engineering and design. Central to the achievement of ICME goals has been the creation of a cyberinfrastructure , a Web-based, collaborative platform which provides the ability to accumulate, organize and disseminate knowledge pertaining to materials science and engineering to facilitate this information being broadly utilized, enhanced, and expanded.
The ICME cyberinfrastructure provides storage, access, and computational capabilities for an extensive network of manufacturing, design, and life-cycle simulation software. [ 1 ] Within this software framework, data is archived, searchable and interactive, offering engineers and scientists a vast database of materials-related information for use in research, multiscale modeling , simulation implementation, and an array of other activities in support of more efficient, less costly product development. Furthermore, the ICME cyberinfrastructure is expected to provide the capability to access and link application codes, including the development of protocols necessary to integrate hierarchical modeling approaches. With an emphasis on computational efficiency, experimental validation of models, and protecting intellectual property, the cyberinfrastructure assimilates 1) process-microstructure-property relations, 2) development of constitutive materials models that accurately predict multiscale material behaviors admitting microstructure/inclusions and history effects, 3) access to shared databases of analytical and experimental data, and 4) material models. As such, it is also crucial to identifying gaps in materials knowledge, which, in turn, guides the development of new materials theories, models, and simulation tools. Such a community-based knowledge foundation ultimately enables materials informatics systems that fuse high fidelity experimental databases with models of physical processes.
In addition, the vision of the ICME cyberinfrastructure is compatible with the National Science Foundation 's (NSF) Cyberinfrastructure Vision for 21st Century Discovery , which advocates development and deployment of human-centered information technology (IT) systems that address the needs of science and engineering communities and open new opportunities for enhancing education and workforce development programs. According to the NSF directive, IT systems, such as the ICME cyberinfrastructure, should provide access to tools, services, and other networked resources, including high-performance computing facilities, data repositories, and libraries of computational tools, enabling and reliably supporting secure and efficient nationwide or global virtual organizations spanning across administrative boundaries. [ 2 ]
The National Materials Advisory Board (NMAB) of the National Academy of Engineering (NAE) committee proposed the following definition for the term ICME cyberinfrastructure:
"The Internet-based collaborative materials science and engineering research and development environments that support advanced data acquisition, data and model storage, data and model management, data and model mining, data and model visualization, and other computing and information processing services required to develop an integrated computational materials engineering capability." [ 3 ]
According to NMAB's vision, the building blocks of the ICME cyberinfrastructure are the individual web sites ( Web Portals ) which offer access to information, data, and tools, each established for specific purposes by different organizations. Linked together, these "constituent" Web Portals will form the ICME cyberinfrastructure, or ICME "Supply-Chain," i.e., a series of well-established, capable and viable organizations. [ 4 ] These organizations are to provide necessary portions of the ICME cyberinfrastructure's value chain.
For example, Mississippi State University has created an ICME cyberinfrastructure where different models, codes, and experimental structure-property data are available and discussed. Researchers are encouraged to upload their own models, codes, and experimental data with associated references for others to use. | https://en.wikipedia.org/wiki/ICME_cyberinfrastructure |
iCOMP for Intel Comparative Microprocessor Performance was an index published by Intel used to measure the relative performance of its microprocessors .
Intel was motivated to create the iCOMP rating by research which showed that many computer buyers assumed that the clock speed – the “MHz” rating – was indicative of performance, regardless of the processor type. iCOMP ratings are based on standard benchmarks . [ 1 ] The rating is calculated as a weighted combination of the following benchmark components.
The largest component is the integer CPU benchmark from Ziff-Davis Labs (ZDbenchCPU), which is derived from the earlier PC Labs benchmarks. Whetstone (as implemented in PowerMeter) is used for 16-bit floating-point, and SPECint 92 and SPECfp 92 are used for the 32-bit components. [ 1 ]
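Intel's exact weighting coefficients are not reproduced above, so the following sketch only illustrates the general shape of such a rating: benchmark results expressed relative to a baseline machine, combined with weights, and scaled so that the baseline scores a fixed value. The benchmark names, weights and scores are placeholders, and whether Intel combined the components arithmetically or geometrically is not specified here; an arithmetic weighting is used for the sketch.

```python
def icomp_style_index(scores, weights, baseline_scores, baseline_value=100):
    """Combine benchmark results into a single index relative to a baseline.

    `scores` and `baseline_scores` map benchmark names to raw results
    (higher is better); `weights` are the relative contributions. The
    benchmark names and weights used below are placeholders, not Intel's
    published iCOMP coefficients.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    relative = sum(
        weights[name] * (scores[name] / baseline_scores[name])
        for name in weights
    )
    return baseline_value * relative

weights = {                 # hypothetical split, dominated by 16-bit integer
    "zd_cpu_int16": 0.65,
    "whetstone_fp16": 0.05,
    "specint92": 0.20,
    "specfp92": 0.10,
}
baseline = {"zd_cpu_int16": 50, "whetstone_fp16": 4, "specint92": 12, "specfp92": 9}
candidate = {"zd_cpu_int16": 100, "whetstone_fp16": 8, "specint92": 30, "specfp92": 20}
print(round(icomp_style_index(candidate, weights, baseline)))  # 212
```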
There were three revisions of the iCOMP index. Version 1.0 (1992) was benchmarked against the 486SX 25, while version 2.0 (1996) was benchmarked against the Pentium 120. [ 2 ] For Version 3.0 (1999) it was Pentium II at 350 MHz. [ 3 ]
This microcomputer - or microprocessor -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/ICOMP_(index) |
ICRANet , the International Center for Relativistic Astrophysics Network , is an international organization which promotes research activities in relativistic astrophysics and related areas. Its members are four countries and three Universities and Research Centers: Armenia , the Federative Republic of Brazil , Italian Republic , the Vatican City State , the University of Arizona (USA), Stanford University (USA) and ICRA .
ICRANet headquarters are located in Pescara , Italy.
In 1985, the International Center for Relativistic Astrophysics [ 1 ] ICRA was founded by Remo Ruffini ( University of Rome "La Sapienza" ) together with Riccardo Giacconi ( Nobel Prize for Physics 2002 [ 2 ] ), Abdus Salam (Nobel Prize for Physics 1979 [ 3 ] ), Paul Boynton ( University of Washington ), George Coyne (former director of the Vatican observatory ), Francis Everitt ( Stanford University ) and Fang Li-Zhi ( University of Science and Technology of China ).
The Statute and the Agreement establishing ICRANet were signed on March 19, 2003, and they were recognized in the same year by the Republic of Armenia and the Vatican City State . ICRANet has been created in 2005 by a law of the Italian Government, ratified by the Italian Parliament and signed by the President of the Italian Republic Carlo Azeglio Ciampi on February 10, 2005. [ 4 ] The Republic of Armenia, Italian Republic , the Vatican City State, ICRA, the University of Arizona and the Stanford University are the founding members.
On September 12, 2005, ICRANet Steering Committee was established and had its first meeting: Remo Ruffini and Fang Li-Zhi were appointed respectively Director and Chairman of the Steering Committee. On December 19, 2006 ICRANet Scientific Committee was established and had its first meeting in Washington DC. Riccardo Giacconi was appointed Chairman and John Mester Co-Chairman.
On September 21, 2005 the Director of ICRANet signed, together with the then Ambassador of Brazil in Rome Dante Coelho De Lima the adhesion of the Federative Republic of Brazil to ICRANet. The entrance of Brazil, requested by the then President of Brazil Luiz Inácio Lula da Silva has been unanimously ratified by the Brazilian Parliament. On August 12, 2011, the then President of Brazil Dilma Rousseff signed the entrance of Brazil in ICRANet. [ 5 ]
By the beginning of the twentieth century the new branch of mathematics, tensor calculus, was developed in the works of Gregorio Ricci Curbastro and Tullio Levi Civita of the University of Padua and the University of Rome "La Sapienza" . Marcel Grossmann of the University of Zurich who had a deep knowledge of the Italian school of geometry and who was close to Einstein introduced to him these concepts. The collaboration between Einstein and Grossmann was essential for the development of General Relativity.
Remo Ruffini and Abdus Salam in 1975 established the Marcel Grossmann meetings (MG) on Recent Developments in Theoretical and Experimental General Relativity, Gravitation, and Relativistic Field Theories, [ 6 ] which take place every three years in different countries, gathering more than 1000 researchers. MG1 and MG2 were held in 1975 and in 1979 in Trieste ; MG3 in 1982 in Shanghai ; MG4 in 1985 in Rome; MG5 in 1988 in Perth ; MG6 in 1991 in Kyoto ; MG7 in 1994 at Stanford ; MG8 in 1997 in Jerusalem ; MG9 in 2000 in Rome; MG10 in 2003 in Rio de Janeiro ; MG11 in 2006 in Berlin ; MG12 in 2009 in Paris ; MG13 in 2012 in Stockholm ; MG14 in 2015 and MG15 in 2018 both in Rome. Since its foundation, ICRANet has always played a leading role in the organization of those meetings.
ICRANet has been Organizational Associate [ 7 ] of the International Year of Astronomy 2009 [ 8 ] and supported the global coordination of IYA2009 financially. In this occasion ICRANet organized a series of international meetings [ 9 ] under the general title "The Sun, the Star, the Universe and General Relativity" including: the 1st Zeldovich meeting [ 10 ] (Minsk, Belarus), the Sobral Meeting [ 11 ] (Fortaleza, Brazil), the 1st Galileo - Xu Guangqi meeting [ 12 ] (Shanghai, China), the 11th Italian-Korean Symposium on Relativistic Astrophysics [ 13 ] (Seoul, South Korea) and the 5th Australasian Conference - Christchurch Meeting [ 14 ] (Christchurch, New Zealand).
Under the initiative of the United Nations and UNESCO , 2015 was declared the International Year of Light , and it represented the centenary of the formulation of the equations of general relativity by Albert Einstein , and the fiftieth anniversary of the birth of relativistic astrophysics. [ 15 ] ICRANet was a "Bronze Associate" sponsor of those celebrations. [ 16 ]
In 2015, ICRANet also organized a series of international meetings [ 17 ] including: the Second ICRANet César Lattes Meeting [ 18 ] (Niterói – Rio de Janeiro – João Pessoa – Recife – Fortaleza, Brazil), the International Conference on Gravitation and Cosmology [ 19 ] / the 4th Galileo-Xu Guangqi meeting [ 20 ] (Beijing, China), Fourteenth Marcel Grossmann Meeting [ 21 ] - MG14 (Rome, Italy), the 1st ICRANet Julio Garavito Meeting on Relativistic Astrophysics [ 22 ] (Bucaramanga – Bogotá, Colombia), the 1st Sandoval Vallarta Caribbean Meeting on Relativistic Astrophysics [ 23 ] (Mexico City, Mexico).
The organization consists of the Director, the Steering Committee and the Scientific Committee. The members of the committees are representatives of the countries and member institutions. ICRANet has a number of permanent Faculty positions. Their activities are supported by administrative staff and secretariat personnel. ICRANet financing is based by Statute on the funds provided by the governments and by voluntary contributions, donations.
The initial Director of ICRANet appointed in 1985 was Remo Ruffini . [ 24 ] Ruffini remains Director as of 2024 [update] . [ 24 ]
In 2023 the Steering Committee consisted of representatives of the member states and member institutions. [ 25 ]
The current Chairperson (2019) of the ICRANet Steering Committee is Francis Everitt .
The first Chairperson of the Scientific Committee was Riccardo Giacconi, Nobel Prize for Physics in 2002, who ended his term in 2013. The current (2019) Chairperson of the Scientific Committee is Massimo Della Valle.
The Scientific Committee in 2019 consists of: [ 26 ] Prof. Narek Sahakyan (Armenia), Dr. Barres de Almeida Ulisses (Brazil), Dr. Carlo Luciano Bianco (ICRA), Prof. Massimo Della Valle (Italy), Prof. John Mester (Stanford University), Prof. Chris Fryer (University of Arizona) and Dr. Gabriele Gionti (Vatican City State).
The Faculty [ 27 ] in 2019 consists of Professors Ulisses Barres de Almeida, Vladimir Belinski , Carlo Luciano Bianco, Donato Bini, Pascal Chardonnet, Christian Cherubini, Simonetta Filippi, Robert Jantzen, Roy Patrick Kerr, Hans Ohanian, Giovanni Pisani, Brian Mathew Punsly, Jorge Rueda, Remo Ruffini, Gregory Vereshchagin, and She-Sheng Xue, and is supported by an Adjunct Faculty [ 28 ] made up of more than 30 internationally renowned scientists participating in ICRANet activities, and by some eighty "Lecturers" [ 29 ] and "Visiting Professors". [ 30 ] Among these are the Nobel Laureates Murray Gell-Mann , Theodor Hänsch , Gerard ’t Hooft and Steven Weinberg .
Currently [ when? ] ICRANet members are four countries and three Universities and research centers.
Member states: the Republic of Armenia, the Federative Republic of Brazil, the Italian Republic and the Vatican City State.
Member institutions: the University of Arizona (USA), Stanford University (USA) and ICRA.
ICRANet has signed collaboration agreements with over 60 institutions, universities and research centers in different countries. [ 31 ]
The network is composed of several seats and centers. Seat agreements, establishing rights and privileges, including extraterritoriality , have been signed for the seat in Pescara in Italy, for the seat in Rio de Janeiro in Brazil and for the seat in Yerevan in Armenia. The Seat Agreement for Pescara [ 32 ] has been ratified on May 13, 2010. The Seat agreement for Yerevan has been unanimously approved [ 33 ] by the Parliament of Armenia on November 13, 2015.
High-speed optical fiber connection with different locations are made possible by the connection to the pan-European data network for the research and education community ( GÉANT ) through the GARR network.
Currently, ICRANet centers [ 34 ] are operative at the locations described below.
ICRANet headquarters are located in Pescara , Italy. This center coordinates ICRANet activities and yearly meetings of the Scientific and the Steering committees are usually held there. International meetings such as the Italian-Korean Symposia on Relativistic Astrophysics [ 35 ] are regularly held in this center. Scientific activities in Pescara center include the fundamental research on early cosmology by the Russian school guided by Vladimir Belinski .
Activities of the ICRANet Seat at Villa Ratti in Nice include the coordination of the IRAP PhD program, as well as scientific activities connected with the ultra high energy observations by the University of Savoy and the VLT observations performed by the Côte d’Azur Observatory, which involve the thesis works of IRAP PhD students. The University of Savoy is the closest French lab to the CERN .
Since January 2014, the ICRANet Center in Yerevan [ 36 ] has been established at the Presidium of the National Academy of Sciences of Armenia , [ 37 ] at Marshall Baghramian Avenue, 24a. Scientific activities in this center are coordinated by the Director, Dr. Narek Sahakyan. In 2014, the Government of Armenia approved the Agreement to establish the ICRANet international center in Armenia. The Seat Agreement was signed in Rome on February 14, 2015, by the director of ICRANet, Remo Ruffini, and the Ambassador of Armenia in Italy, Mr. Sargis Ghazaryan. On November 13, 2015, the Parliament of Armenia unanimously approved the Seat Agreement. [ 33 ] Since January 2016 the ICRANet Armenia center has been registered at the Ministry of Foreign Affairs as an international organization. [ 38 ] The main areas of scientific research in ICRANet-Armenia are in the fields of relativistic astrophysics, astroparticle physics, X-ray astrophysics, high and very high energy gamma-ray astrophysics, and high energy neutrino astrophysics. The center has been a full member of the MAGIC international collaboration since 2017. Also, the center is actively involved in development of the Open Universe Initiative. In Armenia, the ICRANet center collaborates with other scientific institutions from the Academy and Universities, and helps to organize joint international meetings and workshops, summer schools for PhD students and mobility programs for scientists in the field of Astrophysics. The ICRANet center in Armenia coordinates ICRANet activities in the area of Central-Asian and Middle-Eastern countries.
A summer school and an international scientific conference dedicated to the issues of Relativistic Astrophysics "1st Scientific ICRANet Meeting in Armenia: Black Holes: the largest energy sources in the Universe" were held in Armenia from June 28 to July 4, 2014. [ 39 ] [ 40 ]
The Seat of ICRANet in Rio de Janeiro has been established initially on the premises granted by CBPF , with the possible expansion to the Cassino da Urca . A school of Cosmology and Astrophysics is being developed jointly with Brazilian institutions. The 2nd ICRANet César Lattes Meeting devoted to relativistic astrophysics was held in Rio de Janeiro in 2015. [ 18 ]
Currently (2019) ICRANet has signed scientific collaboration agreements with 17 Brazilian universities, institutions and research centers. [ 41 ]
Two specific programs initiated by ICRANet are currently underway.
The ICRANet-Minsk center has been established at the National Academy of Science of Belarus (NASB), with whom ICRANet has signed a cooperation agreement on 2013. [ 42 ] [ 43 ] The Protocol for the opening of the ICRANet-Minsk center has been signed in April 2016. [ 44 ] [ 45 ] The "First ICRANet-Minsk workshop on high energy astrophysics" has been held at the ICRANet-Minsk center from 26 to 28 of April 2017. [ 46 ]
The ICRANet Center in Isfahan has been established at the Isfahan University of Technology. The Protocol of cooperation, [ 47 ] signed in 2016 by Remo Ruffini, Director of ICRANet, and Mahnoud Modarres-Hashemi, Rector of the Isfahan University of Technology, includes the promotion and development of scientific and technological research in the fields of cosmology, gravitation and relativistic astrophysics. It also includes the organization of joint international conferences and workshops, institutional exchanges for students, researchers and faculty members.
The present Chairman of the ICRANet Steering Committee Francis Everitt is responsible for the ICRANet Center at the Leland Stanford Junior University . His notable activity has been the conception, development, launch, data acquisition, and elaboration of the final data analysis of the NASA Gravity Probe B mission, one of the most complex physics experiments ever performed in space.
The first Chairman of the ICRANet Steering Committee Fang Li-Zhi developed the collaboration with the Physics Department of the University of Arizona in Tucson. The collaboration with its Astronomy Department is promoted by David Arnett .
Since 2005 ICRANet co-organizes an International Ph.D. program in Relativistic Astrophysics — International Relativistic Astrophysics Ph.D. Program , IRAP-PhD, the first joint PhD astrophysics program with: ASI - Italian Space Agency (Italy); Bremen University (Germany); Carl von Ossietzky University of Oldenburg (Germany); CAPES - Brazilian Federal Agency for Support and Evaluation of Graduate Education (Brazil); CBPF - Brazilian Centre for Physics Research (Brazil); CNR - National Research Council (Italy); FAPERJ -Foundation "Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro" (Brazil); ICRA - International Center for Relativistic Astrophysics (Italy); ICTP - Abdus Salam International Centre for Theoretical Physics (Italy); IHES - Institut Hautes Etudes Scientifiques (France); Indian centre for space physics (India); INFN - National Institute for Nuclear Physics (Italy); NAS RA - Armenian National Academy of Sciences (Armenia); Nice University Sophia Antipolis (France); Observatory of the Côte d'Azur (France); Rome University - “Sapienza” (Italy); Savoy University (France); TWAS - Academy of sciences for the developing world; UAM - Metropolitan Autonomous University (Mexico); UNIFE - University of Ferrara (Italy). [ 48 ] Among the associated centers, there are both institutes devoted to theory and others devoted to experiments and observations. In that way, PhD students can have a wider education on theoretical relativistic astrophysics and put it in practice. The official language of the IRAP PhD is English and students have also the opportunity to learn the national language of their hosting country, attending several academic courses in the partner Universities.
By 2019, 122 students were enrolled in the IRAP PhD program: [ 49 ] 1 from Albania, 4 from Argentina, 8 from Armenia, 1 from Austria, 2 from Belarus, 16 from Brazil, 5 from China, 9 from Colombia, 3 from Croatia, 5 from France, 5 from Germany, 7 from India, 2 from Iran, 38 from Italy, 2 from Kazakhstan, 1 from Lebanon, 1 from Mexico, 1 from Pakistan, 4 from Russia, 1 from Serbia, 1 from Sweden, 1 from Switzerland, 1 from Saudi Arabia, 2 from Taiwan and 1 from Turkey.
The IRAP-PhD program was the only European PhD program in Astrophysics awarded the Erasmus Mundus label and funded by the European Commission in 2010–2017.
ICRANet main goals are training, education and research in the field of relativistic astrophysics, cosmology, theoretical physics and mathematical physics.
Its main activities are devoted to promote the international scientific co-operation and to carry on scientific research.
According to the 2018 ICRANet Scientific Report, [ 50 ] the main areas of scientific research at ICRANet are relativistic astrophysics, cosmology, theoretical physics and mathematical physics.
Between 2006 and 2019, ICRANet has released over 1800 scientific publications in refereed journals such as Physical Review , the Astrophysical Journal , Astronomy and Astrophysics etc., in its various fields of research.
New scientific concepts and terms introduced by ICRANet scientists:
Black hole (Ruffini, Wheeler 1971) [ 51 ]
Ergosphere (Rees, Ruffini, Wheeler, 1974) [ 52 ]
Pursue and plunge (Rees, Ruffini, Wheeler, 1974) [ 52 ]
Black hole mass formula (Christodoulou, Ruffini, 1971) [ 53 ]
Reversible and irreversible transformations of black holes (Christodoulou, Ruffini, 1971) [ 53 ]
Dyadosphere (Damour, Ruffini, 1975; Preparata, Ruffini, Xue, 1998) [ 54 ] [ 55 ]
Dyadotorus (Cherubini et al., 2009) [ 56 ]
Induced Gravitational Collapse (Rueda, Ruffini, 2012) [ 57 ]
Binary-driven Hypernova (Ruffini et al., 2014) [ 58 ]
Cosmic matrix (Ruffini et al., 2015) [ 59 ]
The Galileo-Xu Guangqi meetings [ 60 ] have been created in the name of Galileo and Xu Guangqi, the collaborator of Matteo Ricci (Ri Ma Dou), generally recognized for bringing to China the works of Euclid and Galileo and for his strong commitment to the process of modernization and scientific development of China. The 1st Galileo - Xu Guangqi Meeting was held in Shanghai, China, in 2009. The 2nd Galileo - Xu Guangqi meeting took place in Hanbury Botanic Gardens (Ventimiglia, Italy) and Villa Ratti (Nice, France) in 2010. The 3rd and 4th Galileo - Xu Guangqi meetings were both held in Beijing, China, respectively in 2011 and 2015.
The Italian-Korean Symposia on Relativistic Astrophysics [ 61 ] is a series of biannual meetings, alternatively organized in Italy and in Korea since 1987. The symposia discussions cover topics in astrophysics and cosmology, such as gamma-ray bursts and compact stars, high energy cosmic rays, dark energy and dark matter, general relativity, black holes, and new physics related to cosmology.
These workshops represent a one-week dialogues on Relativistic Field Theories in Curved Space, which is inspired to the work of E. C. G. Stueckelberg. [ 62 ] Invited lectures were delivered by Professors Abhay Ashtekar, Thomas Thiemann, Gerard 't Hooft and Hagen Kleinert .
The Zeldovich Meetings [ 63 ] are a series of international conferences held in Minsk, in honor of Ya. B. Zeldovich, one of the fathers of the Soviet Atomic Bomb and the founder of the Russian School on Relativistic Astrophysics, which celebrate and discuss his wide research interests, ranging from chemical physics, elementary particle and nuclear physics to astrophysics and cosmology. The 1st Zeldovich Meeting was held at the Belarusian State University in Minsk, from 20 to 23 April 2009; the 2nd Zeldovich Meeting was held at the National Academy of Sciences of Belarus from 10 to 14 March 2014, to celebrate Ya. B. Zeldovich 100th Anniversary; the 3rd Zeldovich Meeting has been held at the National Academy of Sciences of Belarus from 23 to 27 April 2018.
ICRANet has also organized a number of other international meetings and workshops.
In the framework of the IRAP PhD program, ICRANet has organized several PhD schools: 11 of them have been held in Nice (France), 3 in Les Houches , 1 in Ferrara (Italy), 1 in Pescara (Italy) and 1 in Beijing (China). [ 66 ]
ICRANet has developed a program [ 67 ] of short and long term visits for scientific collaboration.
Prominent personalities have carried out their activities at ICRA and ICRANet, among them: Prof. Riccardo Giacconi, Nobel Prize for Physics in 2002; Gerardus 't Hooft, Dutch physicist and Nobel Prize for Physics in 1999; Steven Weinberg, Nobel Prize in 1979; Murray Gell-Mann, Nobel Prize in 1969; Subrahmanyan Chandrasekhar, Nobel Prize in 1983; Theodor Hänsch, Nobel Prize in 2005; Vitaly Ginzburg; Francis Everitt, Chairman of the Steering Committee of ICRANet; Isaak Khalatnikov , Russian physicist and former director of the Landau Institute for Theoretical Physics from 1965 to 1992; Roy Kerr , New Zealand mathematician and discoverer of the " Kerr Metric "; Thibault Damour ; Demetrios Christodoulou ; Hagen Kleinert ; Neta and John Bahcall; Tsvi Piran ; Charles Misner ; Robert Williams ; José Gabriel Funes ; Fang Li-Zhi ; Rashid Sunyaev .
ICRANet co-organizes with ICRA Joint Astrophysics Seminar [ 68 ] at the Department of Physics of University "La Sapienza" in Rome. All institutions collaborating with ICRANet, as well as ICRANet centers, participate at those seminars.
The main objective of the Brazilian Science Data Center [ 69 ] (BSDC) is to provide data of all international space missions existing on the wavelength of X- and gamma rays, and later on the whole electromagnetic spectrum, for all the galactic and extragalactic sources of the Universe. A special attention will be paid to the achievement and the complete respect of the levels defined by the International Virtual Observatory Alliance (IVOA). In addition to these specific objectives, BSDC will promote technical seminars, annual workshops and it will assure a plan of scientific divulgation and popularization of science with the aim of the understanding of the Universe.
The BSDC is currently being implemented at CBPF, and at the Universidade Federal do Rio Grande do Sul (UFRGS), and will be expanded to all other ICRANet centers in Brazil as well as to the other Latin-American ICRANet Centers in Argentina , Colombia and Mexico : a unique coordinated continental research network planned for Latin America. | https://en.wikipedia.org/wiki/ICRANet |
The International Conference on Theoretical, Applied, Computational and Experimental Mechanics (ICTACEM) is a professional organization in the field of engineering. The organization was founded by P.K. Sinha, a professor of aerospace engineering at the Indian Institute of Technology Kharagpur, as the "Conference on Theoretical, Applied, Computational and Experimental Mechanics" in 1998, and its current name was adopted in 2001. The first conference was held on December 1, 1998. [ 3 ] [ 4 ]
Since then, the organization has consisted of more than 250 committee members and delegates. [ 5 ] [ 6 ] [ 7 ] The conference has been held every three years since 1998, with the latest taking place in 2021 (delayed from 2020). [ 8 ] [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/ICTACEM |
The ICT Development Index ( IDI ) is an index published by the United Nations International Telecommunication Union [ 1 ] based on internationally agreed information and communication technologies (ICT) indicators. This makes it a valuable tool for benchmarking the most important indicators for measuring the information society. The IDI is a standard tool that governments, operators, development agencies, researchers and others can use to measure the digital divide and compare ICT performance within and across countries.
Designed to analyze the level of development of the information and communication technology (ICT) sector, the ICT Development Index (IDI) is a composite indicator published by ITU between 2009 and 2017. It was discontinued in 2018, owing to issues of data availability and quality. In October 2022, ITU’s Plenipotentiary Conference 2022 in Bucharest adopted a revised text of Resolution 131, which defines, inter alia, the main features of the process for developing and adopting a new IDI methodology and of the IDI itself. In November 2023, the revised IDI methodology was approved by the Member States and is valid for four years. [ 2 ] In December 2023, the 2023 edition of the IDI based on the new methodology was released. The 2024 edition of the IDI was released in June 2024. [ 3 ]
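The published methodology is not reproduced here, so the following sketch only shows the general form of a composite index of this kind: individual indicators normalised against reference values, combined as a weighted average, and rescaled. The indicator names, reference values and equal weights are placeholders, not the IDI's actual methodology.

```python
def composite_index(indicators, reference_values, weights, scale=100):
    """Build a simple composite index from ICT indicators.

    Each indicator is normalised against a reference ('ideal') value and
    capped at 1, then combined as a weighted average and rescaled. The
    indicator names, reference values and equal weights used below are
    illustrative only; they are not the IDI's published methodology.
    """
    total = 0.0
    for name, weight in weights.items():
        normalised = min(indicators[name] / reference_values[name], 1.0)
        total += weight * normalised
    return scale * total / sum(weights.values())

indicators = {                      # per-country raw values (hypothetical)
    "internet_users_pct": 85.0,
    "mobile_broadband_subs_per_100": 110.0,
    "fixed_broadband_subs_per_100": 35.0,
}
reference = {"internet_users_pct": 100.0,
             "mobile_broadband_subs_per_100": 120.0,
             "fixed_broadband_subs_per_100": 60.0}
weights = {name: 1.0 for name in indicators}    # equal weights for the sketch
print(round(composite_index(indicators, reference, weights), 1))  # 78.3
```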
The following table shows the most recent values of the ICT Development Index, based on data published by International Telecommunication Union in 2024. [ 3 ] Sorting is alphabetical by country code, according to ISO 3166-1 alpha-3 .
Note ( ITU World Telecommunication ): The reference year is 2022, unless otherwise indicated. | https://en.wikipedia.org/wiki/ICT_Development_Index |
Iodine trichloride is an interhalogen compound of iodine and chlorine . It is bright yellow, but over time and upon exposure to light it turns red due to the presence of elemental iodine. In the solid state it is present as the planar dimer I 2 Cl 6 , with two bridging Cl atoms. [ 1 ]
It can be prepared by reacting iodine with an excess of liquid chlorine at −70 °C, [ 2 ] or by heating a mixture of liquid iodine and chlorine gas to 105 °C. [ citation needed ] In the molten state it is conductive, which may indicate dissociation into ICl 2 + and ICl 4 − ions. [ 2 ]
It is an oxidizing agent , capable of causing fire on contact with organic materials. [ citation needed ] That oxidizing power also makes it a useful catalyst for organic chlorination reactions . [ 3 ]
Iodine trichloride reacts with concentrated hydrochloric acid , forming tetrachloroiodic acid (HICl 4 ). [ 4 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/ICl3 |
iConnectHere is the consumer division of Deltathree , which provides VoIP internet telephony service to consumers and businesses worldwide. The company's products are: Broadband (Internet) Phones, PC to Phone service, Mobile Dialers, Calling Cards and local phone numbers.
Deltathree was founded in 1996 and on March 14, 1997, first demonstrated a direct telephone conversation over the Internet . By June 1999, deltathree's PC-to-Phone and Phone-to-Phone services became commercially available. In September 2001 the iConnectHere brand and service was launched with even lower rates than initially offered. [ 1 ]
On December 19, 2001, Deltathree announced that iConnectHere would offer its PC-to-phone service to MSN Messenger and Windows Messenger users in 17 countries. [ 1 ]
In 2007, Deltathree launched a communications solution called JoIP jointly with Panasonic . JoIP is a service enabling regular phone owners of Panasonic's Globrange to make cheap international calls. [ 2 ]
In July 2010, Deltathree launched a communications solution called the JoIP Mobile (mobile.joip.com). This VoIP mobile dialer can be downloaded to the mobile (practically any smartphone) of the user: BlackBerry OS, Symbian OS, Android OS and iPhone. Windows OS and Blackberry will soon be launched as well. [ 2 ]
On August 1, 2017, Deltathree, LLC, provider of iConnecthere discontinued service.
iConnectHere provided free client applications such as the PC to Phone Dialer and Mobile Dialers as part of its service; the application had 8 major releases (the final version being 8). [ 3 ] Additionally, iConnectHere offered a free broadband phone adapter from Linksys along with the Pay as you Go World Plans, with local numbers for receiving calls all over the world, among other international and U.S. calling plans.
This article about a telecommunications corporation or company in the United States is a stub . You can help Wikipedia by expanding it .
This article about an IT-related or software-related company or corporation is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IConnectHere |
The iControlPad is a wireless game controller compatible with a variety of smartphones , tablets , and personal computers . It is designed for use as either a standalone gamepad or attached to appropriately sized devices, such as the iPhone , using a clamp system. [ 2 ] Due to this, the iControlPad is able to add traditional physical gaming controls to devices which otherwise rely on inputs such as touchscreens and accelerometers . [ 3 ]
The iControlPad's input controls include an eight-directional D-pad , dual analog nubs , six digital face buttons, and two digital trigger buttons on the gamepad's reverse. The sides of the iControlPad are detachable, with two different attachment types: rubber grips, for using the controller as a standard wireless gamepad; or plastic clamps, for connecting with a suitable handheld, such as a smartphone or iPod Touch . [ 2 ] A mini USB port on the bottom of the iControlPad can be used to charge the internal 1500mAh battery, update the device's firmware , and charge attached devices using a USB On-The-Go connection and an appropriate adapter. [ 1 ]
The iControlPad, a Bluetooth device, can be run in a wide variety of modes, including as a HID keyboard , mouse , joystick , and gamepad, among others, allowing compatibility with equipment which is limited to only certain types of input. [ 1 ] One of the iControlPad's modes mimics the protocol used by the iCade , an arcade cabinet released for the Apple iPad , facilitating compatibility between apps designed for the iCade and the iControlPad hardware. [ 2 ]
Due to the iControlPad's ability to operate as a Bluetooth keyboard—by mapping the D-pad and buttons to standard keyboard keys—it is able to communicate with devices such as those running Apple's iOS , including the iPhone and iPad, which do not support Bluetooth gamepads. [ 4 ] Since iOS natively supports keyboards, apps can be developed with iControlPad compatibility using either its own protocol or that of the iCade. Thus, the iControlPad is able to control video games and video game console emulators across multiple platforms. [ 5 ]
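The keyboard-mode approach can be pictured with a small decoding routine. In an iCade-style protocol each control emits one character when pressed and a different character when released; the specific character assignments in the sketch below are assumptions made for illustration, not the actual iCade or iControlPad character table.

```python
# Hypothetical iCade-style mapping: each control emits one letter when pressed
# and a different letter when released. The assignments here are illustrative;
# the real protocol defines its own character table.
PRESS_EVENTS = {"w": "dpad_up", "x": "dpad_down", "a": "dpad_left", "d": "dpad_right"}
RELEASE_EVENTS = {"e": "dpad_up", "z": "dpad_down", "q": "dpad_left", "c": "dpad_right"}

def decode_keystrokes(chars):
    """Turn a stream of 'keyboard' characters into (control, state) events."""
    events = []
    for ch in chars:
        if ch in PRESS_EVENTS:
            events.append((PRESS_EVENTS[ch], "pressed"))
        elif ch in RELEASE_EVENTS:
            events.append((RELEASE_EVENTS[ch], "released"))
    return events

# A game loop would poll these events instead of reading touch input.
print(decode_keystrokes("we"))   # [('dpad_up', 'pressed'), ('dpad_up', 'released')]
```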
Development of the iControlPad began in 2007, [ specify ] with testing using a hacked SNES gamepad to connect to an iPhone over the dock connection . [ 6 ] Once the serial connection was working, the first prototype iControlPad was produced, using a design styled after the Sony PSP . This earliest concept was a one-piece case enveloping the iPhone, with a D-pad on the left side, and four face buttons on the right in a landscape orientation, [ 7 ] and was first revealed in 2008. [ 8 ] [ 9 ]
By November 2009, a completely redesigned iControlPad prototype was under development. This much larger version moved the controls below the screen and added two analog nubs and two trigger buttons to the controller. [ 10 ] This design, which featured clamps to attach it to the iPhone, was much closer to the version that was ultimately released, and would soon go into production. [ 11 ]
However, one large change was made very late in development. The team had secretly added Bluetooth support to the iControlPad, in order to increase compatibility beyond the iPhone and its proprietary connection. [ 12 ] This proved fortunate when Apple began exercising its rights over the dock connector, suing an unlicensed accessory maker. Thus, the iControlPad team were forced to adapt to use the Bluetooth connection for the iPhone, [ 13 ] and it was this version which finally became available for order in February 2011. [ 14 ]
Reception for the iControlPad has been mostly positive. Register Hardware noted that while "patience and geekery" were required to get the controller working, the iControlPad "almost perfectly solves the touchscreen game control conundrum". [ 32 ] Gadgetoid homed in on the device's usefulness for classic gaming, remarking that it was "awesome [...] for emulation on the go". [ 2 ] TouchArcade's reviewer said while playing games with the iControlPad that "the experience feels great", but that "[he couldn't] recommend that the typical gamer run out right now and grab one," due to its limited support on the iTunes App Store. [ 33 ]
Early reviews were mixed on the quality of the controls, with DroidGamers describing them as "very loose", [ 5 ] while, conversely, Register Hardware said "the analogue nubs and face buttons work extremely well". [ 32 ] The controller's responsiveness was later improved by replacing the original rubber keymat with a larger one. In their review, Gadgetoid lauded the inputs as having "a great tactile feel and a liberal amount of travel with a good response." [ 2 ]
A successor, the iControlPad 2 , was successfully funded via Kickstarter in October 2012. As of November 2013, it had been cancelled, and the project was subsequently listed on KickScammed. [ 34 ] | https://en.wikipedia.org/wiki/IControlPad |
IDEF , initially an abbreviation of ICAM Definition and renamed in 1999 as Integration Definition , is a family of modeling languages in the field of systems and software engineering. They cover a wide range of uses from functional modeling to data, simulation, object-oriented analysis and design , and knowledge acquisition. These definition languages were developed under funding from U.S. Air Force and, although still most commonly used by them and other military and United States Department of Defense (DoD) agencies, are in the public domain.
The most-widely recognized and used components of the IDEF family are IDEF0, a functional modeling language building on SADT, and IDEF1X, which addresses information models and database design issues.
IDEF refers to a family of modeling languages , which cover a wide range of uses, from functional modeling to data, simulation, object-oriented analysis/design and knowledge acquisition. The IDEF methods have eventually been defined up to IDEF14:
In 1995 only IDEF0 , IDEF1X , IDEF2 , IDEF3 and IDEF4 had been developed in full. [ 8 ] Some of the other IDEF concepts had only a preliminary design. Among the last efforts were new IDEF developments in 1995 toward establishing reliable methods for business constraint discovery ( IDEF9 ), design rationale capture ( IDEF6 ), human-system interaction design ( IDEF8 ), and network design ( IDEF14 ). [ 9 ]
The methods IDEF7, IDEF10, IDEF11, IDEF12 and IDEF13 have not been developed any further than their initial definitions. [ 10 ]
IDEF originally stood for ICAM Definition; the initiative was started in the 1970s at the US Air Force Materials Laboratory, Wright-Patterson Air Force Base in Ohio, by Dennis E. Wisnosky , Dan L. Shunk, and others, [ 11 ] and completed in the 1980s. IDEF was a product of the ICAM initiative of the United States Air Force . The IEEE later recast the IDEF abbreviation as Integration Definition. [ 12 ]
The specific projects that produced IDEF were ICAM project priorities 111 and 112 (later renumbered 1102). The subsequent Integrated Information Support System (IISS) project priorities 6201, 6202, and 6203 attempted to create an information processing environment that could be run in heterogeneous physical computing environments. Further development of IDEF occurred under those projects as a result of the experience gained from applications of the new modeling techniques. The intent of the IISS efforts was to create 'generic subsystems' that could be used by a large number of collaborating enterprises, such as U.S. defense contractors and the armed forces of friendly nations.
At the time of the ICAM 1102 effort there were numerous, mostly incompatible, data model methods for storing computer data — sequential ( VSAM ), hierarchical ( IMS ), network ( Cincom 's TOTAL and CODASYL , and Cullinet 's IDMS ). The relational data model was just emerging as a promising way of thinking about structuring data for easy, efficient, and accurate access. Relational database management systems had not yet emerged as a general standard for data management.
The ICAM program office deemed it valuable to create a "neutral" way of describing the data content of large-scale systems. The emerging academic literature suggested that methods were needed to process data independently of the way it was physically stored . Thus the IDEF1 language was created to allow a neutral description of data structures that could be applied regardless of the storage method or file access method.
IDEF1 was developed under ICAM program priority 1102 by Robert R. Brown of the Hughes Aircraft Company , under contract to SofTech, Inc. Brown had previously been responsible for the development of IMS while working at Rockwell International . Rockwell chose not to pursue IMS as a marketable product but IBM , which had served as a support contractor during development, subsequently took over the product and was successful in further developing it for market. Brown credits his Hughes colleague Timothy Ramey as the inventor of IDEF1 as a viable formalism for modeling information structures. The two Hughes researchers built on ideas from and interactions with many luminaries in the field at the time. In particular, IDEF1 draws on the following techniques:
The effort to develop IDEF1 resulted in both a new method for information modeling and an example of its use in the form of a "reference information model of manufacturing." This latter artifact was developed by D. S. Coleman of the D. Appleton Company (DACOM) acting as a sub-contractor to Hughes and under the direction of Ramey. Personnel at DACOM became expert at IDEF1 modeling and subsequently produced a training course and accompanying materials for the IDEF1 modeling technique.
Experience with IDEF1 revealed that the translation of information requirements into database designs was more difficult than had originally been anticipated. The most beneficial value of the IDEF1 information modeling technique was its ability to represent data independent of how those data were to be stored and used. It provided data modelers and data analysts with a way to represent data requirements during the requirements-gathering process. This allowed designers to decide which DBMS to use after the nature of the data requirements was understood and thus reduced the "misfit" between data requirements and the capabilities and limitations of the DBMS. The translation of IDEF1 models to database designs, however, proved to be difficult.
The IDEF0 functional modeling method is designed to model the decisions, actions, and activities of an organization or system. [ 13 ] It was derived from the established graphic modeling language structured analysis and design technique (SADT) developed by Douglas T. Ross and SofTech, Inc. In its original form, IDEF0 includes both a definition of a graphical modeling language ( syntax and semantics ) and a description of a comprehensive methodology for developing models. [ 14 ] The US Air Force commissioned the SADT developers to develop a function model method for analyzing and communicating the functional perspective of a system. IDEF0 should assist in organizing system analysis and promote effective communication between the analyst and the customer through simplified graphical devices. [ 13 ]
To satisfy the data modeling enhancement requirements that were identified in the IISS-6202 project, a sub-contractor, DACOM , obtained a license to the logical database design technique (LDDT) and its supporting software (ADAM). LDDT had been developed in 1982 by Robert G. Brown of The Database Design Group entirely outside the IDEF program and with no knowledge of IDEF1. LDDT combined elements of the relational data model, the E–R model, and generalization in a way specifically intended to support data modeling and the transformation of the data models into database designs. The graphic syntax of LDDT differed from that of IDEF1 and, more importantly, LDDT contained interrelated modeling concepts not present in IDEF1. Mary E. Loomis wrote a concise summary of the syntax and semantics of a substantial subset of LDDT, using terminology compatible with IDEF1 wherever possible. DACOM labeled the result IDEF1X and supplied it to the ICAM program. [ 15 ] [ 16 ]
Because the IDEF program was funded by the government, the techniques are in the public domain . In addition to the ADAM software, sold by DACOM under the name Leverage, a number of CASE tools use IDEF1X as their representation technique for data modeling.
The IISS projects actually produced working prototypes of an information processing environment that would run in heterogeneous computing environments. Current advancements in such techniques as Java and JDBC are now achieving the goals of ubiquity and versatility across computing environments which was first demonstrated by IISS.
The third IDEF (IDEF2) was originally intended as a user interface modeling method. However, since the Integrated Computer-Aided Manufacturing (ICAM) program needed a simulation modeling tool, the resulting IDEF2 was a method for representing the time varying behavior of resources in a manufacturing system, providing a framework for specification of math model based simulations. It was the intent of the methodology program within ICAM to rectify this situation but limitation of funding did not allow this to happen. As a result, the lack of a method which would support the structuring of descriptions of the user view of a system has been a major shortcoming of the IDEF system. The basic problem from a methodology point of view is the need to distinguish between a description of what a system (existing or proposed) is supposed to do and a representative simulation model that predicts what a system will do. The latter was the focus of IDEF2 , the former is the focus of IDEF3 . [ 17 ]
The development of IDEF4 came from the recognition that the modularity, maintainability and code reusability that results from the object-oriented programming paradigm can be realized in traditional data processing applications. The proven ability of the object-oriented programming paradigm to support data level integration in large complex distributed systems is also a major factor in the widespread interest in this technology from the traditional data processing community. [ 17 ]
IDEF4 was developed as a design tool for software designers who use object-oriented languages such as the Common Lisp Object System , Flavors , Smalltalk , Objective-C , C++ , and others. Since effective usage of the object-oriented paradigm requires a different thought process than used with conventional procedural or database languages , standard methodologies such as structure charts , data flow diagrams , and traditional data design models (hierarchical, relational, and network) are not sufficient. IDEF4 seeks to provide the necessary facilities to support the object-oriented design decision making process. [ 17 ]
IDEF5 , or integrated definition for ontology description capture method, is a software engineering method to develop and maintain usable, accurate, domain ontologies . [ 18 ] In the field of computer science ontologies are used to capture the concept and objects in a specific domain , along with associated relationships and meanings. In addition, ontology capture helps coordinate projects by standardizing terminology and creates opportunities for information reuse. The IDEF5 Ontology Capture Method has been developed to reliably construct ontologies in a way that closely reflects human understanding of the specific domain. [ 18 ]
In the IDEF5 method, an ontology is constructed by capturing the content of certain assertions about real-world objects, their properties and their interrelationships, and representing that content in an intuitive and natural form. The IDEF5 method has three main components: A graphical language to support conceptual ontology analysis, a structured text language for detailed ontology characterization, and a systematic procedure that provides guidelines for effective ontology capture. [ 19 ]
IDEF6 , or integrated definition for design rationale capture, is a method to facilitate the acquisition, representation, and manipulation of the design rationale used in the development of enterprise systems . Rationale is the reason, justification, underlying motivation, or excuse that moved the designer to select a particular strategy or design feature. More simply, rationale is interpreted as the answer to the question, “Why is this design being done in this manner?” Most design methods focus on what the design is (i.e. on the final product, rather than why the design is the way it is). [ 9 ]
IDEF6 is a method that possesses the conceptual resources and linguistic capabilities needed to support the acquisition, representation, and manipulation of design rationale.
IDEF6 is applicable to all phases of the information system development process, from initial conceptualization through both preliminary and detailed design activities. To the extent that detailed design decisions for software systems are relegated to the coding phase, the IDEF6 technique should be usable during the software construction process as well. [ 7 ]
IDEF8, or integrated definition for human-system interaction design, is a method for producing high-quality designs of interactions between users and the systems they operate. Systems are characterized as a collection of objects that perform functions to accomplish a particular goal. The system with which the user interacts can be any system, not necessarily a computer program. Human-system interactions are designed at three levels of specification within the IDEF8 method. The first level defines the philosophy of system operation and produces a set of models and textual descriptions of overall system processes. The second level of design specifies role-centered scenarios of system use. The third level of IDEF8 design is for human-system design detailing. At this level of design, IDEF8 provides a library of metaphors to help users and designers specify the desired behavior in terms of other objects whose behavior is more familiar. Metaphors provide a model of abstract concepts in terms of familiar, concrete objects and experiences. [ 9 ]
IDEF9, or integrated definition for business constraint discovery, is designed to assist in the discovery and analysis of constraints in a business system . A primary motivation driving the development of IDEF9 was an acknowledgment that the collection of constraints that forge an enterprise system is generally poorly defined. The knowledge of what constraints exist and how those constraints interact is incomplete, disjoint, distributed, and often completely unknown. Just as living organisms do not need to be aware of the genetic or autonomous constraints that govern certain behaviors, organizations can (and most do) perform well without explicit knowledge of the glue that structures the system. In order to modify business in a predictable manner, however, the knowledge of these constraints is as critical as knowledge of genetics is to the genetic engineer. [ 9 ]
IDEF14, or integrated definition for network design method, is a method that targets the modeling and design of computer and communication networks . It can be used to model existing ("as is") or envisioned ("to be") networks. It helps the network designer to investigate potential network designs and to document design rationale. The fundamental goals of the IDEF14 research project developed from a perceived need for good network designs that can be implemented quickly and accurately. [ 9 ]
This article incorporates public domain material from the National Institute of Standards and Technology | https://en.wikipedia.org/wiki/IDEF |
IDEF0 , a compound acronym ("Icam DEFinition for Function Modeling", where ICAM is an acronym for "Integrated Computer Aided Manufacturing"), is a function modeling methodology for describing manufacturing functions, which offers a functional modeling language for the analysis, development, reengineering and integration of information systems , business processes or software engineering analysis. [ 1 ]
IDEF0 is part of the IDEF family of modeling languages in the field of software engineering , and is built on the functional modeling language Structured Analysis and Design Technique (SADT).
The IDEF0 Functional Modeling method is designed to model the decisions, actions, and activities of an organization or system. [ 2 ] It was derived from the established graphic modeling language Structured Analysis and Design Technique (SADT) developed by Douglas T. Ross and SofTech, Inc. In its original form, IDEF0 includes both a definition of a graphical modeling language ( syntax and semantics ) and a description of a comprehensive methodology for developing models. [ 3 ] The US Air Force commissioned the SADT developers "to develop a function model method for analyzing and communicating the functional perspective of a system. IDEF0 should assist in organizing system analysis and promote effective communication between the analyst and the customer through simplified graphical devices". [ 2 ]
Where the Functional flow block diagram is used to show the functional flow of a product , IDEF0 is used to show data flow , system control, and the functional flow of lifecycle processes. IDEF0 is capable of graphically representing a wide variety of business, manufacturing and other types of enterprise operations to any level of detail. It provides rigorous and precise description, and promotes consistency of usage and interpretation. It is well-tested and proven through many years of use by government and private industry. It can be generated by a variety of computer graphics tools. Numerous commercial products specifically support development and analysis of IDEF0 diagrams and models. [ 1 ]
An associated technique, Integration Definition for Information Modeling (IDEF1x), is used to supplement IDEF0 for data-intensive systems. The IDEF0 standard, Federal Information Processing Standards Publication 183 (FIPS 183), and the IDEF1x standard (FIPS 184) are maintained by the National Institute of Standards and Technology (NIST). [ 1 ]
FIPS PUB 183 "Integration Definition for Function Modeling (IDEF0)," was withdrawn as a Federal Standard (in favor of OPEN Specifications and Standards) September 2, 2008, as cited in "The Federal Register", Volume 73, page 51276 (73FR/51276). [ 4 ]
During the 1970s, the U.S. Air Force Program for Integrated Computer Aided Manufacturing (ICAM) sought to increase manufacturing productivity through systematic application of computer technology. The ICAM program identified the need for better analysis and communication techniques for people involved in improving manufacturing productivity. As a result, in 1981 the ICAM program developed a series of techniques known as the IDEF (ICAM Definition) techniques which included the following: [ 3 ]
In 1983, the U.S. Air Force Integrated Information Support System program enhanced the IDEF1 information modeling technique to form IDEF1X (IDEF1 Extended), a semantic data modeling technique. By the 1990s, the IDEF0 and IDEF1X techniques were widely used in the government, industrial and commercial sectors, supporting modeling efforts for a wide range of enterprises and application domains. In 1991 the National Institute of Standards and Technology (NIST) received support from the U.S. Department of Defense, Office of Corporate Information Management (DoD/CIM), to develop one or more Federal Information Processing Standards (FIPS) for modeling techniques. The techniques selected were IDEF0 for function modeling and IDEF1X for information modeling . These FIPS documents are based on the IDEF manuals published by the U.S. Air Force in the early 1980s. [ 3 ] Sometime later, IEEE created the IDEF0 standard, and ISO adopted and published it as IEEE/ISO/IEC 31320-1.
IDEF0 may be used to model a wide variety of automated and non-automated systems. For new systems, it may be used first to define the requirements and specify the functions, and then to design an implementation that meets the requirements and performs the functions. For existing systems, IDEF0 can be used to analyze the functions the system performs and to record the mechanisms (means) by which these are done. The result of applying IDEF0 to a system is a model that consists of a hierarchical series of diagrams, text, and glossary cross-referenced to each other. The two primary modeling components are functions (represented on a diagram by boxes) and the data and objects that inter-relate those functions (represented by arrows). [ 3 ]
The IDEF0 model displayed here on the left is based on a simple syntax . Each activity is described by a verb-based label placed in a box. Inputs are shown as arrows entering the left side of the activity box while output are shown as exiting arrows on the right side of the box. Controls are displayed as arrows entering the top of the box and mechanisms are displayed as arrows entering from the bottom of the box. Inputs, Controls, Outputs, and Mechanisms (ICOM) are all referred to as concepts. [ 2 ]
IDEF0 is a model that consists of a hierarchical series of diagrams, text, and glossary cross referenced to each other. The two primary modeling components are:
As shown by Figure 3 the position at which the arrow attaches to a box conveys the specific role of the interface. The controls enter the top of the box. The inputs, the data or objects acted upon by the operation, enter the box from the left. The outputs of the operation leave the right-hand side of the box. Mechanism arrows that provide supporting means for performing the function join (point up to) the bottom of the box. [ 1 ]
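To make the ICOM convention concrete, the following minimal sketch represents one IDEF0 activity box with its four arrow roles as a plain data structure; the structure, field names, and sample values are illustrative assumptions, not part of the FIPS 183 standard.

```c
#include <stdio.h>

/* One IDEF0 activity box: a verb-based label plus the four ICOM arrow
 * roles.  Inputs enter from the left, controls from the top, outputs
 * leave to the right, and mechanisms attach from below. */
struct idef0_box {
    const char *label;                 /* verb-based activity name      */
    const char *inputs[4];             /* data/objects acted upon       */
    const char *controls[4];           /* conditions governing the work */
    const char *outputs[4];            /* results produced              */
    const char *mechanisms[4];         /* means used to perform it      */
};

int main(void)
{
    struct idef0_box box = {
        .label      = "Assemble product",
        .inputs     = { "parts" },
        .controls   = { "work order", "quality standard" },
        .outputs    = { "assembled product" },
        .mechanisms = { "assembly line", "operator" },
    };
    printf("%s: %s -> %s\n", box.label, box.inputs[0], box.outputs[0]);
    return 0;
}
```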
The IDEF0 process starts with the identification of the prime function to be decomposed. This function is identified on a “Top Level Context Diagram,” which defines the scope of the particular IDEF0 analysis. An example of a Top Level Context Diagram for an information system management process is shown in Figure 3. From this diagram lower-level diagrams are generated. An example of a derived diagram, called a “child” in IDEF0 terminology, for a life cycle function is shown in Figure 4. [ 1 ]
In December 1993, the National Institute of Standards and Technology announced the standard for Integration Definition for Function Modeling (IDEF0) in the category Software Standard, Modeling Techniques. The publication announced the adoption of IDEF0 as a Federal Information Processing Standard (FIPS). This standard was based on the Air Force Wright Aeronautical Laboratories Integrated Computer-Aided Manufacturing (ICAM) Architecture from June 1981. [ 3 ]
On September 2, 2008, the associated NIST standard, FIPS 183, was withdrawn (decision published in the Federal Register, vol. 73, page 51276). [ 4 ]
Systems Engineering Fundamentals. Defense Acquisition University Press, 2001.
This article incorporates public domain material from the National Institute of Standards and Technology | https://en.wikipedia.org/wiki/IDEF0 |
IDEF3 or Integrated DEFinition for Process Description Capture Method is a business process modelling method complementary to IDEF0 . [ 1 ] The IDEF3 method is a scenario-driven process flow description capture method intended to capture the knowledge about how a particular system works. [ 2 ]
The IDEF3 method provides modes to represent both [ 2 ]
This method is part of the IDEF family of modeling languages in the field of systems and software engineering .
One of the primary mechanisms used for describing the world is relating a story in terms of an ordered sequence of events or activities. The IDEF3 Process Description Capture Method was created to capture descriptions of sequences of activities, which is considered the common mechanism for describing a situation or process . The primary goal of IDEF3 is to provide a structured method by which a domain expert can express knowledge about the operation of a particular system or organization. Knowledge acquisition is enabled by direct capture of assertions about real-world processes and events in a form that is most natural for capture. IDEF3 supports this kind of knowledge acquisition by providing a reliable and well-structured approach for process knowledge acquisition , and an expressive, yet easy-to-use, language for information capture and expression. [ 1 ]
Motives for the development of IDEF3 were the need: [ 1 ]
The original IDEFs were developed since the mid-1970s for the purpose of enhancing communication among people who needed to decide how their existing systems were to be integrated. IDEF0 was designed to allow a graceful expansion of the description of a systems' functions through the process of function decomposition and categorization of the relations between functions (i.e., in terms of the Input, Output, Control, and Mechanism classification). IDEF1 was designed to allow the description of the information that an organization deems important to manage in order to accomplish its objectives. [ 2 ]
The third IDEF ( IDEF2 ) was originally intended as a user interface modeling method. However, since the Integrated Computer-Aided Manufacturing (ICAM) Program needed a simulation modeling tool, the resulting IDEF2 was a method for representing the time varying behavior of resources in a manufacturing system, providing a framework for specification of math model based simulations. It was the intent of the methodology program within ICAM to rectify this situation but limitation of funding did not allow this to happen. As a result, the lack of a method which would support the structuring of descriptions of the user view of a system has been a major shortcoming of the IDEF system. The basic problem from a methodology point of view is the need to distinguish between a description of what a system (existing or proposed) is supposed to do and a representative simulation model that will predict what a system will do. The latter was the focus of IDEF2 , the former is the focus of IDEF3. [ 2 ]
The distinction between descriptions and models , though subtle, is an important one in IDEF3, and both have a precise technical meaning. [ 1 ]
The power of a model comes from its ability to simplify the real-world system it represents and to predict certain facts about that system by virtue of corresponding facts within the model. Thus, a model is a designed system in its own right. Models are idealized systems known to be incorrect but assumed to be close enough to provide reliable predictors for the predefined areas of interest within a domain. A description , on the other hand, is a recording of facts or beliefs about something within the realm of an individual’s knowledge or experience. Such descriptions are generally incomplete; that is, the person giving a description may omit facts that he or she believes are irrelevant, or which were forgotten in the course of describing the system. Descriptions may also be inconsistent with respect to how others have observed situations within the domain. IDEF3 accommodates these possibilities by providing specific features enabling the capture and organization of alternative descriptions of the same scenario or process, see figure. [ 1 ]
Modeling necessitates taking additional steps beyond description capture to resolve conflicting or inconsistent views. This, in turn, generally requires modelers to select or create a single viewpoint and introduce artificial modeling approximations to fill in gaps where no direct knowledge or experience is available. Unlike models, descriptions are not constrained by idealized, testable conditions that must be satisfied, short of simple accuracy. [ 1 ]
The purpose of description capture may be simply to record and communicate process knowledge or to identify inconsistencies in the way people understand how key processes actually operate. By using a description capture method users need not learn and apply conventions forcing them to produce executable models (e.g., conventions ensuring accuracy, internal consistency, logical coherence, non-redundancy, completeness). Forcing users to model requires them to adopt a model design perspective and risk producing models that do not accurately capture their empirical knowledge of the domain. [ 1 ]
The notion of a scenario or story is used as the basic organizing structure for IDEF3 Process Descriptions. A scenario can be thought of as a recurring situation, a set of situations that describe a typical class of problems addressed by an organization or system, or the setting within which a process occurs. Scenarios establish the focus and boundary conditions of a description. Using scenarios in this way exploits the tendency of humans to describe what they know in terms of an ordered sequence of activities within the context of a given scenario or situation. Scenarios also provide a convenient vehicle to organize collections of process-centered knowledge. [ 1 ]
IDEF3 Process Schematics are the primary means for capturing, managing, and displaying process-centered knowledge. These schematics provide a graphical medium that helps domain experts and analysts from different application areas communicate knowledge about processes. This includes knowledge about events and activities, the objects that participate in those occurrences, and the constraining relations that govern the behavior of an occurrence. [ 1 ]
IDEF3 Object Schematics capture, manage, and display object-centered descriptions of a process—that is, information about how objects of various kinds are transformed into other kinds of things through a process, how objects of a given kind change states through a process, or context-setting information about important relations among objects in a process. [ 1 ]
IDEF3 descriptions are developed from two different perspectives: process-centered and object-centered. Because these approaches are not mutually exclusive, IDEF3 allows cross-referencing between them to represent complex process descriptions. [ 1 ]
Process schematics tend to be the most familiar and broadly used component of the IDEF3 method. These schematics provide a visualization mechanism for process-centered descriptions of a scenario. The graphical elements that comprise process schematics include Unit of Behavior (UOB) boxes, precedence links, junctions, referents, and notes. The building blocks here are: [ 1 ]
IDEF offers a series of building blocks to express detailed object-centered process information; that is, information about how objects of various kinds are transformed into other kinds of things through a process, or how objects of a given kind change states through a process. [ 1 ] | https://en.wikipedia.org/wiki/IDEF3 |
IDEF4 , or Integrated DEFinition for Object-Oriented Design , is an object-oriented design modeling language for the design of component-based client/server systems. It has been designed to support smooth transition from the application domain and requirements analysis models to the design and to actual source code generation. It specifies design objects with sufficient detail to enable source code generation. [ 1 ]
This method is part of the IDEF family of modeling languages in the field of systems and software engineering .
The IDEF4 method is a graphically oriented methodology for the design of object-oriented software systems. The object-oriented programming paradigm provides the developer with an abstract view of the program as composed of a set of state-maintaining objects that define the behavior of the program through the protocol of their interactions. An object consists of a set of local, state-defining attributes and a set of methods (procedures) that define the behavior of that particular object and its relationship to the other objects that make up the system. [ 2 ]
The IDEF4 method's multi-dimensional approach to object-oriented software system design consists of the following items: [ 1 ]
The development of IDEF4 came from the recognition that the modularity, maintainability, and code reusability that results from the object-oriented programming paradigm can be realized in traditional data processing applications. The proven ability of the object-oriented programming paradigm to support data level integration in large complex distributed systems is also a major factor in the widespread interest in this technology from the traditional data processing community. [ 2 ]
IDEF4 was developed as a design tool for software designers who use object-oriented languages such as the Common Lisp Object System , Flavors , Smalltalk , Objective-C , C++ and others. Since effective usage of the object-oriented paradigm requires a different thought process than used with conventional procedural or database languages , standard methodologies such as structure charts , data flow diagrams , and traditional data design models (hierarchical, relational, and network) are not sufficient. IDEF4 seeks to provide the necessary facilities to support the object-oriented design decision making process. [ 2 ]
IDEF4 uses an object-oriented design method or procedure that is very similar to Rumbaugh 's Object Modeling Technique [ 3 ] and Shlaer / Mellor 's Object-Oriented Analysis and Design (OOA/OOD) technique. [ 4 ] However, there are some crucial differences:
These extra dimensions are shown in the figure. The edges of the box show the progression of the design from start to finish elaborating each of these dimensions.
In IDEF4, a design starts with the analysis of requirements and takes as input the domain objects. These domain objects are encoded in their equivalent IDEF4 form and marked as domain objects. As computational objects are developed for these objects, they are marked as “transitional” and finally as “completed.” The level of completion of an IDEF4 design is determined by setting measures based on the status, level, and model dimensions of individual artifacts in the design. [ 1 ]
The system-level design starts once the “raw material” (domain) objects have been collected. This develops the design context, ensures connectivity to legacy systems, and identifies the applications that must be built to satisfy the requirements. Static, dynamic, behavioral, and rationale models are built for the objects at the system level. These specifications become the requirements on the application level – the next level of design. The application level design identifies and specifies all of the software components (partitions) needed in the design. Static models, dynamic models, behavioral models, and the rationale component are built for the objects at the application level. These specifications become the requirements on the next level of design – the low-level design. Static Models, Dynamic Models, Behavioral Models, and the design rationale component are built for the low-level design objects. Sub-layers may be built within each layer to reduce complexity. [ 1 ]
IDEF4 is an iterative procedure involving partitioning, classification/specification, assembly, simulation, and rearranging activities (see figure ). First, the design is partitioned into objects, each of which is either classified against existing objects or for which an external specification is developed. The external specification enables the internal specification of the object to be delegated and performed concurrently. After classification/specification, the interfaces between the objects are specified in the assembly activity (i.e., static, dynamic, and behavioral models detailing different aspects of the interaction between objects are developed). While the models are developed, it is important to simulate use scenarios or cases [ 5 ] between objects to uncover design flaws. Based on these flaws, the designer can then rearrange the existing models and simulate them until the designer is satisfied. [ 1 ]
IDEF4 defines a set of object-oriented concepts: [ 1 ]
The IDEF4 method assumes that the domain objects have been identified through object-oriented domain analysis. Methods such as IDEF1 , IDEF5 , IDEF3 , and SA/SD can be used to perform domain analysis. [ 6 ] However, IDEF4 practitioners should be aware of how objects are identified, as the design process may reveal deficiencies in the object-oriented analysis. IDEF4 defines five types of classes: [ 1 ]
IDEF4 users design in three distinct layers: [ 1 ]
This three layered organization reduces the complexity of the design. The system design layer ensures connectivity to other systems in the design context. The application layer depicts the interfaces between the components of the system being designed. These components include commercial applications, previously designed and implemented applications, and applications to be designed. The low-level design layer represents the foundation objects of the system.
IDEF4 distinguishes between IDEF4 artifacts newly created from the application domain, artifacts in transition to design specification, and artifacts that have been specified that can be applied to create the design specification. Any design artifact in IDEF4 can be marked as domain, transition, or complete. This allows practitioners and reviewers to track the progress of the design toward completion. [ 1 ]
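A minimal sketch of how this domain/transition/complete marking could be tracked for a set of design artifacts is shown below; the enumeration, structure, and completion measure are illustrative assumptions rather than anything defined by IDEF4 itself.

```c
#include <stdio.h>

/* IDEF4 allows any design artifact to be marked according to how far it
 * has progressed from the application domain toward a finished design. */
enum artifact_status { STATUS_DOMAIN, STATUS_TRANSITION, STATUS_COMPLETE };

struct design_artifact {
    const char          *name;
    enum artifact_status status;
};

/* A crude completion measure: the fraction of artifacts marked complete. */
static double completion(const struct design_artifact *a, int n)
{
    int done = 0;
    for (int i = 0; i < n; i++)
        if (a[i].status == STATUS_COMPLETE)
            done++;
    return n ? (double)done / n : 0.0;
}

int main(void)
{
    struct design_artifact artifacts[] = {
        { "Customer", STATUS_DOMAIN     },
        { "Order",    STATUS_TRANSITION },
        { "Invoice",  STATUS_COMPLETE   },
    };
    printf("design %.0f%% complete\n", 100.0 * completion(artifacts, 3));
    return 0;
}
```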
IDEF4 uses three design models and a design rationale component: [ 1 ]
The design rationale component provides a top-down representation of the system, giving a broad view that encompasses the three design models and documents the rationale for major design evolutions.
Each model represents a different cross section of the design. The three design models capture all the information represented in a design project, and the design rationale documents the reasoning behind the design. Each model is supported by a graphical syntax that highlights the design decisions that must be made and their impact on other perspectives of the design. To facilitate use, the graphical syntax is identical among the three models. [ 1 ]
IDEF4 provides a broad range of design features – from generic to specific. This range enables deferred decision making by allowing the designer to first capture design features in general terms and later to refine them. This significantly reduces the burden on designers by allowing them to immediately capture new design concepts with IDEF4 design features, even if these design concepts have not yet been explored in detail. [ 1 ] | https://en.wikipedia.org/wiki/IDEF4 |
The Industrial Development & Renovation Organization of Iran ( IDRO ) known as IDRO Group was established in 1967 in Iran . [ 3 ] IDRO Group is one of the largest companies in Iran . It is also one of the largest conglomerates in Asia . IDRO's objective is to develop Iran's industry sector and to accelerate the industrialization process of the country and to export Iranian products worldwide. [ 3 ] Today, IDRO owns 117 subsidiaries and affiliated companies both domestically as well as internationally. [ 1 ]
In the course of its 40 years of activity, IDRO has gradually become a major shareholder of some key industries in Iran. In recent years and in accordance with the country's privatization policy, IDRO has made great efforts to privatize its affiliated companies. While carrying on its privatization policies and lessening its role as a holding company, IDRO intends to concentrate on its prime missions and to turn into an industrial development agency.
IDRO has focused its activities on the following areas in order to materialize such strategy and to expedite the industrial development of Iran:
IDRO had privatized 140 of its companies, worth about 2,000 billion rials ($200 million), in the past. The organization planned to offer shares of 150 industrial units to private investors by March 2010. In 2009, 290 companies were under the control of the IDRO. [ citation needed ]
This is a list of IDRO's main subsidiaries (as of 2008): | https://en.wikipedia.org/wiki/IDRO_Group |
IDT Spectrum , a subsidiary of IDT Corporation , holds and leases fixed wireless spectrum.
IDT Spectrum, Inc. and IDT Spectrum, LLC, are subsidiaries of IDT Corporation ( NYSE : IDT ), [ 1 ] [ 2 ] an international holding company, with interests primarily in the telecommunications and energy industries. In December 2001, IDT Corporation through its subsidiary Winstar Holdings, LLC, acquired FCC spectrum licenses and other assets from the bankruptcy estate of Winstar Communications . Winstar Holdings formed a subsidiary company, Winstar Spectrum, LLC, to hold its spectrum licenses and then caused the spectrum licenses to be assigned to it. As part of an internal corporate reorganization, in December 2004, Winstar Holdings formed IDT Spectrum, LLC, and caused Winstar Spectrum, LLC to transfer and assign to IDT Spectrum, LLC all of its FCC licenses except for six point-to-point licenses that are not used in our business. In January 2005, Winstar Holdings formed IDT Spectrum and contributed to IDT Spectrum all of its interests in IDT Spectrum, LLC, as well as other assets used in its connectivity services. [ 3 ]
IDT Spectrum, LLC's primary holdings include 633 spectrum licenses in the 39 GHz range, as well as an additional 16 LMDS licenses in the 28 GHz band, making it the largest single holder of 39 GHz licensed auction spectrum in the United States. [ 4 ]
The FCC's 39 GHz auctioned license band spectrum is primarily licensed in Economic Areas, or EAs. EAs are delineated by the Regional Analysis Division, Bureau of Economic Analysis, U.S. Department of Commerce and are based on 176 metropolitan or micropolitan statistical areas that serve as regional centers of economic activity, plus the surrounding counties that are economically related to these areas. On average, IDT Spectrum holds more than 500 MHz of spectrum in the top 200 U.S. markets (Economic Areas by population) and approximately 940 MHz of spectrum in the top 50 U.S. markets. IDT Spectrum's 39 GHz holdings are contiguous across the United States (including Alaska, Hawaii and Puerto Rico). [ 3 ]
In October 2010, IDT Spectrum renewed 633 of its 39 GHz licenses, which now expire in October 2020. [ 4 ] The majority of IDT Spectrum's 28 GHz LMDS licenses expire in October 2018.
Among other business activities, IDT Spectrum leases the spectrum to customers who use their own microwave equipment. [ 5 ] The licenses held by IDT Spectrum are suited for high-bandwidth point-to-point applications, including the needs of wireless operators to carry traffic from cell sites to network access points, referred to in the industry as backhaul. [ 6 ]
The current President and CEO of IDT Spectrum is Michael Rapaport. [ 7 ] | https://en.wikipedia.org/wiki/IDT_Spectrum |
IEBus ( Inter Equipment Bus ) is a communication bus specification "between equipments within a vehicle or a chassis " from Renesas Electronics . It defines the OSI model layer 1 and layer 2 specification. IEBus is mainly used for car audio and car navigation systems, where it has established a de facto standard in Japan, whereas SAE J1850 is dominant in the United States. [ 1 ] IEBus is also used in some vending machines , whose major customer is Fuji Electric . [ 2 ] : 244(42) Each button on the vending machine has an IEBus ID , i.e. has a controller . The detailed specification is disclosed to licensees only, but protocol analyzers are available from some test equipment vendors. [ 3 ] Its modulation method is PWM (pulse-width modulation), originally with a 6.00 MHz base clock, though most automotive customers use 6.291 MHz; the physical layer is a pair of differential signalling wires. The physical layer adopts half-duplex , asynchronous , multi-master communication with carrier-sense multiple access with collision detection (CSMA/CD) for medium access control . [ 4 ] : 7 It allows for up to fifty units on one bus over a maximum length of 150 meters. [ 4 ] : 7 Two differential signalling lines are used, named Bus+ / Bus−, [ 4 ] : 5 sometimes labeled Data(+) / Data(−).
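As an illustration of the CSMA/CD style of medium access described above, the sketch below shows a generic carrier-sense and collision-detection transmit loop in C. The primitive names and their behavior are assumptions for illustration; the actual IEBus arbitration rules are part of the licensed specification.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical layer-1 primitives; a real driver would sample and drive
 * the Bus+/Bus- differential pair through a transceiver. */
static bool bus_is_idle(void) { return true; }              /* stub: bus free */
static bool send_bit(int bit) { (void)bit; return true; }   /* stub: false would mean collision */

/* Generic CSMA/CD flavour of medium access: wait until the bus is idle,
 * then transmit bit by bit while monitoring the bus; if the read-back
 * differs from what was sent, another master won arbitration and this
 * node backs off to retry later. */
static bool try_transmit(const int *bits, int n)
{
    while (!bus_is_idle())
        ;                               /* carrier sense               */

    for (int i = 0; i < n; i++) {
        if (!send_bit(bits[i]))         /* collision detected          */
            return false;               /* back off; retry later       */
    }
    return true;                        /* frame sent without conflict */
}

int main(void)
{
    int header[] = { 1, 0, 1, 1 };      /* arbitrary example bits      */
    printf("transmit %s\n",
           try_transmit(header, 4) ? "succeeded" : "lost arbitration");
    return 0;
}
```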
It is sometimes written as "IE-BUS", "IE-Bus," or "IE Bus," but these are incorrect; formally, it is "IEBus." IEBus® and Inter Equipment Bus® are registered trademarks of Renesas Electronics Corporation , formerly NEC Electronics Corporation ( JPO : Reg. No. 2552418 [ 5 ] and 2552419, [ 6 ] respectively).
In the mid-1980s, the semiconductor unit of NEC Corporation , currently Renesas Electronics , started studies to address the increasing demands of automotive audio systems. [ 7 ] IEBus was introduced as a solution for the distributed control system. [ 8 ] : 18
In the late 1980s, several similar specifications, including the Domestic Digital Bus (D2B) , the Japanese Home Bus System (HBS) , [ 9 ] [ 10 ] [ 11 ] and the European Home System (EHS), were proposed by different companies or organizations. These were once discussed as IEC 61030 , [ 12 ] but that standard was withdrawn in 2006. IEBus is also a similar specification (refer to the " Transfer signal format " section), but it was not listed among them. As a result, IEBus became a de facto standard for car audio in Japan. [ citation needed ] The Domestic Digital Bus (D2B) was later redefined independently by Mercedes-Benz as D2B Optical. The Japanese Home Bus System (HBS) was defined in 1988 as the Home Bus System Standard Specification, ET-2101, by JEITA and the REEA (Radio Engineering & Electronics Association) in Japan. It is used by several Japanese air conditioner manufacturers (for example, M-Net from Mitsubishi [ 13 ] and the P1/P2 or F1/F2 bus from Daikin [ 14 ] [ 15 ] ). Fujitsu provided an HBPC (Home Bus Protocol Controller) chip, the MB86046B, [ 11 ] but it is unclear whether Fujitsu (currently Cypress) still manufactures this HBPC LSI as of 2018. Mitsumi Electric provides the MM1007 and MM1192 driver ICs for HBS. The HBS specification is also discussed in the Echonet Consortium . [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] In 2014, a utility model patent for a protocol converter from HBS to RS-485 was granted in China as "CN204006496U." [ 21 ]
Regarding the replacement of IEBus, a paper by Hyundai Autonet , currently Hyundai Mobis , [ 22 ] states: "In communication methods for digital input capable amplifiers, Inter Equipment Bus (IEBus) was used in early times, but for now, Controller Area Network (CAN) is mainly used." [ 23 ]
A master talks to a slave. Each unit has a master and a slave address register. Only one device can talk on the bus at any given time. There is a pecking order for the types of communications, with some taking precedence over others. Each communication from master to slave must be replied to by the slave, which returns acknowledge bits to the master indicating ACK or NAK . [ 4 ] : 10 If the master does not receive the ACK within a predefined time allowance for a mode, it drops the communication and returns to its standby (listen) mode.
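The acknowledge handling described above can be sketched roughly as follows: the master sends a unit of data, waits for the slave's ACK within the time allowance, and otherwise drops the communication and returns to standby. Function names, the timeout value, and the stubbed behavior are illustrative assumptions, not the licensed layer-2 definition.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stubbed placeholders for the licensed layer-1/2 primitives; a real
 * driver would talk to an IEBus controller here. */
static bool iebus_send_frame(unsigned char master, unsigned char slave,
                             const unsigned char *data, int len)
{
    (void)master; (void)slave; (void)data; (void)len;
    return true;                       /* pretend the frame went out */
}

static bool iebus_wait_ack(unsigned int timeout_us)
{
    (void)timeout_us;
    return true;                       /* pretend the slave ACKed    */
}

static void iebus_enter_standby(void)
{
    puts("no ACK - returning to standby (listen) mode");
}

/* Master-side transfer with the ACK/NAK rule: if the slave does not
 * acknowledge within the mode-dependent time allowance, the master
 * drops the communication and returns to standby. */
static bool iebus_master_transfer(unsigned char master, unsigned char slave,
                                  const unsigned char *data, int len)
{
    const unsigned int ack_timeout_us = 100;   /* assumed allowance */

    if (!iebus_send_frame(master, slave, data, len))
        return false;

    if (!iebus_wait_ack(ack_timeout_us)) {     /* NAK or no reply   */
        iebus_enter_standby();
        return false;
    }
    return true;                               /* ACK received      */
}

int main(void)
{
    unsigned char payload[] = { 0x12, 0x34 };
    printf("transfer %s\n",
           iebus_master_transfer(0x01, 0x02, payload, 2) ? "ok" : "failed");
    return 0;
}
```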
The detailed specification of OSI model layer 2 is disclosed to licensees only, but protocol analyzers are available from some test equipment vendors. [ 3 ] [ 24 ] In 2012, a Chinese manufacturer's patent was granted as "CN202841169U". [ 25 ]
An open-source software emulator called "IEBus Studio" exists in a SourceForge repository, but its last update was on 2008-02-24. [ 26 ] [ 27 ] Another open-source analyzer, "IEBusAnalyzer", is available in a GitHub repository . [ 28 ] Some hobbyists have also made their own tools. [ 29 ]
From the μPD6708 data sheet [ 4 ] : 7 and the μPD78098B Subseries user's manual, hardware. [ 30 ] : 428
From the μPD6708 data sheet [ 4 ] : 10 and the μPD78098B Subseries user's manual, hardware. [ 30 ] : 433
This frame format is very similar to that of the Domestic Digital Bus (D2B) . [ 31 ] : §10.2, p.361
P: Parity bit (1 bit); even parity
A: Acknowledge bit (1 bit). When A = 0: ACK; when A = 1: NAK . In broadcast communication, the value of the acknowledge bit is ignored.
N: Number of data bytes
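For example, the even-parity bit that accompanies a field can be computed by XOR-reducing the field's bits, as in the generic sketch below; this is an illustration of even parity in general, not code taken from the IEBus specification.

```c
#include <stdio.h>

/* Even parity over the low 'width' bits of a field: the parity bit is
 * chosen so that the total number of 1 bits (field + parity) is even,
 * i.e. it is simply the XOR of all data bits. */
static unsigned even_parity(unsigned field, int width)
{
    unsigned p = 0;
    for (int i = 0; i < width; i++)
        p ^= (field >> i) & 1u;
    return p;
}

int main(void)
{
    /* 0xA5 has four 1 bits, so its even-parity bit is 0;
     * 0xA4 has three 1 bits, so its even-parity bit is 1. */
    printf("parity(0xA5) = %u\n", even_parity(0xA5, 8));
    printf("parity(0xA4) = %u\n", even_parity(0xA4, 8));
    return 0;
}
```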
Each IEBus bit consists of four periods. [ 30 ] : 435
Each manufacturer has its own name for its implementation, but these are not simply aliases of IEBus. They are wire harness specifications comprising control cables based on IEBus, communication protocols for OSI model layer 3 and above, audio cables, interconnection couplers, and so on.
Pioneer Corporation employed IEBus for its own-brand car audio in the early '90s. In its earlier stage, it was used just as a control bus between the head unit in the dashboard and the CD changer, usually placed in the trunk . Nowadays, the specification includes connections between head units, navigation systems, rear speaker systems, and so on.
Pioneer Corporation pushed Toyota Motor Corporation to adopt IEBus for its genuine parts . In 1994, Toyota decided to employ IEBus for its genuine-parts specification , [ 34 ] although it is slightly different from that of Pioneer. It is named AVC-LAN .
Pioneer Corporation also pushed Honda Motor , and Honda decided to adopt IEBus as its genuine-parts specification shortly after Toyota did so.
Sirius XM Satellite Radio is a satellite radio broadcaster in the US. Its digital media receiver equipment utilizes IEBus. [ 43 ]
The GR-SAKURA board and GR-SAKURA-FULL board [ 44 ] are Renesas' official promotion boards for the RX63N chip, which supports IEBus modes 0 and 1, but not mode 2, i.e. it is not usable for Toyota AVC-LAN.
They are Arduino pin-compatible, low-priced boards, suitable for hobbyists.
The printed circuit boards are colored after their name: sakura is Japanese for cherry blossom .
To evaluate IEBus, an external 5V bus interface transceiver (driver/receiver) IC extension is required.
The transceiver needs to support a 3.3V microcontroller ( TTL logic voltage level ) interface; otherwise a 3.3V ↔ 5.0V level shifter is required. The dedicated terminals of the RX63N chip themselves are 5V tolerant. For further information, refer to the external links .
Semiconductor intellectual property core of IEBus is available via IP core Exchange . [ 45 ]
Most IEBus controller LSIs require external dedicated bus interface transceivers (driver/receiver ICs). In the earliest devices, the bus interface transceiver was included on-chip, but this imposed some restrictions on users. [ 46 ] As described in Pioneer's paper, an external bus interface transceiver appears to be much more stable. [ 32 ] Some people have tried to use TI's SN75176B for this purpose, but the results do not appear to have been reported. [ 47 ]
Each IEBus controller may have a different implementation as long as the specification is kept. As a result, the host CPU load differs between IEBus controller implementations.
Nowadays, there are thousands of microcontroller products incorporating various IEBus controller implementations. The following list gives historically notable examples.
μPD6708; [ 4 ] the world's first "IEBus protocol controller" is usually regarded as the golden protocol reference LSI.
This device supports the full specification of IEBus modes 0, 1, and 2. It processes all of layers 1 and 2 of the IEBus protocol by itself.
It is connected to a host microcontroller via a 3-line serial interface.
The 6.291 MHz base clock is generated from a 12.582 MHz external resonator.
This product contains an IEBus interface transceiver.
μPD72042B; [ 48 ] the second-generation IEBus controller supports modes 0 and 1.
This device performs all the processing required for layers 1 and 2 of the IEBus protocol. It incorporates large transmission and reception buffers, allowing the host microcontroller to perform IEBus operations without interruption. It also contains an IEBus interface transceiver, which allows the device to connect directly to the IEBus. It is connected to a host microcontroller via a 3-line or 2-line serial interface.
The 6.291 MHz base clock is generated from a 6.291 MHz or 12.582 MHz external resonator.
This product contains an IEBus interface transceiver.
Each external bus transceiver (driver/receiver) IC is recommended to be connected via 180 Ω protection resistors on both the Bus+ and Bus− lines. [ 46 ]
R2A11210SP [ 49 ] is a bus interface transceiver (driver/receiver) IC for IEBus with typically 30 mV hysteresis comparator input.
HA12187FP [ 50 ] is a bus interface transceiver (driver/receiver) IC suitable for IEBus.
HA12240FP [ 51 ] is a bus interface transceiver (driver/receiver) IC for IEBus with hysteresis comparator input.
SN75176B [ 52 ] is a general-purpose bus transceiver with a typically 50 mV hysteresis comparator input. It looks suitable for IEBus, but results of attempts to use it have not been reported. [ 47 ]
μPD78P098A [ 53 ] [ 54 ] : §20, pp.385–418 [ 55 ] [ 56 ] is an 8-bit single-chip microcontroller with on-chip 60K bytes of UV-EPROM, 2K bytes of RAM, and an IEBus controller supporting modes 0, 1, and 2, with full data link layer protocol support.
This is the world's first microcontroller incorporating an IEBus controller. Its IEBus controller function is almost the same as that of the μPD72042B, but is accessed as memory-mapped I/O called SFRs (special function registers). The 6.291 MHz base clock is generated from a 6.291 MHz external resonator, while the host CPU core and watch timer run at 8.388 MHz generated from the same external resonator. An external bus interface transceiver is required. For programming, a UV-EPROM eraser, a UV-EPROM writer (27C1001A compatible), and a writer adapter module are required.
μPD78P098B [ 57 ] : §20, pp.428–461 is an 8-bit single-chip microcontroller with on-chip 60K bytes of UV-EPROM, 2K bytes of RAM, and an IEBus controller supporting modes 0, 1, and 2, with full data link layer support. It is probably a low-noise variant of the μPD78098 Subseries, with refined documentation.
μPD178F098 [ 8 ] [ 58 ] : §17, pp.367–422 [ 59 ] is an 8-bit single-chip microcontroller for the DTS (Digital Tuning System) of car radios, which incorporates a simplified IEBus controller, 60K bytes of Flash ROM, and 3K bytes of RAM.
It does not support modes 0 and 2, but supports mode 1 only. The 6.291 MHz base clock is generated from a 6.291 MHz external resonator, while the host CPU core and watch timer run at 8.388 MHz generated from the same external resonator. An external bus interface transceiver is required.
μPD78F4938 [ 60 ] : §20, pp.467–510 is a 16-bit single-chip microcontroller for car audio, which incorporates a simplified IEBus controller, 256K bytes of Flash ROM, and 10K bytes of RAM.
It does not support modes 0 and 2, but supports mode 1 only. The 6.291 MHz base clock is generated from a 6.291 MHz external resonator. An external bus interface transceiver is required.
V850/SB2 [ 61 ] [ 62 ] [ 63 ] : §19, pp.541–599 is a long-running 32-bit microcontroller that employs an IEBus controller with the first-generation V850 CPU core.
Its IEBus controller is simplified from previous ones. [ 64 ] It does not support modes 0 and 2, but supports mode 1 only. [ 63 ] : 541 The 6.291 MHz base clock is generated from a 6.291, 12.582, or 18.873 MHz external resonator. [ 63 ] : 257 This source clock is shared by the whole system on the chip, including the watch timer. A 32.768 kHz external crystal resonator is usually not used, in order to reduce total BOM cost. An external bus interface transceiver is required, but the external 5V I/O power supply is internally regulated to 3.3V or 3.0V, [ 63 ] : 517 which enables the same voltage supply as the external bus interface transceiver.
In addition, this product was designed for ultra-low noise, which enables high RF receiving sensitivity for car radios. [ 65 ] : 41–44 The starter motor mask time and electrical current amplitude are also well balanced.
Notably, on 23 March 2017, Renesas Electronics stated that "An external differential driver is required on the transmit/receive data line (not manufactured by NEC Electronics )," [ 64 ] even though NEC Electronics is now Renesas Electronics , and Renesas Electronics (formerly Hitachi) had been manufacturing such an external differential driver, the HA12240FP . [ 51 ] The Japanese wording is "当社", [ 66 ] which refers to Renesas Electronics itself.
V850E/SJ3-H and V850E/SK3-H [ 67 ] : §20, pp.973–1039 are 2nd generation V850 (E1 core) 32-bit microcontrollers .
Its IEBus controller is simplified, but supports both mode 1 and mode 2, though not mode 0.
An external bus interface transceiver is required.
These products include the V850E1 CPU core and peripheral functions. For automotive networking, they are equipped with IEBus and CAN ( Controller Area Network ) controllers.
V850ES/SG3 [ 68 ] : §18, pp.632–697 and V850ES/SJ3 [ 69 ] : §18, pp.660–725 are 3rd generation V850 (ES core) 32-bit microcontrollers that contain an IEBus controller.
Its IEBus controller is simplified, but supports both mode 1 and mode 2, though not mode 0.
An external bus interface transceiver is required.
These products include the V850ES CPU core and peripheral functions. For automotive networking, they are equipped with IEBus and CAN ( Controller Area Network ) controllers.
V850E2/SG4-H, V850E2/SJ4-H, and V850E2/SK4-H [ 70 ] : §30, pp.2195–2323 are 5th generation V850 (E2v3 core) 32-bit microcontrollers .
Its IEBus controller is simplified, but supports modes 1 and 2 with 32-byte buffers for both transmission and reception. [ 70 ] : 2199 It also has an automatic mechanism both for reissuing master requests when arbitration loss occurs and for responding to slave status requests. [ 70 ] : 2199 Its supply clock is 8.000 MHz, [ 70 ] : 2199 which might not be compatible with the 6.291456 MHz base-clock systems that almost all car audio customers use; it should be 8.388 MHz or the nearest value.
An external bus interface transceiver is required.
These products include the V850E2M CPU core and peripheral functions. For automotive audio networking, they are equipped with IEBus, CAN ( Controller Area Network ), LIN , PCM interface, MediaLB, [ 71 ] [ 72 ] and Ethernet controllers.
MB90580C Series, [ 73 ] : §21, pp.345–408 an F2MC-16LX 16-bit microcontroller from Cypress Semiconductor (formerly Fujitsu Microelectronics ), has an IEBus controller. It supports the full feature set of IEBus modes 0, 1, and 2, with an 8-byte FIFO for both transmission and reception. Embedded peripheral resources perform data transmission with an intelligent I/O service function without the intervention of the CPU, enabling real-time control in various applications.
An external bus interface transceiver is required.
M16C/5L Group and M16C/56 Group [ 74 ] : §21.3.5, pp.486–487 [ 75 ] are 16-bit microcontrollers with the M16C/60 Series CPU core.
UART2 can be used as an IEBus controller in special mode 3 (IE mode).
An external bus interface transceiver is required.
H8S/2258 and H8S/2256 [ 76 ] : 316 [ 77 ] [ 78 ] : §14, pp.481–546 are long-running microcontrollers comprising an internally 32-bit H8S/2000 CPU core with a 16-bit external bus controller. Their IEBus controller supports modes 0, 1, and 2 with a 1-byte data buffer for both transmission and reception.
An external bus interface transceiver is required.
RX63N [ 79 ] : §39, pp.1639–1680 is a recent 32-bit microcontroller . Its IEBus controller supports modes 0 and 1 (not 2). An Arduino-pin-compatible, low-price evaluation board, called SAKURA , is available for hobbyists. | https://en.wikipedia.org/wiki/IEBus
D²B ( Domestic Digital Bus , IEC 61030 ) is an IEC standard for a low-speed multi-master serial communication bus for home automation applications. It was originally developed by Philips in the 1980s. In 2006 it was withdrawn by the IEC because another standard was proposed, JTC1 SC 83/WG1. Many IEC 61030-compliant devices remain in use, such as some Philips-branded head units and CD changers for car stereos. [ 1 ]
The SCART connector provides a D²B connection for inter-device communication.
| https://en.wikipedia.org/wiki/IEC_61030
IEC 61108 is a collection of IEC standards for "Maritime navigation and radiocommunication equipment and systems - Global navigation satellite systems ( GNSS )".
The 61108 standards are developed in Working Group 4 (WG 4A) of Technical Committee 80 (TC80) of the IEC.
Standard IEC 61108 is divided into four parts:
On 1 December 2000, the International Maritime Organization (IMO) adopted three resolutions regarding the characteristics and performance standards of shipborne GNSS receivers. | https://en.wikipedia.org/wiki/IEC_61108
IEC 61131 is an IEC standard for programmable controllers . It was first published in 1993; [ 1 ] the current (third) edition dates from 2013. [ 2 ] It was known as IEC 1131 before the change in numbering system by IEC. The parts of the IEC 61131 standard are prepared and maintained by working group 7, programmable control systems, of subcommittee SC 65B of Technical Committee TC65 of the IEC.
Standard IEC 61131 is divided into several parts: [ 3 ]
IEC 61499 Function Block
PLCopen has developed several standards and working groups. | https://en.wikipedia.org/wiki/IEC_61131 |
IEC 61162 is a collection of IEC standards for "Digital interfaces for navigational equipment within a ship".
The 61162 standards are developed in Working Group 6 (WG6) of Technical Committee 80 (TC80) of the IEC.
Standard IEC 61162 is divided into the following parts:
The 61162 standards all concern the transport of NMEA sentences, but the IEC does not define the sentences themselves; this is left to the NMEA Organization .
Single talker and multiple listeners.
Single talker and multiple listeners, high-speed transmission.
Serial data instrument network, multiple talker-multiple listener, prioritized data.
Multiple talkers and multiple listeners.
This subgroup of TC80/WG6 has specified the use of Ethernet for shipboard navigational networks. The specification describes the transport of NMEA sentences as defined in 61162-1 over IPv4. Due to the low amount of protocol complexity it has been nicknamed Lightweight Ethernet or LWE in short. [ 2 ] [ 3 ]
There are three revisions: the original published in 2011, and updates in 2018 and 2024. [ 4 ]
IEC 61162-460:2015(E) is an add-on to the IEC 61162-450 standard. This standard extends the informative guidance given in Annex D of IEC 61162-450:2011. The first edition was published in August 2015.
As of May 2016, the first bridge and system manufacturers were beginning to implement IEC 61162-450 and IEC 61162-460.
IEC 61162-460:2024 (also referred to as IEC 61162-460 Edition 3.0) is an updated add-on to IEC 61162-450. This standard extends the informative guidance given in Annex D of IEC 61162-450:2011. The third edition was published in April 2024.
Known devices with -450 implementation:
Known devices/systems with -460 implementation: | https://en.wikipedia.org/wiki/IEC_61162 |
IEC 61508 is an international standard published by the International Electrotechnical Commission (IEC) consisting of methods on how to apply, design, deploy and maintain automatic protection systems called safety-related systems. It is titled Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems ( E/E/PE , or E/E/PES ).
IEC 61508 is a basic functional safety standard applicable to all industries. It defines functional safety as: “part of the overall safety relating to the EUC (Equipment Under Control) and the EUC control system which depends on the correct functioning of the E/E/PE safety-related systems, other technology safety-related systems and external risk reduction facilities.” The fundamental concept is that any safety-related system must work correctly or fail in a predictable (safe) way.
The standard has two fundamental principles:
The safety life cycle has 16 phases which roughly can be divided into three groups as follows:
All phases are concerned with the safety function of the system.
The standard has seven parts:
Central to the standard are the concepts of probabilistic risk for each safety function. The risk is a function of frequency (or likelihood) of the hazardous event and the event consequence severity. The risk is reduced to a tolerable level by applying safety functions which may consist of E/E/PES, associated mechanical devices, or other technologies. Many requirements apply to all technologies but there is strong emphasis on programmable electronics especially in Part 3.
IEC 61508 has the following views on risks:
Specific techniques ensure that mistakes and errors are avoided across the entire life-cycle. Errors introduced anywhere from the initial concept, risk analysis, specification, design, installation, maintenance and through to disposal could undermine even the most reliable protection. IEC 61508 specifies techniques that should be used for each phase of the life-cycle.
The seven parts of the first edition of IEC 61508 were published in 1998 and 2000. The second edition was published in 2010.
The standard requires that hazard and risk assessment be carried out for bespoke systems: 'The EUC (equipment under control) risk shall be evaluated, or estimated, for each determined hazardous event'.
The standard advises that 'Either qualitative or quantitative hazard and risk analysis techniques may be used' and offers guidance on a number of approaches. One of these, for the qualitative analysis of hazards, is a framework based on 6 categories of likelihood of occurrence and 4 of consequence.
Categories of likelihood of occurrence
Consequence categories
These are typically combined into a risk class matrix, in which each combination of likelihood and consequence category is assigned a risk class.
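The normative likelihood and consequence tables are not reproduced above, so the following Python sketch only illustrates how such a qualitative matrix can be encoded; the category names, scoring, and class boundaries are invented placeholders, not values from IEC 61508.

```python
"""Illustrative qualitative risk-class matrix (hypothetical values, not the
normative IEC 61508 tables)."""

# Six likelihood categories and four consequence categories, as mentioned in the text.
LIKELIHOOD = ["incredible", "improbable", "remote", "occasional", "probable", "frequent"]
CONSEQUENCE = ["negligible", "marginal", "critical", "catastrophic"]

def risk_class(likelihood: str, consequence: str) -> str:
    """Map a (likelihood, consequence) pair to a risk class I..IV.

    The mapping below is a made-up example of how such a matrix could be
    encoded; a real project would transcribe the matrix agreed for the
    application, not this one.
    """
    l = LIKELIHOOD.index(likelihood)    # 0 = least likely
    c = CONSEQUENCE.index(consequence)  # 0 = least severe
    score = l + 2 * c                   # crude severity-weighted score (illustrative)
    if score >= 9:
        return "I"    # intolerable
    if score >= 6:
        return "II"   # undesirable
    if score >= 3:
        return "III"  # tolerable with review
    return "IV"       # negligible

print(risk_class("remote", "critical"))        # -> "II" with this example mapping
print(risk_class("frequent", "catastrophic"))  # -> "I"
```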
The safety integrity level (SIL) provides a target to attain for each safety function. A risk assessment effort yields a target SIL for each safety function. For any given design the achieved SIL is evaluated by three measures:
1. Systematic Capability (SC), which is a measure of design quality. Each device in the design has an SC rating, and the SIL of the safety function is limited to the smallest SC rating of the devices used. Requirements for SC are presented in a series of tables in Part 2 and Part 3. The requirements include appropriate quality control, management processes, validation and verification techniques, failure analysis, etc., so that one can reasonably justify that the final system attains the required SIL.
2. Architecture Constraints which are minimum levels of safety redundancy presented via two alternative methods - Route 1h and Route 2h.
3. Probability of Dangerous Failure Analysis [ 1 ]
The probability metric used in step 3 above depends on whether the functional component will be exposed to high or low demand:
Note the difference between function and system. The system implementing the function might be in operation frequently (like an ECU for deploying an air-bag), but the function (like air-bag deployment) might be in demand intermittently.
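As an illustration of how the two demand modes differ, the sketch below maps a calculated failure measure onto a SIL band. The decade-wide bands used here are the commonly quoted IEC 61508 target ranges (PFDavg for low demand, PFH per hour for high or continuous demand), but the function names are ad hoc and the standard itself remains the normative source.

```python
"""Illustrative only: map a calculated dangerous-failure measure to a SIL band."""
from typing import Optional

def sil_from_pfd_avg(pfd_avg: float) -> Optional[int]:
    """Low-demand mode: SIL whose PFDavg band contains the value, else None."""
    bands = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}
    for sil, (lo, hi) in bands.items():
        if lo <= pfd_avg < hi:
            return sil
    return None  # outside the SIL 1..4 bands

def sil_from_pfh(pfh_per_hour: float) -> Optional[int]:
    """High-demand/continuous mode: SIL whose PFH band contains the value."""
    bands = {4: (1e-9, 1e-8), 3: (1e-8, 1e-7), 2: (1e-7, 1e-6), 1: (1e-6, 1e-5)}
    for sil, (lo, hi) in bands.items():
        if lo <= pfh_per_hour < hi:
            return sil
    return None

print(sil_from_pfd_avg(3.0e-3))  # -> 2
print(sil_from_pfh(5.0e-8))      # -> 3
```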
Certification is third party attestation that a product, process, or system meets all requirements of the certification program. Those requirements are listed in a document called the certification scheme. IEC 61508 certification programs are operated by impartial third party organizations called certification bodies (CB). These CBs are accredited to operate following other international standards including ISO/IEC 17065 and ISO/IEC 17025.

Certification bodies are accredited to perform the auditing, assessment, and testing work by an accreditation body (AB). There is often one national AB in each country. These ABs operate per the requirements of ISO/IEC 17011, a standard that contains requirements for the competence, consistency, and impartiality of accreditation bodies when accrediting conformity assessment bodies. ABs are members of the International Accreditation Forum (IAF) for work in management systems, products, services, and personnel accreditation, or the International Laboratory Accreditation Cooperation (ILAC) for laboratory accreditation. A Multilateral Recognition Arrangement (MLA) between ABs ensures global recognition of accredited CBs.

IEC 61508 certification programs have been established by several global certification bodies. Each has defined its own scheme based upon IEC 61508 and other functional safety standards. The scheme lists the referenced standards and specifies procedures describing their test methods, surveillance audit policy, public documentation policies, and other specific aspects of their program. IEC 61508 certification programs are offered globally by several recognized CBs including exida, Intertek , SGS-TÜV Saar , TÜV Nord, TÜV Rheinland, TÜV SÜD and UL .
ISO 26262 is an adaptation of IEC 61508 for Automotive Electric/Electronic Systems. It is being widely adopted by the major car manufacturers. [ 2 ]
Before the launch of ISO 26262, the development of software for safety related automotive systems was predominantly covered by the Motor Industry Software Reliability Association (MISRA) guidelines. [ 3 ] The MISRA project was conceived to develop guidelines for the creation of embedded software in road vehicle electronic systems. [ 3 ] A set of guidelines for the development of vehicle based software was published in November 1994. [ 4 ] This document provided the first automotive industry interpretation of the principles of the, then emerging, IEC 61508 standard. [ 3 ]
Today MISRA is most widely known for its guidelines on how to use the C and C++ languages. [ 5 ] MISRA C has gone on to become the de facto standard for embedded C programming in the majority of safety-related industries, and is also used to improve software quality even where safety is not the main consideration.
IEC 62279 provides a specific interpretation of IEC 61508 for railway applications. It is intended to cover the development of software for railway control and protection including communications, signaling and processing systems. EN 50128 and EN 50657 are equivalent CENELEC standards of IEC 62279. [ 6 ]
The process industry sector includes many types of manufacturing processes, such as refineries, petrochemical, chemical, pharmaceutical, pulp and paper, and power. IEC 61511 is a technical standard which sets out practices in the engineering of systems that ensure the safety of an industrial process through the use of instrumentation.
IEC 61513 provides requirements and recommendations for the instrumentation and control for systems important to safety of nuclear power plants. It indicates the general requirements for systems that contain conventional hardwired equipment, computer-based equipment or a combination of both types of equipment. An overview list of safety norms specific for nuclear power plants is published by ISO. [ 7 ]
IEC 62061 is the machinery-specific implementation of IEC 61508. It provides requirements that are applicable to the system level design of all types of machinery safety-related electrical control systems and also for the design of non-complex subsystems or devices.
Software written in accordance with IEC 61508 may need to be unit tested , depending upon the SIL it needs to achieve. The main requirement in unit testing is to ensure that the software is fully tested at the function level and that all possible branches and paths are taken through the software. In some higher-SIL applications, the code coverage requirement is stricter, and an MC/DC (modified condition/decision coverage) criterion is used rather than simple branch coverage. To obtain the MC/DC coverage information, one will need a unit testing tool, sometimes referred to as a software module testing tool.
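The following sketch contrasts branch coverage with MC/DC for a small, invented three-condition decision; the function and the test vectors are illustrative only and are not taken from the standard.

```python
"""Sketch: branch coverage vs. MC/DC for a compound decision (invented example)."""

def interlock_open(pressure_ok: bool, temperature_ok: bool, override: bool) -> bool:
    # A decision with three conditions.
    return (pressure_ok and temperature_ok) or override

# Branch coverage only needs the decision to evaluate both True and False:
branch_tests = [
    (True, True, False),    # decision True
    (False, False, False),  # decision False
]

# MC/DC additionally requires showing that each condition independently affects
# the outcome: toggle one condition, keep the others fixed, and the outcome flips.
mcdc_tests = [
    (True, True, False),    # baseline -> True
    (False, True, False),   # pressure_ok flipped -> False (independent effect shown)
    (True, False, False),   # temperature_ok flipped -> False
    (False, False, True),   # override True -> True
    (False, False, False),  # override flipped -> False
]

for tests, name in [(branch_tests, "branch"), (mcdc_tests, "MC/DC")]:
    outcomes = {interlock_open(*t) for t in tests}
    print(f"{name}: {len(tests)} vectors, outcomes covered: {sorted(outcomes)}")
```

In practice such vectors are derived and verified with a qualified coverage tool rather than by hand. | https://en.wikipedia.org/wiki/IEC_61508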
IEC 62264 is an international standard for enterprise control system integration. This standard is based upon ANSI/ISA-95 .
IEC 62264 consists of the following parts detailed in separate IEC 62264 standard documents:
| https://en.wikipedia.org/wiki/IEC_62264
IEC 62304 – medical device software – software life cycle processes [ 1 ] is an international standard published by the International Electrotechnical Commission (IEC). The standard specifies life cycle requirements for the development of medical software and software within medical devices. It has been adopted as national standards and therefore can be used as a benchmark to comply with regulatory requirements .
The IEC 62304 standard calls out certain cautions on using software, particularly SOUP ( software of unknown pedigree or provenance). The standard spells out a risk-based decision model on when the use of SOUP is acceptable, and defines testing requirements for SOUP to support a rationale on why such software should be used. [ 2 ]
Source: [ 3 ]
| https://en.wikipedia.org/wiki/IEC_62304
IEC 62379 is a control engineering standard for the common control interface for networked digital audio and video products. IEC 62379 uses Simple Network Management Protocol to communicate control and monitoring information.
It is a family of standards that specifies a control framework for networked audio and video equipment and is published by the International Electrotechnical Commission . It has been designed to provide a means for entering a common set of management commands to control the transmission across the network as well as other functions within the interfaced equipment.
The parts within this standard include:
Part one is common to all equipment that conforms to IEC 62379; a preview of the published document can be downloaded from the IEC web store, [ 1 ] a section of the International Electrotechnical Commission web site. More information is available at the project group web site. [ 2 ]
Part 2, Audio has now been published and a preview can be downloaded from the IEC web store, [ 3 ] a section the International Electrotechnical Commission web site.
A first edition of Part 3, Video has been submitted to the IEC International Electrotechnical Commission technical committee for the commencement of the standardization process for this part.
It contains the video MIB required by Part 7.
Part 7, Measurement, has been submitted to the IEC International Electrotechnical Commission technical committee for the commencement of the standardization process for this part.
This part specifies those aspects that are specific to the measurement requirements of the EBU ECN-IPM Group, a member of the Expert Communities Networks. [ 4 ] An associated document EBU TECH 3345 [ 5 ] has recently been published by the EBU European Broadcasting Union .
Part 3 (Document 100/1896/NP) and Part 7 (Document 100/1897/NP) have been approved by IEC TC 100. [ 6 ] [ failed verification ]
Part 5.2, Transmission over Networks - Signalling, has now been published and can be downloaded from the IEC web store, [ 7 ]
IEC 62379-3:2015 Common control interface for networked digital audio and video products - Part 3: Video has now been published and can be downloaded from the IEC web store. [ 8 ]
IEC 62379-7:2015 Common control interface for networked digital audio and video products - Part 7: Measurements has now been published and can be downloaded from the IEC web store. [ 9 ] IEC 62379-7:2015 is the standardised (and extended) version of EBU TECH 3345 - End-to-End IP Network Measurement - MIB & Parameters, which can be obtained from here: [ 10 ] published by the EBU European Broadcasting Union . | https://en.wikipedia.org/wiki/IEC_62379 |
IEEE 1451 is a set of smart transducer interface standards developed by the Institute of Electrical and Electronics Engineers (IEEE) Instrumentation and Measurement Society's Sensor Technology Technical Committee describing a set of open, common, network-independent communication interfaces for connecting transducers (sensors or actuators) to microprocessors, instrumentation systems, and control/field networks. One of the key elements of these standards is the definition of Transducer electronic data sheets (TEDS) for each transducer. The TEDS is a memory device attached to the transducer, which stores transducer identification, calibration, correction data, and manufacturer-related information. The goal of the IEEE 1451 family of standards is to allow the access of transducer data through a common set of interfaces whether the transducers are connected to systems or networks via a wired or wireless means.
A transducer electronic data sheet (TEDS) is a standardized method of storing transducer ( sensors or actuators ) identification, calibration, correction data, and manufacturer-related information. [ 1 ] TEDS formats are defined in the IEEE 1451 set of smart transducer interface standards developed by the IEEE Instrumentation and Measurement Society 's Sensor Technology Technical Committee that describe a set of open, common, network-independent communication interfaces for connecting transducers to microprocessors, instrumentation systems, and control/field networks.
One of the key elements of the IEEE 1451 standards is the definition of TEDS for each transducer. The TEDS can be implemented as a memory device attached to the transducer and containing information needed by a measurement instrument or control system to interface with a transducer. TEDS can, however, be implemented in two ways. First, the TEDS can reside in embedded memory, typically an EEPROM , within the transducer itself which is connected to the measurement instrument or control system. Second, a virtual TEDS can exist as a data file accessible by the measurement instrument or control system. A virtual TEDS extends the standardized TEDS to legacy sensors and applications where embedded memory may not be available.
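As a rough illustration of the kind of information a TEDS carries, the sketch below models a simplified, hypothetical record; the field names and the linear calibration are placeholders and do not follow the binary TEDS templates defined in the standards.

```python
"""Sketch of the kind of information a TEDS record carries (hypothetical fields)."""
from dataclasses import dataclass

@dataclass
class TransducerTEDS:             # simplified, illustrative representation only
    manufacturer_id: int          # identifies the manufacturer
    model_number: int             # identifies the transducer model
    serial_number: int            # unique per device
    measurement_unit: str         # e.g. "Pa" or "m/s^2"
    min_value: float              # lower end of the measurement range
    max_value: float              # upper end of the measurement range
    calibration_gain: float       # linear calibration: physical = gain*raw + offset
    calibration_offset: float

    def raw_to_physical(self, raw: float) -> float:
        """Apply the stored linear calibration to a raw reading."""
        return self.calibration_gain * raw + self.calibration_offset

# An instrument that reads such a record can configure itself without manual entry:
teds = TransducerTEDS(0x1A2B, 4711, 12345, "Pa", 0.0, 1.0e6,
                      calibration_gain=250.0, calibration_offset=-5.0)
print(teds.raw_to_physical(3.2))  # -> 795.0
```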
The 1451 family of standards includes: [ 2 ] | https://en.wikipedia.org/wiki/IEEE_1451 |
IEEE 1541-2002 is a standard issued in 2002 by the Institute of Electrical and Electronics Engineers (IEEE) concerning the use of prefixes for binary multiples of units of measurement related to digital electronics and computing . IEEE 1541-2021 revises and supersedes IEEE 1541–2002, which is 'inactive'. [ 1 ]
While the International System of Units (SI) defines multiples based on powers of ten (like k = 10 3 , M = 10 6 , etc.), a different definition is sometimes used in computing , based on powers of two (like k = 2 10 , M = 2 20 , etc.). This is due to binary nature of current computing systems, making powers of two the simplest to calculate.
In the early years of computing, there was no significant error in using the same prefix for either quantity (2 10 = 1,024 and 10 3 = 1000 are equal, to two significant figures ). Thus, the SI prefixes were borrowed to indicate nearby binary multiples for these computer-related quantities.
Meanwhile, manufacturers of storage devices, such as hard disks , traditionally used the standard decimal meanings of the prefixes, and decimal multiples are used for transmission rates and processor clock speeds as well. As technology improved, all of these measurements and capacities increased. As the binary meaning was extended to higher prefixes, the absolute error between the two meanings increased. This has even resulted in litigation against hard drive manufacturers, because some operating systems report the size using the larger binary interpretation.
Moreover, there is not a consistent use of the symbols to indicate quantities of bits and bytes – the unit symbol "Mb", for instance, has been widely used for both megabytes and megabits. IEEE 1541 sets new recommendations to represent these quantities and unit symbols unambiguously.
After a trial period of two years, in 2005, IEEE 1541-2002 was elevated to a full-use standard by the IEEE Standards Association, and was reaffirmed on 27 March 2008.
IEEE 1541 is closely related to Amendment 2 of the international standard IEC 60027 -2. Later, the IEC standard was harmonized into the common ISO / IEC 80000-13:2008 – Quantities and units – Part 13: Information science and technology . IEC 80000-13 uses 'bit' as the symbol for bit, as opposed to 'b'.
IEEE 1541 recommends:
The bi part of the prefix comes from the word binary, so for example, kibibyte means a kilobinary byte, that is 1024 bytes.
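The sketch below prints the same unit expressed with decimal (SI) and binary (IEC) prefixes, showing how the relative difference grows with each step; the prefix tables are restated here for illustration.

```python
"""Sketch: decimal (SI) vs. binary (IEC) prefixes for byte counts."""

SI  = [("kB", 10**3), ("MB", 10**6), ("GB", 10**9), ("TB", 10**12)]
IEC = [("KiB", 2**10), ("MiB", 2**20), ("GiB", 2**30), ("TiB", 2**40)]

for (si_sym, si_val), (iec_sym, iec_val) in zip(SI, IEC):
    diff = (iec_val - si_val) / si_val * 100
    print(f"1 {iec_sym} = {iec_val} bytes = {iec_val / si_val:.3f} {si_sym}"
          f"  (binary unit is {diff:.1f}% larger)")
```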
In 1998, the International Bureau of Weights and Measures (BIPM), one of the organizations that maintain SI, published a brochure stating, among other things, that SI prefixes strictly refer to powers of ten and should not be used to indicate binary multiples, using as an example that 1 kilobit is 1000 bits and not 1024 bits. [ 2 ]
The binary prefixes have been adopted by the European Committee for Electrotechnical Standardization ( CENELEC ) as the harmonization document HD 60027-2:2003-03. [ 3 ] Adherence to this standard implies that binary prefixes would be used for powers of two and SI prefixes for powers of ten. This document has been adopted as a European standard . [ 4 ]
The IEC binary prefixes (kibi, mebi, ...) are gaining acceptance in open source software and in scientific literature. Elsewhere adoption has been slow, with some operating systems , most notably Windows , continuing to use SI prefixes (kilo, mega, ...) for binary multiples.
Supporters of IEEE 1541 emphasize that the standard solves the confusion of units in the market place. Some software (most notably free and open source ) uses the decimal SI prefixes and binary prefixes according to the standard. [ 5 ] | https://en.wikipedia.org/wiki/IEEE_1541 |
Interval arithmetic (also known as interval mathematics; interval analysis or interval computation ) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing function bounds . Numerical methods involving interval arithmetic can guarantee relatively reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic or interval mathematics represents each value as a range of possibilities .
Mathematically, instead of working with an uncertain real-valued variable x {\displaystyle x} , interval arithmetic works with an interval [ a , b ] {\displaystyle [a,b]} that defines the range of values that x {\displaystyle x} can have. In other words, any value of the variable x {\displaystyle x} lies in the closed interval between a {\displaystyle a} and b {\displaystyle b} . A function f {\displaystyle f} , when applied to x {\displaystyle x} , produces an interval [ c , d ] {\displaystyle [c,d]} which includes all the possible values for f ( x ) {\displaystyle f(x)} for all x ∈ [ a , b ] {\displaystyle x\in [a,b]} .
Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations ) and optimization problems .
The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset.
This treatment is typically limited to real intervals, so quantities in the form [ a , b ] = { x ∈ R ∣ a ≤ x ≤ b } ,
where a = − ∞ {\displaystyle a={-\infty }} and b = ∞ {\displaystyle b={\infty }} are allowed. With one of a {\displaystyle a} , b {\displaystyle b} infinite, the interval would be an unbounded interval; with both infinite, the interval would be the extended real number line. Since a real number r {\displaystyle r} can be interpreted as the interval [ r , r ] , {\displaystyle [r,r],} intervals and real numbers can be freely combined.
Consider the calculation of a person's body mass index (BMI). BMI is calculated as a person's body weight in kilograms divided by the square of their height in meters. Suppose a person uses a scale that has a precision of one kilogram, where intermediate values cannot be discerned, and the true weight is rounded to the nearest whole number. For example, 79.6 kg and 80.3 kg are indistinguishable, as the scale can only display values to the nearest kilogram. It is unlikely that when the scale reads 80 kg, the person has a weight of exactly 80.0 kg. Thus, the scale displaying 80 kg indicates a weight between 79.5 kg and 80.5 kg, or the interval [ 79.5 , 80.5 ) {\displaystyle [79.5,80.5)} .
The BMI of a man who weighs 80 kg and is 1.80 m tall is approximately 24.7. A weight of 79.5 kg and the same height yields a BMI of 24.537, while a weight of 80.5 kg yields 24.846. Since the BMI is continuous and monotonically increasing in weight over the specified interval, the true BMI must lie within the interval [ 24.537 , 24.846 ] {\displaystyle [24.537,24.846]} . Since the entire interval is less than 25, which is the cutoff between normal and excessive weight, it can be concluded with certainty that the man is of normal weight.
The error in this example does not affect the conclusion (normal weight), but this is not generally true. If the man were slightly heavier, the BMI's range may include the cutoff value of 25. In such a case, the scale's precision would be insufficient to make a definitive conclusion.
The range of BMI examples could be reported as [ 24.5 , 24.9 ] {\displaystyle [24.5,24.9]} since this interval is a superset of the calculated interval. The range could not, however, be reported as [ 24.6 , 24.8 ] {\displaystyle [24.6,24.8]} , as the interval does not contain possible BMI values.
Height and body weight both affect the value of the BMI. Though the example above only considered variation in weight, height is also subject to uncertainty. Height measurements in meters are usually rounded to the nearest centimeter: a recorded measurement of 1.79 meters represents a height in the interval [ 1.785 , 1.795 ) {\displaystyle [1.785,1.795)} . Since the BMI uniformly increases with respect to weight and decreases with respect to height, the error interval can be calculated by substituting the lowest and highest values of each interval, and then selecting the lowest and highest results as boundaries. The BMI must therefore exist in the interval [ 79.5 / 1.795 2 , 80.5 / 1.785 2 ] ≈ [ 24.674 , 25.265 ] .
In this case, the man may have normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion.
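The following sketch reproduces this calculation; since the BMI increases with weight and decreases with height, the bounds are obtained by pairing the extreme endpoints accordingly.

```python
"""Sketch: the BMI example computed with interval endpoints."""

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

weight = (79.5, 80.5)    # scale reads 80 kg, precision 1 kg
height = (1.785, 1.795)  # tape reads 1.79 m, precision 1 cm

# BMI increases with weight and decreases with height, so:
bmi_lo = bmi(weight[0], height[1])  # lightest weight, tallest height
bmi_hi = bmi(weight[1], height[0])  # heaviest weight, shortest height
print(f"BMI in [{bmi_lo:.3f}, {bmi_hi:.3f}]")   # roughly [24.674, 25.265]
```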
A binary operation ⋆ {\displaystyle \star } on two intervals, such as addition or multiplication, is defined by [ x 1 , x 2 ] ⋆ [ y 1 , y 2 ] = { x ⋆ y ∣ x ∈ [ x 1 , x 2 ] , y ∈ [ y 1 , y 2 ] } .
In other words, it is the set of all possible values of x ⋆ y {\displaystyle x\star y} , where x {\displaystyle x} and y {\displaystyle y} are in their corresponding intervals. If ⋆ {\displaystyle \star } is monotone for each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0 {\displaystyle 0} ), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is [ x 1 , x 2 ] ⋆ [ y 1 , y 2 ] = [ min ( x 1 ⋆ y 1 , x 1 ⋆ y 2 , x 2 ⋆ y 1 , x 2 ⋆ y 2 ) , max ( x 1 ⋆ y 1 , x 1 ⋆ y 2 , x 2 ⋆ y 1 , x 2 ⋆ y 2 ) ] ,
provided that x ⋆ y {\displaystyle x\star y} is defined for all x ∈ [ x 1 , x 2 ] {\displaystyle x\in [x_{1},x_{2}]} and y ∈ [ y 1 , y 2 ] {\displaystyle y\in [y_{1},y_{2}]} .
For practical applications, this can be simplified further:
[ x 1 , x 2 ] + [ y 1 , y 2 ] = [ x 1 + y 1 , x 2 + y 2 ] ,
[ x 1 , x 2 ] − [ y 1 , y 2 ] = [ x 1 − y 2 , x 2 − y 1 ] ,
[ x 1 , x 2 ] ⋅ [ y 1 , y 2 ] = [ min ( x 1 y 1 , x 1 y 2 , x 2 y 1 , x 2 y 2 ) , max ( x 1 y 1 , x 1 y 2 , x 2 y 1 , x 2 y 2 ) ] ,
[ x 1 , x 2 ] / [ y 1 , y 2 ] = [ x 1 , x 2 ] ⋅ ( 1 / [ y 1 , y 2 ] ) , where 1 / [ y 1 , y 2 ] = [ 1 / y 2 , 1 / y 1 ] if 0 ∉ [ y 1 , y 2 ] , and 1 / [ y 1 , y 2 ] = [ − ∞ , ∞ ] if 0 ∈ [ y 1 , y 2 ] .
The last case loses useful information about the exclusion of ( 1 / y 1 , 1 / y 2 ) {\displaystyle (1/y_{1},1/y_{2})} . Thus, it is common to work with [ − ∞ , 1 y 1 ] {\displaystyle \left[-\infty ,{\tfrac {1}{y_{1}}}\right]} and [ 1 y 2 , ∞ ] {\displaystyle \left[{\tfrac {1}{y_{2}}},\infty \right]} as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form ⋃ i [ a i , b i ] . {\textstyle \bigcup _{i}\left[a_{i},b_{i}\right].} The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite. [ 1 ]
Interval multiplication often only requires two multiplications. If x 1 {\displaystyle x_{1}} , y 1 {\displaystyle y_{1}} are nonnegative, [ x 1 , x 2 ] ⋅ [ y 1 , y 2 ] = [ x 1 ⋅ y 1 , x 2 ⋅ y 2 ] .
The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.
With the help of these definitions, it is already possible to calculate the range of simple functions, such as f ( a , b , x ) = a ⋅ x + b . {\displaystyle f(a,b,x)=a\cdot x+b.} For example, if a = [ 1 , 2 ] {\displaystyle a=[1,2]} , b = [ 5 , 7 ] {\displaystyle b=[5,7]} and x = [ 2 , 3 ] {\displaystyle x=[2,3]} : f ( a , b , x ) = [ 1 , 2 ] ⋅ [ 2 , 3 ] + [ 5 , 7 ] = [ 2 , 6 ] + [ 5 , 7 ] = [ 7 , 13 ] .
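A minimal sketch of these endpoint formulas is given below (no outward rounding, and division is only defined when the divisor interval does not contain zero); it reproduces the example above.

```python
"""Minimal interval arithmetic on pairs of floats (no outward rounding)."""

class Interval:
    def __init__(self, lo: float, hi: float):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains 0")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# The example from the text: f(a, b, x) = a*x + b
a, b, x = Interval(1, 2), Interval(5, 7), Interval(2, 3)
print(a * x + b)   # -> [7, 13]
```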
To shorten the notation of intervals, brackets can be used.
[ x ] ≡ [ x 1 , x 2 ] {\displaystyle [x]\equiv [x_{1},x_{2}]} can be used to represent an interval. Note that in such a compact notation, [ x ] {\displaystyle [x]} should not be confused between a single-point interval [ x 1 , x 1 ] {\displaystyle [x_{1},x_{1}]} and a general interval. For the set of all intervals, we can use
as an abbreviation. For a vector of intervals ( [ x ] 1 , … , [ x ] n ) ∈ [ R ] n {\displaystyle \left([x]_{1},\ldots ,[x]_{n}\right)\in [\mathbb {R} ]^{n}} we can use a bold font: [ x ] {\displaystyle [\mathbf {x} ]} .
Interval functions beyond the four basic operators may also be defined.
For monotonic functions in one variable, the range of values is simple to compute. If f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is monotonically increasing (resp. decreasing) in the interval [ x 1 , x 2 ] , {\displaystyle [x_{1},x_{2}],} then for all y 1 , y 2 ∈ [ x 1 , x 2 ] {\displaystyle y_{1},y_{2}\in [x_{1},x_{2}]} such that y 1 < y 2 , {\displaystyle y_{1}<y_{2},} f ( y 1 ) ≤ f ( y 2 ) {\displaystyle f(y_{1})\leq f(y_{2})} (resp. f ( y 2 ) ≤ f ( y 1 ) {\displaystyle f(y_{2})\leq f(y_{1})} ).
The range corresponding to the interval [ y 1 , y 2 ] ⊆ [ x 1 , x 2 ] {\displaystyle [y_{1},y_{2}]\subseteq [x_{1},x_{2}]} can be therefore calculated by applying the function to its endpoints:
From this, the following basic features for interval functions can easily be defined:
For even powers, the range of values being considered is important and needs to be dealt with before doing any multiplication. For example, x n {\displaystyle x^{n}} for x ∈ [ − 1 , 1 ] {\displaystyle x\in [-1,1]} should produce the interval [ 0 , 1 ] {\displaystyle [0,1]} when n = 2 , 4 , 6 , … . {\displaystyle n=2,4,6,\ldots .} But if [ − 1 , 1 ] n {\displaystyle [-1,1]^{n}} is taken by repeating interval multiplication of form [ − 1 , 1 ] ⋅ [ − 1 , 1 ] ⋅ ⋯ ⋅ [ − 1 , 1 ] {\displaystyle [-1,1]\cdot [-1,1]\cdot \cdots \cdot [-1,1]} then the result is [ − 1 , 1 ] , {\displaystyle [-1,1],} wider than necessary.
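The sketch below contrasts a dedicated interval power function, which exploits monotonicity on each side of zero, with naive repeated multiplication; the function names are ad hoc.

```python
"""Sketch: interval power for even exponents vs. repeated multiplication."""

def interval_pow(lo: float, hi: float, n: int):
    """Exact range of x**n for x in [lo, hi]."""
    a, b = lo ** n, hi ** n
    if n % 2 == 0 and lo < 0 < hi:      # even power over an interval containing 0
        return (0.0, max(a, b))
    return (min(a, b), max(a, b))

def mul(x, y):
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

x = (-1.0, 1.0)
print(interval_pow(*x, 2))   # (0.0, 1.0)  -- exact range of x**2
print(mul(x, x))             # (-1.0, 1.0) -- wider than necessary
```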
More generally one can say that, for piecewise monotonic functions, it is sufficient to consider the endpoints x 1 {\displaystyle x_{1}} , x 2 {\displaystyle x_{2}} of an interval, together with the so-called critical points within the interval, being those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at ( 1 2 + n ) π {\displaystyle \left({\tfrac {1}{2}}+n\right)\pi } or n π {\displaystyle n\pi } for n ∈ Z {\displaystyle n\in \mathbb {Z} } , respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is [ − 1 , 1 ] {\displaystyle [-1,1]} if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values—namely −1, 0, and 1.
In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is a function from a real vector to a real number, then [ f ] : [ R ] n → [ R ] {\displaystyle [f]:[\mathbb {R} ]^{n}\to [\mathbb {R} ]} is called an interval extension of f {\displaystyle f} if
This definition of the interval extension does not give a precise result. For example, both [ f ] ( [ x 1 , x 2 ] ) = [ e x 1 , e x 2 ] {\displaystyle [f]([x_{1},x_{2}])=[e^{x_{1}},e^{x_{2}}]} and [ g ] ( [ x 1 , x 2 ] ) = [ − ∞ , ∞ ] {\displaystyle [g]([x_{1},x_{2}])=[{-\infty },{\infty }]} are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, [ f ] {\displaystyle [f]} should be chosen as it gives the tightest possible result.
Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions, and operators.
The Taylor interval extension (of degree k {\displaystyle k} ) of a k + 1 {\displaystyle k+1} times differentiable function f {\displaystyle f} is defined by
for some y ∈ [ x ] {\displaystyle \mathbf {y} \in [\mathbf {x} ]} , where D i f ( y ) {\displaystyle \mathrm {D} ^{i}f(\mathbf {y} )} is the i {\displaystyle i} -th order differential of f {\displaystyle f} at the point y {\displaystyle \mathbf {y} } and [ r ] {\displaystyle [r]} is an interval extension of the Taylor remainder.
The vector ξ {\displaystyle \xi } lies between x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } with x , y ∈ [ x ] {\displaystyle \mathbf {x} ,\mathbf {y} \in [\mathbf {x} ]} ; ξ {\displaystyle \xi } is therefore enclosed by [ x ] {\displaystyle [\mathbf {x} ]} .
Usually one chooses y {\displaystyle \mathbf {y} } to be the midpoint of the interval and uses the natural interval extension to assess the remainder.
The special case of the Taylor interval extension of degree k = 0 {\displaystyle k=0} is also referred to as the mean value form .
An interval can be defined as a set of points within a specified distance of the center, and this definition can be extended from real numbers to complex numbers . [ 2 ] Another extension defines intervals as rectangles in the complex plane. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers , there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers. [ 3 ] Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. One can either define complex interval arithmetic using rectangles or using disks, both with their respective advantages and disadvantages. [ 3 ]
The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic. [ 3 ] It can be shown that, as is the case with real interval arithmetic, there is no distributivity between the addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers. [ 3 ] Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties, of ordinary complex conjugates, do not hold for complex interval conjugates. [ 3 ]
Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions , but with the expense that we have to sacrifice other useful properties of ordinary arithmetic. [ 3 ]
The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.
To work effectively in a real-life implementation, intervals must be compatible with floating point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available for it. The range of values of the function f ( x , y ) = x + y {\displaystyle f(x,y)=x+y} for x ∈ [ 0.1 , 0.8 ] {\displaystyle x\in [0.1,0.8]} and y ∈ [ 0.06 , 0.08 ] {\displaystyle y\in [0.06,0.08]} is, for example, [ 0.16 , 0.88 ] {\displaystyle [0.16,0.88]} . If the same calculation is done with single-digit precision, the result would normally be [ 0.2 , 0.9 ] {\displaystyle [0.2,0.9]} . But [ 0.2 , 0.9 ] ⊉ [ 0.16 , 0.88 ] {\displaystyle [0.2,0.9]\not \supseteq [0.16,0.88]} , so this approach would contradict the basic principles of interval arithmetic, as part of the range of f ( [ 0.1 , 0.8 ] , [ 0.06 , 0.08 ] ) {\displaystyle f([0.1,0.8],[0.06,0.08])} would be lost. Instead, the outwardly rounded solution [ 0.1 , 0.9 ] {\displaystyle [0.1,0.9]} is used.
The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating-point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e., up), or rounding towards negative infinity (i.e., down).
The required external rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval [ ε 1 , ε 2 ] {\displaystyle [\varepsilon _{1},\varepsilon _{2}]} can be added.
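A simple, slightly pessimistic alternative to switching the processor rounding mode is to widen each computed endpoint by one unit in the last place, as sketched below using math.nextafter (available in Python 3.9 and later). Note that a fully rigorous implementation would also have to convert decimal inputs such as 0.1 outward, since they are not exactly representable in binary floating point.

```python
"""Sketch: outward rounding by widening each computed endpoint by one ulp."""
import math

def outward(lo: float, hi: float):
    # Nudge the lower bound down and the upper bound up by one representable step.
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

def add(x, y):
    return outward(x[0] + y[0], x[1] + y[1])

x, y = (0.1, 0.8), (0.06, 0.08)
naive = (x[0] + y[0], x[1] + y[1])  # endpoints rounded to nearest
safe = add(x, y)                    # widened by one ulp in each direction
print(naive)
print(safe)
print(safe[0] < naive[0] < naive[1] < safe[1])  # True: safe strictly encloses naive
```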
The so-called " dependency" problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently, then this can lead to an unwanted expansion of the resulting intervals.
As an illustration, take the function f {\displaystyle f} defined by f ( x ) = x 2 + x . {\displaystyle f(x)=x^{2}+x.} The values of this function over the interval [ − 1 , 1 ] {\displaystyle [-1,1]} are [ − 1 4 , 2 ] . {\displaystyle \left[-{\tfrac {1}{4}},2\right].} As the natural interval extension, it is calculated as [ − 1 , 1 ] 2 + [ − 1 , 1 ] = [ 0 , 1 ] + [ − 1 , 1 ] = [ − 1 , 2 ] ,
which is slightly larger; we have instead calculated the infimum and supremum of the function h ( x , y ) = x 2 + y {\displaystyle h(x,y)=x^{2}+y} over x , y ∈ [ − 1 , 1 ] . {\displaystyle x,y\in [-1,1].} There is a better expression of f {\displaystyle f} in which the variable x {\displaystyle x} only appears once, namely by completing the square and rewriting f ( x ) = x 2 + x {\displaystyle f(x)=x^{2}+x} as ( x + 1 2 ) 2 − 1 4 .
So the suitable interval calculation is [ f ] ( [ − 1 , 1 ] ) = ( [ − 1 , 1 ] + 1 2 ) 2 − 1 4 = [ − 1 2 , 3 2 ] 2 − 1 4 = [ 0 , 9 4 ] − 1 4 = [ − 1 4 , 2 ]
and gives the correct values.
In general, it can be shown that the exact range of values can be achieved, if each variable appears only once and if f {\displaystyle f} is continuous inside the box. However, not every function can be rewritten this way.
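The sketch below demonstrates the difference numerically: evaluating the natural extension of x² + x treats the two occurrences of x as independent, while the rewritten form gives the exact range.

```python
"""Sketch: dependency problem for f(x) = x**2 + x on [-1, 1]."""

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sqr(x):
    """Exact range of t**2 for t in the interval x."""
    lo, hi = x
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

x = (-1.0, 1.0)

natural = add(sqr(x), x)                                   # x**2 + x, x used twice
rewritten = add(sqr(add(x, (0.5, 0.5))), (-0.25, -0.25))   # (x + 1/2)**2 - 1/4
print(natural)    # (-1.0, 2.0)   -- overestimates
print(rewritten)  # (-0.25, 2.0)  -- exact
```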
The dependency problem, by causing over-estimation of the value range, can go as far as covering a large range, preventing more meaningful conclusions.
An additional increase in the range stems from the solution of areas that do not take the form of an interval vector. The solution set of the linear system
is precisely the line between the points ( − 1 , − 1 ) {\displaystyle (-1,-1)} and ( 1 , 1 ) . {\displaystyle (1,1).} Using interval methods results in the unit square, [ − 1 , 1 ] × [ − 1 , 1 ] . {\displaystyle [-1,1]\times [-1,1].} This is known as the wrapping effect .
A linear interval system consists of a matrix interval extension [ A ] ∈ [ R ] n × m {\displaystyle [\mathbf {A} ]\in [\mathbb {R} ]^{n\times m}} and an interval vector [ b ] ∈ [ R ] n {\displaystyle [\mathbf {b} ]\in [\mathbb {R} ]^{n}} . We want the smallest cuboid [ x ] ∈ [ R ] m {\displaystyle [\mathbf {x} ]\in [\mathbb {R} ]^{m}} containing all vectors x ∈ R m {\displaystyle \mathbf {x} \in \mathbb {R} ^{m}} for which there is a pair ( A , b ) {\displaystyle (\mathbf {A} ,\mathbf {b} )} with A ∈ [ A ] {\displaystyle \mathbf {A} \in [\mathbf {A} ]} and b ∈ [ b ] {\displaystyle \mathbf {b} \in [\mathbf {b} ]} satisfying A ⋅ x = b .
For square systems – in other words, for n = m {\displaystyle n=m} – there can be such an interval vector [ x ] {\displaystyle [\mathbf {x} ]} , which covers all possible solutions, found simply with the interval Gauss method. This replaces the numerical operations, in that the linear algebra method known as Gaussian elimination becomes its interval version. However, since this method uses the interval entities [ A ] {\displaystyle [\mathbf {A} ]} and [ b ] {\displaystyle [\mathbf {b} ]} repeatedly in the calculation, it can produce poor results for some problems. Hence the result of the interval-valued Gauss method only provides a first rough estimate, since although it contains the entire solution set, it also has a large area outside it.
A rough solution [ x ] {\displaystyle [\mathbf {x} ]} can often be improved by an interval version of the Gauss–Seidel method .
The motivation for this is that the i {\displaystyle i} -th row of the interval extension of the linear equation, [ a i 1 ] ⋅ x 1 + ⋯ + [ a i n ] ⋅ x n = [ b i ] , can be solved for the variable x i {\displaystyle x_{i}} if the division 1 / [ a i i ] {\displaystyle 1/[a_{ii}]} is allowed. It therefore holds simultaneously that x i ∈ [ x i ] and x i ∈ ( 1 / [ a i i ] ) ⋅ ( [ b i ] − ∑ j ≠ i [ a i j ] ⋅ [ x j ] ) .
So we can now replace [ x i ] by the intersection [ x i ] ∩ ( 1 / [ a i i ] ) ⋅ ( [ b i ] − ∑ j ≠ i [ a i j ] ⋅ [ x j ] ) ,
and so improve the vector [ x ] {\displaystyle [\mathbf {x} ]} element by element.
Since the procedure is more efficient for a diagonally dominant matrix , instead of the system [ A ] ⋅ x = [ b ] , {\displaystyle [\mathbf {A} ]\cdot \mathbf {x} =[\mathbf {b} ]{\mbox{,}}} one can often try multiplying it by an appropriate rational matrix M {\displaystyle \mathbf {M} } , leaving the resulting matrix equation ( M ⋅ [ A ] ) ⋅ x = M ⋅ [ b ] to solve. If one chooses, for example, M = A − 1 {\displaystyle \mathbf {M} =\mathbf {A} ^{-1}} for the central matrix A ∈ [ A ] {\displaystyle \mathbf {A} \in [\mathbf {A} ]} , then M ⋅ [ A ] {\displaystyle \mathbf {M} \cdot [\mathbf {A} ]} is an outer extension of the identity matrix.
These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals, it can be useful to reduce the interval-linear system to a finite (albeit large) number of equivalent real-valued linear systems. If all the matrices A ∈ [ A ] {\displaystyle \mathbf {A} \in [\mathbf {A} ]} are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors.
This is only suitable for systems of smaller dimension, since with a fully occupied n × n {\displaystyle n\times n} matrix, 2 n 2 {\displaystyle 2^{n^{2}}} real matrices need to be inverted, with 2 n {\displaystyle 2^{n}} vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being developed. [ 4 ]
An interval variant of Newton's method for finding the zeros in an interval vector [ x ] {\displaystyle [\mathbf {x} ]} can be derived from the mean value extension. [ 5 ] For an unknown vector z ∈ [ x ] {\displaystyle \mathbf {z} \in [\mathbf {x} ]} and a point y ∈ [ x ] {\displaystyle \mathbf {y} \in [\mathbf {x} ]} , it gives f ( z ) ∈ f ( y ) + [ J f ] ( [ x ] ) ⋅ ( z − y ) .
For a zero z {\displaystyle \mathbf {z} } , that is f ( z ) = 0 {\displaystyle f(z)=0} , the zero must therefore satisfy 0 ∈ f ( y ) + [ J f ] ( [ x ] ) ⋅ ( z − y ) .
This is equivalent to z ∈ y − [ J f ] ( [ x ] ) − 1 ⋅ f ( y ) {\displaystyle \mathbf {z} \in \mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )} .
An outer estimate of [ J f ] ( [ x ] ) − 1 ⋅ f ( y ) {\displaystyle [J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )} can be determined using linear methods.
In each step of the interval Newton method, an approximate starting value [ x ] ∈ [ R ] n {\displaystyle [\mathbf {x} ]\in [\mathbb {R} ]^{n}} is replaced by [ x ] ∩ ( y − [ J f ] ( [ x ] ) − 1 ⋅ f ( y ) ) {\displaystyle [\mathbf {x} ]\cap \left(\mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )\right)} and so the result can be improved. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result produces all zeros in the initial range. Conversely, it proves that no zeros of f {\displaystyle f} were in the initial range [ x ] {\displaystyle [\mathbf {x} ]} if a Newton step produces the empty set.
The method converges on all zeros in the starting region. Division by zero can lead to the separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method .
As an example, consider the function f ( x ) = x 2 − 2 {\displaystyle f(x)=x^{2}-2} , the starting range [ x ] = [ − 2 , 2 ] {\displaystyle [x]=[-2,2]} , and the point y = 0 {\displaystyle y=0} . We then have J f ( x ) = 2 x {\displaystyle J_{f}(x)=2\,x} and the first Newton step gives [ − 2 , 2 ] ∩ ( 0 − ( − 2 ) / [ − 4 , 4 ] ) = [ − 2 , 2 ] ∩ ( [ − ∞ , − 0.5 ] ∪ [ 0.5 , ∞ ] ) = [ − 2 , − 0.5 ] ∪ [ 0.5 , 2 ] .
More Newton steps are used separately on x ∈ [ − 2 , − 0.5 ] {\displaystyle x\in [{-2},{-0.5}]} and [ 0.5 , 2 ] {\displaystyle [{0.5},{2}]} . These converge to arbitrarily small intervals around − 2 {\displaystyle -{\sqrt {2}}} and + 2 {\displaystyle +{\sqrt {2}}} .
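The sketch below implements a one-dimensional interval Newton iteration for this example, restricted to the easy case in which the derivative enclosure does not contain zero (as on the subinterval [0.5, 2]), so no extended division or splitting is needed.

```python
"""Sketch: one-dimensional interval Newton iteration for f(x) = x**2 - 2."""

def mul(x, y):
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

def div(x, y):
    assert not (y[0] <= 0.0 <= y[1]), "derivative enclosure must not contain 0"
    return mul(x, (1.0 / y[1], 1.0 / y[0]))

def newton_step(x):
    f  = lambda t: t * t - 2.0
    df = lambda t: 2.0 * t
    y = 0.5 * (x[0] + x[1])                   # midpoint of the current enclosure
    jf = (df(x[0]), df(x[1]))                 # 2x is increasing, so this encloses f' on x
    q = div((f(y), f(y)), jf)                 # [f(y)] / [J_f]([x])
    n = (y - q[1], y - q[0])                  # Newton operator y - quotient
    return (max(x[0], n[0]), min(x[1], n[1])) # intersect with the old enclosure

x = (0.5, 2.0)
for _ in range(6):
    x = newton_step(x)
    print(x)          # shrinks toward sqrt(2) = 1.41421356...
```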
The Interval Newton method can also be used with thick functions such as g ( x ) = x 2 − [ 2 , 3 ] {\displaystyle g(x)=x^{2}-[2,3]} , which would in any case have interval results. The result then produces intervals containing [ − 3 , − 2 ] ∪ [ 2 , 3 ] {\displaystyle \left[-{\sqrt {3}},-{\sqrt {2}}\right]\cup \left[{\sqrt {2}},{\sqrt {3}}\right]} .
The various interval methods deliver conservative results as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals.
Covering an interval vector [ x ] {\displaystyle [\mathbf {x} ]} by smaller boxes [ x 1 ] , … , [ x k ] , {\displaystyle [\mathbf {x} _{1}],\ldots ,[\mathbf {x} _{k}],} so that [ x ] = ⋃ i = 1 k [ x i ] , the identity f ( [ x ] ) = ⋃ i = 1 k f ( [ x i ] ) is then valid for the range of values.
So, for the interval extensions described above, the following holds: f ( [ x ] ) ⊆ ⋃ i = 1 k [ f ] ( [ x i ] ) .
Since [ f ] ( [ x ] ) {\displaystyle [f]([\mathbf {x} ])} is often a genuine superset of the right-hand side, this usually leads to an improved estimate.
Such a cover can be generated by the bisection method, e.g. by splitting thick elements [ x i 1 , x i 2 ] {\displaystyle [x_{i1},x_{i2}]} of the interval vector [ x ] = ( [ x 11 , x 12 ] , … , [ x n 1 , x n 2 ] ) {\displaystyle [\mathbf {x} ]=([x_{11},x_{12}],\ldots ,[x_{n1},x_{n2}])} at the center into the two intervals [ x i 1 , 1 2 ( x i 1 + x i 2 ) ] {\displaystyle \left[x_{i1},{\tfrac {1}{2}}(x_{i1}+x_{i2})\right]} and [ 1 2 ( x i 1 + x i 2 ) , x i 2 ] . {\displaystyle \left[{\tfrac {1}{2}}(x_{i1}+x_{i2}),x_{i2}\right].} If the result is still not suitable then further gradual subdivision is possible. A cover of 2 r {\displaystyle 2^{r}} intervals results from r {\displaystyle r} divisions of vector elements, substantially increasing the computation costs.
With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing . This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
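The sketch below applies uniform subdivision ("mincing") to the earlier example f(x) = x² + x on [−1, 1]; as the number of subintervals grows, the union of the natural extensions approaches the exact range [−1/4, 2].

```python
"""Sketch: reducing overestimation by evaluating the natural extension on a
uniform subdivision of the input interval and taking the hull of the union."""

def hull_of_union(pieces):
    return (min(p[0] for p in pieces), max(p[1] for p in pieces))

def natural_ext(x):
    """Natural extension of f(t) = t*t + t (t is used twice)."""
    lo, hi = x
    if lo <= 0.0 <= hi:
        sq = (0.0, max(lo * lo, hi * hi))
    else:
        sq = (min(lo * lo, hi * hi), max(lo * lo, hi * hi))
    return (sq[0] + lo, sq[1] + hi)

def subdivide(lo, hi, k):
    w = (hi - lo) / k
    return [(lo + i * w, lo + (i + 1) * w) for i in range(k)]

for k in (1, 4, 16, 64):
    boxes = subdivide(-1.0, 1.0, k)
    print(k, hull_of_union([natural_ext(b) for b in boxes]))
# the enclosure shrinks toward the exact range [-0.25, 2] as k grows
```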
Interval arithmetic can be used in various areas (such as set inversion , motion planning , set estimation , or stability analysis) to treat estimates with no exact numerical value. [ 6 ]
Interval arithmetic is used with error analysis, to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives the current calculation of rounding errors directly:
Interval analysis adds to rather than substituting for traditional methods for error reduction, such as pivoting .
Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely. [ 1 ]
If the behavior of such a system affected by tolerances satisfies, for example, f ( x , p ) = 0 {\displaystyle f(\mathbf {x} ,\mathbf {p} )=0} , for p ∈ [ p ] {\displaystyle \mathbf {p} \in [\mathbf {p} ]} and unknown x {\displaystyle \mathbf {x} } , then the set of possible solutions, { x ∣ ∃ p ∈ [ p ] : f ( x , p ) = 0 } ,
can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation , interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered.
Interval arithmetic can also be used with membership functions for fuzzy quantities as they are used in fuzzy logic . Apart from the strict statements x ∈ [ x ] {\displaystyle x\in [x]} and x ∉ [ x ] {\displaystyle x\not \in [x]} , intermediate values are also possible, to which real numbers μ ∈ [ 0 , 1 ] {\displaystyle \mu \in [0,1]} are assigned. μ = 1 {\displaystyle \mu =1} corresponds to definite membership while μ = 0 {\displaystyle \mu =0} is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval.
For fuzzy arithmetic [ 7 ] only a finite number of discrete membership stages μ i ∈ [ 0 , 1 ] {\displaystyle \mu _{i}\in [0,1]} are considered. The form of such a distribution for an indistinct value can then be represented by a sequence of intervals.
The interval [ x ( i ) ] {\displaystyle \left[x^{(i)}\right]} corresponds exactly to the fluctuation range for the stage μ i . {\displaystyle \mu _{i}.}
The appropriate distribution for a function f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} concerning indistinct values x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} and the corresponding sequences.
can be approximated by the sequence.
where [ y ( i ) ] = [ f ] ( [ x 1 ( i ) ] , … , [ x n ( i ) ] )
and can be calculated by interval methods. The value [ y ( 1 ) ] {\displaystyle \left[y^{(1)}\right]} corresponds to the result of an interval calculation.
Warwick Tucker used interval arithmetic in order to solve the 14th of Smale's problems , that is, to show that the Lorenz attractor is a strange attractor . [ 8 ] Thomas Hales used interval arithmetic in order to solve the Kepler conjecture .
Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.
Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young. [ 9 ] Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer [ de ] ; [ 10 ] intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958). [ 11 ]
The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966. [ 12 ] [ 13 ] He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic. [ 14 ] Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.
Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals, [ 15 ] though Moore found the first non-trivial applications.
In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch [ 16 ] [ 17 ] and Götz Alefeld [ de ] [ 18 ] at the University of Karlsruhe and later also at the Bergische University of Wuppertal .
For example, Karl Nickel [ de ] explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimization, including what is now known as Hansen's method, perhaps the most widely used interval algorithm. [ 5 ] Classical methods in this often have the problem of determining the largest (or smallest) global value, but could only find a local optimum and could not find better values; Helmut Ratschek and Jon George Rokne developed branch and bound methods, which until then had only applied to integer values, by using intervals to provide applications for continuous values.
In 1988, Rudolf Lohner developed Fortran -based software for reliable solutions for initial value problems using ordinary differential equations . [ 19 ]
The journal Reliable Computing (originally Interval Computations ) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimization, has contributed significantly to the unification of notation and terminology used in interval arithmetic. [ 20 ]
In recent years work has concentrated in particular on the estimation of preimages of parameterized functions and to robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France. [ 21 ]
There are many software packages that permit the development of numerical applications using interval arithmetic. [ 22 ] These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.
Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran, and Pascal. [ 23 ] The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library C-XSC followed, supporting many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard.
Another C++-class library was created in 1993 at the Hamburg University of Technology called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user-friendly. It emphasized the efficient use of hardware, portability, and independence of a particular presentation of intervals.
The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language. [ 24 ]
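As a rough illustration of how such a library is used in practice, here is a minimal sketch based on the Boost interval template class; the header path, the interval<double> type and the lower/upper accessors are those provided by Boost.Numeric.Interval, while the specific values and the reliance on the default rounding policy are illustrative assumptions.

```cpp
#include <boost/numeric/interval.hpp>
#include <iostream>

int main() {
    using I = boost::numeric::interval<double>;

    I x(1.0, 2.0);    // the interval [1, 2]
    I y(0.5, 1.5);    // the interval [0.5, 1.5]

    I sum  = x + y;   // contains every possible a + b with a in x, b in y
    I prod = x * y;   // likewise for products

    std::cout << "sum  = [" << lower(sum)  << ", " << upper(sum)  << "]\n";  // [1.5, 3.5]
    std::cout << "prod = [" << lower(prod) << ", " << upper(prod) << "]\n";  // [0.5, 3]
}
```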
The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers . Programs written in Frink can use intervals without rewriting or recompilation.
GAOL [ 25 ] is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming .
The Moore library [ 26 ] is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the concepts feature of C++ .
The Julia programming language [ 27 ] has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package. [ 28 ]
In addition, computer algebra systems, such as Euler Mathematical Toolbox, FriCAS, Maple, Mathematica, Maxima [ 29 ] and MuPAD, can handle intervals. A Matlab extension Intlab [ 30 ] builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface. [ 30 ] [ 31 ]
A library for the functional language OCaml was written in assembly language and C. [ 32 ]
MPFI is a library for arbitrary precision interval arithmetic; it is written in C and is based on MPFR . [ 33 ]
A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015. [ 34 ] Two reference implementations are freely available. [ 35 ] These have been developed by members of the standard's working group: the libieeep1788 [ 36 ] library for C++, and the interval package [ 37 ] for GNU Octave.
A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed production of implementations. [ 38 ]
Several international conferences and workshops take place around the world every year. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), and REC (International Workshop on Reliable Engineering Computing). | https://en.wikipedia.org/wiki/IEEE_1788-2015 |
IEEE 754-1985 [ 1 ] is a historic industry standard for representing floating-point numbers in computers , officially adopted in 1985 and superseded in 2008 by IEEE 754-2008 , and then again in 2019 by minor revision IEEE 754-2019 . [ 2 ] During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries , and in hardware, in the instructions of many CPUs and FPUs . The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087 .
IEEE 754-1985 represents numbers in binary, providing definitions for four levels of precision, of which the two most commonly used are single precision (32 bits) and double precision (64 bits).
The standard also defines representations for positive and negative infinity, a "negative zero", five exceptions to handle invalid results like division by zero, special values called NaNs for representing those exceptions, denormal numbers to represent numbers smaller than the normalized minimum, and four rounding modes.
Floating-point numbers in IEEE 754 format consist of three fields: a sign bit , a biased exponent , and a fraction. The following example illustrates the meaning of each.
The decimal number 0.15625₁₀ represented in binary is 0.00101₂ (that is, 1/8 + 1/32). (Subscripts indicate the number base.) Analogous to scientific notation, where numbers are written to have a single non-zero digit to the left of the decimal point, we rewrite this number so it has a single 1 bit to the left of the "binary point". We simply multiply by the appropriate power of 2 to compensate for shifting the bits left by three positions: 0.00101₂ = 1.01₂ × 2^−3.
Now we can read off the fraction and the exponent: the fraction is .01₂ and the exponent is −3.
The three fields in the IEEE 754 single-precision representation of this number are therefore: sign = 0; biased exponent = −3 + 127 = 124, i.e. 01111100₂; and fraction = .01₂, stored as the 23-bit field 01000000000000000000000₂ (the leading 1 is implicit, as described below).
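The following is a minimal C++ sketch (assuming a C++20 compiler and that float is the IEEE binary32 format) that extracts these three fields from 0.15625 and prints the well-known bit pattern 0x3E200000; the variable names are illustrative.

```cpp
#include <bit>
#include <cstdint>
#include <cstdio>

int main() {
    float f = 0.15625f;   // 1.01_2 x 2^-3
    std::uint32_t bits = std::bit_cast<std::uint32_t>(f);

    std::uint32_t sign     = bits >> 31;           // 1 bit
    std::uint32_t exponent = (bits >> 23) & 0xFF;  // 8 bits, biased by 127
    std::uint32_t fraction = bits & 0x7FFFFF;      // 23 bits, leading 1 implicit

    std::printf("bits     = 0x%08X\n", bits);                      // 0x3E200000
    std::printf("sign     = %u\n", sign);                          // 0
    std::printf("exponent = %u (unbiased %d)\n",
                exponent, static_cast<int>(exponent) - 127);       // 124 (unbiased -3)
    std::printf("fraction = 0x%06X\n", fraction);                  // 0x200000, i.e. .01 in binary
}
```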
IEEE 754 adds a bias to the exponent so that numbers can in many cases be compared conveniently by the same hardware that compares signed 2's-complement integers. Using a biased exponent, the lesser of two positive floating-point numbers will come out "less than" the greater following the same ordering as for sign and magnitude integers. If two floating-point numbers have different signs, the sign-and-magnitude comparison also works with biased exponents. However, if both biased-exponent floating-point numbers are negative, then the ordering must be reversed. If the exponent were represented as, say, a 2's-complement number, comparison to see which of two numbers is greater would not be as convenient.
The leading 1 bit is omitted: since all numbers except zero start with a leading 1, the leading 1 is implicit and doesn't actually need to be stored, which gives an extra bit of precision for "free."
The number zero is represented specially: the biased exponent and fraction fields are both all 0 bits, and the sign bit distinguishes positive zero from negative zero.
The number representations described above are called normalized, meaning that the implicit leading binary digit is a 1. To reduce the loss of precision when an underflow occurs, IEEE 754 includes the ability to represent fractions smaller than are possible in the normalized representation, by making the implicit leading digit a 0. Such numbers are called denormal . They don't include as many significant digits as a normalized number, but they enable a gradual loss of precision when the result of an operation is not exactly zero but is too close to zero to be represented by a normalized number.
A denormal number is represented with a biased exponent of all 0 bits, which represents an exponent of −126 in single precision (not −127), or −1022 in double precision (not −1023). [ 3 ] In contrast, the smallest biased exponent representing a normal number is 1 (see examples below).
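To make the subnormal range concrete, here is a short sketch (again assuming an IEEE binary32 float and C++20); it shows the smallest positive denormal, 2^−149, whose encoding has an all-zero exponent field and only the lowest fraction bit set, next to the smallest positive normal number, 2^−126.

```cpp
#include <bit>
#include <cstdint>
#include <cstdio>
#include <limits>

int main() {
    float min_denormal = std::numeric_limits<float>::denorm_min();  // 2^-149
    float min_normal   = std::numeric_limits<float>::min();         // 2^-126

    std::printf("smallest denormal = %g, bits = 0x%08X\n",
                min_denormal, std::bit_cast<std::uint32_t>(min_denormal));  // 0x00000001
    std::printf("smallest normal   = %g, bits = 0x%08X\n",
                min_normal, std::bit_cast<std::uint32_t>(min_normal));      // 0x00800000
}
```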
The biased-exponent field is filled with all 1 bits to indicate either infinity or an invalid result of a computation.
Positive and negative infinity are represented thus: the biased exponent field is all 1 bits and the fraction field is all 0 bits, with the sign bit selecting +∞ or −∞.
Some operations of floating-point arithmetic are invalid, such as taking the square root of a negative number. The act of reaching an invalid result is called a floating-point exception. An exceptional result is represented by a special code called a NaN, for " Not a Number ". All NaNs in IEEE 754-1985 have this format: a biased exponent field of all 1 bits together with a non-zero fraction field (an all-zero fraction would instead denote an infinity); the sign bit may take either value.
Precision is defined as the minimum difference between two successive mantissa representations; thus it is a function only of the mantissa, while the gap is defined as the difference between two successive numbers. [ 4 ]
Single-precision numbers occupy 32 bits. In single precision there is 1 sign bit, an 8-bit biased exponent (bias 127), and a 23-bit fraction field, giving 24 bits of significand precision (roughly 7 decimal digits).
Some example range and gap values for given exponents in single precision:
As an example, 16,777,217 cannot be encoded as a 32-bit float as it will be rounded to 16,777,216. However, all integers within the representable range that are a power of 2 can be stored in a 32-bit float without rounding.
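This rounding behaviour can be checked directly with a few lines of C++ (assuming float is IEEE binary32): 16,777,217 = 2^24 + 1 needs 25 significant bits, one more than the format provides.

```cpp
#include <cstdio>

int main() {
    float a = 16777216.0f;   // 2^24, exactly representable
    float b = 16777217.0f;   // 2^24 + 1, needs 25 significand bits, so it is rounded

    std::printf("a = %.1f\n", a);          // 16777216.0
    std::printf("b = %.1f\n", b);          // 16777216.0 as well
    std::printf("a == b -> %d\n", a == b); // 1
}
```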
Double-precision numbers occupy 64 bits. In double precision there is 1 sign bit, an 11-bit biased exponent (bias 1023), and a 52-bit fraction field, giving 53 bits of significand precision (roughly 16 decimal digits).
Some example range and gap values for given exponents in double precision:
The standard also recommends extended format(s) to be used to perform internal computations at a higher precision than that required for the final result, to minimise round-off errors: the standard only specifies minimum precision and exponent requirements for such formats. The x87 80-bit extended format is the most commonly implemented extended format that meets these requirements.
Here are some examples of single-precision IEEE 754 representations:
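As concrete examples, the following sketch prints the single-precision bit patterns of a few representative values; the hexadecimal constants in the comments are the standard binary32 encodings, and the quiet-NaN pattern shown is merely the typical one (the standard leaves the payload bits open).

```cpp
#include <bit>
#include <cstdint>
#include <cstdio>
#include <limits>

static void show(const char* name, float f) {
    std::printf("%-20s 0x%08X\n", name, std::bit_cast<std::uint32_t>(f));
}

int main() {
    show("+0.0",              0.0f);                                      // 0x00000000
    show("-0.0",             -0.0f);                                      // 0x80000000
    show("1.0",               1.0f);                                      // 0x3F800000
    show("-2.0",             -2.0f);                                      // 0xC0000000
    show("smallest denormal", std::numeric_limits<float>::denorm_min());  // 0x00000001
    show("+infinity",         std::numeric_limits<float>::infinity());    // 0x7F800000
    show("quiet NaN",         std::numeric_limits<float>::quiet_NaN());   // typically 0x7FC00000
}
```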
Every possible bit combination is either a NaN or a number with a unique value in the affinely extended real number system with its associated order, except for the two combinations of bits for negative zero and positive zero, which sometimes require special attention (see below). The binary representation has the special property that, excluding NaNs, any two numbers can be compared as sign and magnitude integers ( endianness issues apply). When comparing as 2's-complement integers: If the sign bits differ, the negative number precedes the positive number, so 2's complement gives the correct result (except that negative zero and positive zero should be considered equal). If both values are positive, the 2's complement comparison again gives the correct result. Otherwise (two negative numbers), the correct FP ordering is the opposite of the 2's complement ordering.
Rounding errors inherent to floating point calculations may limit the use of comparisons for checking the exact equality of results. Choosing an acceptable range is a complex topic. A common technique is to use a comparison epsilon value to perform approximate comparisons. [ 6 ] Depending on how lenient the comparisons are, common values include 1e-6 or 1e-5 for single-precision, and 1e-14 for double-precision. [ 7 ] [ 8 ] Another common technique is ULP (units in the last place) comparison, which checks the difference in the last-place digits, effectively counting how many representable values apart the two numbers are. [ 9 ]
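A minimal sketch of both techniques follows; the tolerance value and the helper names are illustrative choices rather than anything mandated by the standard, and the ULP-distance helper assumes two finite doubles of the same sign.

```cpp
#include <bit>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Approximate comparison with an epsilon scaled by the magnitude of the operands.
static bool nearly_equal(double a, double b, double eps = 1e-14) {
    double scale = std::fmax(1.0, std::fmax(std::fabs(a), std::fabs(b)));
    return std::fabs(a - b) <= eps * scale;
}

// Number of representable doubles between a and b (finite values of the same sign).
static std::uint64_t ulp_distance(double a, double b) {
    auto ia = std::bit_cast<std::uint64_t>(a);
    auto ib = std::bit_cast<std::uint64_t>(b);
    return ia > ib ? ia - ib : ib - ia;
}

int main() {
    double x = 0.1 + 0.2;   // slightly above 0.3 in binary64
    double y = 0.3;

    std::printf("x == y       -> %d\n", x == y);                        // 0
    std::printf("nearly_equal -> %d\n", nearly_equal(x, y));            // 1
    std::printf("ulp distance -> %llu\n",
                static_cast<unsigned long long>(ulp_distance(x, y)));   // 1 on typical IEEE doubles
}
```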
Although negative zero and positive zero are generally considered equal for comparison purposes, some programming language relational operators and similar constructs treat them as distinct. According to the Java Language Specification, [ 10 ] comparison and equality operators treat them as equal, but Math.min() and Math.max() distinguish them (officially starting with Java version 1.1 but actually with 1.1.1), as do the comparison methods equals() , compareTo() and even compare() of classes Float and Double .
The IEEE standard has four different rounding modes; the first is the default; the others are called directed roundings: round to nearest (ties to even), round toward 0, round toward +∞ (round up), and round toward −∞ (round down).
The IEEE standard employs (and extends) the affinely extended real number system , with separate positive and negative infinities. During drafting, there was a proposal for the standard to incorporate the projectively extended real number system , with a single unsigned infinity, by providing programmers with a mode selection option. In the interest of reducing the complexity of the final standard, the projective mode was dropped, however. The Intel 8087 and Intel 80287 floating point co-processors both support this projective mode. [ 11 ] [ 12 ] [ 13 ]
The following functions must be provided: add, subtract, multiply, divide, square root, remainder, round to integer in floating-point format, conversions between the supported floating-point formats, conversions between floating-point and integer formats, conversion to and from decimal strings, and comparison.
In 1976, Intel was starting the development of a floating-point coprocessor . [ 15 ] [ 16 ] Intel hoped to be able to sell a chip containing good implementations of all the operations found in the widely varying maths software libraries. [ 15 ] [ 17 ]
John Palmer, who managed the project, believed the effort should be backed by a standard unifying floating point operations across disparate processors. He contacted William Kahan of the University of California, who had helped improve the accuracy of Hewlett-Packard's calculators. Kahan suggested that Intel use the floating point of Digital Equipment Corporation's (DEC) VAX. The first VAX, the VAX-11/780, had just come out in late 1977, and its floating point was highly regarded. However, seeking to market their chip to the broadest possible market, Intel wanted the best floating point possible, and Kahan went on to draw up specifications. [ 15 ] Kahan initially recommended that the floating point base be decimal, [ 18 ] but the hardware design of the coprocessor was too far along to make that change.
The work within Intel worried other vendors, who set up a standardization effort to ensure a "level playing field". Kahan attended the second IEEE 754 standards working group meeting, held in November 1977. He subsequently received permission from Intel to put forward a draft proposal based on his work for their coprocessor; he was allowed to explain details of the format and its rationale, but not anything related to Intel's implementation architecture. The draft was co-written with Jerome Coonen and Harold Stone , and was initially known as the "Kahan-Coonen-Stone proposal" or "K-C-S format". [ 15 ] [ 16 ] [ 17 ] [ 19 ]
As an 8-bit exponent was not wide enough for some operations desired for double-precision numbers, e.g. to store the product of two 32-bit numbers, [ 20 ] both Kahan's proposal and a counter-proposal by DEC therefore used 11 bits, like the time-tested 60-bit floating-point format of the CDC 6600 from 1965. [ 16 ] [ 19 ] [ 21 ] Kahan's proposal also provided for infinities, which are useful when dealing with division-by-zero conditions; not-a-number values, which are useful when dealing with invalid operations; denormal numbers , which help mitigate problems caused by underflow; [ 19 ] [ 22 ] [ 23 ] and a better balanced exponent bias , which can help avoid overflow and underflow when taking the reciprocal of a number. [ 24 ] [ 25 ]
Even before it was approved, the draft standard had been implemented by a number of manufacturers. [ 26 ] [ 27 ] The Intel 8087, which was announced in 1980, was the first chip to implement the draft standard.
By 1980, the Intel 8087 chip had already been released, [ 28 ] but DEC remained opposed, to denormal numbers in particular, both because of performance concerns and because standardising on DEC's own format would have given DEC a competitive advantage.
The arguments over gradual underflow lasted until 1981 when an expert hired by DEC to assess it sided against the dissenters. DEC had the study done in order to demonstrate that gradual underflow was a bad idea, but the study concluded the opposite, and DEC gave in. In 1985, the standard was ratified, but it had already become the de facto standard a year earlier, implemented by many manufacturers. [ 16 ] [ 19 ] [ 5 ] | https://en.wikipedia.org/wiki/IEEE_754-1985 |
IEEE 754-2008 (previously known as IEEE 754r ) is a revision of the IEEE 754 standard for floating-point arithmetic .
It was published in August 2008 and is a significant revision to, and replaces, the IEEE 754-1985 standard.
The 2008 revision extended the previous standard where it was necessary, added decimal arithmetic and formats, tightened up certain areas of the original standard which were left undefined, and merged in IEEE 854 (the radix-independent floating-point standard).
In a few cases, where stricter definitions of binary floating-point arithmetic might be performance-incompatible with some existing implementation, they were made optional.
In 2019, it was updated with a minor revision IEEE 754-2019 . [ 1 ]
The standard had been under revision since 2000, with a target completion date of December 2006. The revision of an IEEE standard broadly follows three phases:
On 11 June 2008, it was approved unanimously by the IEEE Revision Committee (RevCom), and it was formally approved by the IEEE-SA Standards Board on 12 June 2008. It was published on 29 August 2008.
Participation in drafting the standard was open to people with a solid knowledge of floating-point arithmetic. More than 90 people attended at least one of the monthly meetings, which were held in Silicon Valley , and many more participated through the mailing list.
Progress at times was slow, leading the chairman to declare at the 15 September 2005 meeting [ 2 ] that "no progress is being made, I am suspending these meetings until further notice on those grounds".
In December 2005, the committee reorganized under new rules with a target completion date of December 2006.
New policies and procedures were adopted in February 2006. In September 2006, a working draft was approved to be sent to the parent sponsoring committee (the IEEE Microprocessor Standards Committee, or MSC) for editing and to be sent to sponsor ballot.
The last version of the draft, version 1.2.5, submitted to the MSC was from 4 October 2006. [ 3 ] The MSC accepted the draft on 9 October 2006. The draft has been changed significantly in detail during the balloting process.
The first sponsor ballot took place from 29 November 2006 through 28 December 2006. Of the 84 members of the voting body, 85.7% responded—78.6% voted approval. There were negative votes (and over 400 comments) so there was a recirculation ballot in March 2007; this received an 84% approval. There were sufficient comments (over 130) from that ballot that a third draft was prepared for a second, 15-day, recirculation ballot which started in mid-April 2007. For a technical reason, the ballot process was restarted with the 4th ballot in October 2007; there were also substantial changes in the draft resulting from 650 voters' comments and from requests from the sponsor (the IEEE MSC); this ballot just failed to reach the required 75% approval. The 5th ballot had a 98.0% response rate with 91.0% approval, with comments leading to relatively small changes. The 6th, 7th, and 8th ballots sustained approval ratings of over 90% with progressively fewer comments on each draft; the 8th (which had no in-scope comments: 9 were repeats of previous comments and one referred to material not in the draft) was submitted to the IEEE Standards Revision Committee ('RevCom') for approval as an IEEE standard.
The IEEE Standards Revision Committee (RevCom) considered and unanimously approved the IEEE 754r draft at its June 2008 meeting, and it was approved by the IEEE-SA Standards Board on 12 June 2008. Final editing is complete and the document has now been forwarded to the IEEE Standards Publications Department for publication.
The new IEEE 754 (formally IEEE Std 754-2008, the IEEE Standard for Floating-Point Arithmetic) was published by the IEEE Computer Society on 29 August 2008, and is available from the IEEE Xplore website. [ 4 ]
This standard replaces IEEE 754-1985. IEEE 854, the Radix-Independent floating-point standard, was withdrawn in December 2008.
The most obvious enhancements to the standard are the addition of a 16-bit and a 128-bit binary type and three decimal types, some new operations, and many recommended functions. However, there have been significant clarifications in terminology throughout. This summary highlights the main differences in each major clause of the standard.
The scope (determined by the sponsor of the standard) has been widened to include decimal formats and arithmetic, and adds extendable formats.
Many of the definitions have been rewritten for clarification and consistency. A few terms have been renamed for clarity (for example, denormalized has been renamed to subnormal ).
The description of formats has been made more regular, with a distinction between arithmetic formats (in which arithmetic may be carried out) and interchange formats (which have a standard encoding). Conformance to the standard is now defined in these terms.
The specification levels of a floating-point format have been enumerated, to clarify the distinction between:
The sets of representable entities are then explained in detail, showing that they can be treated with the significand being considered either as a fraction or an integer. The particular sets known as basic formats are defined, and the encodings used for interchange of binary and decimal formats are explained.
The binary interchange formats have the " half precision " (16-bit storage format) and " quad precision " (128-bit format) added, together with generalized formulae for some wider formats; the basic formats have 32-bit, 64-bit, and 128-bit encodings.
Three new decimal formats are described, matching the lengths of the 32–128-bit binary formats. These give decimal interchange formats with 7, 16, and 34-digit significands, which may be normalized or unnormalized. For maximum range and precision, the formats merge part of the exponent and significand into a combination field, and compress the remainder of the significand using either a decimal integer encoding (based on Densely Packed Decimal, or DPD, a compressed form of BCD) or a conventional binary integer encoding. The basic formats are the two larger sizes, which have 64-bit and 128-bit encodings. Generalized formulae for some other interchange formats are also specified.
Extended and extendable formats allow for arithmetic at other precisions and ranges.
This clause has been changed to encourage the use of static attributes for controlling floating-point operations, and (in addition to required rounding attributes) allow for alternate exception handling, widening of intermediate results, value-changing optimizations, and reproducibility.
The round-to-nearest, ties away from zero rounding attribute has been added (required for decimal operations only).
This section has numerous clarifications (notably in the area of comparisons), and several previously recommended operations (such as copy, negate, abs, and class) are now required.
New operations include fused multiply–add (FMA), explicit conversions, classification predicates (isNaN( x ), etc.), various min and max functions, a total ordering predicate, and two decimal-specific operations (sameQuantum and quantize).
The min and max operations are defined but leave some leeway for the case where the inputs are equal in value but differ in representation. In particular:
In order to support operations such as windowing in which a NaN input should be quietly replaced with one of the end points, min and max are defined to select a number, x , in preference to a quiet NaN: min( x , qNaN) = min(qNaN, x ) = x , and likewise for max.
These functions are called minNum and maxNum to indicate their preference for a number over a quiet NaN. However, in the presence of a signaling NaN input, a quiet NaN is returned as with the usual operations. After the publication of the standard, it was noticed that these rules make these operations non-associative; for this reason, they have been replaced by new operations in IEEE 754-2019 .
Decimal arithmetic, compatible with that used in Java , C# , PL/I , COBOL , Python , REXX , etc., is also defined in this section. In general, decimal arithmetic follows the same rules as binary arithmetic (results are correctly rounded, and so on), with additional rules that define the exponent of a result (more than one is possible in many cases).
Unlike in 854, 754-2008 requires correctly rounded base conversion between decimal and binary floating point within a range which depends on the format.
This clause has been revised and clarified, but with no major additions. In particular, it makes formal recommendations for the encoding of the signaling/quiet NaN state.
This clause has been revised and considerably clarified, but with no major additions.
This clause has been extended from the previous Clause 8 ('Traps') to allow optional exception handling in various forms, including traps and other models such as try/catch. Traps and other exception mechanisms remain optional, as they were in IEEE 754-1985.
This clause is new; it recommends fifty operations, including log, power, and trigonometric functions, that language standards should define. These are all optional (none are required in order to conform to the standard). The operations include some on dynamic modes for attributes, and also a set of reduction operations (sum, scaled product, etc.).
This clause is new; it recommends how language standards should specify the semantics of sequences of operations, and points out the subtleties of literal meanings and optimizations that change the value of a result.
This clause is new; it recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language), and describes what needs to be done to achieve reproducible results.
This annex is new; it lists some useful references.
This annex is new; it provides guidance to debugger developers for features that are desired for supporting the debugging of floating-point code.
This is a new index, which lists all the operations described in the standard (required or optional).
Due to changes in CPU design and development, the 2008 IEEE floating-point standard could eventually come to be viewed as historical or outdated, much like the 1985 standard it replaced. There were many outside discussions and items not covered in the standardization process; the items below are the ones that became public knowledge: | https://en.wikipedia.org/wiki/IEEE_754-2008 |
The IEEE Standard for Floating-Point Arithmetic ( IEEE 754 ) is a technical standard for floating-point arithmetic originally established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably . Many hardware floating-point units use the IEEE 754 standard.
The standard defines: arithmetic formats (sets of binary and decimal floating-point data, consisting of finite numbers, infinities, and special "not a number" values), interchange formats (encodings that may be used to exchange floating-point data), rounding rules (properties to be satisfied when rounding numbers during arithmetic and conversions), operations (arithmetic and other operations on arithmetic formats), and exception handling (indications of exceptional conditions, such as division by zero and overflow).
IEEE 754-2008 , published in August 2008, includes nearly all of the original IEEE 754-1985 standard, plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic . The current version, IEEE 754-2019, was published in July 2019. [ 1 ] It is a minor revision of the previous version, incorporating mainly clarifications, defect fixes and new recommended operations.
The need for a floating-point standard arose from chaos in the business and scientific computing industry in the 1960s and 1970s. IBM used a hexadecimal floating-point format with a longer significand and a shorter exponent. CDC and Cray computers used ones' complement representation, which admits a value of +0 and −0. CDC 60-bit computers did not have full 60-bit adders, so integer arithmetic was limited to 48 bits of precision from the floating-point unit. Exception processing from divide-by-zero was different on different computers. Moving data between systems and even repeating the same calculations on different systems was often difficult.
The first IEEE standard for floating-point arithmetic, IEEE 754-1985 , was published in 1985. It covered only binary floating-point arithmetic.
A new version, IEEE 754-2008 , was published in August 2008, following a seven-year revision process, chaired by Dan Zuras and edited by Mike Cowlishaw . It replaced both IEEE 754-1985 (binary floating-point arithmetic) and IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic . The binary formats in the original standard are included in this new standard along with three new basic formats, one binary and two decimal. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format.
The international standard ISO/IEC/IEEE 60559:2011 (with content identical to IEEE 754-2008) has been approved for adoption through ISO / IEC JTC 1 /SC 25 under the ISO/IEEE PSDO Agreement [ 2 ] [ 3 ] and published. [ 4 ]
The current version, IEEE 754-2019 published in July 2019, is derived from and replaces IEEE 754-2008, following a revision process started in September 2015, chaired by David G. Hough and edited by Mike Cowlishaw. It incorporates mainly clarifications (e.g. totalOrder ) and defect fixes (e.g. minNum ), but also includes some new recommended operations (e.g. augmentedAddition ). [ 5 ] [ 6 ]
The international standard ISO/IEC 60559:2020 (with content identical to IEEE 754-2019) has been approved for adoption through ISO/IEC JTC 1 /SC 25 and published. [ 7 ]
The next projected revision of the standard is in 2029. [ 8 ]
An IEEE 754 format is a "set of representations of numerical values and symbols". A format may also include how the set is encoded. [ 9 ]
A floating-point format is specified by a base (radix) b , which is either 2 (binary) or 10 (decimal), a precision p , and an exponent range from emin to emax , with emin = 1 − emax .
A format comprises finite numbers, each described by three integers: a sign s (0 or 1), a significand (coefficient) c with at most p digits in base b , and an exponent q satisfying emin ≤ q + p − 1 ≤ emax . The value of such a finite number is (−1)^ s × c × b ^ q . A format also includes two infinities (+∞ and −∞) and two kinds of NaN (a quiet NaN and a signaling NaN).
For example, if b = 10, p = 7, and emax = 96, then emin = −95, the significand satisfies 0 ≤ c ≤ 9 999 999 , and the exponent satisfies −101 ≤ q ≤ 90. Consequently, the smallest non-zero positive number that can be represented is 1×10^−101 , and the largest is 9999999×10^90 (9.999999×10^96 ), so the full range of numbers is −9.999999×10^96 through 9.999999×10^96 . The numbers − b ^(1− emax ) and b ^(1− emax ) (here, −1×10^−95 and 1×10^−95 ) are the smallest (in magnitude) normal numbers ; non-zero numbers between these smallest numbers are called subnormal numbers .
Some numbers may have several possible floating-point representations. For instance, if b = 10, and p = 7, then −12.345 can be represented by −12345×10 −3 , −123450×10 −4 , and −1234500×10 −5 . However, for most operations, such as arithmetic operations, the result (value) does not depend on the representation of the inputs.
For the decimal formats, any representation is valid, and the set of these representations is called a cohort . When a result can have several representations, the standard specifies which member of the cohort is chosen.
For the binary formats, the representation is made unique by choosing the smallest representable exponent allowing the value to be represented exactly. Further, the exponent is not represented directly, but a bias is added so that the smallest representable exponent is represented as 1, with 0 used for subnormal numbers. For numbers with an exponent in the normal range (the exponent field being neither all ones nor all zeros), the leading bit of the significand will always be 1. Consequently, a leading 1 can be implied rather than explicitly present in the memory encoding, and under the standard the explicitly represented part of the significand will lie between 0 and 1. This rule is called leading bit convention , implicit bit convention , or hidden bit convention . This rule allows the binary format to have an extra bit of precision. The leading bit convention cannot be used for the subnormal numbers as they have an exponent outside the normal exponent range and scale by the smallest represented exponent as used for the smallest normal numbers.
Due to the possibility of multiple encodings (at least in formats called interchange formats ), a NaN may carry other information: a sign bit (which has no meaning, but may be used by some operations) and a payload , which is intended for diagnostic information indicating the source of the NaN (but the payload may have other uses, such as NaN-boxing [ 10 ] [ 11 ] [ 12 ] ).
The standard defines five basic formats that are named for their numeric base and the number of bits used in their interchange encoding. There are three binary floating-point basic formats (encoded with 32, 64 or 128 bits) and two decimal floating-point basic formats (encoded with 64 or 128 bits). The binary32 and binary64 formats are the single and double formats of IEEE 754-1985 respectively. A conforming implementation must fully implement at least one of the basic formats.
The standard also defines interchange formats , which generalize these basic formats. [ 13 ] For the binary formats, the leading bit convention is required. The following table summarizes some of the possible interchange formats (including the basic formats).
In the table above, integer values are exact, whereas values in decimal notation (e.g. 1.0) are rounded values. The minimum exponents listed are for normal numbers; the special subnormal number representation allows even smaller (in magnitude) numbers to be represented with some loss of precision. For example, the smallest positive number that can be represented in binary64 is 2^−1074 ; contributions to the −1074 figure include the emin value −1022 and all but one of the 53 significand bits (2^(−1022 − (53 − 1)) = 2^−1074 ).
Decimal digits is the precision of the format expressed in terms of an equivalent number of decimal digits. It is computed as digits × log10( base ). E.g. binary128 has approximately the same precision as a 34-digit decimal number.
log10(MAXVAL) is a measure of the range of the encoding. Its integer part is the largest exponent shown on the output of a value in scientific notation with one leading digit in the significand before the decimal point (e.g. 1.698×10^38 is near the largest value in binary32, 9.999999×10^96 is the largest value in decimal32).
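For instance, applying this formula to the binary formats gives the familiar decimal-digit figures (a routine calculation, shown here only for illustration):

```latex
\begin{align*}
\text{binary32}:&\quad 24 \times \log_{10} 2 \approx 7.22\ \text{decimal digits}\\
\text{binary64}:&\quad 53 \times \log_{10} 2 \approx 15.95\ \text{decimal digits}\\
\text{binary128}:&\quad 113 \times \log_{10} 2 \approx 34.02\ \text{decimal digits}
\end{align*}
```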
The binary32 (single) and binary64 (double) formats are two of the most common formats used today. The figure below shows the absolute precision for both formats over a range of values. This figure can be used to select an appropriate format given the expected value of a number and the required precision.
An example of a layout for 32-bit floating point is 1 sign bit, followed by an 8-bit biased exponent and a 23-bit fraction field (1 + 8 + 23 = 32 bits); the 64-bit layout is similar, with an 11-bit exponent and a 52-bit fraction (1 + 11 + 52 = 64 bits).
The standard specifies optional extended and extendable precision formats, which provide greater precision than the basic formats. [ 14 ] An extended precision format extends a basic format by using more precision and more exponent range. An extendable precision format allows the user to specify the precision and exponent range. An implementation may use whatever internal representation it chooses for such formats; all that needs to be defined are its parameters ( b , p , and emax ). These parameters uniquely describe the set of finite numbers (combinations of sign, significand, and exponent for the given radix) that it can represent.
The standard recommends that language standards provide a method of specifying p and emax for each supported base b . [ 15 ] The standard recommends that language standards and implementations support an extended format which has a greater precision than the largest basic format supported for each radix b . [ 16 ] For an extended format with a precision between two basic formats the exponent range must be as great as that of the next wider basic format. So for instance a 64-bit extended precision binary number must have an 'emax' of at least 16383. The x87 80-bit extended format meets this requirement.
The original IEEE 754-1985 standard also had the concept of extended formats , but without any mandatory relation between emin and emax . For example, the Motorola 68881 80-bit format, [ 17 ] where emin = − emax , was a conforming extended format, but it became non-conforming in the 2008 revision.
Interchange formats are intended for the exchange of floating-point data using a bit string of fixed length for a given format.
For the exchange of binary floating-point numbers, interchange formats of length 16 bits, 32 bits, 64 bits, and any multiple of 32 bits ≥ 128 [ e ] are defined. The 16-bit format is intended for the exchange or storage of small numbers (e.g., for graphics).
The encoding scheme for these binary interchange formats is the same as that of IEEE 754-1985: a sign bit, followed by w exponent bits that describe the exponent offset by a bias , and p − 1 bits that describe the significand. The width of the exponent field for a k -bit format is computed as w = round(4 × log2( k )) − 13. The existing 64- and 128-bit formats follow this rule, but the 16- and 32-bit formats have more exponent bits (5 and 8 respectively) than this formula would provide (3 and 7 respectively).
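Working this formula for a few widths reproduces the standard exponent-field sizes for the wider formats (shown for illustration; as noted above, the 16- and 32-bit formats deviate from it):

```latex
\begin{align*}
k = 64:&\quad w = \operatorname{round}(4\log_2 64) - 13 = 24 - 13 = 11\\
k = 128:&\quad w = \operatorname{round}(4\log_2 128) - 13 = 28 - 13 = 15\\
k = 256:&\quad w = \operatorname{round}(4\log_2 256) - 13 = 32 - 13 = 19
\end{align*}
```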
As with IEEE 754-1985, the biased-exponent field is filled with all 1 bits to indicate either infinity (trailing significand field = 0) or a NaN (trailing significand field ≠ 0). For NaNs, quiet NaNs and signaling NaNs are distinguished by using the most significant bit of the trailing significand field exclusively, [ f ] and the payload is carried in the remaining bits.
For the exchange of decimal floating-point numbers, interchange formats of any multiple of 32 bits are defined. As with binary interchange, the encoding scheme for the decimal interchange formats encodes the sign, exponent, and significand. Two different bit-level encodings are defined, and interchange is complicated by the fact that some external indicator of the encoding in use may be required.
The two options allow the significand to be encoded as a compressed sequence of decimal digits using densely packed decimal or, alternatively, as a binary integer . The former is more convenient for direct hardware implementation of the standard, while the latter is more suited to software emulation on a binary computer. In either case, the set of numbers (combinations of sign, significand, and exponent) that may be encoded is identical, and special values (±zero with the minimum exponent, ±infinity, quiet NaNs, and signaling NaNs) have identical encodings.
The standard defines five rounding rules. The first two rules round to a nearest value; the others are called directed roundings: round to nearest, ties to even; round to nearest, ties away from zero; round toward 0 (truncation); round toward +∞ (rounding up); and round toward −∞ (rounding down).
At the extremes, a value with a magnitude strictly less than k = b ^ emax × ( b − ½ b ^(1− p )) will be rounded to the minimum or maximum finite number (depending on the value's sign). Any numbers with exactly this magnitude are considered ties; this choice of tie may be conceptualized as the midpoint between ± b ^ emax × ( b − b ^(1− p )) and ± b ^( emax +1), which, were the exponent not limited, would be the next representable floating-point numbers larger in magnitude. Numbers with a magnitude strictly larger than k are rounded to the corresponding infinity. [ 18 ]
"Round to nearest, ties to even" is the default for binary floating point and the recommended default for decimal. "Round to nearest, ties to away" is only required for decimal implementations. [ 19 ]
Unless specified otherwise, the floating-point result of an operation is determined by applying the rounding function on the infinitely precise (mathematical) result. Such an operation is said to be correctly rounded . This requirement is called correct rounding . [ 20 ]
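From C++ the dynamic rounding direction can be exercised through the standard <cfenv> facilities; the sketch below uses the standard FE_* macros and a volatile operand to keep the division at run time, since whether constant folding and library code honour the mode is implementation-dependent (some compilers additionally want "#pragma STDC FENV_ACCESS ON").

```cpp
#include <cfenv>
#include <cstdio>

int main() {
    volatile double one = 1.0;       // volatile: force the divisions to happen at run time

    std::fesetround(FE_DOWNWARD);    // round toward −∞
    double lo = one / 3.0;

    std::fesetround(FE_UPWARD);      // round toward +∞
    double hi = one / 3.0;

    std::fesetround(FE_TONEAREST);   // restore the default mode
    std::printf("lo = %.17g\nhi = %.17g\n", lo, hi);
    std::printf("lo < hi -> %d\n", lo < hi);   // expected 1: the two roundings bracket 1/3
}
```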
Required operations for a supported arithmetic format (including the basic formats) include arithmetic operations (add, subtract, multiply, divide, square root, fused multiply–add, remainder, and others), conversions (between formats, to and from strings, etc.), scaling and (for decimal) quantizing, copying and manipulating the sign (abs, copySign, negate), comparisons and total ordering, classification of numbers and testing for NaNs, and testing and setting status flags.
The standard provides comparison predicates to compare one floating-point datum to another in the supported arithmetic format. [ 32 ] Any comparison with a NaN is treated as unordered. −0 and +0 compare as equal.
The standard provides a predicate totalOrder , which defines a total ordering on canonical members of the supported arithmetic format. [ 33 ] The predicate agrees with the comparison predicates (see section § Comparison predicates ) when one floating-point number is less than the other. The main differences are: [ 34 ]
The totalOrder predicate does not impose a total ordering on all encodings in a format. In particular, it does not distinguish among different encodings of the same floating-point representation, as when one or both encodings are non-canonical. [ 33 ] IEEE 754-2019 incorporates clarifications of totalOrder .
For the binary interchange formats whose encoding follows the IEEE 754-2008 recommendation on placement of the NaN signaling bit , the comparison is identical to one that type puns the floating-point numbers to a sign–magnitude integer (assuming a payload ordering consistent with this comparison), an old trick for FP comparison without an FPU. [ 35 ]
The standard defines five exceptions, each of which returns a default value and has a corresponding status flag that is raised when the exception occurs. [ g ] No other exception handling is required, but additional non-default alternatives are recommended (see § Alternate exception handling ).
The five possible exceptions are invalid operation (e.g., taking the square root of a negative number), division by zero, overflow (a result is too large to be represented), underflow (a result is very small, outside the normal range), and inexact (a result had to be rounded).
These are the same five exceptions as were defined in IEEE 754-1985, but the division by zero exception has been extended to operations other than the division.
Some decimal floating-point implementations define additional exceptions, [ 36 ] [ 37 ] which are not part of IEEE 754:
Additionally, operations like quantize when either operand is infinite, or when the result does not fit the destination format, will also signal invalid operation exception. [ 38 ]
In the IEEE 754 standard, zero is signed, meaning that there exist both a "positive zero" (+0) and a "negative zero" (−0). In most run-time environments , positive zero is usually printed as " 0 " and the negative zero as " -0 ". The two values behave as equal in numerical comparisons, but some operations return different results for +0 and −0. For instance, 1/(−0) returns negative infinity, while 1/(+0) returns positive infinity (so that the identity 1/(1/±∞) = ±∞ is maintained). Other common functions with a discontinuity at x = 0 which might treat +0 and −0 differently include Γ( x ) and the principal square root of y + xi for any negative number y . As with any approximation scheme, operations involving "negative zero" can occasionally cause confusion. For example, in IEEE 754, x = y does not always imply 1/ x = 1/ y , as 0 = −0 but 1/0 ≠ 1/(−0) . [ 39 ] Moreover, the reciprocal square root [ h ] of ±0 is ±∞ while the mathematical function 1/√ x over the real numbers does not have any negative value.
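The behaviour described above can be observed directly; this sketch assumes IEEE 754 doubles and a conforming C++ implementation.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double pz = +0.0;
    double nz = -0.0;

    std::printf("pz == nz    -> %d\n", pz == nz);                        // 1: +0 and -0 compare equal
    std::printf("1/pz        -> %g\n", 1.0 / pz);                        // inf
    std::printf("1/nz        -> %g\n", 1.0 / nz);                        // -inf
    std::printf("signbit(nz) -> %d\n", static_cast<int>(std::signbit(nz)));  // 1: the sign is still there
}
```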
Subnormal values fill the underflow gap with values where the absolute distance between them is the same as for adjacent values just outside the underflow gap. This is an improvement over the older practice to just have zero in the underflow gap, and where underflowing results were replaced by zero (flush to zero). [ 40 ]
Modern floating-point hardware usually handles subnormal values (as well as normal values), and does not require software emulation for subnormals.
The infinities of the extended real number line can be represented in IEEE floating-point datatypes, just like ordinary floating-point values like 1, 1.5, etc. They are not error values in any way, though they are often (depends on the rounding) used as replacement values when there is an overflow. Upon a divide-by-zero exception, a positive or negative infinity is returned as an exact result. An infinity can also be introduced as a numeral (like C's "INFINITY" macro, or " ∞ " if the programming language allows that syntax).
IEEE 754 requires infinities to be handled in a reasonable way, such as
IEEE 754 specifies a special value called "Not a Number" (NaN) to be returned as the result of certain "invalid" operations, such as 0/0, ∞×0 , or sqrt(−1). In general, NaNs will be propagated, i.e. most operations involving a NaN will result in a NaN, although functions that would give some defined result for any given floating-point value will do so for NaNs as well, e.g. NaN ^ 0 = 1. There are two kinds of NaNs: the default quiet NaNs and, optionally, signaling NaNs. A signaling NaN in any arithmetic operation (including numerical comparisons) will cause an "invalid operation" exception to be signaled.
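A short sketch of NaN generation and propagation follows (quiet NaNs only; observing signaling NaNs requires platform-specific support, and the variable names are illustrative):

```cpp
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    double zero = 0.0;
    double nan1 = std::numeric_limits<double>::quiet_NaN();
    double nan2 = std::sqrt(-1.0);   // invalid operation -> quiet NaN
    double nan3 = zero / zero;       // invalid operation -> quiet NaN

    std::printf("nan1 == nan1   -> %d\n", nan1 == nan1);                        // 0: NaN is unordered, even with itself
    std::printf("isnan(nan2)    -> %d\n", static_cast<int>(std::isnan(nan2)));  // 1
    std::printf("nan3 + 1.0     -> %g\n", nan3 + 1.0);                          // nan: NaNs propagate
    std::printf("pow(nan1, 0.0) -> %g\n", std::pow(nan1, 0.0));                 // 1: NaN^0 is defined to be 1
}
```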
The representation of NaNs specified by the standard has some unspecified bits that could be used to encode the type or source of error; but there is no standard for that encoding. In theory, signaling NaNs could be used by a runtime system to flag uninitialized variables, or extend the floating-point numbers with other special values without slowing down the computations with ordinary values, although such extensions are not common.
It is a common misconception that the more esoteric features of the IEEE 754 standard discussed here, such as extended formats, NaN, infinities, subnormals etc., are only of interest to numerical analysts , or for advanced numerical applications. In fact the opposite is true: these features are designed to give safe robust defaults for numerically unsophisticated programmers, in addition to supporting sophisticated numerical libraries by experts. The key designer of IEEE 754, William Kahan notes that it is incorrect to "... [deem] features of IEEE Standard 754 for Binary Floating-Point Arithmetic that ...[are] not appreciated to be features usable by none but numerical experts. The facts are quite the opposite. In 1977 those features were designed into the Intel 8087 to serve the widest possible market... Error-analysis tells us how to design floating-point arithmetic, like IEEE Standard 754, moderately tolerant of well-meaning ignorance among programmers". [ 41 ]
A property of the single- and double-precision formats is that their encoding allows one to easily sort them without using floating-point hardware, as if the bits represented sign-magnitude integers, although it is unclear whether this was a design consideration (it seems noteworthy that the earlier IBM hexadecimal floating-point representation also had this property for normalized numbers). With the prevalent two's-complement representation, interpreting the bits as signed integers sorts the positives correctly, but with the negatives reversed; as one possible correction for that, with an xor to flip the sign bit for positive values and all bits for negative values, all the values become sortable as unsigned integers (with −0 < +0 ). [ 35 ]
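A sketch of the correction just described: each binary32 bit pattern is mapped to an unsigned key whose integer order matches the floating-point order (NaNs are excluded, and −0/+0 map to adjacent keys rather than equal ones); the helper name is illustrative.

```cpp
#include <algorithm>
#include <bit>
#include <cstdint>
#include <cstdio>
#include <vector>

// Flip all bits of negative values, and only the sign bit of non-negative values,
// so that unsigned integer order matches floating-point order.
static std::uint32_t sortable_key(float f) {
    std::uint32_t u = std::bit_cast<std::uint32_t>(f);
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

int main() {
    std::vector<float> v{3.5f, -0.0f, -2.25f, 0.0f, 1e-40f /* subnormal */, -1e30f};
    std::sort(v.begin(), v.end(),
              [](float a, float b) { return sortable_key(a) < sortable_key(b); });
    for (float f : v) std::printf("%g ", f);   // -1e+30 -2.25 -0 0 1e-40 3.5
    std::printf("\n");
}
```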
The standard recommends optional exception handling in various forms, including presubstitution of user-defined default values, and traps (exceptions that change the flow of control in some way) and other exception handling models that interrupt the flow, such as try/catch. The traps and other exception mechanisms remain optional, as they were in IEEE 754-1985.
Clause 9 in the standard recommends additional mathematical operations [ 45 ] that language standards should define. [ 46 ] None are required in order to conform to the standard.
The following are recommended arithmetic operations, which must round correctly: [ 47 ]
The asinPi {\displaystyle \operatorname {asinPi} } , acosPi {\displaystyle \operatorname {acosPi} } and tanPi {\displaystyle \operatorname {tanPi} } functions were not part of the IEEE 754-2008 standard because they were deemed less necessary. [ 49 ] asinPi {\displaystyle \operatorname {asinPi} } and acosPi {\displaystyle \operatorname {acosPi} } were mentioned, but this was regarded as an error. [ 5 ] All three were added in the 2019 revision.
The recommended operations also include setting and accessing dynamic mode rounding direction, [ 50 ] and implementation-defined vector reduction operations such as sum, scaled product, and dot product , whose accuracy is unspecified by the standard. [ 51 ]
As of 2019 [update] , augmented arithmetic operations [ 52 ] for the binary formats are also recommended. These operations, specified for addition, subtraction and multiplication, produce a pair of values consisting of a result correctly rounded to nearest in the format and the error term, which is representable exactly in the format. At the time of publication of the standard, no hardware implementations are known, but very similar operations were already implemented in software using well-known algorithms. The history and motivation for their standardization are explained in a background document. [ 53 ] [ 54 ]
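The standard's augmented operations round to nearest with a different tie-breaking rule, but the long-known software analogue for addition is Knuth's TwoSum algorithm; a minimal Python sketch, valid under round-to-nearest and assuming no intermediate overflow:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's TwoSum: return (s, t) with s = fl(a + b) and a + b = s + t
    exactly, assuming round-to-nearest and no intermediate overflow."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    t = (a - a_virtual) + (b - b_virtual)
    return s, t

s, t = two_sum(1.0, 2.0 ** -60)
print(s, t)   # 1.0 8.673617379884035e-19  (rounded sum plus its exact rounding error)
```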
As of 2019, the formerly required minNum , maxNum , minNumMag , and maxNumMag in IEEE 754-2008 are now deprecated due to their non-associativity . Instead, two sets of new minimum and maximum operations are recommended. [ 55 ] The first set contains minimum , minimumNumber , maximum and maximumNumber . The second set contains minimumMagnitude , minimumMagnitudeNumber , maximumMagnitude and maximumMagnitudeNumber . The history and motivation for this change are explained in a background document. [ 56 ]
The standard recommends how language standards should specify the semantics of sequences of operations, and points out the subtleties of literal meanings and optimizations that change the value of a result. By contrast, the previous 1985 version of the standard left aspects of the language interface unspecified, which led to inconsistent behavior between compilers, or different optimization levels in an optimizing compiler .
Programming languages should allow a user to specify a minimum precision for intermediate calculations of expressions for each radix. This is referred to as preferredWidth in the standard, and it should be possible to set this on a per-block basis. Intermediate calculations within expressions should be calculated, and any temporaries saved, using the maximum of the width of the operands and the preferred width if set. Thus, for instance, a compiler targeting x87 floating-point hardware should have a means of specifying that intermediate calculations must use the double-extended format . The stored value of a variable must always be used when evaluating subsequent expressions, rather than any precursor from before rounding and assigning to the variable.
The IEEE 754-1985 version of the standard allowed many variations in implementations (such as the encoding of some values and the detection of certain exceptions). IEEE 754-2008 has reduced these allowances, but a few variations still remain (especially for binary formats). The reproducibility clause recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language) and describes what needs to be done to achieve reproducible results.
The standard requires operations to convert between basic formats and external character sequence formats. [ 57 ] Conversions to and from a decimal character format are required for all formats. Conversion to an external character sequence must be such that conversion back using round to nearest, ties to even will recover the original number. There is no requirement to preserve the payload of a quiet NaN or signaling NaN, and conversion from the external character sequence may turn a signaling NaN into a quiet NaN.
The original binary value will be preserved by converting to decimal and back again using 5 decimal digits for binary16, 9 for binary32, 17 for binary64, and 36 for binary128. [ 58 ]
For other binary formats, the required number of decimal digits is 1 + ⌈p log₁₀(2)⌉, [ i ]
where p is the number of significant bits in the binary format, e.g. 237 bits for binary256.
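As a quick check (an illustration, not normative text), the formula can be evaluated in Python for the standard binary formats:

```python
import math

def roundtrip_digits(p: int) -> int:
    # 1 + ceil(p * log10(2)): decimal digits sufficient for binary -> decimal
    # -> binary conversion to recover the original value.
    return 1 + math.ceil(p * math.log10(2))

for name, p in [("binary32", 24), ("binary64", 53), ("binary128", 113), ("binary256", 237)]:
    print(name, roundtrip_digits(p))   # 9, 17, 36 and 73 digits respectively
```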
When using a decimal floating-point format, the decimal representation will be preserved using 7 decimal digits for decimal32, 16 digits for decimal64, and 34 digits for decimal128.
Algorithms, with code, for correctly rounded conversion from binary to decimal and decimal to binary are discussed by Gay, [ 59 ] and for testing – by Paxson and Kahan. [ 60 ]
The standard recommends providing conversions to and from external hexadecimal-significand character sequences , based on C99 's hexadecimal floating point literals. Such a literal consists of an optional sign ( + or - ), the indicator "0x", a hexadecimal number with or without a period, an exponent indicator "p", and a decimal exponent with optional sign. The syntax is not case-sensitive. [ 61 ] The decimal exponent scales by powers of 2. For example, 0x0.1p0 is 1/16 and 0x0.1p-4 is 1/256. [ 62 ] | https://en.wikipedia.org/wiki/IEEE_754-2019 |
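Several languages already expose such conversions; for example, Python's float.hex and float.fromhex use essentially this C99-style syntax (shown here only as an illustration of the hexadecimal-significand format, not as part of the standard):

```python
print(float.fromhex("0x0.1p0"))    # 0.0625      (1/16)
print(float.fromhex("0x0.1p-4"))   # 0.00390625  (1/256)
print(float.fromhex("-0x1.8p+1"))  # -3.0        (-(1 + 8/16) * 2**1)
print((0.1).hex())                 # 0x1.999999999999ap-4  (the double nearest to 0.1, exactly)
```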
IEEE 802.11ad (also referred to by its subject name, directional multi-gigabit , i.e., DMG ) [ 1 ] is an amendment to the IEEE 802.11 wireless networking standard, developed to provide a Multiple Gigabit Wireless System (MGWS) standard in the 60 GHz band, and is a networking standard for WiGig networks. Because it uses the V band of the millimeter wave (mmW) spectrum, the range of IEEE 802.11ad communication is rather limited (just a few meters, with difficulty passing through obstacles and walls) compared to other conventional Wi-Fi systems. [ 2 ] [ 3 ] However, its large bandwidth enables the transmission of data at high data rates up to multiple gigabits per second , enabling usage scenarios like transmission of uncompressed UHD video over the wireless network. [ 4 ]
The WiGig standard was announced in 2009 and added to the IEEE 802.11 family in December 2012.
After revision, the 60 GHz band spans 57 to 71 GHz. In IEEE 802.11ad the band is subdivided into 6 (previously 4) channels, each of which occupies 2160 MHz of spectrum and provides 1760 MHz of bandwidth. [ 5 ] [ 6 ]
Some of these frequencies might not be available for IEEE 802.11ad networks everywhere in the world (they may be reserved for other purposes or require licenses). Below is a list of the unlicensed spectrum available to IEEE 802.11ad in different parts of the world: [ 7 ] | https://en.wikipedia.org/wiki/IEEE_802.11ad |
IEEE 802.11ay , Enhanced Throughput for Operation in License-exempt Bands above 45 GHz , is a follow-up to IEEE 802.11ad WiGig standard which quadruples the bandwidth and adds MIMO up to 8 streams. [ 1 ] [ 2 ] Development started in 2015 and the final standard IEEE 802.11ay-2021 was approved in March 2021.
802.11ay is a type of WLAN in the IEEE 802.11 family of Wi-Fi WLANs . It is an improvement on IEEE 802.11ad rather than a new standard. [ 3 ] [ 4 ] It uses the 60 GHz band [ 5 ] and has a transmission rate of 20–40 Gbit/s and an extended transmission distance of 300–500 meters. It includes mechanisms for channel bonding and MU-MIMO technologies. [ 2 ] It was originally expected to be released in 2017, but was delayed until 2021. [ 6 ]
Where 802.11ad uses a maximum of 2.16 GHz bandwidth, 802.11ay bonds four of those channels together for a maximum bandwidth of 8.64 GHz. MIMO is also added with a maximum of four streams. The link rate per stream is 44 Gbit/s; with four streams this rises to 176 Gbit/s. Higher-order modulation is also added, probably up to 256-QAM. [ 7 ]
Applications could include replacement for Ethernet and other cables within offices or homes, and provide backhaul connectivity outside for service providers. [ 8 ]
802.11ay should not be confused with the similarly named 802.11ax that was officially approved in 2021. The 802.11ay standard is designed to run at much higher frequencies. The lower frequency of 802.11ax enables it to penetrate walls somewhat, while 802.11ay is generally blocked by walls. [ 9 ]
Draft version 0.1 of 802.11ay was released in January 2017, followed by draft version 0.2 in March 2017. Draft version 1.0 was made available in November 2017, and draft 1.2 was available as of April 2018. [ 1 ] [ 10 ]
Draft version 7.0 was released in December 2020 and the Final 802 Working Group Approval was received in February 2021. [ 1 ] | https://en.wikipedia.org/wiki/IEEE_802.11ay |
IEEE 802.11bb is a line-of-sight light-based wireless networking standard that is part of the 802.11 suite of standards, which defines an interoperable communications protocol for Li-Fi devices. [ 1 ] Its proponents state that it will allow for very high speed communication that is faster than Wi-Fi . [ 2 ]
Li-Fi is intended to provide higher bandwidth than microwave-based wireless networking. To achieve faster speeds, the standard will likely need to adopt some of the technologies used in optical-fiber-based networking. Using multiple channels can yield extremely high aggregate speeds.
The 802.11bb standard describes the use of light in the near-infrared 800 to 1000 nm waveband to implement data rates between 10 Mbit/s and 9.6 Gbit/s, with interoperability between devices with different capabilities. [ 3 ] [ 4 ]
Development of 802.11bb was carried out by the IEEE 802.11 Light Communications Task Group. Companies participating in the standardization effort included pureLiFi and Fraunhofer HHI . [ 5 ]
| https://en.wikipedia.org/wiki/IEEE_802.11bb |
The IEEE Standard for Radix-Independent Floating-Point Arithmetic ( IEEE 854 ) was the first Institute of Electrical and Electronics Engineers (IEEE) international standard for floating-point arithmetic with radices other than 2, including radix 10. [ 1 ] IEEE 854 did not specify any data formats, whereas IEEE 754-1985 did specify formats for binary (radix 2) floating point. IEEE 754-1985 and IEEE 854-1987 were both superseded in 2008 by IEEE 754-2008 , [ 2 ] which specifies floating-point arithmetic for both radix 2 ( binary ) and radix 10 ( decimal ) and specifies two alternative formats for radix-10 floating-point values; that standard was in turn superseded by IEEE 754-2019 . [ 3 ] IEEE 754-2008 also made many other updates to the IEEE floating-point standardisation.
IEEE 854 arithmetic was first commercially implemented in the HP-71B handheld computer, which used decimal floating point with 12 digits of significand, and an exponent range of ±499, with a 15 digit significand used for intermediate results. | https://en.wikipedia.org/wiki/IEEE_854-1987 |
The IEEE Alexander Graham Bell Medal is an award honoring "exceptional contributions to communications and networking sciences and engineering" in the field of telecommunications . [ 1 ] The medal is one of the highest honors awarded by the Institute of Electrical and Electronics Engineers (IEEE) for achievements in telecommunication sciences and engineering.
It was instituted in 1976 by the directors of IEEE, commemorating the centennial of the invention of the telephone by Alexander Graham Bell . The award is presented either to an individual, or to a team of two or three persons. [ 1 ]
The institute's reasoning for the award was described thus:
The invention of the telephone by Alexander Graham Bell in 1876 was a major event in electrotechnology . It was instrumental in stimulating the broad telecommunications industry that has dramatically improved life throughout the world. As an individual, Bell himself exemplified the contributions that scientists and engineers have made to the betterment of mankind. [ 1 ]
Recipients of the award receive a gold medal, bronze replica, certificate, and an honorarium . [ 1 ]
As listed by the IEEE: [ 2 ] | https://en.wikipedia.org/wiki/IEEE_Alexander_Graham_Bell_Medal |
The IEEE Annals of the History of Computing is a quarterly peer-reviewed academic journal published by the IEEE Computer Society . It covers the history of computing , computer science , and computer hardware . It was founded in 1979 by the American Federation of Information Processing Societies .
The journal publishes scholarly articles, interviews, "think pieces", and memoirs by computer pioneers, and news and events in the field. It was established in July 1979 as Annals of the History of Computing , [ 1 ] with Bernard Galler as editor-in-chief . The journal became an IEEE publication in 1992, and was retitled to IEEE Annals of the History of Computing . The 2020 impact factor was 0.741. The current editor in chief is Troy Astarte at Swansea University in Wales.
| https://en.wikipedia.org/wiki/IEEE_Annals_of_the_History_of_Computing |
The IEEE Communications Magazine is a monthly magazine published by the IEEE Communications Society dealing with all areas of communications, including light-wave telecommunications , high-speed data communications , personal communications systems (PCS), ISDN , and more. It includes special features, technical articles, book reviews, conferences, short courses, standards, governmental regulations and legislation, new products, and Society news. The magazine has been published as IEEE Communications Magazine since 1979, replacing the IEEE Communications Society Magazine (1977–1978) and the Communications Society (1973–1976). According to the Journal Citation Reports , the magazine has a 2013 impact factor of 4.460. [ 1 ] It is abstracted and indexed in most of the major bibliographic databases . [ 2 ] The current editor-in-chief is Tarek S. El-Bawab.
| https://en.wikipedia.org/wiki/IEEE_Communications_Magazine |
The IEEE Journal of Oceanic Engineering is a journal published by the Institute of Electrical and Electronics Engineers . The journal's editor in chief is Associate Professor Mandar Chitre, of the National University of Singapore . [ 1 ] According to the Journal Citation Reports , the journal has a 2022 impact factor of 4.2. [ 2 ]
| https://en.wikipedia.org/wiki/IEEE_Journal_of_Oceanic_Engineering |
IEEE Magnetics Letters is a peer-reviewed scientific journal that was started in January 2010. [ 1 ] It covers the physics and engineering of magnetism , magnetic materials , applied magnetics, design and application of magnetic devices, biomagnetics, magneto-electronics, and spin electronics. [ 2 ] [ 1 ] It publishes short articles of up to five pages in length and is a hybrid open access journal . The editor-in-chief is Massimiliano d'Aquino ( University of Naples Federico II ). | https://en.wikipedia.org/wiki/IEEE_Magnetics_Letters |
IEEE MultiMedia is a quarterly peer-reviewed scientific journal published by the IEEE Computer Society and covering multimedia technologies. Topics of interest include image processing, video processing, audio analysis, text retrieval and understanding, data mining and analysis, and data fusion. It was established in 1994 and the current editor-in-chief is Shu-Ching Chen (Florida International University). The 2018 impact factor was 3.556.
| https://en.wikipedia.org/wiki/IEEE_MultiMedia |
Interval arithmetic (also known as interval mathematics; interval analysis or interval computation ) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing function bounds . Numerical methods involving interval arithmetic can guarantee relatively reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic or interval mathematics represents each value as a range of possibilities .
Mathematically, instead of working with an uncertain real-valued variable x {\displaystyle x} , interval arithmetic works with an interval [ a , b ] {\displaystyle [a,b]} that defines the range of values that x {\displaystyle x} can have. In other words, any value of the variable x {\displaystyle x} lies in the closed interval between a {\displaystyle a} and b {\displaystyle b} . A function f {\displaystyle f} , when applied to x {\displaystyle x} , produces an interval [ c , d ] {\displaystyle [c,d]} which includes all the possible values for f ( x ) {\displaystyle f(x)} for all x ∈ [ a , b ] {\displaystyle x\in [a,b]} .
Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations ) and optimization problems .
The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset.
This treatment is typically limited to real intervals, so quantities of the form [a, b] = {x ∈ ℝ : a ≤ x ≤ b}, where a = −∞ and b = +∞ are allowed. With one of a, b infinite, the interval would be an unbounded interval; with both infinite, the interval would be the extended real number line. Since a real number r can be interpreted as the interval [r, r], intervals and real numbers can be freely combined.
Consider the calculation of a person's body mass index (BMI). BMI is calculated as a person's body weight in kilograms divided by the square of their height in meters. Suppose a person uses a scale that has a precision of one kilogram, where intermediate values cannot be discerned, and the true weight is rounded to the nearest whole number. For example, 79.6 kg and 80.3 kg are indistinguishable, as the scale can only display values to the nearest kilogram. It is unlikely that when the scale reads 80 kg, the person has a weight of exactly 80.0 kg. Thus, the scale displaying 80 kg indicates a weight between 79.5 kg and 80.5 kg, or the interval [ 79.5 , 80.5 ) {\displaystyle [79.5,80.5)} .
The BMI of a man who weighs 80 kg and is 1.80 m tall is approximately 24.7. A weight of 79.5 kg and the same height yields a BMI of 24.537, while a weight of 80.5 kg yields 24.846. Since the BMI is continuous and strictly increasing in weight for all values within the specified weight interval, the true BMI must lie within the interval [24.537, 24.846]. Since the entire interval is less than 25, which is the cutoff between normal and excessive weight, it can be concluded with certainty that the man is of normal weight.
The error in this example does not affect the conclusion (normal weight), but this is not generally true. If the man were slightly heavier, the BMI's range may include the cutoff value of 25. In such a case, the scale's precision would be insufficient to make a definitive conclusion.
The range of BMI examples could be reported as [ 24.5 , 24.9 ] {\displaystyle [24.5,24.9]} since this interval is a superset of the calculated interval. The range could not, however, be reported as [ 24.6 , 24.8 ] {\displaystyle [24.6,24.8]} , as the interval does not contain possible BMI values.
Height and body weight both affect the value of the BMI. Though the example above only considered variation in weight, height is also subject to uncertainty. Height measurements in meters are usually rounded to the nearest centimeter: a recorded measurement of 1.79 meters represents a height in the interval [1.785, 1.795). Since the BMI uniformly increases with respect to weight and decreases with respect to height, the error interval can be calculated by substituting the lowest and highest values of each interval, and then selecting the lowest and highest results as boundaries. The BMI must therefore lie in the interval [79.5 / 1.795², 80.5 / 1.785²] ≈ [24.67, 25.27].
In this case, the man may have normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion.
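A minimal Python sketch of this endpoint reasoning (an illustration only; it relies on the BMI being monotone in each variable and ignores rounding of the quotients):

```python
def bmi_interval(w_lo, w_hi, h_lo, h_hi):
    # BMI = w / h**2 is increasing in the weight w and decreasing in the
    # height h, so the exact range comes from the interval endpoints.
    return w_lo / h_hi ** 2, w_hi / h_lo ** 2

lo, hi = bmi_interval(79.5, 80.5, 1.785, 1.795)   # scale shows 80 kg, height recorded as 1.79 m
print(round(lo, 3), round(hi, 3))                 # 24.674 25.265 -> the interval straddles 25
```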
A binary operation ⋆ on two intervals, such as addition or multiplication, is defined by [x₁, x₂] ⋆ [y₁, y₂] = { x ⋆ y : x ∈ [x₁, x₂], y ∈ [y₁, y₂] }. In other words, it is the set of all possible values of x ⋆ y, where x and y are in their corresponding intervals. If ⋆ is monotone for each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is [x₁, x₂] ⋆ [y₁, y₂] = [ min{x₁ ⋆ y₁, x₁ ⋆ y₂, x₂ ⋆ y₁, x₂ ⋆ y₂}, max{x₁ ⋆ y₁, x₁ ⋆ y₂, x₂ ⋆ y₁, x₂ ⋆ y₂} ],
provided that x ⋆ y {\displaystyle x\star y} is defined for all x ∈ [ x 1 , x 2 ] {\displaystyle x\in [x_{1},x_{2}]} and y ∈ [ y 1 , y 2 ] {\displaystyle y\in [y_{1},y_{2}]} .
For practical applications, this can be simplified further: addition gives [x₁, x₂] + [y₁, y₂] = [x₁ + y₁, x₂ + y₂]; subtraction gives [x₁, x₂] − [y₁, y₂] = [x₁ − y₂, x₂ − y₁]; multiplication gives [min S, max S] with S = {x₁y₁, x₁y₂, x₂y₁, x₂y₂}; and division by [y₁, y₂] is carried out by multiplying by the reciprocal, which is [1/y₂, 1/y₁] when 0 ∉ [y₁, y₂] and is taken as [−∞, ∞] in the simplest treatment when 0 ∈ [y₁, y₂].
The last case loses useful information about the exclusion of ( 1 / y 1 , 1 / y 2 ) {\displaystyle (1/y_{1},1/y_{2})} . Thus, it is common to work with [ − ∞ , 1 y 1 ] {\displaystyle \left[-\infty ,{\tfrac {1}{y_{1}}}\right]} and [ 1 y 2 , ∞ ] {\displaystyle \left[{\tfrac {1}{y_{2}}},\infty \right]} as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form ⋃ i [ a i , b i ] . {\textstyle \bigcup _{i}\left[a_{i},b_{i}\right].} The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite. [ 1 ]
Interval multiplication often requires only two multiplications. If x₁ and y₁ are nonnegative, [x₁, x₂] · [y₁, y₂] = [x₁ · y₁, x₂ · y₂].
The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.
With the help of these definitions, it is already possible to calculate the range of simple functions, such as f(a, b, x) = a · x + b. For example, if a = [1, 2], b = [5, 7] and x = [2, 3]: a · x + b = [1, 2] · [2, 3] + [5, 7] = [2, 6] + [5, 7] = [7, 13].
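The following Python sketch implements these endpoint formulas directly (an illustrative toy, not a particular library; outward rounding, discussed later, is ignored here):

```python
class Interval:
    """Toy closed-interval type implementing the endpoint formulas above."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        if o.lo <= 0 <= o.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1 / o.hi, 1 / o.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

a, b, x = Interval(1, 2), Interval(5, 7), Interval(2, 3)
print(a * x + b)   # [7, 13]
```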
To shorten the notation of intervals, brackets can be used.
[ x ] ≡ [ x 1 , x 2 ] {\displaystyle [x]\equiv [x_{1},x_{2}]} can be used to represent an interval. Note that in such a compact notation, [ x ] {\displaystyle [x]} should not be confused between a single-point interval [ x 1 , x 1 ] {\displaystyle [x_{1},x_{1}]} and a general interval. For the set of all intervals, we can use
as an abbreviation. For a vector of intervals ( [ x ] 1 , … , [ x ] n ) ∈ [ R ] n {\displaystyle \left([x]_{1},\ldots ,[x]_{n}\right)\in [\mathbb {R} ]^{n}} we can use a bold font: [ x ] {\displaystyle [\mathbf {x} ]} .
Interval functions beyond the four basic operators may also be defined.
For monotonic functions in one variable, the range of values is simple to compute. If f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is monotonically increasing (resp. decreasing) in the interval [ x 1 , x 2 ] , {\displaystyle [x_{1},x_{2}],} then for all y 1 , y 2 ∈ [ x 1 , x 2 ] {\displaystyle y_{1},y_{2}\in [x_{1},x_{2}]} such that y 1 < y 2 , {\displaystyle y_{1}<y_{2},} f ( y 1 ) ≤ f ( y 2 ) {\displaystyle f(y_{1})\leq f(y_{2})} (resp. f ( y 2 ) ≤ f ( y 1 ) {\displaystyle f(y_{2})\leq f(y_{1})} ).
The range corresponding to the interval [y₁, y₂] ⊆ [x₁, x₂] can therefore be calculated by applying the function to its endpoints: f([y₁, y₂]) = [ min{f(y₁), f(y₂)}, max{f(y₁), f(y₂)} ].
From this, the following basic features for interval functions can easily be defined:
For even powers, the range of values being considered is important and needs to be dealt with before doing any multiplication. For example, x n {\displaystyle x^{n}} for x ∈ [ − 1 , 1 ] {\displaystyle x\in [-1,1]} should produce the interval [ 0 , 1 ] {\displaystyle [0,1]} when n = 2 , 4 , 6 , … . {\displaystyle n=2,4,6,\ldots .} But if [ − 1 , 1 ] n {\displaystyle [-1,1]^{n}} is taken by repeating interval multiplication of form [ − 1 , 1 ] ⋅ [ − 1 , 1 ] ⋅ ⋯ ⋅ [ − 1 , 1 ] {\displaystyle [-1,1]\cdot [-1,1]\cdot \cdots \cdot [-1,1]} then the result is [ − 1 , 1 ] , {\displaystyle [-1,1],} wider than necessary.
More generally one can say that, for piecewise monotonic functions, it is sufficient to consider the endpoints x 1 {\displaystyle x_{1}} , x 2 {\displaystyle x_{2}} of an interval, together with the so-called critical points within the interval, being those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at ( 1 2 + n ) π {\displaystyle \left({\tfrac {1}{2}}+n\right)\pi } or n π {\displaystyle n\pi } for n ∈ Z {\displaystyle n\in \mathbb {Z} } , respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is [ − 1 , 1 ] {\displaystyle [-1,1]} if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values—namely −1, 0, and 1.
In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If f : ℝⁿ → ℝ is a function from a real vector to a real number, then [f] : [ℝ]ⁿ → [ℝ] is called an interval extension of f if [f]([x]) ⊇ { f(x) : x ∈ [x] }, that is, if the interval result always contains the true range of f over the box.
This definition of the interval extension does not give a precise result. For example, both [ f ] ( [ x 1 , x 2 ] ) = [ e x 1 , e x 2 ] {\displaystyle [f]([x_{1},x_{2}])=[e^{x_{1}},e^{x_{2}}]} and [ g ] ( [ x 1 , x 2 ] ) = [ − ∞ , ∞ ] {\displaystyle [g]([x_{1},x_{2}])=[{-\infty },{\infty }]} are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, [ f ] {\displaystyle [f]} should be chosen as it gives the tightest possible result.
Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions, and operators.
The Taylor interval extension (of degree k {\displaystyle k} ) is a k + 1 {\displaystyle k+1} times differentiable function f {\displaystyle f} defined by
for some y ∈ [ x ] {\displaystyle \mathbf {y} \in [\mathbf {x} ]} , where D i f ( y ) {\displaystyle \mathrm {D} ^{i}f(\mathbf {y} )} is the i {\displaystyle i} -th order differential of f {\displaystyle f} at the point y {\displaystyle \mathbf {y} } and [ r ] {\displaystyle [r]} is an interval extension of the Taylor remainder.
The vector ξ lies between x and y with x, y ∈ [x]; consequently, ξ is also contained in [x].
Usually one chooses y {\displaystyle \mathbf {y} } to be the midpoint of the interval and uses the natural interval extension to assess the remainder.
The special case of the Taylor interval extension of degree k = 0 {\displaystyle k=0} is also referred to as the mean value form .
An interval can be defined as a set of points within a specified distance of the center, and this definition can be extended from real numbers to complex numbers . [ 2 ] Another extension defines intervals as rectangles in the complex plane. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers , there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers. [ 3 ] Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. One can either define complex interval arithmetic using rectangles or using disks, both with their respective advantages and disadvantages. [ 3 ]
The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic. [ 3 ] It can be shown that, as is the case with real interval arithmetic, there is no distributivity between the addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers. [ 3 ] Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties, of ordinary complex conjugates, do not hold for complex interval conjugates. [ 3 ]
Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions , but with the expense that we have to sacrifice other useful properties of ordinary arithmetic. [ 3 ]
The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.
To work effectively in a real-life implementation, intervals must be compatible with floating-point computing. The earlier operations were based on exact arithmetic, but fast numerical solution methods may not generally be available for it. The range of values of the function f(x, y) = x + y for x ∈ [0.1, 0.8] and y ∈ [0.06, 0.08] is, for example, [0.16, 0.88]. If the same calculation is done with single-digit precision, the result would normally be [0.2, 0.9]. But [0.2, 0.9] ⊉ [0.16, 0.88],
so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of f ( [ 0.1 , 0.8 ] , [ 0.06 , 0.08 ] ) {\displaystyle f([0.1,0.8],[0.06,0.08])} would be lost. Instead, the outward rounded solution [ 0.1 , 0.9 ] {\displaystyle [0.1,0.9]} is used.
The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating-point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e., up), or rounding towards negative infinity (i.e., down).
The required external rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval [ ε 1 , ε 2 ] {\displaystyle [\varepsilon _{1},\varepsilon _{2}]} can be added.
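Where the rounding mode cannot conveniently be changed, a common and cruder alternative is to widen each computed endpoint by one unit in the last place; a Python sketch of this idea (an over-approximation rather than true directed rounding; math.nextafter requires Python 3.9 or later):

```python
import math

def add_outward(x_lo, x_hi, y_lo, y_hi):
    # Widen each computed endpoint by one ulp instead of switching the
    # hardware rounding mode; the result still encloses the exact range.
    return (math.nextafter(x_lo + y_lo, -math.inf),
            math.nextafter(x_hi + y_hi, math.inf))

lo, hi = add_outward(0.1, 0.8, 0.06, 0.08)
print(lo <= 0.16 and 0.88 <= hi)   # True: the enclosure contains the true range [0.16, 0.88]
```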
The so-called " dependency" problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently, then this can lead to an unwanted expansion of the resulting intervals.
As an illustration, take the function f defined by f(x) = x² + x. The values of this function over the interval [−1, 1] are [−1/4, 2]. As the natural interval extension, it is calculated as [−1, 1]² + [−1, 1] = [0, 1] + [−1, 1] = [−1, 2], which is slightly larger; we have instead calculated the infimum and supremum of the function h(x, y) = x² + y over x, y ∈ [−1, 1]. There is a better expression of f in which the variable x only appears once, namely by rewriting f(x) = x² + x as f(x) = (x + 1/2)² − 1/4, i.e. by completing the square.
So the suitable interval calculation is ([−1, 1] + 1/2)² − 1/4 = [−1/2, 3/2]² − 1/4 = [0, 9/4] − 1/4 = [−1/4, 2], which gives the correct range of values.
In general, it can be shown that the exact range of values can be achieved, if each variable appears only once and if f {\displaystyle f} is continuous inside the box. However, not every function can be rewritten this way.
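A small standalone Python sketch (rounding ignored) contrasting the two evaluations of f on [−1, 1]:

```python
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isquare(a):
    # Even power handled as a single operation, so the result is never negative.
    lo2, hi2 = sorted((a[0] * a[0], a[1] * a[1]))
    return (0.0, hi2) if a[0] <= 0.0 <= a[1] else (lo2, hi2)

x = (-1.0, 1.0)
half = (0.5, 0.5)
print(iadd(isquare(x), x))                           # (-1.0, 2.0): x used twice -> overestimate
print(iadd(isquare(iadd(x, half)), (-0.25, -0.25)))  # (-0.25, 2.0): the exact range
```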
The dependency problem, by causing over-estimation of the value range, can go so far that the result covers an uninformatively large range, preventing more meaningful conclusions.
An additional increase in the range stems from solution sets that do not take the form of an interval vector. The solution set of the linear system x₁ = p, x₂ = p for a parameter p ∈ [−1, 1]
is precisely the line between the points ( − 1 , − 1 ) {\displaystyle (-1,-1)} and ( 1 , 1 ) . {\displaystyle (1,1).} Using interval methods results in the unit square, [ − 1 , 1 ] × [ − 1 , 1 ] . {\displaystyle [-1,1]\times [-1,1].} This is known as the wrapping effect .
A linear interval system consists of a matrix interval extension [A] ∈ [ℝ]^(n×m) and an interval vector [b] ∈ [ℝ]ⁿ. We want the smallest cuboid [x] ∈ [ℝ]^m containing all vectors x ∈ ℝ^m for which there is a pair (A, b) with A ∈ [A] and b ∈ [b] satisfying A · x = b.
For square systems – in other words, for n = m – such an interval vector [x], covering all possible solutions, can be found simply with the interval Gauss method. This replaces the numerical operations, in that the linear algebra method known as Gaussian elimination becomes its interval version. However, since this method uses the interval entities [A] and [b] repeatedly in the calculation, it can produce poor results for some problems. Hence the result of the interval-valued Gauss method provides only a first rough estimate: although it contains the entire solution set, it also has a large area outside it.
A rough solution [ x ] {\displaystyle [\mathbf {x} ]} can often be improved by an interval version of the Gauss–Seidel method .
The motivation for this is that the i-th row of the interval extension of the linear equation, [a_{i1}] · x₁ + ⋯ + [a_{in}] · x_n = [b_i], can be solved for the variable x_i if the division 1/[a_{ii}] is allowed, that is, if 0 ∉ [a_{ii}]. It is therefore simultaneously the case that x_j ∈ [x_j] for every j and x_i ∈ ([b_i] − Σ_{j ≠ i} [a_{ij}] · [x_j]) / [a_{ii}]. So we can now replace [x_i] by [x_i] ∩ ([b_i] − Σ_{j ≠ i} [a_{ij}] · [x_j]) / [a_{ii}], and so update the vector [x] element by element.
Since the procedure is more efficient for a diagonally dominant matrix , instead of the system [A] · x = [b] one can often try multiplying it on the left by an appropriate rational matrix M, leaving the resulting matrix equation (M · [A]) · x = M · [b] to solve. If one chooses, for example, M = A⁻¹ for the central matrix A ∈ [A], then M · [A] is an outer extension of the identity matrix.
These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals, it can be useful to use an interval-linear system on finite (albeit large) real number equivalent linear systems. If all the matrices A ∈ [ A ] {\displaystyle \mathbf {A} \in [\mathbf {A} ]} are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors.
This is only suitable for systems of smaller dimension, since with a fully occupied n × n {\displaystyle n\times n} matrix, 2 n 2 {\displaystyle 2^{n^{2}}} real matrices need to be inverted, with 2 n {\displaystyle 2^{n}} vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being developed. [ 4 ]
An interval variant of Newton's method for finding the zeros in an interval vector [x] can be derived from the mean value extension. [ 5 ] For an unknown vector z ∈ [x], applying the mean value form at a point y ∈ [x] gives f(z) ∈ f(y) + [J_f]([x]) · (z − y), where [J_f]([x]) is an interval enclosure of the Jacobian over [x].
For a zero z, that is f(z) = 0, this enclosure must contain 0, so z must satisfy 0 ∈ f(y) + [J_f]([x]) · (z − y).
This is equivalent to z ∈ y − [ J f ] ( [ x ] ) − 1 ⋅ f ( y ) {\displaystyle \mathbf {z} \in \mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )} .
An outer estimate of [J_f]([x])⁻¹ · f(y) can be determined using linear methods.
In each step of the interval Newton method, an approximate starting value [ x ] ∈ [ R ] n {\displaystyle [\mathbf {x} ]\in [\mathbb {R} ]^{n}} is replaced by [ x ] ∩ ( y − [ J f ] ( [ x ] ) − 1 ⋅ f ( y ) ) {\displaystyle [\mathbf {x} ]\cap \left(\mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )\right)} and so the result can be improved. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result produces all zeros in the initial range. Conversely, it proves that no zeros of f {\displaystyle f} were in the initial range [ x ] {\displaystyle [\mathbf {x} ]} if a Newton step produces the empty set.
The method converges on all zeros in the starting region. Division by zero can lead to the separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method .
As an example, consider the function f(x) = x² − 2, the starting range [x] = [−2, 2], and the point y = 0. We then have J_f(x) = 2x and the first Newton step gives [−2, 2] ∩ (0 − (−2)/[−4, 4]) = [−2, 2] ∩ ([−∞, −0.5] ∪ [0.5, ∞]) = [−2, −0.5] ∪ [0.5, 2].
More Newton steps are used separately on x ∈ [ − 2 , − 0.5 ] {\displaystyle x\in [{-2},{-0.5}]} and [ 0.5 , 2 ] {\displaystyle [{0.5},{2}]} . These converge to arbitrarily small intervals around − 2 {\displaystyle -{\sqrt {2}}} and + 2 {\displaystyle +{\sqrt {2}}} .
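A Python sketch of the subsequent iteration on the sub-box [0.5, 2], where the derivative enclosure 2·[x] is positive so that no extended division is needed (an illustration only; a rigorous implementation would round outward):

```python
def newton_step(lo, hi):
    y = (lo + hi) / 2.0                  # midpoint of the current enclosure
    fy = y * y - 2.0                     # f(y) for f(x) = x**2 - 2
    d_lo, d_hi = 2.0 * lo, 2.0 * hi      # derivative enclosure [J] = 2*[x], positive on [0.5, 2]
    q_lo, q_hi = sorted((fy / d_lo, fy / d_hi))   # f(y) / [J] for a positive divisor interval
    n_lo, n_hi = y - q_hi, y - q_lo      # y - f(y)/[J]
    return max(lo, n_lo), min(hi, n_hi)  # intersect with the current enclosure

box = (0.5, 2.0)
for _ in range(6):
    box = newton_step(*box)
print(box)   # a tight enclosure of sqrt(2) ≈ 1.4142135623730951
```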
The Interval Newton method can also be used with thick functions such as g ( x ) = x 2 − [ 2 , 3 ] {\displaystyle g(x)=x^{2}-[2,3]} , which would in any case have interval results. The result then produces intervals containing [ − 3 , − 2 ] ∪ [ 2 , 3 ] {\displaystyle \left[-{\sqrt {3}},-{\sqrt {2}}\right]\cup \left[{\sqrt {2}},{\sqrt {3}}\right]} .
The various interval methods deliver conservative results as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals.
Covering an interval vector [x] by smaller boxes [x₁], …, [x_k] so that [x] = [x₁] ∪ ⋯ ∪ [x_k], the following is then valid for the range of values: f([x]) = f([x₁]) ∪ ⋯ ∪ f([x_k]). So, for the interval extensions described above, the following holds: [f]([x]) ⊇ [f]([x₁]) ∪ ⋯ ∪ [f]([x_k]).
Since [ f ] ( [ x ] ) {\displaystyle [f]([\mathbf {x} ])} is often a genuine superset of the right-hand side, this usually leads to an improved estimate.
Such a cover can be generated by the bisection method: a thick element [x_{i1}, x_{i2}] of the interval vector [x] = ([x_{11}, x_{12}], …, [x_{n1}, x_{n2}]) is split at its center into the two intervals [x_{i1}, ½(x_{i1} + x_{i2})] and [½(x_{i1} + x_{i2}), x_{i2}]. If the result is still not suitable, then further gradual subdivision is possible. A cover of 2^r intervals results from r divisions per vector element, substantially increasing the computation costs.
With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing . This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
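A standalone Python sketch (an illustration; rounding ignored) showing how subdividing [−1, 1] into more equal pieces tightens the natural-extension enclosure of f(x) = x² + x from the dependency example above:

```python
def natural_extension(lo, hi):
    # Natural extension of f(x) = x**2 + x, with the square as an even power.
    sq_hi = max(lo * lo, hi * hi)
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return sq_lo + lo, sq_hi + hi

def refined(lo, hi, pieces):
    step = (hi - lo) / pieces
    parts = [natural_extension(lo + i * step, lo + (i + 1) * step) for i in range(pieces)]
    return min(p[0] for p in parts), max(p[1] for p in parts)

for pieces in (1, 4, 16, 64):
    print(pieces, refined(-1.0, 1.0, pieces))
# 1 (-1.0, 2.0)
# 4 (-0.75, 2.0)
# 16 (-0.375, 2.0)
# 64 (-0.28125, 2.0)   -> approaching the exact range (-0.25, 2.0)
```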
Interval arithmetic can be used in various areas (such as set inversion , motion planning , set estimation , or stability analysis) to treat estimates with no exact numerical value. [ 6 ]
Interval arithmetic is used with error analysis to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The width of the resulting interval gives the magnitude of the accumulated rounding error directly: for a computed enclosure [a, b], the error of any point result taken from it is at most b − a.
Interval analysis adds to, rather than substitutes for, traditional methods of error reduction, such as pivoting .
Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely. [ 1 ]
If the behavior of such a system affected by tolerances satisfies, for example, f(x, p) = 0 for p ∈ [p] and unknown x, then the set of possible solutions, {x : f(x, p) = 0 for some p ∈ [p]},
can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation , interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered.
Interval arithmetic can also be used with membership functions for fuzzy quantities as they are used in fuzzy logic . Apart from the strict statements x ∈ [x] and x ∉ [x], intermediate values are also possible, to which real numbers μ ∈ [0, 1] are assigned. μ = 1 corresponds to definite membership while μ = 0 is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval.
For fuzzy arithmetic [ 7 ] only a finite number of discrete membership stages μ_i ∈ [0, 1] are considered. The form of such a distribution for an indistinct value can then be represented by a sequence of intervals.
The interval [ x ( i ) ] {\displaystyle \left[x^{(i)}\right]} corresponds exactly to the fluctuation range for the stage μ i . {\displaystyle \mu _{i}.}
The appropriate distribution for a function f(x₁, …, x_n) concerning indistinct values x₁, …, x_n and their corresponding interval sequences can be approximated by the sequence ([y⁽¹⁾], …, [y⁽ᵏ⁾]), where each [y⁽ⁱ⁾] is the interval extension of f applied to the intervals of stage μ_i and can be calculated by interval methods. The value [y⁽¹⁾] corresponds to the result of an ordinary interval calculation.
Warwick Tucker used interval arithmetic in order to solve the 14th of Smale's problems , that is, to show that the Lorenz attractor is a strange attractor . [ 8 ] Thomas Hales used interval arithmetic in order to solve the Kepler conjecture .
Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.
Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young. [ 9 ] Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer ; [ 10 ] intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958). [ 11 ]
The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966. [ 12 ] [ 13 ] He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic. [ 14 ] Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.
Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals, [ 15 ] though Moore found the first non-trivial applications.
In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch [ 16 ] [ 17 ] and Götz Alefeld [ 18 ] at the University of Karlsruhe and later also at the Bergische University of Wuppertal .
For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimization, including what is now known as Hansen's method, perhaps the most widely used interval algorithm. [ 5 ] Classical methods for this problem of determining the largest (or smallest) global value could often only find a local optimum and could not guarantee that no better value exists; Helmut Ratschek and Jon George Rokne developed branch and bound methods, which until then had only been applied to integer values, by using intervals to provide applications for continuous values.
In 1988, Rudolf Lohner developed Fortran -based software for reliable solutions for initial value problems using ordinary differential equations . [ 19 ]
The journal Reliable Computing (originally Interval Computations ) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimization, has contributed significantly to the unification of notation and terminology used in interval arithmetic. [ 20 ]
In recent years work has concentrated in particular on the estimation of preimages of parameterized functions and to robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France. [ 21 ]
There are many software packages that permit the development of numerical applications using interval arithmetic. [ 22 ] These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.
Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages , such as C++, Fortran, and Pascal . [ 23 ] The first platform was a Zuse Z23 , for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC , a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77 -based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC ; a year later the C++ class library supported C-XSC on many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License . At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard.
Another C++-class library was created in 1993 at the Hamburg University of Technology called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user-friendly. It emphasized the efficient use of hardware, portability, and independence of a particular presentation of intervals.
The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language. [ 24 ]
The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers . Programs written in Frink can use intervals without rewriting or recompilation.
GAOL [ 25 ] is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming .
The Moore library [ 26 ] is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the concepts feature of C++ .
The Julia programming language [ 27 ] has an implementation of interval arithmetics along with high-level features, such as root-finding (for both real and complex-valued functions) and interval constraint programming , via the ValidatedNumerics.jl package. [ 28 ]
In addition, computer algebra systems, such as Euler Mathematical Toolbox , FriCAS , Maple , Mathematica , Maxima [ 29 ] and MuPAD , can handle intervals. A Matlab extension Intlab [ 30 ] builds on BLAS routines, and the toolbox b4m makes a Profil/BIAS interface. [ 30 ] [ 31 ]
A library for the functional language OCaml was written in assembly language and C. [ 32 ]
MPFI is a library for arbitrary precision interval arithmetic; it is written in C and is based on MPFR . [ 33 ]
A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015. [ 34 ] Two reference implementations are freely available. [ 35 ] These have been developed by members of the standard's working group: the libieeep1788 [ 36 ] library for C++, and the interval package [ 37 ] for GNU Octave .
A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed the production of implementations. [ 38 ]
Several international conferences or workshops take place every year in the world. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there is also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), REC (International Workshop on Reliable Engineering Computing). | https://en.wikipedia.org/wiki/IEEE_P1788 |
The IEEE P1906.1 - Recommended Practice for Nanoscale and Molecular Communication Framework [ 1 ] is a standards working group sponsored by the IEEE Communications Society Standards Development Board whose goal is to develop a common framework for nanoscale and molecular communication . [ 2 ] Because this is an emerging technology , the standard is designed to encourage innovation by reaching consensus on a common definition, terminology, framework, goals, metrics, and use-cases that encourage innovation and enable the technology to advance at a faster rate. The draft passed an initial sponsor balloting with comments on January 2, 2015. The comments were addressed by the working group and the resulting draft ballot passed again on August 17, 2015. Finally, additional material regarding SBML was contributed and the final draft passed again on October 15, 2015. The draft standard was approved by IEEE RevCom in the final quarter of 2015.
Working group membership includes experts in industry and academia with strong backgrounds in mathematical modeling , engineering , physics , economics and biological sciences . [ 3 ]
Electronic components such as transistors , or electrical / electromagnetic message carriers whose operation is similar at the macroscale and nanoscale are excluded from the definition. A human-engineered, synthetic component must form part of the system because it is important to avoid standardizing nature or physical processes. The definition of communication , particularly in the area of cell-surface interactions as viewed by biologists versus non-biologists has been a topic of debate. The interface is viewed as a communication channel , whereas the ' receptor-signaling - gene expression ' events are the network.
The draft currently comprises: definition, terminology, framework, metrics, use-cases, and reference code ( ns-3 ). [ 4 ]
The standard provides a very broad foundation and encompasses all approaches to nanoscale communication. While there have been many superficial academic attempts to classify nanoscale communication approaches, the standard considers two fundamental approaches: waves and particles . This includes any hybrid of the two as well as quasiparticles .
A unique contribution of the standard is an ns-3 reference model that enables users to build upon the standard components.
Applications are numerous; however, there appears to be a strong emphasis on medical and biological use-cases in nanomedicine .
The IEEE P1906.1 working group is developing ns-3 nanoscale simulation software that implements the IEEE 1906.1 standard and serves as a reference model and base for development of a wide-variety of interoperable small-scale communication physical layer models. [ 9 ]
The Best Readings on nanoscale communication networks provides good background information related to the standard. [ 10 ] The Topics section breaks down the information using the standard approach. [ 11 ]
IEEE 1906.1 is the foundation for nanoscale communication. Additional standards are expected to build upon it.
IEEE 1906.1.1 Standard Data Model for Nanoscale Communication Systems The Standard Data Model for Nanoscale Communication Systems defines a network management and configuration data model for nanoscale communication. [ 12 ] This data model has several goals:
The data model is written in YANG and will enable remote configuration and operation of nanoscale communication over the Internet using NETCONF . | https://en.wikipedia.org/wiki/IEEE_P1906.1 |
The IEEE Transactions on Control Systems Technology is published bimonthly by the IEEE Control Systems Society . The journal publishes papers, letters, tutorials, surveys, and perspectives on control systems technology . The editor-in-chief is Prof. Andrea Serrani ( Ohio State University ). According to the Journal Citation Reports , the journal has a 2019 impact factor of 5.312. [ 1 ]
This article about an engineering journal is a stub . You can help Wikipedia by expanding it .
See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page .
This computer science article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/IEEE_Transactions_on_Control_Systems_Technology |
IEEE Transactions on Dielectrics and Electrical Insulation is a peer-reviewed scientific journal published bimonthly by the Institute of Electrical and Electronics Engineers . It was co-founded in 1965 by the IEEE Dielectrics and Electrical Insulation Society under the name IEEE Transactions on Electrical Insulation. The journal covers the advances in dielectric phenomena and measurements, and electrical insulation . Its editor-in-chief is Michael Wübbenhorst ( KU Leuven ). [ 1 ]
According to the Journal Citation Reports , the journal has a 2022 impact factor of 3.1. [ 2 ]
This article about an engineering journal is a stub . You can help Wikipedia by expanding it .
See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page . | https://en.wikipedia.org/wiki/IEEE_Transactions_on_Dielectrics_and_Electrical_Insulation |
IEEE Transactions on Information Theory is a monthly peer-reviewed scientific journal published by the IEEE Information Theory Society . It covers information theory and the mathematics of communications . It was established in 1953 as IRE Transactions on Information Theory . The editor-in-chief is Muriel Médard ( Massachusetts Institute of Technology ). As of 2007, the journal allows the posting of preprints on arXiv . [ 1 ]
According to Jack van Lint , it is the leading research journal in the whole field of coding theory . [ 2 ] A 2006 study using the PageRank network analysis algorithm found that, among hundreds of computer science -related journals, IEEE Transactions on Information Theory had the highest ranking and was thus deemed the most prestigious. ACM Computing Surveys , with the highest impact factor , was deemed the most popular. [ 3 ]
This article about an engineering journal is a stub . You can help Wikipedia by expanding it .
See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page . | https://en.wikipedia.org/wiki/IEEE_Transactions_on_Information_Theory |
IEEE Transactions on Magnetics is a monthly peer-reviewed scientific journal that covers the basic physics of magnetism , magnetic materials , applied magnetics, magnetic devices, and magnetic data storage . The editor-in-chief is Amr Adly ( Cairo University, Egypt ).
The journal is abstracted and indexed in the Science Citation Index , [ 1 ] Current Contents /Physical, Chemical & Earth Sciences, [ 1 ] Scopus , [ 2 ] CSA databases , and EBSCOhost . According to the Journal Citation Reports , the journal has a recent impact factor of 2.1. [ 3 ] | https://en.wikipedia.org/wiki/IEEE_Transactions_on_Magnetics |
IEFBR14 is a utility program that runs on mainframe computers from IBM . It runs in all mainframe environments derived from OS/360 , including z/OS . It is a placeholder that returns the exit status zero, similar to the true command on UNIX-like systems. [ 1 ]
On OS/360 and derived mainframe systems, most programs never specify files (usually called datasets ) directly, but instead reference them indirectly through the Job Control Language (JCL) statements that invoke the programs. These data definition (or " DD ") statements can include a "disposition" ( DISP=... ) parameter that indicates how the file is to be managed — whether a new file is to be created or an old one re-used; and whether the file should be deleted upon completion or retained; etc .
IEFBR14 was created because, while DD statements can create or delete files easily, they cannot do so without a program being run: the Job Management system requires that the Initiator actually execute a program, even if that program is effectively a null statement . [ 2 ] The program used in the JCL does not actually need to use the files to cause their creation or deletion; the DD DISP=... specification does all the work. Thus a very simple do-nothing program was needed to fill that role.
IEFBR14 can thus be used to create or delete a data set using JCL.
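For example, a job step along the following lines deletes an existing data set; the data set name is an illustrative placeholder, IEFBR14 itself does nothing, and the disposition processing performs the deletion:
//* delete MY.UNWANTED.DATASET whether the step ends normally or abnormally
//DELSTEP  EXEC PGM=IEFBR14
//SCRATCH  DD   DSN=MY.UNWANTED.DATASET,DISP=(OLD,DELETE,DELETE)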
A secondary reason to run IEFBR14 was to unmount devices (usually tapes or disks) that had been left mounted from a previous job, perhaps because of an error in that job's JCL or because the job ended in error. In either event, the system operators would often need to demount the devices, and a started task – DEALLOC – was often provided for this purpose.
Simply entering the command S DEALLOC (S being the abbreviated form of the START operator command) at the system console would run the started task, which consisted of just one step . However, due to the design of Job Management, DEALLOC must actually exist in the system's procedure library, SYS1.PROCLIB, lest the start command fail.
Also, all such started tasks must consist of a single job step, as the "Started Task Control" (STC) module within the Job Management component of the operating system accepts only single-step jobs and fails all multi-step jobs, without exception.
At least on z/OS, branching off to execute another program would cause the calling program to be evaluated for syntax errors at that point. [ 1 ]
The "IEF" derives from a convention on mainframe computers that programs supplied by IBM were grouped together by function or creator and that each group shared a three-letter prefix. In OS/360, the first letter was almost always "I", and the programs produced by the Job Management group (including IEFBR14) all used the prefix "IEF". Other common prefixes included "IEB" for dataset utility programs, "IEH" for system utility programs, and "IEW" for program linkage and loading. [ 3 ] Other major components were (and still are) "IEA" (Operating System Supervisor) and "IEC" ( Input/Output Supervisor ).
As explained below, "BR 14" was the essential function of the program, to simply return to the operating system. This portion of a program name was often mnemonic — for example, IEBUPDTE was the dataset utility (IEB) that applied updates (UPDTE) to source code files, and IEHINITT was the system utility (IEH) that initialized (INIT) magnetic tape labels (T).
As explained further in "Usage" below, the name "BR14" comes from the IBM assembler-language instruction " B ranch (to the address in) R egister 14 ", which by convention is used to "return from a subroutine ". Most early users of OS/360 were familiar with IBM Assembler Language and would have recognized this at once.
Example JCL would be:
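A minimal do-nothing job might look like the following; the job name, accounting field, and job/message classes are illustrative placeholders:
//IEFBR14J JOB  (ACCT),'NULL STEP',CLASS=A,MSGCLASS=X
//ONLYSTEP EXEC PGM=IEFBR14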
To create a Partitioned Data Set:
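A representative job for this purpose; the data set name, unit, and space values are illustrative placeholders, and the directory blocks in the SPACE parameter are what make the new data set partitioned:
//MAKEPDS  JOB  (ACCT),'ALLOCATE PDS',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=IEFBR14
//NEWPDS   DD   DSN=MY.TEST.PDS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(15,5,10)),
//            DCB=(DSORG=PO,RECFM=FB,LRECL=80,BLKSIZE=8000)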
IEFBR14 initially consisted of a single instruction, a "Branch to Register 14". The mnemonic used in the IBM Assembler was BR , hence the name: IEF BR 14 . BR 14 is identically equivalent to BCR 15,14 (Branch Always [ mask = 15 = always ] to the address contained in general purpose register 14); BR is a pseudo-instruction for BCR 15 , one of many such pseudo-instructions the system assembler accepts as logical equivalents of the canonical System/360 instructions.
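A representative one-instruction source, as it might have been written (the CSECT and END statements are assembler bookkeeping, not machine instructions):
IEFBR14  CSECT
         BR    14                  return immediately to the caller
         END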
The linkage convention for OS/360 and its descendants requires that a program be invoked with register 14 containing the address to return control to when complete, and register 15 containing the address at which the called program is loaded into memory; at completion, the program loads a return code in register 15 and then branches to the address contained in register 14. But IEFBR14 was not originally coded with these characteristics in mind, as it was first used as a dummy control section (one which simply returned to the caller), not as an executable module.
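An illustrative fragment (not historical code; SUBPGM is a hypothetical name) showing how a caller follows this convention:
         L     15,=V(SUBPGM)       load the subprogram's entry point address into register 15
         BALR  14,15               call: place the return address in register 14 and branch to register 15
*                                  on return, register 15 conventionally holds the subprogram's return code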
The original version of the program did not alter register 15 at all, as its original application was as a placeholder in certain load modules generated during Sysgen (system generation), not as an executable program per se. Since IEFBR14 was always invoked by the functional equivalent of the canonical BALR 14,15 instruction, the return code left in register 15 was always non-zero. A second instruction was later added to clear the return code so that the program would exit with a determinate status, namely zero. Initially, programmers were not using the Job Control Language facilities that test return codes, so an indeterminate return code was not a problem; once they were, a determinate status became mandatory. This modification to IEFBR14 did not in any way impact its original use as a placeholder.
The machine code for the modified program is:
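A representative rendering of that two-instruction sequence, with the assembled bytes shown alongside (a sketch of the conventional form, not a copy of IBM's source):
         SR    15,15               assembles to X'1BFF'   set the return code in register 15 to zero
         BR    14                  assembles to X'07FE'   branch to the return address in register 14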
The equivalent machine code, with the BR pseudo-instruction expanded into its canonical form for clarity, is:
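A representative rendering on the same assumptions as above:
         SR    15,15               X'1BFF'   subtract register 15 from itself, yielding zero
         BCR   15,14               X'07FE'   branch on condition, mask 15 (always), to the address in register 14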
This makes sense because the OS/360 Initiator "attaches" the job-step task using the ATTACH macro-instruction (SVC 42), and "unwinding" the effect of this ATTACH macro (it being a Type 2 SVC instruction) must be accomplished by a complementary instruction, namely an EXIT macro (necessarily a Type 1 SVC instruction, SVC 3).
Trombetta, Michael & Finkelstein, Sue Carolyn (1985). "OS JCL and Utilities". Addison-Wesley. p. 152. | https://en.wikipedia.org/wiki/IEFBR14