A glass break detector is a sensor that detects if a pane of glass has been shattered or broken. [ 1 ] These sensors are commonly used near glass doors or glass storefront windows. They are widely used in electronic burglar-alarm systems.
The detection process begins with a microphone that picks up noises and vibrations coming from the glass. If the vibrations exceed a certain threshold (which is sometimes user selectable), they are analyzed by detector circuitry. Simpler detectors use narrowband microphones tuned to frequencies typical of glass shattering and are designed to react only to sound magnitudes above a certain threshold, whereas more complex designs analytically compare the sound to one or more glass-break profiles using signal transforms such as the DCT and FFT . [ 2 ] These digitally sophisticated detectors react only if both the amplitude threshold and a statistically expressed similarity threshold are breached. Advances in technology have also led to the use of wireless glass-break detectors. [ 3 ]
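The two-stage logic described above can be illustrated with a short sketch. The function below is a hypothetical example, not taken from any commercial detector: it gates on peak amplitude and then compares the frame's FFT magnitude spectrum against a stored glass-break profile using a normalised-correlation similarity score. All thresholds, names, and the choice of similarity measure are placeholder assumptions.

```python
import numpy as np

def glass_break_detected(samples, profile, amp_threshold=0.2, similarity_threshold=0.8):
    """Illustrative two-stage check: amplitude gate, then spectral similarity.

    `samples` is a mono audio frame (floats in [-1, 1]); `profile` is a stored
    magnitude-spectrum template of a glass-break event with the same FFT length
    (len(samples) // 2 + 1 bins). All parameter values are placeholders.
    """
    samples = np.asarray(samples, dtype=float)

    # Stage 1: ignore quiet frames that cannot correspond to a shattering pane.
    if np.max(np.abs(samples)) < amp_threshold:
        return False

    # Stage 2: compare the frame's magnitude spectrum with the stored profile
    # using normalised correlation as a simple statistical similarity measure.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    spectrum /= np.linalg.norm(spectrum) or 1.0
    template = profile / (np.linalg.norm(profile) or 1.0)
    similarity = float(np.dot(spectrum, template))

    # Alarm only when both the amplitude and similarity thresholds are breached.
    return similarity >= similarity_threshold
```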
| https://en.wikipedia.org/wiki/Glass_break_detector |
A glass cockpit is an aircraft cockpit that features an array of electronic (digital) flight instrument displays , typically large LCD screens, rather than traditional analog dials and gauges. [ 1 ] While a traditional cockpit relies on numerous mechanical gauges (nicknamed "steam gauges") to display information, a glass cockpit uses several multi-function displays and a primary flight display driven by flight management systems that can be adjusted to show flight information as needed. This simplifies aircraft operation and navigation and allows pilots to focus only on the most pertinent information. They are also popular with airline companies as they usually eliminate the need for a flight engineer , saving costs. In recent years the technology has also become widely available in small aircraft.
As aircraft displays have modernized, the sensors that feed them have modernized as well. Traditional gyroscopic flight instruments have been replaced by electronic attitude and heading reference systems (AHRS) and air data computers (ADCs), improving reliability and reducing cost and maintenance. GPS receivers are usually integrated into glass cockpits.
Early glass cockpits, found in the McDonnell Douglas MD-80 , Boeing 737 Classic , ATR 42 , ATR 72 and in the Airbus A300-600 and A310 , used electronic flight instrument systems (EFIS) to display attitude and navigational information only, with traditional mechanical gauges retained for airspeed, altitude, vertical speed, and engine performance. The Boeing 757 and 767-200/-300 introduced an electronic engine-indicating and crew-alerting system (EICAS) for monitoring engine performance while retaining mechanical gauges for airspeed, altitude and vertical speed.
Later glass cockpits, found in the Boeing 737NG , 747-400 , 767-400 , 777 , Airbus A320 , later Airbuses, Ilyushin Il-96 and Tupolev Tu-204 have completely replaced the mechanical gauges and warning lights in previous generations of aircraft. While glass cockpit-equipped aircraft throughout the late 20th century still retained analog altimeters , attitude , and airspeed indicators as standby instruments in case the EFIS displays failed, more modern aircraft have increasingly been using digital standby instruments as well, such as the integrated standby instrument system .
Glass cockpits originated in military aircraft in the late 1960s and early 1970s; an early example is the Mark II avionics of the F-111D (first ordered in 1967, delivered from 1970 to 1973), which featured a multi-function display .
Prior to the 1970s, air transport operations were not considered sufficiently demanding to require advanced equipment like electronic flight displays. Also, computer technology was not at a level where sufficiently light and powerful electronics were available. The increasing complexity of transport aircraft, the advent of digital systems and the growing air traffic congestion around airports began to change that.
The Boeing 2707 was one of the earliest commercial aircraft designed with a glass cockpit. Most cockpit instruments were still analog, but cathode-ray tube (CRT) displays were to be used for the attitude indicator and horizontal situation indicator (HSI). However, the 2707 was cancelled in 1971 after insurmountable technical difficulties and ultimately the end of project funding by the US government.
The average transport aircraft in the mid-1970s had more than one hundred cockpit instruments and controls, and the primary flight instruments were already crowded with indicators, crossbars, and symbols, and the growing number of cockpit elements were competing for cockpit space and pilot attention. [ 3 ] As a result, NASA conducted research on displays that could process the raw aircraft system and flight data into an integrated, easily understood picture of the flight situation, culminating in a series of flights demonstrating a full glass cockpit system.
The success of the NASA-led glass cockpit work is reflected in the total acceptance of electronic flight displays. The safety and efficiency of flights have been increased with improved pilot understanding of the aircraft's situation relative to its environment (or " situational awareness ").
By the end of the 1990s, liquid-crystal display (LCD) panels were increasingly favored among aircraft manufacturers because of their efficiency, reliability and legibility. Earlier LCD panels suffered from poor legibility at some viewing angles and poor response times, making them unsuitable for aviation. Modern aircraft such as the Boeing 737 Next Generation , 777 , 717 , 747-400ER , 747-8F , 767-400ER , 747-8 , and 787 , Airbus A320 family (later versions), A330 (later versions), A340-500/600 , A340-300 (later versions), A380 and A350 are fitted with glass cockpits consisting of LCD units. [ 4 ]
The glass cockpit has become standard equipment in airliners , business jets , and military aircraft . It was fitted into NASA's Space Shuttle orbiters Atlantis , Columbia , Discovery , and Endeavour , and the Russian Soyuz TMA model spacecraft that were launched for the first time in 2002. By the end of the century glass cockpits began appearing in general aviation aircraft as well. In 2003, Cirrus Design 's SR20 and SR22 became the first light aircraft equipped with glass cockpits, which they made standard on all Cirrus aircraft. By 2005 , even basic trainers like the Piper Cherokee and Cessna 172 were shipping with glass cockpits as options (which nearly all customers chose), as well as many modern utility aircraft such as the Diamond DA42 . The Lockheed Martin F-35 Lightning II features a "panoramic cockpit display" touchscreen that replaces most of the switches and toggles found in an aircraft cockpit. The civilian Cirrus Vision SF50 has the same, which they call a "Perspective Touch" glass cockpit.
Unlike the previous era of glass cockpits—where designers merely copied the look and feel of conventional electromechanical instruments onto cathode-ray tubes—the new displays represent a true departure. They look and behave very similarly to other computers, with windows and data that can be manipulated with point-and-click devices. They also add terrain, approach charts, weather, vertical displays, and 3D navigation images.
The improved concepts enable aircraft makers to customize cockpits to a greater degree than previously. All of the manufacturers involved have chosen to do so in one way or another—such as using a trackball , thumb pad or joystick as a pilot-input device in a computer-style environment. Many of the modifications offered by the aircraft manufacturers improve situational awareness and customize the human-machine interface to increase safety.
Modern glass cockpits might include synthetic vision systems (SVS) or enhanced flight vision systems (EFVS). Synthetic vision systems display a realistic 3D depiction of the outside world (similar to a flight simulator ), based on a database of terrain and geophysical features in conjunction with the attitude and position information gathered from the aircraft navigational systems. Enhanced flight vision systems add real-time information from external sensors, such as an infrared camera.
All new airliners such as the Airbus A380 , Boeing 787 and private jets such as Bombardier Global Express and Learjet use glass cockpits.
Many modern general aviation aircraft are available with glass cockpits. Systems such as the Garmin G1000 are now available on many new GA aircraft, including the classic Cessna 172 . Many small aircraft can also be modified post-production to replace analogue instruments.
Glass cockpits are also popular as a retrofit for older private jets and turboprops such as Dassault Falcons , Raytheon Hawkers , Bombardier Challengers , Cessna Citations , Gulfstreams , King Airs , Learjets , Astras , and many others. Aviation service companies work closely with equipment manufacturers to address the needs of the owners of these aircraft.
Today, smartphones and tablets use mini-applications, or "apps", to remotely control complex devices over a Wi-Fi radio interface, demonstrating how the "glass cockpit" idea is being applied to consumer devices. Applications include toy-grade UAVs that use the display and touch screen of a tablet or smartphone to provide every aspect of the "glass cockpit": instrument display as well as fly-by-wire aircraft control.
The glass cockpit idea made news in 1980s trade magazines, like Aviation Week & Space Technology , when NASA announced that it would be replacing most of the electro-mechanical flight instruments in the space shuttles with glass cockpit components. The articles mentioned how glass cockpit components had the added benefit of being a few hundred pounds lighter than the original flight instruments and support systems used in the Space Shuttles. The Space Shuttle Atlantis was the first orbiter to be retrofitted with a glass cockpit in 2000 with the launch of STS-101 . Columbia was the second orbiter with a glass cockpit on STS-109 in 2002, followed by Discovery in 2005 with STS-114 , and Endeavour in 2007 with STS-118 .
NASA's Orion spacecraft will use a glass cockpit derived from that of the Boeing 787 Dreamliner . [ 5 ]
As aircraft operation depends on glass cockpit systems, flight crews must be trained to deal with failures. The Airbus A320 family has seen fifty incidents where several flight displays were lost. [ 6 ]
On 25 January 2008, United Airlines Flight 731 experienced a serious glass-cockpit blackout, losing half of the Electronic Centralised Aircraft Monitor ( ECAM ) displays as well as all radios, transponders, Traffic Collision Avoidance System ( TCAS ), and attitude indicators. [ 7 ] The pilots were able to land at Newark Airport without radio contact in good weather and daylight conditions.
Airbus has offered an optional fix, which the US National Transportation Safety Board (NTSB) has recommended that the US Federal Aviation Administration (FAA) make mandatory, but the FAA has yet to make it a requirement. [ 8 ] A preliminary NTSB factsheet is available. [ 9 ] Due to the possibility of a blackout, glass cockpit aircraft also have an integrated standby instrument system that includes (at a minimum) an artificial horizon , altimeter and airspeed indicator . It is electronically separate from the main instruments and can run for several hours on a backup battery.
In 2010, the NTSB published a study done on 8,000 general aviation light aircraft. The study found that, although aircraft equipped with glass cockpits had a lower overall accident rate, they also had a larger chance of being involved in a fatal accident. [ 9 ] The NTSB Chairman said in response to the study: [ 10 ]
Training is clearly one of the key components to reducing the accident rate of light planes equipped with glass cockpits, and this study clearly demonstrates the life and death importance of appropriate training on these complex systems... While the technological innovations and flight management tools that glass cockpit-equipped airplanes bring to the general aviation community should reduce the number of fatal accidents, we have not—unfortunately—seen that happen. | https://en.wikipedia.org/wiki/Glass_cockpit |
A glass code is a method of classifying glasses for optical use, such as the manufacture of lenses and prisms . There are many different types of glass with different compositions and optical properties, and a glass code is used to distinguish between them.
There are several different glass classification schemes in use, most based on the catalogue systems used by glass manufacturers such as Pilkington and Schott Glass . These tend to be based on the material composition, for example BK7 is the Schott Glass classification of a common borosilicate crown glass .
The international glass code is based on U.S. military standard MIL-G-174, and is a six-digit number specifying the glass according to its refractive index n d at the Fraunhofer d (or D 3 ) line, 587.56 nm, and its Abbe number V d also taken at that line. The resulting glass code is the value of n d − 1 rounded to three digits, followed by V d rounded to three digits, with all decimal points ignored. For example, BK7 has n d = 1.5168 and V d = 64.17, giving a six-digit glass code of 517642. [ 1 ]
Consequently, a linear approximation for the refractive index dispersion close to that wavelength is given by:
n(λ) ≈ n d − (λ − 587.56 nm) × (n d − 1) / ( V d × 170.2 nm)
where λ is the wavelength in nanometers and 170.2 nm is the separation of the F (486.1 nm) and C (656.3 nm) lines used to define the Abbe number.
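A short sketch can make the encoding and the linear approximation concrete. The helper names below are illustrative; the rounding convention follows the description above, and the 170.2 nm span is the F-to-C line separation used to define the Abbe number.

```python
def glass_code(n_d, v_d):
    """Six-digit code: (n_d - 1) and V_d, each rounded to three digits,
    decimal points dropped -- e.g. BK7 (n_d=1.5168, V_d=64.17) -> '517642'."""
    nd_part = f"{round((n_d - 1) * 1000):03d}"
    vd_part = f"{round(v_d * 10):03d}"
    return nd_part + vd_part

def n_approx(wavelength_nm, n_d, v_d):
    """Linear dispersion estimate near the d-line (587.56 nm), using the
    486.1-656.3 nm span (170.2 nm) that defines the Abbe number."""
    slope = -(n_d - 1) / (v_d * 170.2)            # dn/dlambda, per nm
    return n_d + slope * (wavelength_nm - 587.56)

print(glass_code(1.5168, 64.17))                   # 517642
print(round(n_approx(656.3, 1.5168, 64.17), 4))    # ~1.5135, a rough estimate near BK7's n_C
```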
The following table shows some example glasses and their glass codes. Note that glass properties can vary slightly between different manufacturers' versions of the same glass type. [ 2 ] | https://en.wikipedia.org/wiki/Glass_code |
The appearance of different colors in glass is largely due to the way light interacts with the materials it contains. In an extremely pure glass, without impurities such as bubbles, coloring ions, or crystalline and nano-sized phases, all visible light would pass through, and the glass would appear completely transparent . When such impurities are present, they selectively absorb certain wavelengths of light, resulting in coloured glass. [ 1 ]
Glass coloring and color marking may be obtained in several ways.
Ordinary soda-lime glass appears colorless to the naked eye when it is thin, although iron oxide impurities produce a green tint which can be viewed in thick pieces or with the aid of scientific instruments. Further metals and metal oxides can be added to glass during its manufacture to change its color which can enhance its aesthetic appeal. Examples of these additives are listed below:
The principal methods of this are enamelled glass , essentially a technique for painting patterns or images, used for both glass vessels and on stained glass, and glass paint, typically in black, and silver stain , giving yellows to oranges on stained glass. All of these are fired in a kiln or furnace to fix them, and can be extremely durable when properly applied. This is not true of "cold-painted" glass, using oil paint or other mixtures, which rarely last more than a few centuries.
Tin oxide with antimony and arsenic oxides produce an opaque white glass ( milk glass ), first used in Venice to produce an imitation porcelain , very often then painted with enamels . Similarly, some smoked glasses may be based on dark-colored inclusions, but with ionic coloring it is also possible to produce dark colors (see above).
Glass containing two or more phases with different refractive indices shows coloring based on the Tyndall effect and explained by Mie theory , if the dimensions of the phases are similar to or larger than the wavelength of visible light . The scattered light is blue and violet, while the transmitted light is yellow and red.
Dichroic glass has one or several coatings in the nanometer-range (for example metals, metal oxides, or nitrides) which give the glass dichroic optical properties. Also the blue appearance of some automobile windshields is caused by dichroism. | https://en.wikipedia.org/wiki/Glass_coloring_and_color_marking |
A glass crusher pulverizes glass to a yield size of 2 inches (5 cm) or less. [ 1 ] Recycling operations may range from simple, manually fed, self-contained machines to elaborate crushing systems complete with screens , conveyors , crushers and separators. All non-glass contaminants must generally be removed from the glass prior to recycling. The processes used in glass crushing for recycling involve the same methods used by the aggregate industry for crushing rock into sand ( rock crusher ).
The use of VSI crushers in large-scale operations allows the production of up to 125 tons per hour [ 2 ] of crushed glass cullet .
VSI crushers use a high-speed rotor with wear-resistant tips and a crushing chamber against which the glass is 'thrown'. VSI crushers utilize velocity rather than surface force as the predominant force to break glass, as this allows the breaking force to be applied evenly both across the surface of the material and through its mass. In its shattered state, glass has a jagged and uneven surface. Applying surface force ( pressure ) results in unpredictable and typically non-cubical particles . As glass is 'thrown' by a VSI rotor against a solid anvil , it fractures and breaks along fissures . Final particle size can be controlled by 1) the velocity at which the glass is thrown against the anvil and 2) the distance between the end of the rotor and the impact point on the anvil. The product resulting from VSI crushing is generally of a consistent cubical shape, which may optimize yield in consumptive applications such as the fabrication of fiberglass , ceramic ware, flux agents and abrasives . Due to the highly abrasive nature of the glass material, a VSI crushing process is generally preferred over horizontal shaft impact crushing and most other crushing methods, which have higher maintenance requirements and shorter wear-part lives.
VSI crushers generally utilize a high-speed spinning rotor at the center of the crushing chamber and an outer impact surface of either abrasion-resistant metal anvils or crushed glass (or rock in aggregate applications). Designs utilizing cast metal surfaces (' anvils ') are traditionally referred to as "shoe and anvil VSI". Designs utilizing crushed material on the outer walls of the crusher for new material to be crushed against are traditionally referred to as "rock on rock VSI". | https://en.wikipedia.org/wiki/Glass_crusher |
Glass databases are a collection of glass compositions, glass properties, glass models , associated trademark names, patents etc. These data were collected from publications in scientific papers and patents, from personal communication with scientists and engineers, and other relevant sources.
Since the beginning of scientific glass research in the 19th century, thousands of glass property-composition datasets were published. The first attempt to summarize all those data systematically was the monograph "Glastechnische Tabellen" . [ 1 ] World War II and the Cold War prevented similar efforts for many years afterwards.
In 1956, "Phase Diagrams for Ceramists" was published the first time, containing a collection of phase diagrams . [ 2 ] This database is known today as "Phase Equilibria Diagrams" . [ 3 ]
In 1983, the "Handbook of Glass Data" was published, [ 4 ] followed by the creation of the Japanese database Interglad in 1991. [ 5 ] The "Handbook of Glass Data" was later digitized and substantially expanded under the name SciGlass . [ 6 ] Currently, SciGlass contains properties of about 400,000 glass compositions, INTERGLAD about 380,000, [ 7 ] and "Phase Equilibria Diagrams" includes about 31,000 diagrams.
In 2019, the SciGlass data was made publicly available on GitHub [ 8 ] under the ODC Open Database License (ODbL) .
In 2023, the re-emergence of the SciGlass database as SciGlass Sage [ 9 ] offered "AI" assistance, a property predictor powered by random forest regression models, and a generator using predictive models in conjunction with genetic algorithms .
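As a rough illustration of how such a composition-to-property predictor can be built, the sketch below trains a random forest on oxide fractions. The CSV file, column names, and target property are assumptions made for this example only; they do not reflect the actual SciGlass Sage implementation or schema.

```python
# Sketch of a composition -> property predictor in the spirit of the random
# forest models mentioned above. Inputs and column names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("glass_compositions.csv")              # hypothetical export
oxides = ["SiO2", "Na2O", "CaO", "Al2O3", "K2O", "MgO"]   # mol% columns (assumed)
X, y = data[oxides], data["refractive_index"]             # assumed property column

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```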
In 2024, SciGlass Next was created as an open-access web database utilizing the SciGlass data available on GitHub. [ 8 ] The database is hosted in the public domain of Friedrich Schiller University Jena .
The website provides comprehensive documentation, including step-by-step instructions and glossaries of properties and symbols used.
Most features are covered, including:
The following list of glass database contents is not complete, and it may not be up to date. For full features see the references section below. All databases contain citations to the original data sources and the chemical composition of the glasses or ceramics . | https://en.wikipedia.org/wiki/Glass_databases |
Glass disease , also referred to as sick glass or glass illness , is a degradation process of glass that can result in weeping , crizzling , spalling , cracking and fragmentation. [ 1 ] [ 2 ] Glass disease is caused by an inherent instability in the chemical composition of the original glass formula. [ 3 ] Properties of a particular glass will vary with the type and proportions of silica , alkali and alkaline earth in its composition. [ 4 ] Once damage has occurred it is irreversible, but decay processes can be slowed by climate control to regulate surrounding temperature, humidity, and air flow.
Glass disease is caused by an inherent fault in the chemical composition of the original glass formula. [ 3 ] Glass contains three types of components: network formers establish basic structure, network stabilizers make glass strong and water resistant, and flux lowers the melting point at which the glass can be formed. [ 5 ] Common formulations of glass may include silica (SiO 2 ) as a former, alkali oxides such as soda (Na 2 O) or potash (K 2 O) for flux, and alkaline earth oxides such as lime (CaO) for stabilizing. [ 4 ] [ 5 ]
Structurally, glass is composed of a network of SiO 4 -tetrahedrons. In addition to the network former silicon which establishes its principal structure, glass contains network modifying agents such as the alkali ions Na + and K + and the alkali earth ions Ca 2+ and Mg 2+ . Glass does not have a defined stoichiometry , rather the network is flexible. It can incorporate other ions, depending upon factors such as the main composition and firing conditions of the glass. [ 6 ] This causes almost all glass to be chemically unstable to some extent. [ 7 ]
Electron charge differences of ions within the structure form the basis of its bonding. Both viscosity and transition temperature are related to the availability of oxygen bonds in the glass's composition. Modifying agents tend to lower the melting point of the silica. Higher contents of SiO 2 increase acidity of the glass. Higher contents of CaO, Na 2 O, and K 2 O increase basicity . [ 6 ] The chemical stability of glass decreases when only Na 2 O and K 2 O are added as flux, because bonding becomes weaker. The chemical stability of glass can be increased by adding CaO, MgO, ZnO, and Al 2 O 3 . To be stable, glass composition must balance temperature lowering agents with stabilizing agents. [ 6 ]
Exposure of a glass surface to moisture, either in solution or from humidity in the atmosphere, causes chemical reactions to occur on and below the surface of the glass. The exchange of alkali metal ions (from within the glass) and hydrogen ions (from outside) can cause chemical and structural changes to the glass. When alkali metal cations in the near-surface layer are replaced by smaller hydrogen ions, structural differences between the affected surface layer and the unaffected lower layers of glass cause increasing tensile stress, which in turn can cause cracking. [ 7 ] [ 8 ]
The likelihood of degradation due to glass disease depends on the amount and proportion of alkaline compounds mixed with silica, and on surrounding conditions. [ 3 ] Inadequate calcium oxide causes the alkalis in the glass to remain water-soluble at a low level of humidity. Exposure to higher levels of relative humidity during storage or display causes alkali to hydrate and leach out of the glass. Repeated changes in humidity can be particularly damaging. Any glass object can deteriorate if it is exposed to unsuitable environmental conditions. [ 9 ] Crystal, historic glass, or treasured family items should never be exposed to the high temperatures and water pressure of a dishwasher. [ 10 ] [ 11 ]
Energy dispersive x-ray analysis (EDXA), [ 1 ] [ 8 ] scanning electron microscopy (SEM) and secondary ion mass spectrometry (SIMS) can be used to study exchange reactions in different types of glass. By quantifying and studying chemical structure and reactions at the near-surface layer, the mechanisms of glass disease can be better understood. [ 7 ] Measurement of the pH of glass surfaces is particularly important if glass objects have a matte surface, or have been exposed to kaolin or other substances. In the case of extremely small objects such as glass beads, pH measurement may be necessary to determine whether alkaline salts are present and changes in the glass are occurring. [ 9 ] [ 12 ]
The processes involved in glass disease can reduce the transparency of the glass or even threaten the integrity of the structure. Glass disease causes a complex disintegration of the glass which can be identified through a variety of symptoms, including weeping, crizzling, spalling, cracking and fragmentation. [ 3 ]
The following description of glass beads from an object in the collection of the British Museum , effectively illustrates the range of symptoms that can occur with glass disease:
"Two factors indicated that the deterioration was the result of the phenomenon commonly referred to as ‘glass disease’; first, damage was limited to beads of one particular colour (pale yellow) and second, visible signs of all the various stages of glass disease were present on these beads. This included the presence of small white crystals on the surface of most pale yellow beads and a fine network of uniform cracks or ‘crizzling’ crossing the surface of 55 of the 69 beads. This crizzling appeared to be more prevalent around the bead holes. A total of 32 beads had areas of spalling, or advanced crizzling, where cracks had extended further into the glass structure... Many had areas that had already spalled and the fragments lost. Vertical cracks extending through the glass were present on 37 beads and four beads had become detached due to complete fragmentation." [ 1 ]
The initial stage of glass disease occurs when moisture causes alkali to be leached out of the glass. This becomes apparent when hygroscopic alkali deposits on the glass give it a cloudy or hazy appearance. [ 7 ] [ 13 ] This may occur within as little as five to 10 years of the glass's manufacturing.
The glass may feel slippery or slimy [ 10 ] and tiny droplets, or weeping , may be seen in high humidity (above 55%). [ 14 ] The hydrated alkali can form fine crystals on the surface of the glass in low relative humidity (below 40%). [ 4 ]
At this stage, it may be possible to gently wash the glass and remove the surface alkali. [ 14 ] This will help to stabilize the glass by reducing the surface pH, and by removing dust, soiling, and hygroscopic components that attract further moisture. [ 9 ]
If alkali builds up due to ion exchange, and remains on the surface of the glass, the decay process will accelerate. The presence of sodium or potassium ions in the alkali build up will increase the pH on the surface of the glass, causing it to become basic. This will dissolve silica from the glass as well as releasing more alkali ions. [ 10 ] [ 7 ]
The haziness seen on the glass may not disappear entirely when washed and dried. When examined closely at an angle with a low light, fine cracks like tiny silvery lines or shimmering rays, may be visible. [ 13 ] A microscope can confirm the presence of cracks. The cracks are caused by the loss of alkali, which leaves microscopic gaps in the structure of the glass. [ 14 ]
As higher amounts of alkali leach from the glass, cracks are likely to become deeper. Crizzling is a distinctive network of fine cracks in the glass which is visible to the naked eye. [ 3 ] [ 15 ] In some cases, the crazing can gain a more uniform appearance. [ 13 ] However, crizzling may not be uniform due to the creation of micro-climates on the glass. [ 14 ]
Distinct cracks may appear on the surface of the glass, and surface material may flake or chip, a process referred to as spalling. [ 14 ] [ 13 ]
In the most severe stage of deterioration, the structural integrity of the glass has been lost and the glass may separate into pieces. [ 14 ]
A survey of glass objects at the Victoria and Albert Museum in London, in 1992, found that more than 1 in 10 objects in the collection were affected by crizzling, ranging from 16th century Venetian to 20th century Scandinavian glass. [ 3 ] [ 16 ] Venetian glass is particularly susceptible because artisans minimized the use of lime, to make the glass as clear as possible.
The works of modern glassmakers who experiment with their glass formulas, such as Ettore Sottsass , can also be at high risk for damage. [ 10 ]
Museums such as the National Museum of the American Indian may find glass disease an issue of great importance because many of the Native American cultural materials in their collections incorporate glass beads . [ 9 ] [ 17 ] Small ornamental glass beads were often made cheaply, using recipes with a high flux to silica ratio. This makes them more susceptible to glass disease. Blues, reds, and black are often affected by glass disease. The combination of glass beads with other materials (cordage, fabric, leather, metal, bone, surface colorants, ceremonial substances, and kaolin) complicates deterioration and conservation of ethnographic objects. [ 12 ]
In the earliest stage of glass disease, it may be possible to wash the glass to remove the surface alkali. The Corning Museum of Glass recommends washing with tap water (tepid, not hot [ 18 ] ) and a mild (non-ionic [ 18 ] ) conservation detergent. This should be followed by rinsing with de-ionized or distilled water, and careful drying to remove moisture. Careful washing can remove surface deposits, restore the appearance of clearness to the glass, and help to slow further deterioration. [ 19 ] [ 18 ] Ethanol has also been suggested for cleaning, particularly for glass beads, depending on the surrounding materials that may be affected. [ 9 ]
Once more serious damage has occurred, it cannot be reversed. Climate control of humidity and temperature is a possible intervention. Because crizzling results from the reaction of components of the glass with water vapour, controlling humidity and temperature can slow its occurrence. [ 3 ] At the Corning Museum of Glass , items in the collection are kept at stable levels of relative humidity, between 40 and 55 percent. [ 10 ] Fans may be used within a case to encourage the movement of air and minimize adsorption of moisture on the glass surface. Deterioration is more likely to occur in areas with restricted air-flow which allow moisture to remain on the glass. [ 14 ] Chemical methods for retarding corrosion rates and stabilizing surfaces are being investigated. [ 4 ]
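As a simple illustration of monitoring display-case conditions against the relative-humidity band mentioned above, the sketch below flags readings outside 40 to 55 percent RH. The thresholds follow the range quoted for the Corning Museum of Glass; the sensor readings and alert wording are invented for the example.

```python
# Illustrative range check against the 40-55 % relative humidity band cited above.
RH_LOW, RH_HIGH = 40.0, 55.0   # percent, per the quoted museum range

def review_readings(readings):
    """readings: list of (timestamp, relative_humidity_percent) tuples."""
    alerts = []
    for timestamp, rh in readings:
        if rh < RH_LOW:
            alerts.append(f"{timestamp}: RH {rh:.1f}% low - risk of drying and cracking")
        elif rh > RH_HIGH:
            alerts.append(f"{timestamp}: RH {rh:.1f}% high - risk of alkali leaching and weeping")
    return alerts

for line in review_readings([("08:00", 47.2), ("12:00", 58.9), ("16:00", 38.5)]):
    print(line)
```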
When a composite object contains a variety of materials, one of which is sick glass, the considerations involved in conserving and displaying the object become more complicated. For example, the British Museum chose to conserve and display a Siberian shamanic apron made of leather, glass and other materials. They weighed the likelihood that it would decay more quickly if shown against the desirability of making a unique object visible and the inevitability of its eventual degradation. "Its conservation and display ensures that access to this beautiful and unique object is maximized before the pale yellow beads, which are intrinsic to the object, are inevitably lost beyond repair." [ 1 ] | https://en.wikipedia.org/wiki/Glass_disease |
A glass ionomer cement ( GIC ) is a dental restorative material used in dentistry as a filling material and luting cement , [ 1 ] including for orthodontic bracket attachment. [ 2 ] Glass-ionomer cements are based on the reaction of silicate glass powder (calcium aluminofluorosilicate glass [ 3 ] ) and polyacrylic acid , an ionomer . Occasionally water is used instead of an acid, [ 2 ] altering the properties of the material and its uses. [ 4 ] This reaction produces a powdered cement of glass particles surrounded by a matrix of fluoride elements and is known chemically as glass polyalkenoate. [ 5 ] Other similar reactions can take place; for example, using an aqueous solution of acrylic/ itaconic copolymer with tartaric acid results in a glass-ionomer in liquid form. An aqueous solution of maleic acid polymer or maleic/acrylic copolymer with tartaric acid can also be used to form a glass-ionomer in liquid form. Tartaric acid plays a significant part in controlling the setting characteristics of the material. [ 5 ] Glass-ionomer based hybrids incorporate another dental material , for example resin -modified glass ionomer cements (RMGIC) and compomers (or modified composites). [ 5 ]
Non-destructive neutron scattering has evidenced GIC setting reactions to be non-monotonic, with eventual fracture toughness dictated by changing atomic cohesion, fluctuating interfacial configurations and interfacial terahertz (THz) dynamics. [ 6 ]
It is on the World Health Organization's List of Essential Medicines . [ 7 ]
Glass ionomer cement is primarily used in the prevention of dental caries . This dental material has good adhesive bond properties to tooth structure, [ 8 ] allowing it to form a tight seal between the internal structures of the tooth and the surrounding environment. Dental caries are caused by bacterial production of acid during their metabolic actions. The acid produced from this metabolism results in the breakdown of tooth enamel and subsequently the inner structures of the tooth if a dental professional does not intervene, or if the carious lesion does not arrest and the enamel does not re-mineralise by itself. Glass ionomer cements act as sealants when pits and fissures in the tooth occur and release fluoride to prevent further enamel demineralisation and promote remineralisation . Fluoride can also hinder bacterial growth by inhibiting the bacteria's metabolism of ingested dietary sugars. It does this by inhibiting various metabolic enzymes within the bacteria. This leads to a reduction in the acid produced during the bacteria's digestion of food, preventing a further drop in pH and therefore preventing caries. [ citation needed ]
There is evidence that when using sealants, only 6% of people develop tooth decay over a 2-year period, in comparison to 40% of people when not using a sealant. [ 9 ] However, it is recommended that the use of fluoride varnish alongside glass ionomer sealants should be applied in practice to further reduce the risk of secondary dental caries. [ 10 ]
The addition of resin to glass ionomers improves them significantly, allowing them to be more easily mixed and placed. [ 3 ] Resin-modified glass ionomers allow equal or higher fluoride release and there is evidence of higher retention, higher strength and lower solubility. [ 3 ] Resin-based glass ionomers have two setting reactions: an acid-base setting and a free-radical polymerisation . The free-radical polymerisation is the predominant mode of setting, as it occurs more rapidly than the acid-base mode. Only the material properly activated by light will be optimally cured . The presence of resin protects the cement from water contamination. Due to the shortened working time, it is recommended that placement and shaping of the material occurs as soon as possible after mixing. [ 5 ]
Dental sealants were first introduced as part of the preventative programme in the late 1960s, in response to increasing cases of caries in pits and fissures on occlusal surfaces. [ 9 ] This led to glass ionomer cements being introduced in 1972 by Wilson and Kent as a derivative of the silicate cements and the polycarboxylate cements. [ 5 ] The glass ionomer cements incorporated the fluoride-releasing properties of the silicate cements with the adhesive qualities of polycarboxylate cements. [ 4 ] This incorporation allowed the material to be stronger, less soluble and more translucent (and therefore more aesthetic) than its predecessors. [ 5 ]
Glass ionomer cements were initially intended to be used for the aesthetic restoration of anterior teeth and were recommended for restoring Class III and Class V cavity preparations. [ 8 ] There have now been further developments in the material's composition to improve properties. For example, the addition of metal or resin particles into the sealant is favoured due to the longer working time and the material being less sensitive to moisture during setting. [ 8 ]
When glass ionomer cements were first used, they were mainly used for the restoration of abrasion/erosion lesions and as a luting agent for crown and bridge reconstructions. However, this has now been extended to occlusal restorations in deciduous dentition, restoration of proximal lesions and cavity bases and liners. [ 4 ] This is made possible by the ever-increasing new formulations of glass ionomer cements.
One of the early commercially successful GICs, employing G338 glass and developed by Wilson and Kent, served as a non-load-bearing restorative material. However, this glass resulted in a cement too brittle for use in load-bearing applications such as molar teeth. The properties of G338 have been shown to be related to its phase composition, specifically the interplay between its three amorphous phases Ca/Na-Al-Si-O, Ca-Al-F and Ca-P-O-F, as characterised by mechanical testing, differential scanning calorimetry (DSC) and X-ray diffraction (XRD), [ 11 ] as well as quantum chemical modelling and ab initio molecular dynamics simulations. [ 12 ]
When the two dental sealants are compared, there has long been disagreement as to which material is more effective in caries reduction. Therefore, there are claims against replacing resin-based sealants, the current gold standard, with glass ionomer. [ 13 ] [ 14 ] [ 15 ]
Glass ionomer sealants are thought to prevent caries through a steady fluoride release over a prolonged period, and the sealed fissures are more resistant to demineralization even after the visible loss of sealant material. [ 9 ] However, a systematic review found no difference in caries development when GICs were used as a fissure-sealing material compared to conventional resin-based sealants; in addition, GICs have less retention to the tooth structure than resin-based sealants. [ 16 ]
These sealants have hydrophilic properties, allowing them to be an alternative to the hydrophobic resin in the generally wet oral cavity. Resin-based sealants are easily destroyed by saliva contamination.
They chemically bond with both enamel and dentin and do not necessarily require preparation/mechanical retention and can therefore be applied without harming existing tooth structure. This makes them ideal in many situations when tooth preservation is foremost and with minimally invasive techniques, particularly Class V fillings where there is a larger area of exposed dentin with only a thin ring of enamel. This often results in longer retention and service life than resin Class V fillings.
They chemically bond to enamel and dentin, leaving a smaller gap for bacteria to enter. Particularly when paired with silver diamine fluoride , this can arrest and harden active caries and prevent further damage.
They can be placed and cured outside of clinical settings and do not require a curing light.
Chemically curable glass ionomer cements are considered safe from allergic reactions but a few have been reported with resin-based materials. Nevertheless, allergic reactions are very rarely associated with both sealants. [ 9 ]
The main disadvantage of glass ionomer sealants or cements has been inadequate retention or simply lack of strength, toughness, and limited wear resistance. [ 17 ] [ 18 ] For instance, due to its poor retention rate, periodic recalls are necessary, even after 6 months, to eventually replace the lost sealant. [ 9 ] [ 19 ] Different methods have been used to address the physical shortcomings of the glass ionomer cements, such as thermo-light curing (polymerization), [ 20 ] [ 21 ] or addition of zirconia, hydroxyapatite, N-vinyl pyrrolidone, N-vinyl caprolactam, and fluoroapatite to reinforce the glass ionomer cements. [ 22 ]
Glass ionomers are widely used due to their versatile properties and ease of use. Prior to procedures, starter materials for glass ionomers are supplied as a powder and liquid or as a powder mixed with water. These materials can be mixed and encapsulated. [ 23 ]
Preparation of the material should involve following the manufacturer's instructions. A paper pad or cool dry glass slab may be used for mixing the raw materials, though it is important to note that the use of the glass slab will retard the reaction and hence increase the working time. [ 23 ] The raw materials in liquid and powder form should not be dispensed onto the chosen surface until the mixture is required in the clinical procedure the glass ionomer is being used for, as prolonged exposure to the atmosphere could interfere with the ratio of chemicals in the liquid. At the stage of mixing, a spatula should be used to rapidly incorporate the powder into the liquid for a duration of 45–60 seconds, depending on the manufacturer's instructions and the individual product. [ 24 ]
Once mixed together to form a paste, an acid-base reaction occurs which allows the glass ionomer complex to set over a certain period of time and this reaction involves four overlapping stages:
It is important to note that glass ionomers have a long setting time and need protection from the oral environment in order to minimize interference with dissolution and prevent contamination. [ 25 ]
The type of application for glass ionomers depends on the cement consistency, as the level of viscosity, from very high to low, can determine whether the cement is used as a luting agent, orthodontic bracket adhesive, pit and fissure sealant, liner and base, core build-up, or intermediate restoration. [ 23 ]
The different clinical uses of glass ionomer compounds as restorative materials include:
All GICs contain a basic glass and an acidic polymer liquid, which set by an acid-base reaction. The polymer is an ionomer , containing a small proportion – some 5 to 10% – of substituted ionic groups. These allow it to be acid decomposable and clinically set readily. [ citation needed ]
The glass filler is generally a calcium alumino fluorosilicate powder, which upon reaction with a polyalkenoic acid gives a glass polyalkenoate-glass residue set in an ionised, polycarboxylate matrix. [ citation needed ]
The acid base setting reaction begins with the mixing of the components. The first phase of the reaction involves dissolution. The acid begins to attack the surface of the glass particles, as well as the adjacent tooth substrate, thus precipitating their outer layers but also neutralising itself. As the pH of the aqueous solution rises, the polyacrylic acid begins to ionise, and becoming negatively charged it sets up a diffusion gradient and helps draw cations out of the glass and dentine. The alkalinity also induces the polymers to dissociate, increasing the viscosity of the aqueous solution. [ citation needed ]
The second phase is gelation: as the pH continues to rise and the concentration of ions in solution increases, a critical point is reached and insoluble polyacrylates begin to precipitate. These polyanions have carboxylate groups to which cations bind, especially Ca 2+ in this early phase, as it is the most readily available ion, crosslinking into calcium polyacrylate chains that begin to form a gel matrix, resulting in the initial hard set within five minutes. Crosslinking, H bonds and physical entanglement of the chains are responsible for gelation. During this phase, the GIC is still vulnerable and must be protected from moisture. If contamination occurs, the chains will degrade and the GIC will lose its strength and optical properties. Conversely, dehydration early on will crack the cement and make the surface porous. [ citation needed ]
Over the next twenty four hours maturation occurs. The less stable calcium polyacrylate chains are progressively replaced by aluminium polyacrylate, allowing the calcium to join the fluoride and phosphate and diffuse into the tooth substrate, forming polysalts, which progressively hydrate to yield a physically stronger matrix. [ 31 ]
The incorporation of fluoride delays the reaction, increasing the working time. Other factors are the temperature of the cement, and the powder to liquid ratio – more powder or heat speeding up the reaction. [ citation needed ]
GICs have good adhesive relations with tooth substrates, uniquely chemically bonding to dentine and, to a lesser extent, to enamel. During initial dissolution, both the glass particles and the hydroxyapatite structure are affected, and thus as the acid is buffered the matrix reforms, chemically welded together at the interface into a calcium phosphate polyalkenoate bond. In addition, the polymer chains are incorporated into both, weaving cross links, and in dentine the collagen fibres also contribute, both linking physically and H-bonding to the GIC salt precipitates. There is also microretention from porosities occurring in the hydroxyapatite. [ 32 ]
Works employing non-destructive neutron scattering and terahertz (THz) spectroscopy have evidenced that GIC's developing fracture toughness during setting is related to interfacial THz dynamics, changing atomic cohesion and fluctuating interfacial configurations. Setting of GICs is non-monotonic, characterised by abrupt features, including a glass–polymer coupling point, an early setting point, where decreasing toughness unexpectedly recovers, followed by stress-induced weakening of interfaces. Subsequently, toughness declines asymptotically to long-term fracture test values. [ 6 ]
The pattern of fluoride release from glass ionomer cement is characterised by an initial rapid release of appreciable amounts of fluoride, followed by a taper in the release rate over time. [ 33 ] An initial fluoride "burst" effect is desirable to reduce the viability of remaining bacteria in the inner carious dentin, hence inducing enamel or dentin remineralization. [ 33 ] The sustained fluoride release during the following days is attributed to the ability of fluoride to diffuse through cement pores and fractures. Thus, a continuous small amount of fluoride surrounding the teeth reduces demineralization of the tooth tissues. [ 33 ] A study by Chau et al. shows a negative correlation between the acidogenicity of the biofilm and the fluoride release by GIC, [ 34 ] suggesting that sufficient fluoride release may decrease the virulence of cariogenic biofilms . [ 35 ] In addition, Ngo et al. (2006) studied the interaction between demineralised dentine and Fuji IX GP, which includes a strontium -containing glass as opposed to the more conventional calcium -based glass in other GICs. A substantial amount of both strontium and fluoride ions was found to cross the interface into the partially demineralised dentine affected by caries. [ 35 ] This promoted mineral deposition in the areas where calcium ion levels were low. Hence, this study supports the idea of glass ionomers contributing directly to remineralisation of carious dentine, provided that a good seal is achieved with intimate contact between the GIC and partly demineralised dentine. Given the desirable effects of fluoride release, this raises the question of whether glass ionomer cement is a suitable material for permanent restorations.
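The burst-then-taper release pattern described above is often summarised with simple empirical models. The sketch below assumes a generic form, an early burst term that saturates quickly plus a diffusion-controlled square-root-of-time term; the coefficients are illustrative placeholders and are not fitted to any particular cement.

```python
import math

def cumulative_fluoride(t_days, burst=0.8, burst_halflife=1.5, diffusion_coeff=0.15):
    """Illustrative cumulative fluoride release (arbitrary units) for a GIC:
    an early 'burst' that saturates quickly plus a slower diffusion-controlled
    term proportional to sqrt(time). All coefficients are placeholders."""
    burst_term = burst * t_days / (t_days + burst_halflife)   # rapid early release
    diffusion_term = diffusion_coeff * math.sqrt(t_days)      # long-term taper in rate
    return burst_term + diffusion_term

for day in (1, 7, 30, 90):
    print(day, round(cumulative_fluoride(day), 3))
```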
Numerous studies and reviews have been published with respect to GIC used in primary teeth restorations. Findings of a systematic review and meta-analysis suggested that conventional glass ionomers were not recommended for Class II restorations in primary molars . [ 36 ] This material showed poor anatomical form and marginal integrity, and composite restorations were shown to be more successful than GIC when good moisture control could be achieved. [ 36 ] Resin modified glass ionomer cements (RMGIC) were developed to overcome the limitations of the conventional glass ionomer as a restorative material. A systematic review supports the use of RMGIC in small to moderate sized class II cavities, as they are able to withstand the occlusal forces on primary molars for at least one year. [ 36 ] With their desirable fluoride releasing effect, RMGIC may be considered for Class I and Class II restorations of primary molars in high caries risk population.
With regard to permanent teeth, there is insufficient evidence to support the use of RMGIC as long-term restorations. Despite the low number of randomised controlled trials , a meta-analysis by Bezerra et al. [2009] reported significantly fewer carious lesions on the margins of glass ionomer restorations in permanent teeth after six years as compared to amalgam restorations. [ 37 ] In addition, the adhesive ability and longevity of GIC from a clinical standpoint can be best studied with the restoration of non-carious cervical lesions . A systematic review shows GIC has higher retention rates than resin composite in follow-up periods of up to 5 years. [ 38 ] Unfortunately, reviews for Class II restorations in permanent teeth with glass ionomer cement are scarce, with high bias or short study periods. However, a study [ 39 ] [2003] of the compressive strength and the fluoride release was done on 15 commercial fluoride-releasing restorative materials. A negative linear correlation was found between compressive strength and fluoride release ( r 2 = 0.7741), i.e., restorative materials with high fluoride release have lower mechanical properties. [ 39 ] | https://en.wikipedia.org/wiki/Glass_ionomer_cement |
A glass knife is a knife with a blade made of glass , with a fracture line forming an extremely sharp cutting edge.
Glass knives were used in antiquity due to their natural sharpness and the ease with which they could be manufactured. In modern electron microscopy glass knives are used to make the ultrathin sections needed for imaging.
In the Stone Age , bladed tools were made by chipping suitable stones which broke with a conchoidal fracture , a process known as knapping or lithic reduction . The same technique was used to make tools, including knives, out of obsidian , natural volcanic glass.
From the 1920s until the 1940s, Dur-X glass fruit and cake knives were sold for use in kitchens under a 1938 US Patent. [ 1 ] Before the wide availability of inexpensive stainless steel cutlery, they were used for cutting citrus fruit, tomatoes and other acidic foods, the flavor of which would be tainted by steel knives and which would stain ordinary steel knives. They were molded in tempered glass, with a cutting edge ground sharp. [ 2 ] [ 3 ]
While glass knives as such are no longer in general use, knives with ceramic blades made from zirconium dioxide have been available since the mid-1980s, with a very sharp and long-lasting edge produced by grinding rather than fracturing. [ 4 ]
Modern glass knives were once the blade of choice for the ultra-thin sectioning required in transmission electron microscopy because they can be manufactured by hand and are sharper than softer metal blades because the crystalline structure of metals makes it impossible to obtain a continuous sharp edge. [ 5 ] The advent of diamond knives , which keep their edge much longer and are more suitable for cutting hard materials, quickly relegated glass knives to a second-rate status. However, some labs still use glass knives because they are significantly less expensive than diamond knives. A common practice is to use a glass knife to cut the block which contains the sample to near the location of the specimen to be examined. Then the glass knife is replaced by a diamond blade for the actual ultrathin sectioning. This extends the life of the diamond blade which is used only when its superior performance is critical. [ citation needed ]
Obsidian, a naturally occurring volcanic glass, is used to make extremely sharp surgical scalpels , significantly sharper than is possible with steel. The blades are brittle and very easily broken.
Glass knives can be produced by hand using pliers with two raised bumps on one jaw and a single bump between the two bumps on the opposing jaw, but special machines called "knife-makers" are used in most electron microscopy laboratories to ensure repeatable results. The glass used typically starts out as 1-inch-wide (25 mm) strips of 1 ⁄ 4 -inch-thick (6.4 mm) plate glass, which is cut into 1 inch (2.5 cm) squares. The glass square is then scored across the diagonal with a steel or tungsten carbide glass-cutting wheel to determine where the square will break, and pressure is then applied gradually across the opposite diagonal until the square breaks. This technique provides two usable knife edges, one on each of the two resulting triangles. The better the break is aligned with the diagonal, the better the cutting edge. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Glass_knife |
Glass microspheres are microscopic spheres of glass manufactured for a wide variety of uses in research , medicine , consumer goods and various industries. Glass microspheres are usually between 1 and 1000 micrometers in diameter, although the sizes can range from 100 nanometers to 5 millimeters in diameter. Hollow glass microspheres, sometimes termed microballoons or glass bubbles , have diameters ranging from 10 to 300 micrometers .
Hollow spheres are used as a lightweight filler in composite materials such as syntactic foam and lightweight concrete . [ 1 ] Microballoons give syntactic foam its light weight, low thermal conductivity , and a resistance to compressive stress that far exceeds that of other foams. [ 2 ] These properties are exploited in the hulls of submersibles and deep-sea oil drilling equipment, where other types of foam would implode . Hollow spheres of other materials create syntactic foams with different properties: ceramic balloons, for example, can make a light syntactic aluminium foam. [ 3 ]
Hollow spheres also have uses ranging from storage and slow release of pharmaceuticals and radioactive tracers to research in controlled storage and release of hydrogen . [ 4 ] Microspheres are also used in composites to fill polymer resins for specific characteristics such as weight, sandability and sealing surfaces. When making surfboards for example, shapers seal the EPS foam blanks with epoxy and microballoons to create an impermeable and easily sanded surface upon which fiberglass laminates are applied.
Glass microspheres can be made by heating tiny droplets of dissolved water glass in a process known as ultrasonic spray pyrolysis (USP), and properties can be improved somewhat by using a chemical treatment to remove some of the sodium . [ 5 ] Sodium depletion has also allowed hollow glass microspheres to be used in chemically sensitive resin systems, such as long pot life epoxies or non-blown polyurethane composites.
Additional functionalities, such as silane coatings, are commonly added to the surface of hollow glass microspheres to increase the matrix/microspheres interfacial strength (the common failure point when stressed in a tensile manner).
Microspheres made of high-quality optical glass can be produced for research in the field of optical resonators or cavities . [ 6 ]
Glass microspheres are also produced as a waste product in coal-fired power stations . In this case the product is generally termed a " cenosphere " and carries an aluminosilicate chemistry (as opposed to the sodium silica chemistry of engineered spheres). Small amounts of silica in the coal are melted and, as they rise up the chimney stack, expand and form small hollow spheres. These spheres are collected together with the ash, which is pumped in a water mixture to the resident ash dam. Some of the particles do not become hollow and sink in the ash dams, while the hollow ones float on the surface of the dams. They become a nuisance, especially when they dry, as they become airborne and blow over into surrounding areas.
Microspheres have been used to produce focal regions, known as photonic nanojets , [ 7 ] whose sizes are large enough to support internal resonances but small enough that geometrical optics cannot be applied to study their properties. Previous research has demonstrated, experimentally and with simulations, the use of microspheres to increase the signal intensity obtained in different experiments. The photonic jet was confirmed at the microwave scale by observing the backscattering enhancement that occurred when metallic particles were introduced in the focus area. A measurable enhancement of the backscattered light in the visible range was obtained when a gold nanoparticle was placed inside the photonic nanojet region produced by a dielectric microsphere with a 4.4 μm diameter. The use of nanojets produced by transparent microspheres to excite optically active materials, under upconversion processes with different numbers of excitation photons, has been analyzed as well. [ 8 ]
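The statement that such spheres are too large for simple wave treatments yet too small for geometrical optics can be illustrated with the dimensionless Mie size parameter x = πd·n/λ. The sketch below is illustrative only; apart from the 4.4 μm diameter quoted above, the wavelength and surrounding medium are assumed values, not figures from the cited studies.

```python
import math

def mie_size_parameter(diameter_m: float, wavelength_m: float, n_medium: float = 1.0) -> float:
    """Dimensionless Mie size parameter x = pi * d * n_medium / wavelength."""
    return math.pi * diameter_m * n_medium / wavelength_m

# The 4.4 um sphere mentioned above, illuminated with green light (500 nm, assumed) in air
x = mie_size_parameter(4.4e-6, 500e-9)
print(f"size parameter x = {x:.1f}")
# x of order 10-100 lies between the Rayleigh limit (x << 1) and the
# geometrical-optics limit (x >> 100), the regime in which photonic nanojets appear.
```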
Monodisperse glass microspheres have high sphericity and a very tight particle size distribution, often with a coefficient of variation (CV) below 10% and a specification of more than 95% of particles within the stated size range. Monodisperse glass particles are often used as spacers in adhesives and coatings, such as bond line spacers in epoxies. Just a small amount of spacer grade monodisperse microspheres can create a controlled gap, as well as define and maintain specified bond line thickness. Spacer grade particles can also be used as calibration standards and tracer particles for qualifying medical devices. High quality spherical glass microspheres are often used in gas plasma displays, automotive mirrors, electronic displays, flip chip technology, filters, microscopy, and electronic equipment.
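As a minimal illustration of what a CV < 10% specification means, the coefficient of variation is simply the standard deviation of the measured diameters divided by their mean. The batch values below are hypothetical and only show the arithmetic.

```python
import statistics

def coefficient_of_variation(diameters_um):
    """CV (%) = sample standard deviation / mean * 100 for a set of measured diameters."""
    mean = statistics.mean(diameters_um)
    stdev = statistics.stdev(diameters_um)
    return 100.0 * stdev / mean

# Hypothetical spacer-grade batch, diameters in micrometres
batch = [49.2, 50.1, 50.8, 49.7, 50.3, 49.9, 50.5, 50.0]
cv = coefficient_of_variation(batch)
print(f"CV = {cv:.1f}%  ->  {'meets' if cv < 10 else 'fails'} a CV < 10% specification")
```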
Other applications include syntactic foams [ 9 ] and particulate composites and reflective paints.
Dispensing of microspheres can be a difficult task. When microspheres are used as a filler in standard mixing and dispensing machines, a breakage rate of up to 80% can occur, depending upon factors such as pump choice, material viscosity, material agitation, and temperature. Customized dispensers for microsphere-filled materials may reduce the microsphere breakage rate to a minimal amount. A progressive cavity pump is the pump of choice for dispensing materials with microspheres and can reduce microsphere breakage by as much as 80%. | https://en.wikipedia.org/wiki/Glass_microsphere |
Glass printing involves applying images , patterns , or text to glass surfaces. Various techniques can be used, each offering distinct aesthetic and functional results. This specialized field encompasses methods such as screen printing , digital printing , and pad printing , among others.
This glass art related article is a stub . You can help Wikipedia by expanding it .
This printmaking -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Glass_printing |
Glass recycling is the processing of waste glass into usable products. [ 1 ] Glass that is crushed or imploded and ready to be remelted is called cullet . [ 2 ] There are two types of cullet: internal and external. Internal cullet is composed of defective products detected and rejected by a quality control process during the industrial process of glass manufacturing , transition phases of product changes (such as thickness and colour changes) and production offcuts. External cullet is waste glass that has been collected or reprocessed with the purpose of recycling. External cullet (which can be pre- or post-consumer) is classified as waste. The word "cullet", when used in the context of end-of-waste, will always refer to external cullet.
To be recycled, glass waste needs to be purified and cleaned of contamination. Then, depending on the end use and local processing capabilities, it might also have to be separated into different sizes and colours. Many recyclers collect different colours of glass separately since glass tends to retain its colour after recycling . The most common colours used for consumer containers are clear (flint) glass, green glass, and brown (amber) glass. Glass is ideal for recycling since none of the material is degraded by normal use.
Many collection points have separate bins for clear (flint), green and brown (amber). Glass re-processors intending to make new glass containers require separation by colour. If the recycled glass is not going to be made into more glass, or if the glass re-processor uses newer optical sorting equipment, separation by colour at the collection point may not be required. Heat-resistant glass, such as Pyrex or borosilicate glass , must not be part of the glass recycling stream, because even a small piece of such material will alter the viscosity of the fluid in the furnace at remelt. [ 3 ]
To use external cullet in production, as much contamination as possible should be removed. Typical contaminations are:
Manpower or machinery can be used at different stages of purification. Since they melt at higher temperatures than glass, the separation of inorganics and the removal of heat-resistant glass and lead glass are critical. In modern recycling facilities, dryer systems and optical sorting machines are used. The input material should be sized and cleaned for the highest efficiency in automatic sorting. More than one free-fall or conveyor-belt sorter can be used, depending on the requirements of the process. Different colors can be sorted by optical sorting machines.
Glass bottles and jars are infinitely recyclable. [ 4 ] The use of recycled glass in manufacturing conserves raw materials and reduces energy consumption. [ 5 ] Because the chemical energy required to melt the raw materials has already been expended, the use of cullet can significantly reduce energy consumption compared with manufacturing new glass from silica (SiO 2 ), soda ash (Na 2 CO 3 ), and calcium carbonate (CaCO 3 ). Soda lime glass from virgin raw materials theoretically requires approximately 2.671 GJ/tonne compared to 1.886 GJ/tonne to melt 100% glass cullet. [ 6 ] As a general rule, every 10% increase in cullet usage results in an energy savings of 2–3% in the melting process, with a theoretical maximum potential of 30% energy saving. [ 6 ] Every metric ton (1,000 kg) of waste glass recycled into new items saves 315 kilograms (694 lb) of carbon dioxide from being released into the atmosphere during the manufacture of new glass. [ 7 ] But recycling glass does not avoid the remelting process, which accounts for 75% of the energy consumption during production. [ 8 ]
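A rough worked example of the energy figures cited above: interpolating linearly between the theoretical values of about 2.671 GJ/tonne for a 100% virgin batch and 1.886 GJ/tonne for 100% cullet reproduces the 2–3% saving per 10% cullet and the roughly 30% maximum saving. The linear interpolation is a simplification for illustration, not part of the cited studies.

```python
VIRGIN_GJ_PER_T = 2.671   # theoretical melting energy, 100% virgin raw materials [6]
CULLET_GJ_PER_T = 1.886   # theoretical melting energy, 100% cullet [6]

def melting_energy(cullet_fraction: float) -> float:
    """Theoretical melting energy (GJ/tonne) assuming a simple linear mix of the two endpoints."""
    return VIRGIN_GJ_PER_T + cullet_fraction * (CULLET_GJ_PER_T - VIRGIN_GJ_PER_T)

for frac in (0.0, 0.1, 0.5, 1.0):
    energy = melting_energy(frac)
    saving = 100 * (1 - energy / VIRGIN_GJ_PER_T)
    print(f"{frac:>4.0%} cullet: {energy:.3f} GJ/tonne ({saving:.1f}% saving)")
# Each 10% cullet step saves about 2.9% here, consistent with the 2-3% rule of thumb,
# and 100% cullet gives roughly a 29% saving, close to the quoted 30% maximum.
```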
The use of recycled glass as aggregate in concrete has become popular, with large-scale research on that application being carried out at Columbia University in New York. [ citation needed ] Recycled glass greatly enhances the aesthetic appeal of the concrete. Recent research has shown that concrete made with recycled glass aggregates has better long-term strength and better thermal insulation, due to the thermal properties of the glass aggregates. [ 9 ] Glass which is not recycled, but crushed, reduces the volume of waste sent to landfill . [ 10 ] [ 3 ] Waste glass may also be kept out of landfill by using it for roadbed aggregate. [ 5 ]
Glass aggregate , a mix of colors crushed to a small size, is substituted for pea gravel or crushed rock in many construction and utility projects, reducing costs to a degree that varies depending on the size of the project. Glass aggregate is not sharp to handle. In many cases, the state Department of Transportation has specifications for use, size and percentage of quantity for use. [ citation needed ] Common applications are as pipe bedding—placed around sewer, storm water or drinking water pipes, to transfer weight from the surface and protect the pipe. Another common use is as fill to bring the level of a concrete floor even with a foundation. Foam glass gravel provides a lighter aggregate with other useful properties.
Other uses for recycled glass include:
Mixed waste streams may be collected from materials recovery facilities or mechanical biological treatment systems. Some facilities can sort mixed waste streams into different colours using electro-optical sorting units.
The alternative markets for recycled glass waste include the construction sector (using glass waste for road pavement construction, as an aggregate in asphalt , pipe bedding material, drainage or filler aggregate), the production of cement and concrete (using glass waste as aggregate ), [ 12 ] [ 13 ] [ 14 ] as partial replacement to cement, [ 15 ] [ 16 ] [ 17 ] partial replacement for cement and aggregate in the same mixture [ 17 ] or raw material for cement production, [ 17 ] as well as decorative aggregate, [ 18 ] abrasives, [ 19 ] or filtration media. [ 20 ]
Three different samples of recycled glass with different gradation curves, produced from residential and industrial waste glass streams in Victoria, have been studied to investigate their usage as a construction material in geotechnical applications. The Fine Recycled Glass (FRG) and Medium Recycled Glass (MRG) were classified as well-graded (SW-SM), while Coarse Recycled Glass (CRG) was poorly graded (GP) according to the Unified Soil Classification System (USCS). The specific gravity of recycled glass was approximately 10% lower than that of natural aggregate. MRG exhibited higher maximum dry unit weight and lower optimum water content compared to FRG. LA abrasion tests showed FRG and MRG to have abrasion resistance similar to construction and demolition material, while CRG had higher abrasion values. Post-compaction analysis indicated stability for FRG and MRG, but CRG displayed poor compaction behavior due to particle shape and moisture absorption issues. CBR and direct shear tests revealed MRG's superior shear resistance and slightly higher internal friction angle compared to FRG. Consolidated drained triaxial shear tests confirmed these findings, suggesting FRG and MRG behave similarly to natural sand and gravel mixtures in geotechnical applications. Hydraulic conductivity tests demonstrated medium permeability and good drainage characteristics for FRG and MRG. Compliance with EPA Victoria requirements for fill material was also confirmed. Overall, the study supports using recycled glass in various geotechnical engineering applications. [ 21 ]
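The "well graded" versus "poorly graded" USCS distinction mentioned above is decided from the gradation curve through the coefficient of uniformity Cu = D60/D10 and the coefficient of curvature Cc = D30²/(D10·D60). The sketch below shows that calculation for a hypothetical sand-sized recycled glass; the particle sizes are not taken from the cited study.

```python
def gradation_coefficients(d10_mm: float, d30_mm: float, d60_mm: float):
    """Coefficient of uniformity Cu and coefficient of curvature Cc from sieve-analysis sizes."""
    cu = d60_mm / d10_mm
    cc = d30_mm ** 2 / (d10_mm * d60_mm)
    return cu, cc

# Hypothetical sand-sized recycled glass (D10, D30, D60 in mm)
cu, cc = gradation_coefficients(0.20, 0.60, 1.60)
# For a sand, the USCS calls the material well graded when Cu >= 6 and 1 <= Cc <= 3
well_graded = cu >= 6 and 1 <= cc <= 3
print(f"Cu = {cu:.1f}, Cc = {cc:.2f} -> {'well graded' if well_graded else 'poorly graded'}")
```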
Polymer concrete , a material commonly used in industrial flooring, uses polymers , typically resins , to replace lime-type cements as a binder . Researchers have found that ground recycled glass can be used as a substitute for sand when making polymer concrete. [ 22 ] According to research, using recycled glass instead of sand produces a high strength, water-resistant material suitable for industrial flooring and infrastructure drainage, particularly in areas subject to heavy traffic such as service stations, forklift operating areas and airports. [ 22 ] [ peacock prose ]
Despite all the improvements in waste and recovery processes, challenges include:
In 2004, Germany recycled 2.116 million tons of glass. Reusable glass or plastic (PET) bottles are available for many drinks, especially beer and carbonated water as well as soft drinks ( Mehrwegflaschen ). The deposit per bottle ( Pfand ) is €0.08-€0.15, compared to €0.25 for recyclable but not reusable plastic bottles. There is no deposit for glass bottles which do not get refilled.
Non-deposit bottles are collected in three colours: white, green and brown.
The first bottle bank for non-deposit bottles ( glasbak ) was installed in Zeist in 1972. Glass is collected in three colours: white, green and brown. [ 25 ] There is a deposit for refillable beer bottles when returned to supermarkets . [ 26 ]
Glass collection points, known as bottle banks, are very common near shopping centres , at civic amenity sites and in local neighborhoods in the United Kingdom . The first bottle bank was introduced by Stanley Race CBE , then president of the Glass Manufacturers' Federation, and Ron England in Barnsley on 6 June 1977. [ 27 ] Development work was done by the DoE at Warren Spring Laboratory , Stevenage (now AERA at Harwell), and Nazeing Glass Works, Broxbourne, to prove whether a usable glass product could be made from over 90% recycled glass. It was found necessary to use magnets to remove unwanted metal closures from the mixture.
Bottle banks commonly stand beside collection points for other recyclable waste like paper , metals and plastics . Local, municipal waste collectors usually have one central point for all types of waste in which large glass containers are located.
In 2007 there were over 50,000 bottle banks in the United Kingdom, and 752,000 tons of glass was being recycled annually. [ 28 ]
Approximately 45% of glass waste is recycled each year. Non-deposit bottles are typically collected in three colors: clear, green and brown. [ 29 ]
Rates of recycling and methods of waste collection vary substantially across the United States because laws are written on the state or local level and large municipalities often have their own unique systems. Many cities do curbside recycling , meaning they collect household recyclable waste on a weekly or bi-weekly basis that residents set out in special containers in front of their homes and transported to a materials recovery facility . This is typically single-stream recycling , which creates an impure product and partly explains why, as of 2019, the US has a recycling rate of around 33% versus 90% in some European countries. [ 30 ] European countries have requirements for minimum recycled glass content, and more widespread deposit-return systems that provide more uniform material streams. [ 31 ] The lower population density and long distances in much of the United States, and the cost of shipping heavy glass also mean that recycling is not inherently economical in places where there are no nearby buyers. [ 31 ]
Apartment dwellers usually use shared containers that may be collected by the city or by private recycling companies which can have their own recycling rules. In some cases, glass is specifically separated into its own container because broken glass is a hazard to the people who later manually sort the co-mingled recyclables. Sorted recyclables are later sold to companies to be used in the manufacture of new products.
In 1971, the state of Oregon passed a law requiring buyers of carbonated beverages (such as beer and soda) to pay five cents per container (increased to ten cents in April 2017) as a deposit which would be refunded to anyone who returned the container for recycling. This law has since been copied in nine other states including New York and California. The abbreviations of states with deposit laws are printed on all qualifying bottles and cans. In states with these container deposit laws, most supermarkets automate the deposit refund process by providing machines which will count containers as they are inserted and then print credit vouchers that can be redeemed at the store for the number of containers returned. Small glass bottles (mostly beer) are broken, one-by-one, inside these deposit refund machines as the bottles are inserted. A large, wheeled hopper (very roughly 1.5 m by 1.5 m by 0.5 m) inside the machine collects the broken glass until it can be emptied by an employee. Nationwide bottle refunds recover 80% of glass containers that require a deposit. [ 5 ]
Major companies in the space include Strategic Materials, which purchases post-consumer glass for 47 facilities across the country. [ 32 ] Strategic Materials has worked to correct misconceptions about glass recycling. [ 33 ] Glass manufacturers such as Owens-Illinois ultimately include recycled glass in their product. The Glass Recycling Coalition is a group of companies and stakeholders working to improve glass recycling. [ 34 ]
In 2019, after decades of poor planning and minimal investment, many Australian cities were winding back their glass recycling programmes in favour of plastic usage. [ 35 ]
For many years, there was only one state in Australia with a return deposit scheme on glass containers. Other states had unsuccessfully tried to lobby for glass deposit schemes. [ 36 ] More recently this situation has changed dramatically, with the original scheme in South Australia now joined by legislated container deposit schemes in New South Wales, [ 37 ] Queensland, [ 38 ] Australian Capital Territory, [ 39 ] and the Northern Territory, with schemes planned in Western Australia (2020), Tasmania (2022) and Victoria (2023).
South Africa has an efficient returnable bottle system which includes beer, spirit and liquor bottles. Bottles and jars manufactured in South Africa contain at least 40% recycled glass. [ 40 ]
Life Cycle Analysis (LCA) is an important tool for the ecological evaluation of products or processes. LCA is an internationally accepted standard (ISO 14040 and ISO 14044) and scientific tool used to quantify the environmental performance attributable to the different life stages of products, including upstream stages such as raw material production and energy supply. Results are benchmarked against LCA indicators with the aim of identifying operational efficiencies and optimising product design while providing a higher level of environmental transparency. [ 41 ] The life cycle of glass runs from the extraction of raw materials, through distribution and use by final consumers, to disposal or landfilling. To benefit both the economy and the environment, researchers are working to eliminate the linearity of this life cycle in favour of a circular, closed-loop life cycle in which the extraction of raw materials and landfilling after final consumption are eliminated. [ 41 ] Glass can take millions of years to decompose in the environment, and even longer in landfill. Glass is, however, 100% recyclable, making it a sustainable resource for producing new packaging without relying on raw materials. At present only about 70% of glass is collected for recycling in the EU, and about 30% in the US. [ 8 ] Its recyclability can therefore be improved by raising collection rates worldwide, which depends on consumers disposing of glass properly.
The Cradle-to-Cradle analysis is an approach that evaluates a product's overall sustainability across its entire life cycle. It expands the definition of design quality to include positive effects on economic, ecological and social health. Cradle-to-cradle analysis of glass showed that the most impactful phase of its life cycle is raw material usage; sustainability efforts therefore focus on eliminating this stage of production by recycling used glass into secondary raw materials. [ 42 ]
International Organization for Standardization (ISO) is a non-governmental institution (established under the aegis of the UN) bridging public and private sectors. ISO is an international standard setter for "business, government and society," through its pursuit of voluntary standards. These standards range from those dealing with sizes, clarity, and weights and measures to the systems businesses ought to put in place to enhance customer satisfaction. Its work thus has an intimate impact on daily life by shaping the way in which commerce is conducted, the operating procedures of business, and the way in which consumers engage with markets. [ 43 ] Some of this standard setting was the result of government and business agreement on product development; other standards were the consequence of commercial battles fought over the most appropriate format. The organization has developed more than 17,000 international standards in its 60-year history and produces an additional 1,100 standards each year. [ 43 ] ISO standards are usually taken into consideration in the life-cycle assessment of products.
ISO 81.040 contains the international standards for glass. It is divided into four chapters.
Other related ISO standards:
Media related to Glass recycling at Wikimedia Commons | https://en.wikipedia.org/wiki/Glass_recycling |
The glass sea creatures (alternately called the Blaschka sea creatures , glass marine invertebrates , Blaschka invertebrate models , and Blaschka glass invertebrates ) are works of glass artists Leopold and Rudolf Blaschka . The artistic predecessors of the Glass Flowers , the sea creatures were the output of the Blaschkas' successful mail-order business of supplying museums and private collectors around the world with sets of glass models of marine invertebrates .
Between 1863 and 1880, the Blaschkas – working in Dresden – executed at least 10,000 of these highly detailed glass models, representing some 700 different species . [ 1 ]
A number of large collections of the models are held by museums and other academic institutions. Harvard 's Museum of Natural History exhibits many of the Blaschkas' glass creations, and its Museum of Comparative Zoology holds 430 items in the Blaschka Glass Invertebrate Collection and displays about 60 at any given time. [ 2 ] Cornell University has about 570 items in its collection and has restored some 170 of these, [ 3 ] with many others in its collection stored at the Corning Museum of Glass in Corning, New York . [ 4 ] The largest collection in Europe, of 530 pieces, is at Ireland's Natural History Museum . Other holdings include the Boston Museum of Science ; the Field Museum of Natural History in Chicago , Natural History Museum in London , Redpath Museum of McGill University in Montreal , Natural History Museum in Geneva , and both Trinity College Dublin and University College Dublin in Ireland; [ 5 ] Hancock Museum in Newcastle upon Tyne , England ; The Grant Museum of Zoology [ 6 ] in London, and Aquarium-Museum in Liège , Belgium , [ 7 ] the Canterbury Museum, Christchurch in New Zealand [ 8 ] and Melbourne Museum , in Melbourne , Australia .
In 1853, shortly after the death of his father and wife Caroline, the latter to a cholera epidemic, Leopold Blaschka – grief stricken and in need of a vacation – traveled to the United States. En route the ship was becalmed and lay still upon the sea for two weeks. [ 9 ] During this period of forced idleness, Leopold studied and sketched the local marine invertebrate population, intrigued by the transparency of their bodies similar to the glass his family had long worked. [ 10 ]
Leopold felt a sense of quiet, inspirational wonder at these luminescent ocean dwellers, a sense which he recorded and which was later translated by Henri Reiling: "It is a beautiful night in May. Hopeful, we look out over the darkness of the sea, which is as smooth as a mirror; there emerges all around in various places a flashlike bundle of light beams, as if it is surrounded by thousands of sparks, that form true bundles of fire and of other bright lighting spots, and the seemingly mirrored stars. There emerges close before us a small spot in a sharp greenish light, which becomes ever larger and larger and finally becomes a bright shining sunlike figure." [ 9 ]
This sense of wonder would fuel his later work but, in the meantime and upon his return to Dresden , Leopold focused on his family business, which was the production of glass eyes, costume ornaments, lab equipment, and other such fancy goods and specialty items that only a master Lampworker could accomplish, [ 11 ] plus the task of furthering the training of his son and apprentice (and eventual successor), Rudolf Blaschka. However, like anyone, he did have free time, and his hobby was to make glass models of plants – as opposed to invertebrates. This would, many years later, become a base for the fabled Ware Collection of Blaschka Glass Models of Plants (otherwise known as the Glass Flowers), but, for the moment, such artistry was naught but an amusing and profitless pastime done between his various commissions. [ 11 ] Yet, unsurprisingly given their stunning quality, this amusing hobby – itself born out of seeking consolation in nature upon his wife's death – attracted attention: aristocratic attention, as it turned out, specifically the eyes of Prince Camille de Rohan who, being something of a naturalist himself, commissioned the Blaschkas to craft 100 glass orchids for his private collection. [ 12 ] Naturally the Prince was more than a little impressed by the mastery of Leopold's work, and "between 1860 and 1862, the prince exhibited about 100 models of orchids and other exotic plants, which he displayed on two artificial tree trunks in his palace in Prague," [ 9 ] a fateful act which brought the skill of the Blaschkas to the attention of another man whom the Prince had actually once introduced to Leopold: a certain Ludwig Reichenbach . [ 13 ]
Director of the natural history museum in Dresden, Prof. Reichenbach was faced with an annoying yet seemingly unsolvable problem with regard to showing marine life. Land-based flora and fauna were not an issue, for it was a relatively simple matter to exhibit mounted and stuffed creatures such as gorillas and elephants, their lifelike poses attracting and exciting the museum's visitors. Invertebrates, however, by their very nature, posed a problem. [ 10 ] [ 12 ] [ 14 ] In the 19th century the only practiced method of showcasing them was to take a live specimen and place it in a sealed jar of alcohol. [ 15 ] This of course killed it but, more importantly, time and their lack of hard parts eventually rendered them into little more than colorless floating blobs of jelly. Neither pretty nor a terribly effective teaching tool, such preserved specimens left Prof. Reichenbach wanting something more, specifically 3D colored models of marine invertebrates that were both lifelike and able to stand the test of time. [ 11 ] By coincidence, in 1863, he "saw an exhibition of highly detailed, realistic glass flowers created by a Bohemian lampworker named Leopold Blaschka." [ 15 ]
Enchanted by the botanical models, and positive that Leopold held the key to ending his own showcasing issue, in 1863 [ 10 ] Reichenbach convinced and commissioned Leopold to produce twelve model sea anemones . [ 12 ] [ 13 ] [ 16 ] These marine models, hailed as "an artistic marvel in the field of science and a scientific marvel in the field of art," [ 17 ] were a great improvement on previous methods of presenting such creatures (drawings, pressings, photographs and papier-mâché or wax models) [ 10 ] [ 18 ] and exactly what Prof. Reichenbach needed. Moreover, they at last provided an outlet for the wonder Leopold had felt all those years ago when observing the phosphorescent ocean life.
The key fact, though, was that these glass marine models were, as would soon be acknowledged, "perfectly true to nature," [ 19 ] and as such represented an extraordinary opportunity both for the scientific community and the Blaschkas themselves (to create sea anemone images faithful to nature, images from P.H. Gosse 's Actinologia Britannica , 1860, were utilised). [ 20 ] Knowing this and thrilled with his newly acquired set of glass sea creatures, Reichenbach advised Leopold to drop his current and generations-long family business of glass fancy goods and the like in favor of selling glass marine invertebrates to museums, aquaria, universities, and private collectors. [ 10 ] [ 21 ] This advice would prove wise and fateful both economically and scientifically, for Leopold did as the Dresden natural history museum director suggested.
Unlike the eventual Glass Flowers, a private commission for a single university's museum, the Blaschka glass sea creatures were a global enterprise; and not just for museums and other such educational institutes, for "as popular interest in the history and sciences of the natural world burgeoned during the latter half of the 19th century, the sea became particularly alluring. The spread of home aquariums and the advent of deep-sea diving revealed a new frontier, filled with wondrous and unusual creatures." [ 22 ] In short, for the first time since Darwin , there was great universal interest in the natural world, and it became a sign of culture, of worldliness and sophistication, to exhibit examples of life in one's drawing rooms and parlors. [ 23 ] Hence private individuals were after these extraordinary models as well, and the Blaschkas, knowing this and knowing that Reichenbach was correct in that many museums would want them, made a mail-order business out of it. This business was hugely successful and they ended up making and selling 10,000 glass invertebrates dispersed in a diaspora of shipments all across the globe. [ 2 ] [ 24 ] [ 25 ] Indeed, "the world had never seen anything quite like the beautiful, scientifically accurate Blaschka models" [ 26 ] and yet they were available via so common a means as mail order through a trade catalog; for example, Ward's Natural Science would sell a small glass octopus for approximately $2.50. [ 11 ] Not glorious, perhaps, but highly effective, and museums and universities began purchasing them en masse to put on display much as Prof. Reichenbach had – for natural history museum directors the world over had the same marine invertebrate showcasing problem. [ 9 ] In short, the Blaschkas' invertebrate-model mail-order enterprise succeeded for two reasons: there was a huge and global demand, and they were the best, indeed the only, glass artists capable of crafting scientifically flawless models. Initially the designs for these were based on drawings in books, but Leopold was soon able to use his earlier drawings to produce highly detailed models of other species, [ 10 ] and his reputation quickly spread. [ 12 ]
As Leopold wrote in an English-language trade catalog preserved at the Rakow Research Library at The Corning Museum of Glass: "[Models of invertebrate animals] have been purchased by... museums and scholastic establishments in all the quarters of the globe... in New Zealand... in Tokio [sic], Japan... for the Indian Museum in Calcutta... in the United States of America by Professor Ward's Natural Science Establishment in Rochester, New York; for the Museum of Comparative Zoology in Cambridge, Massachusetts; for the Boston Society of Natural History; the University of Cornell; the Wellesley Female College... In Great Britain, Scotland and Ireland, copies have been conveyed to London, Edinburgh and Dublin... In Austria, orders have not only been made for the Imperial Royal Court collection, but also for the universities in Innsbruck, Graz, Czernovitz, and so forth. In Germany, purchases have been made for the universities of Berlin, Bonn, Koenigsberg, Jena, Leipzig, Rostock and many other museums." [ 9 ]
Leopold gradually extended his range of work by studying marine animals from the North Sea , Baltic Sea and Mediterranean , [ 10 ] and later constructed an aquarium at his house, in order to keep live specimens from which to model. [ 12 ]
Yet the fate of the marine invertebrate mail-order business was ultimately to be tied to those bought by Harvard's Museum of Comparative Zoology . At some time after the museum's founding in 1859, a collection of 430 glass sea creatures was purchased by either Louis Agassiz , the first director, or his son and successor Alexander Agassiz , [ 23 ] and was likely at least partially unpacked by Alexander Agassiz's personal secretary Elizabeth Hodges Clark , one of the first and few women with any authority in the museum. [ 27 ] This set was not the largest ever sold and the models were no different from any of the others made by the Blaschkas, but their effect was to be greater than all the rest combined.
Paradoxically and in a historically circular twist, the reason that the glass sea creatures sold to Harvard were to prove so crucial was that the University would soon open its new Botanical Museum, in 1888. Given in effect a series of empty rooms and invited to make a museum for teaching botany, the first director, George Lincoln Goodale , faced a familiar problem. [ 11 ] At that time, Harvard was the global center of botanical study and, as such, Prof. Goodale wanted the best for his students, but the only method in use was showcasing pressed and carefully labeled botanical specimens, a methodology that posed a twofold problem: being pressed, the specimens were two-dimensional and tended to lose their color. [ 28 ] [ 29 ] Hence they were hardly ideal teaching tools. [ 11 ] In fact, Goodale's problem was essentially the same as Reichenbach's had been, but applied to botany rather than marine biology, for, in both cases, the practiced method of exhibition robbed the specimens of color and three-dimensional form.
Moreover, and also like Prof. Reichenbach, Prof. Goodale first learned of Leopold and Rudolf Blaschka's skill through an exhibition – in this case the glass marine invertebrates belonging to the Museum of Comparative Zoology. And, like Reichenbach, upon seeing the Blaschkas' work Goodale was instantly sure that they held the answer to his showcasing problem. Thus, in yet another direct historical parallel, in 1886 Goodale sought out the Blaschkas with a request to make a series of glass botanical models for Harvard. Naturally Leopold was initially unwilling as, again, his current business of selling glass marine invertebrates was booming; but, eventually, the famed glass artists agreed to send test models to the U.S. and, although these were damaged in customs, [ 30 ] the fragments convinced Goodale that Blaschka glass art was a more than worthy educational investment. Thus, with the generous sponsorship of Elizabeth C. Ware and her daughter Mary , the initial contract was signed; it dictated that the Blaschkas need only work half-time on the models, thus allowing them to continue their production of the Glass sea creatures. However, in 1890, they and Goodale – acting on behalf of the Wares – signed an updated version that allowed Leopold and Rudolf to work on them (the Glass Flowers) full-time; [ 31 ] [ 32 ] [ 33 ] though some sources describe the agreement as a shift from a 3-year contract to a 10-year one. [ 34 ] Regardless, the era of the Glass sea creatures was over, their fame as well as the attention of their makers shifting to the Glass Flowers – a project that, fifty years later, ended with the death of Rudolf Blaschka (Leopold having died thirty-nine years earlier).
Today, over a century after their making, the glass sea creatures live in the shadow of their younger botanical cousins, so much so that many of those well aware of the Glass Flowers have never even heard of them. The fact is that, "gradually, these glass animals began to disappear, their habitats shifting into dusty closets and museum storage. People began to forget that these incredible glass creations had existed in the first place." [ 26 ] Recently, however, that has begun to change, the invertebrate models being remembered and rediscovered.
With a collection of 700 models bought in 1888, [ 19 ] the Corning Museum of Glass boasts the largest known collection of Blaschka sea creatures. Displayed (at least in part) in an exhibition named Fragile Legacy , "researchers at Cornell are using the collection as a time capsule for seeking out and documenting the creatures still living in our oceans today." Corning's exhibit also allows visitors to try crafting glass sea slugs [ 35 ] as well as view subsequent works inspired by the Blaschkas. [ 36 ] The exhibit was open through January 8, 2017. The Corning Museum of Glass produced a film entitled Fragile Legacy [ 37 ] exploring the related topics of the Glass sea creatures and the living ones they represent.
Even those specimens purchased by Harvard's Museum of Comparative Zoology (MCZ) suffered a degree of neglect; they were not forgotten, but they were scattered, much as the quote above describes, across several departments, and it was believed that the University only possessed 60–70 models (rather than the actual 430). [ 23 ] Recently Harvard has restored and, to the best of its abilities, repaired the Glass sea creatures with the hired and instrumental help of Preservation Specialist and Glass Worker Elizabeth R. Brill of Corning, New York, a marine biologist and daughter of a glass chemist. [ 38 ] (Brill later co-authored a book about the Glass sea creatures.) Today they form the Harvard Museum of Natural History Sea Creatures in Glass display which, when combined with the Glass Flowers, forms the largest Blaschka collection on display in the world. [ 23 ]
For a several month period beginning in 2015 and ending in the early summer of 2016, the HMNH set up a "temporary display highlighting twenty-seven of the most popular plant models as well as some items from the Blaschka archives" [ 39 ] while the main Glass Flowers exhibit was under renovation. This exhibit was unique because it was the first recorded time that the Glass Flowers have been jointly exhibited with the Glass sea creatures in a major and equal display. [ 39 ] The renovation exhibit was dismantled when, on May 21, 2016, the main Glass Flowers exhibit reopened. The Glass sea creatures remained as a permanent exhibit in the same location until 2020, when they were relocated into a nearby room, and exhibit, all their own.
In 1885 Andrew Dixon White, first president of Cornell University , authorized purchase of 570 glass marine invertebrates, [ 40 ] "some of which are on exhibit at Corson Mudd Hall and the Herbert F. Johnson Museum, making Cornell one of the few universities in the world where students and the public can view these wondrous creations." [ 41 ] However and like so many of their counterpart collections, they were neglected after a time and, in this case, remained forgotten under dust and grit until the latter half of the 20th century. [ 40 ] Currently Cornell has restored approximately 170 of the models thus far and "restoration work will continue as funding allows." [ 42 ]
The Natural History Museum branch of the National Museum of Ireland in Dublin was among the Blaschkas' "earliest customers and initially commissioned 85 glass models, paying the then significant sum of £15. It went on to purchase 530 models from the Blaschkas" – making it the largest collection of Blaschka Invertebrate Models in Europe [ 43 ] Since then, the Dead Zoo, as Ireland's Natural History Museum is sometimes called, [ 44 ] [ 45 ] "has undertaken research on the conservation of these delicate objects." [ 46 ] Noteworthy in that, like Corning, they have forever taken excellent care of the Sea Creatures, the National Museum of Ireland is another center of learning regarding the Blaschkas; a fact proven in that, in 2006, they hosted (alongside University College Dublin ) the Dublin Blaschka Congress , "conceived as a gathering to bring together the diverse scholarly disciplines that are uniquely, if eccentrically, joined in the study of scientific glass models." [ 47 ] Crucially and naturally, the Congress dealt with the Glass Flowers no less than their older maritime cousins. [ 48 ]
In 2007 the University of Wisconsin–Madison Zoological Museum accidentally uncovered its hitherto forgotten 50-model collection in a "series of keyholes under the exhibit cases along a first-floor corridor". [ 22 ] Curator Paula Holahan made the discovery, stating "It's not uncommon to find things packed away in any museum that is over 100 years old." The specimens, currently too brittle to be publicly displayed, remain in storage until conservation measures are funded and completed. These funds have yet to materialize, although the museum hopes to find someone willing to sponsor the restoration before the effects of age become irreversible. [ 22 ]
A number of glass models, including shells and sea slugs were displayed in the 1893 World's Columbian Exposition and were among the 2,947 series purchased by the museum from Ward's Natural Science Establishment. [ 49 ] Many are on display in the What is an Animal? permanent exhibit. [ 50 ]
The Natural History Museum, London holds 182 of the models. [ 51 ]
There is a large display of marine invertebrates and also two models of single cell animals living in fresh water.
The Museum of Science (MoS) has a small display of marine invertebrates towards the end of their Natural Mysteries exhibit.
The D'Arcy Thompson Zoology Museum at the University of Dundee in Scotland showcases the Blaschka models of marine invertebrates which its founder, Scottish biologist and mathematician D'Arcy Thompson acquired in 1888 to use as teaching aids. [ 52 ] In his 1917 book On Growth and Form , Thompson compares the forms of various marine invertebrates to the shapes made by glass-blowers, suggesting a link to these models .
The University of Vienna has a collection of 145 glass marine invertebrates, "the second largest collection of Blaschka models in the German-speaking part of Europe after the Kremsmünster Abbey . The collection was used in teaching until the 1930s and was rediscovered only in the 1980s." [ 53 ] In 2016 the collection was loaned to and put on display at the Naturhistorisches Museum Wien .
The Natural History Museum of the University of Pisa hosts one of the few collections of glass marine invertebrate models in Italy, "[c]onsisting of 51 marine invertebrates reproduction created for didactical purposes and probably belonging to the first phase of Blaschka's production (1822 – 1895)." [ 54 ]
Many of the Glass sea creatures are yet to be located; Leopold's record books tell where many of the shipments went, [ 55 ] yet the condition and current whereabouts of the majority of these collections remains unknown. [ 25 ] The original six glass sea anemones purchased by Reichenbach in 1863 as well as the rest of that first collection were destroyed in the bombing of Dresden in World War II . [ 16 ] | https://en.wikipedia.org/wiki/Glass_sea_creatures |
The glass–liquid transition , or glass transition , is the gradual and reversible transition in amorphous materials (or in amorphous regions within semicrystalline materials) from a hard and relatively brittle "glassy" state into a viscous or rubbery state as the temperature is increased. [ 2 ] An amorphous solid that exhibits a glass transition is called a glass . The reverse transition, achieved by supercooling a viscous liquid into the glass state, is called vitrification .
The glass-transition temperature T g of a material characterizes the range of temperatures over which this glass transition occurs (as an experimental definition, typically marked as 100 s of relaxation time). It is always lower than the melting temperature , T m , of the crystalline state of the material, if one exists, because the glass is a higher energy state (or enthalpy at constant pressure) than the corresponding crystal.
Hard plastics like polystyrene and poly(methyl methacrylate) are used well below their glass transition temperatures, i.e., when they are in their glassy state. Their T g values are both at around 100 °C (212 °F). Rubber elastomers like polyisoprene and polyisobutylene are used above their T g , that is, in the rubbery state, where they are soft and flexible; crosslinking prevents free flow of their molecules, thus endowing rubber with a set shape at room temperature (as opposed to a viscous liquid). [ 3 ]
Despite the change in the physical properties of a material through its glass transition, the transition is not considered a phase transition ; rather it is a phenomenon extending over a range of temperature and defined by one of several conventions. [ 4 ] [ 5 ] Such conventions include a constant cooling rate (20 kelvins per minute (36 °F/min)) [ 2 ] and a viscosity threshold of 10¹² Pa·s , among others. Upon cooling or heating through this glass-transition range, the material also exhibits a smooth step in the thermal-expansion coefficient and in the specific heat , with the location of these effects again being dependent on the history of the material. [ 6 ] The question of whether some phase transition underlies the glass transition is a matter of ongoing research. [ 4 ] [ 5 ] [ 7 ] [ when? ]
Glass transition (in polymer science): process in which a polymer melt changes on cooling to a polymer glass or a polymer glass changes on heating to a polymer melt. [ 8 ]
The glass transition of a liquid to a solid-like state may occur with either cooling or compression. [ 10 ] The transition comprises a smooth increase in the viscosity of a material by as much as 17 orders of magnitude within a temperature range of 500 K without any pronounced change in material structure. [ 11 ] This transition is in contrast to the freezing or crystallization transition, which is a first-order phase transition in the Ehrenfest classification and involves discontinuities in thermodynamic and dynamic properties such as volume, energy, and viscosity. In many materials that normally undergo a freezing transition, rapid cooling will avoid this phase transition and instead result in a glass transition at some lower temperature. Other materials, such as many polymers , lack a well defined crystalline state and easily form glasses, even upon very slow cooling or compression. The tendency for a material to form a glass while quenched is called glass forming ability. This ability depends on the composition of the material and can be predicted by the rigidity theory . [ 12 ]
Below the transition temperature range, the glassy structure does not relax in accordance with the cooling rate used. The expansion coefficient for the glassy state is roughly equivalent to that of the crystalline solid. If slower cooling rates are used, the increased time for structural relaxation (or intermolecular rearrangement) to occur may result in a higher density glass product. Similarly, by annealing (and thus allowing for slow structural relaxation) the glass structure in time approaches an equilibrium density corresponding to the supercooled liquid at this same temperature. T g is located at the intersection between the cooling curve (volume versus temperature) for the glassy state and the supercooled liquid. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ]
The configuration of the glass in this temperature range changes slowly with time towards the equilibrium structure. The principle of the minimization of the Gibbs free energy provides the thermodynamic driving force necessary for the eventual change. At somewhat higher temperatures than T g , the structure corresponding to equilibrium at any temperature is achieved quite rapidly. In contrast, at considerably lower temperatures, the configuration of the glass remains sensibly stable over increasingly extended periods of time.
Thus, the liquid-glass transition is not a transition between states of thermodynamic equilibrium . It is widely believed that the true equilibrium state is always crystalline. Glass is believed to exist in a kinetically locked state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon. Time and temperature are interchangeable quantities (to some extent) when dealing with glasses, a fact often expressed in the time–temperature superposition principle. On cooling a liquid, internal degrees of freedom successively fall out of equilibrium . However, there is a longstanding debate whether there is an underlying second-order phase transition in the hypothetical limit of infinitely long relaxation times. [ clarification needed ] [ 6 ] [ 18 ] [ 19 ] [ 20 ]
In a more recent model of glass transition, the glass transition temperature corresponds to the temperature at which the largest openings between the vibrating elements in the liquid matrix become smaller than the smallest cross-sections of the elements or parts of them when the temperature is decreasing. As a result of the fluctuating input of thermal energy into the liquid matrix, the harmonics of the oscillations are constantly disturbed and temporary cavities ("free volume") are created between the elements, the number and size of which depend on the temperature. The glass transition temperature T g0 defined in this way is a fixed material constant of the disordered (non-crystalline) state that is dependent only on the pressure. As a result of the increasing inertia of the molecular matrix when approaching T g0 , the setting of the thermal equilibrium is successively delayed, so that the usual measuring methods for determining the glass transition temperature in principle deliver T g values that are too high. In principle, the slower the temperature change rate is set during the measurement, the closer the measured T g value approaches T g0 . [ 21 ] Techniques such as dynamic mechanical analysis can be used to measure the glass transition temperature. [ 22 ]
The definition of the glass and the glass transition are not settled, and many definitions have been proposed over the past century. [ 23 ]
Franz Simon : [ 24 ] Glass is a rigid material obtained from freezing-in a supercooled liquid in a narrow temperature range.
Zachariasen : [ 25 ] Glass is a topologically disordered network, with short range order equivalent to that in the corresponding crystal. [ 26 ]
Glass is a "frozen liquid" (i.e., a liquid in which ergodicity has been broken), which spontaneously relaxes towards the supercooled liquid state over a long enough time.
Glasses are thermodynamically non-equilibrium kinetically stabilized amorphous solids, in which the molecular disorder and the thermodynamic properties corresponding to the state of the respective under-cooled melt at a temperature T* are frozen-in. Hereby T* differs from the actual temperature T . [ 27 ]
Glass is a nonequilibrium, non-crystalline condensed state of matter that exhibits a glass transition. The structure of glasses is similar to that of their parent supercooled liquids (SCL), and they spontaneously relax toward the SCL state. Their ultimate fate is to solidify, i.e., crystallize. [ 23 ]
Refer to the figure on the bottom right plotting the heat capacity as a function of temperature. In this context, T g is the temperature corresponding to point A on the curve. [ 28 ]
Different operational definitions of the glass transition temperature T g are in use, and several of them are endorsed as accepted scientific standards. Nevertheless, all definitions are arbitrary, and all yield different numeric results: at best, values of T g for a given substance agree within a few kelvins. One definition refers to the viscosity , fixing T g at a value of 10¹³ poise (or 10¹² Pa·s). As evidenced experimentally, this value is close to the annealing point of many glasses. [ 29 ]
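One way to make the viscosity convention concrete is to fit measured viscosities with an empirical law and solve for the temperature at which the viscosity reaches 10¹² Pa·s. The sketch below uses the Vogel–Fulcher–Tammann (VFT) form, which is a common choice but not one this article prescribes, and the parameter values are hypothetical.

```python
def vft_log10_viscosity(temperature_k: float, A: float, B: float, T0: float) -> float:
    """Empirical VFT fit: log10(eta / Pa.s) = A + B / (T - T0)."""
    return A + B / (temperature_k - T0)

def tg_from_vft(A: float, B: float, T0: float, log10_eta_tg: float = 12.0) -> float:
    """Invert the fit for the temperature at which eta reaches the 10^12 Pa.s convention."""
    return T0 + B / (log10_eta_tg - A)

# Hypothetical VFT parameters for an oxide melt (A dimensionless, B and T0 in kelvin)
A, B, T0 = -4.5, 5500.0, 500.0
tg = tg_from_vft(A, B, T0)
print(f"Tg (viscosity convention) ~ {tg:.0f} K, check: log10(eta) = {vft_log10_viscosity(tg, A, B, T0):.1f}")
```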
In contrast to viscosity, the thermal expansion , heat capacity , shear modulus, and many other properties of inorganic glasses show a relatively sudden change at the glass transition temperature. Any such step or kink can be used to define T g . To make this definition reproducible, the cooling or heating rate must be specified.
The most frequently used definition of T g uses the energy release on heating in differential scanning calorimetry (DSC, see figure). Typically, the sample is first cooled with 10 K/min and then heated with that same speed.
Yet another definition of T g uses the kink in dilatometry (a.k.a. thermal expansion): refer to the figure on the top right. Here, heating rates of 3–5 K/min (5.4–9.0 °F/min) are common. The linear sections below and above T g are colored green. T g is the temperature at the intersection of the red regression lines. [ 28 ]
Summarized below are T g values characteristic of certain classes of materials.
Dry nylon-6 has a glass transition temperature of 47 °C (117 °F). [ 35 ] Nylon-6,6 in the dry state has a glass transition temperature of about 70 °C (158 °F), [ 36 ] [ 37 ] whereas polyethene has a glass transition range of −130 to −80 °C (−202 to −112 °F). [ 38 ] The above are only mean values, as the glass transition temperature depends on the cooling rate and molecular weight distribution and can be influenced by additives. For a semi-crystalline material, such as polyethene that is 60–80% crystalline at room temperature, the quoted glass transition refers to what happens to the amorphous part of the material upon cooling.
In 1971, Zeller and Pohl discovered [ 42 ] that when glass is at a very low temperature (~1 K), its specific heat has a linear component: $c \approx c_1 T + c_3 T^3$. This is an unusual effect, because crystalline material typically has $c \propto T^3$, as in the Debye model . This was explained by the two-level system hypothesis, [ 43 ] which states that a glass is populated by two-level systems, which look like a double potential well separated by a wall. The wall is high enough that resonant tunneling does not occur, but thermal tunneling does. Namely, if the two wells have energy difference $\Delta E \sim k_B T$, then a particle in one well can tunnel to the other well by thermal interaction with the environment. Now, imagine that there are many two-level systems in the glass, and their $\Delta E$ is randomly distributed but fixed ("quenched disorder"); then, as temperature drops, more and more of these two-level systems are frozen out (meaning that tunneling takes so long that it cannot be experimentally observed).
Consider a single two-level system that is not frozen out, whose energy gap is $\Delta E = O(1/\beta)$. It follows a Boltzmann distribution, so its average energy is $\frac{\beta \Delta E}{e^{\beta \Delta E} - 1}\,\beta^{-1}$.
Now, assume that the two-level systems are all quenched, so that each $\Delta E$ varies little with temperature. In that case, we can write $n(\Delta E)$ for the density of states with energy gap $\Delta E$. We also assume that $n(\Delta E)$ is positive and smooth near $\Delta E \approx 0$.
Then, the total energy contributed by those two-level systems is
$$\bar{E} \sim \int_0^{O(1/\beta)} \frac{\beta \Delta E}{e^{\beta \Delta E} - 1}\,\beta^{-1}\, n(\Delta E)\, \mathrm{d}\Delta E = \beta^{-2} \int_0^{O(1)} \frac{a}{e^{a} - 1}\, n(a/\beta)\, \mathrm{d}a \propto \beta^{-2} n(0)$$
The effect is that the average energy in these two-level systems is $\bar{E} \sim T^2$, leading to a $\partial_T \bar{E} \propto T$ term in the specific heat.
In experimental measurements, the specific heat capacity of glass is measured at different temperatures, and a $(T^2, c/T)$ graph is plotted. Assuming that $c \approx c_1 T + c_3 T^3$, the graph should show $c/T \approx c_1 + c_3 T^2$, that is, a straight line whose slope gives the typical Debye-like heat capacity and whose vertical intercept gives the anomalous linear component. [ 41 ]
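The analysis just described amounts to a straight-line fit of c/T against T²: the intercept gives the anomalous linear coefficient c₁ and the slope gives the Debye coefficient c₃. The sketch below uses synthetic data with assumed coefficients purely to illustrate the fitting step, not real measurements.

```python
import numpy as np

# Synthetic low-temperature data (T in kelvin), generated from assumed coefficients
# c1 = 1.0e-5 and c3 = 3.0e-5 (arbitrary units) plus a little measurement noise.
rng = np.random.default_rng(0)
T = np.linspace(0.1, 1.0, 30)
c = 1.0e-5 * T + 3.0e-5 * T**3 + rng.normal(0.0, 3e-8, T.size)

# Straight-line fit of c/T against T^2: slope -> c3 (Debye term), intercept -> c1 (linear term)
slope, intercept = np.polyfit(T**2, c / T, 1)
print(f"c1 (anomalous linear term, two-level systems) ~ {intercept:.2e}")
print(f"c3 (Debye T^3 term)                           ~ {slope:.2e}")
```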
As a liquid is supercooled, the difference in entropy between the liquid and solid phase decreases. By extrapolating the heat capacity of the supercooled liquid below its glass transition temperature , it is possible to calculate the temperature at which the difference in entropies becomes zero. This temperature has been named the Kauzmann temperature .
If a liquid could be supercooled below its Kauzmann temperature, and it did indeed display a lower entropy than the crystal phase, this would be paradoxical, as the liquid phase should have the same vibrational entropy, but much higher positional entropy, as the crystal phase. This is the Kauzmann paradox , still not definitively resolved. [ 44 ] [ 45 ]
There are many possible resolutions to the Kauzmann paradox.
Kauzmann himself resolved the entropy paradox by postulating that all supercooled liquids must crystallize before the Kauzmann temperature is reached.
Perhaps at the Kauzmann temperature, glass reaches an ideal glass phase , which is still amorphous, but has a long-range amorphous order which decreases its overall entropy to that of the crystal. The ideal glass would be a true phase of matter. [ 45 ] [ 46 ] The ideal glass is hypothesized, but cannot be observed naturally, as it would take too long to form. Something approaching an ideal glass has been observed as "ultrastable glass" formed by vapor deposition . [ 47 ]
Perhaps there must be a phase transition before the entropy of the liquid decreases. In this scenario, the transition temperature is known as the calorimetric ideal glass transition temperature T 0c . In this view, the glass transition is not merely a kinetic effect, i.e. merely the result of fast cooling of a melt, but there is an underlying thermodynamic basis for glass formation. The glass transition temperature:
Perhaps the heat capacity of the supercooled liquid near the Kauzmann temperature smoothly decreases to a smaller value.
Perhaps a first-order phase transition to another liquid state occurs before the Kauzmann temperature is reached, with the heat capacity of this new state being less than that obtained by extrapolation from higher temperatures.
Silica (the chemical compound SiO 2 ) has a number of distinct crystalline forms in addition to the quartz structure. Nearly all of the crystalline forms involve tetrahedral SiO 4 units linked together by shared vertices in different arrangements ( stishovite , composed of linked SiO 6 octahedra , is the main exception). Si-O bond lengths vary between the different crystal forms. For example, in α-quartz the bond length is 161 picometres (6.3 × 10⁻⁹ in), whereas in α-tridymite it ranges from 154–171 pm (6.1 × 10⁻⁹ – 6.7 × 10⁻⁹ in). The Si-O-Si bond angle also varies from 140° in α-tridymite to 144° in α-quartz to 180° in β-tridymite. Any deviations from these standard parameters constitute microstructural differences or variations that represent an approach to an amorphous , vitreous or glassy solid .
The transition temperature T g in silicates is related to the energy required to break and re-form covalent bonds in an amorphous (or random network) lattice of covalent bonds . The T g is clearly influenced by the chemistry of the glass. For example, addition of elements such as B , Na , K or Ca to a silica glass , which have a valency less than 4, helps in breaking up the network structure, thus reducing the T g . Alternatively, P , which has a valency of 5, helps to reinforce an ordered lattice, and thus increases the T g . [ 48 ] T g is directly proportional to bond strength; it depends on quasi-equilibrium thermodynamic parameters of the bonds, e.g. on the enthalpy H d and entropy S d of configurons (broken bonds): T g = H d / [ S d + R ln((1 − f c )/ f c )], where R is the gas constant and f c is the percolation threshold. For strong melts such as Si O 2 the percolation threshold in the above equation is the universal Scher–Zallen critical density in 3-D space, i.e. f c = 0.15; however, for fragile materials the percolation thresholds are material-dependent and f c ≪ 1. [ 49 ] The enthalpy H d and the entropy S d of configurons (broken bonds) can be found from available experimental data on viscosity. [ 50 ] On the surface of SiO 2 films, scanning tunneling microscopy has resolved clusters of ca. 5 SiO 2 in diameter that move in a two-state fashion on a time scale of minutes. This is much faster than the dynamics in the bulk, but in agreement with models that compare bulk and surface dynamics. [ 51 ] [ 52 ]
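As a purely numerical illustration of the configuron formula above, the snippet below evaluates T g = H d /[ S d + R ln((1 − f c )/ f c )] with f c = 0.15 for a strong melt; the enthalpy and entropy values are illustrative placeholders rather than fitted data for any real glass.

```python
import math

R = 8.314    # gas constant, J/(mol K)
f_c = 0.15   # Scher-Zallen critical density for a strong melt (value quoted in the text)

# Illustrative placeholder values; in practice H_d and S_d are obtained
# by fitting experimental viscosity data.
H_d = 75e3   # configuron enthalpy, J/mol
S_d = 36.0   # configuron entropy, J/(mol K)

T_g = H_d / (S_d + R * math.log((1 - f_c) / f_c))
print(f"T_g = {T_g:.0f} K")
```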
In polymers the glass transition temperature, T g , is often expressed as the temperature at which the Gibbs free energy is such that the activation energy for the cooperative movement of 50 or so elements of the polymer is exceeded [ citation needed ] . This allows molecular chains to slide past each other when a force is applied. From this definition, we can see that the introduction of relatively stiff chemical groups (such as benzene rings) will interfere with the flowing process and hence increase T g . [ 53 ] The stiffness of thermoplastics decreases due to this effect (see figure). When the glass transition temperature has been reached, the stiffness stays the same for a while, i.e., at or near E 2 , until the temperature exceeds T m and the material melts. This region is called the rubber plateau.
In ironing , a fabric is heated through this transition so that the polymer chains become mobile. The weight of the iron then imposes a preferred orientation. T g can be significantly decreased by addition of plasticizers into the polymer matrix. Smaller molecules of plasticizer embed themselves between the polymer chains, increasing the spacing and free volume, and allowing them to move past one another even at lower temperatures. Addition of plasticizer can effectively take control over polymer chain dynamics and dominate the amounts of the associated free volume so that the increased mobility of polymer ends is not apparent. [ 54 ] The addition of nonreactive side groups to a polymer can also make the chains stand off from one another, reducing T g . If a plastic with some desirable properties has a T g that is too high, it can sometimes be combined with another in a copolymer or composite material with a T g below the temperature of intended use. Note that some plastics are used at high temperatures, e.g., in automobile engines, and others at low temperatures. [ 32 ]
In viscoelastic materials, the presence of liquid-like behavior depends on the material's properties and so varies with the rate of applied load, i.e., how quickly a force is applied. The silicone toy Silly Putty behaves quite differently depending on the time rate of applying a force: pull slowly and it flows, acting as a heavily viscous liquid; hit it with a hammer and it shatters, acting as a glass.
On cooling, rubber undergoes a liquid-glass transition , which has also been called a rubber-glass transition .
Molecular motion in condensed matter can be represented by a Fourier series whose physical interpretation consists of a superposition of longitudinal and transverse waves of atomic displacement with varying directions and wavelengths. In monatomic systems, these waves are called density fluctuations . (In polyatomic systems, they may also include compositional fluctuations.) [ 55 ]
Thus, thermal motion in liquids can be decomposed into elementary longitudinal vibrations (or acoustic phonons ) while transverse vibrations (or shear waves) were originally described only in elastic solids exhibiting the highly ordered crystalline state of matter. In other words, simple liquids cannot support an applied force in the form of a shearing stress , and will yield mechanically via macroscopic plastic deformation (or viscous flow). Furthermore, the fact that a solid deforms locally while retaining its rigidity – while a liquid yields to macroscopic viscous flow in response to the application of an applied shearing force – is accepted by many as the mechanical distinction between the two. [ 56 ] [ 57 ]
The inadequacies of this conclusion, however, were pointed out by Frenkel in his revision of the kinetic theory of solids and the theory of elasticity in liquids . This revision follows directly from the continuous characteristic of the viscoelastic crossover from the liquid state into the solid one when the transition is not accompanied by crystallization—ergo the supercooled viscous liquid . Thus we see the intimate correlation between transverse acoustic phonons (or shear waves) and the onset of rigidity upon vitrification , as described by Bartenev in his mechanical description of the vitrification process. [ 58 ] [ 59 ]
The velocities of longitudinal acoustic phonons in condensed matter are directly responsible for the thermal conductivity that levels out temperature differentials between compressed and expanded volume elements. Kittel proposed that the behavior of glasses is interpreted in terms of an approximately constant " mean free path " for lattice phonons, and that the value of the mean free path is of the order of magnitude of the scale of disorder in the molecular structure of a liquid or solid. The thermal phonon mean free paths or relaxation lengths of a number of glass formers have been plotted versus the glass transition temperature, indicating a linear relationship between the two. This has suggested a new criterion for glass formation based on the value of the phonon mean free path. [ 60 ]
It has often been suggested that heat transport in dielectric solids occurs through elastic vibrations of the lattice, and that this transport is limited by elastic scattering of acoustic phonons by lattice defects (e.g. randomly spaced vacancies). [ 61 ] These predictions were confirmed by experiments on commercial glasses and glass ceramics , where mean free paths were apparently limited by "internal boundary scattering" to length scales of 10–100 micrometres (0.00039–0.00394 in). [ 62 ] [ 63 ] The relationship between these transverse waves and the mechanism of vitrification has been described by several authors who proposed that the onset of correlations between such phonons results in an orientational ordering or "freezing" of local shear stresses in glass-forming liquids, thus yielding the glass transition. [ 64 ]
The influence of thermal phonons and their interaction with electronic structure is a topic that was appropriately introduced in a discussion of the resistance of liquid metals. Lindemann's theory of melting is referenced, [ 65 ] and it is suggested that the drop in conductivity in going from the crystalline to the liquid state is due to the increased scattering of conduction electrons as a result of the increased amplitude of atomic vibration . Such theories of localization have been applied to transport in metallic glasses , where the mean free path of the electrons is very small (on the order of the interatomic spacing). [ 66 ] [ 67 ]
The formation of a non-crystalline form of a gold-silicon alloy by the method of splat quenching from the melt led to further considerations of the influence of electronic structure on glass forming ability, based on the properties of the metallic bond . [ 68 ] [ 69 ] [ 70 ] [ 71 ] [ 72 ]
Other work indicates that the mobility of localized electrons is enhanced by the presence of dynamic phonon modes. One claim against such a model is that if chemical bonds are important, the nearly free electron models should not be applicable. However, if the model includes the buildup of a charge distribution between all pairs of atoms just like a chemical bond (e.g., silicon, when a band is just filled with electrons) then it should apply to solids . [ 73 ]
Thus, if the electrical conductivity is low, the mean free path of the electrons is very short. The electrons will only be sensitive to the short-range order in the glass since they do not get a chance to scatter from atoms spaced at large distances. Since the short-range order is similar in glasses and crystals, the electronic energies should be similar in these two states. For alloys with lower resistivity and longer electronic mean free paths, the electrons could begin to sense [ dubious – discuss ] that there is disorder in the glass, and this would raise their energies and destabilize the glass with respect to crystallization. Thus, the glass formation tendencies of certain alloys may therefore be due in part to the fact that the electron mean free paths are very short, so that only the short-range order is ever important for the energy of the electrons.
It has also been argued that glass formation in metallic systems is related to the "softness" of the interaction potential between unlike atoms. Some authors, emphasizing the strong similarities between the local structure of the glass and the corresponding crystal, suggest that chemical bonding helps to stabilize the amorphous structure. [ 74 ] [ 75 ]
Other authors have suggested that the electronic structure yields its influence on glass formation through the directional properties of bonds. Non-crystallinity is thus favored in elements with a large number of polymorphic forms and a high degree of bonding anisotropy . Crystallization becomes more unlikely as bonding anisotropy is increased from isotropic metallic to anisotropic metallic to covalent bonding, thus suggesting a relationship between the group number in the periodic table and the glass forming ability in elemental solids . [ 76 ] | https://en.wikipedia.org/wiki/Glass_transition |
Glass with embedded metal and sulfides ( GEMS ) are tiny spheroids in cosmic dust particles with bulk compositions that are approximately chondritic . They form the building blocks of anhydrous interplanetary dust particles (IDPs) in general, and cometary IDPs , in particular. Their compositions, mineralogy and petrography appear to have been shaped by exposure to ionizing radiation . Since the exposure occurred prior to the accretion of cometary IDPs, and therefore comets themselves, GEMS are likely either solar nebula or presolar interstellar grains . The properties of GEMS (size, shape, mineralogy) bear a strong resemblance to those of interstellar silicate grains as inferred from astronomical observations. [ fn 1 ]
This planetary science article is a stub . You can help Wikipedia by expanding it .
This glass -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Glass_with_embedded_metal_and_sulfides |
In integral calculus , Glasser's master theorem explains how a certain broad class of substitutions can simplify certain integrals over the whole interval from $-\infty$ to $+\infty$. It is applicable in cases where the integrals must be construed as Cauchy principal values , and a fortiori it is applicable when the integral converges absolutely . It is named after M. L. Glasser, who introduced it in 1983. [ 1 ]
A special case called the Cauchy–Schlömilch substitution or Cauchy–Schlömilch transformation [ 2 ] was known to Cauchy in the early 19th century. [ 3 ] It states that if $u = x - a/x$ with $a > 0$, then
$$\mathrm{PV}\int_{-\infty}^{\infty} f(u)\,dx \;=\; \mathrm{PV}\int_{-\infty}^{\infty} f(x)\,dx,$$
where PV denotes the Cauchy principal value.
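A quick numerical sanity check of the substitution, using $f(x) = e^{-x^2}$ (whose integral over the real line is $\sqrt{\pi}$) and $a = 1$; the choice of test function is arbitrary.

```python
import numpy as np
from scipy.integrate import quad

a = 1.0                                 # any a > 0
f = lambda x: np.exp(-x**2)             # test function with known integral sqrt(pi)

def g(x):
    # Integrand f(x - a/x); its limit at x = 0 is 0, so guard the division.
    if x == 0.0:
        return 0.0
    return f(x - a / x)

# Integrate over each half-line and compare with the untransformed integral.
left, _ = quad(g, -np.inf, 0.0)
right, _ = quad(g, 0.0, np.inf)
print(left + right, np.sqrt(np.pi))     # both are approximately 1.7724538509
```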
If $a$, $a_i$, and $b_i$ are real numbers and
$$\varphi(x) \;=\; x - a - \sum_{i=1}^{N}\frac{|b_i|}{x-a_i},$$
then
$$\mathrm{PV}\int_{-\infty}^{\infty} F(\varphi(x))\,dx \;=\; \mathrm{PV}\int_{-\infty}^{\infty} F(x)\,dx. $$ | https://en.wikipedia.org/wiki/Glasser's_master_theorem |
The glassworts are various succulent, annual halophytic plants, that is, plants that thrive in saline environments, such as seacoasts and salt marshes . The original English glasswort plants belong to the genus Salicornia , but today the glassworts include halophyte plants from several genera, some of which are native to continents unknown to the medieval English, and growing in ecosystems, such as mangrove swamps, never envisioned when the term glasswort was coined.
The common name "glasswort" came into use in the 16th century to describe plants growing in England whose ashes could be used for making soda-based (as opposed to potash -based) glass . [ 1 ] [ 2 ]
The ashes of glasswort plants, and also of their Mediterranean counterpart saltwort plants, yield soda ash , which is an important ingredient for glassmaking and soapmaking . Soda ash is an alkali whose active ingredient is now known to be sodium carbonate .
Glasswort and saltwort plants sequester the sodium they absorb from salt water into their tissues (see Salsola soda ). Ashing of the plants converts some of this sodium into sodium carbonate (or "soda", in one of the old uses of the term). [ citation needed ]
In the medieval and early post-medieval centuries, various glasswort plants were collected at tidal marshes and other saline places in the Mediterranean region. The collected plants were burned. The resulting ashes were mixed with water. Sodium carbonate is soluble in water. Non-soluble components of the ashes sank to the bottom of the water container. The water with the sodium carbonate dissolved in it was then transferred to another container, and then the water was evaporated off, leaving behind the sodium carbonate. Another major component of the ashes that is soluble in water is potassium carbonate, a.k.a. potash. The resulting product consisted mainly of a mixture of sodium carbonate and potassium carbonate. This product was called "soda ash" (it was also called "alkali"). It contained 20% to 30% sodium carbonate. For glassmaking, it was superior to a potash product obtained by the same procedure from the ashes of non-salty plants. If plant ashes were not washed as just described, they were still usable in glassmaking but the results were not as good. [ citation needed ]
The appearance of the word glasswort in English is reasonably contemporaneous with a 16th-century resurgence in English glassmaking, which had suffered a long decline after Roman times. [ 3 ] [ 4 ] This resurgence was led by glassmakers who emigrated to England from Lorraine and from Venice . The Lorraine glassmakers brought with them the technology of forest glass , the greenish glass that used potash from wood ashes as a flux. The Venetian glassmakers brought with them the technology of cristallo , the immaculately clear glass that used soda ash as a flux. These glassmakers would have recognized Salicornia europaea growing in England as a source for soda ash. Prior to their arrival, it was said that the plant "hath no name in English". [ 2 ]
By the 18th century, Spain had an enormous industry producing soda ash from saltworts; the soda ash from this source was known as barrilla . [ 5 ] Scotland had a large 18th-century industry producing soda ash from seaweed. The source of this ash was kelp . This industry was so lucrative that it led to overpopulation in the Western Isles of Scotland, and one estimate is that 100,000 people were occupied with "kelping" during the summer months. [ 6 ] In the same period, soda ash ( la soude de Narbonne ) was produced in quantity from glasswort proper around Narbonne , France. [ 7 ] [ 8 ] The commercialization of the Leblanc process for synthesizing sodium carbonate (from salt, limestone , and sulfuric acid ) brought an end to the era of farming for soda ash in the first half of the 19th century. [ citation needed ]
Young shoots of Salicornia europaea are tender and can be eaten raw as a salad: glasswort salad or samphire salad ( Turkish : Deniz börülcesi salatası ). This salad is a part of Turkish cuisine , also made with lemon juice, olive oil [ 9 ] and garlic. [ 10 ] [ 11 ] It is commonly served as a meze . [ citation needed ] The shoots can also be pickled. [ 12 ]
The plant can further be prepared in several ways – cooked, steamed, or stir fried – and eaten as a vegetable dish. [ 13 ]
Plants that have been called glassworts include: | https://en.wikipedia.org/wiki/Glasswort |
Glauber is a scientific discovery method written in the context of computational philosophy of science . It is related to machine learning in artificial intelligence .
Glauber was written, among other programs, by Pat Langley , Herbert A. Simon , G. Bradshaw and J. Zytkow to demonstrate how scientific discovery may be obtained by problem solving methods, in their book Scientific Discovery, Computational Explorations on the Creative Mind . [ 1 ]
Their programs simulate historical scientific discoveries based on the empirical evidence known at the time of discovery.
Glauber was named after Johann Rudolph Glauber , a 17th-century alchemist whose work helped to develop acid-base theory . Glauber (the method) rediscovers the law of acid-alkali reactions producing salts, given the qualities of substances and observed facts, the result of mixing substances. From that knowledge Glauber discovers that substances that taste bitter react with substances tasting sour, producing substances tasting salty.
In a few words, the law: acids react with alkalis to produce salts.
Glauber was designed by Pat Langley as part of his work on discovery heuristics in an attempt to have a computer automatically review a host of values and characteristics and make independent analyses from them. In the case of Glauber, the goal was to have an autonomous application that could estimate, even perfectly describe, the nature of a given chemical compound by comparing it to related substances. Langley formalized and compiled Glauber in 1983.
The software was supplied with information about a variety of materials as they had been described by 17th- and 18th-century chemists, before most of modern chemical knowledge had been uncovered or invented. Qualitative descriptions like taste , rather than numerical data such as molecular weight , were programmed into the application. Chemical reactions that were known in that era, and the distinction between reactants and products, were also provided. From this knowledge, Glauber was to figure out which substances were acids , bases , and salts without any quantitative information. The system examined chemical substances and all of their most likely reactions, and correlated the expected taste and related acidity or saltiness according to the rule that acids and bases produce salts.
Glauber was a very successful advance in theoretical chemistry as performed by computer and it, along with similar systems developed by Herbert A. Simon including Stahl (which examines oxidation ) and DALTON (which calculates atomic weight ), helped form the groundwork of all current automated chemical analysis.
Glauber uses two predicates: Reacts and Has-Quality, represented in Lisp lists as follows:
For their experiment the authors used the following facts:
Discovering the following law and equivalence classes:
The modern notation, with strings like NaOH, HCl, etc., is used merely to provide short substance names. Here they do not denote the chemical structure of the substances, which was not known at the time of the discovery; the program works equally well with any name used in the 17th century, such as aqua regia , muriatic acid , etc.
Glauber is based on two procedures: Form-Class and Determine-Quantifier.
The procedure Form-Class generalizes the Reacts predicates by replacing the substance names with variables ranging over equivalence classes determined by a quality whose value distinguishes the substances in each class.
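The following is a minimal sketch, in Python rather than the original Lisp, of the kind of class formation and rule generalization that Form-Class performs; the substance names, tastes and reactions are illustrative stand-ins for the authors' fact base, not their actual data or code.

```python
# Illustrative qualitative facts: observed tastes and observed reactions.
tastes = {
    "HCl": "sour", "HNO3": "sour",        # acids
    "NaOH": "bitter", "KOH": "bitter",    # alkalis
    "NaCl": "salty", "KNO3": "salty",     # salts
}
reactions = [
    ({"HCl", "NaOH"}, {"NaCl"}),
    ({"HNO3", "KOH"}, {"KNO3"}),
]

# Form-Class-style step 1: partition the substances into equivalence classes
# according to the value of the distinguishing quality (here, taste).
classes = {}
for substance, taste in tastes.items():
    classes.setdefault(taste, set()).add(substance)

# Step 2: generalize each observed reaction by replacing substance names with
# the classes they belong to, collapsing the facts into a single class-level law.
def taste_class(substance):
    return next(t for t, members in classes.items() if substance in members)

laws = {
    (frozenset(taste_class(s) for s in inputs),
     frozenset(taste_class(s) for s in outputs))
    for inputs, outputs in reactions
}
for inputs, outputs in laws:
    print(f"{sorted(inputs)} substances react to produce {sorted(outputs)} substances")
# -> ['bitter', 'sour'] substances react to produce ['salty'] substances
```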
In the experiment designed by its authors, the substances are partitioned into three classes based on the value of the taste quality: acids (sour), alkalis (bitter) and salts (salty). | https://en.wikipedia.org/wiki/Glauber_discovery_system |
The Glauber multiple scattering theory [ 1 ] [ 2 ] is a framework developed by Roy J. Glauber to describe the scattering of particles off composite targets, such as nuclei , in terms of multiple interactions between the probing particle and the individual constituents of the target. It is widely used [ 3 ] in high-energy physics , nuclear physics , and hadronic physics , where quantum coherence effects and multiple scatterings are significant.
The basic idea of the Glauber formalism is that the incident projectile is assumed to interact with each component of the complex target in turn as it moves in a straight line through the target. This assumes the eikonal approximation , viz that the projectile's trajectory is nearly straight-line, with only small-angle deflections due to interactions with the target component. The theory accounts for the fact that a projectile may interact with more than one constituent (e.g., the nucleons of a target nucleus) as it passes through the target nucleus. These interactions are treated coherently . The scattering amplitude is taken as the sum over contributions from multiple scatterings. This is done using the optical model , where the target nucleus is treated as a complex potential. In fact, coherent superposition of scattering amplitudes from all possible paths through the nucleus is a fundamental aspect, leading to phenomena like diffraction patterns. The theory often uses Gaussian or Woods–Saxon potential distributions for nuclear densities.
The elastic scattering amplitude $F(\vec q)$ in Glauber theory is given by: [ 4 ]
$$F(\vec q) \;=\; \frac{ik}{2\pi}\int e^{\,i\vec q\cdot\vec b}\left[1 - e^{\,i\chi(\vec b)}\right]d^{2}b$$
where: $\vec q$ is the momentum transfer , $\vec b$ is the impact parameter , $k$ is the wave number of the incident projectile, and $\chi(\vec b)$ is the eikonal phase shift representing the integrated interaction potential.
For a nucleus, $\chi(\vec b)$ is expressed as the sum of contributions from individual nucleons, $\chi(\vec b)=\sum_{j}\chi_{j}(\vec b-\vec s_{j})$, where $\vec s_{j}$ is the transverse position of nucleon $j$.
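As a rough numerical illustration of how the formalism is used, the sketch below evaluates the optical-limit estimate of an inelastic nucleon-nucleus cross section, $\sigma_{\mathrm{inel}} = \int d^2b\,[1 - \exp(-\sigma_{NN} T_A(b))]$, for a Gaussian thickness function; the mass number, radius and nucleon-nucleon cross section are round illustrative values, and this optical-limit form is a simplification of the full multiple-scattering amplitude quoted above.

```python
import numpy as np

A = 12           # mass number (illustrative)
R = 2.4          # Gaussian radius parameter, fm (illustrative)
sigma_NN = 4.0   # inelastic nucleon-nucleon cross section, fm^2 (roughly 40 mb)

def T_A(b):
    """Gaussian thickness function, normalized so its integral over d^2b equals A."""
    return A / (2.0 * np.pi * R**2) * np.exp(-b**2 / (2.0 * R**2))

# Optical-limit Glauber estimate: sigma_inel = integral d^2b [1 - exp(-sigma_NN * T_A(b))],
# evaluated as a one-dimensional radial integral over the impact parameter.
b = np.linspace(0.0, 15.0, 4000)                        # fm
integrand = 2.0 * np.pi * b * (1.0 - np.exp(-sigma_NN * T_A(b)))
sigma_inel = np.sum(integrand) * (b[1] - b[0])          # fm^2
print(f"sigma_inel = {sigma_inel:.1f} fm^2 = {10 * sigma_inel:.0f} mb")
```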
At high energies, the above formalism simplifies by focusing on transverse geometry and neglecting effects like spin or low-energy dynamics. Relativistic corrections were not part of the original formalism, but have been included in modern applications where they are necessary (high-energy cases). [ 5 ] Other simplifications are that the theory assumes independent scatterings, neglects correlations between nucleons and, as an effective model, does not directly account for some QCD effects that are significant at very small distances.
The Glauber theory has been applied to: | https://en.wikipedia.org/wiki/Glauber_multiple_scattering_theory |
Glaze defects are any flaws in the surface quality of a ceramic glaze , its physical structure or its interaction with the body.
Certain glaze defects are a result of differences in the thermal expansion coefficient of the glaze and the clay body.
Crazing is a spider web pattern of cracks penetrating the glaze. It is caused by tensile stresses greater than the glaze is able to withstand. [ 1 ] [ 2 ] Common reasons for such stresses are: a mismatch between the thermal expansions of glaze and body; moisture expansion of the body; and, in the case of glazed tiles fixed to a wall, movement of the wall or of the bonding material used to fix the tile to the wall. [ 3 ] The cracks can allow the ingress of water. Once-fired ware tends to be more resistant to crazing due to better development of the glaze/body interfacial layer, which reduces stress gradients between the glaze and body. [ 3 ]
In pottery a distinction is often made between crazing, as an accidental defect, and "crackle", which is when the same phenomenon, often strongly accentuated, is produced deliberately. The Chinese in particular enjoyed the random effects of crackle, though it spans a spectrum: in Ru ware it is a tolerated feature of most pieces, but not sought, while in Guan ware a strong crackle is a desired effect.
The causes of crazing include: [ 1 ] [ 3 ]
Steger's Crazing Test is a method for the assessment of the glaze fit. It is undertaken by measuring any deformation on cooling of a thin bar that was glazed only on one side. [ 4 ] [ 5 ] [ 6 ] A common method of testing glazed ceramic ware for crazing resistance is to expose pieces of ware to the steam in an autoclave at a minimum of 50 psi. [ 7 ] [ 8 ]
Seger's Rules are a series of empirical rules put forward by Hermann Seger for the prevention of crazing and peeling. To prevent crazing, the body should be adjusted as follows: decrease the clay, increase the free silica ; replace some of the ball clay by kaolin ; decrease the feldspar ; grind the silica more finely; biscuit fire at higher temperature. Alternatively, the glaze can be adjusted: increase silica and/or decrease fluxes; replace some SiO 2 by B 2 O 3 ; replace fluxes of high equivalent weight by fluxes of lower equivalent weight. To prevent peeling, the body or glaze should be adjusted in the reverse direction. [ 9 ]
Shivering describes the breaking away of glaze from ceramic ware as a result of greater compression in the glaze layer than in the body, caused by the glaze having an expansion coefficient below the clay body's. [ 10 ]
It is the opposite of crazing, as are the preventative steps: see Seger's Rules above. Shivering is also known as peeling . [ 3 ] [ 11 ] [ 12 ]
Regulations have existed since the late 1960s to protect consumers from the potential risk of toxic materials, mainly metals, being released from glazes into drink and foodstuffs. Lead and cadmium are the metals of greatest concern, although testing can be extended to include others. The propensity for any glaze to release metal may depend on complex interactions between the formulation used, any applied decoration and the kiln atmosphere. [ 1 ]
Monitoring the level of metal release from glazed ware forms part of the quality control procedures of all reputable producers. [ 1 ] [ 13 ] Test methods are specified according to national and international standards, and testing usually involves: the ware being immersed in or filled with a 4% acetic acid solution; covered and left for 24 hours at room temperature, although higher temperatures are needed if cooking ware is being tested; the acetic acid solution then being decanted from the ware and the concentration of leached metal measured by Atomic absorption spectroscopy . [ 14 ] Acceptance limits are enforced by legislation, and whilst varying between countries all are within the ppm range. Some of the most widely recognised pieces of legislation are: across Europe, 'EC Directive 84/500/EEC 1984'; for the UK, 'GB Ceramic Ware (Safety) Regulations SI 1647, 1988'; and for the USA, 'FDA Compliance Policy Guide 7117.06 and 7117.07 for cadmium and lead.' [ 15 ] [ 16 ] [ 17 ]
A blister is a large bubble sometimes present as a fault in ceramic ware. Blisters appear as large bubbles either just below or penetrating the surface, leaving sharp, rough edges that collect dirt. The surface of the glaze is very unpleasant and looks like a boiled mass of bubbles, craters and pinholes. [ 3 ] [ 18 ]
Crawling is a defect that appears as irregular, bare patches of fired body showing through the glaze where it has failed to adhere to or wet the body on firing. The cause is a weak bond between glaze and body; this may result from greasy patches or dust on the surface of the biscuit ware or from shrinkage of the applied glaze slip during drying. The fault is more likely to occur with once-fired ware such as sanitaryware. [ 1 ] [ 3 ] [ 19 ] [ 20 ]
Metal marks are dark lines, often accompanied by damage in the glaze, caused by the deposition of metal during the use of metal utensils. The cutlery, or other relatively soft metal, will leave a very thin smear of metal on pottery ware if the glaze is minutely pitted. A glaze may have this defective surface as it leaves the glost kiln , or it may subsequently develop such a surface as a result of inadequate chemical durability. The fault is also known as cutlery marking. [ 3 ] [ 21 ] [ 22 ] [ 23 ]
A pin-hole is a fault that is commonly the result of a bubble in the glaze that burst while the glaze was molten but only partially healed. The bubbles are most often from gas that originates from air trapped between the particles of powdered glaze as the glaze begins to mature, or from gases evolved from carbonate compounds. [ 24 ] [ 25 ]
A specific example of pin-holes is Spit-out . These are pin-holes or craters sometimes occurring in glazed non-vitreous ceramics while they are in the decorating kiln . The cause of this defect is the evolution of water vapour , adsorbed by the porous body, during the period between the glost firing and the decorating firing, via minute cracks in the glaze. [ 26 ] [ 27 ] | https://en.wikipedia.org/wiki/Glaze_defects |
A glazier is a tradesperson responsible for cutting, installing, and removing glass (and materials used as substitutes for glass, such as some plastics ). [ 1 ] They also refer to blueprints to figure out the size, shape, and location of the glass in the building. They may have to consider the type and size of scaffolding they need to stand on to fit and install the glass. Glaziers may work with glass in various surfaces and settings, such as cutting and installing windows , doors , shower doors , skylights , storefronts , display cases , mirrors , facades , interior walls , ceilings , and tabletops . [ 1 ] [ 2 ]
The Occupational Outlook Handbook of the U.S. Department of Labor lists the following as typical tasks for a glazier: [ 3 ]
The National Occupational Analysis recognized by the Canadian Council of Directors of Apprenticeship separates the trade into 5 blocks of skills, each with a list of skills, and a list of tasks and subtasks a journeyman is expected to be able to accomplish: [ 4 ]
Tools used by glaziers "include cutting boards, glass-cutting blades, straightedges, glazing knives, saws, drills, grinders, putty, scrapers, sandpaper, sanding blocks, 5-in-1s, respirator/dust mask and glazing compounds." [ 1 ]
Some glaziers work specifically with glass in motor vehicles ; other work specifically with the safety glass used in aircraft. Others repair old antique windows and doors that need glass replaced. [ 1 ] [ 3 ]
Glaziers are typically educated at the high school diploma or equivalent level and learn the skills of the trade through an apprenticeship program, which in the U.S. is typically four years. [ 3 ]
In the U.S., apprenticeship programs are offered through the National Glass Association as well as trade associations and local contractors' associations. A large portion of glaziers in the United States are members of the IUPAT, the International Union of Painters and Allied Trades which offers its own apprenticeship program which consists of 8000 hours of on the job training and 4 years of classroom education. Because of this, IUPAT Glaziers tend to be well rounded in all aspects of the trade, and therefore carry a higher production rate, face fewer health & safety risks and command a higher pay rate. [ 1 ]
In Canada, glaziers usually go through a formal apprenticeship which includes about four years of on-the-job experience combined with classroom study in order to get certified. Unions and many employers offer these apprenticeships. To become an apprentice, one must be at least 18 years old and have graduated from high school. Once a person is certified, they will be eligible to apply for the Red Seal allowing the person to work anywhere in Canada without re-certifying. [ 5 ] In Ontario, Canada , apprenticeships are offered at the provincial level and certified through the Ontario College of Trades . [ 6 ]
In Australia, while you do not need formal qualifications to work as a glazier, it is usual for apprentices to complete a Certificate III in Glass and Glazing as part of their training. Most apprentices choose to do the Certificate III in Glass and Glazing (MSF30418) part-time (three years). You can also choose to do the course full time (one year study). The Certificate II in Glass and Glazing (MSF20413) is also available for those who need additional study. [ 7 ] [ 8 ]
Occupational hazards encountered by glaziers include the risks of being cut by glass or tools and falling from scaffolds or ladders or lead exposure from old lead paint on antique windows. [ 1 ] [ 3 ] The use of heavy equipment may also cause injury: the National Institute for Occupational Safety and Health (NIOSH) reported in 1990 that a journeyman glazier died in an industrial accident in Indiana after attempting to use a manlift to carry a thousand-pound case of glass which the manlift did not have capacity to carry. [ 9 ]
According to the Occupational Outlook Handbook , there are some 45,300 glaziers in the United States, with median pay of $38,410 per year in 2014. [ 3 ] Two-thirds of Glaziers work in the foundation, structure, and building exterior contractors industry, with smaller numbers working in building material and supplies dealing, building finishing contracting, automotive repair and maintenance, and glass and glass product manufacturing. [ 2 ] [ 3 ]
Among the 50 states , only Connecticut and Florida require glaziers to hold a license . [ 3 ]
Media related to Glaziers at Wikimedia Commons | https://en.wikipedia.org/wiki/Glazier |
Glazing , which derives from the Middle English for 'glass', is a part of a wall or window , made of glass . [ 1 ] [ 2 ] Glazing also describes the work done by a professional " glazier ". Glazing is also less commonly used to describe the insertion of ophthalmic lenses into an eyeglass frame. [ 3 ]
Common types of glazing that are used in architectural applications include clear and tinted float glass , tempered glass , and laminated glass as well as a variety of coated glasses, all of which can be glazed singly or as double, or even triple , glazing units. Ordinary clear glass has a slight green tinge, [ 4 ] but special colorless glasses are offered by several manufacturers. [ 5 ]
Glazing can be mounted on the surface of a window sash or door stile , usually made of wood , aluminium or PVC . The glass is fixed into a rabbet (rebate) in the frame in a number of ways including triangular glazing points, putty , etc. Toughened and laminated glass can be glazed by bolting panes directly to a metal framework by bolts passing through drilled holes.
Glazing is commonly used in low temperature solar thermal collectors because it helps retain the collected heat.
The first recorded use of glazing in windows was by the Romans in the first century AD. This glass was rudimentary, essentially a blown cylinder that had been flattened out, and was not very transparent. In the eleventh century, techniques were developed where the glass was spun into a disc, creating a thinner circular window, or a cylinder was again formed, but this time it was cut from edge to edge and unrolled to make a rectangle-shaped window. The newer cylinder method remained the dominant method until the 19th century, and individual panes of glass were therefore limited in size to the dimensions of those cylinders.
Continuous plate production was invented in 1848 by Henry Bessemer, who drew a ribbon of glass through rollers. This standardized the thickness of the glass, but its use in mass-production was limited by the need to polish both sides of the glass after manufacture, which was time-consuming and expensive. The process was slowly refined throughout the next century, with automated grinders and polishers being added to bring the cost down.
The breakthrough in large, mass-produced, continuous glass production happened in the 1950s with the development of the Float glass manufacturing process. Molten glass is poured over a surface of molten tin, where it flattens out and can be drawn off in a ribbon. The advantage of this process is that it is scalable to any size and produces high quality panes without any further polishing or grinding. Float glass has continued to be the most used type of glazing to the present day. [ 6 ]
The most common glass used for glazing is Soda–lime glass , which has many advantages over other glass types. Silica ( SiO 2 ) makes up the bulk of the composition of this material at 70–75% by weight. Pure silica has a melting point that would be prohibitively expensive to reach with large-scale manufacturing, so sodium oxide (soda, Na 2 O ) is added, which reduces the melting point. However, the sodium ions are water-soluble, which is not a desired property, so calcium oxide (lime, CaO ) is added to reduce the solubility. The end result is a product which is high quality, clear, relatively cheap to produce, and recycles easily. [ 7 ]
Approximately 25% to 30% of HVAC energy costs stem from heat gain and loss through the glazing in windows. [ 8 ] Multiple methods have therefore been developed to minimize heat transfer through the glass. The glazing itself is a barrier to transfer via convection, so the two strategies for reducing heat transfer focus on minimizing conduction and radiation.
The strategy to reduce conduction is the use of insulated glazing , where two or more panes of glass are used in series, each separated from the next by a space. Double-paned windows are the norm in new residential installations, as they offer substantial energy savings in comparison to single-paned glass. Each individual glass pane has poor insulation properties, with an R-value (a measure of an object's resistance to heat conduction) of 0.9.
However, when two panes are placed in series with a gap between them, held in place and sealed by a spacer, the still gas in the gap acts as an insulator. The ideal gap size varies by location, but on average it ranges from 15-18 mm thick, giving a final assembly size of 23-26 mm assuming a typical glazing thickness of 4 mm. [ 9 ] A double-paned window with air in the gap has an R-value of 2.1, which is much better than the 0.9 that a single pane of glass yields. A triple-paned window, which is not as popular but is used occasionally in environments with extreme temperatures, has an R-value of 3.2. While these values are much lower than those of walls, which have R-values starting at 12-15, the reduction in heat transfer is nevertheless substantial. Higher R-values still can be obtained by filling the gap with a less conductive gas such as argon (or less commonly, krypton or xenon). [ 10 ]
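A back-of-the-envelope comparison using the R-values quoted above gives a feel for the savings; the window area and temperature difference are arbitrary example values, and R is taken in the imperial units (h·ft²·°F/Btu) implied in this section.

```python
# R-values quoted in the text, in h*ft^2*F/Btu.
r_values = {"single pane": 0.9, "double pane (air gap)": 2.1, "triple pane": 3.2}

area_ft2 = 15.0     # example window area
delta_t_f = 40.0    # example indoor/outdoor temperature difference, deg F

# Steady-state conductive heat flow through the glazing: Q = A * dT / R (Btu per hour).
for name, r in r_values.items():
    q = area_ft2 * delta_t_f / r
    print(f"{name:22s} Q = {q:5.0f} Btu/h")
```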
A final alternative method of reducing conduction is to create and maintain a vacuum between the panes of glass, achieving a very high R-value of 10 while also reducing the required gap between the panes to 2 mm, yielding an assembly size as small as 10 mm. This technology was first launched commercially in 1996, and while several million units have been produced in the ensuing decades, it remains prohibitively expensive for most use cases and has yet to see widespread adoption. [ 11 ]
The strategy to reduce radiation involves coating the glass with a low-emissivity (Low-E) coating, which reflects away much of the infrared light that hits it. There are two types of low-e coating. [ 12 ] The first is Solar Control Low-E, where the intent is to block incoming solar radiation, which reduces heat gain inside the building and therefore the cooling costs associated with removing that heat. When installed on a double-paned window, the coating is placed on the inner face of the outside pane, and optionally on the inner face of the inner pane to improve insulating performance as well. This type of coating is most appropriate for cooling-dominated climates and buildings with large internal loads, where the goal is primarily to stop the buildings from overheating.
In a heating-dominated climate, the second type of low-e coating is more appropriate. This is Passive Low-E, where the goal is to retain heat inside the building. These coatings do not block as much of the short-wave infrared light from the sun, but do block any long-wave infrared light coming from the inside, functioning as somewhat of a greenhouse. These coatings are placed on the inner pane of glass, on the outer face if less solar heat gain is desired, and on the inner face if more solar heat gain is desired. Especially when combined with double-or-triple-paned windows, the R-values achieved with low-e coatings can be quite high, with a 3-paned window filled with argon with one low-e coating having an R-value of 5.4. [ 10 ] One trade-off of low-e coatings is that while they are primarily aimed at reducing the amount of infrared light passing through the window, they do also somewhat reduce the amount of visible light passing through, and the building may incur higher lighting demand as a result.
There are two methods of applying the Low-E coating to the glazing: Hard Coat and Soft Coat. Hard Coat is applied either in or directly after the tin bath in the float glass manufacturing process. This produces a coating which is very durable and inexpensive, as it is added during the existing production process. However, it is not as energy efficient and allows more infrared light to pass through than the Soft Coat method. The Soft Coat, on the other hand, is applied after the glass has already been manufactured and cut and tends to be clearer and better at insulating. However, the additional manufacturing step adds to the cost of production, and the coating will degrade when exposed to the elements, and so can only be placed on the inside faces of a double-paned window. Generally, solar control Low-E windows are soft coat and passive Low-E windows are hard coat due to the lower emissivity of the soft coat. [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Glazing_(window) |
A glazing jack or glazing machine is a type of machine used for polishing leather . The machine consists of a solid glass cylinder, typically around two inches (5 cm) in diameter and six inches (15 cm) in length, mounted to the end of a rotating or reciprocating arm. The arm repeatedly and rapidly draws the glass across the surface of the leather, with significant downward pressure, as the operator moves the leather underneath the arm. [ 1 ] [ 2 ]
The repeated stroking of the leather smooths and compresses the surface and raises various color tones. Heat generated by friction during the glazing process can darken and harden the aniline finish of the leather, and can raise oils in the leather to the surface. Because no pigment is used, the porous structure of the leather remains visible, providing a depth to its appearance. The operator of the glazing jack can control the surface finish by varying the pressure of the tool and the number of strokes applied. Similarly to glazing, a copper or glass tool can be drawn across the leather by hand to create "sleeked" and "glassed" finishes, respectively.
Because jacking leather is a time-consuming and labor-intensive process, it is often reserved for more expensive or exotic leathers, such as reptile leathers.
This tool article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Glazing_jack |
Glembatumumab vedotin (also known as CDX-011 and CR011-vcMMAE) is an antibody-drug conjugate (ADC) that targets cancer cells expressing transmembrane glycoprotein NMB (GPNMB).
In May 2010, the U.S FDA granted Fast Track designation to CDX-011 for the treatment of advanced, refractory, or resistant GPNMB-expressing breast cancer. [ 1 ]
The fully human IgG2 monoclonal antibody glembatumumab (CR011) is linked to monomethyl auristatin E (MMAE). [ citation needed ]
It uses a valine-citrulline enzyme-cleavable linker. [ 2 ] The linkage is stable in the bloodstream. The antibody binds to GPNMB on the cancer cells, the ADC is internalised, the linkage is broken and MMAE is released to kill the cell. [ 3 ]
In preclinical studies glembatumumab vedotin was capable of killing GPNMB expressing melanoma and breast cancer cells in vitro [ 4 ] [ 5 ] and inducing partial or complete regression of GPNMB-expressing tumors in mouse models. [ 3 ] [ 5 ] [ 6 ]
Glembatumumab vedotin was in development through April 2018 by Celldex Therapeutics , [ 7 ] who acquired CuraGen in 2009. It was originally developed through a partnership between CuraGen and Amgen , using Xenomouse technology licensed from Abgenix and ADC technology licensed from Seattle Genetics . In 2015, Celldex announced that it had formed a cooperative research and development agreement with NCI to sponsor two clinical trials for uveal melanoma and pediatric osteosarcoma. These were both phase II clinical trials . [ 8 ]
In September 2010 a Phase 2b clinical study started of glembatumumab vedotin in 120 patients with GPNMB-expressing breast cancer including those with triple negative breast cancer . [ citation needed ]
As of June 2011 [update] , Phase I/II clinical trials of glembatumumab vedotin for the treatment of advanced melanoma [ 9 ] and breast cancer [ 10 ] had been completed, but no official study results were posted. Preliminary results from these trials have shown that glembatumumab vedotin has some clinical activity (promotes tumor shrinkage) in both cancer types. [ 11 ] Patients whose tumors express GPNMB respond better to glembatumumab and have longer progression-free survival than those whose tumors do not express GPNMB, both in melanoma [ 12 ] and in breast cancer. [ 13 ]
An accelerated approval Phase II clinical trial (METRIC) investigating glembatumumab vedotin versus capecitabine (2:1 randomization, with crossover allowed) began in November 2013 and was expected to enroll 300 patients with GPNMB-expressing metastatic triple negative breast cancer. Patients who had progressed after receiving anthracyclines and taxanes were eligible. [ 14 ]
Development of the ADC was discontinued in April 2018 after it missed the primary endpoint of its study, failing to help women with tough-to-treat metastatic triple-negative breast cancer (TNBC) stay both alive and progression-free for longer than Roche Holding AG's Xeloda (capecitabine). [ 7 ] | https://en.wikipedia.org/wiki/Glembatumumab_vedotin |
In the study of zero sum games , Glicksberg's theorem (also Glicksberg's existence theorem ) is a result that shows certain games have a minimax value. [ 1 ] If A and B are Hausdorff compact spaces , and K is an upper semicontinuous or lower semicontinuous function on $A \times B$, then
$$\sup_{f}\,\inf_{g}\iint K(a,b)\,df(a)\,dg(b) \;=\; \inf_{g}\,\sup_{f}\iint K(a,b)\,df(a)\,dg(b),$$
where f and g run over Borel probability measures on A and B .
The theorem is useful if f and g are interpreted as mixed strategies of two players in the context of a continuous game . If the payoff function K is upper semicontinuous, then the game has a value.
The continuity condition may not be dropped: see example of a game with no value . [ 2 ]
This game theory article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Glicksberg's_theorem |
A glidant is a substance that is added to a powder to improve its flowability . A glidant will only work at a certain range of concentrations . Above a certain concentration, the glidant will in fact function to inhibit flowability.
In tablet manufacture, glidants are usually added just prior to compression.
Examples of glidants include ascorbyl palmitate , [ 1 ] calcium palmitate , [ 2 ] magnesium stearate , fumed silica (colloidal silicon dioxide), starch and talc . [ 3 ]
A glidant works by counteracting the factors that cause poor flowability of powders, for instance by correcting surface irregularity, reducing interparticular friction and decreasing surface charge . The result is a decrease in the angle of repose , which is an indication of enhanced powder flowability.
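A minimal illustration of the angle-of-repose comparison described above; the pile dimensions are invented example measurements, and a lower angle after glidant addition indicates improved flow.

```python
import math

def angle_of_repose(height_mm, base_diameter_mm):
    """Angle of repose of a conical powder pile, in degrees: tan(theta) = 2h/d."""
    return math.degrees(math.atan(2.0 * height_mm / base_diameter_mm))

# Hypothetical pile measurements for the same powder before and after glidant addition.
print(f"without glidant: {angle_of_repose(32, 80):.1f} deg")  # steeper pile, poorer flow
print(f"with glidant:    {angle_of_repose(22, 80):.1f} deg")  # flatter pile, better flow
```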
This material -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Glidant |
Glide is a molecular modeling software for docking of small molecules into proteins and other biopolymers . [ 1 ] [ 2 ] It was developed by Schrödinger, Inc.
This article about molecular modelling software is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Glide_(docking) |
In geometry , a glide reflection or transflection is a geometric transformation that consists of a reflection across a hyperplane and a translation ("glide") in a direction parallel to that hyperplane, combined into a single transformation. Because the distances between points are not changed under glide reflection, it is a motion or isometry . When the context is the two-dimensional Euclidean plane , the hyperplane of reflection is a straight line called the glide line or glide axis . When the context is three-dimensional space , the hyperplane of reflection is a plane called the glide plane . The displacement vector of the translation is called the glide vector .
When some geometrical object or configuration appears unchanged by a transformation, it is said to have symmetry , and the transformation is called a symmetry operation . Glide-reflection symmetry is seen in frieze groups (patterns which repeat in one dimension, often used in decorative borders), wallpaper groups (regular tessellations of the plane), and space groups (which describe e.g. crystal symmetries). Objects with glide-reflection symmetry are in general not symmetrical under reflection alone, but two applications of the same glide reflection result in a double translation, so objects with glide-reflection symmetry always also have a simple translational symmetry .
When a reflection is composed with a translation in a direction perpendicular to the hyperplane of reflection, the composition of the two transformations is a reflection in a parallel hyperplane. However, when a reflection is composed with a translation in any other direction, the composition of the two transformations is a glide reflection, which can be uniquely described as a reflection in a parallel hyperplane composed with a translation in a direction parallel to the hyperplane.
A single glide is represented as frieze group p11g. A glide reflection can be seen as a limiting rotoreflection , where the rotation becomes a translation. It can also be given a Schoenflies notation as S 2∞ , Coxeter notation as [∞ + ,2 + ], and orbifold notation as ∞×.
In the Euclidean plane, reflections and glide reflections are the only two kinds of indirect (orientation-reversing) isometries .
For example, there is an isometry consisting of the reflection on the x -axis, followed by translation of one unit parallel to it. In coordinates, it takes ( x , y ) to ( x + 1, − y ).
This isometry maps the x -axis to itself; any other line which is parallel to the x -axis gets reflected in the x -axis, so this system of parallel lines is left invariant.
The isometry group generated by just a glide reflection is an infinite cyclic group . [ 1 ]
Combining two equal glide reflections gives a pure translation with a translation vector that is twice that of the glide reflection, so the even powers of the glide reflection form a translation group.
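A two-line check of this fact, using the coordinate example above (glide line along the x-axis, glide vector of one unit):

```python
def glide(point):
    """Glide reflection: reflect across the x-axis, then translate by (1, 0)."""
    x, y = point
    return (x + 1, -y)

p = (3.0, 2.0)
print(glide(p))          # (4.0, -2.0): an orientation-reversing isometry
print(glide(glide(p)))   # (5.0, 2.0): a pure translation by (2, 0), twice the glide vector
```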
In the case of glide-reflection symmetry, the symmetry group of an object contains a glide reflection, and hence the group generated by it. If that is all it contains, this type is frieze group p11g.
Example pattern with this symmetry group:
A typical example of glide reflection in everyday life would be the track of footprints left in the sand by a person walking on a beach.
Frieze group nr. 6 (glide-reflections, translations and rotations) is generated by a glide reflection and a rotation about a point on the line of reflection. It is isomorphic to a semi-direct product of Z and C 2 .
Example pattern with this symmetry group:
For any symmetry group containing some glide-reflection symmetry, the translation vector of any glide reflection is one half of an element of the translation group. If the translation vector of a glide reflection is itself an element of the translation group, then the corresponding glide-reflection symmetry reduces to a combination of reflection symmetry and translational symmetry .
Glide-reflection symmetry with respect to two parallel lines with the same translation implies that there is also translational symmetry in the direction perpendicular to these lines, with a translation distance which is twice the distance between glide reflection lines. This corresponds to wallpaper group pg; with additional symmetry it occurs also in pmg, pgg and p4g.
If there are also true reflection lines in the same direction then they are evenly spaced between the glide reflection lines. A glide reflection line parallel to a true reflection line already implies this situation. This corresponds to wallpaper group cm. The translational symmetry is given by oblique translation vectors from one point on a true reflection line to two points on the next, supporting a rhombus with the true reflection line as one of the diagonals. With additional symmetry it occurs also in cmm, p3m1, p31m, p4m and p6m.
In the Euclidean plane, 3 of the 17 wallpaper groups require glide reflection generators: p2gg has orthogonal glide reflections and 2-fold rotations; cm has parallel mirrors and glides; and pg has parallel glides. (In diagrams, glide reflections are conventionally shown as dashed lines.)
Glide planes are noted in the Hermann–Mauguin notation by a , b or c , depending on which axis the glide is along. (The orientation of the plane is determined by the position of the symbol in the Hermann–Mauguin designation.) If the axis is not defined, then the glide plane may be noted by g . When the glide plane is parallel to the screen, these planes may be indicated by a bent arrow in which the arrowhead indicates the direction of the glide. When the glide plane is perpendicular to the screen, these planes can be represented either by dashed lines when the glide is parallel to the plane of the screen or by dotted lines when the glide is perpendicular to the plane of the screen. Additionally, a centered lattice can cause a glide plane to exist in two directions at the same time. This type of glide plane may be indicated by a bent arrow with an arrowhead on both sides when the glide plane is parallel to the plane of the screen, or by a dashed and double-dotted line when the glide plane is perpendicular to the plane of the screen. There is also the n glide, which is a glide along half of a diagonal of a face, and the d glide, which is along a fourth of either a face or space diagonal of the unit cell . The latter is often called the diamond glide plane as it features in the diamond structure. The n glide plane may be indicated by a diagonal arrow when it is parallel to the plane of the screen, or by a dashed-dotted line when the glide plane is perpendicular to the plane of the screen. A d glide plane may be indicated by a diagonal half-arrow if the glide plane is parallel to the plane of the screen, or by a dashed-dotted line with arrows if the glide plane is perpendicular to the plane of the screen. If a d glide plane is present in a crystal system, then that crystal must have a centered lattice. [ 2 ]
In today's version of Hermann–Mauguin notation, the symbol e is used in cases where there are two possible ways of designating the glide direction because both are true. For example if a crystal has a base-centered Bravais lattice centered on the C face, then a glide of half a cell unit in the a direction gives the same result as a glide of half a cell unit in the b direction.
The isometry group generated by just a glide reflection is an infinite cyclic group . Combining two equal glide plane operations gives a pure translation with a translation vector that is twice that of the glide reflection, so the even powers of the glide reflection form a translation group.
In the case of glide-reflection symmetry, the symmetry group of an object contains a glide reflection and the group generated by it. For any symmetry group containing a glide reflection, the glide vector is one half of an element of the translation group. If the translation vector of a glide plane operation is itself an element of the translation group, then the corresponding glide plane symmetry reduces to a combination of reflection symmetry and translational symmetry .
Glide symmetry can be observed in nature among certain fossils of the Ediacara biota ; the machaeridians ; and certain palaeoscolecid worms. [ 3 ] It can also be seen in many extant groups of sea pens . [ 4 ]
In Conway's Game of Life , a commonly occurring pattern called the glider is so named because it repeats its configuration of cells, shifted by a glide reflection, after two steps of the automaton. After four steps and two glide reflections, the pattern returns to its original orientation, shifted diagonally by one unit. Continuing in this way, it moves across the array of the game. [ 5 ]
In electrical engineering, if the graph of a periodic function has glide-reflection symmetry along the time axis, the function is called "half-wave symmetric". This extra symmetry causes the Fourier series of the function to contain only odd harmonics. Examples include the sine function, square waves and triangle waves. | https://en.wikipedia.org/wiki/Glide_reflection |
Gliding flight is heavier-than-air flight without the use of thrust ; the term volplaning also refers to this mode of flight in animals. [ 1 ] It is employed by gliding animals and by aircraft such as gliders . This mode of flight involves flying a significant horizontal distance relative to the descent, and therefore can be distinguished from the mostly straight downward descent of, for example, a round parachute.
Although the human application of gliding flight usually refers to aircraft designed for this purpose, most powered aircraft are capable of gliding without engine power. As with sustained flight, gliding generally requires the application of an airfoil , such as the wings on aircraft or birds, or the gliding membrane of a gliding possum . Gliding can also be achieved with a flat ( uncambered ) wing, as with a simple paper plane , [ 2 ] or even with card-throwing . Some aircraft with lifting bodies , and animals such as the flying snake , can achieve gliding flight without any wings by creating a flattened surface underneath.
Most winged aircraft can glide to some extent, but there are several types of aircraft designed to glide:
The main human application is currently recreational, though during the Second World War military gliders were used for carrying troops and equipment into battle. The types of aircraft that are used for sport and recreation are classified as gliders (sailplanes) , hang gliders and paragliders . These two latter types are often foot-launched. The design of all three types enables them to repeatedly climb using rising air and then to glide before finding the next source of lift. When done in gliders (sailplanes), the sport is known as gliding and sometimes as soaring. For foot-launched aircraft, it is known as hang gliding and paragliding . Radio-controlled gliders with fixed wings are also soared by enthusiasts.
In addition to motor gliders , some powered aircraft are designed for routine glides during part of their flight; usually when landing after a period of a powered flight. These include:
Aircraft which are not designed for gliding may be forced to perform gliding flight in an emergency, such as the failure of all engines or fuel exhaustion. See the list of airline flights that required gliding flight .
A number of animals have separately evolved gliding many times, without any single ancestor. Birds in particular use gliding flight to minimise their use of energy. Large birds are notably adept at gliding, including:
Like recreational aircraft, birds can alternate periods of gliding with periods of soaring in rising air , and so spend a considerable time airborne with a minimal expenditure of energy. The great frigatebird in particular is capable of continuous flights up to several weeks. [ 3 ]
To assist gliding, some mammals have evolved a structure called the patagium . This is a membranous structure found stretched between a range of body parts. It is most highly developed in bats. For similar reasons to birds, bats can glide efficiently. In bats, the skin forming the surface of the wing is an extension of the skin of the abdomen that runs to the tip of each digit, uniting the forelimb with the body. The patagium of a bat has four distinct parts:
Other mammals such as gliding possums and flying squirrels also glide using a patagium, but with much poorer efficiency than bats. They cannot gain height. The animal launches itself from a tree, spreading its limbs to expose the gliding membranes, usually to get from tree to tree in rainforests as an efficient means of both locating food and evading predators. This form of arboreal locomotion is common in tropical regions such as Borneo and Australia, where the trees are tall and widely spaced.
In flying squirrels, the patagium stretches from the fore- to the hind-limbs along the length of each side of the torso. In the sugar glider , the patagia extend between the fifth finger of each hand to the first toe of each foot. This creates an aerofoil enabling them to glide 50 metres or more. [ 4 ] This gliding flight is regulated by changing the curvature of the membrane or moving the legs and tail. [ 5 ]
In addition to mammals and birds, other animals, notably flying fish , flying snakes , flying frogs and flying squid , also glide.
The flights of flying fish are typically around 50 meters (160 ft), [ 6 ] though they can use updrafts at the leading edge of waves to cover distances of up to 400 m (1,300 ft). [ 6 ] [ 7 ] To glide upward out of the water, a flying fish moves its tail up to 70 times per second. [ 8 ] It then spreads its pectoral fins and tilts them slightly upward to provide lift. [ 9 ] At the end of a glide, it folds its pectoral fins to re-enter the sea, or drops its tail into the water to push against the water to lift itself for another glide, possibly changing direction. [ 8 ] [ 9 ] The curved profile of the "wing" is comparable to the aerodynamic shape of a bird wing. [ 10 ] The fish is able to increase its time in the air by flying straight into or at an angle to the direction of updrafts created by a combination of air and ocean currents . [ 8 ] [ 9 ]
Snakes of the genus Chrysopelea are also known by the common name "flying snake". Before launching from a branch, the snake makes a J-shaped bend. After thrusting its body up and away from the tree, it sucks in its abdomen and flares out its ribs to turn its body into a "pseudo concave wing", [ 11 ] all the while making a continual serpentine motion of lateral undulation [ 12 ] parallel to the ground [ 13 ] to stabilise its direction in mid-air so that it can land safely. [ 14 ] Flying snakes are able to glide better than flying squirrels and other gliding animals , despite their lack of limbs, wings, or any other wing-like projections, gliding through the forests and jungles they inhabit over distances as great as 100 m. [ 13 ] [ 15 ] Their destination is mostly determined by ballistics ; however, they can exercise some in-flight attitude control by "slithering" in the air. [ 16 ]
Flying lizards of the genus Draco are capable of gliding flight via membranes that may be extended to create wings (patagia), formed by an enlarged set of ribs. [ 17 ]
Gliding flight has evolved independently among 3,400 species of frogs [ 18 ] from both New World ( Hylidae ) and Old World ( Rhacophoridae ) families. [ 19 ] This parallel evolution is seen as an adaptation to their life in trees, high above the ground. Characteristics of the Old World species include "enlarged hands and feet, full webbing between all fingers and toes, lateral skin flaps on the arms and legs
Three principal forces act on aircraft and animals when gliding: [ 20 ]
As the aircraft or animal descends, the air moving over the wings and body generates lift perpendicular to the motion and drag parallel to the motion. Because the glider is descending as it moves forwards, the weight vector has a forward component that can balance the drag, allowing the speed to remain constant. The energy for overcoming the drag comes from the loss of gravitational potential energy . [ 21 ]
Even though the weight causes the glider to descend, if the air is rising faster than the sink rate, there will be a gain of altitude.
The lift-to-drag ratio, or L/D ratio , is the amount of lift generated by a wing or vehicle, divided by the drag it creates by moving through the air. A higher L/D ratio leads to a better glide slope angle, or glide ratio.
The effect of airspeed on the rate of descent can be depicted by a polar curve . These curves show the airspeed at which minimum sink can be achieved and the airspeed with the best L/D ratio. The curve is an inverted U-shape: as speed reduces towards the stalling speed the amount of lift falls rapidly, and the peak of the 'U' is at minimum drag.
As lift and drag are both proportional to their respective coefficients multiplied by the same factor $\tfrac{1}{2}\rho v^{2}S$ (where $\rho$ is the air density, $v$ the airspeed and $S$ the wing area), the L/D ratio simplifies to the lift coefficient divided by the drag coefficient, $C_L/C_D$. Since both coefficients vary with the angle of attack, the ratio $C_L/C_D$ is typically plotted against angle of attack.
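For example, writing out the standard lift and drag relations explicitly (an illustrative restatement using the symbols above): $$L = \tfrac{1}{2}\rho v^{2} S\, C_L, \qquad D = \tfrac{1}{2}\rho v^{2} S\, C_D, \qquad \frac{L}{D} = \frac{C_L}{C_D}.$$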
Induced drag is caused by the generation of lift by the wing. At low speeds an aircraft has to generate lift with a higher angle of attack, leading to greater induced drag. This term dominates the low-speed side of the drag graph, the left side of the U.
Parasitic drag is the drag unrelated to creating the lift, and is caused by skin friction and the shape of the body and wing. This drag is more pronounced at higher speeds, forming the right side of the drag graph's U shape. Profile drag is lowered primarily by reducing cross section and streamlining.
Since lift increases steadily up to the critical angle of attack, the wing or aircraft normally performs at its best L/D near the point where the combined drag is at its lowest.
Designers will typically select a wing design which produces an L/D peak at the chosen cruising speed for a powered fixed-wing aircraft, thereby maximizing economy. Like all things in aeronautical engineering , the lift-to-drag ratio is not the only consideration for wing design. Performance at high angle of attack and a gentle stall are also important.
Minimising drag is of particular interest in the design and operation of high performance gliders (sailplanes) , the largest of which can have glide ratios approaching 60 to 1, though many others have a lower performance; 25:1 being considered adequate for training use.
When flown at a constant speed in still air a glider moves forwards a certain distance for a certain distance downwards. The ratio of the distance forwards to downwards is called the glide ratio . The glide ratio (E) is numerically equal to the lift-to-drag ratio under these conditions; but is not necessarily equal during other manoeuvres, especially if speed is not constant. A glider's glide ratio varies with airspeed, but there is a maximum value which is frequently quoted. Glide ratio usually varies little with vehicle loading; a heavier vehicle glides faster, but nearly maintains its glide ratio. [ 22 ]
Glide ratio (or "finesse") is the cotangent of the downward angle, the glide angle (γ); equivalently, it is the forward speed divided by the sink speed for an unpowered aircraft: $E = \cot\gamma = \dfrac{v_{\text{forward}}}{v_{\text{sink}}}$.
Glide number (ε) is the reciprocal of the glide ratio, but the two quantities are sometimes confused.
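For example, a glider with a glide ratio of 40:1 descends 1 m for every 40 m travelled forward; this corresponds to a glide angle of $\gamma = \arctan(1/40) \approx 1.4^{\circ}$ and a glide number of $\varepsilon = 1/40 = 0.025$.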
Although the best glide ratio is important when measuring the performance of a gliding aircraft, its glide ratio at a range of speeds also determines its success (see article on gliding ).
Pilots sometimes fly at the aircraft's best L/D by precisely controlling airspeed and smoothly operating the controls to reduce drag. However, the strength of the likely next lift, the need to minimise time spent in strongly sinking air, and the strength of the wind also affect the optimal speed to fly . Pilots fly faster to get quickly through sinking air, and when heading into wind, to optimise the glide angle relative to the ground. To achieve higher speed across country, gliders (sailplanes) are often loaded with water ballast to increase the airspeed and so reach the next area of lift sooner. This has little effect on the glide angle, since the increases in the rate of sink and in the airspeed remain in proportion and thus the heavier aircraft achieves its optimal L/D at a higher airspeed. If the areas of lift are strong on the day, the benefits of ballast outweigh the slower rate of climb.
If the air is rising faster than the rate of sink, the aircraft will climb. At lower speeds an aircraft may have a worse glide ratio but it will also have a lower rate of sink. A low airspeed also improves its ability to turn tightly in the centre of the rising air where the rate of ascent is greatest. A sink rate of approximately 1.0 m/s is the most that a practical hang glider or paraglider could have before it would limit the occasions that a climb was possible to only when there was strongly rising air. Gliders (sailplanes) have minimum sink rates of between 0.4 and 0.6 m/s depending on the class . Aircraft such as airliners may have a better glide ratio than a hang glider, but would rarely be able to thermal because of their much higher forward speed and their much higher sink rate. (The Boeing 767 in the Gimli Glider incident achieved a glide ratio of only 12:1).
The loss of height can be measured at several speeds and plotted on a " polar curve " to calculate the best speed to fly in various conditions, such as when flying into wind or when in sinking air. Other polar curves can be measured after loading the glider with water ballast. As mass increases, the best glide ratio is achieved at higher speeds (The glide ratio is not increased).
Soaring animals and aircraft may alternate glides with periods of soaring in rising air . Five principal types of lift are used: [ 31 ] thermals , ridge lift , lee waves , convergences and dynamic soaring . Dynamic soaring is used predominately by birds, and some model aircraft, though it has also been achieved on rare occasions by piloted aircraft. [ 32 ]
Examples of soaring flight by birds are the use of:
For humans, soaring is the basis for three air sports : gliding , hang gliding and paragliding . | https://en.wikipedia.org/wiki/Gliding_flight |
Glitter cells (also called Sternheimer-Malbin positive cells) are polymorphonuclear leukocyte neutrophils with granules that show a Brownian movement and that are found in the urine , most commonly associated with urinary tract infections or pyelonephritis and especially prevalent under conditions of hypotonic urine (samples with specific gravity less than 1.01). [ 1 ] First described in 1908, [ 2 ] they derive their name from their appearance when viewed on a wet mount preparation under a microscope ; the granules within their cytoplasm can be seen moving, giving them a "glittering appearance" [ 3 ] due to swelling of the neutrophil as a result of hypotonicity. In addition to a glittering morphology, glitter cells also exhibit colorless or pale blue nuclei and a pale blue or gray cytoplasmic region when stained with Sternheimer-Malbin Stain. [ 4 ] The presence of glitter cells may be indicative of inflammatory changes in the bladder and kidney. | https://en.wikipedia.org/wiki/Glitter_cell |
In probability theory , Glivenko's theorem states that if $\varphi_n$, $n \in \mathbb{N}$, and $\varphi$ are the characteristic functions of some probability distributions $\mu_n$, $\mu$ respectively, and $\varphi_n \to \varphi$ almost everywhere, then $\mu_n \to \mu$ in the sense of probability distributions. [ 1 ]
This probability -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Glivenko's_theorem_(probability_theory) |
In the theory of probability , the Glivenko–Cantelli theorem (sometimes referred to as the fundamental theorem of statistics ), named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli , describes the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows. [ 1 ] Specifically, the empirical distribution function converges uniformly to the true distribution function almost surely .
The uniform convergence of more general empirical measures becomes an important property of the Glivenko–Cantelli classes of functions or sets. [ 2 ] The Glivenko–Cantelli classes arise in Vapnik–Chervonenkis theory , with applications to machine learning . Applications can be found in econometrics making use of M-estimators .
Assume that $X_1, X_2, \dots$ are independent and identically distributed random variables in $\mathbb{R}$ with common cumulative distribution function $F(x)$. The empirical distribution function for $X_1, \dots, X_n$ is defined by $$F_n(x) = \frac{1}{n} \sum_{i=1}^{n} I_{(-\infty, x]}(X_i) = \frac{1}{n} \bigl|\{\, i \mid X_i \le x,\ 1 \le i \le n \,\}\bigr|,$$ where $I_C$ is the indicator function of the set $C$. For every (fixed) $x$, $F_n(x)$ is a sequence of random variables which converge to $F(x)$ almost surely by the strong law of large numbers . Glivenko and Cantelli strengthened this result by proving uniform convergence of $F_n$ to $F$: $$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \longrightarrow 0 \quad \text{almost surely.}$$
This theorem originates with Valery Glivenko [ 4 ] and Francesco Cantelli , [ 5 ] in 1933.
For simplicity, consider a case of continuous random variable $X$. Fix $-\infty = x_0 < x_1 < \cdots < x_{m-1} < x_m = \infty$ such that $F(x_j) - F(x_{j-1}) = \frac{1}{m}$ for $j = 1, \dots, m$. Now for all $x \in \mathbb{R}$ there exists $j \in \{1, \dots, m\}$ such that $x \in [x_{j-1}, x_j]$.
Therefore, since $F_n$ and $F$ are non-decreasing and the grid points satisfy $F(x_j) - F(x_{j-1}) = 1/m$, $$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \le \max_{j \in \{1, \dots, m\}} |F_n(x_j) - F(x_j)| + \frac{1}{m}.$$
Since $\max_{j \in \{1, \dots, m\}} |F_n(x_j) - F(x_j)| \to 0$ a.s. by the strong law of large numbers, we can guarantee that for any positive $\varepsilon$ and any integer $m$ such that $1/m < \varepsilon$, we can find $N$ such that for all $n \ge N$, we have $\max_{j \in \{1, \dots, m\}} |F_n(x_j) - F(x_j)| \le \varepsilon - 1/m$ a.s. Combined with the above result, this further implies that $\|F_n - F\|_\infty \le \varepsilon$ a.s., which is the definition of almost sure convergence.
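The theorem can also be illustrated numerically. The sketch below (illustrative only; it assumes samples drawn from the uniform distribution on [0, 1], for which F(x) = x, so the supremum distance can be computed exactly from the sorted sample) prints an estimate of ‖F_n − F‖∞ for increasing n:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Comparison function for qsort on doubles. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    srand(12345);
    for (int n = 10; n <= 100000; n *= 10) {
        double *x = malloc((size_t)n * sizeof *x);
        for (int i = 0; i < n; i++)
            x[i] = (double)rand() / RAND_MAX;   /* U(0,1) sample, so F(t) = t */
        qsort(x, n, sizeof *x, cmp_double);
        /* For sorted samples, the supremum of |F_n(t) - t| is attained just
           below or at a sample point, where F_n jumps from i/n to (i+1)/n. */
        double sup = 0.0;
        for (int i = 0; i < n; i++) {
            double below = fabs((double)i / n - x[i]);
            double at    = fabs((double)(i + 1) / n - x[i]);
            if (below > sup) sup = below;
            if (at    > sup) sup = at;
        }
        printf("n = %6d   sup |F_n - F| ~ %.4f\n", n, sup);
        free(x);
    }
    return 0;
}
```

As n grows, the printed supremum distance shrinks towards zero, in line with the theorem.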
One can generalize the empirical distribution function by replacing the set $(-\infty, x]$ by an arbitrary set $C$ from a class of sets $\mathcal{C}$ to obtain an empirical measure indexed by sets $C \in \mathcal{C}$: $$P_n(C) = \frac{1}{n} \sum_{i=1}^{n} I_C(X_i), \qquad C \in \mathcal{C},$$ where $I_C(x)$ is the indicator function of each set $C$.
A further generalization is the map induced by $P_n$ on measurable real-valued functions $f$, which is given by $$f \mapsto P_n f = \int_{S} f \, dP_n = \frac{1}{n} \sum_{i=1}^{n} f(X_i).$$
Then it becomes an important property of these classes whether the strong law of large numbers holds uniformly on $\mathcal{F}$ or $\mathcal{C}$.
Consider a set $\mathcal{S}$ with a sigma algebra of Borel subsets $A$ and a probability measure $\mathbb{P}$. For a class of subsets, $$\mathcal{C} \subset \{\, C : C \text{ is a measurable subset of } \mathcal{S} \,\},$$
and a class of functions $$\mathcal{F} \subset \{\, f : \mathcal{S} \to \mathbb{R},\ f \text{ measurable} \,\},$$
define the random variables $$\|\mathbb{P}_n - \mathbb{P}\|_{\mathcal{C}} = \sup_{C \in \mathcal{C}} |\mathbb{P}_n(C) - \mathbb{P}(C)|, \qquad \|\mathbb{P}_n - \mathbb{P}\|_{\mathcal{F}} = \sup_{f \in \mathcal{F}} |\mathbb{P}_n f - \mathbb{P} f|,$$
where $\mathbb{P}_n(C)$ is the empirical measure, $\mathbb{P}_n f$ is the corresponding map, and $\mathbb{E} f = \int_{\mathcal{S}} f \, d\mathbb{P} = \mathbb{P} f$, assuming it exists.
Definitions
Glivenko–Cantelli classes of functions (as well as their uniform and universal forms) are defined similarly, replacing all instances of $\mathcal{C}$ with $\mathcal{F}$.
The weak and strong versions of the various Glivenko-Cantelli properties often coincide under certain regularity conditions. The following definition commonly appears in such regularity conditions:
Theorems
The following two theorems give sufficient conditions for the weak and strong versions of the Glivenko-Cantelli property to be equivalent.
Theorem ( Talagrand , 1987) [ 6 ]
Theorem ( Dudley , Giné, and Zinn, 1991) [ 7 ]
The following theorem is central to statistical learning of binary classification tasks.
Theorem ( Vapnik and Chervonenkis , 1968) [ 8 ]
There exist a variety of consistency conditions for the equivalence of uniform Glivenko–Cantelli and Vapnik–Chervonenkis classes. In particular, either of the following conditions for a class $\mathcal{C}$ suffices: [ 9 ] | https://en.wikipedia.org/wiki/Glivenko–Cantelli_theorem |
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ]
Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: 011 for position 3 and 100 for position 4.
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off ( ... 11001100 ... ); the next digit a pattern of 4 on, 4 off; the $i$-th least significant bit a pattern of $2^i$ on, $2^i$ off. The most significant digit is an exception to this: for an $n$-bit Gray code, the most significant digit follows the pattern $2^{n-1}$ on, $2^{n-1}$ off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards $2^{n-2}$ places. The four-bit version of this is the sequence 0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000.
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
In modern digital communications , Gray codes play an important role in error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ]
The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. [ 26 ] [ 13 ]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ]
The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ]
About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ]
Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization .
In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code can be constructed [ 52 ] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941. [ 11 ] [ 12 ] [ 13 ]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. [ 54 ]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ]
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ]
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list:
The one-bit Gray code is $G_1 = (\mathtt{0}, \mathtt{1})$. This can be thought of as built recursively as above from a zero-bit Gray code $G_0 = (\Lambda)$ consisting of a single entry of zero length. This iterative process of generating $G_{n+1}$ from $G_n$ makes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the $n$-th Gray code is obtained by computing $n \oplus \lfloor n/2 \rfloor$. Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position $i$ of the codewords are inverted, the order of neighbouring blocks of $2^i$ codewords is reversed. For example, if bit 0 is inverted in a 3-bit codeword sequence, the order of two neighbouring codewords is reversed
If bit 1 is inverted, blocks of 2 codewords change order:
If bit 2 is inverted, blocks of 4 codewords reverse order:
Thus, performing an exclusive or on a bit $b_i$ at position $i$ with the bit $b_{i+1}$ at position $i+1$ leaves the order of codewords intact if $b_{i+1} = \mathtt{0}$, and reverses the order of blocks of $2^{i+1}$ codewords if $b_{i+1} = \mathtt{1}$. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit, so it cannot be performed in parallel. Assuming $g_i$ is the $i$-th Gray-coded bit ($g_0$ being the most significant bit), and $b_i$ is the $i$-th binary-coded bit ($b_0$ being the most significant bit), the reverse translation can be given recursively: $b_0 = g_0$, and $b_i = g_i \oplus b_{i-1}$. Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with the code $\mathrm{code}_0 = \mathtt{0}$, and at step $i > 0$ find the bit position of the least significant 1 in the binary representation of $i$ and flip the bit at that position in the previous code $\mathrm{code}_{i-1}$ to get the next code $\mathrm{code}_i$. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values.
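A minimal sketch of this iterative construction (illustrative code, not a canonical listing) prints the n-bit binary-reflected Gray code by flipping, at step i, the bit whose position is that of the least significant 1 in i:

```c
#include <stdio.h>

/* Print the 2^n codewords of the n-bit binary-reflected Gray code by
   flipping one bit per step, as described above. */
void print_brgc(unsigned n)
{
    unsigned code = 0;                       /* step 0: all zeros */
    for (unsigned i = 1; i <= (1u << n); i++) {
        for (int b = (int)n - 1; b >= 0; b--)
            putchar((code >> b) & 1u ? '1' : '0');
        putchar('\n');
        if (i == (1u << n))
            break;                           /* all codewords printed */
        unsigned pos = 0;                    /* position of least significant 1 in i */
        while (((i >> pos) & 1u) == 0)
            pos++;
        code ^= 1u << pos;                   /* flip that single bit */
    }
}

int main(void)
{
    print_brgc(3);   /* prints 000, 001, 011, 010, 110, 111, 101, 100 */
    return 0;
}
```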
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ]
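The original listing is not reproduced here; the following is a sketch of the usual form such routines take (the function names are illustrative). Encoding uses the shift-and-XOR rule given above; the simple decoder folds in one Gray bit at a time, while the second decoder uses a logarithmic number of XOR-fold steps for a 32-bit word:

```c
#include <stdio.h>
#include <stdint.h>

/* Convert an unsigned binary number to its reflected binary Gray code. */
uint32_t binary_to_gray(uint32_t num)
{
    return num ^ (num >> 1);
}

/* Convert a Gray code back to binary: each binary bit is the XOR of all
   Gray bits at or above its position. */
uint32_t gray_to_binary(uint32_t num)
{
    uint32_t mask = num;
    while (mask >>= 1)
        num ^= mask;
    return num;
}

/* The same decoding using a logarithmic number of XOR-fold steps. */
uint32_t gray_to_binary32(uint32_t num)
{
    num ^= num >> 16;
    num ^= num >> 8;
    num ^= num >> 4;
    num ^= num >> 2;
    num ^= num >> 1;
    return num;
}

int main(void)
{
    for (uint32_t i = 0; i < 8; i++)   /* round-trip check for small values */
        printf("%u -> %u -> %u\n", i, binary_to_gray(i),
               gray_to_binary(binary_to_gray(i)));
    return 0;
}
```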
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ending with a single zero digit, then carry-less multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2 n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits.
0 → 000, 1 → 001, 2 → 002, 10 → 012, 11 → 011, 12 → 010, 20 → 020, 21 → 021, 22 → 022,
100 → 122, 101 → 121, 102 → 120, 110 → 110, 111 → 111, 112 → 112, 120 → 102, 121 → 101, 122 → 100,
200 → 200, 201 → 201, 202 → 202, 210 → 212, 211 → 211, 212 → 210, 220 → 220, 221 → 221
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings.
For example, a 3-ary ( ternary ) Gray code would use the values 0,1,2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( N , k )-Gray code is presented (in C ):
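The listing referred to above is not reproduced here; the sketch below (illustrative, with the base and digit count fixed as constants) follows the same idea of offsetting each base-n digit by a running shift accumulated from the more significant digits. As discussed in the next paragraph, this variant lets a digit wrap from n − 1 to 0, so its output sequence is cyclic and differs from the reflected sequence listed above.

```c
#include <stdio.h>

#define DIGITS 2
#define BASE   3

/* Compute the Gray codeword for `value` in a (BASE, DIGITS)-Gray code.
   gray[0] holds the least significant digit.  Consecutive values yield
   codewords that differ in exactly one digit (possibly by wrapping from
   BASE-1 to 0, which keeps the code cyclic). */
void to_gray(unsigned value, unsigned gray[DIGITS])
{
    unsigned base_n[DIGITS];                 /* ordinary base-BASE digits of value */
    for (unsigned i = 0; i < DIGITS; i++) {
        base_n[i] = value % BASE;
        value /= BASE;
    }
    unsigned shift = 0;                      /* running offset from higher digits */
    for (int i = DIGITS - 1; i >= 0; i--) {
        gray[i] = (base_n[i] + shift) % BASE;
        shift += BASE - gray[i];             /* keep the shift non-negative */
    }
}

int main(void)
{
    unsigned gray[DIGITS];
    for (unsigned v = 0; v < BASE * BASE; v++) {   /* BASE^DIGITS codewords */
        to_gray(v, gray);
        printf("%u%u\n", gray[1], gray[0]);        /* most significant digit first */
    }
    return 0;
}
```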
There are other Gray code algorithms for ( n , k )-Gray codes. The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the numbers of changes in different coordinate positions are as close to each other as possible. To make this more precise, let $G$ be an $R$-ary complete Gray cycle having transition sequence $(\delta_k)$; the transition counts ( spectrum ) of $G$ are the collection of integers defined by
$$\lambda_k = \bigl|\{\, j \in \mathbb{Z}_{R^n} : \delta_j = k \,\}\bigr|\,, \quad \text{for } k \in \mathbb{Z}_n$$
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have $\lambda_k = \tfrac{R^n}{n}$ for all $k$. Clearly, when $R = 2$, such codes exist only if $n$ is a power of 2. [ 64 ] If $n$ is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either $2\left\lfloor \tfrac{2^n}{2n} \right\rfloor$ or $2\left\lceil \tfrac{2^n}{2n} \right\rceil$. [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ]
We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an $n$-digit balanced Gray code for every $n$. The main principle is to inductively construct an $(n+2)$-digit Gray code $G'$ given an $n$-digit Gray code $G$ in such a way that the balanced property is preserved. To do this, we consider partitions of $G = g_0, \ldots, g_{2^n-1}$ into an even number $L$ of non-empty blocks of the form
$$\{g_0\},\ \{g_1, \ldots, g_{k_2}\},\ \{g_{k_2+1}, \ldots, g_{k_3}\},\ \ldots,\ \{g_{k_{L-2}+1}, \ldots, g_{-2}\},\ \{g_{-1}\}$$
where $k_1 = 0$, $k_{L-1} = -2$, and $k_L \equiv -1 \pmod{2^n}$, with the indices of $g$ taken modulo $2^n$. This partition induces an $(n+2)$-digit Gray code given by
If we define the transition multiplicities
$$m_i = \bigl|\bigl\{\, j : \delta_{k_j} = i,\ 1 \le j \le L \,\bigr\}\bigr|$$
to be the number of times the digit in position $i$ changes between consecutive blocks in a partition, then for the $(n+2)$-digit Gray code induced by this partition the transition spectrum $\lambda'_i$ is
$$\lambda'_i = \begin{cases} 4\lambda_i - 2m_i, & \text{if } 0 \le i < n \\ L, & \text{otherwise} \end{cases}$$
The delicate part of this construction is to find an adequate partitioning of a balanced $n$-digit Gray code such that the code induced by it remains balanced; for this only the transition multiplicities matter: joining two consecutive blocks over a digit-$i$ transition and splitting another block at another digit-$i$ transition produces a different Gray code with exactly the same transition spectrum $\lambda'_i$, so one may, for example, [ 65 ] designate the first $m_i$ transitions at digit $i$ as those that fall between two blocks. Uniform codes can be found when $R \equiv 0 \pmod 4$ and $R^n \equiv 0 \pmod n$, and this construction can be extended to the $R$-ary case as well. [ 66 ]
Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube $Q_n = (V_n, E_n)$ into levels of vertices that have equal weight, i.e.
$$V_n(i) = \{\, v \in V_n : v \text{ has weight } i \,\}$$
for $0 \le i \le n$. These levels satisfy $|V_n(i)| = \binom{n}{i}$. Let $Q_n(i)$ be the subgraph of $Q_n$ induced by $V_n(i) \cup V_n(i+1)$, and let $E_n(i)$ be the edges in $Q_n(i)$. A monotonic Gray code is then a Hamiltonian path in $Q_n$ such that whenever $\delta_1 \in E_n(i)$ comes before $\delta_2 \in E_n(j)$ in the path, then $i \le j$.
An elegant construction of monotonic $n$-digit Gray codes for any $n$ is based on the idea of recursively building subpaths $P_{n,j}$ of length $2\binom{n}{j}$ having edges in $E_n(j)$. [ 69 ] We define $P_{1,0} = (\mathtt{0}, \mathtt{1})$, $P_{n,j} = \emptyset$ whenever $j < 0$ or $j \ge n$, and
$$P_{n+1,j} = \mathtt{1}P_{n,j-1}^{\pi_n},\ \mathtt{0}P_{n,j}$$
otherwise. Here, $\pi_n$ is a suitably defined permutation and $P^{\pi}$ refers to the path $P$ with its coordinates permuted by $\pi$. These paths give rise to two monotonic $n$-digit Gray codes $G_n^{(1)}$ and $G_n^{(2)}$ given by
$$G_n^{(1)} = P_{n,0} P_{n,1}^R P_{n,2} P_{n,3}^R \cdots \quad \text{and} \quad G_n^{(2)} = P_{n,0}^R P_{n,1} P_{n,2}^R P_{n,3} \cdots$$
The choice of $\pi_n$ which ensures that these codes are indeed Gray codes turns out to be $\pi_n = E^{-1}\left(\pi_{n-1}^2\right)$. The first few values of $P_{n,j}$ are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines .
Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph $Q_{2n+1}(n)$ is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for $n \le 15$, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least $0.839N$, where $N$ is the number of vertices in the middle-level subgraph. [ 70 ]
Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ]
Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ]
The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish $2^n$ positions with $n$ sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when $n$ is itself a power of 2, $n$ sensors can distinguish at most $2^n - 2n$ positions and that for prime $n$ the limit is $2^n - 2$ positions. [ 80 ] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than $2^8 = 256$, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
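The two defining properties can be checked mechanically. The sketch below is illustrative (the 30 × 5 table itself is not reproduced here); the checker and the 2-bit quadrature example it is run on are assumptions for demonstration, not taken from the cited construction.

```python
def is_single_track_gray(words):
    """Check the two defining STGC properties for a list of P words of length n.

    1. Gray property: consecutive words (cyclically) differ in exactly one bit.
    2. Single-track property: every column of the P x n matrix is a cyclic
       shift of the first column.
    """
    P, n = len(words), len(words[0])

    # Gray property, including the wrap-around from the last word to the first.
    for i in range(P):
        a, b = words[i], words[(i + 1) % P]
        if sum(x != y for x, y in zip(a, b)) != 1:
            return False

    # Single-track property.
    first = [w[0] for w in words]
    for j in range(n):
        col = [w[j] for w in words]
        if not any(col == first[s:] + first[:s] for s in range(P)):
            return False
    return True

# The 2-sensor quadrature code is the smallest single-track Gray code.
quadrature = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(is_single_track_gray(quadrature))  # True
```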
The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ]
Since this 30-position example was added, there has been considerable interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single-track Gray code that gives a 1-degree resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022.
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors between adjacent points in a quadrature amplitude modulation (QAM) constellation . In a typical encoding, horizontally and vertically adjacent constellation points differ by a single bit, and diagonally adjacent points differ by 2 bits. [ 85 ]
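A minimal sketch of such an encoding, assuming a square 16-QAM constellation labelled by the Cartesian product of two 2-bit reflected Gray codes; this is an illustration of the adjacency property, not a description of any particular modem standard, and the helper names are illustrative.

```python
def gray(k):
    """Reflected binary Gray code of the integer k."""
    return k ^ (k >> 1)

def hamming(a, b):
    return bin(a ^ b).count("1")

# 4x4 (16-QAM) constellation: each point's 4-bit label is the 2-bit Gray code
# of its column concatenated with the 2-bit Gray code of its row.
labels = [[(gray(x) << 2) | gray(y) for y in range(4)] for x in range(4)]

# Horizontally and vertically adjacent points differ in exactly one bit;
# diagonal neighbours differ in two bits.
for x in range(4):
    for y in range(4):
        if x + 1 < 4:
            assert hamming(labels[x][y], labels[x + 1][y]) == 1
        if y + 1 < 4:
            assert hamming(labels[x][y], labels[x][y + 1]) == 1
        if x + 1 < 4 and y + 1 < 4:
            assert hamming(labels[x][y], labels[x + 1][y + 1]) == 2
print("16-QAM Gray labelling verified")
```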
Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function, such as the Mannheim metric , would be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code is an "excess Gray code". This code has the property of counting backwards in the extracted bits when the original value is increased further. The reason is that Gray-encoded values, unlike classic binary encodings, do not overflow when incremented past the "highest" value; instead, the sequence is reflected.
Example: the highest 3-bit value, 7, is Gray-encoded as (0)100. Adding 1 gives the number 8, encoded in Gray as 1100. The last 3 bits do not overflow and count backwards as the original 4-bit code is increased further.
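The behaviour can be reproduced with a few lines of Python; the helper name gray() is illustrative.

```python
def gray(k):
    """Reflected binary Gray code of the integer k."""
    return k ^ (k >> 1)

# Take the last 3 bits of a 4-bit Gray code while the underlying value keeps
# increasing past 7: the extracted bits do not overflow, they count backwards
# through the 3-bit Gray sequence.
for value in range(6, 12):
    g4 = gray(value)
    print(f"{value:2d}  gray4={g4:04b}  last3={g4 & 0b111:03b}")
# 7 -> 0100 (last3 = 100), 8 -> 1100 (last3 = 100), 9 -> 1101 (101 = Gray(6)),
# 10 -> 1111 (111 = Gray(5)), 11 -> 1110 (110 = Gray(4)), ...
```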
When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded as one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z_2^2 with the metric given by the Hamming distance and the metric space over the finite ring Z_4 (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces Z_2^{2m} and Z_4^m. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z_2^2 of ring-linear codes from Z_4 . [ 87 ] [ 88 ]
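A short, illustrative verification of the isometry on single symbols; the function names are assumptions for demonstration only.

```python
from itertools import product

# Gray map from Z_4 to Z_2^2 and the two metrics involved.
gray_map = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee(a, b):
    """Lee distance on Z_4: the shorter way around the 4-cycle."""
    d = abs(a - b) % 4
    return min(d, 4 - d)

def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

# The map is an isometry: Lee distance in Z_4 equals Hamming distance of images.
assert all(lee(a, b) == hamming(gray_map[a], gray_map[b])
           for a, b in product(range(4), repeat=2))
print("Gray map isometry verified on Z_4")
```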
There are a number of binary codes similar to Gray codes, including:
The following binary-coded decimal (BCD) codes are Gray code variants as well: | https://en.wikipedia.org/wiki/Glixon_code |
The glnALG operon is an operon that regulates the nitrogen content of a cell. It codes for the structural gene glnA and the two regulatory genes glnL and glnG. glnA encodes glutamine synthetase , an enzyme which catalyzes the conversion of glutamate and ammonia to glutamine , thereby controlling the nitrogen level in the cell. glnG encodes NR I , which regulates the expression of the glnALG operon at three promoters : glnAp1 and glnAp2 (located upstream of glnA) and glnLp (in the intercistronic glnA-glnL region). glnL encodes NR II , which regulates the activity of NR I . [ 1 ] No significant homology is found in eukaryotes.
The glnALG operon contains three genes:
The glnALG operon, along with glnD and glnF and their gene products , plays an extremely important role in regulating the nitrogen level inside the cell. It also plays a role in the ammonium (methylammonium) transport system (Amt). Hence it increases the ammonia content of the cell when the cell is grown on glutamine or glutamate.
In this way, along with histidase , the glnALG operon maintains homeostasis within the cell.
The glnALG operon is regulated by an intricate network of repressors and activators . Along with NR I and NR II , the gene products of glnF and glnD also play a key role in this network.
The expression of the glnALG operon is regulated by NR I at three promoters: glnAp1, glnAp2 and glnLp. The initiation of transcription at glnAp1 is stimulated exclusively under carbon starvation conditions and in stationary phase, during which cAMP accumulates at high concentration in the cell. The binding of cAMP to the catabolite activator protein (CAP) causes CAP to bind to a specific DNA site in glnAp1, and glnAp1 is repressed by NR I . Initiation of transcription at glnAp2 requires the activated form of NR I , i.e. NR I –P (phosphorylated NR I ), as well as the glnF gene product, σ 54 , [ 3 ] and it is regulated by NR II . NR II , in the presence of ATP , catalyzes the transfer of the γ-phosphate of ATP to NR I . In the presence of P II , which is encoded by glnB, NR II catalyzes the dephosphorylation of NR I –P.
The nitrogen content in the cell is directly proportional to the ratio of the concentration of glutamine to the concentration of 2-ketoglutarate. When the nitrogen content is low, the glnD gene product, uridylyltransferase, catalyzes the conversion of P II to P II -UMP, hampering P II 's ability to dephosphorylate NR I –P. Uridylyltransferase catalyzes this reaction because the high concentration of 2-ketoglutarate allosterically activates it. In the case of high nitrogen, there is an excess of NR I , which represses the transcription of the promoters glnAp1, glnAp2 and glnLp, which in turn represses the synthesis of glutamine synthetase. [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/GlnALG_operon
The GloFish is a patented and trademarked brand of fluorescently colored genetically modified aquarium fish . They have been created from several different species of fish: zebrafish were the first GloFish available in pet stores, and recently the black tetra , tiger barb , [ 1 ] rainbow shark , Siamese fighting fish , X-ray tetra , and most recently bronze corydoras [ 2 ] have been added to the lineup. They are sold in many colors, trademarked as "Starfire Red", "Moonrise Pink", "Sunburst Orange", "Electric Green", "Cosmic Blue", and "Galactic Purple", although not all species are available in all colors. Although not originally developed for the ornamental fish trade, it is one of the first genetically modified animals to become publicly available. The rights to GloFish are owned by Spectrum Brands, Inc., which purchased GloFish from Yorktown Technologies, the original developer of GloFish, in May 2017.
The original zebrafish (or zebra danio, Danio rerio ) is a native of rivers in India and Bangladesh . It measures three centimeters long and has gold and dark blue stripes. In 1999, Dr. Zhiyuan Gong [ 3 ] and his colleagues at the National University of Singapore were working with a gene that encodes the green fluorescent protein (GFP), originally extracted from a jellyfish , that naturally produced bright green fluorescence . They inserted the gene into a zebrafish embryo, allowing it to integrate into the zebrafish's genome , which caused the fish to be brightly fluorescent under both natural white light and ultraviolet light. Their goal was to develop a fish that could detect pollution by selectively fluorescing in the presence of environmental toxins . The development of the constantly fluorescing fish was the first step in this process, and the National University of Singapore filed a patent application on this work. [ 4 ] Shortly thereafter, his team developed a line of red fluorescent zebra fish by adding a gene from a sea coral , and orange-yellow fluorescent zebra fish, by adding a variant of the jellyfish gene. Later, a team of researchers at the National Taiwan University , headed by Professor Huai-Jen Tsai, succeeded in creating a medaka (rice fish) with a fluorescent green color, which, like the zebrafish, is a model organism used in biology.
The scientists from NUS and businessmen Alan Blake and Richard Crockett from Yorktown Technologies, L.P., a company in Austin, Texas , met and a deal was signed whereby Yorktown obtained the worldwide rights to market the fluorescent zebrafish, which Yorktown subsequently branded as "GloFish". At around the same time, a separate deal was made between Taikong, the largest aquarium fish producer in Taiwan, and the Taiwanese researchers to market the green medaka in Taiwan under the name TK-1. In the spring of 2003, Taiwan became the first to authorize sales of a genetically modified organism as a pet. One hundred thousand fish were reportedly sold in less than a month at US$18.60 each. The fluorescent medaka are not GloFish, as they are not marketed by Yorktown Technologies, but instead by Taikong Corp under a different brand name.
GloFish were introduced to the United States market in late 2003 by Yorktown Technologies, after two years of research. The governmental environmental risk assessment was made by the U.S. Food and Drug Administration (FDA), which has jurisdiction over all genetically modified (GM) animals, including fluorescent zebra fish, since they consider the inserted gene to be a drug. The FDA determined in December 2003:
Because tropical aquarium fish are not used for food purposes, they pose no threat to the food supply. There is no evidence that these genetically engineered zebra danio fish pose any more threat to the environment than their unmodified counterparts which have long been widely sold in the United States. In the absence of a clear risk to the public health, the FDA finds no reason to regulate these particular fish. [ 5 ]
Marketing of the fish was met by protests from a non-governmental organization called the Center for Food Safety . They were concerned that approval of the GloFish based only on a Food and Drug Administration risk assessment would create a precedent of inadequate scrutiny of biotech animals in general. [ citation needed ] The group filed a lawsuit in US Federal District Court to block the sale of the GloFish. The lawsuit sought a court order stating that the sale of transgenic fish is subject to federal regulation beyond the FDA's charter, and as such should not be sold without more extensive approvals. In the opinion of Joseph Mendelson, the Center for Food Safety's legal director:
It's clear this sets a precedent for genetically engineered animals. It opens the dams to a whole host of nonfood genetically engineered organisms. That's unacceptable to us and runs counter to things the National Academy of Sciences and other scientific review boards have said, particularly when it comes to mobile GM organisms like fish and insects. [ 6 ]
The Center for Food Safety's suit was found to be without merit and dismissed on March 30, 2005. [ citation needed ]
In addition to the red zebrafish, Yorktown Technologies released green and orange-yellow versions of the zebrafish in mid-2006. In 2011, blue and purple zebrafish were released. These lines of fish incorporate genes from sea coral. [ 1 ] In 2012, Yorktown Technologies introduced a green version of a GloFish derived from a different species of fish, the black tetra . [ 1 ] This was followed by a green version of a tiger barb . In 2013, Yorktown Technologies introduced orange, pink, and purple Tetras, which made Tetras the first GloFish to be available in pink. This was followed in 2014 by the release of red and blue Tetras. The colors are trademarked as "Starfire Red", "Moonrise Pink", "Sunburst Orange", "Electric Green", "Cosmic Blue", and "Galactic Purple".
Other fish released include the GloFish shark, available in orange, green, and purple. Though these fish are not scientifically related to sharks, they are based on the albino rainbow shark. [ 7 ] In February 2020, green GloFish bettas, also known as Globettas, were released in three variations: female, (young) male, and premium (adult) male.
Despite the speculation of aquarium enthusiasts that the eggs of the fluorescent fish were pressure treated to make them infertile, it has been found that some GloFish are indeed fertile and will reproduce in a captive environment. [ 8 ] However, the GloFish Fluorescent Fish License states "Intentional breeding and/or any sale, barter, or trade, of any offspring of GloFish fluorescent ornamental fish is strictly prohibited". [ 9 ]
Sale or possession of GloFish was made illegal in California in 2003 due to a regulation that restricts genetically modified fish. The regulation was implemented before the marketing of GloFish, largely due to concern about a fast-growing biotech salmon. The regulations were lifted in 2015 due to a growing body of evidence and the findings of the Food and Drug Administration and the Florida Department of Agriculture and Consumer Services . GloFish are now legal in California for importation and commercial sale. [ 10 ]
The import, sale and possession of these fish is not permitted within the European Union. On November 9, 2006, however, the Netherlands' Ministry of Housing, Spatial Planning and the Environment (VROM) found 1,400 fluorescent fish, which were sold in various aquarium shops. [ 11 ]
In January 2009, the U.S. Food & Drug Administration formalized their recommendations for genetically engineered animals. [ 12 ] These non-binding recommendations describe the way in which FDA regulates all GM animals, including GloFish. [ 13 ]
Research published in 2014 assessed the environmental safety associated with GloFish. One paper concluded that there is little risk of invasiveness into the environment. [ 14 ] A second study concluded that there is no difference in risk between GloFish and wild-type danios. [ 15 ]
The sentiments of aquarium retailers towards the GloFish have been used as an indicator of the public's positive reaction to controversial agricultural and aesthetic biotechnologies. The practical reception of GloFish among fish retailers was found to be affected by multiple factors, including concerns over ethics, customer demand, and the high cost of stocking the fish. Some retailers opted not to stock the fish due to low trust in federal agencies to properly regulate the organisms. [ 16 ]
GloFish are more vulnerable to predation compared to the wild type, according to a study published in 2011. In experiments including habitat complexity, transgenic red fluorescent zebrafish were approximately twice as vulnerable as the wild type to predation by largemouth bass ( Micropterus salmoides ) and eastern mosquitofish ( Gambusia holbrooki ), two native predators that potentially resist invasion by introduced fish. [ 17 ]
According to Howard et al. 2015, wild-type males had a significant advantage over GloFish when it came to mating. [ 18 ] In the mating trials analyzed in the study, wild-type males sired twice as many offspring as the genetically modified fish due to their more aggressive nature. [ 18 ] However, in Owen et al. 2012 by the same group, female zebrafish preferred GloFish males to wild-type males. [ 18 ] | https://en.wikipedia.org/wiki/GloFish
Global Mobile Information System Simulator (GloMoSim) is a network protocol simulation software that simulates wireless and wired network systems.
GloMoSim is designed using the parallel discrete event simulation capability provided by Parsec , a parallel programming language . [ 1 ] GloMoSim currently supports protocols for a purely wireless network .
It uses the Parsec compiler to compile the simulation protocols .
Parsec is a C -based simulation language, developed by the Parallel Computing Laboratory at UCLA , for sequential and parallel execution of discrete-event simulation models.
GloMoSim is no longer under active development.
This simulation software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GloMoSim |
GAHP ( Global Alliance on Health and Pollution ) is a network of international and national level agencies committed to a collaborative, multi-sectoral approach to address the global pollution crisis and the resulting health and economic impacts. GAHP’s overall goal is to reduce death and illness caused by all forms of toxic pollution, including air, water, soil and chemical wastes, especially in low- and middle-income countries. [ 1 ]
GAHP is a collaborative body made up of more than 60 members and dozens of observers that advocates for resources and solutions to pollution problems. GAHP was formed because international and national level actors/agencies recognize that a collaborative, multi-stakeholder, multi-sectoral approach is necessary and critical to deal with the global pollution crisis and resulting health and economic impacts.
In 2012, Pure Earth initiated the alliance with representatives from the World Bank , UNEP , UNDP , UNIDO , Asian Development Bank , the European Commission , and Ministries of Environment and Health of many low and middle-income countries to formulate strategies to address pollution and health at scale. GAHP incorporated as a foundation in 2019 in Geneva, Switzerland.
GAHP focuses its efforts in two main areas: advocacy and awareness-raising, and country-specific support. GAHP builds public, technical and financial support to address pollution globally by promoting scientific research, raising awareness and tracking progress. GAHP assists low- and middle-income countries in prioritizing and addressing pollution problems through Health and Pollution Action Plans. [ 2 ]
In October 2017, GAHP published the Lancet Commission on Pollution and Health in collaboration with The Lancet . [ 3 ] The commission "addresses the full health and economic costs of air, water, and soil pollution . Through analyses of existing and emerging data, the Commission reveals pollution’s severe and underreported contribution to the Global Burden of Disease. It uncovers the economic costs of pollution to low-income and middle-income countries. The Commission will inform key decision makers around the world about the burden that pollution places on health and economic development, and about available cost-effective pollution control solutions and strategies." [ 4 ]
The report's findings were distributed widely through media outlets, reaching over 2 billion people and counting. The work of the Commission was also covered extensively through special partnerships with high-profile media organizations. [ 5 ]
In addition, GAHP updates findings from The Lancet Commission on Pollution and Health, and provides a ranking of pollution deaths on a global, regional and country level with Pollution and Health Metrics: Global, Regional and Country Analysis Archived 2021-03-08 at the Wayback Machine reports.
Pollution remains the world’s largest environmental threat to human health, responsible in 2017 for 15% of all deaths globally, and 275 million Disability-Adjusted Life Years. The 2019 report, which uses the most recent Global Burden of Disease data from the Institute of Health Metrics Evaluation, underscores the extent and severity of harm caused by air, water, and occupational pollution. [ 6 ] | https://en.wikipedia.org/wiki/Global_Alliance_on_Health_and_Pollution |
The Global Biodiversity Information Facility ( GBIF ) is an international organisation that focuses on making scientific data on biodiversity available via the Internet using web services . [ 1 ] The data are provided by many institutions from around the world; GBIF's information architecture makes these data accessible and searchable through a single portal. Data available through the GBIF portal are primarily distribution data on plants, animals, fungi, and microbes for the world, and scientific names data.
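As an illustration of this web-service access, the sketch below queries an occurrence search over HTTP; the endpoint URL and the scientificName and limit parameters are assumed from the publicly documented GBIF REST API and should be checked against the current GBIF documentation before use.

```python
import requests

# Hypothetical usage sketch: fetch a few occurrence records for a species from
# the GBIF web service (endpoint and parameter names assumed, not guaranteed).
BASE = "https://api.gbif.org/v1/occurrence/search"

resp = requests.get(BASE,
                    params={"scientificName": "Danio rerio", "limit": 5},
                    timeout=30)
resp.raise_for_status()
for rec in resp.json().get("results", []):
    print(rec.get("scientificName"), rec.get("country"), rec.get("year"))
```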
The mission of the GBIF is to facilitate free and open access to biodiversity data worldwide to underpin sustainable development . [ 1 ] Priorities, with an emphasis on promoting participation and working through partners, include mobilising biodiversity data, developing protocols and standards to ensure scientific integrity and interoperability, building an informatics architecture to allow the interlinking of diverse data types from disparate sources, promoting capacity building and catalysing development of analytical tools for improved decision-making. [ 1 ] [ 2 ]
GBIF strives to form informatics linkages among digital data resources from across the spectrum of biological organisation, from genes to ecosystems , and to connect these to issues important to science, society and sustainability by using georeferencing and GIS tools. It works in partnership with other international organisations such as the Catalogue of Life partnership, Biodiversity Information Standards , the Consortium for the Barcode of Life (CBOL), the Encyclopedia of Life (EOL), and GEOSS . The biodiversity data available through the GBIF has increased by more than 1,150% in the past decade, partially due to the participation of citizen scientists . [ 3 ] [ 4 ]
From 2002 to 2014, GBIF awarded a prestigious annual global award in the area of biodiversity informatics , the Ebbe Nielsen Prize , valued at €30,000. As of 2018 , the GBIF Secretariat presents two annual prizes: the GBIF Ebbe Nielsen Challenge and the Young Researchers Award. [ 5 ] | https://en.wikipedia.org/wiki/Global_Biodiversity_Information_Facility
A Global Boundary Stratotype Section and Point ( GSSP ), sometimes referred to as a golden spike , is an internationally agreed upon reference point on a stratigraphic section which defines the lower boundary of a stage on the geologic time scale . The effort to define GSSPs is conducted by the International Commission on Stratigraphy , a part of the International Union of Geological Sciences . Most, but not all, GSSPs are based on paleontological changes. Hence GSSPs are usually described in terms of transitions between different faunal stages , though far more faunal stages have been described than GSSPs. The GSSP definition effort commenced in 1977. As of 2024, 79 of the 101 stages that need a GSSP have a ratified GSSP. [ 1 ]
A geologic section has to fulfill a set of criteria to be adopted as a GSSP by the ICS . The following list summarizes the criteria: [ 2 ] [ 3 ]
Once a GSSP boundary has been agreed upon, a 'golden spike' is driven into the geologic section to mark the precise boundary for future geologists (though in practice the 'spike' need neither be golden nor an actual spike). As such, GSSPs are also sometimes referred to as golden spikes . The first stratigraphic boundary was defined in 1972 by identifying the Silurian - Devonian boundary with a bronze plaque at a locality called Klonk , northeast of the village of Suchomasty in the Czech Republic .
The Precambrian - Cambrian boundary GSSP at Fortune Head , Newfoundland is a typical GSSP. It is accessible by paved road and is set aside as a nature preserve . A continuous section is available from beds that are clearly Precambrian into beds that are clearly Cambrian. The boundary is set at the first appearance of a complex trace fossil Treptichnus pedum that is found worldwide. The Fortune Head GSSP is unlikely to be washed away or built over. Nonetheless, Treptichnus pedum is less than ideal as a marker fossil as it is not found in every Cambrian sequence, and it is not assured that it is found at the same level in every exposure. In fact, further eroding its value as a boundary marker, it has since been identified in strata 4m below the GSSP. [ 5 ] However, no other fossil is known that would be preferable. There is no radiometrically datable bed at the boundary at Fortune Head, but there is one slightly above the boundary in similar beds nearby.
These factors have led some geologists [ who? ] to suggest that this GSSP is in need of reassigning. [ citation needed ]
Because defining a GSSP depends on finding well-preserved geologic sections and identifying key events, this task becomes more difficult as one goes farther back in time. Before 630 million years ago, boundaries on the geologic timescale are defined simply by reference to fixed dates, known as "Global Standard Stratigraphic Ages" (GSSAs). | https://en.wikipedia.org/wiki/Global_Boundary_Stratotype_Section_and_Point |
Global Census of Marine Life on Seamounts (commonly CenSeam ) is a global scientific initiative, launched in 2005, that is designed to expand the knowledge base of marine life at seamounts . [ 1 ] Seamounts are underwater mountains, not necessarily volcanic in origin, which often form subsurface archipelagoes and are found throughout the world's ocean basins, with almost half in the Pacific. There are estimated to be as many as 100,000 seamounts at least one kilometer in height, and more if lower rises are included. [ 2 ] However, they have not been explored very much—in fact, only about half of one percent have been sampled—and almost every expedition to a seamount discovers new species and new information. There is evidence that seamounts can host concentrations of biologic diversity, each with its own unique local ecosystem; they seem to affect oceanic currents , resulting among other things in local concentration of plankton which in turn attracts species that graze on it, and indeed are probably a significant overall factor in biogeography of the oceans. They also may serve as way stations in the migration of whales and other pelagic species. Despite being poorly studied, they are heavily targeted by commercial fishing , including dredging . In addition they are of interest to potential seabed mining . [ 1 ]
The overall goal of CenSeam is "to determine the role of seamounts in the biogeography, biodiversity, productivity, and evolution of marine organisms, and to evaluate the effects of human exploitation on seamounts." To this effect, the group organizes and contributes to various research efforts about seamount biodiversity. [ 3 ] Specifically, the project aims to act as a standardized scaffold for future studies and samplings, citing inefficiency and incompatibility between individual research efforts in the past. [ 4 ] To give a sense of the scale of their mission, there are an estimated 100,000 seamounts in the ocean, but only 350 of them have been sampled, and only about 100 sampled thoroughly. [ 3 ] [ 5 ] Although sampling all 100,000 seamounts is infeasible, the major seamounts can be sampled thoroughly. [ 4 ] [ 6 ] [ 7 ]
CenSeam is a subdivision of the Census of Marine Life program. [ 8 ] Organisationally, the components of CenSeam consist of a secretariat (Malcolm Clark, Mireille Consalvey, Ashley Rowden and Karen Stocks) which is hosted by the National Institute of Water and Atmospheric Research in Wellington , New Zealand; an international steering committee; a taxonomic advisory panel; and two working groups, Data Analysis and Standardisation. [ 9 ]
In 2008 CenSeam began collaborating with the International Seabed Authority to study effects of seabed mining on seamount ecosystems. [ 5 ] | https://en.wikipedia.org/wiki/Global_Census_of_Marine_Life_on_Seamounts |
Global Change Biology is a biweekly peer-reviewed scientific journal covering research on the interface between biological systems and all aspects of environmental change that affect a substantial part of the globe [ 1 ] including climate change , global warming , land use change , invasive species , urbanization , wildfire , and greenhouse gases . The editor-in-chief is Stephen P. Long , [ 2 ] environmental plant physiologist , Fellow of the Royal Society and member of the National Academy of Sciences ( University of Illinois and Lancaster University ).
This journal has a sister journal: GCB Bioenergy: Bioproducts for a Sustainable Bioeconomy .
This article about an environment journal is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Global_Change_Biology
The U.S. Global Change Research Program (USGCRP) develops and curates the Global Change Information System (GCIS) [ 1 ] to establish "data interfaces and interoperable repositories of climate and global change data which can be easily and efficiently accessed, integrated with other data sets, maintained and expanded over time." [ 2 ] The initial focus of GCIS is to support the United States Third National Climate Assessment (NCA3), which is to publish reports that enhance the transparency and ability of decision-makers to understand the conclusions and use of the underlying data for their own purposes. [ 3 ] [ 4 ]
The project scope includes analyzing alterations in climate, land use and land cover, natural resources including water, agriculture and biodiversity, atmospheric composition, chemical composition and ecological systems that may alter the Earth's capacity to sustain life.
Global change research includes activities aimed at describing and understanding the interactive physical, chemical and biological processes that regulate the Earth system ; the unique environment that the Earth provides for life; changes that are occurring in the Earth system; and the manner in which such systems, environments and changes are influenced by human actions. [ 2 ]
GCIS has developed information models and ontology to represent the content structure of the NCA3 and its associated provenance information and has been extending its models to incorporate more global change information. Records of objects and relationships within the GCIS are represented in a database for the Semantic Web , which can be queried using the SPARQL language. GCIS assigns globally unique persistent identifiers to all of the entities, activities, and agents relevant to provenance. Each identifier is mapped to a uniform resource identifier (URI) in the GCIS namespace, [ 5 ] allowing use of those identifiers for the Semantic Web and other linked data systems. By categorizing, annotating, and linking provenance information, the GCIS becomes capable of answering provenance-tracking questions about global change research. [ 6 ] [ 7 ]
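To make the provenance-tracking idea concrete, the following sketch issues a SPARQL query over HTTP using the standard SPARQL protocol; the endpoint URL and the report URI are placeholders rather than verified GCIS identifiers, and only the W3C PROV vocabulary term is taken as given.

```python
import requests

# Hypothetical sketch: ask a SPARQL endpoint which agents a given report was
# attributed to, using the W3C PROV vocabulary.  The endpoint URL and the
# report URI below are placeholders, not actual GCIS identifiers.
ENDPOINT = "https://example.org/gcis/sparql"   # placeholder endpoint
QUERY = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT ?agent WHERE {
  <https://example.org/gcis/report/nca3> prov:wasAttributedTo ?agent .
}
LIMIT 10
"""

resp = requests.get(ENDPOINT, params={"query": QUERY},
                    headers={"Accept": "application/sparql-results+json"},
                    timeout=30)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["agent"]["value"])
```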
The GCIS Ontology [ 8 ] promotes representation and documentation by incorporating the World Wide Web Consortium (W3C)’s recommendation on provenance modeling (PROV) into the design. [ 9 ] [ 10 ] The ontology uses the namespace prefix GCIS. A conceptual map of the GCIS Ontology version 1.2 [ 11 ] is accessible on the Web.
This article incorporates public domain material from About the Global Change Information System . U.S. Global Change Research Program . | https://en.wikipedia.org/wiki/Global_Change_Information_System |
The Global Connectivity Index ( GCI ) is a guide for policy makers and industry leaders to develop a roadmap to the digital economy. The GCI has evolved by increasing the number of nations tracked in its rankings and constantly strengthening the methodology and research standards it employs. The growth of the GCI’s database since the first Index was published in 2014 offers practical insights and recommendations for policymakers on what it takes to succeed in the digital economy.
Today, the GCI [ 1 ] tracks and benchmarks the progress of 79 nations toward the digital economy. Its core methodology analyzes 40 indicators that identify progress made in the interplay of ICT investment, technology adoption, user experience, and market development. Based on these criteria, the Index assigns a “GCI score” for each indicator based on a realistic future target value. The movement of even a single GCI point from year to year is a significant reflection of a country’s progress toward a digital economy.
The GCI offers a unique research framework to assess a nation’s digital transformation by looking at four economic pillars namely supply, demand, experience and potential, in addition to technology enablers - broadband, cloud, AI and Internet of Things (IoT). Under this proprietary research methodology, three clusters of nations are grouped according to their GCI position and GDP per capita. The three GCI clusters – Starters (GCI Score 23-39), Adopters (score 40-64), and Frontrunners (score 65-85) – account for almost 95% of global GDP.
Most ICT indexes focus on a single technology area such as broadband, cloud, data center or other technology enablers. What differentiates the GCI is that it is the only index available that goes deeper into key technology areas and at the same time measures their collective impact on the digital economy. In addition to the four core technology enablers, the GCI also takes into consideration other indicators such as workforce, ICT laws, and e-Government services. It is seen as an authoritative source that informs policy makers and industry leaders on their nation’s investment, adoption, quality and potential compared to their peers, as well as its related impact on the digital economy.
The widening of the global digital divide
The recurrent theme of the GCI is that Frontrunners are pulling far ahead of nations that are lower on the spectrum of the GCI S-curve. The widening gap suggests an ICT version of sociology’s “Matthew Effect”, where the “rich get richer and the poor get poorer” based on an accumulated advantage over time. Policy makers in the Adopters, and especially in the Starters, are advised to consider the deepening inequality gap as it may have long-term consequences on their ability to compete and sustain economic growth.
However, Frontrunners that invested heavily in ICT infrastructure over the years saw growth stagnate due to having now exhausted much of the value of that infrastructure. To drive sustainable growth , countries or organizations are advised to focus on participating in global win-win collaboration. The GCI 2019 outlines five roles needed to develop such global Intelligent Connectivity ecosystems. These are Decision Makers (countries, organizations or enterprises), Data Scientists, ICT Companies, Data Collectors, and End Users. These roles collaborate and leverage each other’s strengths to create value for all participants: nations, enterprises, and the public.
A comparison of GCI reports over the four-year period from 2015 through 2019 shows significant movement in country rankings. In the GCI 2019 report, most of the 79 nations in the rankings saw overall GCI scores improve.
Key GCI findings (2015-2019)
Nations across the GCI spectrum are discovering an “AI upside potential" in 2019. Countries with the highest GCI scores can leverage Intelligent Connectivity to accelerate economic growth up to 2.4 times faster than other nations for every GCI point improvement.
AI readiness is integral to success in the digital economy. AI readiness assesses whether a country has met the three preconditions for AI: computing power, labeled data and algorithms.
An additional 10% of ICT infrastructure investment each year incorporated into an economic master plan beginning in 2016, over time, would have a multiplier effect that by 2025 could add US$17.6 trillion in GDP to the global economy. In real terms, the potential impact is equal to about the size of the European Union's GDP in 2016. Using this economic impact model, the GCI 2017 finds that every additional US$1 of ICT infrastructure investment could bring a return of US$3 in GDP in 2016, US$3.70 in 2020 and a potential return of US$5 in 2025.
GCI scores are not abstract numbers but have a real-world effect on economic growth. A movement in GCI score of only 1 point correlates to: a 2.3% increase in productivity, a 2.2% rise in innovation and a 2.1% increase in national competitiveness.
A 20% increase in ICT investment will grow a nation's GDP by 1%.
By 2025, as many as 100 billion connections will be generated globally, 90% of which will come from intelligent sensors. This increase will be due to enterprises becoming enabled by the internet. By leveraging connectivity to streamline business processes, reduce costs and improve efficiency, enterprises will drive innovation and move the focus from a consumer driven internet to an industrial one.
Differences between the three GCI clusters – Starters, Adopters and Frontrunners
Starter: Average GDP per capita: US$3,800 | GCI score: 23-39 These nations are in the early stage of ICT infrastructure build-out. Their focus is on expanding connectivity to give more people access to the digital economy.
Adopter: Average GDP per capita: US$17,200 | GCI score: 40-64 Nations in this cluster experience the largest GDP growth from investment in ICT Infrastructure. Their focus is on increasing demand for high-speed connectivity to facilitate industry digitization and economic growth.
Frontrunner: Average GDP per capita: US$58,100 | GCI score: 65-85 These nations are mainly developed economies that focus on enhancing the user experience. Their priority shifts to investment in big data and IoT to develop a smarter and more efficient society.
The GCI research model includes 40 indicators that can be analyzed in terms of four economic pillars and four technology enablers. Based on these indicators, the GCI fully and objectively measures, analyzes, and forecasts the economies tracked; quantifies the digital economy transformation journey they are undergoing; and provides a reference tool for policy makers and industry leaders. The four economic pillars are ICT supply, demand, experience, and potential. The four technology enablers are broadband, cloud services, AI, and the IoT.
The first report published in 2014 covered 25 nations and 10 industries, including finance, manufacturing, education, transportation and logistics which accounted for 95% of global GDP. The GCI 2015 report first covered 50 nations with 38 indicators. In the GCI 2016 report, two new indicators were introduced, raising the total to 40. In addition, the GCI 2016 also included updated definitions (e.g. replacing 3G coverage with 4G coverage) based on advances in ICT. In 2018, the GCI broadened the scope from 50 to 79 nations. The research methodology was expanded in 2019 again by adding the new AI perimeter which includes: Data creation, AI Investment, AI-enabled robotics and AI potential. Intelligent Connectivity’s five enabling technologies were also consolidated into four: Broadband, Cloud, IoT and AI in the same year.
The GCI has gradually become a global benchmark for the assessment of digital transformation. It has been cited by more than 30 authoritative agencies including: Accenture, [ 7 ] Asia Development Bank Institute, [ 8 ] APEC Business Advisory Council, [ 9 ] CITADEL, [ 10 ] Ernst & Young, [ 11 ] Inter American Development Bank, [ 12 ] The International Telecommunication Union, [ 13 ] GSMA, [ 14 ] G20 [ 15 ] and The Center for Transatlantic Relations. [ 16 ] | https://en.wikipedia.org/wiki/Global_Connectivity_Index |
The Global Energy Prize is an international award in the field of energy industry which is given for "outstanding scientific research and scientific-technical developments in the field of energy which promote greater efficiency and environmental security for energy sources on Earth in the interests of all mankind" .
It was founded in 2002 at the initiative of Nobel Prize in Physics laureate Zhores Alferov . The headquarters are in Moscow, Russia . The prize is awarded by the President of Russia or "a person authorized by the president". The media and the professional community consider it "the biggest Russian award" and "one of the biggest in the world". Some depictions in the press described it as "a Russian analogue to the Nobel prize ". [ 1 ] This is confirmed by the IREG Observatory on Academic Ranking and Excellence, which includes the Prize in its "top-99" list of the most recognized global awards. [ 2 ] It is the only award from Russia included in this list.
The award is managed by The Global Energy Association , which is dedicated to the development of international research and projects in the energy industry. Besides the award, the Association oversees conferences and informational programmes in this field and programmes for younger scientists, and produces an annual report, "Ten breakthrough ideas in energy for the next 10 years" .
The author of the concept was Zhores Alferov , Russian Nobel-winning physicist (2000), academician of the Russian Academy of Sciences . The prize was created in 2002 and Alferov was appointed the head of the International Committee for its awarding. [ 3 ] The founders of the prize were PJSC Gazprom , PJCS Federal Grid Company of the Unified Energy Systems (FGC UES, Former JSC Unified Energy Systems of Russia) and Yukos . The creation of the prize was announced by Vladimir Putin at the 2002 Russia—European Union Summit.
The first Global Energy Prize award ceremony took place in June 2003 at the Konstantinovsky Palace, Strelna (St Petersburg) and was attended by President Putin. The award was presented to three scientists: Nick Holonyak (USA), a professor at the University of Illinois , "for his invention of the first semiconductor LEDs (light-emitting diodes) in the visible region of the light spectrum, and his role as founder of the new field of silicon electronics and micro-electronics for power applications"; Ian Douglas Smith (USA), chief manager and senior researcher in Titan Pulse Sciences Division company, "for fundamental research into the physics of high-power pulse-energy engineering, and the development of pulsed power in electron accelerator applications", and a Russian scientist Gennady Mesyats for the same.
For the prize's management, the Global Energy Prize Foundation was established. It was functional until 2010 and, besides the prize, launched a number of energy-related programs. In 2010 it was converted into a voluntary association , and in October 2016 it was renamed into The Association for the development of international research and projects in the energy sector "Global Energy" . As of 2021, the Association's members were Gazprom, " Rosseti FGC UES " and Surgutneftegaz .
In 2020, the association broadened its geographical presence, so a new record was set in the 2021 nomination cycle. For the first time, 36 countries were represented on the long list – three times the number in 2019 (12 countries) and nearly twice the number in 2020 (20 countries). [ 4 ]
The 2021 list features scientists not only from North America, Western Europe and Asia, but also from Eastern Europe – Hungary and Latvia – from the Middle East and from Africa – Algeria, Burkina Faso, Cameroon, Ghana, Gambia, Egypt, Jordan, Madagascar, Nigeria, Togo and Zimbabwe – and from Latin America – Mexico and Uruguay. And for the first time, women were among the candidates – from India, Kazakhstan, the United States and Zimbabwe.
In 2020, new members joined the board of trustees – the former president of Uruguay , Julio Maria Sanguinetti Coirolo , and the General Director, Association of Power Utilities of Africa (APUA), Abel Didier Tella. The new President of the Global Energy Association became Sergey Brilev , a prominent Russian TV journalist and manager. The former presidents were Igor Lobovsky (2003–2018) and Alexander Ignatov (2018–2020).
As of 2021, the monetary part of the award amounted to 39 million Russian rubles (530,000 USD ). Besides the award, the Association oversees energy-related conferences and informational projects, as well as programmes for younger scientists with the participation of honoured experts. It also produces an annual report, "Ten breakthrough ideas in energy for the next 10 years". Since 2020, the ceremony has been held in different cities of Russia: the first location to be selected was the Tsiolkovsky State Museum of the History of Cosmonautics in Kaluga .
As of mid-2024, the most recent public announcement of the laureates took place in July 2024 in Volgograd ; [ 5 ] the award ceremony was scheduled to follow in September.
In 2020, along with the existing Global Energy Prize, a new type of award was established: Honorary Diploma of the Association , for Russian scientists contributing to the energy industry of the Russian Federation. The first laureate was mathematician Viktor Maslov – for "fundamental input into the safety of nuclear energy". In 2021 the Association presented its diploma to physicist Igor Grekhov , [ 6 ] in 2022 – to hydro-power engineer Yuri Vasil'ev. [ 7 ]
Since 2022, the Honorary Diplomas are also awarded to the specialists from the developing countries (7 holders as of mid-2024).
The International Award Committee is responsible for choosing the laureates of the Global Energy Prize. It includes:
The board of trustees of the association is responsible for supervision of its general management. It includes:
Since 2003, 53 scientists from 16 countries have been awarded the prize. Among them are people from Australia, the UK, Germany, Greece, Denmark, Iceland, Italy, Canada, China, Russia, the US, Ukraine, France, Switzerland, Sweden and Japan. The laureates are presented with an honorary medal, a statuette, a diploma and a golden honorary pin (besides the monetary award).
Nominations are accepted from scientists and/or organizations through representatives. They have to be preliminarily authorized by the Association. Among them are Nobel Prize laureates, laureates of prizes such as Kyoto Prize , Max Planck Prize, Wolf Prize , Balzan Prize , past Global Energy Prize laureates. | https://en.wikipedia.org/wiki/Global_Energy_Prize |
Global Environmental Change is a scientific journal publishing peer-reviewed research on environmental change that was established in 1990 by Butterworth-Heinemann . It is currently published by Elsevier . As of 2024 , the editors-in-chief are Dabo Guan and Harini Nagendra. As of 2019 , the journal had an impact factor of 10.466, according to Journal Citation Reports , ranking it 4th out of 265 journals in the category environmental sciences . [ 1 ]
This article about an environment journal is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Global_Environmental_Change
The Global Health Security Initiative (GHSI) is a collaborative effort among several nations and organizations focused on strengthening global health security. [ 1 ] [ 2 ] [ 3 ] Established in response to the 2001 terrorist attacks, its primary goal is to prepare for and address public health risks related to biological, chemical, or nuclear terrorism, as well as pandemics . [ 3 ] [ 4 ] The initiative includes members from North America, Europe, and Asia, with the World Health Organization participating as an observer. [ 4 ] [ 1 ]
The idea on which the Global Health Security Initiative is based was suggested by then US Secretary of Health and Human Services, Tommy Thompson, after the World Trade Center attacks on 11 September 2001. [ 3 ] [ 1 ] He proposed that countries fighting bioterrorism should collaborate, share information, and coordinate their efforts in order to best protect global health . [ 3 ] [ 1 ] [ 5 ]
GHSI was launched in November 2001 by Canada (who hosted the first meeting in Ottawa ), the European Commission , France, Germany, Italy, Japan, Mexico, the United Kingdom, and the United States. The World Health Organization (WHO) would act as observer to the GHSI. The ministers agreed on eight areas in which the partnership could collaborate in order to "strengthen public health preparedness and response to the threat of international biological, chemical and radio-nuclear terrorism."
In December 2002, at a meeting in Mexico City , the Ministers broadened the scope of the mandate to include the public health threat posed by pandemic influenza . [ 6 ]
GHSI states that its mandate is "to undertake concerted global action to strengthen public health preparedness and response to chemical, biological, radiological, and nuclear (CBRN) threats, as well as pandemic influenza," including intentional, accidental, and naturally occurring events.
The Global Health Security Action Group (GHSAG) is made up of senior officials from each member country. The GHSI Secretariat organises, manages, and administers meetings and committees and sets priorities.
Various technical and scientific working groups focus on specific areas of knowledge. Current working groups include: [ 7 ]
GHSI conducts research and collaborates to address global health security concerns. Some of the research GHSI has been involved in includes: | https://en.wikipedia.org/wiki/Global_Health_Security_Initiative |
The Global Journal of Environmental Science and Management is a quarterly peer-reviewed open access academic journal covering environmental management . It was established in 2015. The founding and current editor-in-chief is Jafar Nouri ( Tehran University of Medical Sciences ).
The journal is indexed and abstracted in the following bibliographic databases: | https://en.wikipedia.org/wiki/Global_Journal_of_Environmental_Science_and_Management |
The Global Mobile Satellite System ( GMSS ) consists of several satellite phone providers serving private customers. It can be compared to PLMN ( wireless telephony carriers) and PSTN (traditional wire-based telephony ).
As of 2023, ranges of numbers have been assigned to two GMSS carriers:
In 1996, the ITU introduced country code +881 for direct international dialing of phones on GMSS providers. ( Inmarsat had already been allocated country code +870.) The next digit following the country code is allocated (two at a time) to a particular GMSS carrier: [ 1 ] [ 2 ]
Inmarsat is a satellite-based communications provider, but it is primarily a maritime service and is not generally considered part of the GMSS.
Globalstar usually allocates subscribers a local number in the country where they are based rather than using their GMSS country code.
Iridium also uses an Arizona-based access number to call Iridium phones for those unwilling or unable to call the usually expensive GMSS number directly. [ 3 ]
Thuraya has been assigned +882-16, within the +882 range for International Networks . | https://en.wikipedia.org/wiki/Global_Mobile_Satellite_System |
The Global Ocean Data Analysis Project ( GLODAP ) is a synthesis project bringing together oceanographic data, featuring two major releases as of 2018. The central goal of GLODAP is to generate a global climatology of the World Ocean 's carbon cycle for use in studies of both its natural and anthropogenically forced states. GLODAP is funded by the National Oceanic and Atmospheric Administration , the U.S. Department of Energy , and the National Science Foundation .
The first GLODAP release (v1.1) was produced from data collected during the 1990s by research cruises on the World Ocean Circulation Experiment , Joint Global Ocean Flux Study and Ocean-Atmosphere Exchange Study programmes. The second GLODAP release (v2) extended the first using data from cruises from 2000 to 2013. The data are available both as individual "bottle data" from sample sites, and as interpolated fields on a standard longitude, latitude, depth grid.
The GLODAPv1.1 climatology contains analysed fields of "present day" (1990s) dissolved inorganic carbon (DIC), alkalinity , carbon-14 ( 14 C), CFC-11 and CFC-12 . [ 1 ] The fields consist of three-dimensional , objectively-analysed global grids at 1° horizontal resolution , interpolated onto 33 standardised vertical intervals [ 2 ] from the surface (0 m) to the abyssal seafloor (5500 m). In terms of temporal resolution, the relative scarcity of the source data mean that, unlike the World Ocean Atlas , averaged fields are only produced for the annual time-scale. The GLODAP climatology is missing data in certain oceanic provinces including the Arctic Ocean , the Caribbean Sea , the Mediterranean Sea and Maritime Southeast Asia .
Additionally, analysis has attempted to separate natural from anthropogenic DIC, to produce fields of pre- industrial (18th century) DIC and "present day" anthropogenic CO 2 . This separation allows estimation of the magnitude of the ocean sink for anthropogenic CO 2 , and is important for studies of phenomena such as ocean acidification . [ 3 ] [ 4 ] However, as anthropogenic DIC is chemically and physically identical to natural DIC, this separation is difficult. GLODAP used a mathematical technique known as C* (C-star) [ 5 ] to deconvolute anthropogenic from natural DIC (there are a number of alternative methods). This uses information about ocean biogeochemistry and CO 2 surface disequilibrium together with other ocean tracers including carbon-14, CFC-11 and CFC-12 (which indicate water mass age) to try to separate out natural CO 2 from that added during the ongoing anthropogenic transient. The technique is not straightforward and has associated errors, although it is gradually being refined to improve it. Its findings are generally supported by independent predictions made by dynamic models. [ 3 ] [ 6 ]
The GLODAPv2 climatology largely repeats the earlier format, but makes use of the large number of observations of the ocean's carbon cycle made over the intervening period (2000–2013). [ 7 ] [ 8 ] The analysed "present-day" fields in the resulting dataset are normalised to year 2002. Anthropogenic carbon was estimated in GLODAPv2 using a "transit-time distribution" (TTD) method (an approach using a Green's function ). [ 9 ] [ 8 ] In addition to updated fields of DIC (total and anthropogenic) and alkalinity, GLODAPv2 includes fields of seawater pH and calcium carbonate saturation state (Ω; omega). The latter is a non-dimensional number calculated by dividing the local carbonate ion concentration by the ambient saturation concentration for calcium carbonate (for the biomineral polymorphs calcite and aragonite ), and relates to an oceanographic property, the carbonate compensation depth . Values of this below 1 indicate undersaturation , and potential dissolution, while values above 1 indicate supersaturation , and relative stability.
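As described, Ω is a simple ratio; the sketch below implements that calculation with illustrative numbers rather than values drawn from the GLODAPv2 fields, and the function name is an assumption for demonstration.

```python
def saturation_state(carbonate, carbonate_sat):
    """Calcium carbonate saturation state Omega.

    Omega = [CO3^2-] / [CO3^2-]_sat, where the saturation concentration is the
    ambient value for the mineral of interest (calcite or aragonite).
    Omega < 1 indicates undersaturation (dissolution favoured); Omega > 1
    indicates supersaturation (the mineral is relatively stable).
    """
    return carbonate / carbonate_sat

# Illustrative numbers only (micromol per kg of seawater):
print(saturation_state(carbonate=200.0, carbonate_sat=90.0))  # ~2.2, supersaturated
print(saturation_state(carbonate=55.0, carbonate_sat=90.0))   # ~0.6, undersaturated
```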
The following panels show sea surface concentrations of fields prepared by GLODAPv1.1. The "pre-industrial" is the 18th century, while "present-day" is approximately the 1990s.
The following panels show sea surface concentrations of fields prepared by GLODAPv2. The "pre-industrial" is the 18th century, while "present-day" is normalised to 2002. Note that these properties are shown in mass units (per kilogram of seawater) rather than the volume units (per cubic metre of seawater) used in the GLODAPv1.1 panels. | https://en.wikipedia.org/wiki/Global_Ocean_Data_Analysis_Project |
The Global Ocean Sampling Expedition (GOS) is an ocean exploration genome project whose goal is to assess genetic diversity in marine microbial communities and to understand their role in nature's fundamental processes. The two-year journey, which used Craig Venter's personal yacht, originated in Halifax , Canada , circumnavigated the globe and terminated in the U.S. in January 2006. The expedition sampled water from Halifax , Nova Scotia to the Eastern Tropical Pacific Ocean . During 2007, sampling continued along the west coast of North America.
The GOS datasets were submitted to both NCBI [ 1 ] and Community Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA), a new online resource for marine metagenomics funded by the Gordon and Betty Moore Foundation , developed by JCVI and hosted by UC San Diego 's Division of the California Institute for Telecommunications and Information Technology (Calit2). CAMERA's toolset was developed by JCVI, and reflects the tools used in the initial publication of the GOS datasets.
The Sorcerer II effort has been funded by:
Sorcerer II , a 95-foot sloop , completed a 2-year scientific expedition circumnavigating the globe in mid latitudes, collecting samples of microbes in seawater for genetic sequencing and cataloguing. She was designed to be not just a world cruising yacht, but one capable of handling extremes in latitude, from equatorial heat and humidity to latitudes between 60 and 70 degrees. Sorcerer II 's construction is light for performance but very strong, with Kevlar and E-glass laminates, epoxy bonding and carefully chosen core materials.
The vessel was designed by German Frers and carries 2,400 litres (630 US gal) of water.
The following list is of the official publications of the project and the J. Craig Venter Institute . | https://en.wikipedia.org/wiki/Global_Ocean_Sampling_Expedition |
The Global Outbreak Alert and Response Network ( GOARN ) is a network composed of numerous technical and public health institutions, laboratories, NGOs, and other organizations that work to observe and respond to threatening epidemics. [ 1 ] GOARN works closely with and under the World Health Organization (WHO), which is one of its most notable partners. Its goals are to examine and study diseases, evaluate the risks that particular diseases pose, and improve international capacity to deal with them. [ 2 ]
The World Health Organization realized at the start of the 21st century that it did not have the resources required to adequately respond to and prevent epidemics around the world. Thus, a "Framework for Global Outbreak and Response" was created by the Department of Communicable Diseases Surveillance and Response, and Regional Offices. This framework was then put forth at a meeting in Geneva from April 26–28, 2000. In this meeting, which was attended by 121 representatives from 67 institutions, the decision was made to form GOARN to contribute resources, coordination, surveillance, and technical assistance towards combating diseases. [ 3 ]
It was decided that GOARN would be directed by a steering committee made up of 20 representatives of GOARN partners and an operational support team (OST) based in WHO. The steering committee oversees and plans the activities of GOARN, and the OST is composed of a minimum of 5–6 WHO staff. Task forces and groups were established to deal with specific issues. [ 4 ] GOARN resources are primarily coordinated by the World Health Organization. [ 3 ]
The WHO's guiding principles are to standardize "epidemiological, laboratory, clinical management, research, communication, logistics, support, security, evacuation, and communication systems" and to coordinate international resources to support local efforts by GOARN partners to combat outbreaks. It also focuses on improving long-term ability to provide technical assistance to affected areas. [ 4 ]
GOARN has grown to now have over 600 partners in the form of public health institutions, networks, laboratories, and United Nations and non-governmental organizations. Technical institutions, networks, and organizations that have the ability to improve GOARN's capabilities are eligible for partnership. [ 5 ] Through its partners, GOARN is staffed by a variety of individuals who specialize in public health, such as "doctors, nurses, infection control specialists, logisticians, laboratory specialists; communication, anthropology and social mobilization experts, emergency management and public health professionals among others." [ 6 ]
As its biggest partner, WHO plays a large role in GOARN. Alongside coordinating its resources to combat outbreaks, WHO provides much of the staffing and assistance for GOARN, though, as discussed below, it does not fund GOARN directly. Since the network is primarily led by the WHO, there is some uncertainty as to whether WHO should be considered a partner in GOARN or if the network should be considered a WHO initiative. [ citation needed ]
Another notable partner is the Centers for Disease Control and Prevention (CDC), which sends technical resources and staff to GOARN. The CDC also has a history of resource sharing and cooperation with WHO in order to combat disease. [ 7 ]
The WHO does not directly fund GOARN. Instead, GOARN responses are supported by its members and by outside fundraising carried out each time there is a new incident. The Nuclear Threat Initiative provides GOARN with US$500,000 as a revolving fund, meant to be used for quickly mobilizing response teams. This is known as the WHO-NTI Global Emergency Response Fund, and withdrawals must be repaid. GOARN operates effectively on a fairly small budget. [ 8 ]
GOARN has responded to over 120 events in 85 countries and has deployed over 2,300 experts into the field. [ 6 ] Some examples of deployments are the SARS outbreak in Asia in 2003, Rift Valley fever, and Nipah virus on the Indian subcontinent. [ 9 ]
Since its creation, GOARN has cooperated with various other organizations to control outbreaks and improve national capacity to respond to diseases. A brief history of GOARN's work against international diseases is as follows. In 2000–2003, GOARN primarily responded to outbreaks of diseases such as cholera, meningitis, and yellow fever in Africa. It supported field investigation and outbreak containment. In 2003, GOARN helped to deploy international teams and to coordinate the response against SARS. In 2004, the network was one of the first to deploy against H5N1 influenza. In April–July 2005, GOARN helped to control Marburg hemorrhagic fever in Angola. It carried out "risk assessment and preparedness missions" in 2006, along with responses to human cases of bird flu. In 2008/2009, GOARN responded to cholera in Zimbabwe. [ 10 ]
GOARN played a role in containing the 2003 SARS outbreak in Asia. The network sent teams of experts in epidemiology, microbiology/virology, and infection control to Hanoi, Vietnam on March 14, 2003 and then to Beijing, China on March 25, 2003. During this outbreak GOARN not only studied the outbreak and provided assistance, but also facilitated communication between the Department of Health (Hong Kong) and the WHO. [ 11 ]
The earliest signs of the outbreak in China were reported February 11–24, when multiple people were reported to have contracted the disease. WHO was notified February 28 and then directly notified GOARN March 13. The first members of a WHO/GOARN outbreak control team arrived in Hong Kong March 14, followed by another 5-person GOARN team 12 days later. This second team transitioned to Guangdong, where they investigated the earliest cases of SARS and conducted interviews with health staff. The outbreak was declared by WHO to be contained by July 5. [ 12 ]
Worldwide, GOARN carried out many of the operations for the initial response to SARS through the mobilization of field teams. Also through GOARN, the WHO developed many international networks to create tools and standards for containing the epidemic. These networks communicated data by teleconference and through secure websites for sharing information. [ citation needed ]
Besides these networks and field teams, GOARN also supported nations by directly providing assistance to affected areas and improving their capacity to respond to such threats in the future. GOARN's role in the outbreak was recognized by the World Health Assembly during the 56th Assembly in resolution WHA56.29. [ 13 ]
On March 23, 2014, the first cases of Ebola in Guinea were reported by WHO's Regional Office for Africa. Five days later, the first GOARN team was sent to Guinea. This team found the situation to be quite severe, and its findings were discussed in a press conference in Geneva on April 8. [ 14 ]
In the third week of April, WHO collaborated with GOARN to send a new medical team trained in infection prevention/control and intensive care to Guinea's principal hospital, Donka Hospital. Two weeks later, on May 5, WHO deployed experts, thirty-three of whom were from GOARN, to West Africa to assist in the response to the outbreak. The outbreak was found to have spread to Sierra Leone later in the month. [ 14 ]
On June 23, a GOARN steering committee session sent a message to WHO requesting that WHO lead the response more strongly because it was the only agency with the resources and staff to do so. [ 14 ]
Over the course of the outbreak, the network deployed 895 experts, including "doctors, nurses, infection control specialists, logisticians, laboratory specialists; communication, anthropology and social mobilisation experts, emergency management and public health professionals." The network is still involved in the response to Ebola. [ 15 ]
In Sierra Leone, GOARN sent case management and laboratory experts from the International Centre for Diarrhoeal Disease Research, Bangladesh to help train health care and laboratory workers in case management and diagnosis. [ 16 ]
In northern Iraq, the Syrian Civil War displaced many refugees into Iraqi Kurdistan. The refugee camps suffer from poor sanitation, which led to cholera outbreaks in the region in 2007 and 2012. At the request of the Ministry of Health (MoH) of Iraq for support and training in outbreak response, GOARN deployed a multidisciplinary team of six experts to camps in the northern Iraqi governorates of Dohuk and Erbil to assess the risk of cholera and other diseases and to help the MoH prepare its response. [ 17 ]
GOARN supported countries and various other outbreak control organizations in fighting the H1N1 outbreak in the US and Mexico. The GOARN alert and request for assistance began in Mexico on April 24, 2009. Over the course of the outbreak, GOARN helped the Pan American Health Organization coordinate and exchange information with the CDC and Public Health Agency of Canada . It received support and training from the Regional Office for the Western Pacific so that it could support the regional office in Manila and carry out field missions in Malaysia and Mongolia. The network carried out a joint training course with the Regional Office for the Eastern Mediterranean in Cairo. [ 18 ]
All in all, GOARN carried out 188 missions to 27 countries to strengthen international coordination between these organizations and to improve international capacity to respond to threats. Its activities consisted of assessment of the situation, communication between partners, infection control, laboratory diagnostics, and transportation of specimens. [ 18 ] | https://en.wikipedia.org/wiki/Global_Outbreak_Alert_and_Response_Network |
Global paleoclimate indicators are proxies sensitive to global paleoclimatic and environmental changes. They are mostly derived from marine sediments . Paleoclimate indicators derived from terrestrial sediments, on the other hand, are commonly influenced by local tectonic movements and paleogeographic variations. Factors governing the Earth's climate system include plate tectonics, which controls the configuration of continents, the interplay between the atmosphere and the ocean, and the Earth's orbital characteristics ( Milankovitch cycles ). Global paleoclimate indicators are established based on information extracted from the analyses of geologic materials, including biological , geochemical and mineralogical data preserved in marine sediments. Indicators are generally grouped into three categories: paleontological , geochemical and lithological .
Sedimentary records are influenced by local topography and oceanic and atmospheric currents. Proxies of global climatic significance are, however, less ambiguous in paleotemperature interpretation. Marine biota have offered by far the most proxies for paleotemperature, and among them microfossils, because of their widespread distribution, abundance and sensitivity to latitudinal changes, have provided many of the most important paleotemperature indicators. Identification of latitudinal index species is usually the first attempt to tie their presence in sediments to paleotemperature fluctuations. Other properties of marine biota, including morphology, abundance, diversity and geochemistry, have also been successfully established as paleoclimate indicators. More complex statistical analyses ( factor analysis , principal component analysis , etc.) of biogeography have been able to link faunal assemblages to water masses for paleo-current reconstruction. Listed below are some key paleontological tools used by scientists to reconstruct paleotemperature history. [ citation needed ]
Because of their widespread distribution and abundance in sediments, forams have been the most extensively explored for biological characteristics linked to paleoclimatic and paleoecological reconstructions. Numerous reports have documented both planktonic and benthic forams as proxies for paleotemperature. These include studies of morphological and biogeographical responses to surface temperature. [ citation needed ]
Investigations of planktonic foraminiferal population indicate that tropical species attain their largest test sizes in tropical waters, and polar species reach maximum sizes in polar waters. Species living in subtropical and subpolar waters decrease in test size with both increasing and decreasing temperature. [ 1 ]
The proloculus (first chamber) size of benthic forams is affected by seawater temperature, and its mean has been used as a proxy in paleoclimatic investigations. [ 2 ]
Mean test diameters of the planktonic foraminifer Orbulina universa have been used to interpret sea surface temperature history in the Somali Basin. R-mode factor and Q-mode cluster analyses define five significant factor assemblages and five clusters reflecting different environmental characteristics, including increased oxygenation and high surface productivity. [ 3 ]
A number of forams have been cited as having different coiling directions in response to surface temperature. Globigerina pachyderma (now Neogloboquadrina pachyderma ), for example, exhibits populations dominated by left-coiling forms in cold water and right-coiling forms in warm water, [ 4 ] and the ratio of these two forms has been utilized to estimate paleotemperature. [ 5 ] [ 6 ] A similar dependency of coiling direction on temperature has been reported for Muricohedbergella delrioensis in Cretaceous sediments. [ 7 ]
Globigerina bulloides , a planktonic foram, has been documented for its coiling directions related to seawater temperatures in surface sediments of the southern Indian Ocean. [ 8 ]
A similar relationship has been documented for the benthic foram Bulimina marginata. [ 9 ]
Planktonic foraminiferal species diversity depends on available niches , which are in turn related to ocean circulation . By correlation with stable isotope records, maximum diversity has been found to occur after the initiation of a glaciation period. [ 10 ]
Since deep-sea cores became available in the 1960s, paleoclimatic indices of planktonic foraminifera from marine sediments have been used for paleoclimatic reconstruction. Among the early pioneers applying foraminiferal latitudinal abundances, Ericson and Wollin (1968) succeeded in establishing the Pleistocene glacial and interglacial cycles based on the ratios of cold- and warm-water species in tropical sediments. [ 11 ] Similar work was extended to the subantarctic region by Kennett (1970), who, based on subpolar cold- and warm-water planktonic foraminiferal species, reconstructed paleoclimatic changes in the Pleistocene, consistent in trend with those established in the tropical region. [ 12 ]
When drilling cores, which recovered longer sediment columns than piston cores, came along, paleoclimatic reconstruction was pushed further back in geological time. A climatic curve for the Oligocene was constructed in the Gulf of Mexico by using warm-water indicators (Turborotalia pseudoampliapertura, Globoquadrina tripartita, Dentoglobigerina globularis, Dentoglobigerina baroemoenensis, “Globigerina” ciperoensis and Globigerinoides groups, and Cassigerinella chipolensis) and cold-water indicators (Catapsydrax spp., Globorotaloides spp., Subbotina angiporoides group, Globigerina s. str., and the tenuitellids). [ 13 ] More extensive geographic coverage was achieved by Spezzaferri (1995), who analyzed samples from drilling cores in the Atlantic, Indian and South Pacific Oceans and identified and grouped foraminifera into warmer, cooler, warm-temperate and cool-temperate indices. A paleoclimatic curve for the Oligocene–Miocene transition period was established and supported by isotope data. [ 14 ]
A more sophisticated approach to reconstructing paleoclimate involves factor analysis. Thompson (1981) was able to relate six foraminiferal assemblages from core-top samples to present water masses in the western North Pacific. A transfer function was generated to link the assemblages to sea surface temperatures. A paleotemperature curve for the past 150,000 years was reconstructed by applying this transfer function to older sediments in the cores. [ 15 ]
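A minimal sketch of this kind of core-top transfer-function calibration (the arrays, dimensions and the use of a plain linear least-squares fit are assumptions for illustration; the published methods use more elaborate factor-analysis workflows):

```python
import numpy as np

rng = np.random.default_rng(0)
coretop_factors = rng.random((60, 6))        # 60 core tops x 6 assemblage factors (dummy data)
modern_sst = rng.uniform(2, 28, 60)          # observed SSTs at the core-top sites (dummy data)

# Fit SST = X b by ordinary least squares (an intercept column is added).
X = np.column_stack([np.ones(len(modern_sst)), coretop_factors])
coeffs, *_ = np.linalg.lstsq(X, modern_sst, rcond=None)

# Apply the calibration to downcore assemblages to estimate paleo-SST.
downcore_factors = rng.random((200, 6))      # 200 downcore samples (dummy data)
Xd = np.column_stack([np.ones(len(downcore_factors)), downcore_factors])
paleo_sst = Xd @ coeffs
```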
A similar technique has been applied to Eocene and Oligocene sediments, with the forams categorized into surface, intermediate and deep water-mass groups. Thus water-mass stratification, in addition to paleotemperature fluctuations, has been reconstructed. [ 16 ]
A 15-degree latitudinal shift has been noted in the distribution of selected coccolith species between recent sediments and mid-Wisconsin glacial sediments of the North Atlantic. [ 17 ] Concentrations of coccoliths in marine sediments also appear to be related to surface temperatures, as demonstrated by quantitative analysis of coccolith assemblages in western Mediterranean Pleistocene sediments. [ 18 ]
Because of their resistance to dissolution in cold water, which severely destroys calcareous planktonic fossils at depth worldwide, radiolarians have become one of the most commonly studied siliceous planktonic fossils for paleotemperature reconstruction.
Study of radiolarians in North Pacific deep-sea cores has revealed that increases in both species diversity and abundance correspond to major glaciation events of the last 16 million years. Changes in radiolarian composition also appear to broadly reflect sea surface temperature. [ 19 ]
By applying statistical analyses (Q-mode factor analysis), many quantitative studies of radiolarian assemblages from surface sediments have established transfer functions that enable the estimation of paleo-sea surface temperature. For example, Pisias et al. (1997) were able to identify assemblages representative of present Pacific biogeography and used these assemblages to predict sea surface temperature at the last glacial maximum. [ 20 ]
Diatom species in polar and subpolar marine environments commonly display a narrow range of ecological preferences, in terms of sea surface temperature and sea-ice conditions. An established relationship between diatom assemblages and their ecological preferences in surface sediments can therefore be applied to sediments below the surface. For example, statistical analyses of diatoms in Antarctic Peninsula surface sediments have established diatom assemblages indicative of sea-ice and open-marine conditions, and these assemblages have been used as proxies for glacial and interglacial stages, respectively, in Holocene sediments. [ 21 ]
Diatom studies of lacustrine sediments in Siberia and Mongolia demonstrate a close relationship during the last glacial maximum between planktonic diatom diversity and paleoclimate through the correlation with oxygen isotope records, which represent global ice volume changes. [ 22 ]
Investigation of dinoflagellate cysts in the Mediterranean Sea has identified warm- and cold-temperate dinocyst species, and these species have been used to reconstruct paleoclimate changes during the past 30,000 years. [ 23 ]
Using ostracod crustaceans as palaeoclimate proxies has been well established for the Quaternary. Not only their indicator species, but also the trace element and stable isotope geochemistry of their shells have been documented as evidence of past climate fluctuations. [ 24 ]
Oxygen isotope fractionation is linked to water temperature, and oxygen isotope ratios from a variety of sources have been widely used to reconstruct paleoclimate. The oxygen isotope composition of calcium carbonate has become the most widely applied geothermometer for estimating ancient ocean temperatures. The most successful applications of isotope paleoclimatology have been the study of foraminifera from deep-sea sediments. For instance, Shackleton and Kennett (1975) established the Cenozoic paleotemperature history by analyzing the oxygen isotope composition of both planktonic and benthic foraminifera in the Antarctic region. [ 25 ] Since variations in the 18O/16O ratio in marine fossil records are global, oxygen isotope stratigraphy has been used for chronological correlation. [ 26 ]
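One commonly quoted form of the carbonate oxygen-isotope paleotemperature equation (shown here only to illustrate the principle; it is not necessarily the calibration used in the studies cited above) is:

$$T \;\approx\; 16.9 \;-\; 4.38\,(\delta_c - \delta_w) \;+\; 0.10\,(\delta_c - \delta_w)^2$$

where $T$ is temperature in °C, $\delta_c$ is the $\delta^{18}\mathrm{O}$ of the calcite and $\delta_w$ that of the water from which it precipitated; lower carbonate $\delta^{18}\mathrm{O}$ thus records warmer water (or less global ice).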
Stable carbon isotope composition is another widely used proxy for interpreting paleoenvironmental conditions. Surface temperature fluctuations from the Paleocene to the Miocene have been established based on carbon isotope data from foraminifera in the Antarctic region. [ 25 ] The organic matter preserved in sediments records paleoecosystems, and its carbon isotope composition has also been utilized to reconstruct paleoclimatic evolution. For example, Rogers and Koons (1969) have reported that carbon isotope ratios derived from organic matter in Quaternary marine sediments in the Gulf of Mexico correlate well with Pleistocene climate changes. [ 27 ]
Chen et al. (2011) have documented ancient climate fluctuations since the last glacial maximum based on soil samples in Tibet. [ 28 ] Other sources of organic carbon isotopes used as proxies for paleoenvironmental reconstruction include lacustrine deposits for lake-level variations, [ 29 ] fossilized vertebrates for precipitation fluctuations, [ 30 ] and oil shales for paleoecological and paleoclimatic conditions. [ 31 ]
Lipids: In marine sediments, a stable lipid called IP25 (Ice Proxy with 25 carbon atoms), which is biosynthesized by sea-ice-dwelling diatoms, has been found to be generally related to spring sea-ice cover in the Arctic region; this proxy can thus be used to reconstruct sea-ice coverage. [ 32 ] A different biomarker, IPSO25 (Ice Proxy Southern Ocean with 25 carbon atoms), has been documented as a useful proxy for sea-ice cover in the Antarctic region. [ 33 ]
Among all the lithological indicators, ice-rafted debris (IRD) is the most useful tool for reconstructing paleoclimate. High concentrations of IRD are evidence of glacial intervals during which icebergs likely traveled far from the polar regions. In the South Pacific, IRD has been used as a proxy for Cenozoic glaciation, and a glaciation history has been established for the Subantarctic region. This history is also supported by foraminiferal species diversity data. [ 34 ]
In the western Arctic Ocean, investigation of ice-rafted debris has identified at least six glacial intervals in the last 1 million years. [ 35 ] Deep-sea cores with high rates of sedimentation allow high-resolution analyses of IRD. In the North Pacific, records of IRD have delineated interstadials (short warm events within glacial intervals), which can be correlated with similar events in the North Atlantic. [ 36 ]
Carbonate in marine sediments predominantly comes from calcifying organisms, with a minor contribution from diagenesis and precipitation. Biogenic calcium carbonate has two polymorphs: calcite, produced by foraminifera and coccolithophores, and aragonite, produced by corals and pteropods. While the distribution of foraminifera is generally global, that of corals is subtropical to tropical. Hence the distribution of fossil corals is commonly used as a proxy for paleolatitude. Kiessling et al. (1999) compiled a database of the “Phanerozoic reefs”, including their paleopositions, for paleoclimatological reconstructions. [ 37 ] Maillet et al. (2021), based on the distribution of Carboniferous coral reefs, demonstrated warm paleoclimatic conditions during the Mississippian, characterized by widespread coral reefs on the supercontinent of Pangea, followed by early Pennsylvanian cooling marked by the rare occurrence of coral reefs. [ 38 ]
Marine carbonate ooids form in warm, supersaturated, shallow, highly agitated intertidal marine environments, and their presence in the geological record plays a key role in paleoclimatic and paleogeographic reconstructions. Huang et al. (2017), for example, based on the distribution of Permian ooids and glaciomarine diamictites, repositioned the Baoshan Block of southwestern China with respect to the other Gondwana continents. [ 39 ] | https://en.wikipedia.org/wiki/Global_Paleoclimate_Indicators |
The G7 -led Global Partnership Against the Spread of Weapons and Materials of Mass Destruction (Global Partnership) is an international security initiative announced at the 2002 G8 summit in Kananaskis, Canada, in response to the September 11 attacks . It is the primary multilateral group that coordinates funding and in-kind support to help vulnerable countries around the world combat the spread of weapons and materials of mass destruction (WMDs). [ 1 ]
The Global Partnership began as a 10-year, US$20 billion initiative aimed at addressing the threat of WMD proliferation to non-state actors and states of proliferation concern. The initial focus was on programming in Russia and other countries of the Former Soviet Union (FSU) to mitigate serious threats posed by Soviet-era WMD legacies. [ 2 ] [ 3 ] Specific priorities included: destroying stockpiles of chemical weapons , dismantling decommissioned nuclear submarines , safeguarding/disposing of fissile material , and the redirection of former weapons scientists. [ 4 ] In recognition of the Global Partnership’s success and the increasingly global nature of WMD proliferation and terrorism challenges, at the 2008 G8 Summit in Toyako, Japan, leaders agreed to expand the geographic focus of the Global Partnership beyond Russia and the FSU, and to target WMD proliferation threats wherever they presented. Additionally, at the 2011 G8 Summit in Deauville, France, G8 leaders extended the mandate of the Global Partnership beyond its original 10-year timeline (based on work undertaken by Canada during its 2010 G8 Presidency ). [ 5 ]
To date the Global Partnership community has delivered more than US$25 billion in tangible threat-reduction programming and continues to lead international efforts to mitigate all manner of CBRN threats around the world. [ 6 ] As outlined in the Global Partnership’s annual Programming Annex, in 2020 a total of 245 Projects valued at US$669 million (or €555 million) were implemented by Members in dozens of countries in every region of the world. [ 7 ] Many additional contributions were measured not by financial means, but by the leadership and diplomatic efforts of members in the areas of threat reduction or non-proliferation. [ 7 ]
The Global Partnership delivers threat reduction programming in four priority areas: nuclear and radiological security, biological security , chemical security , and implementation of the United Nations Security Council Resolution (UNSCR) 1540 .
Members of the Global Partnership coordinate and collaborate on an ongoing basis to develop and deliver projects and programs to mitigate all manner of threats posed by chemical, biological, radiological and nuclear (CBRN) weapons and related materials. Under the leadership of the rotational G7 Presidency, Global Partnership partners come together twice annually as the Global Partnership Working Group (GPWG) to review progress, assess the threat landscape and discuss where and how members can meaningfully engage to prevent terrorists and states of proliferation concern from acquiring and using weapons of mass destruction. [ 8 ]
There are four sub-working groups subsumed under the GPWG. They aim to facilitate regular dialogue between experts on the Global Partnership's thematic priorities: [ 8 ]
Although originally launched as a G8 initiative and retaining strong affiliation with the G7 (e.g. the Global Partnership Presidency rotates with that of the G7), the Global Partnership currently includes 30 active member countries and the European Union . [ 8 ] Russia does not currently participate, consistent with G7 exclusion.
The Global Partnership coordinates and collaborates with a variety of international organizations , initiatives and non-governmental organizations (NGOs) with similarly aligned objectives. Some of these key partners include the International Atomic Energy Agency (IAEA), the International Police Organization (INTERPOL) , the Organisation for the Prohibition of Chemical Weapons (OPCW), the United Nations Security Council Resolution 1540 Committee , the World Health Organization , and other United Nations Agencies. [ 9 ] | https://en.wikipedia.org/wiki/Global_Partnership_Against_the_Spread_of_Weapons_and_Materials_of_Mass_Destruction |
The Global Powder Metallurgy Database ( GPMD ) is an online searchable database that has been developed as the result of a joint project between leading regional powder metallurgy (PM) trade associations, the EPMA and its sister organisations in Japan ( JPMA ) and North America ( MPIF ).
This database was created in response to a worldwide recognition that the absence of a readily accessible source of design data was acting as a significant impediment to the wider application of PM products.
Primarily aimed at designers and engineers in the industries using PM products, it is designed to provide verified physical, mechanical and fatigue data for a range of commercially available PM materials.
Development work culminated in the initial launch of the database at the PM World Congress in Vienna in October 2004. The content of the database, at this launch, was restricted to data on low alloy ferrous and stainless steel PM structural part grades and bronze and iron-based PM bearing grades.
However, enhancement and extension of content and searching capability has been an ongoing process ever since. In January 2007, the content was expanded with the addition of data on non-ferrous PM structural part grades, followed, in March 2007, by the introduction of a new section covering data on Metal Injection Moulding (MIM) materials.
The latest extension to capability involves making full SN Fatigue Curve "pages" (comprising SN curves and details of individual test points) accessible to searchers. The initial content comprises over 130 SN Curve pages, covering a range of Fe-Cu-C grades and based on published information that has been analysed and collated by the group led by Professor Paul Beiss at the Technical University of Aachen. The collated SN curves cover a range of material processing conditions and density levels and a range of fatigue testing conditions (fatigue loading mode, mean stress level and notch factor).
In assembling the GPMD content, a broad range of mechanical, fatigue and physical property data has been collected from the associations’ memberships and rigorously evaluated by regional accreditation committees. However, the database's primary targets are designers and material specifiers in end-user industries who may have no prior knowledge of PM. Therefore, the bulk of the search structure has been designed to take such a searcher to the point where they can decide that they ought to contact a PM parts manufacturer to discuss a potential application in more detail.
This engineering-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Global_Powder_Metallurgy_Property_Database |
The Global Public Health Intelligence Network (GPHIN) is an electronic public health early warning system developed by Canada's Public Health Agency , and is part of the World Health Organization's (WHO) Global Outbreak Alert and Response Network (GOARN). This system monitors internet media, such as news wires and websites, in nine languages in order to help detect and report potential disease or other health threats around the world. [ 1 ] The system has been credited with detecting early signs of the 2009 swine flu pandemic in Mexico , Zika in West Africa , H5N1 in Iran , MERS and Ebola .
The system came to greater public awareness after it was revealed that Canada's Federal Government effectively shut it down in May 2019, ultimately preventing the system from providing an early warning of COVID-19 . In August 2020, the system began issuing alerts again.
Ronald St. John, then a government epidemiologist, created GPHIN in 1994 as a way to improve Canada's intelligence surrounding outbreaks. [ 2 ] Growing in parallel with ProMED-mail , [ 3 ] GPHIN was Canada's major contribution to the World Health Organization (WHO), which at one point credited the system with supplying 20 per cent of its "epidemiological intelligence" and described the system as "the foundation" of a global pandemic early-warning system. [ 4 ] [ 5 ]
After the 2003 SARS outbreak , the system became central to Canada's pandemic preparedness. [ 4 ] The system, which eventually fell under the Centre for Emergency Preparedness and Response in the PHAC, [ 6 ] detected early signs of the 2009 swine flu pandemic in Mexico , Zika in West Africa , H5N1 in Iran , MERS and Ebola . [ 5 ] [ 4 ]
A July 2020 investigation by The Globe and Mail revealed that Canada's Federal Government effectively shut down GPHIN in May 2019, ultimately preventing the system from providing an early warning of COVID-19 . After the government directed a shift to a more domestic focus, the Public Health Agency of Canada (PHAC) assigned employees to different tasks in the department. [ 5 ] The shutdown was gradual: in 2009, there were 877 alerts; 198 in 2013; and only 21 in 2018. The final alert came on May 24, 2019. [ 7 ] In August 2020, more than 400 days after going silent, the system began issuing alerts again. [ 8 ]
In early 2020 before COVID-19 was declared a pandemic , scientists at PHAC "were told to focus on official information coming out of China, rather than unofficial intelligence. Some said they struggled to convey urgent information up the chain of command." [ 2 ] Internal PHAC emails obtained by The Globe indicate that Sally Thornton, vice-president of the Health Security Infrastructure Branch, and Jim Harris, director-general of the Centre for Emergency Preparedness and Response, oversaw the decision that curtailed alerts. [ 9 ]
The [ Public Health Agency of Canada ] was not adequately prepared to respond to a pandemic, and it did not address long-standing health surveillance information issues prior to the pandemic to support its readiness. ... We will never be able to tell Canadians what would have happened if the preparedness issues had been better addressed before the pandemic hit. Perhaps the government's pandemic response would have been different. [ 10 ]
Following The Globe and Mail's report, Canada's Auditor-General began an investigation into why the program was curtailed. [ 11 ] Released in March 2021, the Auditor-General's report described PHAC as ill-prepared for the pandemic. The report focused primarily on the silencing of GPHIN and the inaccurate risk assessments that replaced it. [ 10 ] In September 2020, Canada's Health Minister Patty Hajdu ordered an independent federal review to look into both the shutdown of the system and allegations that some scientists' voices were marginalized. [ 4 ] Former national security adviser Margaret Bloodworth , former deputy chief public health officer Paul Gully, and Mylaine Breton, Canada Research Chair in Clinical Governance on Primary Health Care, led Hajdu's review. [ 12 ] [ 13 ] Canada's Chief Public Health Officer Theresa Tam announced her support of the review, [ 14 ] while Prime Minister Justin Trudeau blamed funding cuts made prior to 2015 by the previous Conservative government under Stephen Harper . [ 15 ]
PHAC's Centre for Emergency Preparedness and Response (CEPR) manages GPHIN. [ 16 ] [ 9 ] In October 2020, Jim Harris was director-general of CEPR. [ 9 ]
In September 2020 Brigitte Diogo replaced Sally Thornton as vice-president of the Health Security Infrastructure Branch, the bureaucratic division overseeing GPHIN among other operations. [ 17 ] A week later, PHAC President Tina Namiesniowski announced her resignation, with Trudeau nominating Iain Stewart as her successor. [ 18 ] | https://en.wikipedia.org/wiki/Global_Public_Health_Intelligence_Network |
The Global Strategic Trends Programme was established in 2001 to research and forecast potential trends that shape and inform the future strategic context. It is published by the Development, Concepts and Doctrine Centre (DCDC) which is under the UK's Strategic Command based in Shrivenham , Wiltshire .
One of the main findings of "Global Strategic Trends out to 2040" is that the era out to 2040 will be a time of transition, characterised by instability both in the relations between states, and in the relations between groups within states. During this timeframe significant global trends will include climate change, rapid population growth, resource scarcity, a resurgence in ideology and a shift in global power from West to East. The struggle to establish an effective system of global governance is likely to be a central theme of the era. [ 1 ]
The analysis conducted in Global Strategic Trends was recently highlighted in the UK Public Administration Select Committee Report on Who does UK National Strategy? [ 2 ]
Comments included:
Professor Peter Hennessy - " You have to have as good a system for horizon scanning as you possibly can, with all the necessary caveats. For example, we haven’t talked about it yet, but the one that I find the most helpful was an institutionalisation of something that was done in the last defence review, the DCDC people at Shrivenham, the “Shrivenham Scans” as I call them, I find them absolutely fascinating... ...they produced a very good one [scan], the bulk of which was made public in time for this review and, as far as I can see, it's having no salience at all in the way the SDSR is being cut—yet another example of an own goal and being less than the sum of our parts. But I'm not defeatist in the way that you might—I suspect you're teasing me on this because you're not an opt out of the world man either, are you? It's not for me to ask you questions."
Professor Hew Strachan - "Strategic trends stress those things that are likely to happen to the world, but not much of what they do really focuses on what the United Kingdom is trying to do. It’s extraordinary that DCDC is at Shrivenham, at that distance, (quite apart from the other things that have happened to it), rather than in London and central to the processes that we’re talking about. Professor Hennessy mentioned just now the publication last year of a document called “The Future Character of Conflict”, which was designed to address precisely what its title says, but its arguments are nowhere evident in current thinking in relation to strategy, let alone in relation to the Strategic Defence and Security Review."
Mr Tom Mckane (Director Strategy MOD) - "As to how these documents are produced, within the department we have the benefit of the Development Concepts and Doctrine Centre, who produce long range views of the world. Their document “Global Strategic Trends” I think you are familiar with. That type of document feeds into the work of the staff at the centre of the department who are responsible for assisting ministers and the Defence Board to think about defence strategy."
Recommendation in Written evidence submitted by Professor Julien Lindley-French - " Cross-government structures under the NSC/Cabinet Office should ideally include a Strategy Group made up of both officials and non-government experts to build on the Strategic Trends work of DCDC with a specific remit to establish likely forecasts and context for Intelligence and Planning." | https://en.wikipedia.org/wiki/Global_Strategic_Trends_Programme |
Global biodiversity is the measure of biodiversity on planet Earth and is defined as the total variability of life forms. More than 99 percent of all species [ 1 ] that ever lived on Earth are estimated to be extinct . [ 2 ] [ 3 ] Estimates on the number of Earth's current species range from 2 million to 1 trillion, but most estimates are around 11 million species or fewer. [ 4 ] About 1.74 million species were databased as of 2018, [ 5 ] and over 80 percent have not yet been described. [ 6 ] The total amount of DNA base pairs on Earth, as a possible approximation of global biodiversity, is estimated at 5.0 x 10^37, and weighs 50 billion tonnes. [ 7 ] In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon). [ 8 ]
In other related studies, around 1.9 million extant species are currently believed to have been described, [ 9 ] but some scientists believe 20% are synonyms, reducing the total valid described species to 1.5 million. In 2013, a study published in Science estimated there to be 5 ± 3 million extant species on Earth, although that is disputed. [ 10 ] Another study, published in 2011 by PLoS Biology , estimated there to be 8.7 million ± 1.3 million eukaryotic species on Earth. [ 11 ] Some 250,000 valid fossil species have been described, but this is believed to be a small proportion of all species that have ever lived. [ 12 ]
Global biodiversity is affected by extinction and speciation . The background extinction rate varies among taxa, but it is estimated that there is approximately one extinction per million species-years. Mammal species, for example, typically persist for 1 million years. Biodiversity has grown and shrunk in Earth's past due to (presumably) abiotic factors such as extinction events caused by geologically rapid changes in climate. Climate change 299 million years ago was one such event. A cooling and drying resulted in catastrophic rainforest collapse and subsequently a great loss of diversity, especially of amphibians. [ 13 ]
Chapman (2005, 2009) [ 9 ] has attempted to compile perhaps the most comprehensive recent statistics on numbers of extant species, drawing on a range of published and unpublished sources, and has come up with a figure of approximately 1.9 million estimated described taxa, as against a possible total of between 11 and 12 million anticipated species overall (described plus undescribed), though other reported values for the latter vary widely. In many cases, the values given for "Described" species are an estimate only (sometimes a mean of reported figures in the literature), since for many of the larger groups in particular, comprehensive lists of valid species names do not currently exist. For fossil species, exact or even approximate numbers are harder to find; Raup (1986) [ 15 ] includes data based on a compilation of 250,000 fossil species, so the true number is undoubtedly somewhat higher than this. The number of described species is increasing by around 18,000–19,000 extant species, and approaching 2,000 fossil species, each year, as of 2012. [ 16 ] [ 17 ] [ 18 ] The number of published species names is higher than the number of described species, sometimes considerably so, on account of the publication, through time, of multiple names ( synonyms ) for the same accepted taxon in many cases.
Based on Chapman's (2009) report, [ 9 ] the estimated numbers of described extant species as of 2009 can be broken down as follows:
However the total number of species for some taxa may be much higher.
In 1982, Terry Erwin published an estimate of global species richness of 30 million, by extrapolating from the numbers of beetles found in a species of tropical tree. In one species of tree, Erwin identified 1200 beetle species, of which he estimated 163 were found only in that type of tree. [ 26 ] Given the 50,000 described tropical tree species, Erwin suggested that there are almost 10 million beetle species in the tropics. [ 27 ] In 2011 a study published in PLoS Biology estimated there to be 8.7 million ± 1.3 million eukaryotic species on Earth. [ 11 ]
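The arithmetic behind Erwin's figure can be reproduced directly from the numbers quoted above (his full 30-million estimate involved further assumptions about the beetle fraction of canopy arthropods and about ground faunas, which are not shown here):

```python
host_specific_beetles_per_tree = 163     # beetle species found only on the studied tree species
described_tropical_tree_species = 50_000

canopy_beetle_estimate = host_specific_beetles_per_tree * described_tropical_tree_species
print(canopy_beetle_estimate)            # 8,150,000 -> "almost 10 million" beetle species
```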
By 2017, most estimates projected there to be around 11 million species or fewer on Earth. [ 4 ] A 2017 study estimated there are at least 1 to 6 billion species, 70–90% of which are bacteria. [ 4 ] A May 2016 study based on scaling laws estimated that 1 trillion species (overwhelmingly microbes) are on Earth currently, with only one-thousandth of one percent described, [ 28 ] [ 29 ] though this has been controversial, and a 2019 study of varied environmental samples of 16S ribosomal RNA estimated that there exist 0.8–1.6 million species of prokaryotes . [ 30 ]
After the Convention on Biological Diversity was signed in 1992, biological conservation became a priority for the international community. There are several indicators used that describe trends in global biodiversity. However, there is no single indicator for all extant species, as not all have been described and measured over time. There are different ways to measure changes in biodiversity. The Living Planet Index (LPI) is a population-based indicator that combines data from individual populations of many vertebrate species to create a single index. [ 31 ] The global LPI reported in 2012 showed a decline of 28%. There are also separate indices for temperate and tropical species, and for marine and terrestrial species.
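The LPI is built by aggregating relative population changes; a minimal sketch of the idea (assuming, for illustration, a plain geometric mean over indexed population trends rather than the full modelled procedure used for the published index) is:

```python
import numpy as np

# Each row is one vertebrate population, indexed to 1.0 in the base year;
# columns are successive years. Values here are dummy data.
populations = np.array([
    [1.0, 0.9, 0.8],   # declining population
    [1.0, 1.1, 1.2],   # increasing population
    [1.0, 0.7, 0.5],   # steeply declining population
])

# Geometric mean across populations for each year gives the index,
# so equal proportional gains and losses offset one another.
lpi = np.exp(np.log(populations).mean(axis=0))
print(lpi)             # an index of 0.72 would correspond to a 28% decline
```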
The Red List Index is based on the IUCN Red List of Threatened Species and measures changes in conservation status over time and currently includes taxa that have been completely categorized: mammals, birds, amphibians and corals. [ 32 ] The Global Wild Bird Index is another indicator that shows trends in population of wild bird groups on a regional scale from data collected in formal surveys. [ 33 ] Challenges to these indices due to data availability are taxonomic gaps and the length of time of each index.
The Biodiversity Indicators Partnership was established in 2006 to assist in the development and advancement of biodiversity indicators and to increase their availability.
Biodiversity loss happens when plant or animal species disappear completely from Earth ( extinction ) or when there is a decrease or disappearance of species in a specific area. Biodiversity loss means that there is a reduction in biological diversity in a given area. The decrease can be temporary or permanent. It is temporary if the damage that led to the loss is reversible in time, for example through ecological restoration . If this is not possible, then the decrease is permanent. The cause of most of the biodiversity loss is, generally speaking, human activities that push the planetary boundaries too far. [ 34 ] [ 35 ] [ 36 ] These activities include habitat destruction [ 37 ] (for example deforestation ) and land use intensification (for example monoculture farming). [ 38 ] [ 39 ] Further problem areas are air and water pollution (including nutrient pollution ), over-exploitation , invasive species [ 40 ] and climate change . [ 37 ]
Many scientists, along with the Global Assessment Report on Biodiversity and Ecosystem Services , say that the main reason for biodiversity loss is a growing human population, as this leads to human overpopulation and excessive consumption . [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] Others disagree, saying that loss of habitat is caused mainly by "the growth of commodities for export" and that population has very little to do with overall consumption. More important are wealth disparities between and within countries. [ 46 ] In any case, all contemporary biodiversity loss has been attributed to human activities. [ 47 ]
Climate change is another threat to global biodiversity. [ 48 ] [ 49 ] For example, coral reefs —which are biodiversity hotspots —will be lost by the year 2100 if global warming continues at the current rate. [ 50 ] [ 51 ] Still, it is the general habitat destruction (often for expansion of agriculture), not climate change, that is currently the bigger driver of biodiversity loss. [ 52 ] [ 53 ] Invasive species and other disturbances have become more common in forests in the last several decades. These tend to be directly or indirectly connected to climate change and can cause a deterioration of forest ecosystems. [ 54 ] [ 55 ]
Groups that care about the environment have been working for many years to stop the decrease in biodiversity. Nowadays, many global policies include activities to stop biodiversity loss. For example, the UN Convention on Biological Diversity aims to prevent biodiversity loss and to conserve wilderness areas . However, a 2020 United Nations Environment Programme report found that most of these efforts had failed to meet their goals. [ 56 ] For example, of the 20 biodiversity goals laid out by the Aichi Biodiversity Targets in 2010, only six were "partially achieved" by 2020. [ 57 ] [ 58 ] | https://en.wikipedia.org/wiki/Global_biodiversity |
Global change, in a broad sense, refers to planetary-scale changes in the Earth system. It is most commonly used to encompass the variety of changes connected to the rapid increase in human activities which started around the mid-20th century, i.e., the Great Acceleration . While the concept stems from research on climate change , it is used to adopt a more holistic view of the observed changes. Global change refers to changes of the Earth system, treated in its entirety with interacting physicochemical and biological components as well as the impacts human societies have on those components and vice versa . [ 1 ] Therefore, the changes are studied by means of Earth system science .
The first global efforts to address the environmental impact of human activities worldwide predate the introduction of the concept of global change. Most notably, the United Nations Conference on the Human Environment was held in Stockholm in 1972, leading to the United Nations Environment Programme . While these efforts were global and effects across the globe were considered, the Earth system approach had not yet been developed at this time. They did, however, start a chain of events that led to the emergence of the field of global change research.
The concept of global change was coined as researchers investigating climate change began to realise that not only the climate but also other components of the Earth system were changing at a rapid pace, changes which can be attributed to human activities and which follow dynamics similar to many societal changes. [ 1 ] It has its origins in the World Climate Research Programme , or WCRP, an international programme led by Bert Bolin , which at the time of its establishment in 1980 focused on determining whether the climate is changing, whether it can be predicted, and whether humans cause the change. The first results not only confirmed human impact but led to the realisation of a larger phenomenon of global change. Subsequently Bert Bolin , together with James McCarthy , Paul Crutzen , Hans Oeschger and others, started the International Geosphere-Biosphere Programme , or IGBP, under the sponsorship of the International Council for Science . [ 2 ]
In 2001, a conference was held in Amsterdam focused on the four major global-change research programmes of the time: WCRP, IGBP, the International Human Dimensions Programme (IHDP) and Diversitas (now continued as Future Earth ). The conference was titled Challenges of a Changing Earth: Global Change Open Science Conference and concluded with The Amsterdam Declaration on Global Change , whose opening paragraph describes the Earth system as a single, self-regulating system made up of interacting physical, chemical, biological and human components. [ 3 ]
In the past, the main drivers of planetary-scale changes have been solar variation , plate tectonics , volcanism , proliferation and abatement of life, meteorite impact, resource depletion , changes in Earth's orbit around the Sun, and changes in the tilt of Earth on its axis. There is overwhelming evidence that now the main driver of global change is the growing human population's demand for resources; some experts and scientists have described this phenomenon as the Anthropocene epoch . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] In the last 250 years, human-caused change has accelerated and caused climate change , widespread species extinctions , fish-stock collapse, desertification , ocean acidification , ozone depletion , pollution, and other large-scale shifts. [ 9 ] Recent analyses indicate that human-induced warming reached 1.14°C (range: 0.9 to 1.4°C) averaged over the 2013–2022 decade and 1.26°C (range: 1.0 to 1.6°C) in 2022. Over this period, human-induced warming has been increasing at an unprecedented rate of over 0.2°C per decade. This acceleration is attributed to record-high greenhouse gas emissions, averaging 54 ± 5.3 GtCO₂e annually, coupled with a decline in aerosol-induced cooling effects. [ 10 ]
Scientists working on the International Geosphere-Biosphere Programme have said that Earth is now operating in a "no analogue" state. [ 11 ] Measurements of Earth system processes, past and present, have led to the conclusion that the planet has moved well outside the range of natural variability in the last half million years at least. Homo sapiens have been around for about 300,000 years.
Humans have always altered their environment. The advent of agriculture around 10,000 years ago led to a radical change in land use that still continues. But, the relatively small human population had little impact on a global scale until the start of the Industrial Revolution in 1750. This event, followed by the invention of the Haber-Bosch process in 1909, which allowed large-scale manufacture of fertilizers , led directly to rapid changes to many of the planet's most important physical, chemical and biological processes.
The 1950s marked a shift in gear: global change began accelerating. Between 1950 and 2010, the population more than doubled. In that time, rapid expansion of international trade coupled with upsurges in capital flows and new technologies, particularly information and communication technologies, led to national economies becoming more fully integrated. There was a tenfold increase in economic activity and the world's human population became more tightly connected than ever before. The period saw sixfold increases in water use and river damming. About 70 percent of the world's freshwater resource is now used for agriculture. This rises to 90 percent in India and China. Half of the Earth's land surface had now been domesticated. By 2010, urban population, for the first time, exceeded rural population. And there has been a fivefold increase in fertilizer use. Indeed, manufactured reactive nitrogen from fertilizer production and industry now exceeds global terrestrial production of reactive nitrogen. Without artificial fertilizers there would not be enough food to sustain a population of seven billion people.
These changes to the human sub-system have a direct influence on all components of the Earth system. The chemical composition of the atmosphere has changed significantly. Concentrations of important greenhouse gases , carbon dioxide , methane and nitrous oxide are rising fast. Over Antarctica a large hole in the ozone layer appeared. Fisheries collapsed: most of the world's fisheries are now fully or over-exploited. Thirty percent of tropical rainforests disappeared.
In 2000, Nobel prize-winning scientist Paul Crutzen announced the scale of change is so great that in just 250 years, human society has pushed the planet into a new geological era: the Anthropocene . This name has stuck and there are calls for the Anthropocene to be adopted officially. If it is, it may be the shortest of all geological eras. Evidence suggests that if human activities continue to change components of the Earth system, which are all interlinked, this could heave the Earth system out of one state and into a new state.
Global change in a societal context encompasses social, cultural, technological, political, economic and legal change. Terms closely related to global change and society are globalization and global integration. Globalization began with long-distance trade and urbanism . The first record of long-distance trading routes is in the third millennium BC. Sumerians in Mesopotamia traded with settlers in the Indus Valley , in modern-day Pakistan and northwest India.
Since 1750, but more significantly, since the 1950s, global integration has accelerated. This era has witnessed incredible global changes in communications, transportation, and computer technology. Ideas, cultures, people, goods, services and money move around the planet with ease. This new global interconnectedness and free flow of information has radically altered notions of other cultures, conflicts , religions and taboos . Now, social movements can and do form at a planetary scale.
Evidence, if more were needed, of the link between social and environmental global change came with the 2008–2009 global financial crisis . The crisis pushed the planet's main economic powerhouses, the United States, Europe and much of Asia, into recession. According to the Global Carbon Project , the annual growth rate of global carbon dioxide emissions fell from around 3.4% between 2000 and 2008 to about 2% in 2008. [ 12 ]
Societies everywhere are facing unprecedented challenges as a result of rapid global change (including climate change). In such a context there is a need to contribute to transformative social learning systems and to the development of green skills learning pathways. Through this focus, the Chair's work contributes to enhancing capacity for climate-resilient development and a sustainable, socially just society in South Africa and Africa more widely. [ 13 ] Additionally, scholars underscore that climate change is intricately associated with global inequality. Industrialized nations, which have contributed most to historical greenhouse gas emissions, are often more equipped to adapt, whereas low-income countries, having contributed the least, suffer the most severe consequences. This imbalance raises ethical concerns and underscores the need for climate justice in international policy discussions. [ 14 ]
Humans are altering the planet's biogeochemical cycles in a largely unregulated way with limited knowledge of the consequences. [ 11 ] Without steps to effectively manage the Earth system – the planet's physical, chemical, biological and social components – it is likely there will be severe impacts on people and ecosystems. Perhaps the largest concern is that a component of the Earth system, for example, an ocean circulation , the Amazon rainforest , or Arctic sea ice, will reach a tipping point and flip from its current state to another state: flowing to not flowing, rainforest to savanna , or ice to no ice. A domino effect could ensue with other components of the Earth system changing state rapidly.
Intensive research over the last 20 years has shown that tipping points do exist in the Earth system, and that wide-scale change can be rapid, occurring over a matter of decades. Potential tipping points have been identified and attempts have been made to quantify thresholds. To date, however, the best efforts can only identify loosely defined " planetary boundaries " beyond which tipping points may lie; their precise locations remain elusive.
There have been calls for a better way to manage the environment on a planetary scale, sometimes referred to as managing "Earth's life support system". [ 15 ] The United Nations was formed to stop wars and provide a platform for dialogue between countries. It was not created to avoid major environmental catastrophe on regional or global scales. But several international environmental conventions exist under the UN, including the Framework Convention on Climate Change , Montreal Protocol , Convention to Combat Desertification , and Convention on Biological Diversity . Additionally, the UN has two bodies charged with coordinating environmental and development activities, the United Nations Environment Programme (UNEP) and the United Nations Development Programme (UNDP).
In 2004, the IGBP published "Global Change and the Earth System, a planet under pressure." [ 11 ] The publication's executive summary concluded: "An overall, comprehensive, internally consistent strategy for stewardship of the Earth system is required". It stated that a research goal is to define and maintain a stable equilibrium in the global environment.
In 2007, France called for UNEP to be replaced by a new and more powerful organization called the " United Nations Environment Organization ". The rationale was that UNEP's status as a "programme", rather than an "organization" in the tradition of the World Health Organization or the World Meteorological Organization , weakened it to the extent that it was no longer fit for purpose given current knowledge of the state of the planet. [ 16 ] | https://en.wikipedia.org/wiki/Global_change |
Global coordination level ( GCL ) is a computational method that evaluates the system-wide dependency in multivariate data , by calculating the distance correlation between random subsets of the variables. Originally applied to gene expression data, GCL assesses the level of coordination between genes , which are fundamentally linked in performing tasks and biological functions. Unlike traditional methods that require precise knowledge of pairwise interactions between genes , GCL can evaluate coordination without such information. A GCL value of zero signifies independent gene expression , while values above zero indicate gene-to-gene regulatory interactions. For instance, when GCL is applied to known genetic pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, it yields significantly positive values, while random subsets of genes or mock pathways with similar gene expression levels show very low GCL values. Additionally, GCL can be useful in analyzing high-dimensional ecological and biochemical dynamics.
Genes interact with each other in a complex structure known as the gene regulatory network , which plays a crucial role in implementing various biological functions and performing different tasks within cells . However, inferring the precise pairwise interactions of the gene regulatory network remains challenging due to the large number of functional genes and the inherent stochasticity of these systems. [ 1 ] [ 2 ] Despite these challenges, certain features of the gene regulatory network can still be extracted without fully inferring all the interactions. For instance, the network connectivity, which refers to the density of actual gene-gene interactions compared to all possible interactions, may have important implications for general cellular processes .
The calculation of the GCL is based on multivariate dependencies among genes in a given cohort of cells. This involves a repeated procedure of randomly selecting subsets of genes and calculating the distance correlation between them, as described in the original work. [ 3 ] By averaging over many such gene subsets, a single numerical value, the global coordination level (GCL), is obtained to assess the overall dependencies between the genes.
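A minimal sketch of this repeated-split procedure is shown below in Python. It is an illustration, not the authors' reference implementation: the two-way split of the genes, the number of splits and the function names are assumptions made for this example.

```python
# Sketch of a GCL-style statistic: average distance correlation between
# random halves of the gene set, computed over a (cells x genes) matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_correlation(x, y):
    """Multivariate distance correlation between two (cells x genes) matrices."""
    def double_centered(a):
        d = squareform(pdist(a))  # pairwise Euclidean distances between cells
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = double_centered(x), double_centered(y)
    dcov2_xy = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2_xy / denom)) if denom > 0 else 0.0

def gcl(expression, n_splits=100, seed=None):
    """expression: (n_cells, n_genes) array; returns the averaged statistic."""
    rng = np.random.default_rng(seed)
    n_genes = expression.shape[1]
    scores = []
    for _ in range(n_splits):
        perm = rng.permutation(n_genes)
        half = n_genes // 2
        scores.append(distance_correlation(expression[:, perm[:half]],
                                            expression[:, perm[half:]]))
    return float(np.mean(scores))
```

Note that the plain sample estimator above is biased upward for small cell numbers; a bias-corrected distance correlation can be substituted without changing the overall structure of the calculation.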
However, there are several important pre- and post-processing steps that need to be taken into account to ensure the accuracy and reliability of the GCL. Firstly, clustering methods should be applied to divide the analyzed cohort of cells into subsets, and the GCL should be calculated separately for each subset or the largest one to ensure homogeneity . Secondly, cells that deviate significantly from the rest of the cells (referred to as ' outliers ') or cells that are too similar to each other (referred to as 'inliers') should be filtered out to avoid their undue influence on the GCL calculation.
Additionally, jackknife analysis, which involves systematically omitting subsets of cells from the analysis and recalculating the GCL, should be performed to test the stability of the results. These steps are necessary because the GCL, like other correlation measures, can be sensitive to unusual cells and heterogeneous cohorts, especially in the context of sparse, noisy, and outlier-prone scRNA -seq data.
Aging: Stochastic aberration of transcriptional regulation is a dominant factor in the process of aging . [ 4 ] When GCL is assessed in multiple single-cell RNA-sequencing datasets, a decline of GCL with age is consistently observed across various organisms and cell types. Notably, significant decreases in GCL were found in mouse hematopoietic stem cells based on single-cell RNA-seq data, supporting the hypothesis of aging as dys-differentiation. This idea, originally posited by Richard Cutler in the 1970s, suggests that cells deviate from their proper state of differentiation as they age, as evidenced by the activation of genes that should normally be silent in aged tissues . [ 5 ]
Measuring biological variability: The GCL decreases in cohorts of cells with increased 'biological variability' only when it arises from gene interactions. The GCL can be used to assess and compare the ratio between introduced biological and technical variability in cohorts with similar cell-to-cell variability. [ 6 ] | https://en.wikipedia.org/wiki/Global_coordination_level |
The global distance test ( GDT ), also written as GDT_TS to represent "total score", is a measure of similarity between two protein structures with known amino acid correspondences (e.g. identical amino acid sequences ) but different tertiary structures . It is most commonly used to compare the results of protein structure prediction to the experimentally determined structure as measured by X-ray crystallography , protein NMR , or, increasingly, cryoelectron microscopy .
The GDT metric was developed by Adam Zemla at Lawrence Livermore National Laboratory and originally implemented in the Local-Global Alignment ( LGA ) program. [ 1 ] [ 2 ] It is intended as a more accurate measurement than the common root-mean-square deviation (RMSD) metric, which is sensitive to outlier regions created, for example, by poor modeling of individual loop regions in a structure that is otherwise reasonably accurate. [ 1 ] The conventional GDT_TS score is computed over the alpha carbon atoms and is reported as a percentage, ranging from 0 to 100. In general, the higher the GDT_TS score, the more closely a model approximates a given reference structure.
GDT_TS measurements are used as major assessment criteria in the production of results from the Critical Assessment of Structure Prediction (CASP), a large-scale experiment in the structure prediction community dedicated to assessing current modeling techniques. [ 1 ] [ 3 ] [ 4 ] The metric was first introduced as an evaluation standard in the third iteration of the biennial experiment (CASP3) in 1998. [ 3 ] Various extensions to the original method have been developed; variations that account for the positions of the side chains are known as global distance calculations ( GDC ). [ 5 ]
The GDT score at a given cutoff is calculated from the largest set of amino acid residues whose alpha carbon atoms in the model structure fall within that distance cutoff of their positions in the experimental structure, after iteratively superimposing the two structures. By the original design, the GDT algorithm calculates 20 GDT scores, one for each of 20 consecutive distance cutoffs (0.5 Å , 1.0 Å, 1.5 Å, ... 10.0 Å). [ 2 ] For structure similarity assessment the GDT scores from several cutoff distances are intended to be used together, and scores generally increase with increasing cutoff. A plateau in this increase may indicate an extreme divergence between the experimental and predicted structures, such that no additional atoms are included within any cutoff of a reasonable distance. The conventional GDT_TS total score in CASP is the average result of cutoffs at 1, 2, 4, and 8 Å. [ 1 ] [ 6 ] [ 7 ]
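As a simplified illustration, the sketch below computes a GDT_TS-style score from per-residue alpha carbon deviations obtained after a single, fixed superposition. This is an assumption made to keep the example short; the actual LGA procedure instead searches many superpositions and, for each cutoff, keeps the largest residue set that can be fitted within it.

```python
# GDT_TS from per-residue C-alpha model-vs-reference distances (in angstroms),
# assuming the superposition has already been performed.
import numpy as np

def gdt_ts(distances_angstrom, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    d = np.asarray(distances_angstrom, dtype=float)
    fractions = [(d <= c).mean() for c in cutoffs]  # fraction of residues within each cutoff
    return 100.0 * float(np.mean(fractions))        # reported as a percentage

# Example: a small model in which most residues lie within 2 angstroms of the reference.
print(gdt_ts([0.4, 0.9, 1.6, 2.5, 3.8, 9.5]))
# A GDT_HA-style score would use the halved cutoffs (0.5, 1.0, 2.0, 4.0) instead.
```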
The original GDT_TS is calculated based on the superimpositions and GDT scores produced by the Local-Global Alignment (LGA) program. [ 1 ] A "high accuracy" version called GDT_HA is computed using smaller cutoff distances (half those of GDT_TS) and thus penalizes larger deviations from the reference structure more heavily. It was used in the high accuracy category of CASP7. [ 8 ] CASP8 defined a new "TR score", which is GDT_TS minus a penalty for residues clustered too closely together, meant to penalize steric clashes in the predicted structure that are sometimes introduced to game the cutoff-based GDT measure. [ 9 ] [ 10 ]
The primary GDT assessment uses only the alpha carbon atoms. To apply superposition‐based scoring to the amino acid residue side chains , a GDT‐like score called "global distance calculation for sidechains" (GDC_sc) was designed and implemented within the LGA program in 2008. [ 1 ] [ 5 ] Instead of comparing residue positions on the basis of alpha carbons, GDC_sc uses a predefined "characteristic atom" near the end of each residue for the evaluation of inter-residue distance deviations. An "all atoms" variant of the GDC score (GDC_all) is calculated using full-model information, and is one of the standard measures used by CASP's organizers and assessors to evaluate accuracy of predicted structural models. [ 5 ] [ 7 ] [ 11 ]
GDT scores are generally computed with respect to a single reference structure. In some cases, structural models with lower GDT scores to a reference structure determined by protein NMR are nevertheless better fits to the underlying experimental data. [ 12 ] Methods have been developed to estimate the uncertainty of GDT scores due to protein flexibility and uncertainty in the reference structure. [ 13 ] | https://en.wikipedia.org/wiki/Global_distance_test |
The global hectare ( gha ) is a measurement unit for the ecological footprint of people or activities and the biocapacity of the Earth or its regions. One global hectare is the world's annual amount of biological production for human use and human waste assimilation, per hectare of biologically productive land and fisheries.
It measures production and consumption of different products. It starts with the total biological production and waste assimilation in the world, including crops, forests (both wood production and CO 2 absorption), grazing and fishing. [ 1 ] The total of these kinds of production, weighted by the richness of the land they use, [ 1 ] is divided by the number of hectares used. Biologically productive areas include cropland, forest and fishing grounds, and do not include deserts, glaciers and the open ocean. [ 2 ]
"Global hectares per person" refers to the amount of production and waste assimilation per person on the planet. In 2012 there were approximately 12.2 billion global hectares of production and waste assimilation, averaging 1.7 global hectares per person. [ 3 ] Consumption totaled 20.1 billion global hectares or 2.8 global hectares per person, meaning about 65% more was consumed than produced. This is possible because there are natural reserves all around the globe that function as backup food, material and energy supplies, although only for a relatively short period of time. Due to overconsumption, these reserves are being depleted at an ever increasing tempo (see Earth Overshoot Day ).
The term "global hectare" was introduced in the early 2000s, [ 4 ] based on a similar concept from the 1970s named "ghost acreage". [ 5 ] Opponents and defenders of the concept have discussed its strengths and weaknesses. [ 6 ]
The global hectare is a useful measure of biocapacity as it can convert things like human dietary requirements into common units, which can show how many people a certain region on earth can sustain, assuming current technologies and agricultural methods. It can be used as a way of determining the relative carrying capacity of the earth.
Different hectares of land can provide different amounts of global hectares. For example, a hectare of lush area with high rainfall would scale higher in global hectares than would a hectare of desert.
It can also be used to show that consuming different foods may increase the earth's ability to support larger populations. To illustrate, producing meat generally requires more land and energy than producing vegetables does, so a predominantly meat-based diet can sustain fewer people than a predominantly vegetable-based one.
By definition, one global hectare corresponds, on average, to the production of one standard hectare of world-average productivity. A hectare (/ˈhɛktɛər/; symbol ha) is a unit of area equal to 10,000 square metres (107,639 sq ft), that is, a square 100 metres (328 feet) on each side, equivalent to 2.471 acres, 0.01 square kilometres, 0.00386102 square miles, or one square hectometre. | https://en.wikipedia.org/wiki/Global_hectare |
A global information system (GIS, GLIS) is an information system which is developed and/or used in a global context. Some examples of global information systems are SAP, the Global Learning Objects Brokered Exchange and other systems.
There are a variety of definitions and understandings of a global information system (GIS, GLIS), such as
Common to this class of information systems is that the context is a global setting, either for its use or for its development process. This means that it relates closely to distributed systems / distributed computing where the distribution is global. The term also incorporates aspects of global software development, and therefore of outsourcing (when the outsourcing locations are globally distributed) and offshoring. A specific aspect of global information systems is the case (domain) of global software development . [ 2 ] A main research aspect in this field concerns the coordination of and collaboration between virtual teams. [ 3 ] [ 4 ] Further important aspects are the internationalization and language localization of system components.
Critical tasks in designing global information systems are
A variety of examples can be given; basically, every multi-lingual website can be seen as a global information system. Mostly, however, the term GLIS is used to refer to a specific system developed or used in a global context.
Specific examples are | https://en.wikipedia.org/wiki/Global_information_system |
The Global Meteoric Water Line (GMWL) describes the global annual average relationship between hydrogen and oxygen isotope ( oxygen-18 [ 18 O] and deuterium [ 2 H]) ratios in natural meteoric waters . The GMWL was first developed in 1961 by Harmon Craig , and has subsequently been widely used to track water masses in environmental geochemistry and hydrogeology .
When working on the global annual average isotopic composition of 18 O and 2 H in meteoric water, geochemist Harmon Craig observed a correlation between these two isotopes, and subsequently developed and defined the equation for the GMWL: [ 2 ]
δ 2 H = 8 δ 18 O + 10 (in ‰)
where δ 18 O and δ 2 H (also written δD) reflect the enrichment of the heavy isotopes (e.g. 18 O versus 16 O , or 2 H versus 1 H).
The relationship of δ 18 O and δ 2 H in meteoric water is caused by mass dependent fractionation of oxygen and hydrogen isotopes between evaporation from ocean seawater and condensation from vapor. [ 3 ] As oxygen isotopes ( 18, 16 O) and hydrogen isotopes ( 2, 1 H) have different masses, they behave differently in the evaporation and condensation processes, and thus result in the fractionation between 18 O and 16 O as well as 2 H and 1 H. Equilibrium fractionation causes the isotope ratios of δ 18 O and δ 2 H to vary between localities within the area. The fractionation processes can be influenced by a number of factors including: temperature , latitude , continentality, and most importantly, humidity . [ 3 ] [ 4 ]
Craig observed that the δ 18 O and δ 2 H isotopic compositions of cold meteoric water from sea ice in the Arctic and Antarctica are much more negative than those of warm meteoric water from the tropics. [ 2 ] A correlation between temperature (T) and δ 18 O was proposed later, in the 1970s. [ 6 ] Such a correlation has since been applied to study surface temperature change over time. [ 7 ] The δ 18 O of ancient meteoric water, preserved in ice cores, can also be collected and applied to reconstruct paleoclimate . [ 8 ] [ 9 ]
A meteoric water line can be calculated for a given area, known as the local meteoric water line (LMWL), and used as a baseline within that area. A local meteoric water line can differ from the global meteoric water line in slope and intercept; such deviations in slope and intercept result largely from humidity. In 1964, the concept of deuterium excess d (d = δ 2 H - 8δ 18 O) [ 3 ] was proposed. Later, a parameterisation of deuterium excess as a function of humidity was established, so the isotopic composition of local meteoric water can be applied to trace local relative humidity, [ 10 ] study local climate and serve as a tracer of climate change. [ 6 ]
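The sketch below illustrates the two calculations mentioned above on hypothetical sample values (the numbers are invented for the example and are not measured data): deuterium excess for individual samples, and an ordinary least squares fit of a local meteoric water line to paired δ18O/δ2H measurements.

```python
# Deuterium excess and a simple OLS fit of a local meteoric water line (LMWL).
import numpy as np

d18O = np.array([-12.1, -9.4, -7.8, -5.2, -3.6])     # per mil, hypothetical samples
d2H  = np.array([-86.0, -65.0, -53.0, -32.0, -19.0])

d_excess = d2H - 8.0 * d18O                  # d = delta-2H - 8 * delta-18O
slope, intercept = np.polyfit(d18O, d2H, 1)  # LMWL: delta-2H = slope * delta-18O + intercept

print(d_excess)
print(slope, intercept)
```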
In hydrogeology, the δ 18 O and δ 2 H of groundwater are often used to study the origin of groundwater [ 11 ] and groundwater recharge . [ 12 ]
It has been shown that, even taking into account the standard deviation related to instrumental errors and the natural variability of amount-weighted precipitation, the LMWL calculated with the EIV (errors-in-variables regression) [ 13 ] method shows no difference in slope compared to classic OLSR (ordinary least squares regression) or other regression methods. [ 14 ] However, for certain purposes, such as evaluating the shifts of geothermal waters from the line, it would be more appropriate to calculate the so-called "prediction interval" or "error wings" related to the LMWL. [ 13 ] | https://en.wikipedia.org/wiki/Global_meteoric_water_line |
The genomic epidemiological database for global identification of microorganisms or global microbial identifier [ 1 ] is a platform for storing whole genome sequencing data of microorganisms , for the identification of relevant genes and for the comparison of genomes to detect and track-and-trace infectious disease outbreaks and emerging pathogens . [ 2 ] The database holds two types of information: 1) genomic information of microorganisms , linked to, 2) metadata of those microorganism such as epidemiological details. The database includes all genera of microorganisms: bacteria , viruses , parasites and fungi . [ citation needed ]
For genotyping of microorganisms for medical diagnosis , or other purposes, scientists may use a wide variety of DNA profiling techniques, such as polymerase chain reaction , pulsed-field gel electrophoresis or multilocus sequence typing . A complication of this broad variety of techniques is the difficulty of standardizing between techniques, laboratories and microorganisms, which may be overcome using the complete DNA code of the genome generated by whole genome sequencing. [ 3 ] For straightforward diagnostic identification, the whole genome sequencing information of a microbiological sample is fed into a global genomic database and compared using BLAST procedures to the genomes already present in the database. [ 4 ] In addition, whole genome sequencing data may be used to back-calculate to the different pre-whole genome sequencing genotyping methods, so previously collected valuable information is not lost. [ 5 ] [ 6 ] For the global microbial identifier the genomic information is coupled to a wide spectrum of metadata about the specific microbial clone and includes important clinical and epidemiological information such as the locations where it has been found globally, treatment options and antimicrobial resistance , making it a general microbiological identification tool. This makes personalized treatment of microbial disease possible, as well as real-time tracing systems for global surveillance of infectious diseases, serving food safety and human health . [ citation needed ]
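A rough sketch of the BLAST-based comparison step described above is shown below, using the NCBI BLAST+ command-line tools from Python. The database and file names are illustrative assumptions; this is not the global microbial identifier platform's own pipeline.

```python
# Compare a newly sequenced isolate against a collection of deposited genomes.
import subprocess

# One-off step: build a nucleotide BLAST database from previously deposited genomes.
subprocess.run(["makeblastdb", "-in", "reference_genomes.fasta",
                "-dbtype", "nucl", "-out", "microbial_db"], check=True)

# Query the database with the new isolate; tabular output (-outfmt 6) lists the
# closest matching genomes with percent identity and E-values.
subprocess.run(["blastn", "-query", "new_isolate.fasta", "-db", "microbial_db",
                "-outfmt", "6", "-out", "isolate_vs_db.tsv"], check=True)
```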
The initiative for building the database arose in 2011, when several preconditions were met: 1) whole genome sequencing had become a mature and serious alternative to other genotyping techniques, [ 7 ] [ 8 ] 2) the price of whole genome sequencing had started falling dramatically, in some cases below the price of traditional identifications, 3) vast amounts of IT resources and a fast Internet had become available, and 4) there was the idea that, via a cross-sectoral and One Health approach, infectious diseases might be better controlled. [ 9 ] [ 10 ]
From the early 2000s, many microbiological laboratories, as well as national health institutes, started genome sequencing projects for the collections of infectious agents they held in their biobanks , [ 11 ] [ 12 ] thereby generating private databases and sending model genomes to global nucleotide databases such as GenBank of the National Center for Biotechnology Information [ 13 ] or the nucleotide database of the EMBL . [ 14 ] This created a wealth of genomic information and independent databases for eukaryotic as well as prokaryotic genomes. [ 15 ] [ 16 ] [ 17 ] The need to further integrate these databases, to harmonize data collection, and to link the genomic data to metadata for optimal prevention of infectious diseases was generally recognized by the scientific community. [ 18 ] In 2011, several infectious disease control centers and other organizations took the initiative of a series of international scientific and policy meetings, to develop a common platform and to better understand the potential of an interactive microbiological genomic database. The first meeting was in Brussels, September 2011, [ 19 ] [ 20 ] followed by meetings in Washington (March 2012) and Copenhagen [ 21 ] (February 2013). In addition to experts from around the globe, intergovernmental organizations have been included in the action, notably the World Health Organization and the World Organisation for Animal Health . [ citation needed ]
A detailed roadmap [ 22 ] for the development of the database was set up with the following general timeline:
Current members:
Former members: | https://en.wikipedia.org/wiki/Global_microbial_identifier |
A global network is any communication network that spans the entire Earth . The term, as used in this article, refers in a more restricted way to bidirectional communication networks based on technology . Early networks such as international mail and unidirectional communication networks, such as radio and television , are described elsewhere.
The first global network was established using electrical telegraphy and global span was achieved in 1899. The telephony network was the second to achieve global status, in the 1950s. More recently, interconnected IP networks (principally the Internet , with an estimated 2.5 billion users worldwide in 2014 [ 1 ] ), and the GSM mobile communication network (with over 6 billion users worldwide in 2014) form the largest global networks of all.
Setting up global networks requires immensely costly and lengthy efforts lasting for decades. Elaborate interconnection, switching and routing devices must be set in operation, and physical carriers of information, such as land and submarine cables and earth stations, must be laid out. In addition, international communication protocols , legislation and agreements are involved.
Global networks might also refer to networks of individuals (such as scientists ), communities (such as cities ) and organizations (such as civil organizations ) worldwide which, for instance, might have formed for the management, mitigation and resolution of global issues .
Communication satellites are an important part of global networks. However, there are specific low Earth orbit (LEO) global satellite constellations , such as Iridium , Globalstar and Orbcomm , which consist of dozens of similar satellites put in orbit at regularly spaced positions so that they form a mesh network , sometimes sending and receiving information directly among themselves. Using VSAT technology, satellite internet access has become possible.
It is estimated that 80% of the global mobile market uses the GSM standard, present in more than 212 countries and territories. Its ubiquity makes international roaming very common between mobile phone operators, enabling subscribers to use their phones in many parts of the world. In order to achieve this, these networks must be interconnected by way of peering arrangements, and therefore the GSM network is a truly global one.
The telegraph and telex communication networks have been phased out, so interconnection among existing global networks arise at several points, such as between the voice telephony and digital data networks , and between these and satellite networks. Many applications run now on several networks, such as VoIP (voice over IP). Mobile communication (voice and data) networks are also intimately intertwined, because the majority of 21st century cell phones have both voice and data (internet navigation and emailing ) capabilities.
Digital global networks require huge carrying capacity in the main backbones . This is currently achieved by fiber-optic cables .
The Canadian sociologist Marshall McLuhan was the first to forecast the huge impact of the matrix of global networks upon society , coining the term global village . His work, however, related to radio and television networks, which are broadcast (unidirectional) networks, thus predating the much larger impact of the internet. [ 2 ]
Global networks have revolutionized human communication several times. The first to do so was the electrical telegraph. Its impact was so large that it has been dubbed the Victorian Internet . It was expanded many times in its coverage with the advent of radiotelegraphy , and with text messaging using telex machines.
The Internet and mobile communication networks have made possible entirely new forms of social interaction , activities and organizing, thanks to basic features such as widespread usability and access, and instant communication from any connected point to another. Thus, their social impact has been, and still is, enormous. Finally, the impact on governance has been significant, facilitating the emergence of 'transnational policy networks'. [ 3 ] | https://en.wikipedia.org/wiki/Global_network |
The Global Stocktake is a fundamental component of the Paris Agreement which is used to monitor its implementation and evaluate the collective progress made in achieving the agreed goals. The Global Stocktake thus links implementation of nationally determined contributions (NDCs) with the overarching goals of the Paris Agreement, and has the ultimate aim of raising climate ambition.
The synthesis report was published in 2023 before COP28 . [ 1 ]
The Paris Agreement marked a turning point in international climate policy. Binding under international law and global in scope, it not only sets out ambitious global goals, such as limiting the rise in average global temperature to well below 2 °C compared with pre-industrial levels, but also introduces an innovative architecture that gives Parties considerable leeway in setting their own climate change targets. In contrast to common practice under international environmental law , states' individual contributions are not negotiated at international level and achievement of set targets is not binding. To ensure that the targets are implemented nonetheless, international-level review and transparency mechanisms have been made integral to the Agreement.
The Paris Agreement requires its signatory states (known as Parties) to regularly formulate their own climate action plans, so-called nationally determined contributions (NDCs), and to implement measures that help them achieve their climate action goals. [ 2 ] There is, however, no obligation under international law for Parties to achieve their NDCs. [ 3 ]
Parties are, however, required to regularly report on their progress in implementing their NDCs and the reports are subject to international peer review. In addition to this Enhanced Transparency Framework , the Paris Agreement stipulates that Parties must regularly update their NDCs, that the updated NDCs must not fall short of the targets applicable prior to the update and that they should reflect the highest possible level of ambition. [ 4 ] In addition, a Global Stocktake is carried out once every five years to assess the collective progress made towards achieving the long-term goals. [ 5 ] [ 6 ] The outcomes of the stocktake are to be taken into account when developing nationally determined contributions. [ 7 ] The Global Stocktake is thus a fundamental component of the Paris Agreement in that it regularly takes stock of progress made and provides a basis for use in updating Parties' NDCs.
The Global Stocktake is designed to raise ambition by helping Parties to: [ 8 ]
In this way, it is hoped that the Global Stocktake will become a driver of ambition. However, the Global Stocktake takes a collective rather than an individual approach. This means that individual countries are not singled out and the outcomes of the stocktaking process should not allow conclusions to be drawn about the state of implementation in individual states. [ 9 ]
The question of whether the Global Stocktake should be limited to mitigation or should also include other aspects such as adaptation and the provision of climate finance has been the subject of controversial debate. In the run-up to the Climate Change Conference in Paris, however, the view prevailed that the Global Stocktake should cover all three. [ 8 ] Accordingly, Article 14 of the Paris Agreement lists adaptation and the means of implementation and support, alongside mitigation, as part of the Global Stocktake. [ 10 ]
The modalities for implementation agreed at the Climate Change Conference in Katowice provide for three stocktake phases: [ 11 ]
Phase 1 involves collecting and preparing information needed to conduct the stocktake. Information is taken from various sources. In addition to Parties' nationally determined contributions (NDCs) and the associated reports submitted under the Paris Agreement, the most recent scientific findings of the Intergovernmental Panel on Climate Change (IPCC) as well as inputs from non-governmental stakeholders and observer organisations are also used. [ 12 ] The information gathered is published in the public domain and also collated in the form of synthesis reports. Individual reports are also prepared on various focus topics – mitigation, adaptation, means of implementation, and cross-cutting issues – and on issues such as the status of global greenhouse gas emissions , the overall contribution made by NDCs and the status concerning action taken to adapt to climate change. [ 13 ]
In Phase 2, the information is assessed for collective progress in implementing the Paris Agreement and its long-term goals. This sees various stakeholders entering into a series of technical dialogues to discuss the information gathered in Phase 1. Phase 2 is also used to highlight the opportunities to strengthen and enhance response measures in dealing with climate change. The results are documented in a series of reports, including summary reports of each technical dialogue and the final synthesis report.
In Phase 3, the outcomes of the assessment flow into the policy process. The aim here is to support Parties to the Paris Agreement in enhancing both their climate change policies and the action they take to support other Parties. The outcomes are also used to promote international cooperation. On this point, it is not yet clear how the outcomes are to be documented; possibilities include a political declaration or even a formal decision by the Conference of the Parties .
The first Global Stocktake will take place in 2023. [ 6 ] However, the transparency framework established by the Paris Agreement, which requires each individual state to report on the status of implementation of its NDC targets and its national emissions, will not come into effect until 2024. Since Parties' reports compiled under the transparency framework are a vital source of information in conducting the Global Stocktake, the first Global Stocktake will have to build on earlier reporting requirements. These have numerous informational gaps, however, and it is uncertain to what extent those gaps can be filled using other sources of information. For example, it is conceivable that greater use could be made of analyses and recommendations from non-governmental stakeholders, including civil society initiatives, companies and city administrations . Another aspect that still needs to be worked out concerns the exact timing of the three Global Stocktake phases. In particular, it must be ensured that the outputs of the process are completed in time and prepared in such a way that they can be taken into account appropriately when developing Parties' NDCs. | https://en.wikipedia.org/wiki/Global_stocktake |
The Globally Harmonized System of Classification and Labelling of Chemicals ( GHS ) is an internationally agreed-upon standard managed by the United Nations that was set up to replace the assortment of hazardous material classification and labelling schemes previously used around the world. Core elements of the GHS include standardized hazard testing criteria, universal warning pictograms, and safety data sheets which provide users of dangerous goods relevant information with consistent organization. The system acts as a complement to the UN numbered system of regulated hazardous material transport. Implementation is managed through the UN Secretariat . Although adoption has taken time, as of 2017, the system has been enacted to significant extents in most major countries of the world. [ 1 ] This includes the European Union , which has implemented the United Nations' GHS into EU law as the CLP Regulation , and United States Occupational Safety and Health Administration standards. [ 2 ]
Before the GHS was created and implemented, there were many different regulations on hazard classification in use in different countries, resulting in multiple standards, classifications and labels for the same hazard. Given the $1.7 trillion per year international trade in chemicals requiring hazard classification, the cost of compliance with multiple systems of classification and labeling is significant. Developing a worldwide standard accepted as an alternative to local and regional systems presented an opportunity to reduce costs and improve compliance. [ 3 ]
The GHS development began at the 1992 Rio Conference on Environment and Development by the United Nations, [ 4 ] also called Earth Summit (1992) , when the International Labour Organization (ILO), the Organisation for Economic Co-operation and Development (OECD), various governments, and other stakeholders agreed that "A globally harmonized hazard classification and compatible labelling system, including material safety data sheets and easily understandable symbols , should be available if feasible, by the year 2000". [ 5 ]
The universal standard for all countries was to replace all the diverse classification systems; however, it is not a compulsory provision of any treaty. The GHS provides a common infrastructure for participating countries to use when implementing a hazard classification and Hazard Communication Standard . [ 3 ]
The GHS classification system defines and classifies the physical, health, and/or environmental hazards of a substance. Each category within the classifications has associated pictograms to be used when applied to a material or mixture.
As of the 10th revision of the GHS, [ 6 ] substances or articles are assigned to 17 different hazard classes largely based on the United Nations Dangerous Goods System . [ 7 ]
The GHS approach to the classification of mixtures for health and environmental hazards uses a tiered approach and is dependent upon the amount of information available for the mixture itself and for its components. Principles that have been developed for the classification of mixtures, drawing on existing systems such as the European Union (EU) system for classification of preparations laid down in Directive 1999/45/EC . [ 9 ] The process for the classification of mixtures is based on the following steps:
Companies are encouraged to replace hazardous substances with substances featuring a reduced health risk. As an assistance to assess possible substitute substances, the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) has developed the Column Model. On the basis of just a small amount of information on a product, substitute substances can be evaluated with the support of this table. The current version from 2020 already includes the amendments of the 12th CLP Adaptation Regulation 2019/521. [ 10 ]
The GHS generally defers to the United States Environmental Protection Agency and OECD to provide and verify toxicity testing requirements for substances or mixtures. [ 11 ] [ 12 ] Overall, the GHS criteria for determining health and environmental hazards are test method neutral, allowing different approaches as long as they are scientifically sound and validated according to international procedures and criteria already referred to in existing systems. Test data already generated for the classification of chemicals under existing systems should be accepted when classifying these chemicals under the GHS, thereby avoiding duplicative testing and the unnecessary use of test animals. [ 6 ]
For physical hazards, the test criteria are linked to specific UN test methods . [ 6 ]
Per GHS, hazards need to be communicated: [ 11 ] [ 6 ] : 4
Comprehensibility is a significant consideration in GHS implementation. The GHS Purple Book includes a comprehensibility-testing instrument in Annex 6. Factors that were considered in developing the GHS communication tools include: [ 11 ]
The standardized label elements included in the GHS are: [ 13 ] : 12
The additional label elements included in the GHS are:
The GHS includes directions for application of the hazard communication elements on the label. In particular, it specifies for each hazard, and for each class within the hazard, what signal word, pictogram , and hazard statement should be used. The GHS hazard pictograms, signal words and hazard statements should be located together on the label. The actual label format or layout is not specified. National authorities may choose to specify where information should appear on the label, or to allow supplier discretion in the placement of GHS information.
The diamond shape of GHS pictograms resembles the shape of signs mandated for use by the United States Department of Transportation . To address this, in cases where a pictogram would be required by both the Department of Transportation and the GHS indicating the same hazard, only the Transportation pictogram is to be used. [ 15 ]
Safety data sheets or SDS are specifically aimed at use in the workplace. Safety data sheets take precedence over and are intended to replace the previously used material safety data sheets (MSDS), [ 16 ] which did not have a standard layout and section format. An SDS should provide comprehensive information about the chemical product that allows employers and workers to obtain concise, relevant and accurate information regarding the hazards, uses and risk management of the chemical product in the workplace. In contrast to the manufacturer-to-manufacturer variation found in MSDS, SDS are specifically required to include the following headings in the order specified: [ 17 ]
The primary difference between the GHS and previous international industry recommendations is that sections 2 and 3 have been reversed in order. The GHS SDS headings, sequence, and content are similar to the ISO , European Union and ANSI MSDS/SDS requirements. A table comparing the content and format of a MSDS/SDS versus the GHS SDS is provided in Appendix A of the U.S. Occupational Safety and Health Administration (OSHA) GHS guidance. [ 18 ]
Current training procedures for hazard communication in the United States are more detailed than the GHS training recommendations. [ 3 ] Training is a key component of the overall GHS approach. Employees and emergency responders must be trained on all program elements, though there has been confusion among these groups of workers in the implementation process regarding which training elements have changed and are required to maintain regulatory compliance . [ 19 ]
The United Nations goal was for broad international adoption of the system, and as of 2017, the GHS had been adopted to varying degrees in many major countries. Smaller economies continue to develop regulations to implement the GHS throughout the 2020s. [ 20 ] | https://en.wikipedia.org/wiki/Globally_Harmonized_System_of_Classification_and_Labelling_of_Chemicals |
Globalsat Group is a consortium of companies providing satellite communication services worldwide with headquarters located in the United States . [ 1 ]
Globalsat Group was founded by J. Alberto Palacios in 1999. Palacios is also the active Executive Chairman of this multi-company entity.
Most products and services provided by the consortium are in the field of mission-critical satellite-based communications including voice, Internet, and Machine-to-Machine (M2M) with emphasis on L band mobile satellite service (MSS) through the Iridium , Inmarsat , Globalstar and other satellite constellations. [ 2 ] Fixed-satellite service (FSS) / VSAT and system integration are also provided in some markets, as well as Inmarsat's Ka band mobile service known as Global Express (GX). [ 3 ] One of the group's largest projects as of 2022 involves a utility company in Brazil which uses mixed L-band and terrestrial cellular for redundant high availability mobile push-to-talk communications and telemetry.
In November 2016 LeoSat and Globalsat Group signed a strategic worldwide agreement. LeoSat Enterprises is planning to launch a constellation of as many as 108 LEO communications satellites. Under the agreement, Globalsat Group will provide market access and LeoSat will provide infrastructure for service. J. Alberto Palacios, CEO of Globalsat, will hold a seat representing the group on the LeoSat Customer Technical Advisory Committee (CTAC). The committee will advise on system configuration, product design and launch of LeoSat's upcoming satellite constellation. [ 4 ]
In December 2016 Globalsat Group obtained a license for all Inmarsat services through its Mexican affiliate MultiSAT. This includes official authorization and landing rights for foreign satellite signals in the Ka and L bands. [ 5 ]
In March 2017 Globalsat Group was appointed as a Tier 1 Enterprise Distribution Partner for Inmarsat Global, the mobile satellite operator. This improved status gives Globalsat Group access to upcoming Inmarsat products, direct hotlines to operations personnel for faster customer support, and a higher degree of solution customization. [ 6 ]
Also in March 2017 Globalsat Group and Sky and Space Global signed an MoU towards testing and offering satellite service in Latin America. Under the agreement, the group will take part in early trials of the Sky and Space Global satellite system. The non-binding deal also involves working towards establishing a commercial agreement for providing services to end-users across Globalsat Group's multi-country footprint. [ 7 ]
In January 2018, Globalsat Group acquired a major stake in Peruvian connectivity provider ST2. [ 8 ] Sky and Space Global has also joined with Globalsat Group for the launch of nano-satellites intended to revolutionize telecommunications. [ 9 ]
In January 2020 Globalsat Group, Inmarsat, and Cobham signed a multi-year contract to provide satellite communications across the Brazilian railway network operated by Rumo. [ 10 ]
In November 2023, Globalsat Group partnered with Rivada to introduce Rivada's OuterNET in Latin America. This partnership aims to enhance connectivity in remote areas, with satellite-to-satellite laser links for resilient communication. Launches begin in 2025, with global service starting in 2026. [ 11 ]
In March 2024 Globalsat Group agreed to form a strategic alliance to create a new affiliate with a focus on IoT opportunities throughout Central America and the Caribbean. [ 12 ]
The companies that are part of the consortium as of May 2024 include: | https://en.wikipedia.org/wiki/Globalsat_Group |
Globalstar, Inc. is an American telecommunications company that operates a satellite constellation in low Earth orbit (LEO) for satellite phone , low-speed data transmission and earth observation. The Globalstar second-generation constellation consists of 25 satellites. [ 1 ]
The Globalstar project was launched in 1991 as a joint venture of Loral Corporation and Qualcomm . On March 24, 1994, the two sponsors announced the formation of Globalstar LP, a limited partnership established in the U.S., with financial participation from eight other companies, including Alcatel , AirTouch , Deutsche Aerospace , Hyundai , and Vodafone . At that time, the company predicted the system would launch in 1998, based on an investment of $1.8 billion.
Globalstar said in March 1994 that it expected to charge $0.65 per minute for cellular service, compared to $3 per minute from Iridium. By then it had a worldwide license from the World Administrative Radio Conference . [ 2 ] Globalstar received its US spectrum allocation from the FCC in January 1995, and continued to negotiate with other nations for rights to use the same radio frequencies in their countries.
The first satellites were launched in February 1998, but system deployment was delayed due to a launch failure in September 1998 that resulted in the loss of 12 satellites in a launch by the Russian Space Agency .
The first call on the original Globalstar system was placed on November 1, 1998, from Qualcomm chairman Irwin Jacobs in San Diego to Loral Space & Communications CEO and chairman Bernard Schwartz in New York City .
In October 1999, the system began "friendly user" trials with 44 of 48 planned satellites. In December 1999, the system began limited commercial service for 200 users with the full 48 satellites (no spares in orbit). In February 2000, it began full commercial service with its 48 satellites and 4 spares in North America, Europe, and Brazil. Another eight satellites were maintained as ground spares. Initial prices were $1.79/minute for satellite phone calls.
Following the September 11 attacks in 2001, Irwin Jacobs had his private jet re-classified as an experimental aircraft for the purpose of developing an aviation application of Globalstar. The experimental system not only provided voice and data services to the cockpit and passenger cabin, but also tied in with the aircraft's data bus and provided GPS location service. Ground monitoring of aircraft location, heading, speed, and mechanical parameters such as oil pressure and engine RPM were demonstrated. Video surveillance of the cockpit and cabin were also demonstrated. To work around Globalstar's low data rate, the experimental system used multiple user terminals (UTs) in parallel. Each UT could be configured for voice, or as a member of a bonded link group for internet access. [ 3 ]
On February 15, 2002, the predecessor company Globalstar (old Globalstar) and three of its subsidiaries filed voluntary petitions under Chapter 11 of the United States Bankruptcy Code.
In 2004, restructuring of the old Globalstar was completed. The first stage of the restructuring was completed on December 5, 2003, when Thermo Capital Partners LLC was deemed to obtain operational control of the business, as well as certain ownership rights and risks. Thermo Capital Partners became the principal owner.
Globalstar LLC was formed as a Delaware limited liability company in November 2003 and was converted into Globalstar, Inc., on March 17, 2006.
In 2007, Globalstar launched eight additional first-generation spare satellites into space to help compensate for the premature failure of their in-orbit satellites. Between 2010 and 2013, Globalstar launched 24 second-generation satellites in an effort to restore their system to full service.
Between 2010 and 2011, Globalstar moved its headquarters from Silicon Valley to Covington, Louisiana in part to take advantage of the state's tax breaks and low cost of living. [ 4 ]
In April 2018, Globalstar announced it would merge with FiberLight in a deal valued at $1.65 billion. [ 5 ] That deal was called off in August 2018 following a lawsuit from Globalstar's largest investor, Mudrick Capital Management . [ 6 ]
In March 2020, Globalstar announced that the Third Generation Partnership Project ("3GPP") had approved the 5G variant of Globalstar's Band 53, to become known as n53. [ 7 ]
On March 6, 2021, Globalstar announced to customers that the Sat-Fi2 (Satellite Wifi Hotspot) and Sat-Fi2 RAS (Remote Antenna Station) services would be discontinued as of March 12, 2021.
On September 7, 2022, Apple announced a cooperation with Globalstar Inc that "would allow iPhone 14 users to send emergency messages" via satellite, starting in the U.S. and Canada. [ 8 ] Apple would go on to release the feature on future iPhone models.
On October 29, 2024, Globalstar disclosed in an SEC filing that Apple had agreed to purchase a 20% stake in the company. [ 9 ] [ 10 ]
Globalstar is a provider of satellite and terrestrial connectivity services. Globalstar offers these services to commercial and recreational users in more than 120 countries around the world.
Globalstar's terrestrial spectrum, Band 53 and its 5G variant n53 offers carriers, cable companies and system integrators a versatile, fully licensed channel for private networks, while Globalstar's XCOMP technology offers capacity gains in dense wireless deployments.
The company's products include simplex and duplex satellite devices, data modems, and satellite airtime packages.
Many land-based and maritime industries make use of the various Globalstar products and services from remote areas beyond the reach of cellular and landline telephone services. However, many areas of the Earth's surface are left without service coverage, since a satellite requires being in range of an Earth station gateway.
Global customer segments include oil and gas, government, mining, forestry, commercial fishing, utilities, military, transportation, heavy construction, emergency preparedness, and business continuity as well as individual recreational users.
Globalstar data communication services are used for a variety of asset and personal tracking, data monitoring, and SCADA applications.
In late 2007, Globalstar subsidiary SPOT LLC launched a handheld satellite messaging and tracking personal safety device known as the SPOT Satellite Messenger . SPOT X, a two-way satellite messenger with GPS location tracking, navigational capabilities, social media linking and direct communication options to emergency services, was launched in 2018.
Globalstar satellites are simple " bent pipe " analog repeaters , [ 11 ] unlike Iridium . [ 12 ]
A network of ground gateway stations provides connectivity from the 40 satellites to the public switched telephone network and Internet. A satellite must have a Gateway station in view to provide service to any users. Twenty-four Globalstar Gateways are located around the world, including seven in North America. [ 13 ] Globalstar Gateways are the largest cellular base stations in the world, with a design capacity for over 10,000 concurrent phone calls over a coverage area that is roughly 50% of the size of the US. Globalstar supports CDMA technology such as the rake receiver and soft handoffs , so a handset may be talking via two spot beams to two Gateways for path diversity.
Globalstar users are assigned telephone numbers on the North American Numbering Plan in North America or the appropriate telephone numbering plan for the country that the overseas gateway is located in, except for Brazil, where the official Globalstar country code (+8818) is used. The use of gateway ground stations provides customers with localized regional phone numbers for their satellite handsets. But service cannot be provided in remote areas (such as areas of the South Pacific and the polar regions) if there are no gateway stations to cover the area. As of May 2012, voice and full-duplex data services are currently non-functional over much of Africa, the South Asian subcontinent, and most mid-ocean regions due to the lack of nearby gateway earth stations . [ 14 ]
The Globalstar system uses the Qualcomm CDMA air interface; however, the Ericsson and Telit phones accept standard GSM SIM cards, while the Qualcomm GSP-1600/1700 phones do not have a SIM card interface, but use CDMA / IS-41 based authentication. Therefore, the Globalstar gateways need to support both the CDMA / IS-41 and the GSM standards.
Globalstar has roaming agreements with local cellular operators in some regions, enabling the use of a single phone number in satellite and cellular mode on multi-mode Globalstar handsets. [ 15 ] These cellular roaming agreements are not in place in North America. Because of improvements in cellular phones and networks and the limitations inherent to satellite phones, the newest Globalstar handset (released in 2006) does not include cellular connectivity as Globalstar does not expect subscribers to carry it as their only mobile phone. [ 16 ]
Globalstar orbits have an inclination of 52 degrees, so the constellation does not cover polar areas.
Globalstar orbits have an orbital height of approximately 1,400 km, and latency is still relatively low (approximately 60 ms).
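A back-of-the-envelope sketch of the propagation component of that delay is given below. It assumes, purely for illustration, that both the user terminal and the gateway are near the sub-satellite point; real slant ranges are longer towards the edge of a beam, and the roughly 60 ms figure quoted above presumably also reflects processing and network delays on top of propagation.

```python
# Rough propagation-delay estimate for a bent-pipe relay at ~1,414 km altitude.
C_KM_PER_S = 299_792.458          # speed of light
ALTITUDE_KM = 1414.0

one_way_s = 2 * ALTITUDE_KM / C_KM_PER_S   # user -> satellite -> gateway
round_trip_ms = 2 * one_way_s * 1000       # out and back
print(f"{round_trip_ms:.1f} ms")           # roughly 19 ms of pure propagation
```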
A Globalstar satellite has two body-mounted, Earth-facing arrays. First-generation Globalstars weigh approximately 550 kg. However, the second-generation Globalstar design will gain significant mass.
In 2005, some of the satellites began to reach the limit of their operational lifetime of 7.5 years. In December 2005, Globalstar began to move some of its satellites into a graveyard orbit above LEO. [ 17 ]
According to documents filed with the SEC on January 30, 2007, Globalstar's previously identified problems with its S-band amplifiers used on its satellites for two-way communications are occurring at a higher rate than expected, possibly eventually leading to reduced levels of two-way voice and duplex data service in 2008. The company's simplex data services used to support the asset tracking products as well as the SPOT Satellite Messenger are not affected by the S-band satellite issue mentioned above. Globalstar also launched eight ground spare satellites in 2007 to help reduce the impact of the issue.
In the filing, Globalstar made the following statements:
Based on data recently collected from satellite operations, the Company has concluded that the degradation of the amplifiers is now occurring at a rate that is faster than previously experienced and faster than the Company had previously anticipated.
Based on its most recent analysis, the Company now believes that, if the degradation of the S-band antenna amplifiers continues at the current rate or further accelerates, and if the Company is unsuccessful in developing additional technical solutions, the quality of two-way communications services will decline, and by some time in 2008 substantially all of the Company's currently in-orbit satellites will cease to be able to support two-way communications services.
[ 18 ]
Industry analysts speculate the problem is caused by radiation exposure the satellites receive when they pass through the South Atlantic Anomaly in their 876-mile (1414 km) altitude orbits. [ 19 ]
The S-band antenna amplifier degradation does not affect adversely the Company's one-way "Simplex" data transmission services, which utilize only the L-band uplink from a subscriber's "Simplex" terminal to the satellites. [ 18 ]
The Company is working on plans, including new products and services and pricing programs, and exploring the feasibility of accelerating procurement and launch of its second-generation satellite constellation, to attempt to reduce the effects of this problem upon its customers and operations. The Company will be able to forecast the duration of service coverage at any particular location in its service area and intends to make this information available without charge to its service providers, including its wholly owned operating subsidiaries, so that they may work with their subscribers to reduce the impact of the degradation in service quality in their respective service areas. The Company is also reviewing its business plan in light of these developments. [ 18 ]
The Company's liquidity remains strong. At December 31, 2006, in addition to its credit agreement, the Company had unrestricted cash on hand and undrawn amounts under the Thermo Funding Company irrevocable standby stock purchase agreement of approximately $195 million. [ 18 ]
In 2007, Globalstar launched eight spare satellites for its existing constellation with a view to reducing the gaps in its two-way voice and data services pending commercial availability of its second-generation satellite constellation. Globalstar continued to operate its failing satellite constellation, providing and supporting services on an intermittently available basis, until the second-generation Globalstar satellites became available for service.
Until the new second-generation Globalstar satellite constellation became operational, Globalstar offered its "Optimum Satellite Availability Tool" website, which subscribers could use to predict when one or more unaffected satellites would be overhead at a specific geographic location.
In December 2006, Globalstar announced that Alcatel Alenia Space (now Thales Alenia Space), headquartered in Cannes, had been awarded a €661 million contract for the second-generation constellation. The satellites were designed with a life expectancy of 15 years, significantly longer than the design life of Globalstar's first-generation constellation. The second-generation constellation consists of 24 satellites. [ 1 ]
In addition, Globalstar announced on April 3, 2007, that it had signed a €9 million agreement with Thales Alenia Space to upgrade the Globalstar satellite constellation, including necessary hardware and software upgrades to Globalstar's satellite control network facilities. [ 20 ]
In August 2008, Thales Alenia Space began production assembly, integration, and testing of the second-generation flight model satellites, in its Rome factory, for launch as early as Q3 2009. [ 21 ]
In July 2009, Globalstar, Inc. announced that it had received complete financing for its second-generation satellite constellation and had signed an amendment to the initial contract specifying, in particular, the adjusted conditions for production and the new satellite delivery timetable. [ 22 ]
The first six second-generation satellites were launched on October 19, 2010, from the Baikonur Cosmodrome in Kazakhstan. [ 23 ] [ 24 ] The launch used a Soyuz-2 launch vehicle with a Fregat upper stage. [ 25 ] These second-generation satellites are expected to provide Globalstar customers with satellite voice and data services until at least 2025. Six more second-generation satellites were launched in July 2011 [ 26 ] followed by another six satellites in December 2011. [ 27 ]
The launch of the second-generation constellation was completed on February 6, 2013, with the launch of the final six satellites using a Soyuz 2-1a launch vehicle. [ 28 ] The 24 second-generation spacecraft weighed approximately 700 kg (1,500 lb) each at launch, and are 3-axis stabilized . [ 29 ]
In February 2022, it was announced that Globalstar had purchased 17 new satellites, built by MDA and Rocket Lab for $327 million, to continue its constellation. The satellites are expected to be launched by 2025. [ 30 ]
On June 19, 2022, a backup satellite for Globalstar was launched on a Falcon 9 rocket, the first Globalstar satellite launch in over nine years. Prior to the launch, Globalstar had not announced the mission beyond a vague quarterly report stating that the satellite would launch. [ 31 ]
Predecessor Company – Globalstar LP . In February 1995, Globalstar Telecommunications Ltd. raised $200 million from its initial public offering in the NASDAQ market. The IPO price of $20 per share was equivalent to $5 per share after two stock splits. The stock price peaked at (post-split) $50 per share in January 2000, but institutional investors began predicting bankruptcy as early as June 2000. The stock price eventually fell below $1 per share, and the stock was delisted by NASDAQ in June 2001.
After the IPO, the publicly traded Globalstar Telecommunications (NASDAQ symbol GSTRF) owned part of system operator Globalstar LP. From that point on, the primary financing for Globalstar LP was vendor financing from its suppliers (including Loral and Qualcomm), supplemented by junk bonds .
After a total debt and equity investment of $4.3 billion, on February 15, 2002, Globalstar Telecommunications filed for Chapter 11 bankruptcy protection, listing assets of $570 million and liabilities of $3.3 billion. The assets were later bought for $43 million by Thermo Capital Partners LLC.
Globalstar LLC and Globalstar, Inc. When the new Globalstar emerged from bankruptcy in April 2004, it was owned by Thermo Capital Partners (81.25%) and the original creditors of Globalstar L.P. (18.75%). Globalstar LLC was incorporated in April 2006 to become Globalstar, Inc.
Globalstar, Inc. completed an IPO in November 2006. The stock currently trades on the NYSE American under the symbol GSAT.
In August 2007, Globalstar announced the introduction of the SPOT Satellite Messenger product, to be marketed through its latest subsidiary SPOT, Inc., later named SPOT LLC. The SPOT Messenger is manufactured by Globalstar partner Axonn LLC and combines the company's simplex data technology with a Nemerix GPS chipset. SPOT is intended to leverage Globalstar's still adequate L-Band uplink, which is used by simplex modems. The product was launched in early November 2007. Subsequent launches included the SPOT Trace, SPOT X with Bluetooth and Gen4.
Globalstar provides the infrastructure for the Emergency SOS via satellite functionality [ 32 ] announced in 2022 for all versions of the iPhone 14 [ 33 ] series and newer. Globalstar reserves 85% of its network capacity for the service, and prior to the announcement of the service, Globalstar invested in expanding its infrastructure, including "material upgrades to Globalstar’s ground network to enhance redundancy and coverage" and "construction of 10 new gateways around the world". [ 34 ] In February 2023, Globalstar announced it would be repaying a $150 million debt under a 2019 agreement to EchoStar , which could have prevented the Apple partnership. [ 35 ]
The first five employees of Globalstar were transferred from the founding companies in 1991. Although few figures were publicly disclosed, the company apparently reached a peak of about 350 employees before layoffs began in March 2001. However, this figure was misleading, as most of the development, operations, and sales employees were employed by the company's strategic partners. [ citation needed ]
The company then appointed satellite telecommunications veteran Olof Lundberg as chairman and CEO to lead a turnaround. After beginning his career with Swedish Telecom, Lundberg had been founding Director General (later CEO) of Inmarsat from 1979 to 1995, and served as founding CEO, and later CEO and chairman, of ICO Global Communications from 1995 to 1999.
Lundberg resigned from the company (then in bankruptcy) on June 30, 2003.
Paul E. Jacobs was named CEO of Globalstar on Aug. 29, 2023, replacing David Kagan. | https://en.wikipedia.org/wiki/Globalstar
A Globar is used as a thermal light source for infrared spectroscopy . It is made of silicon carbide shaped into rods or arches of various sizes. When inserted into a circuit that provides it with electric current, it emits radiation at wavelengths from roughly 2 to 50 micrometres via the Joule heating phenomenon. Globars are used as infrared sources for spectroscopy because their spectral behavior corresponds approximately to that of a Planck radiator (i.e. a black body ). Alternative infrared sources are Nernst lamps , coils of chrome–nickel alloy, or high-pressure mercury lamps .
The technical term Globar is an English portmanteau word consisting of glow and bar . The term glowbar is sometimes used synonymously in English (which is an incorrect spelling in the strict sense). [ 1 ]
The American Resistor Company in Milwaukee , Wisconsin , had the word and lettering Globar registered as a trademark (in a special decorative script font ) with the United States Patent and Trademark Office on June 30, 1925 (registration number 0200201) and on October 18, 1927 (registration number 0234147). The registration had been renewed for the third time in 1987, having been held by various companies over the intervening 60 years. | https://en.wikipedia.org/wiki/Globar
Globe at Night is an international scientific research program that crowdsources measurements of light pollution in the night sky. At set time periods within each year, the project asks people to count the number of stars that they can see from their location and report it to the project's website. The coordinating researchers compile this information to produce a public, freely available map of global light pollution. By September 2011, almost 70,000 measurements had been made. [ 1 ] The use of data collected by the public makes the program an example of citizen science . [ 2 ] Globe at Night began as a NASA educational program in the US organized by the NOAO , and was expanded internationally during the 2009 International Year of Astronomy ; [ 3 ] it is an offshoot of the GLOBE Program , which focuses on school-based science education.
Light pollution , the introduction of artificial light into formerly dark ecosystems, has numerous adverse ecological effects . Exposure to artificial light can prove fatal for some organisms (e.g. moths that fly into a burning flame), can interrupt a life cycle phase for others (e.g. glowworms are unable to attract mates), and can reduce the possibilities for finding food (because of increased risk of predation). [ 4 ] Light at night can also interfere with the chronobiology of many animals, including humans, through suppression of melatonin secretion. [ 5 ]
There are also cultural and economic reasons for concern about excessive light at night. Skyglow prevents large fractions of the Earth's population from viewing the Milky Way , [ 6 ] which drove the development of much of ancient science, mythology, and religion. In the US, the cost of generating wasted light is estimated to be 7 billion US dollars per year; [ 7 ] the production of the electricity for this wasted light also results in the release of chemical pollution and greenhouse gases .
The Globe at Night project has two main goals: raise public awareness of light pollution and its effects, and provide global mapping data for light pollution. [ 1 ] [ 8 ] [ 9 ]
The project asks members of the public to go outside on dark moonless nights and report how many stars are visible in particular constellations . [ 1 ] [ 3 ] The project focuses on students, teachers, and families, and has produced activity packets in 13 languages. [ 3 ] [ 10 ] NASA encourages students in its INSPIRE program to participate. [ 11 ]
Participating individuals are asked to go outside on specified dates at least an hour after sunset, then let their eyes adjust to the ambient light level, and observe a specific constellation: Orion or Leo in the Northern Hemisphere, Crux in the Southern Hemisphere. [ 8 ] [ 10 ] [ 12 ] The choice of a two-week span of dates near the new moon removes any effect on sky brightness from scattered moonlight , and observing well after sunset prevents any lingering light from twilight. [ 13 ] By comparing the stars they see with star charts showing stellar visibility under different light pollution conditions, they qualitatively measure light pollution. [ 14 ] [ 15 ] Stellar visibility can also be measured for the project using a Sky Quality Meter , a tool used by amateur astronomers . [ 14 ] These light pollution data are then submitted to the coordinating website via a web browser. [ 16 ] [ 17 ] The assembled data are provided to researchers and the public via a mapping interface that displays the data overlaid on Google Maps . [ 18 ]
With this technique, observers report a naked eye limiting magnitude (NELM) between 1 and 7. Humans are able to observe stars fainter than 7th magnitude, although this may require blocking out other sources of light. [ 13 ] Under clear, unpolluted skies, the measurement of NELM should be strongly correlated with the level of light pollution. Other factors, particularly those that reduce the seeing , can also reduce NELM. [ 13 ]
Globe at Night also distributes teaching kits that demonstrate how fully shielded lights reduce glare and improve the visibility of the night sky. [ 3 ] [ 25 ]
The standard deviation of an individual Globe at Night observation is approximately 1.2 stellar magnitudes. [ 2 ] Due to the law of large numbers , when the observations are considered in aggregate, the errors from individual observations cancel each other out, leading to very stable mean values. This means that Globe at Night observations could be used to estimate global or regional trends in sky luminance. [ 2 ]
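A minimal numerical sketch of that averaging effect follows; the 1.2-magnitude scatter is the figure quoted above, while the observation counts are arbitrary illustrative values.

# Standard error of the mean for N independent Globe at Night observations,
# each with a per-observation scatter of ~1.2 stellar magnitudes (figure from the text).
import math

SIGMA_SINGLE = 1.2  # magnitudes

def standard_error(n_observations):
    """Uncertainty of the mean of n independent observations."""
    return SIGMA_SINGLE / math.sqrt(n_observations)

for n in (1, 25, 100, 2500):
    print(n, round(standard_error(n), 3))
# 1 -> 1.2, 25 -> 0.24, 100 -> 0.12, 2500 -> 0.024: with thousands of reports the
# mean sky brightness is stable to a few hundredths of a magnitude, which is why
# aggregated citizen observations can track regional trends.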
Globe at Night observations identify the dimmest stars that are visible given the surrounding conditions. Assuming normal visual acuity and clear skies, it is possible to approximately convert Globe at Night naked eye limiting magnitude estimates into other units. [ 26 ]
The Globe at Night project was launched as a NASA program in the United States. [ 1 ] The project quickly expanded internationally, and was part of the outreach effort of the International Year of Astronomy in 2009. [ 27 ] The size of the project (in terms of number of observations) expanded dramatically in that year. In 2014, the project expanded to also include data obtained via the Loss of the Night app for Android devices, and the Dark Sky Meter app for iOS devices. In addition, new star charts were added to extend the standard map based campaign throughout the whole year. In 2015, as part of the International Year of Light , two "International Nights of Skyglow Observation" were introduced, to encourage data submission in March and September. [ 28 ]
The number of observations for each year is reported on the Globe at Night webpage. [ 9 ]
Data from the Globe at Night program has also been used in a study of the effects of artificial lighting on the foraging habits of bats. [ 29 ] | https://en.wikipedia.org/wiki/Globe_at_Night |
The Globe of Matelica (Globo of Matelica) is an ancient Roman sundial sculpted on a marble ball. The artifact was found during the 1985 reconstruction of the medieval Palazzo Pretorio, presently the Museo Civico Archeologico, of Matelica in the Marches region of Italy.
The globe measures nearly 29 cm in diameter and appears to be sculpted from a crystalline marble originating near Ephesus in present-day Turkey. It is thought to date from the first two centuries CE. One similar item was identified in 1939 by Carl William Blegen in a museum in Nafplio , Greece.
All that remains is the stone component, which is engraved with a variety of inscribed lines and letters. The sphere is bisected by a center line, while on its top are three concentric circles of various diameters, intersected by an arc of a circle, on which words in the ancient Greek alphabet are still visible. It also features 13 holes, each marked by a Greek letter; these holes probably held metallic insertions that marked the hours. [ 1 ]
In the lower part there is a large conical depression ending in a large rectangular hole, likely made to secure the globe to a base. Other theories hold that the sphere was used for astronomical calculations, for example as an armillary sphere or in spherical astronomy . [ 2 ]
This history of science article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Globe_of_Matelica |
A globe valve , different from ball valve , is a type of valve used for regulating flow in a pipeline , consisting of a movable plug or disc element and a stationary ring seat in a generally spherical body. [ 1 ]
Globe valves are named for their spherical body shape with the two halves of the body being separated by an internal baffle . This has an opening that forms a seat onto which a movable plug [ 2 ] can be screwed in to close (or shut) the valve. The plug is also called a disc . [ 3 ] In globe valves, the plug is connected to a stem which is operated by screw action using a handwheel in manual valves. Typically, automated globe valves use smooth stems rather than threaded and are opened and closed by an actuator assembly.
Although globe valves in the past had the spherical bodies which gave them their name, many modern globe valves do not have much of a spherical shape. However, the term globe valve is still often used for valves that have such an internal mechanism. In plumbing , valves with such a mechanism are also often called stop valves since they don't have the spherical housing, but the term stop valve may refer to valves which are used to stop flow even when they have other mechanisms or designs.
The body is the main pressure -containing structure of the valve and the most easily identified as it forms the mass of the valve. It contains all of the valve's internal parts that will come in contact with the substance being controlled by the valve. The bonnet is connected to the body and provides the containment of the fluid, gas , or slurry that is being controlled.
Globe valves are typically two-port valves , although three-port valves are also produced, mostly in straight-flow configuration. Ports are openings in the body for fluid flowing in or out. The two ports may be oriented straight across from each other or anywhere on the body, [ 4 ] or oriented at an angle (such as 90°). [ 5 ] Globe valves with ports at such an angle are called angle globe valves . Globe valves are mainly used for corrosive or highly viscous fluids that solidify at room temperature. This is because straight valves are designed so that the outlet pipe is in line with the inlet pipe and the fluid has a good chance of staying there in the case of horizontal piping. In the case of angle valves, the outlet pipe is directed towards the bottom. This allows the fluid to drain off. In turn, this prevents clogging and/or corrosion of the valve components over a period of time.
A globe valve can also have a body in the shape of a "Y". This will allow the construction of the valve to be straight at the bottom as opposed to the conventional pot-type construction (to arrange bottom seat) in case of other valves. This will again allow the fluid to pass through without difficulty and minimizes fluid clogging/corrosion in the long term. [ citation needed ]
The bonnet provides a leak-proof closure for the valve body. The threaded section of the stem goes through a hole with matching threads in the bonnet. Globe valves may have a screw-in, union, or bolted [ 4 ] bonnet. A screw-in bonnet is the simplest type, offering a durable, pressure-tight seal. A union bonnet is suitable for applications that require frequent inspection or cleaning, while a bolted bonnet is used for larger valves and higher-pressure applications.
The valve's closure mechanism involves plugs that connect to a stem, which is adjusted either by sliding or screwing it up or down to regulate flow. Plugs come in balanced or unbalanced types. Unbalanced plugs, typically solid, are suitable for smaller valves or those with low pressure drops. They offer advantages such as simpler design, with potential leakage only at the seat, and usually lower cost. However, they are limited in size, as larger unbalanced plugs may require impractical forces to seal and control flow. On the other hand, balanced plugs feature holes through the plug itself. They offer advantages such as easier shut-off due to reduced static forces required. However, they introduce a second potential leak path between the plug and the cage, and they tend to be more expensive.
The stem serves as a connector from the actuator to the inside of the valve and transmits this actuation force. Stems are either smooth for actuator-controlled valves or threaded for manual valves. The smooth stems are surrounded by packing material to prevent leaking material from the valve. This packing is a wearable material and will have to be replaced during maintenance. With a smooth stem the ends are threaded to allow connection to the plug and the actuator. The stem must not only withstand a large amount of compression force during valve closure, but also have high tensile strength during valve opening. In addition, the stem must be very straight, or have low run-out , in order to ensure good valve closure. This minimum run-out also minimizes wear of the packing contained in the bonnet, which provides the seal against leakage. The stem may be provided with a shroud over the packing nut to prevent foreign bodies entering the packing material, which would accelerate wear.
The cage is a part of the valve that surrounds the plug and is located inside the body of the valve. Typically, the cage is one of the greatest determiners of flow within the valve. As the plug is moved, more of the openings in the cage are exposed and flow is increased, and vice versa. The design and layout of the openings can have a large effect on the flow of material (the flow characteristics of different materials over a range of temperatures and pressures ). Cages are also used to guide the plug to the seat of the valve for a good shutoff, substituting for the guiding otherwise provided by the bonnet.
The seat ring provides a stable, uniform and replaceable shut-off surface. The seat is usually screwed in or torqued. This pushes the cage down on the lip of the seat and holds it firmly to the body of the valve. The seat may also be threaded and screwed into a thread cut in the same area of the body. However this method makes removal of the seat ring during maintenance difficult if not impossible. Seat rings are also typically beveled at the seating surface to allow for some guiding during the final stages of closing the valve.
Economical globe valves or stop valves with a similar mechanism used in plumbing often have a rubber washer at the bottom of the disc for the seating surface, so that rubber can be compressed against the seat to form a leak-tight seal when shut. | https://en.wikipedia.org/wiki/Globe_valve |
Globo H ( globohexaosylceramide ) is a globo-series glycosphingolipid antigen that is present on the outer membrane of some cancer cells . [ 1 ] [ 2 ] Globo H is not expressed in normal tissue cells, but is expressed in a number of types of cancers , including cancers of the breast, prostate, and pancreas. [ 1 ] [ 3 ] Globo H's exclusivity for cancer cells makes it a target of interest for cancer therapies. [ 1 ] [ 2 ]
Defined by the monoclonal antibody MBr1, Globo H has been isolated from breast cancer cell line MCF-7 , and its structure has been determined through several analyses, including NMR spectroscopy and methylation analysis. [ 5 ] Globo H consists of a hexasaccharide of the structure Fucα(1-2)Galβ(1-3)GalNAcβ(1-3)Galα(1-4)Galβ(1-4)Glcβ(1) with a ceramide attached to its terminal glucose ring at the 1 position in a beta linkage. [ 6 ]
Globo H's biosynthetic pathway is involved in the synthesis pathways of other globo-series glycosphingolipid antigens that are also specific to cancer cells, including stage-specific embryonic antigen-3 (SSEA3) and stage-specific embryonic antigen-4 (SSEA4). [ 1 ] The biosynthetic pathway of these antigens includes the enzyme β 1,3-galactosyltransferase V (β3GalT5). [ 1 ] β3GalT5 catalyzes the galactosylation of globoside-4 (Gb4) to SSEA3. [ 1 ] SSEA3 can then be converted to SSEA4 by sialyltransferase adding a sialic acid group to its end, or it can be converted to Globo H by fucosyltransferase adding a fucose ring to its end. [ 1 ] Playing a part in the formation of three different cancer-specific antigens, β3GalT5 is of particular interest in its relevance to cancer treatment, and it has been shown to be critical for cancer cell survival. [ 8 ]
In order to study its potential as a cancer therapy target, Globo H has been synthesized in the laboratory. [ 9 ] One synthesis is achieved by first building two trisaccharides from their component sugars, and then linking them. [ 9 ] The trisaccharides, with most of their functional groups protected to prevent side reactions, are linked by creating the GalNAcβ(1-3)Gal bond. [ 9 ] A thioethyl group is added to the 1 position on one of the protected galactose rings, and in the presence of methyl triflate , this reacts with the hydroxyl group on the 3 position of the other galactose to link the trisaccharides and form the hexasaccharide. [ 9 ] The ceramide is added to the 1 position of the terminal glucose ring after hexasaccharide formation. [ 9 ]
As a Tumor Associated Carbohydrate Antigen (TACA), Globo-H is a promising clinical target for immunotherapy. While absent in normal tissues, the glycosphingolipid is overexpressed in a variety of epithelial cancer cell types including human pancreatic, gastric, lung, colorectal, esophageal, and breast tumors. [ 10 ] [ 11 ]
Globo-H's TACA character allows for its utilization as an anticancer vaccine, inducing an antibody response against the epitope. The resulting humoral immunity could enable the selective eradication of Globo H-presenting tumors. [ 12 ] The Taiwanese biopharma company OBI Pharma, Inc. , was the first to develop Adagloxad Simolenin (OBI-822), a Globo H hexasaccharide conjugated with the immunostimulatory carrier protein KLH . [ 12 ] The Phase III GLORIA study is underway evaluating the carbohydrate-based immunogen's effects in high-risk triple-negative breast cancer (TNBC) patients, with an estimated completion date in 2027. [ 13 ]
Alternative vaccine conjugates have been developed which avoid issues associated with the protein carrier KLH by substituting it with a lipid or carbohydrate-based carrier. Examples include the use of lipid A derivatives [ 14 ] or entirely carbohydrate vaccine conjugates such as Globo H-PS A1. [ 15 ]
Globo H-targeting antibodies are another strategy currently being evaluated in the cancer therapeutic space. OBI Pharma's OBI-888 is a humanized IgG1 antibody that selectively binds to the Globo H antigen among other Globo series glycosphingolipids such as SSEA-3 and SSEA-4. [ 16 ] Additionally, in vivo studies of OBI-888 in various Globo H-positive (GH + ) xenografts models showed promising tumor growth inhibition results. [ 17 ] OBI-888's human Phase I/II study for the treatment of metastatic and locally advanced solid tumors is estimated to finish in December 2022. [ 18 ]
Based on OBI-888, the first-in-class antibody-drug conjugate (ADC) OBI-999 was additionally developed, linking OBI-888 to monomethyl auristatin E , a synthetic antineoplastic agent. [ 19 ] The ADC is currently undergoing a phase II trial in patients with advanced solid tumors, with an estimated completion date in December 2023. [ 20 ] In December 2019 and January 2020, OBI-999 was granted two Orphan Drug Designations by the FDA for the treatment of pancreatic and gastric cancer. [ 21 ] | https://en.wikipedia.org/wiki/Globo_H
A globoid is a spherical crystalline inclusion in a protein body found in seed tissues that contains phytate and other nutrients for plant growth. These are found in several plants, including wheat and the genus Cucurbita . These nutrients are eventually completely depleted during seedling growth. [ 1 ] [ 2 ] In Cucurbita maxima , globoids form as early as the 3rd day of seedling growth. [ 3 ] They are located in conjunction with a larger crystalloid . [ 4 ] They are electron–dense and vary widely in size. [ 5 ]
This botany article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Globoid_(botany) |
Globus Aerostaticus ( Latin for hot air balloon ) or Ballon Aerostatique (the French equivalent) was a constellation created by Jérôme Lalande in 1798. It lay between the constellations Piscis Austrinus , Capricornus and Microscopium . It is no longer in use. [ 1 ]
The constellation was created to honor the invention of the Montgolfier brothers , who launched the first hot air balloon in the late eighteenth century. [ 1 ]
This constellation -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Globus_Aerostaticus |
Globus Cassus is an art project and book by Swiss architect and artist Christian Waldvogel presenting a conceptual transformation of planet Earth into a much bigger, hollow, artificial world with an ecosphere on its inner surface. [ 1 ] It was the Swiss contribution to the 2004 Venice Architecture Biennale and was awarded the gold medal in the category "Most beautiful books of the World" at the Leipzig Book Fair in 2005. [ 2 ] [ 3 ] It consists of a meticulous description of the transformation process, a narrative of its construction, and suggestions on the organizational workings on Globus Cassus.
Waldvogel described it as an " open source " art project and stated that anyone could contribute designs and narratives to it on the project wiki. [ 4 ] As of August 2012, the Globus Cassus wiki is no longer operational.
The proposed megastructure would incorporate all of Earth's matter. Sunlight would enter through two large windows, and gravity would be simulated by the centrifugal effect . Humans would live on two vast regions that face each other and that are connected through the empty center. The hydrosphere and atmosphere would be retained on its inside. The ecosphere would be restricted to the equatorial zones, while at the low-gravity tropic zones a thin atmosphere would allow only for plantations. The polar regions would have neither gravity nor atmosphere and would therefore be used for storage of raw materials and microgravity production processes.
Globus Cassus has the form of a compressed geodesic icosahedron with two diagonal openings. Along the edges of the icosahedron run the skeleton beams, the gaps between the beams contain a shell and, where there are windows, inward-curving domes.
Earth's crust , mantle and core are gradually excavated, transported outwards and then transformed to larger strength and reduced density. While the crust is mined from open pits in the continents' centers, magma and the liquid mantle are pumped across transfer hoses. The core is dismantled from the surface.
Since the stationary cables would stay clear inside the moon 's trajectory, the construction of Globus Cassus would not alter the Earth-Moon system. However, on a planetary scale the proportions would be altered, with Globus Cassus being only slightly smaller than Saturn , the Solar System's second-largest planet.
Starting at four precisely defined points in the geostationary orbit , four space elevators are built. Eventually they become massive towers, each measuring several hundred kilometers in diameter and extending to a length of about 165,000 km. The towers contain elevators which are used to transport silicate building material to the construction sites at geostationary orbit.
The building material is converted into vacuum-porous aggregate and used to form the skeleton. It is built retaining constant symmetry and balance at every moment and will ultimately span around all sides of the earth. Then magma is pumped towards the skeleton, where it is used to form thin shells in the skeletal openings. Eight of these openings are fitted with large, inward-curving window domes made out of silicon glass.
Having been used up to a large degree, the Earth has shrunk, the polar ice caps have melted and the Earth's mass and therefore gravity has declined. This leads to the sudden loss of the atmosphere and hydrosphere , which wander outwards towards the new World. Globus Cassus' equator zones are equipped with a system of trenches and moulds that will become rivers , lakes and seas as soon as the water has settled. The transfer process of atmosphere and hydrosphere is called "The Great Rains".
The moment the Great Rains start, the Earth becomes uninhabitable. Along with massive amounts of seed for all existing plants, the regions of high cultural value, that need to be conserved and reapplied on Globus Cassus have been stored in the skeleton nodes which touch the towers. Humans and animals rise in the towers to await the end of the rains and start settling on the two equator regions.
The remaining Earth core is dismantled to build the shells that lie in the pole regions. During this process, the massive heat radiation of the core accelerates plant growth and therefore aids the process of establishing a functioning biosphere . | https://en.wikipedia.org/wiki/Globus_Cassus |
Gloger's rule is an ecogeographical rule which states that within a species of endotherms , more heavily pigmented forms tend to be found in more humid environments, e.g. near the equator . It was named after the zoologist Constantin Wilhelm Lambert Gloger , who first remarked upon this phenomenon in 1833 in a review of covariation of climate and avian plumage color. [ 1 ] Erwin Stresemann later noted that the idea had been expressed even earlier by Peter Simon Pallas in Zoographia Rosso-Asiatica (1811). [ 2 ] Gloger found that birds in more humid habitats tended to be darker than their relatives from regions with higher aridity . Over 90% of the 52 North American bird species studied conform to this rule. [ 3 ]
One explanation of Gloger's rule in the case of birds appears to be the increased resistance of dark feathers to feather- or hair-degrading bacteria such as Bacillus licheniformis . Feathers in humid environments have a greater bacterial load, and humid environments are more suitable for microbial growth; dark feathers or hair are more difficult to break down. [ 4 ] More resilient eumelanins (dark brown to black) are deposited in hot and humid regions, whereas in arid regions, pheomelanins (reddish to sandy color) predominate due to the benefit of crypsis .
Among mammals , there is a marked tendency in equatorial and tropical regions to have a darker skin color than poleward relatives. In this case, the underlying cause is probably the need to better protect against the more intense solar UV radiation at lower latitudes. However, absorption of a certain amount of UV radiation is necessary for the production of certain vitamins , notably vitamin D (see also osteomalacia ).
Gloger's rule is also found among human populations. [ 5 ] [ 6 ] Populations that evolved in sunnier environments closer to the equator tend to be darker-pigmented than populations originating farther from the equator. There are exceptions, however; among the most well known are the Tibetans and Inuit , who have darker skin than might be expected from their native latitudes. In the first case, this is apparently an adaptation to the extremely high UV radiation on the Tibetan Plateau , whereas in the second case, the necessity to absorb UV radiation is alleviated by the Inuit's diet, which is naturally rich in vitamin D , and the year-round snow and ice, which effectively reflects UV into the environment. [ citation needed ] | https://en.wikipedia.org/wiki/Gloger's_rule |
Glomalin is a hypothetical glycoprotein produced abundantly on hyphae and spores of arbuscular mycorrhizal (AM) fungi in soil and in roots . Glomalin was proposed in 1996 by Sara F. Wright, a scientist at the USDA Agricultural Research Service , but it has not yet been isolated and described. [ 1 ] The name comes from Glomerales , an order of fungi. Most AM fungi are of the division Glomeromycota . [ 2 ] An elusive [ clarification needed ] substance, it is mostly assumed to have a glue-like effect on soil, but it has not been isolated yet. [ 3 ]
The specific protein glomalin has not yet been isolated and described. [ 3 ] What has been described is an extraction process involving heat and citrate, producing a mixture containing a substance that is reactive to a monoclonal antibody Mab32B11 raised against crushed AM fungi spores. The substance is then provisionally named "glomalin". [ 4 ] As many laboratories do not have the equipment to perform an antibody-based isolation ( ELISA ), a crude mixture called glomalin-related soil proteins ( GRSP ) is used to refer to the extract portion reactive to the Bradford protein assay . There is significant confusion between the ideal glomalin protein, the antibody-reactive extract portion termed "glomalin", and GRSP. [ 4 ]
"Glomalin" was first detected by the Mab32B11 ELISA assay in 1987. According to the scientist that proposed the hypothetical protein, Sarah F. Wright, it eluded extraction until 1996 because "It requires an unusual effort to dislodge glomalin for study: a bath in citrate combined with heating at 250 °F (121 °C) for at least an hour.... No other soil glue found to date required anything as drastic as this." [ 1 ] However, using advanced analytical methods in 2010, the citrate-heating extraction procedure for GRSP was proven to co-extract humic substances , so it is still not clear if this "glue effect" comes from glomalin or the other substances that are co-extracted using that method. [ 4 ]
Based on her extraction, Wright thinks the "glomalin molecule is a clump of small glycoproteins with iron and other ions attached... glomalin contains from 1 to 9% tightly bound iron.... We've seen glomalin on the outside of hyphae, and we believe this is how the hyphae seal themselves so they can carry water and nutrients. It may also be what gives them the rigidity they need to span the air spaces between soil particles."
There is other circumstantial evidence to show that glomalin is of AM fungal origin. When AM fungi are eliminated from soil through incubation of soil without host plants, the concentration of GRSP declines. A similar decline in GRSP has also been observed in incubated soils from forested, afforested, and agricultural land [ 5 ] and grasslands treated with fungicide. [ 3 ]
Glomalin is not a synonym of GRSP. [ 4 ] Glomalin is the protein that binds to the Mab32B11 antibody, while GRSP is a crude extract containing many substances, including humic acids . [ 4 ]
The chemistry of GRSP is not yet fully understood, and the link between glomalin, GRSP, and AM fungi is not yet clear. [ 3 ] [ 4 ] The physiological function of glomalin in fungi is also a topic of current research. [ 6 ]
Glomalin is hypothesized to improve soil aggregate stability and decrease soil erosion . [ 5 ] However, since glomalin cannot be adequately quantified in soil, studies usually analyze the GRSP extract, which is a complex mixture including proteins and other substances. [ 4 ]
GRSPs, the mixture of proteins and humic substances , [ 4 ] are a significant component of soil organic matter and act to bind mineral particles together, improving soil quality . [ 1 ] [ 3 ] GRSPs have been investigated for their carbon and nitrogen storing properties, including as a potential method of carbon sequestration . [ 7 ] [ 8 ] Sampled GRSP takes 7–42 years to biodegrade and is thought to contribute up to 30 percent of the soil carbon where mycorrhizal fungi are present. The highest levels of GRSP were found in volcanic soils of Hawaii and Japan. [ 1 ] Concentrations of glomalin in soil were correlated with the primary productivity of an ecosystem . [ 7 ] A strong correlation has been found between GRSP and soil aggregate water stability in a wide variety of soils where organic material is the main binding agent, although the mechanism is not known. [ 3 ] | https://en.wikipedia.org/wiki/Glomalin
Glomeromycota (often referred to as glomeromycetes , as they include only one class, Glomeromycetes) are one of eight currently recognized divisions within the kingdom Fungi , [ 3 ] with approximately 230 described species. [ 4 ] Members of the Glomeromycota form arbuscular mycorrhizas (AMs) with the thalli of bryophytes and the roots of vascular land plants . Not all species have been shown to form AMs, and one, Geosiphon pyriformis , is known not to do so. Instead, it forms an endocytobiotic association with Nostoc cyanobacteria . [ 5 ] The majority of evidence shows that the Glomeromycota are dependent on land plants ( Nostoc in the case of Geosiphon ) for carbon and energy, but there is recent circumstantial evidence that some species may be able to lead an independent existence. [ 6 ] The arbuscular mycorrhizal species are terrestrial and widely distributed in soils worldwide where they form symbioses with the roots of the majority of plant species (>80%). They can also be found in wetlands , including salt-marshes, and associated with epiphytic plants.
According to multigene phylogenetic analyses, this taxon is located as a member of the phylum Mucoromycota . [ 7 ] Currently, the phylum name Glomeromycota is invalid, and the subphylum Glomeromycotina should be used to describe this taxon. [ 8 ]
The Glomeromycota have generally coenocytic (occasionally sparsely septate ) mycelia and reproduce asexually through blastic development of the hyphal tip to produce spores [ 2 ] (glomerospores, blastospores) with diameters of 80–500 μm . [ 9 ] In some, complex spores form within a terminal saccule. [ 2 ] Recently it was shown that Glomus species contain 51 genes encoding all the tools necessary for meiosis . [ 10 ] Based on these and related findings, it was suggested that Glomus species may have a cryptic sexual cycle. [ 10 ] [ 11 ] [ 12 ]
New colonization by AM fungi largely depends on the amount of inoculum present in the soil. [ 13 ] Although pre-existing hyphae and infected root fragments have been shown to colonize the roots of a host successfully, germinating spores are considered to be the key players in new host establishment. Spores are commonly dispersed by fungal and plant burrowing herbivore partners, but some air dispersal capabilities are also known. [ 14 ] Studies have shown that spore germination is specific to particular environmental conditions, such as the right amount of nutrients, temperature, or host availability. It has also been observed that the rate of root system colonization is directly correlated with spore density in the soil. [ 13 ] In addition, new data also suggest that AM fungi host plants secrete chemical factors that attract and enhance the growth of developing spore hyphae towards the root system. [ 14 ]
The necessary components for the colonization of Glomeromycota include the host's fine root system, proper development of intracellular arbuscular structures, and a well-established external fungal mycelium . Colonization is accomplished by the interactions between germinating spore hyphae and the root hairs of the host or by the development of appressoria between epidermal root cells. The process is regulated by specialized chemical signaling and changes in gene expression of both the host and AM fungi. Intracellular hyphae extend up to the cortical cells of the root and penetrate the cell walls but not the inner cellular membrane creating an internal invagination . The penetrating hyphae develop a highly branched structure called an arbuscule , which has low functional periods before degradation and absorption by the host's root cells. A fully developed arbuscular mycorrhizal structure facilitates the two-way movement of nutrients between the host and mutualistic fungal partner. The symbiotic association allows the host plant to respond better to environmental stresses, and the non-photosynthetic fungi to obtain carbohydrates produced by photosynthesis. [ 14 ]
Initial studies of the Glomeromycota were based on the morphology of soil-borne sporocarps (spore clusters) found in or near colonized plant roots. [ 15 ] Distinguishing features such as wall morphologies, size, shape, color, hyphal attachment and reaction to staining compounds allowed a phylogeny to be constructed. [ 16 ] Superficial similarities led to the initial placement of genus Glomus in the unrelated family Endogonaceae . [ 17 ] Following broader reviews that cleared up the sporocarp confusion, the Glomeromycota were first proposed in the genera Acaulospora and Gigaspora [ 18 ] before being accorded their own order with the three families Glomaceae (now Glomeraceae ), Acaulosporaceae and Gigasporaceae. [ 19 ]
With the advent of molecular techniques this classification has undergone major revision. An analysis of small subunit (SSU) rRNA sequences [ 20 ] indicated that they share a common ancestor with the Dikarya . [ 2 ] Nowadays it is accepted that Glomeromycota consists of 4 orders. [ 21 ]
Diversisporales
Glomerales
Archaeosporales
Paraglomerales
Several species which produce glomoid spores (i.e. spores similar to Glomus ) in fact belong to other deeply divergent lineages [ 22 ] and were placed in the orders, Paraglomerales and Archaeosporales . [ 2 ] This new classification includes the Geosiphonaceae , which presently contains one fungus ( Geosiphon pyriformis ) that forms endosymbiotic associations with the cyanobacterium Nostoc punctiforme [ 23 ] and produces spores typical to this division, in the Archaeosporales .
Work in this field is incomplete, and members of Glomus may be better suited to different genera [ 24 ] or families. [ 9 ]
The biochemical and genetic characterization of the Glomeromycota has been hindered by their biotrophic nature, which impedes laboratory culturing. This obstacle was eventually surpassed with the use of root cultures and, most recently, a method which applies sequencing of single nucleus from spores has also been developed to circumvent this challenge. [ 25 ] The first mycorrhizal gene to be sequenced was the small-subunit ribosomal RNA (SSU rRNA). [ 26 ] This gene is highly conserved and commonly used in phylogenetic studies so was isolated from spores of each taxonomic group before amplification through the polymerase chain reaction (PCR). [ 27 ] A metatranscriptomic survey of the Sevilleta Arid Lands found that 5.4% of the fungal rRNA reads mapped to Glomeromycota. This result was inconsistent with previous PCR-based studies of community structure in the region, suggesting that previous PCR-based studies may have underestimated Glomeromycota abundance due to amplification biases. [ 28 ] | https://en.wikipedia.org/wiki/Glomeromycota |
This is a list of the notation used in Alfred North Whitehead and Bertrand Russell 's Principia Mathematica (1910–1913).
The second (but not the first) edition of Volume I has a list of notation used at the end.
This is a glossary of some of the technical terms in Principia Mathematica that are no longer widely used or whose meaning has changed. | https://en.wikipedia.org/wiki/Glossary_of_Principia_Mathematica |
This glossary of aerospace engineering terms pertains specifically to aerospace engineering , its sub-disciplines, and related fields including aviation and aeronautics . For a broad overview of engineering, see glossary of engineering .
This stabilizes the ballute as it decelerates through different flow regimes (from supersonic to subsonic). | https://en.wikipedia.org/wiki/Glossary_of_aerospace_engineering |
This page is a glossary of architecture . | https://en.wikipedia.org/wiki/Glossary_of_architecture |
Mathematics is a broad subject that is commonly divided in many areas or branches that may be defined by their objects of study , by the used methods, or by both. For example, analytic number theory is a subarea of number theory devoted to the use of methods of analysis for the study of natural numbers .
This glossary is alphabetically sorted. This hides a large part of the relationships between areas. For the broadest areas of mathematics, see Mathematics § Areas of mathematics . The Mathematics Subject Classification is a hierarchical list of areas and subjects of study that has been elaborated by the community of mathematicians. It is used by most publishers for classifying mathematical articles and books.
Also called infinitesimal calculus
Also called absolute differential calculus . | https://en.wikipedia.org/wiki/Glossary_of_areas_of_mathematics |
This glossary of astronomy is a list of definitions of terms and concepts relevant to astronomy and cosmology , their sub-disciplines, and related fields. Astronomy is concerned with the study of celestial objects and phenomena that originate outside the atmosphere of Earth . The field of astronomy features an extensive vocabulary and a significant amount of jargon.
Also visual brightness (V) .
Also argument of perifocus or argument of pericenter .
Also the north node .
Also exobiology .
Also planetary geology .
Also celestial body .
Also spelled astronomical catalog .
Also celestial object .
Also obliquity .
Also critical velocity or critical rotation .
Also spelled circumstellar disk .
Also compact object .
Also space dust .
Also cosmic microwave background radiation (CMBR) .
Also break-up velocity .
Also meridian transit .
Also the south node .
Also distant detached object and extended scattered disc object .
Also ecliptic plane or plane of the ecliptic .
Also elliptic orbit .
Also exoplanet .
Also the Cusp of Aries .
Also background stars .
Also galactic core or galactic center .
Also galactic year or cosmic year .
Also group of galaxies (GrG) .
Also geosynchronous equatorial orbit ( GEO ).
Also the Hill radius .
Also Laplace's invariable plane or the Laplace plane .
Also Keplerian orbit .
Also Edgeworth–Kuiper belt .
Also Lagrange point , libration point , or L-point .
Also the Laniakea Supercluster , Local Supercluster , or Local SCI .
Also Moon phase .
Also the Northward equinox .
Also shooting star or falling star .
Also normalized polar moment of inertia .
Also minor moon or minor natural satellite .
Also MK classification .
Also rise width .
Also stellar association .
Also bare eye or unaided eye .
Also moon .
Also arc length .
Also the Öpik–Oort cloud .
Also orbital plot .
Also revolution period .
Also simply called space .
Also pericenter .
Also reference plane .
Also planetary object .
Also sometimes called planetology .
Also planemo or planetary body .
Also gravitational primary , primary body , or central body .
Also direct motion .
Also quasi-stellar radio source
Also interstellar planet , nomad planet , orphan planet , and starless planet .
Also twinkling .
Also major semi-axis .
Also southward equinox .
Also positional astronomy .
Also standard acceleration due to gravity .
Also spelled star catalog .
Also stellar system .
Also stellar envelope .
Also spectral classification .
Also simply stellar model .
Also substar .
Also synodic rotation period .
Also tidal acceleration .
Also Tisserand parameter .
Also the Johnson system or Johnson–Morgan system .
Also the Local Supercluster ( LSC or LC ).
An acronym of X-ray bright optically normal galaxy . | https://en.wikipedia.org/wiki/Glossary_of_astronomy |
This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology , the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology , Glossary of genetics , Glossary of evolutionary biology , Glossary of ecology , Glossary of environmental science and Glossary of scientific naming , or any of the organism-specific glossaries in Category:Glossaries of biology .
Also called an antibacterial .
Also called selective breeding .
Sometimes used interchangeably with primary producer .
Also called the biosynthetic phase , light-independent reactions , dark reactions , or photosynthetic carbon reduction (PCR) cycle .
Also called carbon assimilation .
Also called cytology .
Also called the Krebs cycle and tricarboxylic acid cycle (TCA) .
Also called the macula adhaerens .
Also called a trophic pyramid , Eltonian pyramid , energy pyramid , or sometimes food pyramid .
Sometimes called an ecospecies .
Also called a nonspontaneous reaction or unfavorable reaction .
Also called symbiogenesis .
Also spelled foetus .
(pl.) flagella
(pl.) foramina
Also called an exotic species , foreign species , alien species , non-native species , or non-indigenous species .
Also called a white blood cell .
(sing.) mitochondrion
Also called neuroscience .
Also called autoecology .
Also called behavioral neuroscience , biological psychology , and biopsychology .
Also called procreation or breeding .
(pl.) taxa
Also called a neoplasm . | https://en.wikipedia.org/wiki/Glossary_of_biology |
This glossary of botanical terms is a list of definitions of terms and concepts relevant to botany and plants in general. Terms of plant morphology are included here as well as at the more specific Glossary of plant morphology and Glossary of leaf morphology . For other related terms, see Glossary of phytopathology , Glossary of lichen terms , and List of Latin and Greek words commonly used in systematic names .
pl. adelphiae
Also graminology .
pl. apices
pl. aphlebiae
adj. apomictic
pl. arboreta
Plural archegonia .
pl. brochi
pl. calli
pl. calyces
pl. caudices
adj. cauliflorous
sing. cilium ; adj. ciliate
adj. clinal
adj. cormose , cormous
pl. cortexes or cortices
adj. corymbose
pl. cyathia
adj. cymose
Also abbreviated dicot .
Also spelled disk .
sing. domatium
Also aglandular
Also elliptic .
adj. fasciculate
pl. fimbriae
pl. genera
Also globular .
Also gramineous
pl. herbaria
(never capitalized)
adj. keeled
pl. lamellae
adj. lamellate
Also midvein .
dim. mucronule .
pl. mycorrhizae ; adj. mycorrhizal
adj. mycotrophic
adj. nectariferous
Also spelled ochrea .
Also imparipinnate
pl. opera utique oppressa
pl. paleae
adj. paniculate
pl. papillae ; adj. papillose or papillate
Also paraperigone .
Also patulous .
adj. pedicellate
adj. pedunculate
adj. penninerved
adj. perulate
adj. phyllodineous
Also phytomelanin ; adj. phytomelanous
pl. pinnae
adj. prickly
Also puberulent .
adj. racemose ,
pl. rachises or rachides
adj. saprophytic
adj. saprotrophic
Also scabrous
adj. scapose
adj. sclerophyllous
pl. septa
pl. setae; adj. setose , setaceous
pl. sori
adj. spathaceous
adj. spicate
adj. spicate
adj. spinose
pl. squamulae ; adj. squamulose
adj. staminate
Also male flower .
Also stipel ; pl. stipellae
Also runner .
pl. stomata
pl. strobili
Also undershrub
pl. suffrutices
Also sym- .
pl. taxa
Also semiterete
Obsolete
pl. thalli
Also tomentose
Often variety in common usage and abbreviated as var.
Also nerve .
Diminutive: virgulate
pl. vittae | https://en.wikipedia.org/wiki/Glossary_of_botanical_terms |
Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together. You can help enhance this page by adding new terms or writing definitions for existing ones.
This glossary of calculus is a list of definitions about calculus , its sub-disciplines, and related fields.
where b is a positive real number, and in which the argument x occurs as an exponent. For real numbers c and d, a function of the form f(x) = ab^{cx+d} is also an exponential function, since it can be rewritten as ab^{cx+d} = (ab^{d})(b^{c})^{x}. | https://en.wikipedia.org/wiki/Glossary_of_calculus
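A quick numerical check of the identity in the exponential-function entry above; the constants chosen here are arbitrary, and the snippet is illustrative rather than part of the glossary.

# Verify that a*b**(c*x + d) equals (a*b**d) * (b**c)**x for sample values,
# i.e. every function of the form a*b**(c*x+d) is itself exponential with base b**c.
a, b, c, d = 2.0, 3.0, 0.5, 1.5   # arbitrary test constants, with b > 0
for x in (-2.0, 0.0, 1.0, 4.2):
    lhs = a * b ** (c * x + d)
    rhs = (a * b ** d) * (b ** c) ** x
    assert abs(lhs - rhs) < 1e-9 * abs(lhs)
print("identity holds for the sampled x values")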
This glossary of cellular and molecular biology is a list of definitions of terms and concepts commonly used in the study of cell biology , molecular biology , and related disciplines, including genetics , biochemistry , and microbiology . [ 1 ] It is split across two articles: Glossary of cellular and molecular biology (0–L) and Glossary of cellular and molecular biology (M–Z) .
This glossary is intended as introductory material for novices (for more specific and technical detail, see the article corresponding to each term). It has been designed as a companion to Glossary of genetics and evolutionary biology , which contains many overlapping and related terms; other related glossaries include Glossary of virology and Glossary of chemistry .
Also three-prime untranslated region , 3' non-translated region (3'-NTR) , and trailer sequence .
Also three-prime end .
Also five-prime cap .
Also five-prime untranslated region , 5' non-translated region (5'-NTR) , and leader sequence .
Also five-prime end .
Also binding site and catalytic site .
Also sex chromosome , heterochromosome , or idiochromosome .
Also differential splicing or simply splicing .
Also tRNA-ligase .
Also aminoacylated tRNA and charged tRNA .
Also antisense transcript and antisense oligonucleotide (ASO) .
Also anuclear .
Also compound X .
Also biological molecule .
Also 5-bromodeoxyuridine .
Also CAAT box or CAT box .
Also cellular biology .
Also plasma membrane , cytoplasmic membrane , and plasmalemma .
Also cell communication .
Also cell-mediated immunity .
Also map unit (m.u.) .
(pl.) chiasmata
Also idiomere .
Also crossing over .
(pl.) cilia
Also cis -regulatory module (CRM) .
(pl.) cisternae
Also sense strand , positive (+) sense strand , and nontemplate strand .
Also abbreviated SHCoA and CoASH .
Also copy DNA .
Also confluency .
Also canonical sequence .
Also contact inhibition of growth or density-dependent inhibition .
Also cooperative binding .
Also CG island and C-G island .
Also CG site and C-G site .
Also CRISPR/Cas9 gene editing .
(pl.) cristae
Also cross-link .
Also carboxyl terminus .
Also C-value paradox .
Also protoplasmic streaming and cyclosis .
Also hyaloplasm and groundplasm .
Denoted in shorthand with the symbol Δ .
Abbreviated in shorthand with dA .
Abbreviated in shorthand with dC .
Abbreviated in shorthand with dG .
Also 2-deoxyribose .
Denoted in shorthand with the somatic number 2n .
Also diplotene stage .
Also repression or suppression .
Also electropermeabilization .
Also extension .
Also antigenic determinant .
Also open chromatin .
Also expression construct .
Also intercellular matrix .
Also extranuclear DNA and cytoplasmic DNA .
(pl.) flagella
Formerly known by the abbreviation MGED .
Also Giemsa banding or G-banding .
Also gene amplification .
Also genetic modification or genetic manipulation .
Also DNA testing or genetic screening .
Also chromosome walking .
Also chromosomal DNA .
Also abbreviated GC-content .
Also single guide RNA (sgRNA) .
Denoted in shorthand with the somatic number n .
Also inheritance .
Also histone octamer and core particle .
Also homeodomain responsive element .
Also homologs or homologues .
Also lateral gene transfer (LGT) .
Sometimes used interchangeably with lipophilic .
Also ideogram .
Also insertion element or simply insert .
Also intrinsic membrane protein .
Also transmembrane protein .
Also interphase II .
Also intragenic region .
Also karyosphere .
Also simply Kozak sequence .
Also tagging .
Also donor splicing junction or donor splicing site .
Also leptotene stage .
Also phospholipid bilayer .
Plural loci .
Denoted in shorthand with the symbol q . | https://en.wikipedia.org/wiki/Glossary_of_cellular_and_molecular_biology_(0–L) |
This glossary of cellular and molecular biology is a list of definitions of terms and concepts commonly used in the study of cell biology , molecular biology , and related disciplines, including molecular genetics , biochemistry , and microbiology . [ 1 ] It is split across two articles: Glossary of cellular and molecular biology (0–L) and Glossary of cellular and molecular biology (M–Z) .
This glossary is intended as introductory material for novices (for more specific and technical detail, see the article corresponding to each term). It has been designed as a companion to Glossary of genetics and evolutionary biology , which contains many overlapping and related terms; other related glossaries include Glossary of virology and Glossary of chemistry .
Also meganucleus .
Also next-generation sequencing (NGS) and second-generation sequencing .
Also short tandem repeat (STR) or simple sequence repeat (SSR) .
(pl.) microtrabeculae
Also ectosome and microparticle .
Also mispairing .
(pl.) mitochondria ; also formerly chondriosome .
Also M phase .
Also somatic crossing over .
Also phosphorodiamidate Morpholino oligomer .
Also polylinker .
Also negative regulation .
Sometimes used interchangeably with nucleobase or simply base .
Also non-standard amino acid .
Also point-nonsense mutation .
Also nonsynonymous substitution or replacement mutation .
Also amine terminus and amino terminus .
Also nuclear localization sequence .
Sometimes used interchangeably with nitrogenous base or simply base .
Also prokaryon .
Also karyoplasm .
Also karyolymph or nuclear hyaloplasm .
Also nucleoside monophosphate (NMP) .
pl. nuclei
Also abbreviated oligo .
Also one gene–one protein or one gene–one enzyme .
Also umber .
Also replication origin or simply origin .
Also osmotic stress .
Also Tumor protein P53 (TP53) , transformation-related protein 53 (TRP53) , and cellular tumor antigen p53 .
Also pachytene stage .
Also palindrome .
Also Pasteur-Meyerhof effect .
Also extrinsic membrane protein .
Also periplasm .
Also phosphodiester backbone , sugar–phosphate backbone , and phosphate–sugar backbone .
Also polyribosome or ergosome .
Also map-based cloning .
Also positive regulation .
Also blast cell .
Also peptidase .
Also protein targeting .
Abbreviated in shorthand with the letter R .
Also pycnosis or karyopyknosis .
Abbreviated in shorthand with the letter Y .
Also real-time PCR (rtPCR) .
Also repetitious DNA .
Also replication bubble .
Also Y fork .
Also restriction endonuclease , restriction exonuclease , or restrictase .
Also restriction recognition site .
Also ribonucleoside diphosphate reductase .
Often abbreviated RNAP or RNApol .
Also synthesis phase or synthetic phase .
Also selfish DNA or parasitic DNA .
Denoted in shorthand with the symbol p .
Also vegetal cell or soma .
Also intergenic spacer (IGS) or non-transcribed spacer (NTS) .
Also hairpin or hairpin loop .
Also termination codon .
Also sumoylation .
Also symplasm ; pl. syncytia .
Also synonymous substitution or samesense mutation .
Also Goldberg-Hogness box .
Also antisense strand , negative (-) sense strand , and noncoding strand .
Also deoxythymidine .
Also 5-methyluracil .
Also transcription initiation site .
Formerly referred to as soluble RNA (sRNA) .
Also transporter .
Also transposon .
Also triacylglycerol and triacylglyceride .
Also tropic movement .
Also turgidity .
Also ubiquitylation .
Also non-repetitive DNA .
Also promotion .
Denoted in shorthand with a + superscript.
Also zygotene stage . | https://en.wikipedia.org/wiki/Glossary_of_cellular_and_molecular_biology_(M–Z) |
This is a list of common chemical compounds with chemical formulae and CAS numbers , indexed by formula. This complements the alternative listing at list of inorganic compounds .
There is no complete list of chemical compounds since by nature the list would be infinite.
Note: There are elements for which spellings may differ, such as aluminum/aluminium, sulfur/sulphur, and caesium/cesium. | https://en.wikipedia.org/wiki/Glossary_of_chemical_formulae |
This glossary of chemistry terms is a list of terms and definitions relevant to chemistry , including chemical laws, diagrams and formulae, laboratory tools, glassware, and equipment. Chemistry is a physical science concerned with the composition, structure, and properties of matter , as well as the changes it undergoes during chemical reactions ; it features an extensive vocabulary and a significant amount of jargon.
Note: All periodic table references refer to the IUPAC Style of the Periodic Table.
Also acid ionization constant or acidity constant .
Also actinoids .
Also paraffin .
Also olefin .
Also acetylene .
Also enplethy , chemical amount , or simply amount .
Also amphiprotic .
Also proton number .
Also kindling point .
Also main chain .
Also Rutherford–Bohr model .
Also ebullition .
Also Florence flask .
Also vaporization point .
Also simply called a buffer .
Also stopper or cork .
Also spelled buret .
Also simply CAS Number .
Also simply called a chemical .
Also pure substance or simply substance .
Also chromometer .
Also molecular bond .
Also unified atomic mass unit ( u ).
Also drying agent .
Also hydrogen-2 or heavy hydrogen , and symbolized 2 H or D .
Also coordinate covalent bond , coordinate bond , dative bond, and semipolar bond .
Also solvation .
Also malleability .
Also electron magnetic moment .
Also crystallization point .
Also depression of freezing point .
Also family .
Also simply called Hess' law .
Informally synonymous with proton .
Also universal gas constant .
Also general gas equation .
Also ketoacid .
Also lanthanoids .
Also referred to as visible light .
Also atomic mass number or nucleon number .
Also liquefaction point .
Also carbinyl .
Also molality .
Also molarity , amount concentration , or substance concentration .
Also mole fraction .
Sometimes used interchangeably with molecular weight and formula weight .
Also inert gas .
Also Lewis octet rule .
Also orbital hybridization .
Also osmolarity .
Also oxidation number .
Also oxidant , oxidizer , or electron acceptor .
Also oxyacid or oxacid .
Also amyl .
Also simply the periodic table .
Also peroxide and sometimes peroxo .
Also spelled pipet .
Also protogenic .
(pl.) quanta
Also free radical .
Also radioisotope .
Also called rare-earth metals or used interchangeably with lanthanides .
Also rate law .
Also rate-limiting step .
Sometimes used interchangeably with reagent .
Also simply intermediate .
Also activity series .
Also reductant , reducer , or electron donor .
Also ultrasonication .
Also massic heat capacity .
Also stereocenter .
Also spatial isomer .
Also constitutional isomer .
Also titrimetry or volumetric analysis .
Also superheavy elements .
Also transuranium elements .
Also Dewar flask or thermos .
Also equilibrium vapor pressure .
Also boiling .
Also water of hydration .
Also bench chemistry or classical chemistry .
Also inner salt and dipolar ion . | https://en.wikipedia.org/wiki/Glossary_of_chemistry_terms |
This glossary of civil engineering terms is a list of definitions of terms and concepts pertaining specifically to civil engineering , its sub-disciplines, and related fields. For a more general overview of concepts within engineering as a whole, see Glossary of engineering .
Also Abrams' water-cement ratio law . [ 3 ]
Also decadic absorbance .
Also paraffin .
Also non-crystalline solid .
Also building engineering or architecture engineering .
Also statement of financial position .
Also sometimes capillarity , capillary motion , capillary effect , or wicking .
Also called Dalton's law of partial pressures .
Also called engineering science .
Also house wrap .
Also ultimate strength or simply tensile strength ( TS ). | https://en.wikipedia.org/wiki/Glossary_of_civil_engineering |
The terminology of algebraic geometry changed drastically during the twentieth century, with the introduction of the general methods, initiated by David Hilbert and the Italian school of algebraic geometry in the beginning of the century, and later formalized by André Weil , Jean-Pierre Serre and Alexander Grothendieck . Much of the classical terminology, mainly based on case study, was simply abandoned, with the result that books and papers written before this time can be hard to read. This article lists some of this classical terminology, and describes some of the changes in conventions.
Dolgachev ( 2012 ) translates many of the classical terms in algebraic geometry into scheme-theoretic terminology. Other books defining some of the classical terminology include Baker ( 1922a , 1922b , 1923 , 1925 , 1933a , 1933b ), Coolidge (1931) , Coxeter (1969) , Hudson (1990) , Salmon (1879) , Semple & Roth (1949) .
On the other hand, while most of the material treated in the book exists in classical treatises in algebraic geometry, their somewhat archaic terminology and what is by now completely forgotten background knowledge makes these books useful to but a handful of experts in the classical literature.
The change in terminology from around 1948 to 1960 is not the only difficulty in understanding classical algebraic geometry. There was also a lot of background knowledge and assumptions, much of which has now changed. This section lists some of these changes.
...we refer to a certain degree of informality of language, sacrificing precision to brevity, ..., and which has long characterized most geometrical writing. ...[The meaning] depends always on the context and is invariably assumed to be capable of unambiguous interpretation by the reader.
Most particularly we refer to the recurrent use of such adjectives as `general' or `generic', or such phrases as `in general', whose meaning, wherever they are used, depends always on the context and is invariably assumed to be capable of unambiguous interpretation by the reader. | https://en.wikipedia.org/wiki/Glossary_of_classical_algebraic_geometry |
This glossary of computer science is a list of definitions of terms and concepts used in computer science , its sub-disciplines, and related fields, including terms relevant to software , data science , and computer programming .
Also simply application or app .
Also simply array .
Also machine intelligence .
Also simply binary search , half-interval search , [ 24 ] logarithmic search , [ 25 ] or binary chop . [ 26 ]
Also bitrate .
Also block list .
Also bitmap image file , device independent bitmap (DIB) file format , or simply bitmap .
Also cypher .
Also class-orientation .
Also lexical closure or function closure .
Also theoretical neuroscience or mathematical neuroscience .
Also scientific computing and scientific computation ( SC ).
Also simply storage or memory .
Also data network .
Also cybersecurity [ 67 ] or information technology security ( IT security ).
Also conditional statement , conditional expression , and conditional construct .
Also flow of control .
Also cyberharassment or online bullying .
Also data centre .
Also simply type .
Also executable code , executable file , executable program , or simply executable .
Also for-loop .
Also informally io or IO .
Also fetch–decode–execute cycle or simply fetch-execute cycle .
Also web robot , robot , or simply bot .
Also sequential search .
Also mergesort .
Portmanteau of modulator-demodulator .
Also object module .
Also formal argument .
Also partition-exchange sort .
Also base .
Also rounding error . [ 185 ]
Colloquially web address . [ 231 ]
Also user interface engineering .
Also WAVE or WAV due to its filename extension .
Also spider , spiderbot , or simply crawler .
Abbreviation of eXtensible HyperText Markup Language .
The following is a glossary of terms relating to construction cost estimating . | https://en.wikipedia.org/wiki/Glossary_of_construction_cost_estimating |
This glossary of developmental biology is a list of definitions of terms and concepts commonly used in the study of developmental biology and related disciplines in biology , including embryology and reproductive biology , primarily as they pertain to vertebrate animals and particularly to humans and other mammals. The developmental biology of invertebrates, plants, fungi, and other organisms is treated in other articles; e.g. terms relating to the reproduction and development of insects are listed in Glossary of entomology , and those relating to plants are listed in Glossary of botany .
This glossary is intended as introductory material for novices; for more specific and technical detail, see the article corresponding to each term. Additional terms relevant to vertebrate reproduction and development may also be found in Glossary of biology , Glossary of cell biology , Glossary of genetics , and Glossary of evolutionary biology .
Also gastrocoel .
Also blastocoele , blastocele , cleavage cavity , and segmentation cavity .
Also serosa and false amnion .
Also diestrus .
Also embryogeny .
Also oestrous cycle . | https://en.wikipedia.org/wiki/Glossary_of_developmental_biology |
This glossary of ecology is a list of definitions of terms and concepts in ecology and related fields. For more specific definitions from other glossaries related to ecology, see Glossary of biology , Glossary of evolutionary biology , and Glossary of environmental science .
Also Gause's law .
Also ecoevolution .
Also aposematism . | https://en.wikipedia.org/wiki/Glossary_of_ecology |
This glossary of electrical and electronics engineering is a list of definitions of terms and concepts related specifically to electrical engineering and electronics engineering . For terms related to engineering in general, see Glossary of engineering . | https://en.wikipedia.org/wiki/Glossary_of_electrical_and_electronics_engineering |
This is a glossary for the terminology often encountered in undergraduate quantum mechanics courses.
Cautions:
In this situation, the SE is given by the form \(i\hbar \frac{\partial}{\partial t}\Psi_{\alpha}(\mathbf{r},t) = \hat{H}\Psi_{\alpha}(\mathbf{r},t) = \left(-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\right)\Psi_{\alpha}(\mathbf{r},t) = -\frac{\hbar^{2}}{2m}\nabla^{2}\Psi_{\alpha}(\mathbf{r},t) + V(\mathbf{r})\Psi_{\alpha}(\mathbf{r},t)\). It can be derived from (1) by considering \(\Psi_{\alpha}(x,t):=\langle x|\alpha\rangle\) and \(\hat{H}:=-\frac{\hbar^{2}}{2m}\nabla^{2}+\hat{V}\). | https://en.wikipedia.org/wiki/Glossary_of_elementary_quantum_mechanics |
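A standard next step worth noting here (a textbook fact, not stated in the excerpt above): when the potential \(V(\mathbf{r})\) does not depend on time, separable solutions reduce this equation to the time-independent Schrödinger equation.

```latex
% Sketch, assuming a time-independent potential V(r):
% substituting the separable ansatz Psi(r,t) = psi(r) exp(-iEt/hbar)
% into the position-space equation above yields
\[
  \hat{H}\,\psi(\mathbf{r})
    = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r})
      + V(\mathbf{r})\,\psi(\mathbf{r})
    = E\,\psi(\mathbf{r}),
  \qquad
  \Psi_{\alpha}(\mathbf{r},t) = \psi(\mathbf{r})\,e^{-iEt/\hbar}.
\]
```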