The Gewässerkennzahl (GKZ, rarely GWK or GEWKZ), or "waterbody index number"/"waterbody number", is an identifier with which all watercourses in Germany are numbered, together with their catchments and precipitation areas. It is also referred to as a Gebietskennzahl or "basin number". A Gewässerkennzahl may have up to 13 digits (theoretically even 19). Basins are normally defined only up to seven digits. For a more detailed subdivision, the Gewässerkennzahl may be extended by ten more digits; only that extended version is called a Fließgewässerkennziffer. The Gewässerkennzahlen are defined by the environment offices of the states.
In order to have comparable values and usable data across Germany, the federal and state water authorities agreed in December 1970 to create a unified system for hydrological work on certain important rivers and their above-ground catchment areas and to issue them with index numbers. Linked to that was the establishment of the size and boundaries of their catchment areas.
Every waterbody (streams, rivers, canals and ditches, but also lakes and even some bays) and its catchment area was given a waterbody number in such a way that it could be clearly identified. The waterbody numbers are built in hierarchical fashion, so that the river system to which a waterbody belongs can be deduced from its number.
First, the course of the watercourse is defined from source to mouth. Then its four major tributaries (or 'affluents') are identified; they are marked with even digits in downstream sequence: -2, -4, -6, -8. In this way, the (main) course is divided into five sections, which are marked with odd digits: -1, -3, -5, -7, -9. A number with an even final digit thus denotes a whole watercourse, while a number with an odd final digit denotes a section. The lowest section is always given a nine, even if not all digits between one and nine have been used. In the next step of numbering, each section defined in the first step is treated in the same way: four major tributaries are selected and five sections are marked.
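This hierarchical convention lends itself to a compact illustration. The following Python sketch (an illustrative toy, not an official tool; the function name and output format are invented here, and the special coastal-region and state-level rules described below are ignored) decomposes a waterbody number digit by digit according to the even/odd rule:

```python
def gkz_hierarchy(gkz: str):
    """Decompose a waterbody number (GKZ) into its hierarchy levels.

    The first digit names the major river basin; each further digit
    refines the classification. Per the even/odd convention, an even
    digit denotes a whole tributary watercourse and an odd digit a
    section of the parent course.
    """
    levels = [(gkz[0], "major river basin")]
    for i in range(2, len(gkz) + 1):
        prefix = gkz[:i]
        if int(prefix[-1]) % 2 == 0:
            levels.append((prefix, "whole watercourse (tributary)"))
        else:
            levels.append((prefix, "section of the parent course"))
    return levels

# Example: the Heusiepen's number 27366462 (see below) ends in an even
# digit, so it denotes a whole (small) watercourse.
for prefix, kind in gkz_hierarchy("27366462"):
    print(f"{prefix:<10} {kind}")
```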
Lakes are integrated as a part of the watercourse formed by their main tributary and their outlet.
For coastal regions, the scheme of numbering was altered in different ways by different states:
Some waterbodies indexed under this classification system have un-indexed headstreams or lateral tributaries in the form of very small streams or ditches. If such an unclassified waterbody is relevant for the water management of the region, it may be given a number within the local classification system of the regional Wasser- und Bodenverband (association for water and ground management).
The numbers for a watercourse and its catchment area are thus identical. If a river has subsidiary watercourses, an additional figure is allocated to its index number for each further branch. So in theory even a rivulet could be allocated its own catchment. In practice, catchment numbering for water sources below the level of streams is not used.
Catchment areas are thus distinguished by a number with up to a maximum of seven digits. Watercourse index numbers, by contrast, may have up to 13 digits in order to be able to classify all their tributaries and headstreams; although in practice only 10 digits are used.
The first digit of the number indicates which major river basin the waterbody belongs to, as follows:
The second and subsequent digits of the index number represent further subdivisions of the river and its catchment area.
The Heusiepen stream in Remscheid has waterbody number 27366462. This can be decoded as follows:
Listed below are all rivers with index numbers of up to three figures, together with some rivers longer than 50 km that have four-figure numbers.
Some three-figure numbers are not listed here. | https://en.wikipedia.org/wiki/Gewässerkennzahl |
Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms [ 1 ] – are the two most visible types of features ascribed to these eruptions.
Martian geysers are distinct from geysers on Earth, which are typically associated with hydrothermal activity, and are unlike any known terrestrial geological phenomenon. The reflectance (albedo), shapes and unusual spider-like appearance of these features have stimulated a variety of hypotheses about their origin, ranging from differences in frost reflectance to explanations involving biological processes. However, all current geophysical models assume some sort of jet- or geyser-like activity on Mars. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] Their characteristics, and the process of their formation, are still a matter of debate.
These features are unique to the south polar region of Mars, in an area informally called the 'cryptic region', at latitudes 60° to 80° south and longitudes 150°W to 310°W; [ 11 ] [ 12 ] [ 13 ] this transition area of roughly 1-metre-deep carbon dioxide (CO2) ice—between the scarps of the thick polar ice layer and the permafrost—is where clusters of the apparent geyser systems are located.
The seasonal frosting and defrosting of carbon dioxide ice results in the appearance of a number of features, such as dark dune spots with spider-like rilles or channels below the ice. [ 3 ] Spider-like radial channels are carved between the ground and the carbon dioxide ice, giving the terrain the appearance of spider webs; pressure accumulating in their interior then ejects gas and dark basaltic sand or dust, which is deposited on the ice surface, forming dark dune spots. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] This process is rapid, observed to happen in the space of a few days, weeks or months – a growth rate rather unusual in geology, especially for Mars. [ 14 ] It seems, however, that multiple years would be required to carve the larger spider-like channels. [ 2 ] There is no direct data on these features other than images taken in the visible and infrared spectra.
The geological features informally called dark dune spots and spiders were separately discovered in images acquired by the MOC camera on board the Mars Global Surveyor during 1998–1999. [ 15 ] [ 16 ] At first they were generally thought to be unrelated features because of their appearance, so from 1998 through 2000 they were reported in separate research publications ( [ 16 ] [ 17 ] and [ 18 ], respectively). "Jet" or "geyser" models were proposed and refined from 2000 onwards. [ 4 ] [ 5 ]
The name 'spiders' was coined by Malin Space Science Systems personnel, the developers of the camera. One of the first and most interesting spider photos was found by Greg Orme in October 2000. [ 19 ] The unusual shape and appearance of these 'spider webs' and spots caused much speculation about their origin. The first years of surveillance showed that 70% of the spots reappear at exactly the same place in subsequent Martian years, and a preliminary statistical study of data obtained between September 1999 and March 2005 indicated that dark dune spots and spiders are related phenomena, functions of the cycle of carbon dioxide (CO2) condensing as "dry ice" and sublimating. [ 20 ]
It was also initially suggested that the dark spots were simply warm patches of bare ground, but thermal imaging during 2006 revealed that these structures were as cold as the ice that covers the area, [ 9 ] [ 20 ] indicating they were a thin layer of dark material lying on top of the ice and kept chilled by it. [ 9 ] However, soon after their first detection, the features were discovered to be negative topographical features – i.e. radial troughs or channels of what are today thought to be geyser-like vent systems. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The geysers' two most prominent features (dark dune spots and spider channels) appear at the beginning of the Martian spring on dune fields covered with carbon dioxide ice (CO2, or 'dry ice'), mainly at the ridges and slopes of the dunes; by the beginning of winter, they disappear. The dark spots are generally round; on slopes they are usually elongated, sometimes with streams—possibly of water—that accumulate in pools at the bottom of the dunes. [ 21 ] [ 22 ] Dark dune spots are typically 15 to 46 metres (50 to 150 feet) wide and spaced several hundred feet apart. [ 9 ] The size of the spots varies: some are as small as 20 m across [ 16 ] [ 23 ] (though the smallest size seen is limited by imaging resolution), and they can grow and coalesce into formations several kilometres wide.
A spider feature, viewed individually, forms a round, lobed structure reminiscent of a spider web, radiating outward from a central point. [ 24 ] Its radial patterns are shallow channels or ducts in the ice formed by the flow of sublimation gas toward the vents. [ 3 ] [ 4 ] An entire spider channel network is typically 160–300 m across, although there are large variations. [ 2 ]
Each geyser's characteristic form appears to depend on a combination of factors such as local fluid or gas composition and pressure, ice thickness, underlying gravel type, local climate and meteorological conditions. [ 14 ] The geysers' boundary does not seem to correlate with any other property of the surface such as elevation, geological structure, slope, chemical composition or thermal properties. [ 6 ] The geyser-like systems produce low-albedo spots, fans and blotches, with small radial spider-like channel networks most often associated with their location. [ 2 ] [ 14 ] [ 20 ] At first the spots seem grey, but later their centres darken as they gradually become covered with dark ejecta, [ 18 ] thought to be mainly basaltic sand. [ 17 ] Not all dark spots observed in early spring are associated with spider landforms; however, a preponderance of dark spots and streaks on the cryptic terrain are associated with the appearance of spiders later in the season. [ 2 ]
Time-lapse imaging performed by NASA confirms the apparent ejection of dark material following the radial growth of spider channels in the ice. [ 9 ] Time-lapse imaging of a single area of interest also shows that small dark spots generally indicate the position of spider features not yet visible, and that spots expand significantly, with dark fans emanating from some of them that increase in prominence and develop clear directionality indicative of wind action. [ 2 ]
Some branching ravines modify the crust, some destroy it, and others create it, in a dynamic near-surface process that extensively reworks the terrain by creating and destroying surface layers. Mars thus seems to have a dynamic process for recycling its near-surface crust of carbon dioxide. The growth process is rapid, happening in the space of a few days, weeks or months – a growth rate rather unusual in geology, especially for Mars. [ 14 ] A number of geophysical models have been investigated to explain the development of the various colors and shapes of these geysers on the southern polar ice cap of Mars.
The strength of the eruptions is estimated to range from simple upsurges to high-pressure eruptions at speeds of 160 kilometres per hour (99 mph) or more, [ 4 ] [ 25 ] carrying dark basaltic sand and dust plumes high aloft. [ 9 ] The currently proposed models for the forces powering the geyser-like systems are discussed next.
The surface atmospheric pressure on Mars varies annually over ranges of about 6.7–8.8 mbar and 7.5–9.7 mbar, depending on location, and daily over about 6.4–6.8 mbar. Because of these pressure changes, subsurface gases expand and contract periodically, causing a downward gas flow when atmospheric pressure increases and expulsion when it decreases. [ 7 ] This cycle was first quantified from measurements of the surface pressure, which varies annually with an amplitude of 25%. [ 2 ]
This model proposes downward gas flow while atmospheric pressure increases and upward flow while it decreases. During the defrosting process, ices (clathrates) may partly migrate into the soil and partly evaporate. [ 7 ] [ 14 ] Such locations may be connected with the formation of dark dune spots and the arms of spiders, which act as gas travel paths. [ 7 ]
Some teams propose dry venting of carbon dioxide (CO2) gas and sand, occurring between the ice and the underlying bedrock. A CO2 ice slab is known to be virtually transparent to solar radiation: 72% of solar energy incident at 60 degrees off vertical reaches the bottom of a 1 m thick layer. [ 4 ] [ 27 ] In addition, separate teams from Taiwan and France measured the ice thickness in several target areas and found that the greatest thickness of the CO2 frost layer in the geysers' area is about 0.76–0.78 m, supporting the geophysical model of dry venting powered by sunlight. [ 8 ] [ 28 ] [ 29 ] As the seasonal CO2 ice receives enough solar energy in southern spring, it starts to sublimate from the bottom. [ 2 ] This vapor accumulates under the slab, rapidly increasing the pressure until the slab erupts. [ 6 ] [ 9 ] [ 14 ] [ 30 ] [ 31 ] High-pressure gas flows through at speeds of 160 kilometres per hour (99 mph) or more; [ 4 ] [ 25 ] under the slab, the gas erodes the ground as it rushes toward the vents, snatching up loose particles of sand and carving the spidery network of grooves. [ 8 ] The dark material falls back to the surface and may be carried upslope by wind, creating dark wind-streak patterns on the ice cap. [ 20 ] [ 25 ] This model is consistent with past observations. [ 25 ] [ 32 ] The location, size and direction of these fans are useful for quantifying seasonal winds and sublimation activity. [ 26 ]
It is clear that sublimation at the base of the seasonal ice cap is more than capable of generating a substantial overpressure, [ 2 ] four orders of magnitude higher than the ice overburden pressure and five orders of magnitude higher than the atmospheric pressure discussed above. [ 2 ]
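For a rough sense of scale (a back-of-the-envelope estimate, not taken from the cited sources, assuming a CO2 ice density of roughly 1600 kg/m³ and Martian surface gravity g ≈ 3.71 m/s²), the overburden pressure of a 0.78 m thick slab is

$$ P_{\text{overburden}} = \rho g h \approx 1600\,\text{kg m}^{-3} \times 3.71\,\text{m s}^{-2} \times 0.78\,\text{m} \approx 4.6\,\text{kPa} \approx 46\,\text{mbar}, $$

roughly an order of magnitude above the ~7 mbar surface atmospheric pressure, consistent with the relative magnitudes quoted above.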
The observation that a few dark spots form before sunrise, with significant spot formation occurring immediately following sunrise, supports the notion that the system is powered by solar energy. [ 33 ] Eventually the ice is completely removed and the dark granular material is back on the surface; [ 33 ] the cycle repeats many times. [ 20 ] [ 34 ] [ 35 ]
Laboratory experiments performed in 2016 were able to trigger dust eruptions from a layer of dust inside a CO2 ice slab under Martian atmospheric conditions, lending support to the CO2 jet and fan production model. [ 26 ]
Data obtained by the Mars Express orbiter made it possible in 2004 to confirm that the southern polar cap consists of a slab of CO2 ice averaging 3 kilometres (1.9 mi) in thickness, [ 36 ] with varying content of frozen water depending on latitude: the bright polar cap itself is a mixture of 85% CO2 ice and 15% water ice. [ 37 ] The second part comprises steep slopes known as 'scarps', made almost entirely of water ice, that fall away from the polar cap to the surrounding plains. [ 37 ] The transition area between the scarps and the permafrost is the 'cryptic region', where clusters of geysers are located.
This model explores the possibility of active water-driven erosive structures, in which soil and water derived from the shallow sub-surface layer are expelled upward by CO2 gas through fissures, eroding joints to create spider-like radiating tributaries capped with mud-like material and/or ice. [ 14 ] [ 38 ] [ 39 ] [ 40 ]
A European team proposes that the features could be a sign that a non-solar energy source is responsible for the jets, for instance a subsurface heat wave. [ 14 ] [ 41 ] This model is difficult to reconcile with the evidence collected in the form of thermal-emission (infrared) imaging, which shows that the fans, spots and blotches are produced by the expulsion of cold fluids or gases. [ 31 ] [ 42 ]
Michael C. Malin, a planetary scientist and designer of the cameras on the Mars Global Surveyor that obtained the earliest images of the CO2 geyser phenomenon, studies the images acquired of specific areas and tracks their changes over a period of a few years. In 2000, he modelled the dynamics of the fans and spots as a complex process of carbon dioxide (CO2) and water sublimation and re-precipitation. The typical pattern of defrosting proceeds from the initiation of small, dark spots, typically located at the margins of dunes; these spots individually enlarge and eventually coalesce. [ 34 ] The pattern of enlargement is distinct and characteristic: a dark nuclear spot enlarges slowly, often with a bright outer zone or 'halo'. As these are progressive, centripetal phenomena, each location of the light zone is overtaken by the expanding dark zone. Although initially developed along dune margins, spot formation quickly spreads onto and between dunes. As spring progresses, fan-shaped tails ('spiders') develop from the central spot. Defrosting occurs as the low-albedo polar sand heats beneath an optically thin layer of frost, causing the frost to evaporate. This is the dark nucleus of the spots seen on dunes. As the vapor moves laterally, it encounters cold air and precipitates, forming the bright halo. This precipitated frost is again vaporized as the uncovered zone of sand expands; the cycle repeats many times. [ 20 ] [ 34 ] [ 35 ]
While the European Space Agency (ESA) has not yet formulated a theory or model, it has stated that the process of frost sublimation is not compatible with a few important features observed in the images, and that the location and shape of the spots are at odds with a physical explanation: specifically, the channels appear to radiate downhill as much as uphill, seemingly defying gravity. [ 43 ]
A team of Hungarian scientists proposes that the dark dune spots and channels may be colonies of photosynthetic Martian microorganisms which over-winter beneath the ice cap; as sunlight returns to the pole during early spring, light penetrates the ice, and the microorganisms photosynthesize and heat their immediate surroundings. A pocket of liquid water, which would normally evaporate instantly in the thin Martian atmosphere, is trapped around them by the overlying ice. As this ice layer thins, the microorganisms show through as grey; when it has completely melted, they rapidly desiccate and turn black, surrounded by a grey aureole. [ 22 ] [ 44 ] [ 45 ] [ 46 ] The Hungarian scientists argue that even a complex sublimation process is insufficient to explain the formation and evolution of the dark dune spots in space and time. [ 23 ] [ 47 ] Since their discovery, science fiction writer Arthur C. Clarke promoted these formations as deserving of study from an astrobiological perspective. [ 19 ]
A multinational European team suggests that if liquid water is present in the spiders' channels during their annual defrost cycle, the structures might provide a niche where certain microscopic life forms could have retreated and adapted while sheltered from UV solar radiation. [ 3 ] British and German teams also consider the possibility that organic matter , microbes , or even simple plants might co-exist with these inorganic formations, especially if the mechanism includes liquid water and a geothermal energy source. [ 14 ] [ 48 ] However, they also remark that the majority of geological structures may be accounted for without invoking any organic "life on Mars" hypothesis. [ 14 ] (See also: Life on Mars .)
There is no direct data on these features other than images taken in the visible and infrared spectra, and development of the Mars Geyser Hopper lander is under consideration to study the geyser-like systems; [ 49 ] [ 50 ] it has been neither formally proposed nor funded. | https://en.wikipedia.org/wiki/Geysers_on_Mars |
Gezel is a hardware description language that implements the Finite-State Machine + Datapath (FSMD) model. [ 1 ] The tools included with Gezel allow simulation and cosimulation, as well as compilation into VHDL code. Gezel can be extended through library blocks written in C++.
| https://en.wikipedia.org/wiki/Gezel |
In German-speaking countries, the miner's toolset is known as Gezähe (derived from gizouuun, zu zawen, gezawen – to be usable, advantageous [ 1 ] ), formerly also shortened to Gezäh. It is the set of personally owned mining tools and equipment needed by the miner in his daily work.
In coal mining in central Europe during the 19th and 20th centuries, every miner had his own set of tools. So that they could not be stolen, before the end of his shift they were either locked in a tool chest ( Gezähekiste ) or threaded onto a tool ring ( Gezähering ) which was then locked. To that end, all tools had a hole or eyelet. Tools that were not part of a miner's personal equipment and were only needed now and then, could be issued to the miner in the tool store ( Gezähekammer or Magazin ) in return for a token ( Gezähemarke ). Most tools were marked with a number, which was either stamped or welded to the tool. | https://en.wikipedia.org/wiki/Gezähe |
Ghabrivirales is an order of double-stranded RNA viruses. It is the only order in the class Chrysmotiviricetes. The name of the class is a portmanteau of the member families Chrysoviridae, Megabirnaviridae, and Totiviridae, together with -viricetes, the suffix for a virus class. The name of the order derives from Said Ghabrial, a pioneering researcher who studied viruses in this order, and -virales, the suffix for a virus order. [ 1 ]
The order contains three suborders and 19 families, listed hereafter (-virineae denotes suborders, and -viridae denotes families): [ 2 ]
| https://en.wikipedia.org/wiki/Ghabrivirales |
The Ghana Atomic Energy Commission (GAEC) is the state organization responsible for overseeing the use of nuclear energy in Ghana. It is similar in aim to the Ghana Nuclear Society (GNS), the difference being that the GNS is a nonprofit organisation, whereas the GAEC is a statutory body established by the parliament of Ghana. Its primary objectives were set out by parliamentary Act 588 and involve investigating the use of nuclear energy for Ghana and supporting research and development both in Ghana and abroad. [ 1 ]
The origins of the Ghana Atomic Energy Commission (GAEC) date back to 1952, when work with radioisotopes began in Ghana; around that time, radiostrontium was used in experiments. In 1958, the Department of Physics of the University College of the Gold Coast (now the University of Ghana, Legon) started a radioactive fallout monitoring service on behalf of the Defence Ministry. By 1961, work with radioisotopes had gained ground in most government institutions, prompting the establishment of a Radioisotope Unit. [ 2 ]
It is responsible for the preservation, maintenance and enhancement of nuclear knowledge in Ghana and Africa through the provision of high-quality teaching, research, entrepreneurship training, services and the development of postgraduate programmes in the nuclear sciences and technology.
It authorizes, inspects and controls all activities and practices involving sealed radiation sources, ionizing radiation and other sources, radioactive materials and x-rays used in hospitals in Ghana; implements a safety culture by providing adequate human resource development in radiation and waste safety for management and operating organizations; and conducts research and technical services in radiation and waste safety. | https://en.wikipedia.org/wiki/Ghana_Atomic_Energy_Commission |
The Ghana Institution of Engineering (GhIE) is the professional body responsible for licensing practicing engineers in Ghana. It was founded in 1968 as successor to the Ghana Group of Professional Engineers. The Institution derives its authority from the Engineering Council Act 2011 (Act 819) and the Professional Bodies Registration Decree NRCD 143 of 1973. [ 1 ] It regulates the activities of engineers and engineering firms in Ghana, sets standards in Ghana's engineering sector, and organises professional examinations for engineers. [ 2 ]
As part of its mission, the GhIE aims to: [ 1 ]
Membership categories include Fellows, Members, Associates, Graduate Members, Affiliates and Technicians.
| https://en.wikipedia.org/wiki/Ghana_Institution_of_Engineers |
The Ghana Nuclear Society (GNS) is a nonprofit organization that advocates for the introduction of nuclear energy in Ghana. Its head office is located at the Ghana Atomic Energy Commission (GAEC) in Accra. With the establishment of the GNS, Ghana joined the ranks of countries with national nuclear societies. The current national president is Prof. John Justice Fletcher. The society is not for science-inclined persons alone.
The Ghana Nuclear Society received its certificate of incorporation on 13 May 2008. The society, which operates under the motto "Nuclear for Sustainable Energy Development," has an eight-member Advisory Panel that consults with the Board of Directors, which is made up of 13 persons and four members from the National and Student Chapter Executives.
The society has created public information programs on nuclear matters, and it has produced seminars, educational outreach programs and interactive media presentations on local radio and television stations. It also publishes a newsletter that outlines issues relating to nuclear energy.
It is planning educational programs at the SAMBEL Academy and the GAEC, [ 1 ] and at primary schools located near the Graduate School of Nuclear and Allied Sciences [ 2 ] at Atomic, Kwabenya.
Advisory Panel
1) Prof. D. Adzei Bekoe - (Chairman, Council of State & Chairman, GAEC Board)
2) Prof. S.K.A. Danso - UG
3) Prof. Ebenezer Laing - UG
Board Of Directors
1) Prof J.J. Fletcher – President, GNS
2) Prof. B.J.B Nyarko – 1st Vice President, GNS
3) Dr. E. O. Darko – General Secretary, GNS
4) Prof. E.H.K. Akaho - Director General, GAEC
5) Prof. (Mrs.) Aba Andam - KNUST
6) Ms. Elizabeth Ohene - Minister of State in charge of Tertiary Education
7) Prof. J.H. Amuasi - Coordinator, SNAS-UG
8) Mr. Samuel Manteaw - Faculty of Law, UG
9) Prof. G.K. Tetteh - Physics Dept. U. G
10) Prof. Cyril Schandorf - HOD, Nuclear Safety and Security
11) Prof. G.Y.P. Klu - HOD, NUAG
12) Prof. (Mrs.) Victoria Appiah- Programme Coordinator, RAPR
13) Prof. P. O. Yeboah - Programme Coordinator, ENVP
14) Nana Prof. Kwesi Ayensu – CSIR
15) Prof. E. K. Osae – Physics Department, UG
16) Prof. E. K. Adjei – Physics Department, UG
17) Mr. Erwin Alhassan – Student Director
National Executives
1) President - Prof J.J. Fletcher (HOD, Nuclear Sciences and Applications, SNAS)
2) 1st Vice President - Prof. B.J.B Nyarko (Director, NNRI - GAEC)
3) 2nd Vice President – Dr. Isaac Newton Acquah (IAEA)
4) Coordinator for International Affairs - Prof. Yaw Serfor-Armah (Deputy Director General, GAEC)
5) General Secretary – Dr. E. O. Darko (Deputy Director, RPI)
6) Vice General Secretary – Mr. E.T. Glover (NNRI-GAEC)
7) National Organizer - Mr. I.J. Kwame Aboh (HOD, Physics Dept – GAEC)
8) Deputy National Organiser - Dr. S.B. Dampare (Ag. Reactor Manager – GAEC)
9) Treasurer - Dr. (Mrs.) Rose Boatin (Food Science Department, GAEC)
Co-ordinators For The Various Divisions
1) Dr. G.K. Banini – Accelerator Applications
2) Prof. A.W. K. Kyere / Mr. Emmanuel Kwaku Nani – Biology and Medicine
3) Prof. J. J. Fletcher – Education and Training & IRPS
4) Prof. P.O. Yeboah – Nuclear and Environmental Protection
5) Mr. E.T Glover – Fuel Cycle and Waste Management
6) Dr. Shiloh Osae – Isotopes and Radiation
7) Dr. K.A. Danso – Materials Science and Technology
8) Prof. Kwesi Ayensu – Mathematics and Computation
9) Prof. A.A. Golow – Nuclear Criticality Safety
10) Dr. David Kuwornu – Nuclear Power Plants
11) Prof. Y. Serfor-Armah – Nuclear and Radiochemistry
12) Mr. S. Anim Sampong – Research Reactor Operations
13) Dr. E. O. Darko – Radiation Protection and Shielding & SRP
14) Prof. B.J.B Nyarko – Reactor Physics and Engineering
15) Prof. E.H. K. Akaho – Thermal Hydraulics
16) Prof. J.N. Tabiri – Nuclear Agriculture and Radiation Processing Technology
17) Prof. G. Emi-Reynolds – Radiation Safety and Security of Sources & IRPA
18) Mrs. Margaret Ahiadeke – Nuclear Law & Legislation
19) Prof. Cyril Schandorf – Intl. Org. of Medical Physicists
20) Prof. E.K. Agyei – Nuclear Dating and Archeometry
21) Dr. (Mrs.) Mary Boadu – CT and Mammography
Board Of Governors Of The National Student Chapter
1) Prof. J. J. Fletcher
2) Prof. Yaw Serfor-Armah
3) Dr. (Mrs.) Mary Boadu
4) Mr. Erwin Alhassan – President of the Chapter
5) Prof. P. O. Yeboah – Advisor to the Board of Governors
National Student Chapter Executives
1) President - Erwin Alhassan (Nuclear Engineering)
2) Vice-President - Anita Osei-Tutu (Environmental Protection)
3) General Secretary - Nana Ansah Adoo (Nuclear Engineering)
4) Deputy General Secretary - Alfred K. Anim (Nuclear and Radiochemistry)
5) Organizing Secretary - Andrew Sarkodie Appiah (Nuclear Agriculture)
6) Deputy Organizing Secretary - Frank Quarshie (Applied Nuclear Physics)
7) Publicity Secretary - Christian Adjei Amevi (Applied Nuclear Physics)
8) Deputy Publicity Secretary - David Kpeglo (Radiation Protection)
9) Treasurer - Matilda Owusu-Ansah (Nuclear Agriculture)
10) Deputy Treasurer - Christopher Yaw Bansah (Nuclear Engineering)
11) National Student Coordinator - Godfred Odame Duodu (Nuclear and Radiochemistry)
12) Deputy National Student Coordinator - Vincent Yao Agbodemegbe (Nuclear Engineering)
13) Kumasi Technical University President - Ebenezer Blay (Chemical Engineering)
Officers for the GNS Newsletter
Members of Editorial Board
1) Prof. J. J. Fletcher – Editor
2) Prof. Cyril Schandorf – Member
3) Dr. Shiloh Osae – Member
4) Prof. J.N. Tabiri / Dr. Harry M. Amoatey – Member
5) Dr. K.E. Danso – Member
6) Mr. O.C. Oppong – Member
7) Mr. Owiredu Gyampo – Member
8) Mr. Stephen Yamoah – Member
9) Mr. Ahialey Kingsford – Member
Production Team
1) Mr. Richard Della
2) Mr. Daniel N. P. Dadebo
3) Mr. Godfred Odame Duodu
4) Mr. Amartey Edmund Okoe
5) Mr. Agbemava Sylvester Ebo
6) Ms. Ekua Mensimah
7) Mr. James Coffie
To enhance public acceptance and awareness of the nuclear power option, the society organized a three-day conference under the theme "Energy Security for Accelerated Development of the African Region". The conference aimed to promote the acceptance of nuclear power in Africa by bringing together nuclear power vendors, reactor manufacturers, scientists and experts in the field to share knowledge with those in Africa. | https://en.wikipedia.org/wiki/Ghana_Nuclear_Society |
Ghemical is a computational chemistry software package written in C++ and released under the GNU General Public License. [ 3 ] The program has a graphical user interface based on GTK+ 2 and supports quantum mechanics and molecular mechanics models, with geometry optimization, molecular dynamics, and a large set of visualization tools. Ghemical relies on external code to provide the quantum-mechanical calculations: MOPAC provides the semi-empirical MNDO, MINDO, AM1, and PM3 methods, and MPQC provides methods based on Hartree–Fock calculations.
The chemical expert system is based on OpenBabel, which provides basic functionality such as atom typing, rotamer generation and import/export of chemical file formats.
| https://en.wikipedia.org/wiki/Ghemical |
Gheorghe Spacu (December 5, 1883 – July 23, 1955) was a Romanian inorganic chemist.
Born in Iași, he attended the city's National College from 1894 to 1901. He subsequently enrolled in the physics and chemistry section of the sciences faculty at the University of Iași. [ 1 ] There, his professors included Petru Poni (inorganic chemistry), Vasile Buțureanu (mineralogy and crystallography), Anastasie Obregia (organic chemistry) and Dragomir Hurmuzescu (electrochemistry). [ 2 ] Upon graduating in 1905, he went to deepen his studies at the universities of Vienna and then Berlin. He returned in 1907, when he began working as an assistant in the inorganic chemistry laboratory of Neculai Costăchescu, and was promoted to head of operations in 1916. That year, he received a doctorate in chemistry, having submitted a thesis on iron compounds. After Costăchescu, he was the second individual to receive a doctorate in chemistry from a Romanian university. Subsequently, he was named associate professor at Iași University. [ 3 ]
In 1919, following the union of Transylvania with Romania and the establishment of the University of Cluj, he was invited to teach as associate professor of inorganic and analytic chemistry. He was promoted to full professor in 1922. While at Cluj, he put together a school of chemistry, founding and supplying laboratories for students and researchers and advising sixteen doctoral students. Among those he trained were Raluca Ripan, Ilie G. Murgulescu, Petru Spacu and Coriolan Drăgulescu. Petru, his only son, worked in his father's department after obtaining a doctorate, later moving to Bucharest. Of Gheorghe Spacu's 274 scientific articles, around two-thirds were written while he was at Cluj. These were generally published in the bulletin of the local scientific society, of which he was a founding member in 1921. [ 3 ] He was elected a corresponding member of the Romanian Academy in 1927, and was promoted to titular member in 1935. [ 4 ] [ 5 ] He was assistant dean of the science faculty in 1923–1924, dean in 1924–1925, rector of the university in 1925–1926 and assistant rector in 1926–1927. [ 6 ]
In 1939, he was asked to join the inorganic and analytic chemistry department of the University of Bucharest , where he began work in October 1940. As before, he focused on establishing laboratories and training chemists. Maria Brezeanu was one of his doctoral students. During his Bucharest period, his work appeared in the bulletin of the Academy's scientific section. He received three awards from the communist authorities : the state prize (1952 and 1954) and the order of labor (1953). He continued working until his death in 1955. [ 4 ] | https://en.wikipedia.org/wiki/Gheorghe_Spacu |
Gheorghe Țițeica ( Romanian pronunciation: [ˈɡe̯orɡe t͡siˈt͡sejka] ; 4 October 1873 – 5 February 1939), publishing as George or Georges Tzitzéica, was a Romanian mathematician who made important contributions in geometry. He is recognized as the founder of the Romanian school of differential geometry.
He was born in Turnu Severin , western Oltenia , the son of Anca (née Ciolănescu) and Radu Țiței, originally from Cilibia , in Buzău County . His name was registered as Țițeica–a combination of his parents' surnames. [ 1 ] He showed an early interest in science, as well as music and literature. Țițeica was an accomplished violinist, having studied music since childhood: music was to remain his hobby. While studying at the Carol I High School in Craiova , he contributed to the school's magazine, writing the columns on mathematics and studies of literary critique. After graduation in 1892, [ 1 ] he obtained a scholarship at the preparatory school in Bucharest , where he also was admitted as a student in the Mathematics Department of University of Bucharest 's Faculty of Sciences. His teachers there included David Emmanuel , Spiru Haret , Constantin Gogu, Dimitrie Petrescu, and Iacob Lahovary . In June 1895, he graduated with a Bachelor of Mathematics . [ 2 ] [ 3 ]
In the summer of 1896, after a stint as a substitute teacher at the Bucharest theological seminary, Țițeica passed his exams for promotion to a secondary school position, becoming teacher in Galați . [ 2 ]
In 1897, on the advice of teachers and friends, Țițeica continued his studies at a preparatory school in Paris. Among his classmates were Henri Lebesgue and Paul Montel. After ranking first in his class and earning a second undergraduate degree from the Sorbonne in 1897, he was admitted to the École Normale Supérieure, where he took classes with Paul Appell, Gaston Darboux, Édouard Goursat, Charles Hermite, Gabriel Koenigs, Émile Picard, Henri Poincaré, and Jules Tannery. [ 4 ] Țițeica chose Darboux as his thesis advisor; after working for two years on his doctoral dissertation, titled Sur les congruences cycliques et sur les systèmes triplement conjugués, he defended it on 30 June 1899 before a board of examiners consisting of Darboux (as chair), Goursat, and Koenigs. [ 4 ] [ 2 ] [ 5 ]
Upon his return to Romania, Țițeica was appointed assistant professor at the University of Bucharest. He was promoted to full professor on 3 May 1903, retaining this position until his death in 1939. He also taught mathematics at the Polytechnic University of Bucharest, starting in 1928. [ 6 ] In 1913, at age 40, Țițeica was elected a permanent member of the Romanian Academy, replacing Spiru Haret. Later he was appointed to leading roles: vice-president of the scientific section in 1922, vice-president in 1928, and secretary general in 1929. Țițeica was also president of the Romanian Mathematical Society, of the Romanian Association of Science, and of the Association for the Development and Spreading of Science. He was a vice-president of the Polytechnics Association of Romania and a member of the High Council of Public Teaching. [ 2 ]
Țițeica was the president of the geometry section at the International Congress of Mathematicians (ICM) in Toronto (1924), Zürich (1932), and Oslo (1936). With 5 invited ICM talks (1908, 1912, 1924, 1932, and 1936), he is in a tie for 7th place among mathematicians with the most invited ICM talks .
He was elected correspondent of the Association of Sciences of Liège and doctor honoris causa of the University of Warsaw . [ 2 ] In 1926, 1930, and 1937 he gave a series of lectures as titular professor at the Faculty of Sciences in Sorbonne . He also gave many lectures at the Free University of Brussels (1926) and the University of Rome (1927). [ 7 ]
His Ph.D. students include Dan Barbilian [ 4 ] and Grigore Moisil . [ 5 ]
Țițeica wrote about 400 articles, of which 96 are scientific papers, most addressing problems of differential geometry. His bibliography includes over 200 published papers and books, which appeared in many editions. [ 8 ] Carrying on the research of the German-born American geometer Ernest Wilczynski, Țițeica discovered a new class of surfaces and a new class of curves which now carry his name. His contributions represent the beginning of a new chapter in mathematics, namely affine differential geometry. [ 3 ] He also studied webs in n-dimensional space, defined through Laplace equations. He investigated what is now known as the Tzitzeica equation, which was further generalized by Robin Bullough and Roger Dodd (the Tzitzéica–Bullough–Dodd equation).
He is also known for a result on the geometry of circles and triangles in the plane, referred to as Țițeica's 5 lei coin problem [ ro ] , a problem he proposed (and solved) at the Gazeta Matematică [ ro ] contest in Galați in 1908. [ 9 ] The problem was posed independently by Roger Arthur Johnson in 1916, and the resulting configuration is also referred to as the Johnson circles .
Țițeica married Florence Thierin (1882–1965) and the couple had three children — Radu (1905–1987), Gabriela (1907–1987), and Șerban (1908–1985) — all of whom pursued careers in academia; [ 10 ] [ 11 ] the youngest one became a renowned quantum physicist. The family lived in a 19th-century house on Dionisie Lupu Street, close to Lahovari Plaza, in Sector 1 of Bucharest; Țițeica moved there around 1913, when he was elected to the academy. [ 11 ] A commemorative plaque was affixed to the house by the city administration in 1998. He died in 1939 in Bucharest [ 2 ] and was buried in the city's Bellu Cemetery . [ 12 ]
A high school in Drobeta-Turnu Severin [ 13 ] and a gymnasium in Craiova [ 14 ] bear his name, and so does a street in Sector 2 of Bucharest. The Romanian Academy offers an annual "Gheorghe Țițeica Prize" for achievements in mathematics. [ 15 ] The logo of the 40th International Mathematical Olympiad , held in Bucharest in 1999, [ 16 ] was inspired by Țițeica's 5 lei coin problem. [ 17 ]
In 1961, Poșta Română issued a 1.55 lei stamp in his honor ( Scott #1415); he also figures on a 2 lei stamp from 1945 commemorating the founding of Gazeta Matematică in 1895 ( Scott #596). | https://en.wikipedia.org/wiki/Gheorghe_Țițeica |
Ghost imaging (also called "coincidence imaging", "two-photon imaging" or "correlated-photon imaging") is a technique that produces an image of an object by combining information from two light detectors: a conventional, multi-pixel detector that does not view the object, and a single-pixel (bucket) detector that does view the object. [ 1 ] Two techniques have been demonstrated. A quantum method uses a source of pairs of entangled photons, each pair shared between the two detectors, while a classical method uses a pair of correlated coherent beams without exploiting entanglement. Both approaches may be understood within the framework of a single theory. [ 2 ]
The first demonstration of ghost imaging, performed by T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko in 1995, was based on quantum correlations between entangled photon pairs. [ 3 ] One photon of each pair strikes the object and then the bucket detector, while the other follows a different path to a (multi-pixel) camera. The camera is constructed to record pixels only from entangled photon pairs that hit both the bucket detector and the camera's image plane (as opposed to pairs where one photon hits the image plane but the other misses the bucket detector, which are not registered). A large number of registered entangled pairs gradually forms a full image.
Later experiments indicated that the correlations between the light beam that hits the camera and the beam that hits the object may be explained by purely classical physics. [ 4 ] If quantum correlations are present, the signal-to-noise ratio of the reconstructed image can be improved. In 2009, 'pseudothermal ghost imaging' and 'ghost diffraction' were demonstrated by implementing the 'computational ghost-imaging' scheme, [ 5 ] which relaxed the need to invoke quantum-correlation arguments in the pseudothermal-source case. [ 6 ]
It has more recently been shown that the principles of compressed sensing can be directly utilized to reduce the number of measurements required for image reconstruction in ghost imaging. [ 7 ] This technique allows an N-pixel image to be produced with far fewer than N measurements and may have applications in LIDAR and microscopy.
The U.S. Army Research Laboratory (ARL) developed remote ghost imaging in 2007 with the goal of applying the advanced technology to the ground, satellites and unmanned aerial vehicles. [ 8 ] Ronald E. Meyers and Keith S. Deacon of ARL received a patent in 2013 for their quantum imaging technology called "System and Method for Image Enhancement and Improvement". [ 9 ] The researchers received the Army Research and Development Achievement Award for outstanding research in 2009, with the first ghost image of a remote object. [ 10 ]
A simple example clarifies the basic principle of ghost imaging. [ 11 ] Imagine two transparent boxes: one that is empty and one that has an object within it. The back wall of the empty box contains a grid of many pixels (i.e. a camera), while the back wall of the box with the object is a large single-pixel (a bucket detector). Next, shine laser light into a beamsplitter and reflect the two resulting beams such that each passes through the same part of its respective box at the same time. For example, while the first beam passes through the empty box to hit the pixel in the top-left corner at the back of the box, the second beam passes through the filled box to hit the top-left corner of the bucket detector.
Now imagine moving the laser beam around in order to hit each of the pixels at the back of the empty box, meanwhile moving the corresponding beam around the box with the object. While the first light beam will always hit a pixel at the back of the empty box, the second light beam will sometimes be blocked by the object and will not reach the bucket detector. A processor receiving a signal from both light detectors only records a pixel of an image when light hits both detectors at the same time. In this way, a silhouette image can be constructed, even though the light going towards the multi-pixel camera did not touch the object.
In this simple example, the two boxes are illuminated one pixel at a time. However, using quantum correlation between photons from the two beams, the correct image can also be recorded using complex light distributions. Also, the correct image can be recorded using only the single beam passing through a computer-controlled light modulator to a single-pixel detector. [ 6 ]
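The correlation at the heart of both the coincidence and the computational schemes can be demonstrated numerically. The sketch below is a toy simulation, not code from the cited papers: the object, array sizes and pattern count are arbitrary choices, and noise and optics are ignored. It reconstructs an object from single-pixel 'bucket' values by correlating them with known random illumination patterns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 binary object: a square silhouette.
N = 32
obj = np.zeros((N, N))
obj[10:22, 10:22] = 1.0

M = 4000  # number of random illumination patterns (measurements)

# Known random illumination patterns I_m(x, y), e.g. from a light modulator.
patterns = rng.random((M, N, N))

# Bucket detector: one number per pattern, the total light passing the object.
bucket = (patterns * obj).sum(axis=(1, 2))

# Correlate bucket values with the patterns:
#   G(x, y) = <B * I(x, y)> - <B> <I(x, y)>
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)

# 'recon' now approximates the object's silhouette up to scale and noise;
# increasing M sharpens the image, while compressed sensing (discussed
# above) can recover the image from far fewer patterns.
```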
As of 2012, ARL scientists had developed a diffraction-free light beam, also known as Bessel-beam illumination. In a paper published February 10, 2012, the team outlined a feasibility study of virtual ghost imaging using the Bessel beam, to address adverse conditions with limited visibility, such as cloudy water, jungle foliage, or imaging around corners. [ 10 ] [ 12 ] Bessel beams produce concentric-circle patterns. When the beam is blocked or obscured along its trajectory, the original pattern eventually reforms to create a clear picture. [ 13 ]
The spontaneous parametric down-conversion (SPDC) process provides a convenient source of entangled-photon pairs with strong spatial correlations. [ 14 ] Such heralded single photons can be used to achieve a high signal-to-noise ratio, virtually eliminating background counts from the recorded images. By applying principles of image compression and associated image reconstruction, high-quality images of objects can be formed from raw data with an average of fewer than one detected photon per image pixel. [ 15 ]
Infrared cameras that combine low noise with single-photon sensitivity are not readily available. Infrared illumination of a vulnerable target with sparse photons can be combined with a camera counting visible photons through the use of ghost imaging with correlated photons of significantly different wavelengths, generated by a highly non-degenerate SPDC process. Infrared photons with a wavelength of 1550 nm illuminate the target and are detected by an InGaAs/InP single-photon avalanche diode. The image data are recorded from the coincidently detected, position-correlated visible photons with a wavelength of 460 nm using a highly efficient, low-noise, photon-counting camera. Light-sensitive biological samples can thereby be imaged. [ 16 ]
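As a consistency check on the wavelengths quoted here (a short calculation of our own, not a figure from the cited source), energy conservation in SPDC requires the pump, signal and idler wavelengths to satisfy

$$ \frac{1}{\lambda_p} = \frac{1}{\lambda_s} + \frac{1}{\lambda_i}, \qquad \lambda_p = \left( \frac{1}{460\,\text{nm}} + \frac{1}{1550\,\text{nm}} \right)^{-1} \approx 355\,\text{nm}, $$

so such highly non-degenerate 460 nm/1550 nm pairs would be produced by an ultraviolet pump near 355 nm, a wavelength conveniently available from frequency-tripled Nd:YAG lasers.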
Ghost imaging is being considered for application in remote-sensing systems as a possible competitor with imaging laser radars ( LIDAR ). A theoretical performance comparison between a pulsed, computational ghost imager and a pulsed, floodlight-illumination imaging laser radar identified scenarios in which a reflective ghost-imaging system has advantages. [ 17 ]
Ghost imaging has been demonstrated for a variety of photon-science applications. A ghost-imaging experiment for hard x-rays was achieved using data obtained at the European Synchrotron. [ 18 ] Here, speckled pulses of x-rays from individual electron synchrotron bunches were used to generate a ghost-image basis, providing a proof of concept for experimental x-ray ghost imaging. At the same time that this experiment was reported, a Fourier-space variant of x-ray ghost imaging was published. [ 19 ] Ghost imaging has also been proposed for X-ray FEL applications. [ 20 ] Classical ghost imaging with compressive sensing has also been demonstrated with ultra-relativistic electrons. [ 21 ] | https://en.wikipedia.org/wiki/Ghost_imaging |
A ghost lineage is a hypothesized ancestor in a species lineage that has left no fossil evidence, but can still be inferred to exist or have existed because of gaps in the fossil record or genomic evidence. [ 1 ] [ 2 ] The process of determining a ghost lineage relies on fossilized evidence before and after the hypothetical existence of the lineage and extrapolating relationships between organisms based on phylogenetic analysis . [ 3 ] Ghost lineages assume unseen diversity in the fossil record and serve as predictions for what the fossil record could eventually yield; these hypotheses can be tested by unearthing new fossils or running phylogenetic analyses. [ 4 ]
Ghost lineages and Lazarus taxa are related concepts, as both stem from gaps in the fossil record. [ 2 ] A ghost lineage is any gap in a taxon 's fossil record, with or without reappearance, while a Lazarus taxon is a type of ghost lineage wherein a species is believed to have gone extinct due to an absence of it in the fossil record, but then reappears after a period of time. [ 2 ] Examples of Lazarus taxa include the famous coelacanths , as well as the Philippine naked-backed fruit bat . [ 5 ]
In 1992, an article stated: "These additional entities are taxa [groups] that are predicted to occur by the internal branching structure of phylogenetic trees .... I refer to these as ghost lineages because they are invisible to the fossil record." [ 6 ] Phylogenetic trees constructed from the fossil record and Darwin's theory of evolution often indicate that species with similar phenotypes existed, even though their fossils have not been discovered. [ 7 ]
It is important to note that ghost lineages and ghost taxa are not the same. A ghost lineage is a single direct connection between a descendant and its ancestor, whereas a ghost taxon has many split descendants. [ 3 ]
Looking back at past organisms, some groups (or lineages) have gaps in their fossil records. These organisms or species may be closely related to one another, but there are no traces in the fossil record or sediment beds that might shed light on their origins. Biologists may infer the existence of ghost lineages by examining sequential stratigraphic units in the fossil record. [ 8 ] Fossils can then be mapped onto cladograms and range charts to assess which lineages are missing from the fossil record. [ 8 ] A classic example is the coelacanth, a type of fish related to the lungfishes and to primitive tetrapods. Coelacanths have existed for the past 80 million years but have left no fossils from that interval. The reason is their environment, deep water near volcanic islands, whose sediments are hard to access; this gives the coelacanths an 80-million-year gap, or ghost lineage. [ 2 ] Another ghost lineage was that of the averostran theropods, a ghost lineage now reduced considerably by the discovery of Tachiraptor. [ 9 ]
The duration between distinct fossils can be measured and used to derive information about the evolutionary history of a species. A study conducted in 1998 showed a correlation between the diversification of a species and the duration of its ghost lineage: namely, a shorter ghost lineage implies greater species diversification. [ 10 ]
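The logic of inferring a minimum ghost-lineage duration can be sketched in a few lines. In the toy Python function below (the function name and example dates are hypothetical; real analyses work over full cladograms and propagate dating uncertainty), the key assumption is the standard one: sister lineages diverge at the same moment, so a taxon whose oldest fossil is younger than its sister group's must have existed unrecorded for at least the difference.

```python
def min_ghost_lineage(fa_taxon: float, fa_sister: float) -> float:
    """Minimum implied ghost-lineage duration, in millions of years (Ma).

    fa_taxon  : age of the taxon's oldest known fossil (first appearance)
    fa_sister : age of its sister group's oldest known fossil

    Sister lineages split from their common ancestor simultaneously, so a
    younger first appearance implies an unrecorded (ghost) interval at
    least as long as the gap between the two first appearances.
    """
    return max(0.0, fa_sister - fa_taxon)

# Hypothetical example: sister group first appears 150 Ma ago, while the
# taxon's own oldest fossil is 120 Ma old -> at least 30 Ma of ghost lineage.
print(min_ghost_lineage(fa_taxon=120.0, fa_sister=150.0))  # 30.0
```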
Genetic evidence has revealed ghost populations in many species, including modern bonobos and chimpanzees, allopolyploid frogs, polyploid parthenogenetic crayfish, a variety of plants, and humans. [ 11 ] [ 12 ] [ 13 ] A study comparing the genomes of 69 modern bonobos and chimpanzees found between 0.9–4.2% of gene flow from ancient bonobos and an archaic great ape lineage to modern bonobos, allowing researchers to reconstruct 4.8% of this ghost population's genome. [ 11 ] Furthermore, previous models for European ancestry suggested that European populations descended from two ancient populations, but genetic evidence now suggests that a third ghost population, the Ancient North Eurasians , has also contributed to European ancestry. [ 13 ] | https://en.wikipedia.org/wiki/Ghost_lineage |
Ghost nets are fishing nets that have been abandoned, lost, or otherwise discarded in the ocean, lakes, and rivers. [ 1 ] These nets, often nearly invisible in the dim light, can be left tangled on a rocky reef or drifting in the open sea. They can entangle fish, dolphins, sea turtles, sharks, dugongs, crocodiles, seabirds, crabs, and other creatures, including the occasional human diver. [ 2 ] Acting as designed, the nets restrict movement, causing starvation, laceration and infection, and suffocation in animals that need to return to the surface to breathe. [ 3 ] It is estimated that around 48 million tons (48,000 kt) of fishing gear is lost each year, not including gear that is abandoned or discarded, [ 4 ] and it may linger in the oceans for a considerable time before breaking up.
Some commercial fishermen use gillnets . These are suspended in the sea by flotation buoys , such as glass floats , along one edge. In this way they can form a vertical wall hundreds of metres long, where any fish within a certain size range can be caught. Normally these nets are collected by fishermen and the catch removed.
If this is not done, the net can continue to catch fish until the weight of the catch exceeds the buoyancy of the floats. The net then sinks, and the fish are devoured by bottom-dwelling crustaceans and other fish. Then the floats pull the net up again and the cycle continues. Given the high-quality synthetics that are used today, the destruction can continue for a long time. [ 5 ]
The problem is not just nets but ghost gear in general; [ 6 ] old-fashioned crab traps , without the required "rot-out panel", also sit on the bottom, where they become self-baiting traps that can continue to trap marine life for years. Even balled-up fishing line can be deadly for a variety of creatures, including birds and marine mammals. [ 7 ] Over time the nets become more and more tangled. In general, fish are less likely to be trapped in gear that has been down a long time. [ 8 ]
Fishermen sometimes abandon worn-out nets because it is often the easiest way to get rid of them. [ 6 ]
The French government offered a reward for ghost nets handed in to local coastguards along sections of the Normandy coast between 1980 and 1981. The project was abandoned when people vandalized nets to claim rewards, without retrieving anything at all from the shoreline or ocean. [ 9 ]
In September 2015, the Global Ghost Gear Initiative (GGGI) was created by World Animal Protection to give the cause a unique and stronger voice.
The term ALDFG means "abandoned, lost and discarded fishing gear". [ 7 ]
From 2000 to 2012, the National Marine Fisheries Service reported an average of 11 large whales entangled in ghost nets every year along the US west coast. From 2002 to 2010, 870 nets were recovered in Washington (state) with over 32,000 marine animals trapped inside. Ghost gear is estimated to account for 10% (640,000 tonnes) of all marine litter . [ 6 ]
An estimated 46% of the Great Pacific Garbage Patch consists of fishing related plastics. [ 10 ] Fishing nets account for about 1% of the total mass of all marine macroplastics larger than 200 millimetres (7.9 in), and plastic fishing gear overall constitutes over two-thirds of the total mass. [ 11 ]
According to the SeaDoc Society, each ghost net kills $20,000 worth of Dungeness crab over 10 years. The Virginia Institute of Marine Science calculated that ghost crab pots capture 1.25 million blue crabs each year in the Chesapeake Bay alone. [ 6 ]
In May 2016, the Australian Fisheries Management Authority (AFMA) recovered 10 tonnes of abandoned nets within the Australian Exclusive Economic Zone and Torres Strait protected zone perimeters. One protected turtle was rescued. [ 12 ]
The northern Australian olive ridley sea turtle, Lepidochelys olivacea, is a genetically distinct variant of the olive ridley sea turtle. Ghost nets pose a threat to its continued existence; without further action to protect it, the population could face extinction. [ 13 ]
Researchers in Brazil used social media to estimate how ghost nets have negatively affected the Brazilian marine biota. Footage of ghost nets found on Google and YouTube were obtained and analyzed to arrive at the results of the study. They found that ghost nets have an adverse effect on several marine species, including large marine animals, such as the Bryde's whale and Guiana dolphin. [ 14 ]
Unlike synthetic fishing nets, biodegradable fishing nets decompose naturally under water after a certain period of time. Coconut fibre ( coir ) fishing nets are commercially available and are hence a practical option for fishermen. [ 15 ] [ 16 ]
Technology systems for marking and tracking fishing gear, including GPS tracking, are being trialled to promote greater accountability and transparency. [ 17 ]
Legalizing gear retrievals and establishing waste management systems are required to manage and mitigate abandoned, lost, and discarded fishing gear at sea. [ 7 ] The company Net-works worked out a solution to turn discarded fishing nets into carpet tiles. [ 18 ]
Between 2008 and 2015, the US Fishing for Energy initiative collected 2.8 million pounds of fishing gear and, in partnership with Reworld, incinerated it to generate enough electricity to power 182 homes for one year. [ 6 ] [ 19 ]
One retrieval initiative in Southwest Nova Scotia in Canada conducted 60 retrieval trips, searched ~1523 square kilometers of the seafloor and removed 7064 kg of abandoned, lost, and discarded fishing gear (ALDFG) (comprising 66% lobster traps and 22% dragger cable). Lost traps continued to capture target and non-target species. A total of 15 different species were released from retrieved ALDFG, including 239 lobsters (67% were market-sized) and seven groundfish (including five species-at-risk). The commercial losses from ALDFG in Southwest Nova Scotia were estimated at $175,000 CAD annually. [ 20 ]
In 2009 the Dutch technical diver Pascal van Erp started to recover abandoned ghost fishing gear entangled on North Sea wrecks, and soon inspired others. Organised teams of volunteer technical divers recovered tons of ghost fishing gear off the Netherlands coastline. The loop was then closed: after one season of diving, 22 tons of recovered fishing gear was sent to the Aquafil Group for recycling into new Nylon 6 material. In 2012 Pascal van Erp formally founded the not-for-profit Ghost Fishing organisation. [ 21 ] In 2020 the Ghost Fishing Foundation rebranded as the Ghost Diving Foundation. [ 22 ]
A plan to protect UK seas from ghost fishing was backed by the European Parliament Fisheries Committee in 2018. Mr. Flack, who led the committee, said: "Abandoned fishing nets are polluting our seas, wasting fishing stocks and indiscriminately killing whales, sea lions or even dolphins. The tragedy of ghost fishing must end". [ 23 ]
Net amnesty schemes such as Fishing for Litter create incentives for the collection and responsible disposal of end-of-life fishing gear. These schemes address the root cause of many net abandonments: the financial cost of disposal. [ 24 ]
Fishing nets are often made from extremely high quality plastics to ensure suitable strength, which makes them desirable for recycling. Initiatives like Healthy Seas are connecting environmental cleanup projects to manufacturers to re-use these materials. [ 25 ] Recycled waste nets can be made into yarn and consumer products, such as swimwear. [ 26 ] [ 27 ]
In Australia, the Carpentaria Ghost Nets Program has collaborated with indigenous communities to increase awareness of ghost nets and to foster long term solutions. The program has trained indigenous northern Australians in scouting for ghost nets and in removing ghost nets and other plastic pollution. [ 28 ]
| https://en.wikipedia.org/wiki/Ghost_net |
A ghost population is a population that has been inferred using statistical techniques. [ 1 ]
In 2004, it was proposed that maximum likelihood or Bayesian approaches that estimate migration rates and population sizes using coalescent theory can be applied to datasets in which one population has no sampled data. Such an unsampled population is referred to as a "ghost population". The approach allows exploration of the effects of missing populations on the estimation of population sizes and migration rates between two specific populations. The biases of the inferred population parameters depend on the magnitude of the migration rate from the unknown populations. [ 1 ] The technique attracted criticism because ghost populations are the product of statistical models, and inherit those models' limitations. [ 2 ]
In 2012, DNA analysis and statistical techniques were used to infer that a now-extinct human population in northern Eurasia had interbred with both the ancestors of Europeans and a Siberian group that later migrated to the Americas. The group was referred to as a ghost population because they were identified by the echoes that they leave in genomes—not by bones or ancient DNA. [ 3 ] In 2013, another study found the remains of a member of this ghost group, fulfilling the earlier prediction that they had existed. [ 4 ] [ 5 ]
According to a study published in 2020, there are indications that 2% to 19% (with point estimates of roughly 6.6% and 7.0%) of the DNA of four West African populations may have come from an unknown archaic hominin which split from the common ancestor of modern humans and Neanderthals between 360 kya and 1.02 mya.
Basal West Africans did not split off before Neanderthals split from modern humans. [ 6 ] However, Basal West Africans may have been the earliest group to split from other modern humans, diverging even before 300,000 to 200,000 BP, when the ancestors of the modern San split from other modern humans. [ 6 ]
However, the study also suggests that at least part of this archaic admixture is also present in Eurasians/non-Africans, and that the admixture event or events range from 0 to 124 ka B.P, which includes the period before the Out-of-Africa migration and prior to the African/Eurasian split (thus affecting in part the common ancestors of both Africans and Eurasians/non-Africans). [ 7 ] [ 8 ] [ 9 ] Another recent study, which discovered substantial amounts of previously undescribed human genetic variation, also found ancestral genetic variation in Africans that predates modern humans and was lost in most non-Africans. [ 10 ]
In 2015, a study of the lineage and early migration of the domestic pig found that the best model that fitted the data included gene flow from a ghost population during the Pleistocene that is now extinct. [ 11 ]
A 2018 study suggests that the common ancestor of the wolf and the coyote may have interbred with an unknown canid related to the dhole . [ 12 ] | https://en.wikipedia.org/wiki/Ghost_population |
In television , a ghost is a replica of the transmitted image, offset in position, that is superimposed on top of the main image. It is often caused when a TV signal travels by two different paths to a receiving antenna, with a slight difference in timing. [ 1 ]
Common causes of ghosts (in the more specific sense) include multipath propagation, in which the signal reflects off buildings or terrain and reaches the antenna by more than one route, and RF leakage that lets the signal enter the receiver by a second path.
Note that ghosts are a problem specific to the video portion of television, largely because it uses AM for transmission. The audio portion uses FM , which has the desirable property that a stronger signal tends to overpower interference from weaker signals due to the capture effect . Even when ghosts are particularly bad in the picture, there may be little audio interference.
SECAM TV uses FM for the chrominance signal, hence ghosting only affects the luma portion of its signal. TV is broadcast on VHF and UHF , which have line-of-sight propagation and easily reflect off buildings, mountains, and other objects.
If the ghost is seen on the left of the main picture, then it is likely that the problem is pre-echo, which is seen in buildings with very long TV downleads where an RF leakage has allowed the TV signal to enter the tuner by a second route. For instance, plugging in an additional aerial to a TV which already has a communal TV aerial connection (or cable TV ) can cause this condition.
Ghosting is not specific to analog transmission. It may appear in digital television when interlaced video is incorrectly deinterlaced for display on progressive-scan output devices. The mechanisms that cause ghosting in analog television may corrupt the signal beyond use for digital television . 8VSB, COFDM , and other modulation schemes seek to correct this. | https://en.wikipedia.org/wiki/Ghosting_(television) |
Gi-Fi or gigabit wireless refers to wireless communication at a bit rate of at least one gigabit per second (Gbit/s).
By 2004 some trade press used the term "Gi-Fi" to refer to faster versions of the IEEE 802.11 standards marketed under the trademark Wi-Fi . [ 1 ]
In 2008 researchers at the University of Melbourne demonstrated a transceiver on a single integrated circuit (chip) operating at 60 GHz on the CMOS process, allowing wireless communication speeds of up to 5 Gbit/s within a 10-metre (33-foot) range. [ 2 ] Some press reports called this "GiFi". [ 3 ] [ 4 ] It was developed by the Melbourne University-based laboratories of NICTA (National ICT Australia Limited). [ 3 ]
In 2009, the Wireless Gigabit Alliance was formed to promote the technology. It used the term " WiGig " which avoided trademark confusion. [ 5 ]
| https://en.wikipedia.org/wiki/Gi-Fi |
In mathematics, Giambelli's formula , named after Giovanni Giambelli , expresses Schubert classes as determinants in terms of special Schubert classes.
It states σ λ = det ( σ λ i + j − i ) 1 ≤ i , j ≤ r {\displaystyle \sigma _{\lambda }=\det(\sigma _{\lambda _{i}+j-i})_{1\leq i,j\leq r}} where σ λ is the Schubert class of the partition λ = (λ 1 ≥ ⋯ ≥ λ r ), and the entries σ k are the special Schubert classes (with σ 0 = 1 and σ k = 0 for k < 0).
Giambelli's formula may be derived as a consequence of Pieri's formula . The Porteous formula is a generalization to morphisms of vector bundles over a variety.
In the theory of symmetric functions, the same identity, known as the first Jacobi-Trudi identity expresses Schur functions as determinants in terms of complete symmetric functions . There is also the dual second Jacobi-Trudi identity which expresses Schur functions as determinants in terms of elementary symmetric functions . The corresponding identity also holds for Schubert classes.
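As a concrete illustration of the first Jacobi-Trudi identity (a worked example, not part of the original article; it uses the standard conventions h 0 = 1 and h k = 0 for k < 0), take the partition λ = (2, 1):

s ( 2 , 1 ) = det ( h 2 h 3 h 0 h 1 ) = h 2 h 1 − h 3 {\displaystyle s_{(2,1)}=\det {\begin{pmatrix}h_{2}&h_{3}\\h_{0}&h_{1}\end{pmatrix}}=h_{2}h_{1}-h_{3}}

The matrix entry in row i and column j is h λ i + j − i {\displaystyle h_{\lambda _{i}+j-i}} , and the determinant expands to the familiar expression of the Schur function s ( 2 , 1 ) {\displaystyle s_{(2,1)}} in complete symmetric functions.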
There is another Giambelli identity , expressing Schur functions as determinants of matrices whose entries are Schur functions corresponding to hook partitions contained within the same Young diagram . This too is valid for Schubert classes, as are all Schur function identities. For instance, hook partition Schur functions can be expressed bilinearly in terms of elementary and complete symmetric functions, and Schubert classes satisfy these same relations.
| https://en.wikipedia.org/wiki/Giambelli's_formula |
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ]
Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ:
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; the i -th least significant bit a pattern of 2 i on 2 i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2 n −1 on, 2 n −1 off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2 n −2 places. The four-bit version of this is shown below (decimal: binary → Gray):

0: 0000 → 0000
1: 0001 → 0001
2: 0010 → 0011
3: 0011 → 0010
4: 0100 → 0110
5: 0101 → 0111
6: 0110 → 0101
7: 0111 → 0100
8: 1000 → 1100
9: 1001 → 1101
10: 1010 → 1111
11: 1011 → 1110
12: 1100 → 1010
13: 1101 → 1011
14: 1110 → 1001
15: 1111 → 1000
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ]
The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. [ 26 ] [ 13 ]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ]
The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ]
About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ]
Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes have also been used since 1953 in labelling the axes of Karnaugh maps [ 45 ] [ 46 ] [ 47 ] and since 1958 in Händler circle graphs, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization .
In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered to be operating in different "clock domains". This technique is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code, which flips every bit equally often, can be constructed. [ 52 ] Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
As early as 1941, George R. Stibitz utilized a reflected binary code in a binary pulse counting device. [ 11 ] [ 12 ] [ 13 ]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders; however, the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit may cause the code to pass briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. [ 54 ]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ]
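One way to realize this increment-by-round-trip approach in software or behavioural models is shown below. This is a minimal C sketch (an illustration assuming 32-bit values; the function names are not from any cited source), using a logarithmic-time decode:

#include <stdint.h>

/* Decode Gray to binary by XOR-folding the higher bits down;
   this takes O(log n) steps instead of one step per bit. */
static uint32_t gray_to_binary32(uint32_t g)
{
    for (uint32_t shift = 16; shift > 0; shift >>= 1)
        g ^= g >> shift;
    return g;
}

/* Increment a Gray-coded value: decode, add one, re-encode. */
static uint32_t gray_increment(uint32_t g)
{
    uint32_t b = gray_to_binary32(g) + 1;
    return b ^ (b >> 1);   /* binary-to-Gray conversion */
}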
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ]
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list:

n = 2 list: 00, 01, 11, 10
Reflected: 10, 11, 01, 00
Prefix original entries with 0: 000, 001, 011, 010
Prefix reflected entries with 1: 110, 111, 101, 100
Concatenated: 000, 001, 011, 010, 110, 111, 101, 100
The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n +1 from G n makes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing n ⊕ ⌊ n 2 ⌋ {\displaystyle n\oplus \left\lfloor {\tfrac {n}{2}}\right\rfloor } . Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i {\displaystyle i} of codewords are inverted, the order of neighbouring blocks of 2 i {\displaystyle 2^{i}} codewords is reversed. For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed
If bit 1 is inverted, blocks of 2 codewords change order; if bit 2 is inverted, blocks of 4 codewords reverse order.
Thus, performing an exclusive or on a bit b i {\displaystyle b_{i}} at position i {\displaystyle i} with the bit b i + 1 {\displaystyle b_{i+1}} at position i + 1 {\displaystyle i+1} leaves the order of codewords intact if b i + 1 = 0 {\displaystyle b_{i+1}={\mathtt {0}}} , and reverses the order of blocks of 2 i + 1 {\displaystyle 2^{i+1}} codewords if b i + 1 = 1 {\displaystyle b_{i+1}={\mathtt {1}}} . Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming g i {\displaystyle g_{i}} is the i {\displaystyle i} th Gray-coded bit ( g 0 {\displaystyle g_{0}} being the most significant bit), and b i {\displaystyle b_{i}} is the i {\displaystyle i} th binary-coded bit ( b 0 {\displaystyle b_{0}} being the most-significant bit), the reverse translation can be given recursively: b 0 = g 0 {\displaystyle b_{0}=g_{0}} , and b i = g i ⊕ b i − 1 {\displaystyle b_{i}=g_{i}\oplus b_{i-1}} . Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with the c o d e 0 = 0 {\displaystyle \mathrm {code} _{0}={\mathtt {0}}} , and at step i > 0 {\displaystyle i>0} find the bit position of the least significant 1 in the binary representation of i {\displaystyle i} and flip the bit at that position in the previous code c o d e i − 1 {\displaystyle \mathrm {code} _{i-1}} to get the next code c o d e i {\displaystyle \mathrm {code} _{i}} . The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values.
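A short C sketch of this iterative construction (the function name and output format are illustrative assumptions):

#include <stdio.h>

/* Print the 2^n codewords of the n-bit binary-reflected Gray code:
   start from code_0 = 0, and at step i flip the bit at the position
   of the least significant 1 in the binary representation of i. */
static void print_gray_sequence(unsigned n)
{
    unsigned code = 0;
    unsigned long count = 1ul << n;
    printf("%u\n", code);                  /* code_0 */
    for (unsigned long i = 1; i < count; i++) {
        unsigned pos = 0;
        while (((i >> pos) & 1ul) == 0)    /* find first set bit of i */
            pos++;
        code ^= 1u << pos;                 /* flip that bit */
        printf("%u\n", code);
    }
}

For n = 2 this prints 0, 1, 3, 2, i.e. the codewords 00, 01, 11, 10.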
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ]
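The listing itself is not preserved in this copy of the article; the following is a minimal sketch of such a pair of functions (the names are illustrative), using the n ⊕ ⌊n/2⌋ encoding and the bit-at-a-time decoding described above:

unsigned binary_to_gray(unsigned num)
{
    return num ^ (num >> 1);   /* XOR each bit with the next higher bit */
}

unsigned gray_to_binary(unsigned num)
{
    unsigned mask = num;
    while (mask) {             /* fold the higher bits down one shift at a time */
        mask >>= 1;
        num ^= mask;           /* num accumulates the prefix XOR of the Gray bits */
    }
    return num;
}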
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ending with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2 n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits.
(Ternary number → reflected ternary Gray code:)

0 → 000     1 → 001     2 → 002
10 → 012    11 → 011    12 → 010
20 → 020    21 → 021    22 → 022
100 → 122   101 → 121   102 → 120
110 → 110   111 → 111   112 → 112
120 → 102   121 → 101   122 → 100
200 → 200   201 → 201   202 → 202
210 → 212   211 → 211   212 → 210
220 → 220   221 → 221   222 → 222
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings.
For example, a 3-ary ( ternary ) Gray code would use the values 0,1,2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( N , k )-Gray code is presented (in C ):
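The original listing is likewise not preserved here; the following is a hedged reconstruction of one such iterative algorithm (names and signature are illustrative). As discussed in the next paragraph, it produces a cyclic code in which a digit may change by wrapping, so its ordering differs from the reflected (3, 2) sequence listed above:

/* Write the (base, digits)-Gray codeword for `value` into gray[],
   least significant digit at index 0. */
void to_n_ary_gray(unsigned base, unsigned digits, unsigned value,
                   unsigned gray[])
{
    unsigned i;
    unsigned baseN[digits];        /* value as an ordinary base-N number (C99 VLA) */
    for (i = 0; i < digits; i++) {
        baseN[i] = value % base;
        value   /= base;
    }
    /* Work from the most significant digit down: each Gray digit is
       offset by a running shift accumulated from the higher digits. */
    unsigned shift = 0;
    while (i--) {
        gray[i] = (baseN[i] + shift) % base;
        shift  += base - gray[i];  /* keep the shift non-negative */
    }
}

For base = 3 and digits = 2 this yields 00, 01, 02, 12, 10, 11, 21, 22, 20, in which consecutive codewords, including the wrap-around from 20 back to 00, differ in exactly one digit.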
There are other Gray code algorithms for ( n , k )-Gray codes. The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence ( δ k ) {\displaystyle (\delta _{k})} ; the transition counts ( spectrum ) of G are the collection of integers defined by
λ k = | { j ∈ Z R n : δ j = k } | , for k ∈ Z n {\displaystyle \lambda _{k}=|\{j\in \mathbb {Z} _{R^{n}}:\delta _{j}=k\}|\,,{\text{ for }}k\in \mathbb {Z} _{n}}
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ k = R n n {\displaystyle \lambda _{k}={\tfrac {R^{n}}{n}}} for all k . Clearly, when R = 2 {\displaystyle R=2} , such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2 ⌊ 2 n 2 n ⌋ {\displaystyle 2\left\lfloor {\tfrac {2^{n}}{2n}}\right\rfloor } or 2 ⌈ 2 n 2 n ⌉ {\displaystyle 2\left\lceil {\tfrac {2^{n}}{2n}}\right\rceil } . [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ]
We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code G ′ {\displaystyle G'} given an n -digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of G = g 0 , … , g 2 n − 1 {\displaystyle G=g_{0},\ldots ,g_{2^{n}-1}} into an even number L of non-empty blocks of the form
{ g 0 } , { g 1 , … , g k 2 } , { g k 2 + 1 , … , g k 3 } , … , { g k L − 2 + 1 , … , g − 2 } , { g − 1 } {\displaystyle \left\{g_{0}\right\},\left\{g_{1},\ldots ,g_{k_{2}}\right\},\left\{g_{k_{2}+1},\ldots ,g_{k_{3}}\right\},\ldots ,\left\{g_{k_{L-2}+1},\ldots ,g_{-2}\right\},\left\{g_{-1}\right\}}
where k 1 = 0 {\displaystyle k_{1}=0} , k L − 1 = − 2 {\displaystyle k_{L-1}=-2} , and k L ≡ − 1 ( mod 2 n ) {\displaystyle k_{L}\equiv -1{\pmod {2^{n}}}} ). This partition induces an ( n + 2 ) {\displaystyle (n+2)} -digit Gray code given by
If we define the transition multiplicities
m i = | { j : δ k j = i , 1 ≤ j ≤ L } | {\displaystyle m_{i}=\left|\left\{j:\delta _{k_{j}}=i,1\leq j\leq L\right\}\right|}
to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum λ i ′ {\displaystyle \lambda '_{i}} is
λ i ′ = { 4 λ i − 2 m i , if 0 ≤ i < n L , otherwise {\displaystyle \lambda '_{i}={\begin{cases}4\lambda _{i}-2m_{i},&{\text{if }}0\leq i<n\\L,&{\text{ otherwise }}\end{cases}}}
The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit i {\displaystyle i} transition and splitting another block at another digit i {\displaystyle i} transition produces a different Gray code with exactly the same transition spectrum λ i ′ {\displaystyle \lambda '_{i}} , so one may for example [ 65 ] designate the first m i {\displaystyle m_{i}} transitions at digit i {\displaystyle i} as those that fall between two blocks. Uniform codes can be found when R ≡ 0 ( mod 4 ) {\displaystyle R\equiv 0{\pmod {4}}} and R n ≡ 0 ( mod n ) {\displaystyle R^{n}\equiv 0{\pmod {n}}} , and this construction can be extended to the R -ary case as well. [ 66 ]
Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q n = ( V n , E n ) {\displaystyle Q_{n}=(V_{n},E_{n})} into levels of vertices that have equal weight, i.e.
V n ( i ) = { v ∈ V n : v has weight i } {\displaystyle V_{n}(i)=\{v\in V_{n}:v{\text{ has weight }}i\}}
for 0 ≤ i ≤ n {\displaystyle 0\leq i\leq n} . These levels satisfy | V n ( i ) | = ( n i ) {\displaystyle |V_{n}(i)|=\textstyle {\binom {n}{i}}} . Let Q n ( i ) {\displaystyle Q_{n}(i)} be the subgraph of Q n {\displaystyle Q_{n}} induced by V n ( i ) ∪ V n ( i + 1 ) {\displaystyle V_{n}(i)\cup V_{n}(i+1)} , and let E n ( i ) {\displaystyle E_{n}(i)} be the edges in Q n ( i ) {\displaystyle Q_{n}(i)} . A monotonic Gray code is then a Hamiltonian path in Q n {\displaystyle Q_{n}} such that whenever δ 1 ∈ E n ( i ) {\displaystyle \delta _{1}\in E_{n}(i)} comes before δ 2 ∈ E n ( j ) {\displaystyle \delta _{2}\in E_{n}(j)} in the path, then i ≤ j {\displaystyle i\leq j} .
An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths P n , j {\displaystyle P_{n,j}} of length 2 ( n j ) {\displaystyle 2\textstyle {\binom {n}{j}}} having edges in E n ( j ) {\displaystyle E_{n}(j)} . [ 69 ] We define P 1 , 0 = ( 0 , 1 ) {\displaystyle P_{1,0}=({\mathtt {0}},{\mathtt {1}})} , P n , j = ∅ {\displaystyle P_{n,j}=\emptyset } whenever j < 0 {\displaystyle j<0} or j ≥ n {\displaystyle j\geq n} , and
P n + 1 , j = 1 P n , j − 1 π n , 0 P n , j {\displaystyle P_{n+1,j}={\mathtt {1}}P_{n,j-1}^{\pi _{n}},{\mathtt {0}}P_{n,j}}
otherwise. Here, π n {\displaystyle \pi _{n}} is a suitably defined permutation and P π {\displaystyle P^{\pi }} refers to the path P with its coordinates permuted by π {\displaystyle \pi } . These paths give rise to two monotonic n -digit Gray codes G n ( 1 ) {\displaystyle G_{n}^{(1)}} and G n ( 2 ) {\displaystyle G_{n}^{(2)}} given by
G n ( 1 ) = P n , 0 P n , 1 R P n , 2 P n , 3 R ⋯ and G n ( 2 ) = P n , 0 R P n , 1 P n , 2 R P n , 3 ⋯ {\displaystyle G_{n}^{(1)}=P_{n,0}P_{n,1}^{R}P_{n,2}P_{n,3}^{R}\cdots {\text{ and }}G_{n}^{(2)}=P_{n,0}^{R}P_{n,1}P_{n,2}^{R}P_{n,3}\cdots }
The choice of π n {\displaystyle \pi _{n}} which ensures that these codes are indeed Gray codes turns out to be π n = E − 1 ( π n − 1 2 ) {\displaystyle \pi _{n}=E^{-1}\left(\pi _{n-1}^{2}\right)} . The first few values of P n , j {\displaystyle P_{n,j}} are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines .
Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q 2 n + 1 ( n ) {\displaystyle Q_{2n+1}(n)} is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n ≤ 15 {\displaystyle n\leq 15} , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 N , where N is the number of vertices in the middle-level subgraph. [ 70 ]
Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ]
Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ]
The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, each producing an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish 2 n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2 n − 2 n positions and that for prime n the limit is 2 n − 2 positions. [ 80 ] The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than 2 8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ]
Since this 30-degree example was published, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single-track Gray code that gives 1-degree resolution. This Gray code was used to design an actual device, published on the site Thingiverse , which was designed by etzenseep (Florian Bauer) in September 2022. [ 84 ]
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) adjacent points in the constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ]
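A minimal C sketch of such a per-axis Gray mapping for square 16-QAM (the constellation layout and the helper names are illustrative assumptions, not the mapping of any particular standard): each 4-bit symbol is split into two 2-bit halves, and each half selects one of four amplitude levels ordered so that neighbouring levels differ in exactly one bit.

/* Bit patterns along an axis read 00, 01, 11, 10 from -3 to +3,
   so neighbouring amplitude levels always differ in one bit. */
static int gray_level(unsigned g)
{
    static const int level[4] = { -3, -1, +3, +1 };  /* indexed by the raw 2-bit value */
    return level[g & 3u];
}

/* Map a 4-bit symbol to I (high two bits) and Q (low two bits). */
static void map_16qam(unsigned sym, int *i_out, int *q_out)
{
    *i_out = gray_level((sym >> 2) & 3u);
    *q_out = gray_level(sym & 3u);
}

With this layout, horizontally or vertically adjacent constellation points differ in a single bit, while diagonal neighbours differ in two, as described above.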
Two-dimensional Gray codes also have uses in location identifications schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". If the original value is increased further, the extracted bits count backwards. This is because Gray-encoded values do not overflow in the way classic binary encodings do when incremented past their "highest" value.
Example: the highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 gives the number 8, encoded in Gray as 1100. The last 3 bits do not overflow, and count backwards as the original 4-bit code is increased further.
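A small C demonstration of this behaviour (the output format is illustrative):

#include <stdio.h>

int main(void)
{
    /* Gray-encode 0..15 and watch the low 3 bits: they run forwards
       through the 3-bit Gray sequence up to 7, then retrace it
       backwards once the original value passes 8. */
    for (unsigned n = 0; n < 16u; n++) {
        unsigned g = n ^ (n >> 1);
        printf("%2u -> %u%u%u%u (low 3 bits: %u%u%u)\n", n,
               (g >> 3) & 1u, (g >> 2) & 1u, (g >> 1) & 1u, g & 1u,
               (g >> 2) & 1u, (g >> 1) & 1u, g & 1u);
    }
    return 0;
}

For n = 7 the low three bits are 100; for n = 8 the code is 1100, whose low three bits are again 100, after which they step backwards through 101, 111, 110, and so on.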
When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore check whether the sensor encodes those multiple values as one single Gray code or as separate ones; otherwise the values might appear to count backwards when an "overflow" is expected.
The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} with the metric given by the Hamming distance and the metric space over the finite ring Z 4 {\displaystyle \mathbb {Z} _{4}} (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces Z 2 2 m {\displaystyle \mathbb {Z} _{2}^{2m}} and Z 4 m {\displaystyle \mathbb {Z} _{4}^{m}} . Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} of ring-linear codes from Z 4 {\displaystyle \mathbb {Z} _{4}} . [ 87 ] [ 88 ]
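As a quick worked check of this isometry (an illustration, not part of the original article): in Z 4 {\displaystyle \mathbb {Z} _{4}} the Lee distance between 1 and 3 is min ( | 1 − 3 | , 4 − | 1 − 3 | ) = 2 {\displaystyle \min(|1-3|,4-|1-3|)=2} , and their Gray images 01 and 10 differ in two bit positions; likewise the Lee distance between 0 and 3 is 1, and the images 00 and 10 have Hamming distance 1.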
There are a number of binary codes similar to Gray codes, and several binary-coded decimal (BCD) codes are Gray code variants as well. | https://en.wikipedia.org/wiki/Giannini_code |
The Giant Radio Array for Neutrino Detection ( GRAND ) is a proposed large-scale detector designed to collect ultra-high-energy cosmic particles, such as cosmic rays , neutrinos and photons with energies exceeding 10 17 eV . The project aims to solve the mystery of their origin and to probe the early stages of the universe itself. The proposal, formulated by an international group of researchers, calls for an array of 200,000 receivers to be placed on mountain ranges around the world.
The GRAND detector would search for neutrinos , exotic particles emitted by sources such as the black holes at the centres of galaxies. These neutrinos could help astronomers find the source of other energetic particles called ultra-high-energy cosmic rays . When neutrinos reach Earth, they often collide with particles either in the air or on the ground, creating showers of secondary particles. These secondary particles can be picked up by the radio antennas, which lets researchers calculate the trajectory of the initial neutrinos and trace them back to their source. [ 1 ] [ 2 ] The concept was first published in 2017. [ 3 ]
The giant radio detector array would comprise 200,000 low-cost antennas in groups of 10,000, spread out over nearly 200,000 square kilometres (77,000 sq mi) at different locations around the world. [ 2 ] This would make it the largest detector in the world. Construction, installation and networking of the 200,000 antennas would cost approximately US$226 million, [ 1 ] excluding the cost of renting the land and of manpower. [ 4 ]
The strategy of GRAND is to detect the radio emission coming from particle showers that develop in the terrestrial atmosphere as a result of the interaction of ultra-high energy (UHE) cosmic rays, gamma rays, and neutrinos. [ 5 ] Astrophysical tau neutrinos ( ν τ ) can be detected through extensive air showers (EAS) induced by tau ( τ − ) decays in the atmosphere. [ 3 ] The short-lived tau decays in the atmosphere generates an EAS that emits measurable electromagnetic emissions up to frequencies of hundreds of MHz . [ 3 ] The antennae are foreseen to operate in the 60-200 MHz band to avoid the short-wave background noise at lower frequencies. [ 3 ]
Each individual antenna is a simple Bow-tie design , featuring 3 perpendicular bows with an additional vertical arm to sample all three polarization directions. [ 5 ] Each antenna is mounted on a single 5-meter-tall pole, and each antenna in the grid is spaced at 1 km within a square grid. If the full array of 200,000 antennae is built, GRAND would reach an all-flavor sensitivity of 4 x10 −10 GeV cm −2 s −1 sr −1 above 5 x10 17 eV. Because of its sub-degree angular resolution, GRAND will also search for point sources of UHE neutrinos, steady and transient, potentially starting UHE neutrino astronomy, allowing for the discovery and follow-up of large numbers of radio transients, fast radio bursts , giant radio pulses, and for precise studies of the epoch of reionization . [ 5 ]
The researchers estimate that GRAND could allow not just the detection of neutrinos, but could also allow a differentiation of the source types, such as galaxy clusters with central sources, fast-spinning newborn pulsars , active galactic nuclei , and afterglows of gamma-ray bursts . [ 3 ]
Simulation and experimental work is ongoing on technological development and background rejection strategies. Phase one, called GRANDProto35, includes 35 antennas and 24 scintillators deployed in the Tian Shan mountains in China. [ 3 ] If a pulse is observed simultaneously in the signals from three or more scintillators, the signals are recorded. As of October 2018, GRANDProto35 is in its commissioning phase. [ 5 ] So far, the system achieves 100% detection efficiency for trigger rates up to 20 kHz .
The next step, planned for 2020, is a dedicated setup called GRANDProto300 covering an area of 300 square kilometres (120 sq mi). [ 3 ] The baseline layout is a square grid with a 1 kilometre (0.62 mi) inter-antenna spacing, just as for the later stages. Because GRANDProto300 will not be large enough to detect cosmogenic neutrinos, its viability will be tested using extensive air showers initiated by very inclined cosmic rays, thus providing an opportunity to do cosmic-ray science. [ 5 ] The site would be hosted in the Chinese provinces of XinJiang, Inner Mongolia, Yunnan, and Gansu. [ 5 ] If funded, the later phases would build GRAND10k in 2025, and finally GRAND200k (200,000 receivers) in the 2030s. [ 5 ] | https://en.wikipedia.org/wiki/Giant_Radio_Array_for_Neutrino_Detection |
The Giant Void (also known as the Giant Void in NGH , Canes Venatici Supervoid , and AR-Lp 36 ) is an extremely large region of space with an underdensity of galaxies, located in the constellation Canes Venatici . It is the second-largest confirmed void to date, with an estimated diameter of 300 to 400 Mpc (1 to 1.3 billion light-years ), [ 1 ] and its centre is approximately 460 Mpc (1.5 billion light-years) away ( z = 0.116). [ 1 ] It was discovered in 1988 [ 2 ] and was the largest void in the Northern Galactic Hemisphere, [ 1 ] and possibly the second-largest ever detected. Even the hypothesized "Eridanus Supervoid" corresponding to the location of the WMAP cold spot is dwarfed by this void, although the Giant Void does not correspond to any significant cooling of the cosmic microwave background .
Inside this vast void there are 17 galaxy clusters , concentrated in a spherical region 50 Mpc in diameter. [ 1 ] Studies of the motion of these clusters show that they do not interact with each other, meaning that the density of the clusters is very low, resulting in weak gravitational interaction . [ 1 ] The void's location in the sky is close to the Boötes Void .
In a series of papers published between 2004 and 2006, cosmologist and theoretical physicist Laura Mersini-Houghton presented a theory that the universe arose from a multiverse , and made a series of testable predictions which included the existence of the Giant Void. [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/Giant_Void |
A giant cell (also known as a multinucleated giant cell, or multinucleate giant cell ) is a mass formed by the union of several distinct cells (usually histiocytes ), often forming a granuloma . [ 1 ]
Although there is typically a focus on the pathological aspects of multinucleate giant cells (MGCs), they also play many important physiological roles. Osteoclasts are a type of MGC that are critical for the maintenance, repair, and remodeling of bone and are present normally in a healthy human body. Osteoclasts are frequently classified and discussed separately from other MGCs which are more closely linked with disease.
Non-osteoclast MGCs can arise in response to an infection , such as tuberculosis , herpes , or HIV , or as part of a foreign body reaction . These MGCs are cells of monocyte or macrophage lineage fused together. Similar to their monocyte precursors, they can phagocytose foreign materials. However, their large size and extensive membrane ruffling make them better equipped to clear up larger particles. They utilize activated CR3s to ingest complement-opsonized targets. Non-osteoclast MGCs are also responsible for the clearance of cell debris, which is necessary for tissue remodeling after injuries. [ 2 ]
Types include foreign-body giant cells , Langhans giant cells , and Touton giant cells ; giant cells are also characteristic of giant-cell arteritis .
Osteoclasts were discovered in 1873. [ 3 ] However, it was not until the development of the organ culture in the 1970s that their origin and function could be deduced. Although there was a consensus early on about the physiological function of osteoclasts, theories on their origins were heavily debated. Many believed osteoclasts and osteoblasts came from the same progenitor cell. Because of this, osteoclasts were thought to be derived from cells in connective tissue. Studies that observed that bone resorption could be restored by bone marrow and spleen transplants helped prove osteoclasts' hematopoietic origin. [ 3 ]
Other multinucleated giant cells can arise in response to numerous types of bacteria and diseases. Giant cells are also known to develop when infections are present. They were first observed as early as the middle of the last century, but it is not fully understood why these reactions occur. In the process of giant cell formation, monocytes or macrophages fuse together, which can cause multiple problems for the immune system. [ citation needed ]
Osteoclasts are the most prominent examples of MGCs and are responsible for the resorption of bones in the body. Like other MGCs, they are formed from the fusion of monocyte/macrophage precursors. [ 4 ] However, unlike other MGCs, the fusion pathway they originate from is well elucidated. They also do not ingest foreign materials and instead absorb bone matrix and minerals.
Osteoclasts are typically associated more with healthy physiological functions than they are with pathological states. They function alongside osteoblasts to remodel and maintain the integrity of bones in the body. They also contribute to the creation of the niche necessary for hematopoiesis and negatively regulate T cells . However, while the primary functions of osteoclasts are integral to maintaining a healthy physiological state, they have also been linked to osteoporosis and the formation of bone tumors. [ 5 ]
Giant cell arteritis , [ 6 ] also known as temporal arteritis or cranial arteritis, is the most common MGC-linked disease. This type of arteritis causes the arteries in the head, neck, and arm area to swell to abnormal sizes. Although the cause of this disease is not currently known, it appears to be related to polymyalgia rheumatica . [ 7 ]
Giant cell arteritis is most prevalent in older individuals, with incidence rising from age 50. Women are 2–3 times more likely to develop the disease than men.
Northern Europeans have been observed to have a higher incidence of giant cell arteritis compared to southern European, Hispanic, and Asian populations. It has been suggested that this difference may lie in the criteria used to diagnose giant cell arteritis rather than actual disease incidence, in addition to genetic and geographic factors. [ 8 ]
Symptoms may include a mild fever, loss of appetite, fatigue, vision loss, and severe headaches. [ 9 ] These symptoms are often misinterpreted, leading to a delay in treatment. [ 10 ] If left untreated, this disease can result in permanent blindness. [ 11 ]
The current gold standard for diagnosis is a temporal artery biopsy . [ 12 ] The skin on the patient's face is anesthetized , and an incision is made in the face around the area of the temples to obtain a sample of the temporal artery. The incision is then sutured. A histopathologist examines the sample under a microscope and issues a pathology report (pending extra tests that may be requested by the pathologist).
The management regimen consists primarily of systemic corticosteroids (e.g. prednisolone), commencing at a high dose.
Langhans giant cells are named for the pathologist who discovered them, Theodor Langhans. Like many of the other kinds of giant cell formations, epithelioid macrophages fuse together to form a multinucleated giant cell. The nuclei form a circle or semicircle, similar in shape to a horseshoe, away from the center of the cell. Langhans giant cells were typically associated with tuberculosis but have been found to occur in many types of granulomatous diseases .
Langhans giant cells are closely associated with tuberculosis, syphilis , sarcoidosis, and deep fungal infections . They occur frequently in delayed hypersensitivity reactions.
Symptoms may include fever, weight loss, fatigue and loss of appetite.
This type of giant cell is often triggered by bacteria that spread from person to person through the air, as in tuberculosis. Tuberculosis is closely linked with HIV, since people who have HIV have a hard time fighting off diseases and infections. Many tests may be performed to exclude related diseases and obtain the correct diagnosis for Langhans giant cells.
Also known as xanthelasmatic giant cells, Touton giant cells consist of fused epithelioid macrophages and have multiple nuclei. They are characterized by the ring-shaped arrangement of their nuclei and the presence of foamy cytoplasm surrounding the nucleus. Touton giant cells have been observed in lipid-laden lesions such as fat necrosis .
Touton giant cell formation is most common in men and women aged 37–78.
Touton giant cells typically cause symptoms similar to those of other forms of giant cell, such as fever, weight loss, fatigue and loss of appetite.
Foreign-body giant cells form when a subject is exposed to a foreign substance. Exogenous substances can include talc or sutures . As with other types of giant cells, epithelioid macrophages fusing together causes these giant cells to form and grow. [ 13 ] In this form of giant cell, the nuclei are arranged in an overlapping manner. This giant cell is often found in tissue because of medical devices , prostheses , and biomaterials .
Reed-Sternberg cells are generally thought to originate from B-lymphocytes. [ 14 ] They are hard to study due to their rarity, and there are other theories about the origins of these cells. Some less popular theories speculate that they may arise from the fusion between reticulum cells, lymphocytes, and virus-infected cells. [ 15 ]
Similar to other MGCs, Reed-Sternberg cells are large and are either multinucleated or have a bilobed nucleus. Their nuclei are irregularly shaped, contain clear chromatin, and possess an eosinophilic nucleolus.
Some researchers have conjectured that giant cells may be instrumental in the formation of tumours, and that their origin may lie in the stress-induced genomic reorganization proposed by Nobel Laureate Barbara McClintock. [ 16 ] It had previously been suggested that such genomic stress could be aggravated by some genotoxic agents used in cancer therapy. [ 17 ]
Poly-aneuploid cancer cells (PACCs) may serve as efficient sources of heritable variation that allows cancer cells to evolve rapidly. [ 18 ]
Endogenous substances such as keratin , fat , and cholesterol crystals (cholesteatoma) can also induce giant cell formation. [ 13 ]
Coronavirus disease 2019 (COVID-19) is caused by a novel coronavirus called SARS-CoV-2. Multinucleated giant cells have been detected in biopsy specimens from patients with COVID-19. This type of giant cell was first found in the pulmonary pathology of early-phase COVID-19 pneumonia, in two patients with lung cancer who underwent biopsy. Specifically, the cells were located in inflammatory fibrin clusters, sometimes together with mononuclear inflammatory cells. [ 19 ] Another pathological study also detected this type of giant cell in COVID-19 and described it as a "multinucleated syncytial cell". The morphological analysis showed that multinucleated syncytial cells and atypical enlarged pneumocytes, demonstrating cytomorphological changes consistent with viral infection, were found in the intra-alveolar spaces. The viral antigen was detected in the cytoplasm of multinucleated syncytial cells, indicating the presence of the SARS-CoV-2 virus. [ 20 ] However, a later post-mortem study described these cells as 'giant cell-like' rather than true giant cells derived from histiocytes: they are instead derived from clusters of type II pneumocytes with cytopathic changes , as confirmed by cytokeratin staining. [ 21 ] The infection and pathogenesis of the SARS-CoV-2 virus in human patients remain largely unknown. [ 20 ]
Multinucleate giant cells have also been described in MERS-CoV, a closely related coronavirus. [ 20 ]
Further studies characterizing the role of multinucleated giant cells in the human immune defense against COVID-19 may lead to more effective therapies. | https://en.wikipedia.org/wiki/Giant_cell |
Giant magnetofossils are microscopic (1–4 μm) magnetic minerals, typically a product of the biomineralization of magnetite (Fe 3 O 4 ). [ 1 ] [ 2 ] They fall under the broader umbrella of magnetofossils, which also includes conventional magnetofossils (20–200 nm), the ancestral remains of bacterial organisms ( magnetotactic bacteria ). The organisms associated with giant magnetofossils are hypothesized to be microbial but remain largely unknown because they have no modern or fossil analogs. Giant magnetofossils are found in marine sediments spanning from the Cretaceous (97 Ma) to the modern geologic record in the Cenozoic, [ 3 ] [ 4 ] though they may occur in a wide range of aquatic environments globally. [ 5 ] They are particularly well studied around global climatic events such as the Paleocene-Eocene Thermal Maximum (~56 Ma).
There are four widely accepted morphologies of giant magnetofossils: bullets, spindles, needles, and spearheads. Their purpose is poorly understood beyond preliminary hypotheses that they serve the organisms' navigational capabilities, protective armor, structural integrity, magnetic properties, or hardness, all of which are shape- and size-dependent. Their distinct morphology and chemical signature indicate that they must be of biogenic origin, which is undisputed; however, the organisms responsible for their creation, and their purpose, are unknown.
Preliminary evidence suggests that giant magnetofossils occur during climate shifts such as global warming, which causes a dramatic increase in weathering and sedimentation. Increases in weathering fuel an iron-rich environment and are associated with eutrophication of the water column, which promotes biomineralization along or near the sediment-water interface in suboxic conditions. [ 1 ] [ 2 ] More work needs to be done to pinpoint ideal environments, however; there are likely additional climatic controls on the biomineralization of giant magnetofossils.
Giant magnetofossils are larger during periods of increased weathering; their overall abundance is still poorly understood, but they appear in other intervals in smaller sizes and lower abundance under less ideal conditions. [ 1 ] They correlate positively with warmer suboxic intervals but are not limited to this correlation. [ 6 ] [ 5 ]
Giant magnetofossil magnetization is variable and highly dependent on size and morphology. They display high remanence, retaining a record of the magnetization acquired upon crystallization; this has potential for use as an environmental proxy.
It is proposed that giant magnetofossil needle structures are used for navigational purposes, as highlighted by their single-domain magnetism, while other varieties (spindles, bullets, and spearheads) are not used for navigation; their purpose is still unclear. Bullets, depending on shape and size, can exhibit multi-domain (MD) or single-domain (SD) magnetism, and large bullet structures appear to be in the vortex state. Spindles are stable or metastable single-domain. Needles are typically SD. A large spearhead resembles a pseudo-single domain with some vortex magnetic signatures, indicating it could serve protection rather than navigation. [ 1 ] [ 2 ] [ 4 ]
Giant magnetofossils are poorly understood; their origin, purpose, and abundance have not been constrained. Though it is understood that they are biogenically produced, much is left to be studied, especially their potential as environmental proxies and their connection with large-scale climatic events. They hold potentially valuable information that has yet to be determined. | https://en.wikipedia.org/wiki/Giant_magnetofossils |
Giant oscillator strength is inherent in excitons that are weakly bound to impurities or defects in crystals.
The spectrum of fundamental absorption of direct-gap semiconductors such as gallium arsenide (GaAs) and cadmium sulfide (CdS) is continuous and corresponds to band-to-band transitions. It begins with transitions at the center of the Brillouin zone , k = 0 {\displaystyle {\boldsymbol {k}}=0} . In a perfect crystal, this spectrum is preceded by a hydrogen-like series of transitions to the s -states of Wannier-Mott excitons. [ 1 ] In addition to the exciton lines, there are surprisingly strong additional absorption lines in the same spectral region. [ 2 ] They belong to excitons weakly bound to impurities and defects and are termed 'impurity excitons'. The anomalously high intensity of the impurity-exciton lines indicates their giant oscillator strength of about f i ∼ 10 {\displaystyle f_{i}\sim 10} per impurity center, while the oscillator strength of free excitons is only about f e x ∼ 10 − 4 {\displaystyle f_{\rm {ex}}\sim 10^{-4}} per unit cell. Shallow impurity-exciton states act as antennas, borrowing their giant oscillator strength from vast areas of the crystal around them. They were predicted by Emmanuel Rashba , first for molecular excitons [ 3 ] and afterwards for excitons in semiconductors. [ 4 ] Giant oscillator strengths of impurity excitons endow them with ultra-short radiative lifetimes τ i ∼ 1 {\displaystyle \tau _{i}\sim 1} ns.
Interband optical transitions happen at the scale of the lattice constant which is small compared to the exciton radius. Therefore, for large excitons in direct-gap crystals the oscillator strength f e x {\displaystyle f_{\rm {ex}}} of exciton absorption is proportional to | Φ e x ( 0 ) | 2 {\displaystyle |\Phi _{\rm {ex}}(0)|^{2}} which is the value of the square of the wave function of the internal motion inside the exciton Φ e x ( r e − r h ) {\displaystyle \Phi _{\rm {ex}}({\boldsymbol {r}}_{e}-{\boldsymbol {r}}_{h})} at coinciding values of the electron r e {\displaystyle {\boldsymbol {r}}_{e}} and hole r h {\displaystyle {\boldsymbol {r}}_{h}} coordinates. For large excitons | Φ e x ( 0 ) | 2 ≈ 1 / a e x 3 {\displaystyle |\Phi _{\rm {ex}}(0)|^{2}\approx 1/a_{\rm {ex}}^{3}} where a e x {\displaystyle a_{\rm {ex}}} is the exciton radius, hence, f e x ≈ v / a e x 3 ≪ 1 {\displaystyle f_{\rm {ex}}\approx v/a_{\rm {ex}}^{3}\ll 1} , here v {\displaystyle v} is the unit cell volume. The oscillator strength f i {\displaystyle f_{i}} for producing a bound exciton can be expressed through its wave function Ψ i ( r e , r h ) {\displaystyle \Psi _{i}({\boldsymbol {r}}_{e},{\boldsymbol {r}}_{h})} and f e x {\displaystyle f_{\rm {ex}}} as
f i = 1 v ( ∫ d r e Ψ i ( r e , r e ) ) 2 | Φ e x ( 0 ) | 2 f e x {\displaystyle f_{i}={\frac {1}{v}}{\frac {(\int d{\boldsymbol {r}}_{e}\Psi _{i}({\boldsymbol {r}}_{e},{\boldsymbol {r}}_{e}))^{2}}{|\Phi _{\rm {ex}}(0)|^{2}}}f_{\rm {ex}}} .
Coinciding coordinates in the numerator, r e = r h {\displaystyle {\boldsymbol {r}}_{e}={\boldsymbol {r}}_{h}} , reflect the fact the exciton is created at a spatial scale small compared with its radius. The integral in the numerator can only be performed for specific models of impurity excitons. However, if the exciton is weakly bound to impurity, hence, the radius of the bound exciton a i {\displaystyle a_{i}} satisfies the condition a i {\displaystyle a_{i}} ≥ a e x {\displaystyle a_{\rm {ex}}} and its wave function of the internal motion Φ e x ( r e − r h ) {\displaystyle \Phi _{\rm {ex}}({\boldsymbol {r}}_{e}-{\boldsymbol {r}}_{h})} is only slightly distorted, then the integral in the numerator can be evaluated as ( a i / a e x ) 3 / 2 {\displaystyle (a_{i}/a_{\rm {ex}})^{3/2}} . This immediately results in an estimate for f i {\displaystyle f_{i}}
f i ≈ a i 3 v f e x {\displaystyle f_{i}\approx {\frac {a_{i}^{3}}{v}}f_{\rm {ex}}} .
This simple result reflects the physics of the phenomenon of giant oscillator strength : coherent oscillation of electron polarization in a volume of about a i 3 ≫ v {\displaystyle a_{i}^{3}\gg v} .
If the exciton is bound to a defect by a weak short-range potential, a more accurate estimate holds
f i = 8 ( μ m E e x E i ) 3 / 2 π a e x 3 v f e x {\displaystyle f_{i}=8\left({\frac {\mu }{m}}{\frac {E_{\rm {ex}}}{E_{i}}}\right)^{3/2}{\frac {\pi a_{\rm {ex}}^{3}}{v}}f_{\rm {ex}}} .
Here m = m e + m h {\displaystyle m=m_{e}+m_{h}} is the exciton effective mass, μ = ( m e − 1 + m h − 1 ) − 1 {\displaystyle \mu =(m_{e}^{-1}+m_{h}^{-1})^{-1}} is its reduced mass, E e x {\displaystyle E_{\rm {ex}}} is the exciton ionization energy, E i {\displaystyle E_{i}} is the binding energy of the exciton to impurity, and m e {\displaystyle m_{e}} and m h {\displaystyle m_{h}} are the electron and hole effective masses.
Giant oscillator strength for shallow trapped excitons results in their short radiative lifetimes
τ i ≈ 3 m 0 c 3 2 e 2 n ω i 2 f i . {\displaystyle \tau _{i}\approx {\frac {3m_{0}c^{3}}{2e^{2}n\omega _{i}^{2}f_{i}}}.}
Here m 0 {\displaystyle m_{0}} is the electron mass in vacuum, c {\displaystyle c} is the speed of light, n {\displaystyle n} is the refraction index, and ω i {\displaystyle \omega _{i}} is the frequency of emitted light. Typical values of τ i {\displaystyle \tau _{i}} are about nanoseconds, and these short radiative lifetimes favor the radiative recombination of excitons over the non-radiative one. [ 5 ] When quantum yield of radiative emission is high, the process can be considered as resonance fluorescence .
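To make the scale of these numbers concrete, the following Python sketch evaluates the lifetime estimate above in SI units (the formula as written is in Gaussian units; converted to SI it reads τ i = 6πε 0 m 0 c 3 / (e 2 n ω i 2 f i )). The CdS-like parameter values below (photon energy ≈ 2.5 eV, n ≈ 2.5, f i ≈ 10) are illustrative assumptions, not taken from the text.

```python
import math

eps0 = 8.854e-12        # vacuum permittivity, F/m
m0   = 9.109e-31        # electron mass in vacuum, kg
c    = 2.998e8          # speed of light, m/s
e    = 1.602e-19        # elementary charge, C
hbar = 1.055e-34        # reduced Planck constant, J*s

photon_energy_eV = 2.5  # near the CdS exciton line (assumed)
n    = 2.5              # refractive index (assumed)
f_i  = 10.0             # giant oscillator strength per impurity center

omega = photon_energy_eV * e / hbar   # angular frequency of emitted light, rad/s
tau = 6 * math.pi * eps0 * m0 * c**3 / (e**2 * n * omega**2 * f_i)
print(f"tau_i ~ {tau * 1e12:.0f} ps") # ~440 ps, of order the ~500 ps reported for CdS
```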
Similar effects exist for optical transitions between exciton and biexciton states.
An alternative description of the same phenomenon is in terms of polaritons : giant cross-sections of the resonance scattering of electronic polaritons on impurities and lattice defects.
While specific values of f i {\displaystyle f_{i}} and τ i {\displaystyle \tau _{i}} are not universal and change within collections of specimens, typical values confirm the above regularities. In CdS, with E i ≈ 6 {\displaystyle E_{i}\approx 6} meV, impurity-exciton oscillator strengths f i ≈ 10 {\displaystyle f_{i}\approx 10} were observed. [ 6 ] A value f i > 1 {\displaystyle f_{i}>1} per single impurity center should not be surprising because the transition is a collective process including many electrons in a region of volume about a i 3 ≫ v {\displaystyle a_{i}^{3}\gg v} . High oscillator strength results in low-power optical saturation and radiative lifetimes τ i ≈ 500 {\displaystyle \tau _{i}\approx 500} ps. [ 7 ] [ 8 ] Similarly, radiative lifetimes of about 1 ns were reported for impurity excitons in GaAs. [ 9 ] The same mechanism is responsible for short radiative times, down to 100 ps, for excitons confined in CuCl microcrystallites. [ 10 ]
Similarly, spectra of weakly trapped molecular excitons are also strongly influenced by adjacent exciton bands. It is an important property of typical molecular crystals with two or more symmetrically-equivalent molecules in the elementary cell, such as benzene and naphthalene, that their exciton absorption spectra consist of doublets (or multiplets) of bands strongly polarized along the crystal axes, as was demonstrated by Antonina Prikhot'ko . This splitting of strongly polarized absorption bands originating from the same molecular level is known as the 'Davydov splitting' and is the primary manifestation of molecular excitons. If the low-frequency component of the exciton multiplet is situated at the bottom of the exciton energy spectrum, then the absorption band of an impurity exciton approaching the bottom from below is enhanced in this component of the spectrum and reduced in the other components; in the spectroscopy of molecular excitons this phenomenon is sometimes referred to as the 'Rashba effect'. [ 11 ] [ 12 ] [ 13 ] As a result, the polarization ratio of an impurity-exciton band depends on its spectral position and becomes indicative of the energy spectrum of free excitons. [ 14 ] In large organic molecules the energy of impurity excitons can be shifted gradually by changing the isotopic content of guest molecules. Building on this option, Vladimir Broude developed a method of studying the energy spectrum of excitons in the host crystal by changing the isotopic content of guest molecules. [ 15 ] Interchanging the host and the guest allows studying the energy spectrum of excitons from the top. The isotopic technique has more recently been applied to study energy transport in biological systems. [ 16 ] | https://en.wikipedia.org/wiki/Giant_oscillator_strength |
A giant planet , sometimes referred to as a jovian planet ( Jove being another name for the Roman god Jupiter ), is a diverse type of planet much larger than Earth. Giant planets are usually primarily composed of low- boiling point materials ( volatiles ), rather than rock or other solid matter, but massive solid planets can also exist. There are four such planets in the Solar System : Jupiter , Saturn , Uranus , and Neptune . Many extrasolar giant planets have been identified.
Giant planets are sometimes known as gas giants , but many astronomers now apply the term only to Jupiter and Saturn, classifying Uranus and Neptune, which have different compositions, as ice giants . Both names are potentially misleading; the Solar System's giant planets all consist primarily of fluids above their critical points , where distinct gas and liquid phases do not exist. Jupiter and Saturn are principally made of hydrogen and helium , whilst Uranus and Neptune consist of water, ammonia , and methane .
The defining differences between a very low-mass brown dwarf and a massive gas giant ( ~13 M J ) are debated. One school of thought is based on planetary formation; the other, on the physics of the interior of planets. Part of the debate concerns whether brown dwarfs must, by definition, have experienced nuclear fusion at some point in their history. [ 1 ]
The term gas giant was coined in 1952 by science fiction writer James Blish and was originally used to refer to all giant planets. Arguably it is something of a misnomer, because throughout most of the volume of these planets the pressure is so high that matter is not in gaseous form. [ 2 ] Other than the upper layers of the atmosphere, [ 3 ] all matter is likely beyond the critical point , where there is no distinction between liquids and gases. Fluid planet would be a more accurate term. Jupiter also has metallic hydrogen near its center, but much of its volume is hydrogen, helium, and traces of other gases above their critical points. The observable atmospheres of all these planets (at less than a unit optical depth ) are quite thin compared to their radii, only extending perhaps one percent of the way to the center. Thus, the observable parts are gaseous (in contrast to Mars and Earth, which have gaseous atmospheres through which the crust can be seen).
The rather misleading term has caught on because planetary scientists typically use rock , gas , and ice as shorthands for classes of elements and compounds commonly found as planetary constituents, irrespective of the matter's phase . In the outer Solar System, hydrogen and helium are referred to as gas ; water, methane, and ammonia as ice ; and silicates and metals as rock . When deep planetary interiors are considered, it may not be far off to say that, by ice , astronomers mean oxygen and carbon ; by rock , silicon ; and by gas , hydrogen and helium. The many ways in which Uranus and Neptune differ from Jupiter and Saturn have led some to use the term only for planets similar to the latter two. With this terminology in mind, some astronomers have started referring to Uranus and Neptune as ice giants to indicate the predominance of the ices (in fluid form) in their interior composition. [ 4 ]
The alternative term jovian planet refers to the Roman god Jupiter —the genitive form of which is Jovis , hence Jovian —and was intended to indicate that all of these planets were similar to Jupiter.
Objects large enough to start deuterium fusion (above 13 Jupiter masses for solar composition) are called brown dwarfs , and these occupy the mass range between that of large giant planets and the lowest-mass stars . The 13-Jupiter-mass ( M J ) cutoff is a rule of thumb rather than something of precise physical significance. Larger objects will burn most of their deuterium and smaller ones will burn only a little, and the 13 M J value is somewhere in between. [ 5 ] The amount of deuterium burnt depends not only on the mass but also on the composition of the planet, especially on the amount of helium and deuterium present. [ 6 ] The Extrasolar Planets Encyclopaedia includes objects up to 60 Jupiter masses, and the Exoplanet Data Explorer up to 24 Jupiter masses. [ 7 ] [ 8 ]
A giant planet is a massive planet with a thick atmosphere of hydrogen and helium . They may have a condensed "core" of heavier elements, delivered during the formation process. [ 9 ] This core may be partially or completely dissolved and dispersed throughout the hydrogen/helium envelope. [ 10 ] [ 9 ] In "traditional" giant planets such as Jupiter and Saturn (the gas giants) hydrogen and helium make up most of the mass of the planet, whereas they only make up an outer envelope on Uranus and Neptune , which are instead mostly composed of water , ammonia , and methane and therefore increasingly referred to as " ice giants ".
Extrasolar giant planets that orbit very close to their stars are the exoplanets that are easiest to detect. These are called hot Jupiters and hot Neptunes because they have very high surface temperatures. Hot Jupiters were, until the advent of space-borne telescopes, the most common form of exoplanet known, due to the relative ease of detecting them with ground-based instruments.
Giant planets are commonly said to lack solid surfaces, but it is more accurate to say that they lack surfaces altogether since the gases that form them simply become thinner and thinner with increasing distance from the planets' centers, eventually becoming indistinguishable from the interplanetary medium. Therefore, landing on a giant planet may or may not be possible, depending on the size and composition of its core.
Gas giants consist mostly of hydrogen and helium. The Solar System's gas giants, Jupiter and Saturn , have heavier elements making up between 3 and 13 percent of their mass. [ 11 ] Gas giants are thought to consist of an outer layer of molecular hydrogen , surrounding a layer of liquid metallic hydrogen , with a probable molten core with a rocky composition.
Jupiter and Saturn's outermost portion of the hydrogen atmosphere has many layers of visible clouds that are mostly composed of water and ammonia. The layer of metallic hydrogen makes up the bulk of each planet, and is referred to as "metallic" because the very high pressure turns hydrogen into an electrical conductor. The core is thought to consist of heavier elements at such high temperatures (20,000 K) and pressures that their properties are poorly understood. [ 11 ]
Ice giants have distinctly different interior compositions from gas giants. The Solar System's ice giants, Uranus and Neptune , have a hydrogen-rich atmosphere that extends from the cloud tops down to about 80% (Uranus) or 85% (Neptune) of their radius. Below this, they are predominantly "icy", i.e. consisting mostly of water, methane, and ammonia. There is also some rock and gas, but various proportions of ice–rock–gas could mimic pure ice, so that the exact proportions are unknown. [ 12 ]
Uranus and Neptune have very hazy atmospheric layers with small amounts of methane, giving them light aquamarine colors. Both have magnetic fields that are sharply inclined to their axes of rotation.
Unlike the other giant planets, Uranus has an extreme tilt that causes its seasons to be severely pronounced. The two planets also have other subtle but important differences. Uranus has more hydrogen and helium than Neptune despite being less massive overall. Neptune is therefore denser and has much more internal heat and a more active atmosphere. The Nice model , in fact, suggests that Neptune formed closer to the Sun than Uranus did, and should therefore have more heavy elements.
Massive solid planets seemingly can also exist, though their formation mechanisms and occurrence remain subjects of ongoing research and debate.
The possibility of solid planets up to thousands of Earth masses forming around massive stars ( B-type and O-type stars; 5–120 solar masses) has been suggested in some earlier studies. [ 13 ] The hypothesis proposed that the protoplanetary disk around such stars would contain enough heavy elements, and that high UV radiation and strong winds could photoevaporate the gas in the disk, leaving just the heavy elements. For comparison, Neptune's mass equals 17 Earth masses, Jupiter has 318 Earth masses, and the 13 Jupiter-mass limit used in the IAU 's working definition of an exoplanet equals approximately 4000 Earth masses. [ 13 ]
However, more recent research has called into question the likelihood of massive solid planets forming around very massive stars ( https://arxiv.org/pdf/1103.0556 ). Studies have shown that the ratio of protoplanetary disk mass to stellar mass decreases rapidly for stars above 10 solar masses, falling to less than 10 −4 . Furthermore, no protoplanetary disks have been observed around O-type stars to date.
The original suggestion of massive solid planets forming around 5-120 solar mass stars, presented in earlier literature, lacks substantial supporting evidence or citations to planetary formation theories. [ 13 ] The study in question primarily focused on simulating mass-radius relationships for rocky planets, including hypothetical super-massive solid planets, but did not investigate whether planetary formation theories actually support the existence of such objects. The authors of that study acknowledged that "Such massive exoplanets are not yet known to exist." [ 13 ]
Given these considerations, the formation and existence of massive solid planets around very massive stars remain speculative and require further research and observational evidence.
A super-puff is a type of exoplanet with a mass only a few times larger than Earth ’s but a radius larger than Neptune , giving it a very low mean density . They are cooler and less massive than the inflated low-density hot-Jupiters . The most extreme examples known are the three planets around Kepler-51 which are all Jupiter -sized but with densities below 0.1 g/cm 3 . [ 14 ]
Because of the limited techniques currently available to detect exoplanets , many of those found to date have been of a size associated, in the Solar System, with giant planets. Because these large planets are inferred to share more in common with Jupiter than with the other giant planets, some have claimed that "jovian planet" is a more accurate term for them. Many of the exoplanets are much closer to their parent stars and hence much hotter than the giant planets in the Solar System, making it possible that some of those planets are a type not observed in the Solar System. Considering the relative abundances of the elements in the universe (approximately 98% hydrogen and helium) it would be surprising to find a predominantly rocky planet more massive than Jupiter. On the other hand, models of planetary-system formation have suggested that giant planets would be inhibited from forming as close to their stars as many of the extrasolar giant planets have been observed to orbit.
The bands seen in the atmosphere of Jupiter are due to counter-circulating streams of material called zones and belts, encircling the planet parallel to its equator. The zones are the lighter bands, and are at higher altitudes in the atmosphere. They have an internal updraft and are high-pressure regions. The belts are the darker bands, are lower in the atmosphere, and have an internal downdraft. They are low-pressure regions. These structures are somewhat analogous to the high and low-pressure cells in Earth's atmosphere, but they have a very different structure—latitudinal bands that circle the entire planet, as opposed to small confined cells of pressure. This appears to be a result of the rapid rotation and underlying symmetry of the planet. There are no oceans or landmasses to cause local heating and the rotation speed is much higher than that of Earth.
There are smaller structures as well: spots of different sizes and colors. On Jupiter, the most noticeable of these features is the Great Red Spot , which has been present for at least 300 years. These structures are huge storms. Some such spots are thunderheads as well.
| https://en.wikipedia.org/wiki/Giant_planet |
In nuclear physics , giant resonance is a high-frequency collective excitation of atomic nuclei , as a property of many-body quantum systems . In the macroscopic interpretation of such an excitation in terms of an oscillation, the most prominent giant resonance is a collective oscillation of all protons against all neutrons in a nucleus.
In 1947, G. C. Baldwin and G. S. Klaiber observed the giant dipole resonance (GDR) in photonuclear reactions . [ 1 ] [ 2 ] The giant quadrupole resonance (GQR) was discovered in 1972, [ 3 ] and the giant monopole resonance (GMR) was discovered in medium and heavy nuclei in 1977. [ 4 ]
Giant dipole resonances may result in a number of de-excitation events, such as nuclear fission , emission of neutrons or gamma rays, or combinations of these.
Giant dipole resonances can be caused by any mechanism that imparts enough energy to the nucleus. Classical causes are irradiation with gamma rays at energies from 7 to 40 MeV, which couple to nuclei and either cause or increase the dipole moment of the nucleus by adding energy that separates charges in the nucleus. The process is the inverse of gamma decay , but the energies involved are typically much larger, and the dipole moments induced are larger than occur in the excited nuclear states that cause the average gamma decay.
High energy electrons of >50 MeV may cause the same phenomenon, by coupling to the nucleus via a "virtual gamma photon", in a nuclear reaction that is the inverse (i.e., reverse) of internal conversion decay.
| https://en.wikipedia.org/wiki/Giant_resonance |
In the fields of mechanism design and social choice theory , Gibbard's theorem is a result proven by philosopher Allan Gibbard in 1973. [ 1 ] It states that for any deterministic process of collective decision, at least one of the following three properties must hold: (1) the process is dictatorial, i.e. there is a distinguished agent who can impose the outcome; (2) the process limits the possible outcomes to two options only; or (3) the process is open to strategic voting, i.e. an agent may have no single action that best defends her preferences irrespective of the other agents' actions.
A corollary of this theorem is the Gibbard–Satterthwaite theorem about voting rules. The key difference between the two theorems is that Gibbard–Satterthwaite applies only to ranked voting . Because of its broader scope, Gibbard's theorem makes no claim about whether voters need to reverse their ranking of candidates, only that their optimal ballots depend on the other voters' ballots. [ note 1 ]
Gibbard's theorem is more general, and considers processes of collective decision that may not be ordinal: for example, voting systems where voters assign grades to or otherwise rate candidates ( cardinal voting ). Gibbard's theorem can be proven using Arrow's impossibility theorem . [ citation needed ]
Gibbard's theorem is itself generalized by Gibbard's 1978 theorem [ 3 ] and Hylland's theorem , [ 4 ] which extend these results to non-deterministic processes, i.e. where the outcome may not only depend on the agents' actions but may also involve an element of chance.
Gibbard's theorem assumes the collective decision results in exactly one winner and does not apply to multi-winner voting . A similar result for multi-winner voting is the Duggan–Schwartz theorem .
Consider some voters 1 {\displaystyle 1} , 2 {\displaystyle 2} and 3 {\displaystyle 3} who wish to select an option among three alternatives: a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} . Assume they use approval voting : each voter assigns to each candidate the grade 1 (approval) or 0 (withhold approval). For example, ( 1 , 1 , 0 ) {\displaystyle (1,1,0)} is an authorized ballot: it means that the voter approves of candidates a {\displaystyle a} and b {\displaystyle b} but does not approve of candidate c {\displaystyle c} . Once the ballots are collected, the candidate with highest total grade is declared the winner. Ties between candidates are broken by alphabetical order: for example, if there is a tie between candidates a {\displaystyle a} and b {\displaystyle b} , then a {\displaystyle a} wins.
Assume that voter 1 {\displaystyle 1} prefers alternative a {\displaystyle a} , then b {\displaystyle b} and then c {\displaystyle c} . Which ballot will best defend her opinions? For example, consider the two following situations, made concrete in the sketch below.
To sum up, voter 1 {\displaystyle 1} faces a strategic voting dilemma: depending on the ballots that the other voters will cast, ( 1 , 0 , 0 ) {\displaystyle (1,0,0)} or ( 1 , 1 , 0 ) {\displaystyle (1,1,0)} can be a ballot that best defends her opinions. We then say that approval voting is not strategyproof : once the voter has identified her own preferences, she does not have a ballot at her disposal that best defends her opinions in all situations; she needs to act strategically, possibly by spying over the other voters to determine how they intend to vote.
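A minimal Python sketch of this dilemma follows. The two sets of opposing ballots are illustrative assumptions, chosen so that each of voter 1's candidate ballots is optimal in one situation.

```python
CANDIDATES = ["a", "b", "c"]

def approval_winner(ballots):
    """Highest total approval wins; ties are broken alphabetically."""
    totals = {c: sum(b[i] for b in ballots) for i, c in enumerate(CANDIDATES)}
    return min(CANDIDATES, key=lambda c: (-totals[c], c))

# Situation 1 (assumed): the other voters cast (0,1,0) and (0,0,1).
others = [(0, 1, 0), (0, 0, 1)]
print(approval_winner(others + [(1, 0, 0)]))  # a -> (1,0,0) elects her favourite
print(approval_winner(others + [(1, 1, 0)]))  # b -> strictly worse for her

# Situation 2 (assumed): the other voters cast (0,1,1) and (0,0,1).
others = [(0, 1, 1), (0, 0, 1)]
print(approval_winner(others + [(1, 0, 0)]))  # c -> her worst outcome
print(approval_winner(others + [(1, 1, 0)]))  # b -> the best she can obtain
```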
Gibbard's theorem states that a deterministic process of collective decision cannot be strategyproof, except possibly in two cases: if there is a distinguished agent who has a dictatorial power ( unilateral ), or if the process limits the outcome to two possible options only ( duple ).
Let A {\displaystyle {\mathcal {A}}} be the set of alternatives , which can also be called candidates in a context of voting. Let N = { 1 , … , n } {\displaystyle {\mathcal {N}}=\{1,\ldots ,n\}} be the set of agents , which can also be called players or voters, depending on the context of application. For each agent i {\displaystyle i} , let S i {\displaystyle {\mathcal {S}}_{i}} be a set that represents the available strategies for agent i {\displaystyle i} ; assume that S i {\displaystyle {\mathcal {S}}_{i}} is finite. Let g {\displaystyle g} be a function that, to each n {\displaystyle n} -tuple of strategies ( s 1 , … , s n ) ∈ S 1 × ⋯ × S n {\displaystyle (s_{1},\ldots ,s_{n})\in {\mathcal {S}}_{1}\times \cdots \times {\mathcal {S}}_{n}} , maps an alternative. The function g {\displaystyle g} is called a game form . In other words, a game form is essentially defined like an n -player game , but with no utilities associated to the possible outcomes: it describes the procedure only, without specifying a priori the gain that each agent would get from each outcome.
We say that g {\displaystyle g} is strategyproof (originally called: straightforward ) if for any agent i {\displaystyle i} and for any strict weak order P i {\displaystyle P_{i}} over the alternatives, there exists a strategy s i ∗ ( P i ) {\displaystyle s_{i}^{*}(P_{i})} that is dominant for agent i {\displaystyle i} when she has preferences P i {\displaystyle P_{i}} : there is no profile of strategies for the other agents such that another strategy s i {\displaystyle s_{i}} , different from s i ∗ ( P i ) {\displaystyle s_{i}^{*}(P_{i})} , would lead to a strictly better outcome (in the sense of P i {\displaystyle P_{i}} ). This property is desirable for a democratic decision process: it means that once the agent i {\displaystyle i} has identified her own preferences P i {\displaystyle P_{i}} , she can choose a strategy s i ∗ ( P i ) {\displaystyle s_{i}^{*}(P_{i})} that best defends her preferences, with no need to know or guess the strategies chosen by the other agents.
We let S = S 1 × ⋯ × S n {\displaystyle {\mathcal {S}}={\mathcal {S}}_{1}\times \cdots \times {\mathcal {S}}_{n}} and denote by g ( S ) {\displaystyle g({\mathcal {S}})} the range of g {\displaystyle g} , i.e. the set of the possible outcomes of the game form. For example, we say that g {\displaystyle g} has at least 3 possible outcomes if and only if the cardinality of g ( S ) {\displaystyle g({\mathcal {S}})} is 3 or more. Since the strategy sets are finite, g ( S ) {\displaystyle g({\mathcal {S}})} is finite also; thus, even if the set of alternatives A {\displaystyle {\mathcal {A}}} is not assumed to be finite, the subset of possible outcomes g ( S ) {\displaystyle g({\mathcal {S}})} is necessarily so.
We say that g {\displaystyle g} is dictatorial if there exists an agent i {\displaystyle i} who is a dictator , in the sense that for any possible outcome a ∈ g ( S ) {\displaystyle a\in g({\mathcal {S}})} , agent i {\displaystyle i} has a strategy at her disposal that ensures that the result is a {\displaystyle a} , whatever the strategies chosen by the other agents.
Gibbard's theorem — If a game form is not dictatorial and has at least 3 possible outcomes, then it is not strategyproof.
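The theorem can be checked by brute force on small examples. The following Python sketch tests a hypothetical two-agent game form, not taken from the literature: each agent names one alternative; a unanimous name wins, otherwise the alphabetically smaller name wins. This rule is non-dictatorial and has three possible outcomes, so the theorem predicts it cannot be strategyproof, and the search indeed finds a preference order with no dominant strategy.

```python
from itertools import permutations

ALTERNATIVES = ["a", "b", "c"]
STRATEGIES = ALTERNATIVES  # a strategy here is simply naming one alternative

def game_form(s1, s2):
    """Unanimity wins; otherwise the alphabetically smaller name wins."""
    return s1 if s1 == s2 else min(s1, s2)

def prefers(order, x, y):
    """True if x is strictly preferred to y under the strict order `order`."""
    return order.index(x) < order.index(y)

def has_dominant_strategy(agent, order):
    """Does `agent` (0 or 1) have a strategy that no deviation ever
    strictly improves on, whatever the other agent plays?"""
    for candidate in STRATEGIES:
        dominant = True
        for other in STRATEGIES:            # every strategy of the other agent
            for deviation in STRATEGIES:    # every alternative own strategy
                if agent == 0:
                    outcome = game_form(candidate, other)
                    deviated = game_form(deviation, other)
                else:
                    outcome = game_form(other, candidate)
                    deviated = game_form(other, deviation)
                if prefers(order, deviated, outcome):
                    dominant = False
        if dominant:
            return True
    return False

violations = [(agent, order)
              for agent in (0, 1)
              for order in permutations(ALTERNATIVES)
              if not has_dominant_strategy(agent, order)]
print(violations)  # non-empty: e.g. the order c > a > b admits no dominant strategy
```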
We assume that each voter communicates a strict weak order over the candidates. The serial dictatorship is defined as follows. If voter 1 has a unique most-liked candidate, then this candidate is elected. Otherwise, possible outcomes are restricted to his equally most-liked candidates and the other candidates are eliminated. Then voter 2's ballot is examined: if he has a unique best-liked candidate among the non-eliminated ones, then this candidate is elected. Otherwise, the list of possible outcomes is reduced again, etc. If there are still several non-eliminated candidates after all ballots have been examined, then an arbitrary tie-breaking rule is used.
This game form is strategyproof: whatever the preferences of a voter, he has a dominant strategy that consists in declaring his sincere preference order. It is also dictatorial, and its dictator is voter 1: if he wishes to see candidate a {\displaystyle a} elected, then he just has to communicate a preference order where a {\displaystyle a} is the unique most-liked candidate.
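A minimal sketch of the serial dictatorship described above, under the assumption that each ballot is given as a list of indifference classes ordered from most- to least-liked:

```python
def serial_dictatorship(ballots, candidates, tie_break=sorted):
    """Each ballot is a list of indifference classes, most-liked first,
    e.g. [["a", "b"], ["c"]] means a and b tied on top, then c."""
    possible = set(candidates)
    for ballot in ballots:                 # voter 1 first, then voter 2, ...
        for indifference_class in ballot:  # scan from most- to least-liked
            liked = possible & set(indifference_class)
            if liked:                      # restrict to this voter's best
                possible = liked
                break
        if len(possible) == 1:
            return next(iter(possible))
    return tie_break(possible)[0]          # arbitrary deterministic tie-break

# Voter 1's unique most-liked candidate wins outright:
print(serial_dictatorship([[["a"], ["b"], ["c"]],
                           [["c"], ["b"], ["a"]]], ["a", "b", "c"]))  # -> a
```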
If there are only 2 possible outcomes, a game form may be strategyproof and not dictatorial. For example, it is the case of the simple majority vote: each voter casts a ballot for her most-liked alternative (among the two possible outcomes), and the alternative with most votes is declared the winner. This game form is strategyproof because it is always optimal to vote for one's most-liked alternative (unless one is indifferent between them). However, it is clearly not dictatorial. Many other game forms are strategyproof and not dictatorial: for example, assume that the alternative a {\displaystyle a} wins if it gets two thirds of the votes, and b {\displaystyle b} wins otherwise.
Consider the following game form. Voter 1 can vote for a candidate of her choice, or she can abstain. In the first case, the specified candidate is automatically elected. Otherwise, the other voters use a classic voting rule, for example the Borda count . This game form is clearly dictatorial, because voter 1 can impose the result. However, it is not strategyproof: the other voters face the same issue of strategic voting as in the usual Borda count. Thus, Gibbard's theorem is an implication and not an equivalence.
Gibbard's 1978 theorem states that a nondeterministic voting method is only strategyproof if it's a mixture of unilateral and duple rules. For instance, the rule that flips a coin and chooses a random dictator if the coin lands on heads, or chooses the pairwise winner between two random candidates if the coin lands on tails, is strategyproof. Nondeterministic methods have been devised that approximate the results of deterministic methods while being strategyproof. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Gibbard's_theorem |
The Gibbard–Satterthwaite theorem is a theorem in social choice theory . It was first conjectured by the philosopher Michael Dummett and the mathematician Robin Farquharson in 1961 [ 1 ] and then proved independently by the philosopher Allan Gibbard in 1973 [ 2 ] and economist Mark Satterthwaite in 1975. [ 3 ] It deals with deterministic ordinal electoral systems that choose a single winner, and shows that for every voting rule of this form, at least one of the following three things must hold: (1) the rule is dictatorial, i.e. there exists a distinguished voter who can choose the winner; (2) the rule limits the possible outcomes to two alternatives only; or (3) the rule is susceptible to tactical voting, i.e. in certain conditions a voter's sincere ballot may not best defend her opinion.
Gibbard's proof of the theorem is more general and covers processes of collective decision that may not be ordinal, such as cardinal voting . [ note 1 ] Gibbard's 1978 theorem and Hylland's theorem are even more general and extend these results to non-deterministic processes, where the outcome may depend partly on chance; the Duggan–Schwartz theorem extends these results to multiwinner electoral systems.
Consider three voters named Alice, Bob and Carol, who wish to select a winner among four candidates named a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} and d {\displaystyle d} . Assume that they use the Borda count : each voter communicates his or her preference order over the candidates. For each ballot, 3 points are assigned to the top candidate, 2 points to the second candidate, 1 point to the third one and 0 points to the last one. Once all ballots have been counted, the candidate with the most points is declared the winner.
Assume that their preferences are as follows: Alice ranks a {\displaystyle a} first, then b {\displaystyle b} , then c {\displaystyle c} , then d {\displaystyle d} ; Bob and Carol both rank c {\displaystyle c} first, then b {\displaystyle b} , then d {\displaystyle d} , then a {\displaystyle a} .
If the voters cast sincere ballots, then the scores are: ( a : 3 , b : 6 , c : 7 , d : 2 ) {\displaystyle (a:3,b:6,c:7,d:2)} . Hence, candidate c {\displaystyle c} will be elected, with 7 points.
But Alice can vote strategically and change the result. Assume that she modifies her ballot to b {\displaystyle b} first, then a {\displaystyle a} , then d {\displaystyle d} , then c {\displaystyle c} , in order to produce the following situation.
Alice has strategically upgraded candidate b {\displaystyle b} and downgraded candidate c {\displaystyle c} . Now, the scores are: ( a : 2 , b : 7 , c : 6 , d : 3 ) {\displaystyle (a:2,b:7,c:6,d:3)} . Hence, b {\displaystyle b} is elected. Alice is satisfied by her ballot modification, because she prefers the outcome b {\displaystyle b} to c {\displaystyle c} , which is the outcome she would obtain if she voted sincerely.
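The tally can be reproduced in a short Python sketch, using the preference orders given above (the borda_scores helper is hypothetical, written for this example):

```python
def borda_scores(ballots, candidates="abcd"):
    """3 points to a ballot's top candidate, then 2, then 1, then 0."""
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        for points, candidate in zip(range(len(ballot) - 1, -1, -1), ballot):
            scores[candidate] += points
    return scores

bob, carol = "cbda", "cbda"                # c > b > d > a, as above
print(borda_scores(["abcd", bob, carol]))  # sincere Alice: a:3, b:6, c:7, d:2
print(borda_scores(["badc", bob, carol]))  # strategic Alice: a:2, b:7, c:6, d:3
```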
We say that the Borda count is manipulable : there exist situations where a sincere ballot does not best defend a voter's preferences.
The Gibbard–Satterthwaite theorem states that every ranked-choice voting rule is manipulable, except possibly in two cases: if there is a distinguished voter who has dictatorial power, or if the rule limits the possible outcomes to two options only.
Let A {\displaystyle {\mathcal {A}}} be the set of alternatives (which is assumed finite), also called candidates , even if they are not necessarily persons: they can also be several possible decisions about a given issue. We denote by N = { 1 , … , n } {\displaystyle {\mathcal {N}}=\{1,\ldots ,n\}} the set of voters . Let P {\displaystyle {\mathcal {P}}} be the set of strict weak orders over A {\displaystyle {\mathcal {A}}} : an element of this set can represent the preferences of a voter, where a voter may be indifferent regarding the ordering of some alternatives. A voting rule is a function f : P n → A {\displaystyle f:{\mathcal {P}}^{n}\to {\mathcal {A}}} . Its input is a profile of preferences ( P 1 , … , P n ) ∈ P n {\displaystyle (P_{1},\ldots ,P_{n})\in {\mathcal {P}}^{n}} and it yields the identity of the winning candidate.
We say that f {\displaystyle f} is manipulable if and only if there exists a profile ( P 1 , … , P n ) ∈ P n {\displaystyle (P_{1},\ldots ,P_{n})\in {\mathcal {P}}^{n}} where some voter i {\displaystyle i} , by replacing her ballot P i {\displaystyle P_{i}} with another ballot P i ′ {\displaystyle P_{i}'} , can get an outcome that she prefers (in the sense of P i {\displaystyle P_{i}} ).
We denote by f ( P n ) {\displaystyle f({\mathcal {P}}^{n})} the image of f {\displaystyle f} , i.e. the set of possible outcomes for the election. For example, we say that f {\displaystyle f} has at least three possible outcomes if and only if the cardinality of f ( P n ) {\displaystyle f({\mathcal {P}}^{n})} is 3 or more.
We say that f {\displaystyle f} is dictatorial if and only if there exists a voter i {\displaystyle i} who is a dictator , in the sense that the winning alternative is always her most-liked one among the possible outcomes regardless of the preferences of other voters . If the dictator has several equally most-liked alternatives among the possible outcomes, then the winning alternative is simply one of them.
Gibbard–Satterthwaite theorem — If an ordinal voting rule has at least 3 possible outcomes and is non-dictatorial, then it is manipulable.
A variety of "counterexamples" to the Gibbard-Satterthwaite theorem exist when the conditions of the theorem do not apply.
Consider a three-candidate election conducted by score voting . It is always optimal for a voter to give the best candidate the highest possible score, and the worst candidate the lowest possible score. Then, no matter which score the voter assigns to the middle candidate, it will always fall (non-strictly) between the first and last scores; this implies the voter's score ballot will be weakly consistent with that voter's honest ranking. However, the actual optimal score may depend on the other ballots cast, as indicated by Gibbard's theorem .
The serial dictatorship is defined as follows. If voter 1 has a unique most-liked candidate, then this candidate is elected. Otherwise, possible outcomes are restricted to the most-liked candidates, whereas the other candidates are eliminated. Then voter 2's ballot is examined: if there is a unique best-liked candidate among the non-eliminated ones, then this candidate is elected. Otherwise, the list of possible outcomes is reduced again, etc. If there are still several non-eliminated candidates after all ballots have been examined, then an arbitrary tie-breaking rule is used.
This voting rule is not manipulable: a voter is always better off communicating his or her sincere preferences. It is also dictatorial, and its dictator is voter 1: the winning alternative is always that specific voter's most-liked one or, if there are several most-liked alternatives, it is chosen among them.
If there are only 2 possible outcomes, a voting rule may be non-manipulable without being dictatorial. For example, it is the case of the simple majority vote: each voter assigns 1 point to her top alternative and 0 to the other, and the alternative with most points is declared the winner. (If both alternatives reach the same number of points, the tie is broken in an arbitrary but deterministic manner, e.g. outcome a {\displaystyle a} wins.) This voting rule is not manipulable because a voter is always better off communicating his or her sincere preferences; and it is clearly not dictatorial. Many other rules are neither manipulable nor dictatorial: for example, assume that the alternative a {\displaystyle a} wins if it gets two thirds of the votes, and b {\displaystyle b} wins otherwise.
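For instance, a brute-force check in Python confirms that three-voter simple majority over two alternatives (with ties broken in favour of a {\displaystyle a} , as assumed here) never rewards a deviation from sincere voting:

```python
from itertools import product

def majority(ballots):
    """Simple majority between "a" and "b"; ties go to "a"."""
    return "a" if ballots.count("a") >= ballots.count("b") else "b"

for favourite in "ab":
    for others in product("ab", repeat=2):     # the other two voters' ballots
        sincere = majority([favourite, *others])
        for deviation in "ab":
            deviated = majority([deviation, *others])
            # a deviation must never win the favourite when sincerity does not
            assert not (deviated == favourite and sincere != favourite)
print("sincere voting is always weakly optimal")
```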
We now consider the case where by assumption, a voter cannot be indifferent between two candidates. We denote by L {\displaystyle {\mathcal {L}}} the set of strict total orders over A {\displaystyle {\mathcal {A}}} and we define a strict voting rule as a function f : L n → A {\displaystyle f:{\mathcal {L}}^{n}\to {\mathcal {A}}} . The definitions of possible outcomes , manipulable , dictatorial have natural adaptations to this framework.
For a strict voting rule, the converse of the Gibbard–Satterthwaite theorem is true. Indeed, a strict voting rule is dictatorial if and only if it always selects the most-liked candidate of the dictator among the possible outcomes; in particular, it does not depend on the other voters' ballots. As a consequence, it is not manipulable: the dictator is perfectly defended by her sincere ballot, and the other voters have no impact on the outcome, hence they have no incentive to deviate from sincere voting. Thus, we obtain the following equivalence.
Theorem — If a strict voting rule has at least 3 possible outcomes, it is non-manipulable if and only if it is dictatorial.
In the theorem, as well as in the corollary, it is not needed to assume that any alternative can be elected. It is only assumed that at least three of them can win, i.e. are possible outcomes of the voting rule. It is possible that some other alternatives can be elected in no circumstances: the theorem and the corollary still apply. However, the corollary is sometimes presented under a less general form: [ 4 ] instead of assuming that the rule has at least three possible outcomes, it is sometimes assumed that A {\displaystyle {\mathcal {A}}} contains at least three elements and that the voting rule is onto , i.e. every alternative is a possible outcome. [ 5 ] The assumption of being onto is sometimes even replaced with the assumption that the rule is unanimous , in the sense that if all voters prefer the same candidate, then she must be elected. [ 6 ] [ 7 ]
The Gibbard–Satterthwaite theorem can be proved using Arrow's impossibility theorem for social ranking functions . We give a sketch of proof in the simplified case where some voting rule f {\displaystyle f} is assumed to be Pareto-efficient .
It is possible to build a social ranking function Rank {\displaystyle \operatorname {Rank} } , as follows: in order to decide whether a ≺ b {\displaystyle a\prec b} , the Rank {\displaystyle \operatorname {Rank} } function creates new preferences in which a {\displaystyle a} and b {\displaystyle b} are moved to the top of all voters' preferences. [ clarification needed ] Then, Rank {\displaystyle \operatorname {Rank} } examines whether f {\displaystyle f} chooses a {\displaystyle a} or b {\displaystyle b} .
It is possible to prove that, if f {\displaystyle f} is non-manipulable and non-dictatorial, Rank {\displaystyle \operatorname {Rank} } satisfies independence of irrelevant alternatives. Arrow's impossibility theorem says that, when there are three or more alternatives, such a Rank {\displaystyle \operatorname {Rank} } function must be a dictatorship . Hence, such a voting rule f {\displaystyle f} must also be a dictatorship. [ 8 ] : 214–215
Later authors have developed other variants of the proof. [ 5 ] [ 6 ] [ 7 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ excessive citations ]
The strategic aspect of voting was already noticed in 1876 by Charles Dodgson, also known as Lewis Carroll , a pioneer in social choice theory. His quote (about a particular voting system) was made famous by Duncan Black : [ 15 ]
This principle of voting makes an election more of a game of skill than a real test of the wishes of the electors.
During the 1950s, Robin Farquharson published influential articles on voting theory. [ 16 ] In an article with Michael Dummett , [ 17 ] he conjectured that deterministic voting rules with at least three outcomes are never straightforward, i.e. they are always open to tactical voting . [ 18 ] This conjecture was later proven independently by Allan Gibbard and Mark Satterthwaite . In a 1973 article, Gibbard exploits Arrow's impossibility theorem from 1951 to prove the result we now know as Gibbard's theorem . [ 2 ] Independently, Satterthwaite proved the same result in his PhD dissertation in 1973, then published it in a 1975 article. [ 3 ] His proof is also based on Arrow's impossibility theorem, but does not involve the more general version given by Gibbard's theorem.
Gibbard's theorem deals with processes of collective choice that may not be ordinal, i.e. where a voter's action may not consist in communicating a preference order over the candidates. Gibbard's 1978 theorem and Hylland's theorem extend these results to non-deterministic mechanisms, i.e. where the outcome may not only depend on the ballots but may also involve a part of chance.
The Duggan–Schwartz theorem extends this result in another direction, by dealing with deterministic voting rules that choose multiple winners. [ 19 ]
The Gibbard–Satterthwaite theorem is generally presented as a result about voting systems, but it can also be seen as an important result of mechanism design , which deals with a broader class of decision rules. Noam Nisan describes this relation: [ 8 ] : 215
The GS theorem seems to quash any hope of designing incentive-compatible social-choice functions. The whole field of Mechanism Design attempts escaping from this impossibility result using various modifications in the model.
The main idea of these "escape routes" is that they allow for a broader class of mechanisms than ranked voting, similarly to the escape routes from Arrow's impossibility theorem . | https://en.wikipedia.org/wiki/Gibbard–Satterthwaite_theorem |
In information theory , Gibbs' inequality is a statement about the information entropy of a discrete probability distribution . Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality .
It was first presented by J. Willard Gibbs in the 19th century.
Suppose that P = { p 1 , … , p n } {\displaystyle P=\{p_{1},\ldots ,p_{n}\}} and Q = { q 1 , … , q n } {\displaystyle Q=\{q_{1},\ldots ,q_{n}\}} are discrete probability distributions . Then − ∑ i = 1 n p i log 2 p i ≤ − ∑ i = 1 n p i log 2 q i {\displaystyle -\sum _{i=1}^{n}p_{i}\log _{2}p_{i}\leq -\sum _{i=1}^{n}p_{i}\log _{2}q_{i}}
with equality if and only if p i = q i {\displaystyle p_{i}=q_{i}} for all i = 1 , … , n {\displaystyle i=1,\dots ,n} . [ 1 ] : 68 In words, the information entropy of a distribution P {\displaystyle P} is less than or equal to its cross entropy with any other distribution Q {\displaystyle Q} .
The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written: [ 2 ] : 34 D K L ( P ∥ Q ) ≥ 0 {\displaystyle D_{\mathrm {KL} }(P\parallel Q)\geq 0}
Note that the use of base-2 logarithms is optional, and allows one to refer to the quantity on each side of the inequality as an "average surprisal " measured in bits .
For simplicity, we prove the statement using the natural logarithm, denoted by ln , since log b a = ln a ln b {\displaystyle \log _{b}a={\frac {\ln a}{\ln b}}} , so the particular logarithm base b > 1 that we choose only scales the relationship by the factor 1 / ln b .
Let I {\displaystyle I} denote the set of all i {\displaystyle i} for which p i is non-zero. Then, since ln x ≤ x − 1 {\displaystyle \ln x\leq x-1} for all x > 0 , with equality if and only if x=1 , we have: − ∑ i ∈ I p i ln q i p i ≥ − ∑ i ∈ I p i ( q i p i − 1 ) = − ∑ i ∈ I q i + ∑ i ∈ I p i = − ∑ i ∈ I q i + 1 ≥ 0 {\displaystyle -\sum _{i\in I}p_{i}\ln {\frac {q_{i}}{p_{i}}}\geq -\sum _{i\in I}p_{i}\left({\frac {q_{i}}{p_{i}}}-1\right)=-\sum _{i\in I}q_{i}+\sum _{i\in I}p_{i}=-\sum _{i\in I}q_{i}+1\geq 0}
The last inequality is a consequence of the p i and q i being part of a probability distribution. Specifically, the sum of all non-zero values is 1. Some non-zero q i , however, may have been excluded since the choice of indices is conditioned upon the p i being non-zero. Therefore, the sum of the q i may be less than 1.
So far, over the index set I {\displaystyle I} , we have: − ∑ i ∈ I p i ln q i p i ≥ 0 {\displaystyle -\sum _{i\in I}p_{i}\ln {\frac {q_{i}}{p_{i}}}\geq 0} or equivalently − ∑ i ∈ I p i ln q i ≥ − ∑ i ∈ I p i ln p i {\displaystyle -\sum _{i\in I}p_{i}\ln q_{i}\geq -\sum _{i\in I}p_{i}\ln p_{i}}
Both sums can be extended to all i = 1 , … , n {\displaystyle i=1,\ldots ,n} , i.e. including p i = 0 {\displaystyle p_{i}=0} , by recalling that the expression p ln p {\displaystyle p\ln p} tends to 0 as p {\displaystyle p} tends to 0, and ( − ln q ) {\displaystyle (-\ln q)} tends to ∞ {\displaystyle \infty } as q {\displaystyle q} tends to 0. We arrive at − ∑ i = 1 n p i ln q i ≥ − ∑ i = 1 n p i ln p i {\displaystyle -\sum _{i=1}^{n}p_{i}\ln q_{i}\geq -\sum _{i=1}^{n}p_{i}\ln p_{i}}
For equality to hold, we require q i = p i {\displaystyle q_{i}=p_{i}} for all i ∈ I {\displaystyle i\in I} (so that the bound ln x ≤ x − 1 {\displaystyle \ln x\leq x-1} is tight) and ∑ i ∈ I q i = 1 {\displaystyle \sum _{i\in I}q_{i}=1} . This can happen if and only if p i = q i {\displaystyle p_{i}=q_{i}} for i = 1 , … , n {\displaystyle i=1,\ldots ,n} .
The result can alternatively be proved using Jensen's inequality , the log sum inequality , or the fact that the Kullback–Leibler divergence is a form of Bregman divergence .
Because log is a concave function, we have that: ∑ i p i log q i p i ≤ log ∑ i p i q i p i = log ∑ i q i = 0 {\displaystyle \sum _{i}p_{i}\log {\frac {q_{i}}{p_{i}}}\leq \log \sum _{i}p_{i}{\frac {q_{i}}{p_{i}}}=\log \sum _{i}q_{i}=0}
where the first inequality is due to Jensen's inequality, and q {\displaystyle q} being a probability distribution implies the last equality.
Furthermore, since log {\displaystyle \log } is strictly concave, the equality condition of Jensen's inequality requires the ratio q i / p i {\displaystyle q_{i}/p_{i}} to be the same for all i {\displaystyle i} . Suppose that this common ratio is σ {\displaystyle \sigma } ; then 1 = ∑ i q i = ∑ i σ p i = σ {\displaystyle 1=\sum _{i}q_{i}=\sum _{i}\sigma p_{i}=\sigma } , where we use the fact that p , q {\displaystyle p,q} are probability distributions. Therefore, the equality happens when p = q {\displaystyle p=q} .
Alternatively, it can be proved by noting that q − p − p ln q p ≥ 0 {\displaystyle q-p-p\ln {\frac {q}{p}}\geq 0} for all p , q > 0 {\displaystyle p,q>0} , with equality holding iff p = q {\displaystyle p=q} . Then, summing over the states, we have ∑ i ( q i − p i − p i ln q i p i ) ≥ 0 {\displaystyle \sum _{i}\left(q_{i}-p_{i}-p_{i}\ln {\frac {q_{i}}{p_{i}}}\right)\geq 0} with equality holding iff p = q {\displaystyle p=q} .
This is because the KL divergence is the Bregman divergence generated by the function t ↦ t ln t {\displaystyle t\mapsto t\ln t} .
The entropy of P {\displaystyle P} is bounded by: [ 1 ] : 68 H ( p 1 , … , p n ) ≤ log 2 n {\displaystyle H(p_{1},\ldots ,p_{n})\leq \log _{2}n}
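A minimal numerical illustration of the bound (the example distribution is an arbitrary assumption):

```python
import numpy as np

# For any distribution over n outcomes, H(P) <= log2(n),
# with equality exactly for the uniform distribution.
p = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01])
n = len(p)
print(f"H(P) = {-np.sum(p * np.log2(p)):.4f} <= log2({n}) = {np.log2(n):.4f}")

uniform = np.full(n, 1.0 / n)
print(f"H(uniform) = {-np.sum(uniform * np.log2(uniform)):.4f}")  # = log2(n)
```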
The proof is trivial – simply set q i = 1 / n {\displaystyle q_{i}=1/n} for all i . | https://en.wikipedia.org/wiki/Gibbs'_inequality |
The Gibbs Brothers Medal is awarded by the U.S. National Academy of Sciences for "outstanding contributions in the field of naval architecture and marine engineering". It was established by a gift from William Francis Gibbs and Frederic Herbert Gibbs . [ 1 ]
| https://en.wikipedia.org/wiki/Gibbs_Brothers_Medal |
In thermodynamics , the Gibbs free energy (or Gibbs energy as the recommended name; symbol G {\displaystyle G} ) is a thermodynamic potential that can be used to calculate the maximum amount of work , other than pressure–volume work , that may be performed by a thermodynamically closed system at constant temperature and pressure . It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as G ( p , T ) = U + p V − T S = H − T S {\displaystyle G(p,T)=U+pV-TS=H-TS} where:
In this expression, U is the internal energy of the system, p is the pressure, V is the volume, T is the absolute temperature, S is the entropy, and H = U + p V {\displaystyle H=U+pV} is the enthalpy .
The Gibbs free energy change ( Δ G = Δ H − T Δ S {\displaystyle \Delta G=\Delta H-T\Delta S} , measured in joules in SI ) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process . When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system on its surroundings, minus the work of the pressure forces. [ 1 ]
The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in G {\displaystyle G} is necessary for a reaction to be spontaneous under these conditions.
The concept of Gibbs free energy, originally called available energy , was developed in the 1870s by the American scientist Josiah Willard Gibbs . In 1873, Gibbs described this "available energy" as [ 2 ] : 400
the greatest amount of mechanical work which can be obtained from a given quantity of a certain substance in a given initial state, without increasing its total volume or allowing heat to pass to or from external bodies, except such as at the close of the processes are left in their initial condition.
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes ". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances , a graphical analysis of multi-phase chemical systems, he developed his ideas on chemical free energy in full.
If the reactants and products are all in their thermodynamic standard states , then the defining equation is written as Δ G ∘ = Δ H ∘ − T Δ S ∘ {\displaystyle \Delta G^{\circ }=\Delta H^{\circ }-T\Delta S^{\circ }} , where H {\displaystyle H} is enthalpy , T {\displaystyle T} is absolute temperature , and S {\displaystyle S} is entropy .
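As an illustration, the sign of Δ G ∘ = Δ H ∘ − T Δ S ∘ can be evaluated at several temperatures. The enthalpy and entropy values below are assumed placeholders chosen for illustration, not tabulated reference data:

```python
# Evaluate ΔG° = ΔH° − TΔS° and classify spontaneity at constant T, p.
delta_H = -92_000.0   # J/mol, assumed reaction enthalpy
delta_S = -199.0      # J/(mol·K), assumed reaction entropy

for T in (298.15, 500.0, 800.0):
    delta_G = delta_H - T * delta_S
    verdict = "spontaneous (ΔG° < 0)" if delta_G < 0 else "non-spontaneous"
    print(f"T = {T:6.1f} K: ΔG° = {delta_G / 1000:8.1f} kJ/mol -> {verdict}")
```

With a negative ΔH° and a negative ΔS°, the example reaction is spontaneous at low temperature but not at high temperature, where the −TΔS° term dominates.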
According to the second law of thermodynamics , for systems reacting at fixed temperature and pressure without input of non-pressure–volume ( pV ) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy.
A quantitative measure of the favorability of a given reaction under these conditions is the change Δ G (sometimes written "delta G " or "d G ") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, Δ G must be smaller than the non-pressure-volume (non- pV , e.g. electrical) work, which is often equal to zero (then Δ G must be negative). Δ G equals the maximum amount of non- pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive Δ G for a reaction, then energy — in the form of electrical or other non- pV work — would have to be added to the reacting system for Δ G to be smaller than the non- pV work and make it possible for the reaction to occur. [ 3 ] : 298–299
One can think of ∆G as the amount of "free" or "useful" energy available to do non- pV work at constant temperature and pressure. The equation can also be seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative Δ G , and the reaction is called an exergonic process .
If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive Δ G ) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene , can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative.
In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". [ 1 ] The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. [ 4 ] (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy , for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. [ 5 ] [ 6 ] [ 7 ] This standard, however, has not yet been universally adopted.
The name "free enthalpy " was also used for G in the past. [ 6 ]
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity , which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions .
In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces , in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume - entropy - internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated: [ 2 ]
If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T , this equation may be written:
δ ( ε − T η + p ν ) = 0 {\displaystyle \delta (\varepsilon -T\eta +p\nu )=0}
when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum.
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body...
Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system ( Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system ( internal energy ). Thus, G or F is the amount of energy "free" for work under the given conditions.
Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. [ 8 ] : 206
The Gibbs free energy is defined as G ( p , T ) = U + p V − T S , {\displaystyle G(p,T)=U+pV-TS,} which is the same as G ( p , T ) = H − T S , {\displaystyle G(p,T)=H-TS,} where:
Here U is the internal energy, p is the pressure, V is the volume, T is the temperature, S is the entropy, and H is the enthalpy .
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T , for an open system , subjected to the operation of external forces (for instance, electrical or magnetic) X i , which cause the external parameters of the system a i to change by an amount d a i , can be derived as follows from the first law for reversible processes: T d S = d U + p d V − ∑ i = 1 k μ i d N i + ∑ i = 1 n X i d a i + ⋯ d ( T S ) − S d T = d U + d ( p V ) − V d p − ∑ i = 1 k μ i d N i + ∑ i = 1 n X i d a i + ⋯ d ( U − T S + p V ) = V d p − S d T + ∑ i = 1 k μ i d N i − ∑ i = 1 n X i d a i + ⋯ d G = V d p − S d T + ∑ i = 1 k μ i d N i − ∑ i = 1 n X i d a i + ⋯ {\displaystyle {\begin{aligned}T\,\mathrm {d} S&=\mathrm {d} U+p\,\mathrm {d} V-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (TS)-S\,\mathrm {d} T&=\mathrm {d} U+\mathrm {d} (pV)-V\,\mathrm {d} p-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (U-TS+pV)&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} G&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \end{aligned}}} where:
Here μ i {\displaystyle \mu _{i}} is the chemical potential of the i -th chemical component, N i {\displaystyle N_{i}} is the number of particles (or moles) of component i , and X i {\displaystyle X_{i}} is the external force conjugate to the external parameter a i {\displaystyle a_{i}} .
This is one form of the Gibbs fundamental equation . [ 10 ] In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed , chemically reacting system where the N i are changing. For a closed, non-reacting system, this term may be dropped.
Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work , a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −d l under a force f would result in a term f d l being added. If a quantity of charge −d e is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ d e , which would be included in the infinitesimal expression. Other work terms are added on per system requirements. [ 11 ]
Each quantity in the equations above can be divided by the amount of substance, measured in moles , to form molar Gibbs free energy . The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell , and the equilibrium constant for a reversible reaction . In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation , and its pressure dependence is given by [ 12 ] G N = G ∘ N + k T ln p p ∘ . {\displaystyle {\frac {G}{N}}={\frac {G^{\circ }}{N}}+kT\ln {\frac {p}{p^{\circ }}}.} or more conveniently as its chemical potential : G N = μ = μ ∘ + k T ln p p ∘ . {\displaystyle {\frac {G}{N}}=\mu =\mu ^{\circ }+kT\ln {\frac {p}{p^{\circ }}}.}
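A small sketch of this pressure dependence per particle, using the conventional standard pressure p ∘ = 10 5 Pa (the temperature is an assumed example):

```python
import math

# Ideal-gas chemical potential shift: μ − μ° = k_B T ln(p/p°).
k_B = 1.380649e-23   # J/K
T = 298.15           # K (assumed)
p_ref = 1.0e5        # Pa, standard pressure p°

for p in (1.0e4, 1.0e5, 1.0e6):
    shift = k_B * T * math.log(p / p_ref)
    print(f"p = {p:.0e} Pa: μ − μ° = {shift:+.3e} J per particle")
```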
In non-ideal systems, fugacity comes into play.
The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy .
The definition of G from above is G = U + p V − T S {\displaystyle G=U+pV-TS} .
Taking the total differential, we have d G = d U + p d V + V d p − T d S − S d T {\displaystyle \mathrm {d} G=\mathrm {d} U+p\,\mathrm {d} V+V\,\mathrm {d} p-T\,\mathrm {d} S-S\,\mathrm {d} T} .
Replacing d U with the result from the first law gives [ 13 ] d G = V d p − S d T + ∑ i = 1 k μ i d N i {\displaystyle \mathrm {d} G=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}} .
The natural variables of G are then p , T , and { N i }.
Because S , V , and N i are extensive variables , an Euler relation allows easy integration of d U : [ 13 ] U = T S − p V + ∑ i = 1 k μ i N i {\displaystyle U=TS-pV+\sum _{i=1}^{k}\mu _{i}N_{i}} .
Because some of the natural variables of G are intensive, d G may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G : [ 13 ] G = ∑ i = 1 k μ i N i {\displaystyle G=\sum _{i=1}^{k}\mu _{i}N_{i}} .
This result shows that the chemical potential of a substance i {\displaystyle i} is its (partial) mol(ecul)ar Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems. [ 14 ]
The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is G = U + p V − T S {\displaystyle G=U+pV-TS} and an infinitesimal change in G , at constant temperature and pressure, yields d G = d U + p d V − T d S {\displaystyle \mathrm {d} G=\mathrm {d} U+p\,\mathrm {d} V-T\,\mathrm {d} S} .
By the first law of thermodynamics , a change in the internal energy U is given by d U = δ Q + δ W {\displaystyle \mathrm {d} U=\delta Q+\delta W} ,
where δQ is energy added as heat, and δW is energy added as work. The work done on the system may be written as δW = − pdV + δW x , where − pdV is the mechanical work of compression/expansion done on or by the system and δW x is all other forms of work, which may include electrical, magnetic, etc. Then d U = δ Q − p d V + δ W x {\displaystyle \mathrm {d} U=\delta Q-p\,\mathrm {d} V+\delta W_{x}}
and the infinitesimal change in G is d G = δ Q − T d S + δ W x {\displaystyle \mathrm {d} G=\delta Q-T\,\mathrm {d} S+\delta W_{x}} .
The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath), T d S ≥ δ Q {\displaystyle TdS\geq \delta Q} , and so it follows that d G ≤ δ W x {\displaystyle \mathrm {d} G\leq \delta W_{x}} .
Assuming that only mechanical work is done, this simplifies to d G ≤ 0 {\displaystyle \mathrm {d} G\leq 0} .
This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium.
When electric charge dQ ele is passed between the electrodes of an electrochemical cell generating an emf E {\displaystyle {\mathcal {E}}} , an electrical work term appears in the expression for the change in Gibbs energy: d G = − S d T + V d p + E d Q e l e , {\displaystyle dG=-SdT+Vdp+{\mathcal {E}}dQ_{ele},} where S is the entropy , V is the system volume, p is its pressure and T is its absolute temperature .
The combination ( E {\displaystyle {\mathcal {E}}} , Q ele ) is an example of a conjugate pair of variables . At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically . The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is: [ 15 ]
( ∂ E ∂ T ) p , Q e l e = − ( ∂ S ∂ Q e l e ) p , T {\displaystyle \left({\frac {\partial {\mathcal {E}}}{\partial T}}\right)_{p,Q_{ele}}=-\left({\frac {\partial S}{\partial Q_{ele}}}\right)_{p,T}}
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below), the charge through the external circuit is Δ Q e l e = − n 0 F 0 {\displaystyle \Delta Q_{ele}=-n_{0}F_{0}} ,
where n 0 is the number of electrons/ion, F 0 is the Faraday constant, and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by Δ H = − n 0 F 0 ( E − T d E d T ) {\displaystyle \Delta H=-n_{0}F_{0}\left({\mathcal {E}}-T{\frac {d{\mathcal {E}}}{dT}}\right)} ,
where Δ H is the enthalpy of reaction . The quantities on the right are all directly measurable.
During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold: Δ r G = − n F E {\displaystyle \Delta _{\text{r}}G=-nF{\mathcal {E}}} and Δ r G ∘ = − n F E ∘ {\displaystyle \Delta _{\text{r}}G^{\circ }=-nF{\mathcal {E}}^{\circ }} ,
and rearranging gives n F E ∘ = R T ln K eq , n F E = n F E ∘ − R T ln Q r , E = E ∘ − R T n F ln Q r , {\displaystyle {\begin{aligned}nF{\mathcal {E}}^{\circ }&=RT\ln K_{\text{eq}},\\nF{\mathcal {E}}&=nF{\mathcal {E}}^{\circ }-RT\ln Q_{\text{r}},\\{\mathcal {E}}&={\mathcal {E}}^{\circ }-{\frac {RT}{nF}}\ln Q_{\text{r}},\end{aligned}}} which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction ( Nernst equation ),
where n is the number of moles of electrons transferred in the reaction, F is the Faraday constant , E ∘ {\displaystyle {\mathcal {E}}^{\circ }} is the standard cell potential, K eq is the equilibrium constant , and Q r is the reaction quotient .
Moreover, we also have K eq = e − Δ r G ∘ R T , Δ r G ∘ = − R T ( ln K eq ) = − 2.303 R T ( log 10 K eq ) , {\displaystyle {\begin{aligned}K_{\text{eq}}&=e^{-{\frac {\Delta _{\text{r}}G^{\circ }}{RT}}},\\\Delta _{\text{r}}G^{\circ }&=-RT\left(\ln K_{\text{eq}}\right)=-2.303\,RT\left(\log _{10}K_{\text{eq}}\right),\end{aligned}}} which relates the equilibrium constant with Gibbs free energy. This implies that at equilibrium Q r = K eq {\displaystyle Q_{\text{r}}=K_{\text{eq}}} and Δ r G = 0. {\displaystyle \Delta _{\text{r}}G=0.}
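These relations can be chained together numerically. In the sketch below, the standard reaction Gibbs energy and the reaction quotient are assumed inputs chosen only for illustration:

```python
import math

R = 8.314462618   # J/(mol·K), gas constant
F = 96485.332     # C/mol, Faraday constant
T = 298.15        # K
n = 2             # electrons transferred (assumed)
delta_G0 = -212_000.0   # J/mol, assumed standard reaction Gibbs energy

E0 = -delta_G0 / (n * F)              # standard cell potential, ΔrG° = −nFE°
K_eq = math.exp(-delta_G0 / (R * T))  # equilibrium constant

Q_r = 0.01                            # assumed reaction quotient
E = E0 - (R * T) / (n * F) * math.log(Q_r)   # Nernst equation

print(f"E° = {E0:.3f} V, K_eq = {K_eq:.3e}, E(Q_r = {Q_r}) = {E:.3f} V")
```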
The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa ). Its symbol is Δ f G ˚.
All elements in their standard states (diatomic oxygen gas, graphite , etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved.
Away from standard conditions, the Gibbs energy of formation is given by Δ f G = Δ f G ∘ + R T ln Q f {\displaystyle \Delta _{\text{f}}G=\Delta _{\text{f}}G^{\circ }+RT\ln Q_{\text{f}}} ,
where Q f is the reaction quotient .
At equilibrium, Δ f G = 0, and Q f = K , so the equation becomes Δ f G ∘ = − R T ln K {\displaystyle \Delta _{\text{f}}G^{\circ }=-RT\ln K} ,
where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states.
Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. [ 17 ] Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure . | https://en.wikipedia.org/wiki/Gibbs_free_energy |
The Gibbs adsorption isotherm for multicomponent systems is an equation used to relate the changes in concentration of a component in contact with a surface with changes in the surface tension , which results in a corresponding change in surface energy . For a binary system, the Gibbs adsorption equation in terms of surface excess is d γ = − Γ 1 d μ 1 − Γ 2 d μ 2 {\displaystyle d\gamma =-\Gamma _{1}\,d\mu _{1}-\Gamma _{2}\,d\mu _{2}} ,
where γ {\displaystyle \gamma } is the surface tension, Γ i {\displaystyle \Gamma _{i}} is the surface excess concentration of component i , and μ i {\displaystyle \mu _{i}} is the chemical potential of component i .
Different influences at the interface may cause changes in the composition of the near-surface layer. [ 1 ] Substances may either accumulate near the surface or, conversely, move into the bulk. [ 1 ] The movement of the molecules characterizes the phenomena of adsorption . Adsorption influences changes in surface tension and colloid stability. Adsorption layers at the surface of a liquid dispersion medium may affect the interactions of the dispersed particles in the media, and consequently these layers may play a crucial role in colloid stability. [ 2 ] The adsorption of molecules of a liquid phase at an interface occurs when this liquid phase is in contact with other immiscible phases that may be gas, liquid, or solid. [ 3 ]
Surface tension describes how difficult it is to extend the area of a surface (by stretching or distorting it). If surface tension is high, there is a large free energy required to increase the surface area, so the surface will tend to contract and hold together like a rubber sheet.
There are various factors affecting surface tension, one of which is that the composition of the surface may be different from the bulk. For example, if water is mixed with a tiny amount of surfactants (for example, hand soap ), the bulk water may be 99% water molecules and 1% soap molecules, but the topmost surface of the water may be 50% water molecules and 50% soap molecules. In this case, the soap has a large and positive "surface excess". In other examples, the surface excess may be negative: For example, if water is mixed with an inorganic salt like sodium chloride , the surface of the water is on average less salty and more pure than the bulk average.
Consider again the example of water with a bit of soap. Since the water surface needs to have higher concentration of soap than the bulk, whenever the water's surface area is increased, it is necessary to remove soap molecules from the bulk and add them to the new surface. If the concentration of soap is increased a bit, the soap molecules are more readily available (they have higher chemical potential ), so it is easier to pull them from the bulk in order to create the new surface. Since it is easier to create new surface, the surface tension is lowered. The general principle is:
Next consider the example of water with salt. The water surface is less salty than bulk, so whenever the water's surface area is increased, it is necessary to remove salt molecules from the new surface and push them into bulk. If the concentration of salt is increased a bit (raising the salt's chemical potential ), it becomes harder to push away the salt molecules. Since it is now harder to create the new surface, the surface tension is higher. The general principle is:
The Gibbs isotherm equation gives the exact quantitative relationship for these trends.
In the presence of two phases ( α and β ), the surface (surface phase) is located in between the phase α and phase β . Experimentally, it is difficult to determine the exact structure of an inhomogeneous surface phase that is in contact with a bulk liquid phase containing more than one solute. [ 2 ] Inhomogeneity of the surface phase is a result of variation of mole ratios. [ 1 ] A model proposed by Josiah Willard Gibbs treats the surface phase as an idealized dividing surface of zero thickness. In reality, although the bulk regions of the α and β phases are constant, the concentrations of components in the interfacial region will gradually vary from the bulk concentration of α to the bulk concentration of β over the distance x. This is in contrast to the idealized Gibbs model where the distance x takes on the value of zero. The diagram to the right illustrates the differences between the real and idealized models.
In the idealized model, the chemical components of the α and β bulk phases remain unchanged except when approaching the dividing surface. [ 3 ] The total number of moles of any component (examples include water, ethylene glycol, etc.) remains constant in the bulk phases but varies in the surface phase for the real system model as shown below.
In the real system, however, the total moles of a component varies depending on the arbitrary placement of the dividing surface. The quantitative measure of adsorption of the i -th component is captured by the surface excess quantity. [ 1 ] The surface excess represents the difference between the total moles of the i -th component in a system and the moles of the i -th component in a particular phase (either α or β ) and is represented by:
where Γ i is the surface excess of the i -th component, n are the moles, α and β are the phases, and A is the area of the dividing surface.
Γ represents the excess of solute per unit area of the surface over what would be present if the bulk concentration prevailed all the way to the surface; it can be positive, negative or zero. It has units of mol/m 2 .
Relative surface excess quantities are more useful than arbitrary surface excess quantities. [ 3 ] The relative surface excess relates the adsorption at the interface to a solvent in the bulk phase. An advantage of using the relative surface excess quantities is that they do not depend on the location of the dividing surface. The relative surface excess of species i and solvent 1 is therefore:
For a two-phase system consisting of the α and β phase in equilibrium with a surface S dividing the phases, the total Gibbs free energy of a system can be written as: G = G α + G β + G S {\displaystyle G=G^{\alpha }+G^{\beta }+G^{S}} ,
where G is the Gibbs free energy.
The equation of the Gibbs Adsorption Isotherm can be derived from the “particularization to the thermodynamics of the Euler theorem on homogeneous first-order forms.” [ 4 ] The Gibbs free energy of each phase α , phase β , and the surface phase can be represented by the equation:
where U is the internal energy, p is the pressure, V is the volume, T is the temperature, S is the entropy, and μ i is the chemical potential of the i -th component.
By taking the total derivative of the Euler form of the Gibbs equation for the α phase, β phase and the surface phase:
where A is the area of the dividing surface, and γ is the surface tension .
For reversible processes, the first law of thermodynamics requires that: d U = δ q + δ w {\displaystyle dU=\delta q+\delta w} ,
where q is the heat energy and w is the work.
Substituting the above equation into the total derivative of the Gibbs energy equation and by utilizing the result γd A is equated to the non-pressure volume work when surface energy is considered:
by utilizing the fundamental equation of Gibbs energy of a multicomponent system: d G = − S d T + V d p + ∑ i μ i d n i {\displaystyle dG=-S\,dT+V\,dp+\sum _{i}\mu _{i}\,dn_{i}} ,
The equation relating the α phase, β phase and the surface phase becomes:
When considering the bulk phases ( α phase, β phase), at equilibrium at constant temperature and pressure the Gibbs–Duhem equation requires that: ∑ i n i d μ i = 0 {\displaystyle \sum _{i}n_{i}\,d\mu _{i}=0} .
The resulting equation is the Gibbs adsorption isotherm equation: d γ = − ∑ i Γ i d μ i {\displaystyle d\gamma =-\sum _{i}\Gamma _{i}\,d\mu _{i}}
The Gibbs adsorption isotherm is an adsorption isotherm that connects the surface tension of a solution with the concentration of the solute.
For a binary system containing two components, the Gibbs adsorption equation in terms of surface excess is: d γ = − Γ 1 d μ 1 − Γ 2 d μ 2 {\displaystyle d\gamma =-\Gamma _{1}\,d\mu _{1}-\Gamma _{2}\,d\mu _{2}}
The chemical potential of species i in solution, μ i {\displaystyle \mu _{i}} , depends on its activity a i {\displaystyle a_{i}} by the following equation: [ 2 ] μ i = μ i o + R T ln a i {\displaystyle \mu _{i}={\mu _{i}}^{o}+RT\ln a_{i}} ,
where μ i o {\displaystyle {\mu _{i}}^{o}} is the chemical potential of the i -th component at a reference state, R is the gas constant and T is the temperature.
Differentiation of the chemical potential equation results in: d μ i = R T d ln ( f i C i ) {\displaystyle d\mu _{i}=RT\,d\ln(f_{i}C_{i})} ,
where f is the activity coefficient of component i , and C is the concentration of species i in the bulk phase.
If the solutions in the α and β phases are dilute (rich in one particular component i ), then the activity coefficient of the component i approaches unity and the Gibbs isotherm becomes: Γ 2 = − 1 R T ( ∂ γ ∂ ln C 2 ) {\displaystyle \Gamma _{2}=-{\frac {1}{RT}}\left({\frac {\partial \gamma }{\partial \ln C_{2}}}\right)}
The above equation assumes the interface to be bidimensional, which is not always true. Further models, such as Guggenheim's, correct this flaw.
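In the dilute limit the isotherm can be applied directly to surface tension versus concentration data. The sketch below generates synthetic γ(C) data from an assumed Szyszkowski-like form and recovers the surface excess by numerical differentiation; all numbers are illustrative:

```python
import numpy as np

R, T = 8.314, 298.15   # J/(mol·K), K

C = np.logspace(-5, -2, 50)                    # bulk concentration (assumed units)
gamma = 0.072 - 0.015 * np.log(1 + C / 1e-4)   # synthetic surface tension, N/m

# Dilute-limit Gibbs isotherm: Γ = −(1/RT) dγ/d ln C
dgamma_dlnC = np.gradient(gamma, np.log(C))
surface_excess = -dgamma_dlnC / (R * T)        # mol/m²

print(f"maximum surface excess ≈ {surface_excess.max():.2e} mol/m²")
```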
Consider a system composed of water that contains an organic electrolyte RNa z and an inorganic electrolyte NaCl, both of which dissociate completely such that: R N a z → R z − + z N a + {\displaystyle \mathrm {RNa_{z}} \rightarrow \mathrm {R^{z-}} +z\,\mathrm {Na^{+}} } and N a C l → N a + + C l − {\displaystyle \mathrm {NaCl} \rightarrow \mathrm {Na^{+}} +\mathrm {Cl^{-}} }
The Gibbs Adsorption equation in terms of the relative surface excess becomes:
The relation between surface tension and the surface excess concentration becomes:
where m is the coefficient of the Gibbs adsorption. [ 3 ] Values of m are calculated using the Double layer (interfacial) models of Helmholtz , Gouy , and Stern .
Substances can have different effects on surface tension: solutes with a positive surface excess, such as surfactants, lower the surface tension, while solutes with a negative surface excess, such as inorganic salts, raise it.
Therefore, the Gibbs isotherm predicts that inorganic salts have negative surface concentrations. However, this view has been challenged extensively in recent years due to a combination of more precise interfacially sensitive experiments and theoretical models, both of which predict an increase in surface propensity of the halides with increasing size and polarizability. [ 5 ] As such, surface tension is not a reliable method for determining the relative propensity of ions toward the air-water interface.
A method for determining surface concentrations is needed in order to test the validity of the model; two different techniques are normally used: ellipsometry and following the decay of 14 C present in the surfactant molecules.
Ionic surfactants require special considerations, as they are electrolytes . For a fully dissociated 1:1 ionic surfactant in the absence of added electrolyte, the dilute-solution isotherm takes the form Γ S = − 1 2 R T ( ∂ γ ∂ ln C ) {\displaystyle \Gamma _{S}=-{\frac {1}{2RT}}\left({\frac {\partial \gamma }{\partial \ln C}}\right)} ,
where Γ S {\displaystyle \Gamma _{S}} refers to the surface concentration of surfactant molecules, without considering the counter ion .
The extent of adsorption at a liquid interface can be evaluated using the surface tension concentration data and the Gibbs adsorption equation. [ 3 ] The microtome blade method is used to determine the weight and molal concentration of an interface . The method involves attaining a one square meter portion of air-liquid interface of binary solutions using a microtome blade.
Another method that is used to determine the extent of adsorption at an air-water interface is the emulsion technique, which can be used to estimate the relative surface excess with respect to water. [ 3 ]
Additionally, the Gibbs surface excess of a surface active component for an aqueous solution can be found using the radioactive tracer method. The surface active component is usually labeled with carbon-14 or sulfur-35 . [ 3 ] | https://en.wikipedia.org/wiki/Gibbs_isotherm |
In game theory and in particular the study of Blotto games and operational research , the Gibbs lemma is a result that is useful in maximization problems. [ 1 ] It is named for Josiah Willard Gibbs .
Consider ϕ = ∑ i = 1 n f i ( x i ) {\displaystyle \phi =\sum _{i=1}^{n}f_{i}(x_{i})} . Suppose ϕ {\displaystyle \phi } is maximized, subject to ∑ x i = X {\displaystyle \sum x_{i}=X} and x i ≥ 0 {\displaystyle x_{i}\geq 0} , at x 0 = ( x 1 0 , … , x n 0 ) {\displaystyle x^{0}=(x_{1}^{0},\ldots ,x_{n}^{0})} . If the f i {\displaystyle f_{i}} are differentiable , then the Gibbs lemma states that there exists a λ {\displaystyle \lambda } such that f i ′ ( x i 0 ) = λ {\displaystyle f_{i}'(x_{i}^{0})=\lambda } whenever x i 0 > 0 {\displaystyle x_{i}^{0}>0} , and f i ′ ( x i 0 ) ≤ λ {\displaystyle f_{i}'(x_{i}^{0})\leq \lambda } whenever x i 0 = 0 {\displaystyle x_{i}^{0}=0} .
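The lemma is the first-order (Karush–Kuhn–Tucker) condition for this constrained maximization and can be checked numerically on a concrete objective. In the sketch below, the choice f i (x) = w i ln(1 + x) and the weights are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import minimize

w = np.array([3.0, 2.0, 0.2])   # assumed weights
X = 2.0                         # total budget

# Maximize sum_i w_i * log(1 + x_i) subject to x_i >= 0 and sum x_i = X.
res = minimize(
    lambda x: -np.sum(w * np.log1p(x)),
    x0=np.full(3, X / 3),
    bounds=[(0, None)] * 3,
    constraints={"type": "eq", "fun": lambda x: x.sum() - X},
)
x = res.x
grad = w / (1 + x)               # f_i'(x_i)
lam = grad[x > 1e-6].mean()      # common multiplier on the support
print("x* =", np.round(x, 4))
print("f_i'(x_i*) =", np.round(grad, 4), " λ ≈", round(lam, 4))
```

At the optimum the derivatives agree with a common λ wherever x i > 0, while any coordinate held at zero has derivative below λ, exactly as the lemma requires.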
| https://en.wikipedia.org/wiki/Gibbs_lemma |
In statistical mechanics , a semi-classical derivation of entropy that does not take into account the indistinguishability of particles yields an expression for entropy which is not extensive (is not proportional to the amount of substance in question). This leads to a paradox known as the Gibbs paradox , after Josiah Willard Gibbs , who proposed this thought experiment in 1874‒1875. [ 1 ] [ 2 ] The paradox allows for the entropy of closed systems to decrease, violating the second law of thermodynamics . A related paradox is the " mixing paradox ". If one takes the perspective that the definition of entropy must be changed so as to ignore particle permutation, in the thermodynamic limit , the paradox is averted.
Gibbs considered the following difficulty that arises if the ideal gas entropy is not extensive. [ 1 ] Two containers of an ideal gas sit side-by-side. The gas in container #1 is identical in every respect to the gas in container #2 (i.e. in volume, mass, temperature, pressure, etc.). Accordingly, they have the same entropy S . Now a door in the container wall is opened to allow the gas particles to mix between the containers. No macroscopic changes occur, as the system is in equilibrium. But if the formula for entropy is not extensive, the entropy of the combined system will not be 2 S . In fact, the particular non-extensive entropy quantity considered by Gibbs predicts additional entropy (more than 2 S ). Closing the door then reduces the entropy again to S per box, in apparent violation of the second law of thermodynamics .
As understood by Gibbs, [ 2 ] and reemphasized more recently, [ 3 ] [ 4 ] this is a misuse of Gibbs' non-extensive entropy quantity. If the gas particles are distinguishable, closing the doors will not return the system to its original state – many of the particles will have switched containers. There is a freedom in defining what is "ordered", and it would be a mistake to conclude that the entropy has not increased. In particular, Gibbs' non-extensive entropy quantity for an ideal gas is not intended for situations where the number of particles changes.
The paradox is averted by assuming the indistinguishability (at least effective indistinguishability) of the particles in the volume. This results in the extensive Sackur–Tetrode equation for entropy, as derived next.
In classical mechanics, the state of an ideal gas of energy U , volume V and with N particles, each particle having mass m , is represented by specifying the momentum vector p and the position vector x for each particle. This can be thought of as specifying a point in a 6 N -dimensional phase space , where each of the axes corresponds to one of the momentum or position coordinates of one of the particles. The set of points in phase space that the gas could occupy is specified by the constraint that the gas will have a particular energy: U = 1 2 m ∑ i = 1 N ( p i x 2 + p i y 2 + p i z 2 ) {\displaystyle U={\frac {1}{2m}}\sum _{i=1}^{N}\left(p_{ix}^{2}+p_{iy}^{2}+p_{iz}^{2}\right)} and be contained inside of the volume V (let's say V is a cube of side X so that V = X 3 ): 0 ≤ x i j ≤ X {\displaystyle 0\leq x_{ij}\leq X} for i = 1 , … , N {\displaystyle i=1,\dots ,N} and j = 1 , 2 , 3 {\displaystyle j=1,2,3}
The first constraint defines the surface of a 3N-dimensional hypersphere of radius (2 mU ) 1/2 and the second is a 3 N -dimensional hypercube of volume V N . These combine to form a 6 N -dimensional hypercylinder. Just as the area of the wall of a cylinder is the circumference of the base times the height, so the area φ of the wall of this hypercylinder is: ϕ ( U , V , N ) = V N ( 2 π 3 N / 2 Γ ( 3 N / 2 ) ) ( 2 m U ) ( 3 N − 1 ) / 2 {\displaystyle \phi (U,V,N)=V^{N}\left({\frac {2\pi ^{3N/2}}{\Gamma (3N/2)}}\right)(2mU)^{(3N-1)/2}} (Equation 1).
The entropy is proportional to the logarithm of the number of states that the gas could have while satisfying these constraints. In classical physics, the number of states is infinitely large, but according to quantum mechanics it is finite. Before the advent of quantum mechanics, this infinity was regularized by making phase space discrete. Phase space was divided up in blocks of volume h 3 N . The constant h thus appeared as a result of a mathematical trick and was thought to have no physical significance. However, using quantum mechanics one recovers the same formalism in the semi-classical limit, but now with h being the Planck constant . One can qualitatively see this from Heisenberg's uncertainty principle ; a volume of phase space smaller than h 3 N {\displaystyle h^{3N}} cannot be specified.
To compute the number of states we must compute the volume in phase space in which the system can be found and divide that by h 3 N . This leads us to another problem: The volume seems to approach zero, as the region in phase space in which the system can be is an area of zero thickness. This problem is an artifact of having specified the energy U with infinite accuracy. In a generic system without symmetries, a full quantum treatment would yield a discrete non-degenerate set of energy eigenstates. An exact specification of the energy would then fix the precise state the system is in, so the number of states available to the system would be one, the entropy would thus be zero.
When we specify the internal energy to be U , what we really mean is that the total energy of the gas lies somewhere in an interval of length δ U {\displaystyle \delta U} around U . Here δ U {\displaystyle \delta U} is taken to be very small, it turns out that the entropy doesn't depend strongly on the choice of δ U {\displaystyle \delta U} for large N . This means that the above "area" φ must be extended to a shell of a thickness equal to an uncertainty in momentum δ p = δ ( 2 m U ) = m / 2 U δ U {\textstyle \delta p=\delta \left({\sqrt {2mU}}\right)={\sqrt {{m}/{2U}}}\delta U} , so the entropy is given by: S = k B ln ( ϕ δ p / h 3 N ) {\displaystyle S=k_{\text{B}}\,\ln(\phi \delta p/h^{3N})} where the constant of proportionality is k B , the Boltzmann constant . Using Stirling's approximation for the Gamma function which omits terms of less than order N , the entropy for large N becomes: S = k B N ln ( V U 3 / 2 N 3 / 2 ) + 3 2 k B N ( 1 + ln 4 π m 3 h 2 ) {\displaystyle S=k_{\text{B}}N\ln \left({\frac {VU^{3/2}}{N^{3/2}}}\right)+{\frac {3}{2}}k_{\text{B}}N\left(1+\ln {\frac {4\pi m}{3h^{2}}}\right)}
This quantity is not extensive as can be seen by considering two identical volumes with the same particle number and the same energy. Suppose the two volumes are separated by a barrier in the beginning. Removing or reinserting the wall is reversible, but the entropy increases when the barrier is removed by the amount δ S = k B [ 2 N ln ( 2 V ) − N ln V − N ln V ] = 2 k B N ln 2 > 0 {\displaystyle \delta S=k_{\text{B}}\left[2N\ln(2V)-N\ln V-N\ln V\right]=2k_{\text{B}}N\ln 2>0} which is in contradiction to thermodynamics if you re-insert the barrier. This is the Gibbs paradox.
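The 2 k B N ln 2 discrepancy can be checked directly from the entropy formula above; the sketch below works in units where k B = m = h = 1, so the numbers are illustrative only:

```python
import numpy as np

# Non-extensive entropy S(N, V, U) from the formula above (k_B = m = h = 1).
def S(N, V, U):
    return N * np.log(V * U**1.5 / N**1.5) + 1.5 * N * (1 + np.log(4 * np.pi / 3))

N, V, U = 1000.0, 1.0, 1.0
before = 2 * S(N, V, U)           # two identical boxes, barrier in place
after = S(2 * N, 2 * V, 2 * U)    # barrier removed
print(after - before)             # ≈ 1386.29
print(2 * N * np.log(2))          # the predicted 2 N ln 2 ≈ 1386.29
```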
The paradox is resolved by postulating that the gas particles are in fact indistinguishable. This means that all states that differ only by a permutation of particles should be considered as the same state. For example, if we have a 2-particle gas and we specify AB as a state of the gas where the first particle ( A ) has momentum p 1 and the second particle ( B ) has momentum p 2 , then this state as well as the BA state where the B particle has momentum p 1 and the A particle has momentum p 2 should be counted as the same state.
For an N -particle gas, there are N ! states which are identical in this sense, if one assumes that each particle is in a different single particle state. One can safely make this assumption provided the gas isn't at an extremely high density. Under normal conditions, one can thus calculate the volume of phase space occupied by the gas, by dividing Equation 1 by N !. Using the Stirling approximation again for large N , ln( N !) ≈ N ln( N ) − N , the entropy for large N is: S = k B N ln ( V U 3 / 2 N 5 / 2 ) + k B N ( 5 2 + 3 2 ln 4 π m 3 h 2 ) {\displaystyle S=k_{\text{B}}N\ln \left({\frac {VU^{3/2}}{N^{5/2}}}\right)+k_{\text{B}}N\left({\frac {5}{2}}+{\frac {3}{2}}\ln {\frac {4\pi m}{3h^{2}}}\right)} which can be easily shown to be extensive. This is the Sackur–Tetrode equation .
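Extensivity of the Sackur–Tetrode expression is easy to verify numerically; the sketch below uses helium-4 parameters as an assumed example:

```python
import numpy as np

k_B = 1.380649e-23      # J/K
h = 6.62607015e-34      # J·s
m = 6.6464731e-27       # kg, mass of a helium-4 atom (assumed example gas)

def sackur_tetrode(N, V, U):
    return k_B * N * np.log(V * U**1.5 / N**2.5) + k_B * N * (
        2.5 + 1.5 * np.log(4 * np.pi * m / (3 * h**2))
    )

N = 6.022e23                   # one mole of atoms
V = 0.0224                     # m³, roughly one mole of gas at STP
U = 1.5 * N * k_B * 273.15     # monatomic ideal-gas internal energy

S1 = sackur_tetrode(N, V, U)
S2 = sackur_tetrode(2 * N, 2 * V, 2 * U)
print(f"S = {S1:.2f} J/K, S(2N, 2V, 2U) / S(N, V, U) = {S2 / S1:.6f}")  # ratio 2
```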
A closely related paradox to the Gibbs paradox is the mixing paradox . The Gibbs paradox is a special case of the "mixing paradox" which contains all the salient features. The difference is that the mixing paradox deals with arbitrary distinctions in the two gases, not just distinctions in particle ordering as Gibbs had considered. In this sense, it is a straightforward generalization to the argument laid out by Gibbs. Again take a box with a partition in it, with gas A on one side, gas B on the other side, and both gases are at the same temperature and pressure. If gas A and B are different gases, there is an entropy that arises once the gases are mixed, the entropy of mixing . If the gases are the same, no additional entropy is calculated. The additional entropy from mixing does not depend on the character of the gases; it only depends on the fact that the gases are different. The two gases may be arbitrarily similar, but the entropy from mixing does not disappear unless they are the same gas – a paradoxical discontinuity.
This "paradox" can be explained by carefully considering the definition of entropy. In particular, as concisely explained by Edwin Thompson Jaynes , [ 2 ] definitions of entropy are arbitrary.
As a central example in Jaynes' paper points out, one can develop a theory that treats two gases as similar even if those gases may in reality be distinguished through sufficiently detailed measurement. As long as we do not perform these detailed measurements, the theory will have no internal inconsistencies. (In other words, it does not matter that we call gases A and B by the same name if we have not yet discovered that they are distinct.) If our theory calls gases A and B the same, then entropy does not change when we mix them. If our theory calls gases A and B different, then entropy does increase when they are mixed. This insight suggests that the ideas of "thermodynamic state" and of "entropy" are somewhat subjective.
The differential increase in entropy ( dS ) as a result of mixing dissimilar gases, multiplied by the temperature ( T ), equals the minimum amount of work we must do to restore the gases to their original separated state. Suppose that two gases are different, but that we are unable to detect their differences. If these gases are in a box, segregated from one another by a partition, how much work does it take to restore the system's original state after we remove the partition and let the gases mix?
None – simply reinsert the partition. Even though the gases have mixed, there was never a detectable change of state in the system, because by hypothesis the gases are experimentally indistinguishable.
As soon as we can distinguish the difference between gases, the work necessary to recover the pre-mixing macroscopic configuration from the post-mixing state becomes nonzero. This amount of work does not depend on how different the gases are, but only on whether they are distinguishable.
This line of reasoning is particularly informative when considering the concepts of indistinguishable particles and correct Boltzmann counting . Boltzmann's original expression for the number of states available to a gas assumed that a state could be expressed in terms of a number of energy "sublevels" each of which contain a particular number of particles. While the particles in a given sublevel were considered indistinguishable from each other, particles in different sublevels were considered distinguishable from particles in any other sublevel. This amounts to saying that the exchange of two particles in two different sublevels will result in a detectably different "exchange macrostate" of the gas. For example, if we consider a simple gas with N particles, at sufficiently low density that it is practically certain that each sublevel contains either one particle or none (i.e. a Maxwell–Boltzmann gas), this means that a simple container of gas will be in one of N ! detectably different "exchange macrostates", one for each possible particle exchange.
Just as the mixing paradox begins with two detectably different containers, and the extra entropy that results upon mixing is proportional to the average amount of work needed to restore that initial state after mixing, so the extra entropy in Boltzmann's original derivation is proportional to the average amount of work required to restore the simple gas from some "exchange macrostate" to its original "exchange macrostate". If we assume that there is in fact no experimentally detectable difference in these "exchange macrostates" available, then using the entropy which results from assuming the particles are indistinguishable will yield a consistent theory. This is "correct Boltzmann counting".
It is often said that the resolution to the Gibbs paradox derives from the fact that, according to the quantum theory, like particles are indistinguishable in principle. By Jaynes' reasoning, if the particles are experimentally indistinguishable for whatever reason, Gibbs paradox is resolved, and quantum mechanics only provides an assurance that in the quantum realm, this indistinguishability will be true as a matter of principle, rather than being due to an insufficiently refined experimental capability.
In this section, we present in rough outline a purely classical derivation of the non-extensive entropy for an ideal gas considered by Gibbs before "correct counting" (indistinguishability of particles) is accounted for. This is followed by a brief discussion of two standard methods for making the entropy extensive. Finally, we present a third method, due to R. Swendsen, for an extensive (additive) result for the entropy of two systems if they are allowed to exchange particles with each other. [ 5 ] [ 6 ]
We will present a simplified version of the calculation. It differs from the full calculation in three ways:
We begin with a version of Boltzmann's entropy in which the integrand is all of accessible phase space : S = k B ln Ω = k B ln ∮ H ( p , q ) = E d n p d n q {\displaystyle S=k_{\text{B}}\ln \Omega =k_{\text{B}}\ln {\!\!\oint \limits _{H(\mathbf {p} ,\mathbf {q} )=E}\!\!d^{n}\mathbf {p} \,d^{n}\mathbf {q} }}
The integral is restricted to a contour of available regions of phase space, subject to conservation of energy. In contrast to the one-dimensional line integrals encountered in elementary physics, the contour of constant energy possesses a vast number of dimensions. The justification for integrating over phase space using the canonical measure involves the assumption of equal probability. The assumption can be made by invoking the ergodic hypothesis as well as Liouville's theorem of Hamiltonian systems.
(The ergodic hypothesis underlies the ability of a physical system to reach thermal equilibrium , but this may not always hold for computer simulations (see the Fermi–Pasta–Ulam–Tsingou problem ) or in certain real-world systems such as non-thermal plasmas .)
Liouville's theorem assumes a fixed number of dimensions that the system 'explores'. In calculations of entropy, the number of dimensions is proportional to the number of particles in the system, which forces phase space to abruptly change dimensionality when particles are added or subtracted. This may explain the difficulties in constructing a clear and simple derivation for the dependence of entropy on the number of particles.
For the ideal gas, the accessible phase space is an ( n − 1)-sphere (also called a hypersphere) in the n -dimensional v {\displaystyle \mathbf {v} } space: E = ∑ j = 1 n 1 2 m v j 2 , {\displaystyle E=\sum _{j=1}^{n}{\frac {1}{2}}mv_{j}^{2}\,,}
To recover the paradoxical result that entropy is not extensive, we integrate over phase space for a gas of n {\displaystyle n} monatomic particles confined to a single spatial dimension by 0 < x < ℓ {\displaystyle 0<x<\ell } . Since our only purpose is to illuminate a paradox, we simplify notation by taking the particle's mass and the Boltzmann constant equal to unity: m = k = 1 {\displaystyle m=k=1} . We represent points in phase-space and its x and v parts by n and 2 n dimensional vectors: ξ = [ x 1 , … , x n , v 1 , … , v n ] = [ x , v ] {\displaystyle {\boldsymbol {\xi }}=[x_{1},\dots ,x_{n},v_{1},\dots ,v_{n}]=[\mathbf {x} ,\mathbf {v} ]} where x = [ x 1 , … , x n ] v = [ v 1 , … , v n ] . {\displaystyle {\begin{aligned}\mathbf {x} &=[x_{1},\dots ,x_{n}]\\\mathbf {v} &=[v_{1},\dots ,v_{n}]\,.\end{aligned}}}
To calculate entropy, we use the fact that the (n-1)-sphere, ∑ v j 2 = R 2 , {\textstyle \sum v_{j}^{2}=R^{2},} has an ( n − 1) -dimensional "hypersurface volume" of A ~ n ( R ) = n π n / 2 ( n / 2 ) ! R n − 1 . {\displaystyle {\tilde {A}}_{n}(R)={\frac {n\pi ^{n/2}}{(n/2)!}}R^{n-1}\,.}
For example, if n = 2, the 1-sphere is the circle A ~ 2 ( R ) = 2 π R {\displaystyle {\tilde {A}}_{2}(R)=2\pi R} , a "hypersurface" in the plane. When the sphere is even-dimensional ( n odd), it will be necessary to use the gamma function to give meaning to the factorial; see below.
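The hypersurface-volume formula is straightforward to evaluate, with the gamma function standing in for the factorial (a minimal sketch):

```python
import math

# "Hypersurface volume" of the (n−1)-sphere of radius R:
# A_n(R) = n * pi^(n/2) * R^(n−1) / (n/2)!, where (n/2)! = Γ(n/2 + 1).
def hypersphere_area(n, R):
    return n * math.pi ** (n / 2) * R ** (n - 1) / math.gamma(n / 2 + 1)

print(hypersphere_area(2, 1.0), 2 * math.pi)   # circle of radius 1: 2πR
print(hypersphere_area(3, 1.0), 4 * math.pi)   # ordinary sphere: 4πR²
```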
Gibbs paradox arises when entropy is calculated using an n {\displaystyle n} dimensional phase space, where n {\displaystyle n} is also the number of particles in the gas. These particles are spatially confined to the one-dimensional interval ℓ n {\displaystyle \ell ^{n}} . The volume of the surface of fixed energy is Ω E , ℓ = ( ∫ d x 1 ∫ d x 2 ⋯ ∫ d x n ) ( ∫ d v 1 ∫ d v 2 ⋯ ∫ d v n ) ⏟ ∑ v i 2 = 2 E {\displaystyle \Omega _{E,\ell }=\left(\int dx_{1}\int dx_{2}\cdots \int dx_{n}\right)\underbrace {\left(\int dv_{1}\int dv_{2}\cdots \int dv_{n}\right)} _{\sum v_{i}^{2}=2E}}
The subscripts on Ω {\displaystyle \Omega } are used to define the 'state variables' and will be discussed later, when it is argued that the number of particles, n {\displaystyle n} lacks full status as a state variable in this calculation. The integral over configuration space is ℓ n {\displaystyle \ell ^{n}} . As indicated by the underbrace, the integral over velocity space is restricted to the "surface area" of the n − 1 dimensional hypersphere of radius 2 E {\displaystyle {\sqrt {2E}}} , and is therefore equal to the "area" of that hypersurface. Thus Ω E , ℓ = ℓ n n π n / 2 ( n / 2 ) ! ( 2 E ) n − 1 2 {\displaystyle \Omega _{E,\ell }=\ell ^{n}{\frac {n\pi ^{n/2}}{(n/2)!}}(2E)^{\frac {n-1}{2}}}
Ω E , ℓ = ℓ n n π n / 2 ( n / 2 ) ! ( 2 E ) n − 1 2 {\displaystyle \Omega _{E,\ell }=\ell ^{n}{\frac {n\pi ^{n/2}}{(n/2)!}}{\left(2E\right)}^{\frac {n-1}{2}}} ln Ω E , ℓ = ln ( ℓ n n π n / 2 ( 2 E ) n − 1 2 ) − ln [ ( n / 2 ) ! ] {\displaystyle \ln \Omega _{E,\ell }=\ln \left(\ell ^{n}n\pi ^{n/2}{\left(2E\right)}^{\frac {n-1}{2}}\right)-\ln \left[(n/2)!\right]} Both terms on the right hand side have dominant terms. Using the Stirling approximation for large M, ln M ! ≈ M ln M {\displaystyle \ln M!\approx M\ln M} − M + ln 2 π M {\displaystyle -M+\ln {\sqrt {2\pi M}}} , we have: ln ( ℓ n n π n / 2 ( 2 E ) n − 1 2 ) = ln ( ℓ n E n 2 ) ⏟ important + ln ( n ( 2 π ) n 2 2 E ) ⏟ drop {\displaystyle \ln \left(\ell ^{n}n\pi ^{n/2}{\left(2E\right)}^{\frac {n-1}{2}}\right)=\underbrace {\ln \left(\ell ^{n}E^{\frac {n}{2}}\right)} _{\text{important}}+\underbrace {\ln \left({\frac {n{\left(2\pi \right)}^{\frac {n}{2}}}{\sqrt {2E}}}\right)} _{\text{drop}}} − ln [ ( n / 2 ) ! ] ≈ − n 2 ln n 2 + n 2 − ln n π = − n 2 ln n ⏟ keep + n 2 ln 2 + n 2 − ln n π ⏟ drop {\displaystyle {\begin{aligned}-\ln[(n/2)!]&\approx -{\frac {n}{2}}\ln {\frac {n}{2}}+{\frac {n}{2}}-\ln {\sqrt {n\pi }}\\&=\underbrace {-{\frac {n}{2}}\ln n} _{\text{keep}}+\underbrace {{\frac {n}{2}}\ln 2+{\frac {n}{2}}-\ln {\sqrt {n\pi }}} _{\text{drop}}\\\end{aligned}}}
Terms are neglected if they exhibit less variation with a parameter, and we compare terms that vary with the same parameter. Entropy is defined with an additive arbitrary constant because the area in phase space depends on what units are used. For that reason it does not matter if entropy is large or small for a given value of E. We instead seek how entropy varies with E, i.e., we seek ∂ S / ∂ E {\displaystyle \partial S/\partial E} :
Combining the important terms: ln Ω E , ℓ = ln ( ℓ n E n 2 ) − n 2 ln n = n ln ℓ + n 2 ln ( E n ) {\displaystyle {\begin{aligned}\ln \Omega _{E,\ell }&=\ln \left(\ell ^{n}E^{\frac {n}{2}}\right)-{\frac {n}{2}}\ln n\\&=n\ln \ell +{\frac {n}{2}}\ln \left({\frac {E}{n}}\right)\\\end{aligned}}}
After approximating the factorial and dropping the small terms, we obtain ln Ω E , ℓ ≈ n ln ℓ + n ln E n + const. = n ln ℓ n + n ln E n ⏟ extensive + n ln n + const. {\displaystyle {\begin{aligned}\ln \Omega _{E,\ell }&\approx n\ln \ell +n\ln {\sqrt {\frac {E}{n}}}+{\text{const.}}\\&=\underbrace {n\ln {\frac {\ell }{n}}+n\ln {\sqrt {\frac {E}{n}}}} _{\text{extensive}}+\,n\ln n+{\text{const.}}\\\end{aligned}}}
In the second expression, the term n ln n {\displaystyle n\ln n} was subtracted and added, using the fact that ln ℓ − ln n = ln ( ℓ / n ) {\displaystyle \ln \ell -\ln n=\ln(\ell /n)} . This was done to highlight exactly how the "entropy" defined here fails to be an extensive property of matter. The first two terms are extensive: if the volume of the system doubles, but gets filled with the same density of particles with the same energy, then each of these terms doubles. But the third term is neither extensive nor intensive and is therefore wrong.
The arbitrary constant has been added because entropy can usually be viewed as being defined up to an arbitrary additive constant. This is especially necessary when entropy is defined as the logarithm of a phase space volume measured in units of momentum-position. Any change in how these units are defined will add or subtract a constant from the value of the entropy.
As discussed above , an extensive form of entropy is recovered if we divide the volume of phase space, Ω E , ℓ {\displaystyle \Omega _{E,\ell }} , by n !. An alternative approach is to argue that the dependence on particle number cannot be trusted on the grounds that changing n {\displaystyle n} also changes the dimensionality of phase space. Such changes in dimensionality lie outside the scope of Hamiltonian mechanics and Liouville's theorem . For that reason it is plausible to allow the arbitrary constant to be a function of n {\displaystyle n} . [ 7 ] Defining the function to be, f ( n ) = − 3 2 n ln n {\displaystyle f(n)=-{\frac {3}{2}}n\ln n} , we have: S = ln Ω E , ℓ ≈ n ln ℓ + n ln E + const. = n ln ℓ + n ln E + f ( n ) ln Ω E , ℓ , n ≈ n ln ℓ n + n ln E n + const. , {\displaystyle {\begin{aligned}S=\ln \Omega _{E,\ell }&\approx n\ln \ell +n\ln {\sqrt {E}}+{\text{const.}}\\&=n\ln \ell +n\ln {\sqrt {E}}+f(n)\\\ln \Omega _{E,\ell ,n}&\approx n\ln {\frac {\ell }{n}}+n\ln {\sqrt {\frac {E}{n}}}+{\text{const.}},\\\end{aligned}}} which has extensive scaling: S ( α E , α ℓ , α n ) = α S ( E , ℓ , n ) {\displaystyle S(\alpha E,\alpha \ell ,\alpha n)=\alpha \,S(E,\ell ,n)}
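A minimal numerical sketch of this scaling, in Python with arbitrary illustrative values (natural units; the numbers have no physical significance):

```python
import math

def S(E, l, n):
    # Dominant terms of ln(Omega) with the constant chosen as f(n) = -(3/2) n ln n,
    # equivalent to n ln(l/n) + (n/2) ln(E/n) as derived above
    return n * math.log(l / n) + 0.5 * n * math.log(E / n)

alpha = 2.0
s1 = S(1.0e3, 50.0, 100)
s2 = S(alpha * 1.0e3, alpha * 50.0, alpha * 100)
print(s2, alpha * s1)  # equal (up to rounding): S(aE, al, an) = a S(E, l, n)
```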
Following Swendsen, [ 5 ] [ 6 ] we allow two systems to exchange particles. This essentially 'makes room' in phase space for particles to enter or leave without requiring a change in the number of dimensions of phase space. The total number of particles is N {\displaystyle N} :
Taking the integral over phase space, we have: Ω E , ℓ , N = ( ∫ d x ? ⋯ ∫ d x ? ⏟ n A terms ∫ d x ? ⋯ ∫ d x N ) ⏟ n B terms ( ∫ d v 1 ∫ d v 2 ⋯ ∫ d v N ) ⏟ ∑ v 2 = 2 E A or ∑ v 2 = 2 E B = ( ℓ A ) n A ( ℓ B ) n B ( N ! n A ! n B ! ) ⏟ combination ( n A π n A / 2 ( n A / 2 ) ! ( 2 E A ) n A − 1 2 ) ⏟ n A − sphere ( n B π n B / 2 ( n B / 2 ) ! ( 2 E B ) n B − 1 2 ) ⏟ n B − sphere {\displaystyle {\begin{aligned}\Omega _{E,\ell ,N}&=\underbrace {\left(\int dx_{?}\cdots \int dx_{?}\right.} _{n_{A}\;{\text{terms}}}\underbrace {\left.\int dx_{?}\cdots \int dx_{N}\right)} _{n_{B}\;{\text{terms}}}\underbrace {\left(\int dv_{1}\int dv_{2}\cdots \int dv_{N}\right)} _{\sum v^{2}=2E_{A}\;{\text{or}}\;\sum v^{2}=2E_{B}}\\[2ex]&={\left(\ell _{A}\right)}^{n_{A}}{\left(\ell _{B}\right)}^{n_{B}}\underbrace {\left({\frac {N!}{n_{A}!n_{B}!}}\right)} _{\text{combination}}\underbrace {\left({\frac {n_{A}\pi ^{n_{A}/2}}{\left(n_{A}/2\right)!}}{\left(2E_{A}\right)}^{\frac {n_{A}-1}{2}}\right)} _{n_{A}-{\text{sphere}}}\underbrace {\left({\frac {n_{B}\pi ^{n_{B}/2}}{\left(n_{B}/2\right)!}}{\left(2E_{B}\right)}^{\frac {n_{B}-1}{2}}\right)} _{n_{B}-{\text{sphere}}}\end{aligned}}}
The question marks (?) serve as a reminder that we may not assume that the first n A particles (i.e. 1 through n A ) are in system A while the remaining n B particles (i.e. n A + 1 through N ) are in system B . (This is further discussed in the next section.)
Taking the logarithm and keeping only the largest terms, we have: S = ln Ω E , ℓ , N ≈ n A ln ( ℓ A n A E A n A ) + n B ln ( ℓ B n B E B n B ) + N ln N + const. {\displaystyle S=\ln \Omega _{E,\ell ,N}\approx n_{A}\ln \left({\frac {\ell _{A}}{n_{A}}}{\sqrt {\frac {E_{A}}{n_{A}}}}\right)+n_{B}\ln \left({\frac {\ell _{B}}{n_{B}}}{\sqrt {\frac {E_{B}}{n_{B}}}}\right)+N\ln N+{\text{const.}}}
This can be interpreted as the sum of the entropy of system A and system B , both extensive. And there is a term, N ln N {\displaystyle N\ln N} , that is not extensive.
The correct (extensive) formulas for systems A and B were obtained because we included all the possible ways that the two systems could exchange particles. The use of combinations (i.e. N choose n A ) was used to ascertain the number of ways N particles can be divided into system A containing n A particles and system B containing n B particles. This counting is not justified on physical grounds, but on the need to integrate over phase space. As will be illustrated below, phase space contains not a single n A -sphere and a single n B -sphere, but instead ( N n A ) = N ! n A ! n B ! {\displaystyle {\binom {N}{n_{A}}}={\frac {N!}{n_{A}!n_{B}!}}} pairs of n -spheres, all situated in the same N -dimensional velocity space. The integral over accessible phase space must include all of these n -spheres, as can be seen in the figure, which shows the actual velocity phase space associated with a gas that consists of three particles. Moreover, this gas has been divided into two systems, A and B .
If we ignore the spatial variables, the phase space of a gas with three particles is three dimensional, which permits one to sketch the n -spheres over which the integral over phase space must be taken. If all three particles are together, the split between the two gases is 3|0. Accessible phase space is delimited by an ordinary sphere ( 2-sphere ) with a radius that is either 2 E A {\displaystyle {\sqrt {2E_{A}}}} or 2 E B {\displaystyle {\sqrt {2E_{B}}}} (depending which system has the particles).
If the split is 2|1, then phase space consists of circles and points . Each circle occupies two dimensions, and for each circle, two points lie on the third axis, equidistant from the center of the circle. In other words, if system A has 2 particles, accessible phase space consists of 3 pairs of n -spheres , each pair being a 1-sphere and a 0-sphere : v 1 2 + v 2 2 = 2 E A , v 3 2 = 2 E B , v 2 2 + v 3 2 = 2 E A , v 1 2 = 2 E B , v 3 2 + v 1 2 = 2 E A , v 2 2 = 2 E B {\displaystyle {\begin{aligned}v_{1}^{2}+v_{2}^{2}&=2E_{A},&v_{3}^{2}&=2E_{B},\\v_{2}^{2}+v_{3}^{2}&=2E_{A},&v_{1}^{2}&=2E_{B},\\v_{3}^{2}+v_{1}^{2}&=2E_{A},&v_{2}^{2}&=2E_{B}\end{aligned}}}
Note that ( 3 2 ) = 3. {\displaystyle {\binom {3}{2}}=3.} | https://en.wikipedia.org/wiki/Gibbs_paradox |
The Gibbs rotational ensemble represents the possible states of a mechanical system in thermal and rotational equilibrium at temperature T {\displaystyle T} and angular velocity ω {\displaystyle {\boldsymbol {\omega }}} . [ 1 ] The Jaynes procedure can be used to obtain this ensemble. [ 2 ] An ensemble is the set of microstates corresponding to a given macrostate .
The Gibbs rotational ensemble assigns a probability p i {\displaystyle p_{i}} to a given microstate characterized by energy E i {\displaystyle E_{i}} and angular momentum J i {\displaystyle \mathbf {J} _{i}} for a given temperature T {\displaystyle T} and rotational velocity ω {\displaystyle {\boldsymbol {\omega }}} . [ 1 ] [ 3 ]
p i = 1 Z e − β ( E i − ω ⋅ J i ) {\displaystyle p_{i}={\frac {1}{Z}}e^{-\beta (E_{i}-{\boldsymbol {\omega }}\cdot \mathbf {J} _{i})}}
where Z {\displaystyle Z} is the partition function
Z = ∑ i e − β ( E i − ω ⋅ J i ) {\displaystyle Z=\sum _{i}e^{-\beta (E_{i}-{\boldsymbol {\omega }}\cdot \mathbf {J} _{i})}}
The Gibbs rotational ensemble can be derived using the same general method as to derive any ensemble, as given by E.T. Jaynes in his 1957 paper Information Theory and Statistical Mechanics. [ 3 ] Let f ( x ) {\displaystyle f(x)} be a function with expectation value
⟨ f ( x ) ⟩ = ∑ i p i f ( x i ) {\displaystyle \langle f(x)\rangle =\sum _{i}p_{i}f(x_{i})}
where p i {\displaystyle p_{i}} is the probability of x i {\displaystyle x_{i}} , which is not known a priori . The probabilities p i {\displaystyle p_{i}} obey normalization
∑ i p i = 1 {\displaystyle \sum _{i}p_{i}=1}
To find p i {\displaystyle p_{i}} , the Shannon entropy H {\displaystyle H} is maximized , where the Shannon entropy goes as
H ∼ − ∑ i p i ln ( p i ) {\displaystyle H\sim -\sum _{i}p_{i}\ln(p_{i})}
The method of Lagrange multipliers is used to maximize H {\displaystyle H} under the constraint of the fixed expectation value ⟨ f ( x ) ⟩ {\displaystyle \langle f(x)\rangle } and the normalization condition, using Lagrange multipliers λ {\displaystyle \lambda } and μ {\displaystyle \mu } to find
p i = e − λ − μ f ( x i ) {\displaystyle p_{i}=e^{-\lambda -\mu f(x_{i})}}
λ {\displaystyle \lambda } is found via normalization:
λ = ln ( ∑ i e − μ f ( x i ) ) = ln ( Z ( μ ) ) {\displaystyle \lambda =\ln \left(\sum _{i}e^{-\mu f(x_{i})}\right)=\ln(Z(\mu ))}
and ⟨ f ( x ) ⟩ {\displaystyle \langle f(x)\rangle } can be written as
⟨ f ( x ) ⟩ = − ∂ ∂ μ ln ( ∑ i e − μ f ( x i ) ) = − ∂ ∂ μ ln ( Z ( μ ) ) {\displaystyle \langle f(x)\rangle =-{\frac {\partial }{\partial \mu }}\ln \left(\sum _{i}e^{-\mu f(x_{i})}\right)=-{\frac {\partial }{\partial \mu }}\ln(Z(\mu ))}
where Z {\displaystyle Z} is the partition function
Z ( μ ) = ∑ i e − μ f ( x i ) {\displaystyle Z(\mu )=\sum _{i}e^{-\mu f(x_{i})}}
This is easily generalized to any number of equations f ( x ) {\displaystyle f(x)} via the incorporation of more Lagrange multipliers. [ 3 ]
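As a toy illustration of this machinery (not from the source; the values f(x_i) and μ below are made up), the expectation value computed from the probabilities matches the derivative of ln Z:

```python
import numpy as np

f = np.array([0.0, 1.0, 2.5])    # hypothetical values f(x_i)
mu = 0.7                         # hypothetical Lagrange multiplier

def lnZ(m):
    return np.log(np.sum(np.exp(-m * f)))   # ln of the partition function

p = np.exp(-mu * f - lnZ(mu))    # p_i = exp(-lambda - mu f(x_i)) with lambda = ln Z
print(np.sum(p))                 # 1.0: normalization fixes lambda
h = 1e-6
print(p @ f, -(lnZ(mu + h) - lnZ(mu - h)) / (2 * h))  # agree: <f> = -d lnZ / d mu
```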
Now investigating the Gibbs rotational ensemble, the method of Lagrange multipliers is again used to maximize the Shannon entropy H {\displaystyle H} , but this time under the constraints of energy expectation value ⟨ E ⟩ {\displaystyle \langle E\rangle } and angular momentum expectation value ⟨ J ⟩ {\displaystyle \langle J\rangle } , [ 3 ] which gives p i {\displaystyle p_{i}} as
p i = e − λ 0 E i − λ 1 ⋅ J i − λ 3 {\displaystyle p_{i}=e^{-\lambda _{0}E_{i}-{\boldsymbol {\lambda }}_{1}\cdot \mathbf {J} _{i}-\lambda _{3}}}
Via normalization, λ 3 {\displaystyle \lambda _{3}} is found to be
λ 3 = ln ( ∑ i e − λ 0 E i − λ 1 ⋅ J i ) = ln ( Z ) {\displaystyle \lambda _{3}=\ln \left(\sum _{i}e^{-\lambda _{0}E_{i}-{\boldsymbol {\lambda }}_{1}\cdot \mathbf {J} _{i}}\right)=\ln(Z)}
Like before, ⟨ E ⟩ {\displaystyle \langle E\rangle } and ⟨ J ⟩ {\displaystyle \langle J\rangle } are given by
⟨ E ⟩ = − ∂ ∂ λ 0 ln ( ∑ i e − λ 0 E i − λ 1 ⋅ J i ) = − ∂ ∂ λ 0 ln ( Z ) {\displaystyle \langle E\rangle =-{\frac {\partial }{\partial \lambda _{0}}}\ln \left(\sum _{i}e^{-\lambda _{0}E_{i}-{\boldsymbol {\lambda }}_{1}\cdot \mathbf {J} _{i}}\right)=-{\frac {\partial }{\partial \lambda _{0}}}\ln \left(Z\right)}
⟨ J ⟩ = − ∂ ∂ λ 1 ln ( ∑ i e − λ 0 E i − λ 1 ⋅ J i ) = − ∂ ∂ λ 1 ln ( Z ) {\displaystyle \langle J\rangle =-{\frac {\partial }{\partial \lambda _{1}}}\ln \left(\sum _{i}e^{-\lambda _{0}E_{i}-{\boldsymbol {\lambda }}_{1}\cdot \mathbf {J} _{i}}\right)=-{\frac {\partial }{\partial \lambda _{1}}}\ln(Z)}
The entropy S {\displaystyle S} of the system is given by
S = − k ∑ i p i ln ( p i ) = k ( λ 0 ⟨ E ⟩ + λ 1 ⋅ ⟨ J ⟩ + ln ( Z ) ) {\displaystyle S=-k\sum _{i}p_{i}\ln(p_{i})=k(\lambda _{0}\langle E\rangle +{\boldsymbol {\lambda }}_{1}\cdot \langle \mathbf {J} \rangle +\ln(Z))}
such that
d S = k ( λ 0 d ⟨ E ⟩ + λ 1 d ⟨ J ⟩ + d ln ( Z ) ) {\displaystyle dS=k\left(\lambda _{0}\mathrm {d} \langle E\rangle +{\boldsymbol {\lambda }}_{1}\mathrm {d} \langle \mathbf {J} \rangle +\mathrm {d} \ln(Z)\right)}
where k {\displaystyle k} is the Boltzmann constant . The system is assumed to be in equilibrium, follow the laws of thermodynamics , and have fixed uniform temperature T {\displaystyle T} and angular velocity ω {\displaystyle {\boldsymbol {\omega }}} . The first law of thermodynamics as applied to this system is
d E = d Q + ω ⋅ d ⟨ J ⟩ {\displaystyle \mathrm {d} E=\mathrm {d} Q+{\boldsymbol {\omega }}\cdot \mathrm {d} \langle \mathbf {J} \rangle }
Recalling the entropy differential d S = d Q T {\displaystyle \mathrm {d} S={\frac {\mathrm {d} Q}{T}}}
Combining the first law of thermodynamics with the entropy differential gives
d S = d E T − ω ⋅ d ⟨ J ⟩ T {\displaystyle \mathrm {d} S={\frac {\mathrm {d} E}{T}}-{\frac {{\boldsymbol {\omega }}\cdot \mathrm {d} \langle \mathbf {J} \rangle }{T}}}
Comparing this result with the entropy differential given by entropy maximization allows determination of λ 0 {\displaystyle \lambda _{0}} and λ 1 {\displaystyle {\boldsymbol {\lambda }}_{1}}
λ 0 = β {\displaystyle \lambda _{0}=\beta }
λ 1 = − β ω {\displaystyle {\boldsymbol {\lambda }}_{1}=-\beta {\boldsymbol {\omega }}}
which allows the probability of a given state p i {\displaystyle p_{i}} to be written as
p i = 1 Z e − β ( E i − ω ⋅ J i ) {\displaystyle p_{i}={\frac {1}{Z}}e^{-\beta (E_{i}-{\boldsymbol {\omega }}\cdot \mathbf {J} _{i})}}
which is recognized as the probability of some microstate given a prescribed macrostate using the Gibbs rotational ensemble. [ 1 ] [ 3 ] [ 2 ] The term E i − ω ⋅ J i {\displaystyle E_{i}-{\boldsymbol {\omega }}\cdot \mathbf {J} _{i}} can be recognized as the effective Hamiltonian H {\displaystyle {\mathcal {H}}} for the system, which then simplifies the Gibbs rotational partition function to that of a normal canonical system
Z = ∑ i e − β H i {\displaystyle Z=\sum _{i}e^{-\beta {\mathcal {H}}_{i}}}
The Gibbs rotational ensemble is useful for calculations regarding rotating systems. It is commonly used for describing particle distribution in centrifuges. For example, take a rotating cylinder (height Z {\displaystyle Z} , radius R {\displaystyle R} ) with fixed particle number N {\displaystyle N} , fixed volume V {\displaystyle V} , fixed average energy ⟨ E ⟩ {\displaystyle \langle E\rangle } , and average angular momentum ⟨ J ⟩ {\displaystyle \langle \mathbf {J} \rangle } . The expectation value of number density of particles ⟨ n ( r ) ⟩ {\displaystyle \langle n(r)\rangle } at radius r {\displaystyle r} can be written as
⟨ n ( r ) ⟩ = 1 Z ∫ n ( r ) d 3 p d 3 q h 3 e − β ( E − ω ⋅ J ) {\displaystyle \langle n(r)\rangle ={\frac {1}{Z}}\int n(r){\frac {\mathrm {d} ^{3}p\;\mathrm {d} ^{3}q}{h^{3}}}e^{-\beta (E-{\boldsymbol {\omega }}\cdot \mathbf {J} )}}
Using the Gibbs rotational partition function, Z {\displaystyle Z} can be calculated to be (in the expression below, the Z {\displaystyle Z} in the numerator denotes the cylinder height, not the partition function)
Z = π 5 / 2 Z β m ( e 1 2 β m R 2 ω 2 − 1 ) 2 β 3 h 3 ω 2 {\displaystyle Z={\frac {\pi ^{5/2}Z{\sqrt {\beta m}}\left(e^{{\frac {1}{2}}\beta mR^{2}\omega ^{2}}-1\right)}{{\sqrt {2}}\beta ^{3}h^{3}\omega ^{2}}}}
Density of a particle at a given point can be thought of as unity divided by an infinitesimal volume, which can be represented as a delta function.
n ( r ) = 1 d r r d θ d z → δ ( r ′ − r ) δ ( θ ′ − θ ) δ ( z ′ − z ) r ′ {\displaystyle n(r)={\frac {1}{\mathrm {d} r\;r\;\mathrm {d} \theta \;\mathrm {d} z}}\rightarrow {\frac {\delta (r'-r)\delta (\theta '-\theta )\delta (z'-z)}{r'}}}
which finally gives ⟨ n ( r ) ⟩ {\displaystyle \langle n(r)\rangle } as
⟨ n ( r ) ⟩ = β m ω 2 2 π Z e 1 2 β m r 2 ω 2 e 1 2 β m R 2 ω 2 − 1 {\displaystyle \langle n(r)\rangle ={\frac {\beta m\omega ^{2}}{2\pi Z}}{\frac {e^{{\frac {1}{2}}\beta mr^{2}\omega ^{2}}}{e^{{\frac {1}{2}}\beta mR^{2}\omega ^{2}}-1}}}
which is the expected result.
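A quick numerical sanity check of this result (a sketch with assumed unit values; Zh stands for the cylinder height Z): integrating ⟨n(r)⟩ over the cylinder recovers exactly one particle.

```python
import numpy as np
from scipy.integrate import quad

beta, m, omega, R, Zh = 1.0, 1.0, 2.0, 1.0, 1.0   # hypothetical unit values

def n_r(r):
    a = 0.5 * beta * m * omega**2
    # <n(r)> as derived above; np.expm1(x) computes e^x - 1
    return (beta * m * omega**2 / (2 * np.pi * Zh)) * np.exp(a * r**2) / np.expm1(a * R**2)

total, _ = quad(lambda r: n_r(r) * 2 * np.pi * r * Zh, 0.0, R)
print(total)   # ~1.0: the density integrates to a single particle in the cylinder
```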
The Grand canonical ensemble and the Gibbs canonical ensemble are two different statistical ensembles used in statistical mechanics to describe systems with different constraints.
The grand canonical ensemble describes a system that can exchange both energy and particles with a reservoir. It is characterized by three variables: the temperature (T), chemical potential (μ), and volume (V) of the system. [ 4 ] The chemical potential determines the average particle number in this ensemble, which allows for some variation in the number of particles. The grand canonical ensemble is commonly used to study systems with a fixed temperature and chemical potential, but a variable particle number, such as gases in contact with a particle reservoir. [ 5 ]
On the other hand, the Gibbs canonical ensemble describes a system that can exchange energy but has a fixed number of particles. It is characterized by two variables: the temperature (T) and volume (V) of the system. In this ensemble, the energy of the system can fluctuate, but the number of particles remains fixed. The Gibbs canonical ensemble is commonly used to study systems with a fixed temperature and particle number, but variable energy, such as systems in thermal equilibrium. [ 6 ] | https://en.wikipedia.org/wiki/Gibbs_rotational_ensemble |
In probability theory and statistical mechanics , a Gibbs state is an equilibrium probability distribution which remains invariant under future evolution of the system. For example, a stationary or steady-state distribution of a Markov chain , such as that achieved by running a Markov chain Monte Carlo iteration for a sufficiently long time, is a Gibbs state.
Precisely, suppose L {\displaystyle L} is a generator of evolutions for an initial state ρ 0 {\displaystyle \rho _{0}} , so that the state at any later time is given by ρ ( t ) = e L t [ ρ 0 ] {\displaystyle \rho (t)=e^{Lt}[\rho _{0}]} . Then the condition for ρ ∞ {\displaystyle \rho _{\infty }} to be a Gibbs state is L [ ρ ∞ ] = 0 {\displaystyle L[\rho _{\infty }]=0} , i.e. the state is invariant under further evolution.
In physics there may be several physically distinct Gibbs states in which a system may be trapped, particularly at lower temperatures.
They are named after Josiah Willard Gibbs , for his work in determining equilibrium properties of statistical ensembles . Gibbs himself referred to this type of statistical ensemble as being in "statistical equilibrium". [ 1 ] | https://en.wikipedia.org/wiki/Gibbs_state
The Gibbs–Donnan effect (also known as the Donnan's effect , Donnan law , Donnan equilibrium , or Gibbs–Donnan equilibrium ) is a name for the behaviour of charged particles near a semi-permeable membrane that sometimes fail to distribute evenly across the two sides of the membrane. [ 1 ] The usual cause is the presence of a different charged substance that is unable to pass through the membrane and thus creates an uneven electrical charge . [ 2 ] For example, the large anionic proteins in blood plasma cannot pass through capillary walls. Because small cations are attracted to, but not bound by, the proteins, small anions will cross capillary walls away from the anionic proteins more readily than small cations.
Thus, some ionic species can pass through the barrier while others cannot. The solutions may be gels or colloids as well as solutions of electrolytes , and as such the phase boundary between gels, or a gel and a liquid, can also act as a selective barrier. The electric potential arising between two such solutions is called the Donnan potential .
The effect is named after the American Josiah Willard Gibbs who proposed it in 1878 and the British chemist Frederick G. Donnan who studied it experimentally in 1911. [ 3 ]
The Donnan equilibrium is prominent in the triphasic model for articular cartilage proposed by Mow and Lai, [ 4 ] as well as in electrochemical fuel cells and dialysis .
The Donnan effect is the extra osmotic pressure attributable to cations (Na + and K + ) attached to dissolved plasma proteins.
The presence of a charged impermeant ion (for example, a protein) on one side of a membrane will result in an asymmetric distribution of permeant charged ions. The Gibbs–Donnan equation at equilibrium states (assuming permeant ions are Na + and Cl − ): [ Na + ] α [ Cl − ] α = [ Na + ] β [ Cl − ] β {\displaystyle [{\text{Na}}^{+}]_{\alpha }[{\text{Cl}}^{-}]_{\alpha }=[{\text{Na}}^{+}]_{\beta }[{\text{Cl}}^{-}]_{\beta }} Equivalently, [ Na + ] α [ Na + ] β = [ Cl − ] β [ Cl − ] α {\displaystyle {\frac {[{\text{Na}}^{+}]_{\alpha }}{[{\text{Na}}^{+}]_{\beta }}}={\frac {[{\text{Cl}}^{-}]_{\beta }}{[{\text{Cl}}^{-}]_{\alpha }}}}
Note that the two sides α and β are no longer in osmotic equilibrium (i.e. the total osmolytes on each side are not the same)
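A worked example makes the equilibrium condition concrete (hypothetical concentrations and equal volumes; a sketch, not a physiological calculation):

```python
# Side alpha starts with 0.1 M Na+ balancing 0.1 M impermeant protein anion P-;
# side beta starts with 0.1 M NaCl. Let x mol/L of Na+ and Cl- cross into alpha.
# Equilibrium: [Na]a[Cl]a = [Na]b[Cl]b  ->  (0.1 + x) * x = (0.1 - x)**2,
# which reduces to 0.3 x = 0.01.
x = 0.01 / 0.3
na_a, cl_a = 0.1 + x, x
na_b, cl_b = 0.1 - x, 0.1 - x
print(na_a * cl_a, na_b * cl_b)          # equal ion products (~4.44e-3)
print(na_a + cl_a + 0.1, na_b + cl_b)    # total osmolytes differ, as noted above
```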
In vivo , ion balance does not equilibrate at the proportions that would be predicted by the Gibbs–Donnan model, because the cell cannot tolerate the attendant large influx of water. This is balanced by instating a functionally impermeant cation, Na + , extracellularly to counter the anionic protein. Na + does cross the membrane via leak channels (the permeability is approximately 1/10 that of K + , the most permeant ion) but, as per the pump-leak model, it is extruded by the Na + /K + -ATPase . [ 5 ]
Because there is a difference in concentration of ions on either side of the membrane, the pH (defined using the relative activity ) may also differ when protons are involved [ citation needed ] . In many instances, from ultrafiltration of proteins to ion exchange chromatography, the pH of the buffer adjacent to the charged groups of the membrane is different from the pH of the rest of the buffer solution. [ 6 ] When the charged groups are negative (basic), then they will attract protons so that the pH will be lower than the surrounding buffer. When the charged groups are positive (acidic), then they will repel protons so that the pH will be higher than the surrounding buffer.
When tissue cells are in a protein-containing fluid, the Donnan effect of the cytoplasmic proteins is equal and opposite to the Donnan effect of the extracellular proteins. The opposing Donnan effects cause chloride ions to migrate inside the cell, increasing the intracellular chloride concentration. The Donnan effect may explain why some red blood cells do not have active sodium pumps; the effect relieves the osmotic pressure of plasma proteins, which is why sodium pumping is less important for maintaining the cell volume . [ 7 ]
Brain tissue swelling, known as cerebral oedema , results from brain injury and other traumatic head injuries that can increase intracranial pressure (ICP). Negatively charged molecules within cells create a fixed charge density, which increases intracranial pressure through the Donnan effect. ATP pumps maintain a negative membrane potential even though negative charges leak across the membrane; this action establishes a chemical and electrical gradient. [ 8 ]
The negative charge in the cell and ions outside the cell creates a thermodynamic potential; if damage occurs to the brain and cells lose their membrane integrity, ions will rush into the cell to balance chemical and electrical gradients that were previously established. The membrane voltage will become zero, but the chemical gradient will still exist. To neutralize the negative charges within the cell, cations flow in, which increases the osmotic pressure inside relative to the outside of the cell. The increased osmotic pressure forces water to flow into the cell and tissue swelling occurs. [ 9 ] | https://en.wikipedia.org/wiki/Gibbs–Donnan_effect |
In thermodynamics , the Gibbs–Duhem equation describes the relationship between changes in chemical potential for components in a thermodynamic system : [ 1 ]
∑ i = 1 I N i d μ i = − S d T + V d p {\displaystyle \sum _{i=1}^{I}N_{i}\mathrm {d} \mu _{i}=-S\mathrm {d} T+V\mathrm {d} p}
where N i {\displaystyle N_{i}} is the number of moles of component i , d μ i {\displaystyle i,\mathrm {d} \mu _{i}} the infinitesimal increase in chemical potential for this component, S {\displaystyle S} the entropy , T {\displaystyle T} the absolute temperature , V {\displaystyle V} volume and p {\displaystyle p} the pressure . I {\displaystyle I} is the number of different components in the system. This equation shows that in thermodynamics intensive properties are not independent but related, making it a mathematical statement of the state postulate . When pressure and temperature are variable, only I − 1 {\displaystyle I-1} of I {\displaystyle I} components have independent values for chemical potential and Gibbs' phase rule follows.
The Gibbs−Duhem equation applies to homogeneous thermodynamic systems. It does not apply to inhomogeneous systems such as small thermodynamic systems, [ 2 ] systems subject to long-range forces like electricity and gravity, [ 3 ] [ 4 ] or to fluids in porous media. [ 5 ]
The equation is named after Josiah Willard Gibbs and Pierre Duhem .
The Gibbs–Duhem equation follows from assuming the system can be scaled in amount perfectly. Gibbs derived the relationship based on the thought experiment of varying the amount of substance starting from zero, keeping its nature and state the same. [ 3 ]
Mathematically, this means the internal energy U {\displaystyle U} scales with its extensive variables as follows: [ 6 ] U ( λ S , λ V , λ N 1 , λ N 2 , … ) = λ U ( S , V , N 1 , N 2 , … ) {\displaystyle U(\lambda S,\lambda V,\lambda N_{1},\lambda N_{2},\ldots )=\lambda U(S,V,N_{1},N_{2},\ldots )} where S , V , N 1 , N 2 , … {\displaystyle S,V,N_{1},N_{2},\ldots } are all of the extensive variables of system: entropy, volume, and particle numbers. The internal energy is thus a first-order homogenous function . Applying Euler's homogeneous function theorem , one finds the following relation:
U = T S − p V + ∑ i = 1 I μ i N i {\displaystyle U=TS-pV+\sum _{i=1}^{I}\mu _{i}N_{i}}
Taking the total differential, one finds
d U = T d S + S d T − p d V − V d p + ∑ i = 1 I μ i d N i + ∑ i = 1 I N i d μ i {\displaystyle \mathrm {d} U=T\mathrm {d} S+S\mathrm {d} T-p\mathrm {d} V-V\mathrm {d} p+\sum _{i=1}^{I}\mu _{i}\mathrm {d} N_{i}+\sum _{i=1}^{I}N_{i}\mathrm {d} \mu _{i}}
From both sides one can subtract the fundamental thermodynamic relation , d U = T d S − p d V + ∑ i = 1 I μ i d N i {\displaystyle \mathrm {d} U=T\mathrm {d} S-p\mathrm {d} V+\sum _{i=1}^{I}\mu _{i}\mathrm {d} N_{i}}
yielding the Gibbs–Duhem equation [ 6 ]
0 = S d T − V d p + ∑ i = 1 I N i d μ i . {\displaystyle 0=S\mathrm {d} T-V\mathrm {d} p+\sum _{i=1}^{I}N_{i}\mathrm {d} \mu _{i}.}
By normalizing the above equation by the extent of a system, such as the total number of moles, the Gibbs–Duhem equation provides a relationship between the intensive variables of the system. For a simple system with I {\displaystyle I} different components, there will be I + 1 {\displaystyle I+1} independent parameters or "degrees of freedom". For example, if we know a gas cylinder filled with pure nitrogen is at room temperature (298 K) and 25 MPa, we can determine the fluid density (258 kg/m 3 ), enthalpy (272 kJ/kg), entropy (5.07 kJ/kg⋅K) or any other intensive thermodynamic variable. [ 7 ] If instead the cylinder contains a nitrogen/oxygen mixture, we require an additional piece of information, usually the ratio of oxygen-to-nitrogen.
If multiple phases of matter are present, the chemical potentials across a phase boundary are equal. [ 8 ] Combining expressions for the Gibbs–Duhem equation in each phase and assuming thermodynamic equilibrium (i.e. that the temperature and pressure are constant throughout the system), we recover the Gibbs' phase rule .
One particularly useful expression arises when considering binary solutions. [ 9 ] At constant P ( isobaric ) and T ( isothermal ) it becomes:
0 = N 1 d μ 1 + N 2 d μ 2 {\displaystyle 0=N_{1}\mathrm {d} \mu _{1}+N_{2}\mathrm {d} \mu _{2}}
or, normalizing by total number of moles in the system N 1 + N 2 , {\displaystyle N_{1}+N_{2},} substituting in the definition of activity coefficient γ {\displaystyle \gamma } and using the identity x 1 + x 2 = 1 {\displaystyle x_{1}+x_{2}=1} : [ 10 ]
0 = x 1 d ln ( γ 1 ) + x 2 d ln ( γ 2 ) {\displaystyle 0=x_{1}\mathrm {d} \ln(\gamma _{1})+x_{2}\mathrm {d} \ln(\gamma _{2})}
This equation is instrumental in the calculation of thermodynamically consistent and thus more accurate expressions for the vapor pressure of a binary mixture from limited experimental data. One can develop this further to the Duhem–Margules equation which relates to vapor pressures directly.
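As an illustration of this consistency requirement, the sketch below checks the relation for the two-suffix Margules model ln γ 1 = A x 2 2 , ln γ 2 = A x 1 2 , a standard textbook model chosen here for illustration with an arbitrary parameter value:

```python
import numpy as np

A = 1.2                          # arbitrary Margules parameter
x1 = np.linspace(0.01, 0.99, 99)
x2 = 1.0 - x1
dlng1_dx1 = -2.0 * A * x2        # d ln(gamma_1)/d x1 for ln(gamma_1) = A x2^2
dlng2_dx1 = 2.0 * A * x1         # d ln(gamma_2)/d x1 for ln(gamma_2) = A x1^2
residual = x1 * dlng1_dx1 + x2 * dlng2_dx1
print(np.max(np.abs(residual)))  # 0 to machine precision: the model is consistent
```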
Lawrence Stamper Darken has shown that the Gibbs–Duhem equation can be applied to the determination of chemical potentials of components from a multicomponent system from experimental data regarding the chemical potential G 2 ¯ {\displaystyle {\bar {G_{2}}}} of only one component (here component 2) at all compositions. He has deduced the following relation [ 11 ]
G 2 ¯ = G + ( 1 − x 2 ) ( ∂ G ∂ x 2 ) x 1 x 3 {\displaystyle {\bar {G_{2}}}=G+(1-x_{2})\left({\frac {\partial G}{\partial x_{2}}}\right)_{\frac {x_{1}}{x_{3}}}}
where the x i are the amount (mole) fractions of the components.
Making some rearrangements and dividing by (1 – x 2 ) 2 gives:
G ( 1 − x 2 ) 2 + 1 1 − x 2 ( ∂ G ∂ x 2 ) x 1 x 3 = G 2 ¯ ( 1 − x 2 ) 2 {\displaystyle {\frac {G}{(1-x_{2})^{2}}}+{\frac {1}{1-x_{2}}}\left({\frac {\partial G}{\partial x_{2}}}\right)_{\frac {x_{1}}{x_{3}}}={\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}}
or, equivalently,
( ∂ ∂ x 2 G 1 − x 2 ) x 1 x 3 = G 2 ¯ ( 1 − x 2 ) 2 {\displaystyle \left({\frac {\partial }{\partial x_{2}}}{\frac {G}{1-x_{2}}}\right)_{\frac {x_{1}}{x_{3}}}={\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}}
The derivative with respect to one mole fraction x 2 is taken at constant ratios of amounts (and therefore of mole fractions) of the other components of the solution representable in a diagram like ternary plot .
Integrating the last equality from x 2 = 1 {\displaystyle x_{2}=1} to x 2 {\displaystyle x_{2}} gives:
G − ( 1 − x 2 ) lim x 2 → 1 G 1 − x 2 = ( 1 − x 2 ) ∫ 1 x 2 G 2 ¯ ( 1 − x 2 ) 2 d x 2 {\displaystyle G-(1-x_{2})\lim _{x_{2}\to 1}{\frac {G}{1-x_{2}}}=(1-x_{2})\int _{1}^{x_{2}}{\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}dx_{2}}
Applying L'Hôpital's rule gives:
lim x 2 → 1 G 1 − x 2 = lim x 2 → 1 ( ∂ G ∂ x 2 ) x 1 x 3 . {\displaystyle \lim _{x_{2}\to 1}{\frac {G}{1-x_{2}}}=\lim _{x_{2}\to 1}\left({\frac {\partial G}{\partial x_{2}}}\right)_{\frac {x_{1}}{x_{3}}}.}
This becomes further:
lim x 2 → 1 G 1 − x 2 = − lim x 2 → 1 G 2 ¯ − G 1 − x 2 . {\displaystyle \lim _{x_{2}\to 1}{\frac {G}{1-x_{2}}}=-\lim _{x_{2}\to 1}{\frac {{\bar {G_{2}}}-G}{1-x_{2}}}.}
Expressing the mole fractions of components 1 and 3 as functions of the component 2 mole fraction and the binary mole ratios:
x 1 = 1 − x 2 1 + x 3 x 1 {\displaystyle x_{1}={\frac {1-x_{2}}{1+{\frac {x_{3}}{x_{1}}}}}} x 3 = 1 − x 2 1 + x 1 x 3 {\displaystyle x_{3}={\frac {1-x_{2}}{1+{\frac {x_{1}}{x_{3}}}}}}
and the sum of partial molar quantities
G = ∑ i = 1 3 x i G i ¯ , {\displaystyle G=\sum _{i=1}^{3}x_{i}{\bar {G_{i}}},}
gives
G = x 1 ( G 1 ¯ ) x 2 = 1 + x 3 ( G 3 ¯ ) x 2 = 1 + ( 1 − x 2 ) ∫ 1 x 2 G 2 ¯ ( 1 − x 2 ) 2 d x 2 {\displaystyle G=x_{1}({\bar {G_{1}}})_{x_{2}=1}+x_{3}({\bar {G_{3}}})_{x_{2}=1}+(1-x_{2})\int _{1}^{x_{2}}{\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}dx_{2}}
( G 1 ¯ ) x 2 = 1 {\displaystyle ({\bar {G_{1}}})_{x_{2}=1}} and ( G 3 ¯ ) x 2 = 1 {\displaystyle ({\bar {G_{3}}})_{x_{2}=1}} are constants which can be determined from the binary systems 1–2 and 2–3. These constants can be obtained from the previous equality by putting the complementary mole fraction x 3 = 0 for x 1 and vice versa.
Thus
( G 1 ¯ ) x 2 = 1 = − ( ∫ 1 0 G 2 ¯ ( 1 − x 2 ) 2 d x 2 ) x 3 = 0 {\displaystyle ({\bar {G_{1}}})_{x_{2}=1}=-\left(\int _{1}^{0}{\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}dx_{2}\right)_{x_{3}=0}}
and
( G 3 ¯ ) x 2 = 1 = − ( ∫ 1 0 G 2 ¯ ( 1 − x 2 ) 2 d x 2 ) x 1 = 0 {\displaystyle ({\bar {G_{3}}})_{x_{2}=1}=-\left(\int _{1}^{0}{\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}dx_{2}\right)_{x_{1}=0}}
The final expression is given by substitution of these constants into the previous equation:
G = ( 1 − x 2 ) ( ∫ 1 x 2 G 2 ¯ ( 1 − x 2 ) 2 d x 2 ) x 1 x 3 − x 1 ( ∫ 1 0 G 2 ¯ ( 1 − x 2 ) 2 d x 2 ) x 3 = 0 − x 3 ( ∫ 1 0 G 2 ¯ ( 1 − x 2 ) 2 d x 2 ) x 1 = 0 {\displaystyle G=(1-x_{2})\left(\int _{1}^{x_{2}}{\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}dx_{2}\right)_{\frac {x_{1}}{x_{3}}}-x_{1}\left(\int _{1}^{0}{\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}dx_{2}\right)_{x_{3}=0}-x_{3}\left(\int _{1}^{0}{\frac {\bar {G_{2}}}{(1-x_{2})^{2}}}dx_{2}\right)_{x_{1}=0}} | https://en.wikipedia.org/wiki/Gibbs–Duhem_equation |
The Gibbs–Thomson effect, in common physics usage, refers to variations in vapor pressure or chemical potential across a curved surface or interface. The existence of a positive interfacial energy will increase the energy required to form small particles with high curvature, and these particles will exhibit an increased vapor pressure. See Ostwald–Freundlich equation .
More specifically, the Gibbs–Thomson effect refers to the observation that small crystals that are in equilibrium with their liquid, melt at a lower temperature than large crystals. In cases of confined geometry, such as liquids contained within porous media, this leads to a depression in the freezing point / melting point that is inversely proportional to the pore size, as given by the Gibbs–Thomson equation .
The technique is closely related to using gas adsorption to measure pore sizes, but uses the Gibbs–Thomson equation rather than the Kelvin equation . They are both particular cases of the Gibbs Equations of Josiah Willard Gibbs : the Kelvin equation is the constant temperature case, and the Gibbs–Thomson equation is the constant pressure case. [ 1 ] This behaviour is closely related to the capillary effect and both are due to the change in bulk free energy caused by the curvature of an interfacial surface under tension. [ 2 ] [ 3 ] The original equation only applies to isolated particles, but with the addition of surface interaction terms (usually expressed in terms of the contact wetting angle) can be modified to apply to liquids and their crystals in porous media. As such it has given rise to various related techniques for measuring pore size distributions. (See Thermoporometry and cryoporometry .)
The Gibbs–Thomson effect lowers both melting and freezing point, and also raises boiling point. However, simple cooling of an all-liquid sample usually leads to a state of non-equilibrium super cooling and only eventual non-equilibrium freezing. To obtain a measurement of the equilibrium freezing event, it is necessary to first cool enough to freeze a sample with excess liquid outside the pores, then warm the sample until the liquid in the pores is all melted, but the bulk material is still frozen. Then, on re-cooling the equilibrium freezing event can be measured, as the external ice will then grow into the pores. [ 4 ] [ 5 ] This is in effect an "ice intrusion" measurement (cf. mercury intrusion ), and as such in part may provide information on pore throat properties. The melting event can be expected to provide more accurate information on the pore body.
For an isolated spherical solid particle of diameter d {\displaystyle d} in its own liquid, the Gibbs–Thomson equation for the structural melting point depression can be written: [ 6 ]
Δ T m ( d ) = T m B − T m ( d ) = T m B 4 σ s l H f ρ s d {\displaystyle \Delta \,T_{m}(d)=T_{mB}-T_{m}(d)=T_{mB}{\frac {4\sigma _{sl}}{H_{f}\rho _{s}d}}}
where:
Very similar equations may be applied to the growth and melting of crystals in the confined geometry of porous systems. However the geometry term for the crystal-liquid interface may be different, and there may be additional surface energy terms to consider, which can be written as a wetting angle term cos ϕ {\displaystyle \cos \phi \,} . The angle is usually considered to be near 180°. In cylindrical pores there is some evidence that the freezing interface may be spherical, while the melting interface may be cylindrical, based on preliminary measurements for the measured ratio for Δ T f / Δ T m {\displaystyle \Delta \,T_{f}/\Delta \,T_{m}} in cylindrical pores. [ 7 ]
Thus for a spherical interface between a non-wetting crystal and its own liquid, in an infinite cylindrical pore of diameter x {\displaystyle x} , the structural melting point depression
is given by: [ 8 ]
Δ T m ( x ) = T m B − T m ( x ) = − T m B 4 σ s l cos ϕ H f ρ s x {\displaystyle \Delta \,T_{m}(x)=T_{mB}-T_{m}(x)=-T_{mB}{\frac {4\sigma \,_{sl}\cos \phi \,}{H_{f}\rho \,_{s}x}}}
The Gibbs–Thomson equation may be written in a compact form: [ 9 ]
Δ T m ( x ) = k G T x {\displaystyle \Delta \,T_{m}(x)={\frac {k_{GT}}{x}}}
where the Gibbs–Thomson coefficient k G T {\displaystyle k_{GT}} assumes different values for different liquids [ 6 ] [ 7 ] and different interfacial geometries (spherical/cylindrical/planar). [ 7 ]
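In pore-size measurement the compact form is used in reverse: a measured melting-point depression yields a pore diameter. A minimal sketch (the coefficient below is an assumed illustrative value, not a tabulated one):

```python
k_GT = 50.0          # K*nm, assumed Gibbs-Thomson coefficient for illustration
delta_T = 2.5        # K, measured structural melting-point depression
x = k_GT / delta_T   # pore diameter from delta_T = k_GT / x
print(x, "nm")       # 20.0 nm
```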
In more detail: [ 1 ] [ 10 ]
Δ T m ( x ) = k G T x = k g k s k i x {\displaystyle \Delta \,T_{m}(x)={\frac {k_{GT}}{x}}={\frac {k_{g}\,k_{s}\,k_{i}}{x}}}
where:
As early as 1886, Robert von Helmholtz (son of the German physicist Hermann von Helmholtz ) had observed that finely dispersed liquids have a higher vapor pressure. [ 11 ] By 1906, the German physical chemist Friedrich Wilhelm Küster (1861–1917) had predicted that since the vapor pressure of a finely pulverized volatile solid is greater than the vapor pressure of the bulk solid, then the melting point of the fine powder should be lower than that of the bulk solid. [ 12 ] Investigators such as the Russian physical chemists Pavel Nikolaevich Pavlov (or Pawlow (in German), 1872–1953) and Peter Petrovich von Weymarn (1879–1935), among others, searched for and eventually observed such melting point depression. [ 13 ] By 1932, Czech investigator Paul Kubelka (1900–1956) had observed that the melting point of iodine in activated charcoal is depressed as much as 100 °C. [ 14 ] Investigators recognized that the melting point depression occurred when the change in surface energy was significant compared to the latent heat of the phase transition, which condition obtained in the case of very small particles. [ 15 ]
Neither Josiah Willard Gibbs nor William Thomson ( Lord Kelvin ) derived the Gibbs–Thomson equation. [ 16 ] Also, although many sources claim that British physicist J. J. Thomson derived the Gibbs–Thomson equation in 1888, he did not. [ 17 ] Early in the 20th century, investigators derived precursors of the Gibbs–Thomson equation. [ 18 ] However, in 1920, the Gibbs–Thomson equation was first derived in its modern form by two researchers working independently: Friedrich Meissner, a student of the Estonian-German physical chemist Gustav Tammann , and Ernst Rie (1896–1921), an Austrian physicist at the University of Vienna. [ 19 ] [ 20 ] These early investigators did not call the relation the "Gibbs–Thomson" equation. That name was in use by 1910 or earlier; [ 21 ] it originally referred to equations concerning the adsorption of solutes by interfaces between two phases — equations that Gibbs and then J. J. Thomson derived. [ 22 ] Hence, in the name "Gibbs–Thomson" equation, "Thomson" refers to J. J. Thomson, not William Thomson (Lord Kelvin).
In 1871, William Thomson published an equation describing capillary action and relating the curvature of a liquid-vapor interface to the vapor pressure: [ 23 ]
p ( r 1 , r 2 ) = P − γ ρ vapor ( ρ liquid − ρ vapor ) ( 1 r 1 + 1 r 2 ) {\displaystyle p(r_{1},r_{2})=P-{\frac {\gamma \,\rho _{\text{vapor}}}{(\rho _{\text{liquid}}-\rho _{\text{vapor}})}}\left({\frac {1}{r_{1}}}+{\frac {1}{r_{2}}}\right)}
where:
In his dissertation of 1885, Robert von Helmholtz (son of German physicist Hermann von Helmholtz ) showed how the Ostwald–Freundlich equation
ln ( p ( r ) P ) = 2 γ V molecule k B T r {\displaystyle \ln \left({\frac {p(r)}{P}}\right)={\frac {2\gamma V_{\text{molecule}}}{k_{B}Tr}}}
could be derived from Kelvin's equation. [ 24 ] [ 25 ] The Gibbs–Thomson equation can then be derived from the Ostwald–Freundlich equation via a simple substitution using the integrated form of the Clausius–Clapeyron relation : [ 26 ]
ln ( P 2 P 1 ) = L R ( 1 T 1 − 1 T 2 ) . {\displaystyle \ln \left({\frac {P_{2}}{P_{1}}}\right)={\frac {L}{R}}\left({\frac {1}{T_{1}}}-{\frac {1}{T_{2}}}\right).}
The Gibbs–Thomson equation can also be derived directly from Gibbs' equation for the energy of an interface between phases. [ 27 ] [ 28 ]
It should be mentioned that in the literature, there is still not agreement about the specific equation to which the name "Gibbs–Thomson equation" refers. For example, in the case of some authors, it's another name for the "Ostwald–Freundlich equation" [ 29 ] —which, in turn, is often called the "Kelvin equation"—whereas in the case of other authors, the "Gibbs–Thomson relation" is the Gibbs free energy that's required to expand the interface, [ 30 ] and so forth. | https://en.wikipedia.org/wiki/Gibbs–Thomson_equation |
Instructions per second ( IPS ) is a measure of a computer 's processor speed. For complex instruction set computers (CISCs), different instructions take different amounts of time, so the value measured depends on the instruction mix; even for comparing processors in the same family the IPS measurement can be problematic. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches and no cache contention , whereas realistic workloads typically lead to significantly lower IPS values. Memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, synthetic benchmarks such as Dhrystone are now generally used to estimate computer performance in commonly used applications, and raw IPS has fallen into disuse.
The term is commonly used in association with a metric prefix (k, M, G, T, P, or E) to form kilo instructions per second ( kIPS ), mega instructions per second ( MIPS ), giga instructions per second ( GIPS ) and so on. Formerly TIPS was used occasionally for "thousand IPS".
IPS can be calculated using this equation: [ 1 ] IPS = sockets × cores per socket × clock (cycles per second) × instructions per cycle
However, the instructions/cycle measurement depends on the instruction sequence, the data and external factors.
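A hedged illustration of the calculation with made-up hardware numbers (the instructions-per-cycle figure is an assumed average, workload-dependent as just noted):

```python
sockets = 1
cores_per_socket = 4
clock_hz = 3.0e9            # 3 GHz
instructions_per_cycle = 2  # assumed average, varies with the workload
ips = sockets * cores_per_socket * clock_hz * instructions_per_cycle
print(ips / 1e6, "MIPS")    # 24000.0 MIPS, i.e. 24 GIPS
```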
Before standard benchmarks were available, average speed rating of computers was based on calculations for a mix of instructions with the results given in kilo instructions per second (kIPS). The most famous was the Gibson Mix , [ 2 ] produced by Jack Clark Gibson of IBM for scientific applications in 1959.
Other ratings, such as the ADP mix which does not include floating point operations, were produced for commercial applications. The thousand instructions per second (kIPS) unit is rarely used today, as most current microprocessors can execute at least a million instructions per second.
Gibson divided computer instructions into 12 classes, based on the IBM 704 architecture, adding a 13th class to account for indexing time. Weights were primarily based on analysis of seven scientific programs run on the 704, with a small contribution from some IBM 650 programs. The overall score was then the weighted sum of the average execution speed for instructions in each class. [ 3 ]
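Schematically, the rating is a weighted sum over instruction classes. The sketch below uses placeholder class names, weights, and per-class speeds, not Gibson's published 1959 figures:

```python
# Placeholder weights (fractions of the mix) and average speeds per class.
weights = {"load_store": 0.35, "fixed_add": 0.20, "branch": 0.20,
           "float_add": 0.15, "indexing": 0.10}
speed_kips = {"load_store": 250.0, "fixed_add": 300.0, "branch": 280.0,
              "float_add": 90.0, "indexing": 200.0}
rating = sum(w * speed_kips[c] for c, w in weights.items())
print(f"{rating:.1f} kIPS")   # the weighted-average speed rating
```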
The speed of a given CPU depends on many factors, such as the type of instructions being executed, the execution order and the presence of branch instructions (problematic in CPU pipelines). CPU instruction rates are different from clock frequencies, usually reported in Hz , as each instruction may require several clock cycles to complete or the processor may be capable of executing multiple independent instructions simultaneously. MIPS can be useful when comparing performance between processors made with similar architecture (e.g. Microchip branded microcontrollers), but they are difficult to compare between differing CPU architectures . [ 4 ] This led to the term "Meaningless Indicator of Processor Speed," [ 5 ] or less commonly, "Meaningless Indices of Performance," [ 6 ] being popular amongst technical people by the mid-1980s.
For this reason, MIPS has become not a measure of instruction execution speed, but task performance speed compared to a reference. In the late 1970s, minicomputer performance was compared using VAX MIPS , where computers were measured on a task and their performance rated against the VAX-11/780 that was marketed as a 1 MIPS machine. (The measure was also known as the VAX Unit of Performance or VUP .) This was chosen because the 11/780 was roughly equivalent in performance to an IBM System/370 model 158–3, which was commonly accepted in the computing industry as running at 1 MIPS.
Many minicomputer performance claims were based on the Fortran version of the Whetstone benchmark , giving Millions of Whetstone Instructions Per Second (MWIPS). The VAX 11/780 with FPA (1977) runs at 1.02 MWIPS.
Effective MIPS speeds are highly dependent on the programming language used. The Whetstone Report has a table showing MWIPS speeds of PCs via early interpreters and compilers up to modern languages. The first PC compiler was for BASIC (1982) when a 4.8 MHz 8088/87 CPU obtained 0.01 MWIPS. Results on a 2.4 GHz Intel Core 2 Duo (1 CPU 2007) vary from 9.7 MWIPS using BASIC Interpreter, 59 MWIPS via BASIC Compiler, 347 MWIPS using 1987 Fortran, 1,534 MWIPS through HTML/Java to 2,403 MWIPS using a modern C / C++ compiler.
For most early 8-bit and 16-bit microprocessors , performance was measured in thousand instructions per second (1000 kIPS = 1 MIPS).
zMIPS refers to the MIPS measure used internally by IBM to rate its mainframe servers ( zSeries , IBM System z9 , and IBM System z10 ).
Weighted million operations per second (WMOPS) is a similar measurement, used for audio codecs. [ 7 ] | https://en.wikipedia.org/wiki/Gibson_Mix
Gibson assembly is a molecular cloning method that allows for the joining of multiple DNA fragments in a single, isothermal reaction. It is named after its creator, Daniel G. Gibson, who is the chief technology officer and co-founder of the synthetic biology company, Telesis Bio. The technology is more efficient than manual plasmid genetic recombination methods, but remains expensive as it is still under patent. [ 1 ] [ 2 ]
The entire Gibson assembly reaction requires few components with minor manipulations. [ 3 ]
The method can simultaneously combine up to 15 DNA fragments based on sequence identity. It requires that the DNA fragments contain ~20-40 base pair overlap with adjacent DNA fragments. These DNA fragments are mixed with a cocktail of three enzymes, along with other buffer components.
The three required enzyme activities are: exonuclease , DNA polymerase , and DNA ligase .
The resulting product is different DNA fragments joined into one. Either linear or closed circular molecules can be assembled.
There are two approaches to Gibson assembly: a one-step method and a two-step method. Both can be performed in a single reaction vessel. The one-step method allows for the assembly of up to 5 different fragments using a single isothermal step. In this method, the fragments and a master mix of enzymes are combined, and the entire mixture is incubated at 50 °C for up to one hour. For the creation of more complex constructs with up to 15 fragments, or for constructs incorporating fragments from 100 bp to 10 kb, the two-step approach is used. The two-step reaction requires two separate additions of master mix: one for the exonuclease and annealing step, and one for the DNA polymerase and ligation steps. For the two-step approach, different incubation temperatures are used to carry out the assembly process.
The Gibson DNA assembly method has many advantages compared to conventional restriction enzyme/ligation cloning of recombinant DNA. For example, | https://en.wikipedia.org/wiki/Gibson_assembly |
In mathematics , the Gieseking manifold is a cusped hyperbolic 3-manifold of finite volume. It is non-orientable and has the smallest volume among non-compact hyperbolic manifolds, having volume approximately V ≈ 1.0149416 {\displaystyle V\approx 1.0149416} . It was discovered by Hugo Gieseking ( 1912 ).
The Gieseking manifold can be constructed by removing the vertices from a tetrahedron , then gluing the faces together in pairs using affine-linear maps. Label the vertices 0, 1, 2, 3. Glue the face with vertices 0, 1, 2 to the face with vertices 3, 1, 0 in that order. Glue the face 0, 2, 3 to the face 3, 2, 1 in that order. In the hyperbolic structure of the Gieseking manifold, this ideal tetrahedron is the canonical polyhedral decomposition of David B. A. Epstein and Robert C. Penner. Moreover, the angle made by the faces is π / 3 {\displaystyle \pi /3} . The triangulation has one tetrahedron, two faces, one edge and no vertices, so all the edges of the original tetrahedron are glued together.
The Gieseking manifold has a double cover homeomorphic to the figure-eight knot complement . The underlying compact manifold has a Klein bottle boundary, and the first homology group of the Gieseking manifold is the integers.
The Gieseking manifold is a fiber bundle over the circle with fiber the once-punctured torus and monodromy given by ( x , y ) → ( x + y , x ) . {\displaystyle (x,y)\to (x+y,x).} The square of this map is Arnold's cat map and this gives another way to see that the Gieseking manifold is double covered by the complement of the figure-eight knot.
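The squaring claim can be verified directly; a two-line check, writing the monodromy as a matrix in the standard basis:

```python
import numpy as np

M = np.array([[1, 1],
              [1, 0]])    # monodromy (x, y) -> (x + y, x)
print(M @ M)              # [[2, 1], [1, 1]]: Arnold's cat map
```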
The volume of the Gieseking manifold is called the Gieseking constant [ 1 ] and has a numerical value of approximately: V ≈ 1.0149416064 … {\displaystyle V\approx 1.0149416064\dots }
It can be given in closed form [ 3 ] with the Clausen function Cl 2 ( φ ) {\displaystyle \operatorname {Cl} _{2}\left(\varphi \right)} as:
V = Cl 2 ( π 3 ) {\displaystyle V=\operatorname {Cl} _{2}\left({\frac {\pi }{3}}\right)}
This is similar to Catalan's constant G {\displaystyle G} , which also manifests as a volume and can be expressed in terms of the Clausen function:
G = Cl 2 ( π 2 ) = 0.91596559 … {\displaystyle G=\operatorname {Cl} _{2}\left({\frac {\pi }{2}}\right)=0.91596559\dots }
There is a related expression in terms of a special value of a Dirichlet L-function given by the identity
V = 3 3 4 ⋅ L ( 2 , χ − 3 ) = 3 3 4 ( ∑ k = 0 ∞ 1 ( 3 k + 1 ) 2 − 1 ( 3 k + 2 ) 2 ) {\displaystyle V={\frac {3{\sqrt {3}}}{4}}\cdot L(2,\chi _{-3})={\frac {3{\sqrt {3}}}{4}}\left(\sum _{k=0}^{\infty }{\frac {1}{(3k+1)^{2}}}-{\frac {1}{(3k+2)^{2}}}\right)}
whereas Catalan's constant is equal to L ( 2 , χ − 4 ) {\displaystyle L(2,\chi _{-4})}
Another closed form expression may be given in terms of the trigamma function :
V = 3 3 ( ψ 1 ( 1 / 3 ) 2 − π 2 3 ) {\displaystyle V={\frac {\sqrt {3}}{3}}\left({\frac {\psi _{1}(1/3)}{2}}-{\frac {\pi ^{2}}{3}}\right)}
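These closed forms can be checked against each other numerically; a short sketch (the Dirichlet series is truncated at an arbitrary cutoff):

```python
import math
from scipy.special import polygamma

# Gieseking constant from the Dirichlet L-series given above
V_series = (3 * math.sqrt(3) / 4) * sum(
    1 / (3 * k + 1) ** 2 - 1 / (3 * k + 2) ** 2 for k in range(200_000)
)
# Gieseking constant from the trigamma closed form
V_trigamma = (math.sqrt(3) / 3) * (polygamma(1, 1 / 3) / 2 - math.pi ** 2 / 3)
print(V_series, V_trigamma)   # both ~1.0149416..., the Gieseking constant
```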
Integrals for the Gieseking constant are given by
V = ∫ 0 2 π / 3 ln ( 2 cos ( 1 2 x ) ) d x {\displaystyle V=\int _{0}^{2\pi /3}\ln \left(2\cos \left({\tfrac {1}{2}}x\right)\right)\mathrm {d} x}
V = 2 ∫ 0 1 ln ( 1 + x ) ( 1 − x ) ( 3 + x ) d x {\displaystyle V=2\int _{0}^{1}{\frac {\ln(1+x)}{\sqrt {(1-x)(3+x)}}}\mathrm {d} x}
which follow from its definition through the Clausen function and [ 4 ]
V = 3 2 ∫ 0 ∞ ∫ 0 ∞ ∫ 0 ∞ d x d y d z x y z ( x + y + z + 1 x + 1 y + 1 z ) 2 {\displaystyle V={\frac {\sqrt {3}}{2}}\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }{\frac {\mathrm {d} x\ \mathrm {d} y\ \mathrm {d} z}{xyz(x+y+z+{\tfrac {1}{x}}+{\tfrac {1}{y}}+{\tfrac {1}{z}})^{2}}}}
A further expression is:
V = 3 3 4 ( ∑ k = 0 ∞ 1 ( 3 k + 1 ) 2 − ∑ k = 0 ∞ 1 ( 3 k + 2 ) 2 ) {\displaystyle V={\frac {3{\sqrt {3}}}{4}}\left(\sum _{k=0}^{\infty }{\frac {1}{(3k+1)^{2}}}-\sum _{k=0}^{\infty }{\frac {1}{(3k+2)^{2}}}\right)}
This gives:
∑ k = 0 ∞ 1 ( 3 k + 1 ) 2 = 2 π 2 27 + 2 3 9 V {\displaystyle \sum _{k=0}^{\infty }{\frac {1}{(3k+1)^{2}}}={\frac {2\pi ^{2}}{27}}+{\frac {2{\sqrt {3}}}{9}}V}
∑ k = 0 ∞ 1 ( 3 k + 2 ) 2 = 2 π 2 27 − 2 3 9 V {\displaystyle \sum _{k=0}^{\infty }{\frac {1}{(3k+2)^{2}}}={\frac {2\pi ^{2}}{27}}-{\frac {2{\sqrt {3}}}{9}}V}
which is similar to:
∑ k = 0 ∞ 1 ( 4 k + 1 ) 2 = π 2 16 + 1 2 G {\displaystyle \sum _{k=0}^{\infty }{\frac {1}{(4k+1)^{2}}}={\frac {\pi ^{2}}{16}}+{\frac {1}{2}}G}
∑ k = 0 ∞ 1 ( 4 k + 3 ) 2 = π 2 16 − 1 2 G {\displaystyle \sum _{k=0}^{\infty }{\frac {1}{(4k+3)^{2}}}={\frac {\pi ^{2}}{16}}-{\frac {1}{2}}G}
for Catalan's constant G {\displaystyle G} .
In 2024, Frank Calegari , Vesselin Dimitrov, and Yunqing Tang proved that 1 , π 2 , L ( 2 , χ − 3 ) {\displaystyle 1,\pi ^{2},L(2,\chi _{-3})} are linearly independent over the rationals. This proves that 3 ⋅ V {\displaystyle {\sqrt {3}}\cdot V} is irrational as well as the special values ψ 1 ( 1 / 6 ) , ψ 1 ( 1 / 3 ) , ψ 1 ( 2 / 3 ) , ψ 1 ( 5 / 6 ) {\displaystyle \psi _{1}(1/6),\psi _{1}(1/3),\psi _{1}(2/3),\psi _{1}(5/6)} of the trigamma function. The irrationality of V {\displaystyle V} itself is still open. [ 5 ] | https://en.wikipedia.org/wiki/Gieseking_manifold |
GigSky is a Palo Alto, California -based mobile technology company that provides e-SIM and SIM card -based data services to international travelers. Users connect to public data networks using a mobile app and a GigSky e-SIM or Apple SIM card. [ 1 ] GigSky also offers services for enterprise customers, and provides mobile data for airline electronic flight bag (EFB) solutions. [ 2 ]
GigSky was founded in 2010 by Ravi Rishy-Maharaj. [ 3 ] [ 4 ] In June 2015, Apple began offering access to the GigSky service on cellular iPads with Apple SIM. [ 5 ]
Norwegian Air Lines started using GigSky’s service in late 2013 to support its EFB product. [ 2 ]
In August 2016, Avionica, an avionics technology company, partnered with GigSky to offer a global flight data transmission service for airlines. [ 6 ]
In November 2018, GigSky began offering international data eSIMs on iPhone XS , XS Max , and XR phones. [ 7 ] | https://en.wikipedia.org/wiki/GigSky
Gigabit wireless is the name given to wireless communication systems whose data transfer speeds reach or exceed one gigabit (one billion bits) per second. Such speeds are achieved with complex modulations of the signal, such as quadrature amplitude modulation (QAM), or with signals spanning many frequencies. When a signal spans many frequencies, physicists refer to it as a wide-bandwidth signal. In the communication industry, many wireless internet service providers and cell phone companies deploy wireless radio frequency antennas to backhaul core networks, connect businesses, and even reach individual residential homes. [ 1 ] [ 2 ]
In general, indoor protocols follow a cross-vendor standard and communicate in the unlicensed 2.4 GHz, 5 GHz, and (soon) 60 GHz bands.
The outdoor carrier link protocols vary widely and are not compatible across vendors (and often models from the same vendor).
Note: the higher bandwidth devices require a less complex modulation to achieve high speeds.
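A rough back-of-the-envelope illustration of that note (idealized arithmetic only, ignoring coding overhead and guard intervals; the channel width is that of IEEE 802.11ad at 60 GHz):

```python
bandwidth_hz = 2.16e9          # one 60 GHz channel (IEEE 802.11ad channelization)
bits_per_symbol = 2            # QPSK, a low-complexity modulation
symbol_rate = bandwidth_hz     # idealized ~1 symbol/s per Hz of bandwidth
throughput = symbol_rate * bits_per_symbol
print(throughput / 1e9, "Gbit/s")   # ~4.32 Gbit/s from a wide channel alone
```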
Internet service providers (ISPs) are looking for ways to expand gigabit per second (Gbit/s) high-speed services to their customers. These can be achieved through a fiber to the premises broadband network architecture, or through a more affordable alternative using fixed wireless in the last mile in combination with fiber networks in the middle mile, reducing the costs of trenching fiber optic cables to the users. In the United States, the 60 GHz V band is unlicensed. This makes the V band an appealing choice for fixed wireless access delivering Gbit/s services to homes and businesses. Similarly, the 70/80 GHz E band is lightly licensed, making it accessible to more providers offering such services. [ 16 ]
There had been some early adopters of the hybrid fiber-wireless approach to provide Gbit/s services to customers. One of those ISPs was Webpass, a company founded in 2003 in San Francisco as a wireless ISP focusing on buildings in big cities. Since then, Webpass had been increasing the speeds along with improved wireless technologies. By 2015, Webpass offered 1 Gbit/s connections to commercial customers; residential customers, however, were limited to speeds of up to 500 Mbit/s because the 1 Gbit/s wireless link was shared among many residents in the same building. The company utilized a combination of various licensed and unlicensed bands. [ 17 ]
In January 2016, a startup company Starry from Boston introduced Starry Point with the goal to provide Gbit/s speed internet wirelessly to homes. The device is a fixed wireless unit attached to a window as an access point to connect to Starry core networks using a millimetre wave band communication. The company did not reveal the details of the band, but claimed to be "the world’s first millimeter wave band active phased array technology for consumer internet communications". [ 18 ] However, in January 2018, at the time that the company announced the expansion of its beta service to cover 3 cities: Boston, Los Angeles , and Washington, DC , the speeds were still limited to up to 200 Mbit/s. [ 19 ]
In June 2016, Google Fiber acquired Webpass to boost its effort in its experiments with wireless technologies. [ 20 ] As a result, Google Fiber put its effort on fiber to the premises on hold to explore more on the cheaper wireless alternative. [ 21 ] By early 2017, the Webpass division of Google Fiber expanded 1 Gbit/s wireless service to customers in many cities in the United States. [ 22 ]
In November 2016, Atlas Networks, an ISP that serves Seattle , deployed its V-band Gbit/s service to customers within the 750-metre (0.47-mile) range of its fiber networks. The maximum throughput for each connection was 1 gigabit per second. [ 23 ]
In October 2017, Cloudwifi, a startup ISP based in Kitchener, Ontario started using 60 GHz band fixed wireless to provide Gbit/s connectivity to customers within the 2-kilometre (1.2-mile) range of its fiber connection points. [ 24 ]
In October 2017, Newark Fiber enabled its first customer in Newark, New Jersey with 10 Gbit/s fixed wireless service. [ 25 ] Newark Fiber used V-band 10 Gbit/s transmitters with a range of up to 1.8 kilometres (1.1 miles). [ 26 ] | https://en.wikipedia.org/wiki/Gigabit_wireless
The gigabyte ( / ˈ ɡ ɪ ɡ ə b aɪ t , ˈ dʒ ɪ ɡ ə b aɪ t / ) [ 1 ] is a multiple of the unit byte for digital information. The prefix giga means 10 9 in the International System of Units (SI). Therefore, one gigabyte is one billion bytes. The unit symbol for the gigabyte is GB .
This definition is used in all contexts of science (especially data science ), engineering , business , and many areas of computing , including storage capacities of hard drives , solid-state drives , and tapes , as well as data transmission speeds. However, the term is also used in some fields of computer science and information technology to denote 1 073 741 824 (1024 3 or 2 30 ) bytes, particularly for sizes of RAM . Thus, some usage of gigabyte has been ambiguous. To resolve this difficulty, IEC 80000-13 clarifies that a gigabyte (GB) is 10 9 bytes and specifies the term gibibyte (GiB) to denote 2 30 bytes. These differences are still readily seen, for example, when Microsoft Windows displays a 400 GB drive's capacity as "372 GB": the value is in fact 372 GiB, but it is labeled with the decimal symbol. Analogously, a memory module that is labeled as having the size " 1 GB " has one gibibyte ( 1 GiB ) of storage capacity.
In response to litigation over whether the makers of electronic storage devices must conform to Microsoft Windows' use of a binary definition of "GB" instead of the metric/decimal definition, the United States District Court for the Northern District of California rejected that argument, ruling that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce. ' " [ 2 ] [ 3 ]
The term gigabyte has a standard definition of 1000 3 bytes, as well as a discouraged [ 2 ] meaning of 1024 3 bytes. The latter binary usage originated as compromise technical jargon for byte multiples that needed to be expressed in a power of 2, but lacked a convenient name. As 1024 (2 10 ) is approximately 1000 (10 3 ), roughly corresponding to SI multiples, it was used for binary multiples as well.
In 1998 the International Electrotechnical Commission (IEC) published standards for binary prefixes , requiring that the gigabyte strictly denote 1000 3 bytes and gibibyte denote 1024 3 bytes. By the end of 2007, the IEC Standard had been adopted by the IEEE , EU , and NIST , and in 2009 it was incorporated in the International System of Quantities . Nevertheless, the term gigabyte continues to be widely used with the following two different meanings:
Based on powers of 10, this definition uses the prefix giga- as defined in the International System of Units (SI). This is the definition recommended by the International Electrotechnical Commission (IEC). [ 4 ] It is used in networking contexts and for most storage media , particularly hard drives , flash -based storage, [ 5 ] [ 6 ] and DVDs , and is also consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance . The file manager of Mac OS X version 10.6 and later is a notable example of this usage in software, reporting file sizes in decimal units. [ 7 ]
The binary definition uses powers of the base 2, as does the architectural principle of binary computers .
This usage is widely promulgated by some operating systems , such as Microsoft Windows in reference to computer memory (e.g., RAM ). This definition is synonymous with the unambiguous unit gibibyte .
Since the first disk drive, the IBM 350 , disk drive manufacturers have expressed hard drive capacities using decimal prefixes. With the advent of gigabyte-range drive capacities, manufacturers labelled many consumer hard drive , solid-state drive and USB flash drive capacities in certain size classes expressed in decimal gigabytes, such as "500 GB". The exact capacity of a given drive model is usually slightly larger than the class designation. Practically all manufacturers of hard disk drives and flash-memory disk devices [ 5 ] [ 6 ] continue to define one gigabyte as 1 000 000 000 bytes , which is displayed on the packaging. Some operating systems such as Mac OS X , [ 8 ] iOS , Android , [ citation needed ] Ubuntu , [ 9 ] and Debian [ 10 ] express hard drive capacity or file size using decimal multipliers, while others such as Microsoft Windows (including Windows Phone ) report file size using binary multipliers. This discrepancy causes confusion, as a disk with an advertised capacity of, for example, 400 GB (meaning 400 000 000 000 bytes , equal to 372 GiB) might be reported by the operating system as " 372 GB ".
For RAM , the JEDEC memory standards use IEEE 100 nomenclature which quote the gigabyte as 1 073 741 824 bytes (2 30 bytes). [ 11 ]
The difference between units based on decimal and binary prefixes increases as a semi-logarithmic (linear-log) function; for example, the decimal kilobyte value is nearly 98% of the kibibyte, a megabyte is under 96% of a mebibyte, and a gigabyte is just over 93% of a gibibyte. This means that a 300 GB (279 GiB) hard disk might be indicated variously as "300 GB", "279 GB" or "279 GiB", depending on the operating system. As storage sizes increase and larger units are used, these differences become more pronounced.
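A short sketch of where these percentages come from; the figures below reproduce the ratios and the 300 GB example quoted above.

```python
def decimal_to_binary_ratio(power: int) -> float:
    """Ratio of a decimal prefix unit (1000^power bytes) to its binary
    counterpart (1024^power bytes), e.g. power=3 for gigabyte vs gibibyte."""
    return 1000 ** power / 1024 ** power

pairs = [("kilobyte", "kibibyte"), ("megabyte", "mebibyte"), ("gigabyte", "gibibyte")]
for power, (decimal_name, binary_name) in enumerate(pairs, start=1):
    print(f"1 {decimal_name} = {decimal_to_binary_ratio(power):.4f} {binary_name}")
# 0.9766, 0.9537, 0.9313 -> "nearly 98%", "under 96%", "just over 93%"

capacity_bytes = 300 * 1000 ** 3                # a "300 GB" (decimal) drive
print(f"{capacity_bytes / 1024 ** 3:.0f} GiB")  # reported as 279 under a binary scheme
```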
A lawsuit decided in 2019, arising from alleged breach of contract and other claims over the binary and decimal definitions used for "gigabyte", ended in favour of the manufacturers, with the courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10 9 ) bytes (the decimal definition). Specifically, the courts held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' .... The California Legislature has likewise adopted the decimal system for all 'transactions in this state'." [ 2 ]
Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital . [ 12 ] [ 13 ] Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. [ 12 ] Seagate was sued on similar grounds and also settled. [ 12 ] [ 14 ]
Because of their physical design, the capacity of modern computer random-access memory devices, such as DIMM modules, is always a multiple of a power of 1024. It is thus convenient to use prefixes denoting powers of 1024, known as binary prefixes , in describing them. For example, a memory capacity of 1 073 741 824 bytes (1024 3 B) is conveniently expressed as 1 GiB rather than as 1.074 GB. The former specification is, however, often quoted as "1 GB" when applied to random-access memory. [ 15 ]
Software allocates memory in varying degrees of granularity as needed to fulfill data structure requirements and binary multiples are usually not required. Other computer capacities and rates, like storage hardware size, data transfer rates , clock speeds , operations per second , etc. are usually presented in decimal units. For example, the manufacturer of a "300 GB" hard drive is claiming a capacity of 300 000 000 000 bytes , not 300 × 1024 3 (which would be 322 122 547 200 ) bytes.
The "gigabyte" symbol is encoded by Unicode at code point U+3387 ㎇ SQUARE GB . [ 16 ] | https://en.wikipedia.org/wiki/Gigabyte |
Gigapackets are billions (10 9 ) of packets or datagrams . [ 1 ] The packet is the fundamental unit of information in computer networks . [ 1 ]
Data transfer rates in gigapackets per second are associated with high speed networks, especially fiber optic networks. The bit rates associated with gigapacket traffic are in the range of gigabits per second . Such rates are seen in gigabit Ethernet and 10 Gigabit Ethernet , and in the SONET Optical Carrier rates OC-48 (2.5 Gbit/s) and OC-192 (10 Gbit/s).
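As a rough illustration of the scale involved, the sketch below estimates how long links at these rates take to carry a single gigapacket; the uniform 1500-byte packet size is an assumption made for the calculation, not a figure from the article.

```python
GIGAPACKET = 1e9  # one gigapacket = 10^9 packets

def seconds_per_gigapacket(line_rate_bps: float, packet_bytes: int) -> float:
    """Time for a link to carry 10^9 packets of a uniform size."""
    packets_per_second = line_rate_bps / (packet_bytes * 8)
    return GIGAPACKET / packets_per_second

for name, rate in [("gigabit Ethernet", 1e9), ("OC-48", 2.5e9), ("OC-192", 10e9)]:
    minutes = seconds_per_gigapacket(rate, 1500) / 60
    print(f"{name}: {minutes:.0f} minutes per gigapacket")
# gigabit Ethernet: 200, OC-48: 80, OC-192: 20
```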
| https://en.wikipedia.org/wiki/Gigapackets
A gigot bitume (asphalt leg of lamb) is a leg of lamb prepared by wrapping the meat in kraft paper and placing it in a bath of hot asphalt . This preparation method is traditionally used in France to celebrate the completion of the structural portion of construction projects or public works . [ 1 ] [ 2 ] [ 3 ]
A recipe for the dish, gigot cuit dans le goudron , appears in the 1900 cookbook La Vraie cuisine pratique: Les potages, les poissons, le bœuf, le veau, l'agneau, le mouton, le porc, la volaille, le gibier ("True Practical Cuisine: Soups, Fish, Beef, Veal, Lamb, Mutton, Pork, Poultry, Game"):
This is a strange cooking method that can be put to good use by workers. I give the recipe as it was passed on to me. In a hot boiler of asphalt, when paving the ground, immerse a leg of lamb wrapped in very strong paper. With the assistance of a stone attached to one end, it will be pulled into the middle of the tar. One hour of cooking gives the meat a particularly excellent flavor. Salt when removed from the heat.
Usage of this cooking method by asphalt workers, in particular, has existed since the end of the 19th century.
In Val-de-Travers in western Switzerland , a similar tradition exists where a ham is cooked in asphalt (Jambon cuit dans l’asphalte). [ 4 ]
| https://en.wikipedia.org/wiki/Gigot_bitume
Johannes Gijsbrecht Kuenen (born 9 December 1940, Heemstede ) is a Dutch microbiologist who is professor emeritus at the Delft University of Technology and a visiting scientist at the University of Southern California . His research is influenced by, and a contribution to, the scientific tradition of the Delft School of Microbiology.
Kuenen studied at the University of Groningen , where he received both his Doctorandus degree and, in 1972, his Doctorate (PhD) under the supervision of Professor Dr. Hans Veldkamp. The title of his thesis was "Colourless sulphur bacteria from Dutch tidal mudflats". After a short post-doc at the University of California in Los Angeles (USA), he returned to Groningen as a senior lecturer. In 1980, he moved to Delft to become the 4th Professor of Microbiology (succeeding M.W. Beijerinck and A.J. Kluyver) at Delft University of Technology. Kuenen's initial research interests were the (application of) bacteria involved in the natural sulfur cycle, and yeast physiology and metabolism. His later interest in the (eco)physiology of nitrifying and denitrifying bacteria led, among other things, to the discovery of the bacteria within the phylum Planctomycetota that perform the Anammox process. In addition, his research focused on (halo)alkaliphilic sulfur-oxidizing bacteria from soda lakes. Gijs Kuenen retired in 2005 but remains active in science.
In 2004 Gijs Kuenen became a Knight in the Order of the Netherlands Lion . In 2005 he was elected Fellow of the American Academy of Microbiology . In 2006 he received the Jim Tiedje Award for his outstanding contribution to microbial ecology at the 11th International Symposium on Microbial Ecology in Vienna, and in 2007 he was awarded the Procter & Gamble Award in Applied and Environmental Microbiology.
For his contribution to the founding of the Life Science and Technology degree programme ( Delft University of Technology and Leiden University ), he received an honorary membership of Study Association LIFE in 2005.
One of the five known anammox genera, with the single member Kuenenia stuttgartiensis , has been named after Kuenen. The Kuenen lab had named the first discovered species Brocadia anammoxidans after the company Gist-Brocades (now DSM Gist ), for which Kuenen did consulting work and in whose wastewater the bacterium was discovered. | https://en.wikipedia.org/wiki/Gijs_Kuenen
Gila Hanna is a Canadian mathematics educator and philosopher of mathematics whose research interests include the nature and educational role of mathematical proofs , and gender in mathematics education. She is professor emerita in the Department of Curriculum, Teaching and Learning at the University of Toronto , affiliated with the Ontario Institute for Studies in Education , [ 1 ] the former director of mathematics education at the Fields Institute , [ 2 ] and the founder of the Canadian Journal of Mathematics, Science and Technology Education . [ 3 ]
Hanna is the author of Contact and Communication: An Evaluation of Bilingual Student Exchange Programs (OISE Press, 1980) and Rigorous Proof in Mathematics Education (OISE Press, 1983). [ 4 ] Her numerous edited volumes include:
Hanna was named a Fields Institute Fellow in 2003. [ 8 ] She was the 2020 winner of the Partners in Research Dr. Jonathon Borwein Mathematics Ambassador Award. [ 3 ] | https://en.wikipedia.org/wiki/Gila_Hanna |
Gilbert Stork (December 31, 1921 – October 21, 2017) [ 2 ] was a Belgian-American organic chemist . For a quarter of a century he was the Eugene Higgins Professor of Chemistry Emeritus at Columbia University . [ 3 ] He is known for making significant contributions to the total synthesis of natural products , including a lifelong fascination with the synthesis of quinine . In so doing he also made a number of contributions to mechanistic understanding of reactions, and performed pioneering work on enamine chemistry, leading to development of the Stork enamine alkylation . [ 3 ] : 111 [ 4 ] It is believed he was responsible for the first planned stereocontrolled synthesis as well as the first natural product to be synthesised with high stereoselectivity. [ 5 ]
Stork was also an accomplished mentor of young chemists and many of his students have gone on to make significant contributions in their own right.
Gilbert Stork was born in the Ixelles municipality of Brussels, Belgium on December 31, 1921. [ 6 ] [ 7 ] The oldest of three children, his middle brother Michel died in infancy, but he remained close with his younger sister Monique his whole life. His family had Jewish origins, although Gilbert himself didn't recall them being religiously active. [ 6 ] The family moved to Nice when Gilbert was about 14 (c. 1935) and remained there until 1939. During this period, Gilbert completed his lycée studies, distinguishing himself in French literature and writing. Characterizing himself during those years as "not terribly self-confident," and uncertain whether he could find employment in a profession he enjoyed, Gilbert considered applying for a colonial civil service job in French Indochina . [ 5 ] However, the outbreak of World War II that year led the family to flee to New York, where his father's older brother, Sylvain, had already emigrated.
Gilbert studied for a Bachelor of Science at the University of Florida , from 1940 to 1942. He then moved to the University of Wisconsin–Madison for his PhD, which he obtained in 1945 under the supervision of Samuel M. McElvain . [ 8 ] While at Wisconsin he met Carl Djerassi , with whom he would go on to form a lasting friendship.
During his time at the University of Wisconsin, Stork kept a steak on his windowsill in the winter in order to keep it refrigerated. The steak began to degrade, and to dispose of it Stork put it in a hot acid bath used to clean glassware, which contained nitric and sulphuric acids. He was then concerned that the glycerol in the steak, in the presence of those acids, would produce nitroglycerine . However, because of the high temperature of the bath, the oxidation of the glycerol was much faster than its nitration, preventing the formation of explosives. [ 5 ]
Professor Stork received a number of awards and honors including the following: [ 11 ]
Stork also held honorary doctorates from Lawrence University , the University of Wisconsin–Madison , the University of Paris , the University of Rochester , and Columbia University . [ 14 ] [ 15 ]
The inaugural Gilbert Stork Lecture was held in his honor in 2014 at his alma mater, the University of Wisconsin-Madison. [ 3 ] [ 16 ] Gilbert Stork named lecture series are also held at other institutions, including Columbia University [ 17 ] and the University of Pennsylvania , as a result of his endowments. [ 18 ]
He was fêted for his sense of humor and colorful personality by historian of chemistry Jeffrey I. Seeman who published a collection of "Storkisms". [ 19 ] | https://en.wikipedia.org/wiki/Gilbert_Stork |
In electronics , the Gilbert cell is a type of frequency mixer . It produces output signals proportional to the product of two input signals. Such circuits are widely used for frequency conversion in radio systems. [ 1 ] The advantage of this circuit is that the output current is an accurate multiplication of the (differential) base currents of both inputs. As a mixer, its balanced operation cancels out many unwanted mixing products , resulting in a "cleaner" output. Gilbert cells can also be used as variable-gain amplifiers (VGA). [ 2 ]
It is a generalized case of an early circuit first used by Howard Jones in 1963, [ 3 ] invented independently and greatly augmented by Barrie Gilbert in 1967. [ 4 ] It is a specific example of "translinear" design, a current-mode approach to analog circuit design. The specific property of this cell is that the differential output current is a precise algebraic product of its two differential analog current inputs.
There is little difference between the Jones cell and the translinear multiplier in this topology. In both forms, two differential amplifier stages are formed by emitter-coupled transistor pairs (Q1/Q4, Q3/Q5) whose outputs are connected (currents summed) with opposite phases. The emitter junctions of these amplifier stages are fed by the collectors of a third differential pair (Q2/Q6). The output currents of Q2/Q6 become emitter currents for the differential amplifiers. Simplified, the output current of an individual transistor is given by i c = g m v be {\displaystyle i_{c}=g_{m}v_{be}} . Its transconductance at T = 300 K is about g m = 40 I C {\displaystyle g_{m}=40\,I_{C}} . Combining these equations gives i c = 40 I C v be,lo {\displaystyle i_{c}=40\,I_{C}\,v_{\text{be,lo}}} . However, I C {\displaystyle I_{C}} here is itself set by the other input, I C = g m,rf v be,rf {\displaystyle I_{C}=g_{\text{m,rf}}\,v_{\text{be,rf}}} . Hence i c = 40 g m,rf v be,lo v be,rf {\displaystyle i_{c}=40\,g_{\text{m,rf}}\,v_{\text{be,lo}}\,v_{\text{be,rf}}} , which is a multiplication of v be,lo and v be,rf . Combining the output currents of the two stages yields four-quadrant operation.
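For intuition, the large-signal behaviour of an idealized bipolar Gilbert cell is often modelled as the product of two hyperbolic tangents of the input voltages; for small inputs each tanh term is approximately linear, recovering the multiplication above. The sketch below assumes an ideal cell with a 1 mA tail current; the function name and parameter values are illustrative choices, not taken from a specific datasheet.

```python
import numpy as np

VT = 0.02585   # thermal voltage kT/q at ~300 K, in volts
I_EE = 1e-3    # assumed tail current of the bottom differential pair, 1 mA

def gilbert_diff_output(v_x: float, v_y: float) -> float:
    """Differential output current of an idealized bipolar Gilbert cell:
    I_out = I_EE * tanh(v_x / 2VT) * tanh(v_y / 2VT).
    For |v| << 2*VT each tanh term is ~linear, so the cell multiplies."""
    return I_EE * np.tanh(v_x / (2 * VT)) * np.tanh(v_y / (2 * VT))

# Small-signal check: halving one input halves the output (multiplier behaviour).
print(gilbert_diff_output(0.005, 0.005))    # ~9.3e-06 A
print(gilbert_diff_output(0.0025, 0.005))   # ~4.7e-06 A
```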
The Jones topology can be generalized by "stacking" any number of pairs of differential pairs (whose two differential inputs and two differential outputs are likewise connected out-of-phase and in-phase, respectively) on top of a conventional Jones cell, resulting in a circuit that retains the balanced nature of the Jones cell's operation. Specifically, the differential output current would now be proportional to the product of an arbitrary number of differential inputs (or some translinear function thereof). [ 5 ] However, the utility of this generalization in practical microelectronics settings is limited due to the large voltage headroom needed to keep all of the transistors in the proper (forward-active) region of operation .
However, in the cells later invented by Gilbert, shown in the figure on the right, there are two additional diode-connected transistors (labeled as V1 and V2). This is a crucial difference, because they generate the logarithm of the associated differential (X) input current, so that the exponential characteristics of the following transistors result in an ideally perfect multiplication of these input currents with the remaining pair of (Y) currents. This additional diode cell topology is typically used when a low-distortion voltage-controlled amplifier (VCA) is required. It is rarely used in RF mixer/modulator applications for various reasons, one being that the linearity advantage of the linearized top cascode is minimal when these bases are driven by near-square-wave signals; only at very high frequencies, where the drive is less likely to be a fast-edged square wave, may the linearization offer some advantage.
Nowadays, functionally similar circuits can be constructed using CMOS or BiCMOS cells.
| https://en.wikipedia.org/wiki/Gilbert_cell
The Gilbert–Johnson–Keerthi distance algorithm is a method of determining the minimum distance between two convex sets , first published by Elmer G. Gilbert , Daniel W. Johnson, and S. Sathiya Keerthi in 1988. Unlike many other distance algorithms, it does not require that the geometry data be stored in any specific format, but instead relies solely on a support function to iteratively generate closer simplices to the correct answer using the configuration space obstacle (CSO) of two convex shapes, more commonly known as the Minkowski difference .
"Enhanced GJK" algorithms use edge information to speed up the algorithm by following edges when looking for the next simplex. This improves performance substantially for polytopes with large numbers of vertices.
GJK makes use of Johnson's distance subalgorithm, which computes, in the general case, the point of a tetrahedron closest to the origin, but is known to suffer from numerical robustness problems. In 2017 Montanari, Petrinic, and Barbieri proposed a new subalgorithm based on signed volumes which avoids the multiplication of potentially small quantities, achieving a speedup of 15% to 30%.
GJK algorithms are often used incrementally in simulation systems and video games. In this mode, the final simplex from a previous solution is used as the initial guess in the next iteration, or "frame". If the positions in the new frame are close to those in the old frame, the algorithm will converge in one or two iterations. This yields collision detection systems which operate in near-constant time.
The algorithm's stability, speed, and small storage footprint make it popular for realtime collision detection , especially in physics engines for video games .
GJK relies on two functions: Support(shape, d), which returns the point on shape that has the highest dot product with the direction d; and NearestSimplex(s), which takes a simplex s and returns the sub-simplex of s closest to the origin together with a new search direction toward the origin.
The simplices handled by NearestSimplex may each be any simplex sub-space of R n . For example in 3D, they may be a point, a line segment, a triangle, or a tetrahedron ; each defined by 1, 2, 3, or 4 points respectively.
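The following is a minimal 2D sketch of the boolean-overlap variant of GJK, built directly on the support function over the CSO. Helper names such as triple_product and the inlined simplex handling are illustrative choices, not taken from the original paper, and a production implementation would need more careful treatment of degenerate cases.

```python
import numpy as np

def support(vertices_a, vertices_b, d):
    """Support point of the CSO (Minkowski difference A - B) along d:
    the farthest point of A along d minus the farthest point of B along -d."""
    pa = max(vertices_a, key=lambda p: np.dot(p, d))
    pb = max(vertices_b, key=lambda p: np.dot(p, -d))
    return np.asarray(pa, float) - np.asarray(pb, float)

def triple_product(a, b, c):
    """(a x b) x c via the identity (a x b) x c = b(a.c) - a(b.c)."""
    return b * np.dot(a, c) - a * np.dot(b, c)

def gjk_intersect(vertices_a, vertices_b, max_iter=64):
    """True if two convex point sets overlap, i.e. the CSO contains the origin."""
    d = np.array([1.0, 0.0])                  # arbitrary starting direction
    simplex = [support(vertices_a, vertices_b, d)]
    d = -simplex[0]
    for _ in range(max_iter):
        if np.allclose(d, 0.0):
            return True                       # origin lies on the current simplex
        a = support(vertices_a, vertices_b, d)
        if np.dot(a, d) < 0:
            return False                      # CSO cannot reach past the origin
        simplex.append(a)
        if len(simplex) == 2:                 # line segment: search toward origin
            b, a = simplex
            ab, ao = b - a, -a
            d = triple_product(ab, ao, ab)    # perpendicular to AB, toward origin
        else:                                 # triangle: keep the closest feature
            c, b, a = simplex
            ab, ac, ao = b - a, c - a, -a
            ab_perp = triple_product(ac, ab, ab)   # normal to AB, away from C
            ac_perp = triple_product(ab, ac, ac)   # normal to AC, away from B
            if np.dot(ab_perp, ao) > 0:
                simplex.pop(0); d = ab_perp        # origin beyond AB: drop C
            elif np.dot(ac_perp, ao) > 0:
                simplex.pop(1); d = ac_perp        # origin beyond AC: drop B
            else:
                return True                        # origin inside the triangle
    return True  # no separating direction found within the iteration budget

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
triangle = [(1, 1), (4, 1), (4, 4)]
print(gjk_intersect(square, triangle))  # True: the shapes overlap
```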
| https://en.wikipedia.org/wiki/Gilbert–Johnson–Keerthi_distance_algorithm
In coding theory , the Gilbert–Varshamov bound (due to Edgar Gilbert [ 1 ] and independently Rom Varshamov [ 2 ] ) is a bound on the size of a (not necessarily linear ) code . It is occasionally known as the Gilbert– Shannon –Varshamov bound (or the GSV bound ), but the name "Gilbert–Varshamov bound" is by far the most popular. Varshamov proved this bound by using the probabilistic method for linear codes. For more about that proof, see Gilbert–Varshamov bound for linear codes .
Recall that a code has a minimum distance d {\displaystyle d} if any two elements in the code are at least a distance d {\displaystyle d} apart. Let A q ( n , d ) {\displaystyle A_{q}(n,d)}
denote the maximum possible size of a q -ary code C {\displaystyle C} with length n and minimum Hamming distance d (a q -ary code is a code over the field F q {\displaystyle \mathbb {F} _{q}} of q elements).
Then: A q ( n , d ) ≥ q n ∑ j = 0 d − 1 ( n j ) ( q − 1 ) j {\displaystyle A_{q}(n,d)\geq {\frac {q^{n}}{\sum _{j=0}^{d-1}{\binom {n}{j}}(q-1)^{j}}}} .
Let C {\displaystyle C} be a code of length n {\displaystyle n} and minimum Hamming distance d {\displaystyle d} having maximal size: | C | = A q ( n , d ) {\displaystyle |C|=A_{q}(n,d)} .
Then for all x ∈ F q n {\displaystyle x\in \mathbb {F} _{q}^{n}} , there exists at least one codeword c x ∈ C {\displaystyle c_{x}\in C} such that the Hamming distance d ( x , c x ) {\displaystyle d(x,c_{x})} between x {\displaystyle x} and c x {\displaystyle c_{x}} satisfies d ( x , c x ) ≤ d − 1 {\displaystyle d(x,c_{x})\leq d-1} ,
since otherwise we could add x to the code whilst maintaining the code's minimum Hamming distance d {\displaystyle d} – a contradiction on the maximality of | C | {\displaystyle |C|} .
Hence the whole of F q n {\displaystyle \mathbb {F} _{q}^{n}} is contained in the union of all balls of radius d − 1 {\displaystyle d-1} having their centre at some c ∈ C {\displaystyle c\in C} : F q n ⊆ ⋃ c ∈ C B ( c , d − 1 ) {\displaystyle \mathbb {F} _{q}^{n}\subseteq \bigcup _{c\in C}B(c,d-1)} .
Now each ball has size ∑ j = 0 d − 1 ( n j ) ( q − 1 ) j {\displaystyle \sum _{j=0}^{d-1}{\binom {n}{j}}(q-1)^{j}} ,
since we may allow (or choose ) up to d − 1 {\displaystyle d-1} of the n {\displaystyle n} components of a codeword to deviate (from the value of the corresponding component of the ball's centre ) to one of ( q − 1 ) {\displaystyle (q-1)} possible other values (recall: the code is q -ary: it takes values in F q n {\displaystyle \mathbb {F} _{q}^{n}} ). Hence we deduce q n = | F q n | ≤ | C | ∑ j = 0 d − 1 ( n j ) ( q − 1 ) j {\displaystyle q^{n}=\left|\mathbb {F} _{q}^{n}\right|\leq |C|\sum _{j=0}^{d-1}{\binom {n}{j}}(q-1)^{j}} .
That is: A q ( n , d ) = | C | ≥ q n ∑ j = 0 d − 1 ( n j ) ( q − 1 ) j {\displaystyle A_{q}(n,d)=|C|\geq {\frac {q^{n}}{\sum _{j=0}^{d-1}{\binom {n}{j}}(q-1)^{j}}}} .
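The bound is straightforward to evaluate numerically. The sketch below computes the right-hand side with exact integer arithmetic; the function name is an illustrative choice.

```python
from math import comb

def gv_lower_bound(q: int, n: int, d: int) -> int:
    """Gilbert-Varshamov lower bound on A_q(n, d):
    A_q(n, d) >= q^n / sum_{j=0}^{d-1} C(n, j) * (q - 1)^j."""
    ball_size = sum(comb(n, j) * (q - 1) ** j for j in range(d))  # |B(c, d-1)|
    return -(-q ** n // ball_size)  # ceiling division, exact in integers

# Binary codes of length 10 and minimum distance 3: the ball of radius 2
# has 1 + 10 + 45 = 56 vectors, so A_2(10, 3) >= ceil(1024 / 56) = 19.
print(gv_lower_bound(2, 10, 3))  # 19 (a floor on the code size, generally not tight)
```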
For q a prime power, one can improve the bound to A q ( n , d ) ≥ q k {\displaystyle A_{q}(n,d)\geq q^{k}} where k is the greatest integer for which q k < q n ∑ j = 0 d − 2 ( n − 1 j ) ( q − 1 ) j {\displaystyle q^{k}<{\frac {q^{n}}{\sum _{j=0}^{d-2}{\binom {n-1}{j}}(q-1)^{j}}}} . | https://en.wikipedia.org/wiki/Gilbert–Varshamov_bound
The Gilbert–Varshamov bound for linear codes is related to the general Gilbert–Varshamov bound , which gives a lower bound on the maximal number of elements in an error-correcting code of a given block length and minimum Hamming weight over a field F q {\displaystyle \mathbb {F} _{q}} . This may be translated into a statement about the maximum rate of a code with given length and minimum distance. The Gilbert–Varshamov bound for linear codes asserts the existence of q -ary linear codes for any relative minimum distance less than the given bound that simultaneously have high rate. The existence proof uses the probabilistic method , and thus is not constructive.
The Gilbert–Varshamov bound is the best known in terms of relative distance for codes over alphabets of size less than 49. [ citation needed ] For larger alphabets, algebraic geometry codes sometimes achieve an asymptotically better rate vs. distance tradeoff than is given by the Gilbert–Varshamov bound. [ 1 ]
The Gilbert–Varshamov bound asserts that for every 0 ≤ δ < 1 − 1 / q {\displaystyle 0\leq \delta <1-1/q} and 0 < ε ≤ 1 − H q ( δ ) {\displaystyle 0<\varepsilon \leq 1-H_{q}(\delta )} , there exists a linear code with rate R ≥ 1 − H q ( δ ) − ε {\displaystyle R\geq 1-H_{q}(\delta )-\varepsilon } and relative distance δ {\displaystyle \delta } . Here H q {\displaystyle H_{q}} is the q -ary entropy function defined as follows: H q ( x ) = x log q ⁡ ( q − 1 ) − x log q ⁡ x − ( 1 − x ) log q ⁡ ( 1 − x ) {\displaystyle H_{q}(x)=x\log _{q}(q-1)-x\log _{q}x-(1-x)\log _{q}(1-x)} .
The above result was proved by Edgar Gilbert for general codes using the greedy method . Rom Varshamov refined the result to show the existence of a linear code. The proof uses the probabilistic method .
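Both the entropy bound and the probabilistic argument are easy to check numerically. The sketch below computes H_q, sets the dimension k at the GV rate (with ε taken as roughly zero, so a small failure rate is expected), and samples random binary generator matrices to confirm that most of them already attain the target distance; all parameter values are illustrative.

```python
import numpy as np
from math import log, floor

def q_ary_entropy(q: int, x: float) -> float:
    """H_q(x) = x log_q(q-1) - x log_q(x) - (1-x) log_q(1-x)."""
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return log(q - 1, q)
    return x * log(q - 1, q) - x * log(x, q) - (1 - x) * log(1 - x, q)

def min_distance_random_binary_code(n: int, k: int, rng) -> int:
    """Minimum weight over all nonzero codewords of a random [n, k] binary code."""
    G = rng.integers(0, 2, size=(k, n))
    weights = []
    for m in range(1, 2 ** k):                     # enumerate nonzero messages
        msg = np.array([(m >> i) & 1 for i in range(k)])
        weights.append(int(((msg @ G) % 2).sum()))
    return min(weights)

n, delta = 20, 0.2
d = int(delta * n)                                 # target minimum distance 4
k = floor((1 - q_ary_entropy(2, delta)) * n)       # GV dimension: k = 5 here
rng = np.random.default_rng(0)
hits = sum(min_distance_random_binary_code(n, k, rng) >= d for _ in range(200))
print(f"random [{n},{k}] codes reaching distance {d}: {hits}/200")  # most succeed
```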
High-level proof:
To show the existence of the linear code that satisfies those constraints, the probabilistic method is used to construct the random linear code. Specifically, the linear code is chosen by picking a generator matrix G ∈ F q k × n {\displaystyle G\in \mathbb {F} _{q}^{k\times n}} whose entries are randomly chosen elements of F q {\displaystyle \mathbb {F} _{q}} . The minimum Hamming distance of a linear code is equal to the minimum weight of a nonzero codeword, so in order to prove that the code generated by G {\displaystyle G} has minimum distance d {\displaystyle d} , it suffices to show that for any m ∈ F q k ∖ { 0 } , wt ( m G ) ≥ d {\displaystyle m\in \mathbb {F} _{q}^{k}\smallsetminus \left\{0\right\},\operatorname {wt} (mG)\geq d} . We will prove that the probability that there exists a nonzero codeword of weight less than d {\displaystyle d} is exponentially small in n {\displaystyle n} . Then by the probabilistic method, there exists a linear code satisfying the theorem.
Formal proof:
By using the probabilistic method, to show that there exists a linear code that has Hamming distance at least d {\displaystyle d} , we will show that the probability that a random linear code has distance less than d {\displaystyle d} is exponentially small in n {\displaystyle n} .
The linear code is defined by its generator matrix , which we choose to be a random k × n {\displaystyle k\times n} generator matrix; that is, a matrix of k n {\displaystyle kn} elements which are chosen independently and uniformly over the field F q {\displaystyle \mathbb {F} _{q}} .
Recall that in a linear code , the distance equals the minimum weight of a nonzero codeword. Let wt ( y ) {\displaystyle \operatorname {wt} (y)} be the weight of the codeword y {\displaystyle y} . So the code generated by G {\displaystyle G} has distance less than d {\displaystyle d} exactly when there is a nonzero message m ∈ F q k {\displaystyle m\in \mathbb {F} _{q}^{k}} with wt ( m G ) < d {\displaystyle \operatorname {wt} (mG)<d} .
The last equality follows from the definition: if a codeword y {\displaystyle y} belongs to a linear code generated by G {\displaystyle G} , then y = m G {\displaystyle y=mG} for some vector m ∈ F q k {\displaystyle m\in \mathbb {F} _{q}^{k}} .
By Boole's inequality , we have: Pr ( ∃ m ≠ 0 : wt ( m G ) < d ) ≤ ∑ m ∈ F q k ∖ { 0 } Pr ( wt ( m G ) < d ) {\displaystyle \Pr \left(\exists m\neq 0:\operatorname {wt} (mG)<d\right)\leq \sum _{m\in \mathbb {F} _{q}^{k}\smallsetminus \{0\}}\Pr \left(\operatorname {wt} (mG)<d\right)} .
Now for a given message 0 ≠ m ∈ F q k , {\displaystyle 0\neq m\in \mathbb {F} _{q}^{k},} we want to compute Pr ( wt ( m G ) < d ) {\displaystyle \Pr \left(\operatorname {wt} (mG)<d\right)} .
Let Δ ( m 1 , m 2 ) {\displaystyle \Delta (m_{1},m_{2})} be the Hamming distance of two messages m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} . Then for any message m {\displaystyle m} , we have: wt ( m ) = Δ ( 0 , m ) {\displaystyle \operatorname {wt} (m)=\Delta (0,m)} . Therefore: Pr ( wt ( m G ) < d ) = Pr ( Δ ( 0 , m G ) < d ) {\displaystyle \Pr \left(\operatorname {wt} (mG)<d\right)=\Pr \left(\Delta (0,mG)<d\right)} .
Due to the randomness of G {\displaystyle G} , m G {\displaystyle mG} is a uniformly random vector from F q n {\displaystyle \mathbb {F} _{q}^{n}} . So the probability that m G {\displaystyle mG} lies within distance d − 1 {\displaystyle d-1} of 0 {\displaystyle 0} equals the number of such vectors divided by q n {\displaystyle q^{n}} .
Let Vol q ( r , n ) {\displaystyle \operatorname {Vol} _{q}(r,n)} be the volume of a Hamming ball with the radius r {\displaystyle r} . Then, writing δ = d / n {\displaystyle \delta =d/n} and using the standard estimate Vol q ( δ n , n ) ≤ q H q ( δ ) n {\displaystyle \operatorname {Vol} _{q}(\delta n,n)\leq q^{H_{q}(\delta )n}} : [ 2 ] Pr ( wt ( m G ) < d ) = Vol q ( d − 1 , n ) q n ≤ q H q ( δ ) n q n {\displaystyle \Pr \left(\operatorname {wt} (mG)<d\right)={\frac {\operatorname {Vol} _{q}(d-1,n)}{q^{n}}}\leq {\frac {q^{H_{q}(\delta )n}}{q^{n}}}} .
By choosing k = ( 1 − H q ( δ ) − ε ) n {\displaystyle k=(1-H_{q}(\delta )-\varepsilon )n} , the union bound over the fewer than q k {\displaystyle q^{k}} nonzero messages becomes q k ⋅ q H q ( δ ) n q n = q − ε n {\displaystyle q^{k}\cdot {\frac {q^{H_{q}(\delta )n}}{q^{n}}}=q^{-\varepsilon n}} ,
and finally q − ε n ≪ 1 {\displaystyle q^{-\varepsilon n}\ll 1} for large n {\displaystyle n} ; that is, the failure probability is exponentially small in n {\displaystyle n} , as required. Then by the probabilistic method, there exists a linear code C {\displaystyle C} with relative distance δ {\displaystyle \delta } and rate R {\displaystyle R} at least ( 1 − H q ( δ ) − ε ) {\displaystyle (1-H_{q}(\delta )-\varepsilon )} , which completes the proof. | https://en.wikipedia.org/wiki/Gilbert–Varshamov_bound_for_linear_codes
The Gilchrist–Thomas process or Thomas process is a historical process for refining pig iron , derived from the Bessemer converter . It is named after its inventors who patented it in 1877: Percy Carlyle Gilchrist and his cousin Sidney Gilchrist Thomas . By allowing the exploitation of phosphorous iron ore , the most abundant, this process allowed the rapid expansion of the steel industry outside the United Kingdom and the United States .
The process differs essentially from the Bessemer process in the refractory lining of the converter . The latter, being made of dolomite ( (Ca,Mg)(CO 3 ) 2 ) fired with tar , is basic ( MgO giving O 2− anions ), whereas the Bessemer lining, made of packed sand , is acidic ( SiO 2 accepting O 2− anions) according to the Lux-Flood theory of molten oxides . Phosphorus , by migrating from the liquid iron to the molten slag , allows both the production of a steel of satisfactory quality, and of phosphates sought after as fertilizer , known as "Thomas meal". The disadvantages of the basic process include larger iron loss and more frequent relining of the converter vessel.
After having favored the spectacular growth of the Lorraine iron and steel industry, the process progressively gave way to the Siemens-Martin open-hearth furnace , which also benefited from a basic refractory lining, before disappearing in the mid-1960s: with the development of gas liquefaction and the cryogenic separation of O 2 from air , the use of pure oxygen became economically viable. Even if modern pure-oxygen converters all operate with a basic medium, their performance and operation have little to do with their ancestor.
| https://en.wikipedia.org/wiki/Gilchrist–Thomas_process
Gilead Sciences, Inc. ( / ˈ ɡ ɪ l i ə d / ) is an American biopharmaceutical company headquartered in Foster City, California , that focuses on researching and developing antiviral drugs used in the treatment of HIV/AIDS , hepatitis B , hepatitis C , influenza , and COVID-19 , including ledipasvir/sofosbuvir and sofosbuvir . Gilead is a member of the Nasdaq-100 and the S&P 100 .
Gilead was founded in 1987 under the name Oligogen by Michael L. Riordan. The original name was a reference to oligonucleotides , small strands of DNA used to target genetic sequences. Gilead held its initial public offering in 1992, and successfully developed drugs like Tamiflu and Vistide that decade.
In the 2000s, Gilead received approval for drugs including Viread and Hepsera , among others. It began evolving from a biotechnology company into a pharmaceutical company, acquiring several subsidiaries, though it still relied heavily on contracting to manufacture its drugs.
The company continued its growth in the 2010s. However, it came under heavy scrutiny over its business practices, including extremely high pricing of drugs such as Sovaldi and Truvada in the United States relative to production cost and cost in the developing world. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
In June 1987, Gilead Sciences was originally founded under the name Oligogen [ 7 ] by Michael L. Riordan, a medical doctor. [ 8 ] Riordan graduated from Washington University in St. Louis , the Johns Hopkins School of Medicine , and the Harvard Business School . [ 9 ] The idea for Gilead began as a research project at Menlo Ventures , where Riordan was an associate. Three scientific advisers worked with Riordan to create the company: Peter Dervan of Caltech , Doug Melton of Harvard , and Harold M. Weintraub of the Fred Hutchinson Cancer Research Center , along with H. Dubose Montgomery, one of Menlo Ventures' founders. Riordan served as CEO from the company's founding until 1996. [ 10 ] [ 11 ] Menlo Ventures subsequently made the first investment in Gilead, of $2 million. [ 12 ] Riordan also recruited scientific advisers, including Harold Varmus , a Nobel laureate who later became Director of the National Institutes of Health , and Jack Szostak , recipient of the Nobel Prize for Physiology or Medicine in 2009. [ 13 ]
The company's primary therapeutic focus was in antiviral medicines, a field that piqued Riordan's interest after he contracted dengue fever . [ 14 ] Riordan recruited Donald Rumsfeld to join the board of directors in 1988, [ 15 ] followed by Benno C. Schmidt, Sr. , [ citation needed ] Gordon Moore , [ 15 ] and George P. Shultz . [ 15 ] Riordan tried to recruit Warren Buffett as an investor and board member but was unsuccessful. [ 8 ]
The company focused its early research on making small strands of DNA ( oligomers , or more particularly, oligonucleotides ) to target specific genetic code sequences – that is, antisense therapy , a form of gene therapy . [ 7 ] According to Riordan, he had wanted to use the name Gilead Sciences all along, but used Oligogen as a temporary name because he needed to deal with a trademark clearance issue with a California nonprofit organization that was already using the word Gilead in its name. [ 16 ] He had first heard of the Balm of Gilead when he read Lanford Wilson 's play Balm in Gilead while in medical school, then learned that naturally occurring acetylsalicylic acid ( aspirin ) had been found in modern times in a willow tree species from that part of the world, and was therefore inspired to name his company Gilead. [ 16 ] After founding Oligogen, he contacted the nonprofit about the naming issue and secured the right to use the Gilead Sciences name in exchange for a $1,000 donation. [ 16 ]
By 1988, the company had moved its headquarters to Foster City's Vintage Park neighborhood, where it has been based ever since. [ 7 ] The company began to develop small molecule antiviral therapeutics in 1991, when the company in-licensed a group of nucleotide compounds including tenofovir . [ 8 ]
Riordan later recalled that Gilead's first decade as a startup was an extremely stressful experience for him, as a young venture capitalist serving for the first time as the founder, chairman, and chief executive officer of his own biotech company. [ 17 ] The new company had no products and very little income, and narrowly escaped going out of business on several occasions: "It was touch and go for a long time". [ 17 ] Finding a way for Gilead to make money was Riordan's top priority "every second of the day for eight years". [ 17 ]
Gilead's antisense intellectual property portfolio was sold to Ionis Pharmaceuticals . Gilead debuted on the NASDAQ in January 1992. [ 18 ] Its initial public offering raised $86.25 million in proceeds. [ 18 ]
In June 1996, Gilead launched Vistide ( cidofovir injection) for the treatment of cytomegalovirus (CMV) retinitis in patients with AIDS . [ 19 ]
In January 1997, Donald Rumsfeld was appointed chairman, but left the board in January 2001 when he was appointed United States Secretary of Defense during George W. Bush 's first term as president. [ 20 ] [ 21 ]
In March 1999, Gilead acquired NeXstar Pharmaceuticals of Boulder, Colorado . At the time, NeXstar's annual sales of $130 million was three times Gilead's sales; it sold AmBisome, an injectable fungal treatment, and DaunoXome , an oncology drug taken by HIV patients. That same year, Roche announced FDA approval of Tamiflu ( oseltamivir ) for the treatment of influenza . [ 22 ] Tamiflu was originally discovered by Gilead and licensed to Roche for late-phase development and marketing. [ 23 ]
One reason for entering into the Tamiflu licensing agreement was that with only 350 employees, Gilead still did not yet have the capability to sell its drugs directly to overseas buyers. [ 24 ] To avoid having to license future drugs in order to access international markets, Gilead simply acquired the 480-employee NeXstar, which had already built its own sales force in Europe to market AmBisome there. [ 24 ]
Viread ( tenofovir ) achieved first approval in 2001 for the treatment of HIV. [ 25 ]
In 2002, Gilead changed its corporate strategy to focus exclusively on antivirals, and sold its cancer assets to OSI Pharmaceuticals for $200 million. [ 26 ]
In December 2002, Gilead and Triangle Pharmaceuticals announced that Gilead would acquire Triangle for around $464 million; Triangle's lead drug was emtricitabine that was near FDA approval, and it had two other antivirals in its pipeline. [ 26 ] [ 27 ] The company also announced its first full year of profitability. Later that year, Hepsera ( adefovir ) was approved for the treatment of chronic hepatitis B , and Emtriva ( emtricitabine ) for the treatment of HIV. [ citation needed ]
During this era, Gilead completed its gradual evolution from a biotech startup into a pharmaceutical company. [ 7 ] [ 15 ] The San Francisco Chronicle noted that by 2003, the Gilead corporate campus in Foster City had expanded to "seven low-slung sand-colored buildings around a tiny lake on which ducks happily paddle." [ 7 ] Like many startups, Gilead originally leased its space, but in 2004, the company paid $123 million to buy all its headquarters buildings from its landlords. [ 15 ] However, even as Gilead developed its ability to distribute and sell its own drugs, it remained distinct from most pharmaceutical companies in terms of its strong reliance on subcontracting most of its manufacturing to contract manufacturing organizations . [ 28 ]
In 2004, during the Avian flu pandemic scare, Gilead Sciences' revenue from Tamiflu almost quadrupled to $44.6m as more than 60 national governments stockpiled the antiviral drug, though the firm had made a loss in 2003 before concern about the flu started. As stocks soared, US Defense Secretary and Pentagon chief Donald Rumsfeld sold shares of the company, receiving more than $5 million in capital gains, while still maintaining up to $25m-worth of shares by the end of the year. Sales of Tamiflu almost quadrupled again in 2005, to $161.6m, during which time the share price tripled. A 2005 report showed that, in all, Rumsfeld owned shares worth up to $95.9m, from which he got an income of up to $13m. [ 29 ]
In 2006, the company acquired Corus Pharma, Inc. for $365 million. [ 30 ] The acquisition of Corus signaled Gilead's entry into the respiratory arena. Corus was developing aztreonam lysine for the treatment of patients with cystic fibrosis who are infected with Pseudomonas aeruginosa . [ citation needed ]
In July 2006, the U.S. Food and Drug Administration (FDA) approved Atripla , a once a day single tablet regimen for HIV, combining Sustiva ( efavirenz ), a Bristol-Myers Squibb product, and Truvada ( emtricitabine and tenofovir disoproxil ), a Gilead product. [ 31 ] [ 32 ] [ 33 ]
Gilead purchased Raylo Chemicals, Inc. in November 2006, for a price of US$133.3 million . [ 34 ] Raylo Chemical, based in Edmonton, Alberta , was a wholly-owned subsidiary of Degussa AG , a German company. Raylo Chemical was a custom manufacturer of active pharmaceutical ingredients and advanced intermediates for the pharmaceutical and biopharmaceutical industries.
Later in the same year, Gilead acquired Myogen, Inc. for $2.5 billion (then its largest acquisition). With two drugs in development (ambrisentan and darusentan) and one marketed product (Flolan) for pulmonary diseases, the acquisition of Myogen solidified Gilead's position in this therapeutic arena. Under an agreement with GlaxoSmithKline , Myogen marketed Flolan ( epoprostenol sodium ) in the United States for the treatment of primary pulmonary hypertension . Additionally, Myogen was developing (in Phase 3 studies) darusentan, [ 35 ] also an endothelin receptor antagonist, for the potential treatment of resistant hypertension .
Gilead expanded its move into respiratory therapeutics in 2007 by entering into a licensing agreement with Parion for an epithelial sodium channel inhibitor for the treatment of pulmonary diseases, including cystic fibrosis, chronic obstructive pulmonary disease and bronchiectasis . [ 36 ]
In 2009, the company acquired CV Therapeutics, Inc. for $1.4 billion, bringing Ranexa and Lexiscan into Gilead. [ 37 ] Ranexa is a cardiovascular drug used to treat chest pain related to coronary artery disease, with both of these products and pipeline building out Gilead's cardiovascular franchise. [ 37 ] Later that year, the company was named one of the Fastest Growing Companies by Fortune . [ 38 ] [ 39 ]
In 2010, the company acquired CGI Pharmaceuticals for $120 million, expanding Gilead's research expertise into kinase biology and chemistry. Later that year, the company acquired Arresto Biosciences, Inc. for $225 million, obtaining developmental-stage research for treating fibrotic diseases and cancer. [ 40 ]
In February 2011, the company acquired Calistoga Pharmaceuticals for US$375 million ($225 million plus milestone payments). The acquisition boosted Gilead's oncology and inflammation areas. [ 41 ] [ 42 ] Later that year, Gilead made its most important acquisition – and by then most expensive – with the US$10.4 billion purchase of Pharmasset , Inc. This transaction helped cement Gilead as the leader in treatment of the hepatitis C virus by giving it control of sofosbuvir (see below).
In October 2011, Gilead broke ground on a massive multi-year expansion of its 17-building headquarters campus in Foster City. [ 43 ] By replacing eight one or two-story buildings with seven new structures ranging as tall as 10 stories, Gilead nearly doubled its headquarters real estate footprint from about 620,000 square feet to about 1.2 million square feet. [ 43 ]
On July 16, 2012, the FDA approved Gilead's Truvada for prevention of HIV infection (it was already approved for treating HIV). The pill was a preventive measure ( PrEP ) for people at high risk of getting HIV through sexual activity. [ 44 ] [ 45 ] [ 46 ] [ 47 ]
In 2013, the company acquired YM Biosciences , Inc. for $510 million. [ 48 ] The acquisition brought the drug candidate CYT387 , an orally administered, once-daily, selective inhibitor of the Janus kinase (JAK) family, specifically JAK1 and JAK2, into Gilead's oncology pipeline. The JAK enzymes have been implicated in myeloproliferative diseases, inflammatory disorders, and certain cancers.
In 2015, the company made a trio of acquisitions:
In 2016, the company acquired Nimbus Apollo, Inc. for $400 million, giving Gilead control of the compound NDI-010976 (an ACC inhibitor) and other preclinical ACC inhibitors for the treatment of non-alcoholic steatohepatitis and for the potential treatment of hepatocellular carcinoma . [ 52 ] [ 53 ] Also in 2016, the company was named the most generous company on the 2016 Fortune list of The Most Generous Companies of the Fortune 500 . Charitable donations to HIV/AIDS and liver disease organizations totaled over $440 million in 2015. [ 54 ]
In August 2017, the company announced it would acquire Kite Pharma for $11.9 billion, [ 55 ] equating to $180 cash per share, a 29% premium over the closing price of the shares. The deal was Gilead's entry into the cell therapy market and added a chimeric antigen receptor T cell ( CAR-T ) therapy candidate to the company's portfolio. [ 56 ] By 2022 this acquisition had led to two marketed products for lymphoma: Yescarta and Tecartus . [ 57 ] In November, the company announced it will acquire Cell Design Labs for up to $567 million, after it indirectly acquired a stake of 12.2% via the Kite Pharma deal. [ 58 ]
On May 9, 2019, the U.S. Department of Health and Human Services announced that Gilead Sciences will donate Truvada, the only drug approved to prevent infection with H.I.V., for free to 200,000 patients annually for 11 years. [ 59 ] On December 3, 2019, HHS explained how the government would distribute the donated drugs. HHS Secretary Alex Azar explained that the U.S. government will pay Gilead $200 per bottle for 30 pills for costs associated with getting the drug from factories into the eventual hands of patients. [ 60 ]
In March 2020, the company announced it would acquire Forty Seven Inc. for $95.50 a share ($4.9 billion in total). [ 61 ] [ 62 ] [ 63 ] On April 7, 2020, Gilead completed acquisition of Forty Seven, Inc. for "$95.50 per share, net to the seller in cash, without interest, or approximately $4.9 billion in the aggregate." [ 64 ] [ 65 ]
In June 2020, Bloomberg reported that AstraZeneca Plc had made a preliminary approach to Gilead for a potential merger, worth almost $240 billion. [ 66 ] [ 67 ] [ 68 ] In the same month, the company announced it would acquire a 49.9% stake in privately held Pionyr Immunotherapeutics Inc for $275 million. [ 69 ]
In September 2020, Gilead announced it had reached a deal to acquire Immunomedics for $21 billion ($88 per share), gaining control of the cancer treatment Trodelvy ( Sacituzumab govitecan -hziy) – a first-in-class Trop-2 antibody-drug conjugate. [ 70 ] [ 71 ] [ 72 ] In December, the business announced it would acquire German biotech, MYR GmbH, for €1.15 billion plus up to a further €300 million. MYR focuses on the treatment of chronic hepatitis delta virus . [ 73 ] [ 74 ]
On August 11, 2021, U.S. Senator Rand Paul disclosed that his wife Kelley Paul had purchased a stake in Gilead Sciences on February 26, 2020. [ 75 ]
In November 2021, the company was added to the Dow Jones Sustainability World Index . [ 76 ]
In January 2022, Gilead pulled its cancer drug Zydelig (idelalisib) from its accelerated approval in relapsed follicular B-cell non-Hodgkin lymphoma (FL) and relapsed small lymphocytic leukemia (SLL). [ 77 ] In September, the company completed its acquisition of MiroBio for $405 million. [ 78 ]
In February 2023, the business, through Kite Pharma, completed its acquisition of Tmunity Therapeutics. [ 79 ] In May, the business announced it would acquire XinThera and its small molecule inhibitors. [ 80 ]
In February 2024, the company acquired CymaBay Therapeutics , [ 81 ] and in September, paid Genesis Therapeutics $35 million for AI-based drug discovery work. [ 82 ]
The drug sofosbuvir had been part of the 2011 acquisition of Pharmasset. In 2013, the FDA approved this drug, under the trade name Sovaldi, as a treatment for the hepatitis C virus . Forbes magazine ranked Gilead its number 4 drug company, citing a market capitalization of US$113 billion and stock appreciation of 100%, and describing their 2011 purchase of Pharmasset for $11 billion as "one of the best pharma acquisitions ever". [ 83 ] Deutsche Bank estimated Sovaldi sales in the year's final quarter would be $53 million, [ 84 ] and Barron's noted the FDA approval and subsequent strong sales of the "potentially revolutionary" drug as a positive indicator for the stock. [ 85 ]
On July 11, 2014, the United States Senate Committee on Finance investigated Sovaldi's high price ($1,000 per pill; $84,000 for the full 12-week regimen). Senators questioned the extent to which the market was operating "efficiently and rationally", and committee chairman Ron Wyden ( D - Oregon ) and ranking minority member Chuck Grassley ( R - Iowa ) wrote to CEO John C. Martin asking Gilead to justify the price for this drug. [ 86 ] The committee hearings did not result in new law, but in 2014 and 2015, due to negotiated and mandated discounts, Sovaldi was sold well below the list price. [ 87 ] For poorer countries, Gilead licensed multiple companies to produce generic versions of Sovaldi; in India, a pill's price was as low as $4.29. [ 88 ]
Gilead later combined Sovaldi with other antivirals in single-pill combinations. First, Sovaldi was combined with ledipasvir and marketed as Harvoni . This treatment for hepatitis C cures the patient in 94% to 99% of cases (HCV genotype 1). [ 89 ] By 2017, Gilead was reporting drastic drops in Sovaldi revenue from year to year, not only because of pricing pressure but because the number of suitable patients decreased. [ 90 ] Later single-pill combinations were Epclusa (with velpatasvir ) and Vosevi (with velpatasvir and voxilaprevir ).
For the fiscal year 2017, Gilead Sciences reported earnings of US$4.628 billion and annual revenue of US$26.107 billion, [ 91 ] a decline of 14.1% over the previous fiscal cycle. Gilead Sciences's shares traded at over $70 per share, and its market capitalization was valued at US$93.4 billion in October 2018. [ 92 ]
As of 2017, Gilead's challenge was to develop or acquire new blockbuster drugs before its current revenue-producers waned or their patent protection expired. Gilead benefited from the expansion of Medicaid under the Affordable Care Act (ACA); Leerink analyst Geoffrey Porges wrote that Gilead's HIV drugs could face funding pressure under reform proposals. [ 95 ] Gilead has $32 billion in cash, but $27.4 billion is outside the U.S. and is unavailable for acquisitions unless Gilead pays U.S. tax on it, though it could borrow against it. [ 96 ] Gilead would benefit from proposals to let companies repatriate offshore capital with minimal further taxation. [ 97 ]
Gilead's Entospletinib has shown a 90% complete response rate for MLL type acute myeloid leukaemia (AML). [ 98 ]
Several mass tort lawsuits [ 99 ] have been filed against Gilead alleging that the company deliberately delayed development of antiretroviral drugs based on tenofovir alafenamide fumarate (TAF) in order to maximize profits from previous-generation medications containing tenofovir disoproxil fumarate (TDF) . [ 100 ] Plaintiffs allege that Gilead suspended TAF in 2004 despite clear evidence indicating that TAF-based medications were safer than TDF, a compound whose long-term use was associated with adverse side effects such as nephrotoxicity and bone density loss. [ 101 ] [ 102 ]
Gilead's first TAF medication, marketed under the trade name Genvoya, came out in 2015. Lawsuits allege that in the interim period, many HIV patients who continuously took Gilead's older TDF-based drugs suffered severe side effects, including nephrotoxicity . [ 103 ] [ 104 ]
In 2023, the Institute for Clinical and Economic Review (ICER) identified Biktarvy (bictegravir/emtricitabine/tenofovir alafenamide) as one of five high-expenditure drugs that experienced significant net price increases without new clinical evidence to justify the hikes. Specifically, Biktarvy's wholesale acquisition cost rose by 5.49%, leading to an additional $815 million in costs to U.S. payers. [ 105 ]
Gilead came under intense criticism for its high pricing of its patented drug sofosbuvir (sold under the brand name Sovaldi), used to treat hepatitis C . [ 2 ] In the US, for instance, it was launched at $1,000 per pill or $84,000 for the standard 84-day course, [ 3 ] [ 106 ] but it was drastically cheaper in the developing world; [ 4 ] in India, it dropped as low as $4.29 per pill. [ 107 ] While Sovaldi represented a significant improvement over contemporary treatments, the controversy surrounding its price ignited a national debate in the US, according to Reuters. [ 108 ]
The United States Senate Committee on Finance launched an 18-month investigation of Gilead's Sovaldi pricing, and argued in its 2015 report that Gilead set prices high in disregard of the human cost and in order to set the stage for a higher eventual price for Sovaldi's successor, Harvoni. [ 109 ] [ 5 ] The committee's investigation, based in part on internal documents obtained from Gilead, revealed that the company had considered prices ranging from $50,000 to $115,000 per year, trying to strike a balance between revenue and predicted activist and public relations blowback, with little regard to research and development costs. [ 110 ]
The high prices forced state Medicaid programs to ration treatment to patients, delaying treatment of less advanced hepatitis C cases. [ 110 ] In Oregon, for example, 10,000 Medicaid patients were deemed good candidates for Sovaldi therapy, but the Oregon Health Authority estimated that treating half of these patients would more than double the state's total drug expenditures. The state thus opted to limit treatment to 500 patients per year. [ 110 ]
Truvada was introduced to the market by Gilead in 2004 to treat HIV infections. [ 6 ] In the following years, the United States government conducted research demonstrating that Truvada was able to prevent HIV infection. The US Centers for Disease Control holds the patent for this use of Truvada as pre-exposure prophylaxis (PreP). [ 6 ]
Gilead introduced Truvada for PreP in 2012, at which point a prescription cost approximately $1,200 per month in the United States. [ 111 ] By 2018, this price had increased to up to $2,000, despite generally costing less than $100 outside the U.S. [ 111 ] Gilead made over $3 billion in sales of Truvada in 2018. [ 6 ]
The high price drew the ire of activist groups such as ACT UP and was the subject of a Congressional hearing in May 2019. [ 112 ] Gilead's CEO defended its pricing in the hearing by noting the large sums the company spends on HIV/AIDS research. [ 113 ] Activists pressured the US government to enforce its patent on Truvada in order to combat the high prices set by Gilead. [ 6 ]
In May 2019, Gilead announced it would donate enough Truvada to treat up to 200,000 patients annually for up to 11 years, the result of discussions with the Department of Health and Human Services under Trump. Dr. Rochelle Walensky noted that the donations still covered less than one-fifth of the people who need the drug, and argued it was possibly a move to help the company market Descovy, a more advanced successor drug. [ 114 ] Walensky led a 2020 study that concluded the high costs of Descovy would on the whole negate any comparative advantage of prescribing it over a generic Truvada alternative. [ 115 ] [ 116 ]
In July 2021, Gilead announced it would decrease 340B Drug Pricing Program reimbursements to clinics serving primarily low-income communities; clinics argued this severely hinders their ability to provide HIV/AIDS prevention and treatment services among vulnerable populations. [ 117 ]
Gilead has also been accused of stifling competition. A lawsuit filed in the United States in 2019 alleged that the company entered "pay for delay" agreements with other manufacturers, wherein the manufacturers agreed to delay releasing generic versions of Truvada. [ 118 ] In 2021, CVS Pharmacy and RiteAid filed a lawsuit on similar grounds against Gilead, Bristol-Myers Squibb , and Teva Pharmaceuticals in 2021. [ 119 ]
In response to criticisms over the price of Sovaldi, Gilead began licensing the rights to produce generic versions of the drug to select producers in India in 2015. Included in the licensing agreements were 'anti-diversion' provisions, designed to prevent the drug from being exported back to developed countries where the cheaper, generic alternatives were still unavailable. [ 120 ] (In India, a one-month treatment cost approximately US$300, versus $1,000 per pill in the United States.) [ 120 ] Gilead required the Indian producers to screen patients to determine who could buy Sovaldi, which was criticized by Médecins Sans Frontières since it could lead to the exclusion of vulnerable groups like refugees and migrants from accessing the medicines. [ 121 ] In response to the criticism, Gilead eventually relaxed these requirements. [ 120 ]
Gilead has been criticized for tax avoidance . Tax avoidance, as opposed to tax evasion , is the use of legal means to shift tax burdens from one jurisdiction to overseas affiliates that pay a lower tax rate, even if the revenue is primarily generated outside that overseas jurisdiction. [ citation needed ]
A 2016 report by the liberal think tank Americans for Tax Fairness argued that Gilead was able to avoid up to $10 billion in taxes on U.S. sales through mechanisms such as transfer pricing , [ 122 ] [ 123 ] [ 124 ] the sale of assets between affiliated entities. In particular, Gilead sells intellectual property to an Irish subsidiary, which then sells the finished products, such as Sovaldi, in the United States and elsewhere, paying the low Irish tax rate on profits. [ 125 ] The practice is common among multinational pharmaceutical companies like Gilead. [ 122 ]
On December 26, 2018, The Times reported that Gilead had used the Double Irish arrangement to avoid U.S. corporate taxes on global profits, stating that the firm "used a controversial tax loophole arrangement to shift almost €20 billion in profits through an Irish entity in just two years" without paying Irish taxes. [ 126 ] The company repatriated a portion of the Irish subsidiary's holdings, $28 billion, to the United States in 2018 following reductions of the corporate tax rate. For this it paid an estimated $5.5 billion in tax. [ 126 ]
Gilead sought and obtained orphan drug designation for remdesivir from the US Food and Drug Administration (FDA) on March 23, 2020. [ 127 ] This designation is intended to encourage the development of drugs affecting fewer than 200,000 Americans by granting strengthened and extended legal monopoly rights to the manufacturer, along with waivers on taxes and government fees. [ 128 ]
Remdesivir became a candidate for treating COVID-19; at the time the status was granted, fewer than 200,000 Americans had COVID-19, but numbers were climbing rapidly as the COVID-19 pandemic reached the US, and crossing the threshold soon was considered inevitable. [ 128 ] Gilead retains 20-year remdesivir patents in more than 70 countries. [ 129 ]
In 2021, remdesivir (tradename Veklury ) generated more than $4.5 billion in annual revenues, and was Gilead's highest selling product. [ 130 ] [ 131 ]
Emergency use authorization for remdesivir was granted in the U.S. on May 1, 2020, for people hospitalized with severe COVID-19. [ 132 ] In September 2020 following a review of the evidence, the WHO issued guidance not to use remdesivir for people with COVID-19, as there was no good evidence of benefit. [ 133 ] However, over 2020–22 with further clinical research , remdesivir had been approved for treatment of hospitalized people with COVID-19 in the United States, European Union, and multiple other countries. [ 134 ] [ 135 ] [ 136 ] In 2022, the Canadian component of the WHO international Solidarity Trial reported that in-hospital people with COVID-19 treated with remdesivir had lower death rates (by about 4%) and reduced need for oxygen and mechanical ventilation compared to people receiving standard-of-care treatments. [ 137 ]
Veklury received approval from the US Food and Drug Administration (FDA) in October 2020 for use in hospitalized adults and children 12 years and older for treatment of severe COVID-19 infections. [ 138 ] In January 2022, the FDA gave regulatory approval to Veklury for use in adults and children (12 years of age and older who weigh at least 40 kilograms (88 lb)) who are positive for COVID-19, are not hospitalized, and are at high risk of progressing to severe COVID-19, including hospitalization or death. [ 139 ]
The FDA also provided Emergency Use Authorization for Veklury treatment of children under age 12 who are COVID-positive and not hospitalized, but have mild-to-moderate COVID-19 with high risk of developing severe COVID-19, including hospitalization or death. [ 139 ] | https://en.wikipedia.org/wiki/Gilead_Sciences |
Gilles Holst (20 March 1886 – 11 October 1968) [ 1 ] was a Dutch physicist, known worldwide for his invention of the low-pressure sodium lamp in 1932.
His father was a manager of a shipyard. In 1904 he went to ETH Zurich to study mechanical engineering, switching to mathematics and physics after a year.
He worked with Balthasar van der Pol , known for the Van der Pol oscillator , and Frans Michel Penning , known for Penning ionization and the Penning mixture . In 1908 he became a geprüfter Fachlehrer , or qualified teacher. Most importantly, he went on to become the science director of the Philips Physics Laboratory in Eindhoven.
In 1909 he became an assistant to Heike Kamerlingh Onnes at Leiden University . At Leiden, it is believed that he was the first to witness the phenomenon of superconductivity . In 1926 he became a member of the Royal Netherlands Academy of Arts and Sciences . [ 2 ]
The Gilles Holst Award was first awarded in 1939.
He died in the Netherlands at the age of 82. | https://en.wikipedia.org/wiki/Gilles_Holst |
In probability theory , the Gillespie algorithm (or the Doob–Gillespie algorithm or stochastic simulation algorithm , the SSA ) generates a statistically correct trajectory (possible solution) of a stochastic equation system for which the reaction rates are known. It was created by Joseph L. Doob and others (circa 1945), presented by Dan Gillespie in 1976, and popularized in 1977 in a paper where he uses it to simulate chemical or biochemical systems of reactions efficiently and accurately using limited computational power (see stochastic simulation ). [ 1 ] As computers have become faster, the algorithm has been used to simulate increasingly complex systems. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of every single reaction is computationally feasible. Mathematically, it is a variant of a dynamic Monte Carlo method and similar to the kinetic Monte Carlo methods. It is used heavily in computational systems biology . [ citation needed ]
The process that led to the algorithm recognizes several important steps. In 1931, Andrei Kolmogorov introduced the differential equations corresponding to the time-evolution of stochastic processes that proceed by jumps, today known as Kolmogorov equations (Markov jump process) (a simplified version is known as master equation in the natural sciences). It was William Feller , in 1940, who found the conditions under which the Kolmogorov equations admitted (proper) probabilities as solutions. In his Theorem I (1940 work) he establishes that the time-to-the-next-jump was exponentially distributed and the probability of the next event is proportional to the rate. As such, he established the relation of Kolmogorov's equations with stochastic processes .
Later, Doob (1942, 1945) extended Feller's solutions beyond the case of pure-jump processes. The method was implemented in computers by David George Kendall (1950) using the Manchester Mark 1 computer and later used by Maurice S. Bartlett (1953) in his studies of epidemic outbreaks. Gillespie (1977) obtains the algorithm in a different manner by making use of a physical argument.
In a reaction chamber, there are a finite number of molecules. At each infinitesimal slice of time, a single reaction might take place. The rate is determined by the number of molecules in each chemical species.
Naively, we can simulate the trajectory of the reaction chamber by discretizing time, then simulating each time-step. However, there might be long stretches of time where no reaction occurs. The Gillespie algorithm instead samples a random waiting time until some reaction occurs, then takes another random sample to decide which reaction has occurred.
The key assumptions are that each reaction is Markovian in time (memoryless, so the waiting time of each individual reaction is exponentially distributed) and that different reactions are statistically independent of one another.
Given the two assumptions, the random waiting time for some reaction is exponentially distributed, with exponential rate being the sum of the individual reactions' rates.
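Given that structure, one step of the method amounts to two random draws. Here is a minimal sketch in Python (the function name and array representation are illustrative assumptions, not from the original sources):

```python
import numpy as np

def gillespie_step(propensities, rng):
    """One step of the direct method: sample the waiting time to the
    next reaction and the index of the reaction that fires.
    `propensities` is an array of the current a_j(x) values."""
    a0 = propensities.sum()           # total rate of all reactions
    tau = rng.exponential(1.0 / a0)   # waiting time ~ Exponential(a0)
    j = rng.choice(len(propensities), p=propensities / a0)  # which reaction fires
    return tau, j
```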
Traditional continuous and deterministic biochemical rate equations do not accurately predict cellular reactions since they rely on bulk reactions that require the interactions of millions of molecules. They are typically modeled as a set of coupled ordinary differential equations. In contrast, the Gillespie algorithm allows a discrete and stochastic simulation of a system with few reactants because every reaction is explicitly simulated. A trajectory corresponding to a single Gillespie simulation represents an exact sample from the probability mass function that is the solution of the master equation .
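For reference, the master equation in question is the chemical master equation; in the notation used later in this article (species-count vector $\boldsymbol{x}$, propensities $a_j$, state-change vectors $\boldsymbol{\nu}_j$) its standard form is

$$\frac{\partial P(\boldsymbol{x},t)}{\partial t} = \sum_{j}\left[ a_j(\boldsymbol{x}-\boldsymbol{\nu}_j)\,P(\boldsymbol{x}-\boldsymbol{\nu}_j,t) - a_j(\boldsymbol{x})\,P(\boldsymbol{x},t) \right].$$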
The physical basis of the algorithm is the collision of molecules within a reaction vessel. It is assumed that collisions are frequent, but collisions with the proper orientation and energy are infrequent. It is assumed that the reaction environment is well mixed.
A review (Gillespie, 2007) outlines three different, but equivalent formulations; the direct, first-reaction, and first-family methods, whereby the former two are special cases of the latter. The formulation of the direct and first-reaction methods is centered on performing the usual Monte Carlo inversion steps on the so-called "fundamental premise of stochastic chemical kinetics", which mathematically is the function

$$p(\tau, j \mid \boldsymbol{x}, t) = a_j(\boldsymbol{x}) \exp\left(-\tau \sum_{j'} a_{j'}(\boldsymbol{x})\right),$$
where each of the $a_j$ terms are propensity functions of an elementary reaction, whose argument is $\boldsymbol{x}$, the vector of species counts. The $\tau$ parameter is the time to the next reaction (or sojourn time), and $t$ is the current time. To paraphrase Gillespie, this expression is read as "the probability, given $\boldsymbol{X}(t) = \boldsymbol{x}$, that the system's next reaction will occur in the infinitesimal time interval $[t+\tau, t+\tau+d\tau]$, and will be of stoichiometry corresponding to the $j$th reaction". This formulation provides a window to the direct and first-reaction methods by implying $\tau$ is an exponentially-distributed random variable, and $j$ is "a statistically independent integer random variable with point probabilities $a_j(\boldsymbol{x}) / \sum_j a_j(\boldsymbol{x})$".
Thus, the Monte Carlo generating method is simply to draw two pseudorandom numbers, $r_1$ and $r_2$, uniformly on $[0,1]$, and compute

$$\tau = \frac{1}{\sum_{j'} a_{j'}(\boldsymbol{x})} \ln\left(\frac{1}{r_1}\right)$$

and $j$ as the smallest integer satisfying

$$\sum_{j'=1}^{j} a_{j'}(\boldsymbol{x}) > r_2 \sum_{j'} a_{j'}(\boldsymbol{x}).$$
Utilizing this generating method for the sojourn time and next reaction, the direct method algorithm is stated by Gillespie as: 1. Initialize the time $t = t_0$ and the system's state $\boldsymbol{x} = \boldsymbol{x}_0$. 2. With the system in state $\boldsymbol{x}$ at time $t$, evaluate all the $a_j(\boldsymbol{x})$ and their sum. 3. Generate values for $\tau$ and $j$ as above. 4. Effect the next reaction by replacing $t \leftarrow t + \tau$ and $\boldsymbol{x} \leftarrow \boldsymbol{x} + \boldsymbol{\nu}_j$. 5. Record $(\boldsymbol{x}, t)$ as desired; return to step 2, or else end the simulation.
Here $\boldsymbol{\nu}_j$ is the $j$th state-change vector, i.e. the change in the species counts produced by the $j$th reaction. This family of algorithms is computationally expensive and thus many modifications and adaptations exist, including the next reaction method (Gibson & Bruck), tau-leaping , as well as hybrid techniques where abundant reactants are modeled with deterministic behavior. Adapted techniques generally compromise the exactitude of the theory behind the algorithm as it connects to the master equation, but offer reasonable realizations for greatly improved timescales. The computational cost of exact versions of the algorithm is determined by the coupling class of the reaction network. In weakly coupled networks, the number of reactions that is influenced by any other reaction is bounded by a small constant. In strongly coupled networks, a single reaction firing can in principle affect all other reactions. An exact version of the algorithm with constant-time scaling for weakly coupled networks has been developed, enabling efficient simulation of systems with very large numbers of reaction channels (Slepoy, Thompson & Plimpton 2008). The generalized Gillespie algorithm that accounts for the non-Markovian properties of random biochemical events with delay has been developed by Bratsun et al. 2005 and independently Barrio et al. 2006, as well as (Cai 2007). See the articles cited below for details.
Partial-propensity formulations, as developed independently by both Ramaswamy et al. (2009, 2010) and Indurkhya and Beal (2010), are available to construct a family of exact versions of the algorithm whose computational cost is proportional to the number of chemical species in the network, rather than the (larger) number of reactions. These formulations can reduce the computational cost to constant-time scaling for weakly coupled networks and to scale at most linearly with the number of species for strongly coupled networks. A partial-propensity variant of the generalized Gillespie algorithm for reactions with delays has also been proposed (Ramaswamy Sbalzarini 2011). The use of partial-propensity methods is limited to elementary chemical reactions, i.e., reactions with at most two different reactants. Every non-elementary chemical reaction can be equivalently decomposed into a set of elementary ones, at the expense of a linear (in the order of the reaction) increase in network size.
A simple example may help to explain how the Gillespie algorithm works. Consider a system of molecules of two types, A and B . In this system, A and B reversibly bind together to form AB dimers, such that two reactions are possible: either A and B react to form an AB dimer, or an AB dimer dissociates into A and B . The reaction rate constant for a given single A molecule reacting with a given single B molecule is $k_{\mathrm{D}}$, and the reaction rate for an AB dimer breaking up is $k_{\mathrm{B}}$.
If at time t there is one molecule of each type, then the rate of dimer formation is $k_{\mathrm{D}}$, while if there are $n_{\mathrm{A}}$ molecules of type A and $n_{\mathrm{B}}$ molecules of type B , the rate of dimer formation is $k_{\mathrm{D}} n_{\mathrm{A}} n_{\mathrm{B}}$. If there are $n_{\mathrm{AB}}$ dimers then the rate of dimer dissociation is $k_{\mathrm{B}} n_{\mathrm{AB}}$.
The total reaction rate, $R_{\mathrm{TOT}}$, at time t is then given by

$$R_{\mathrm{TOT}} = k_{\mathrm{D}} n_{\mathrm{A}} n_{\mathrm{B}} + k_{\mathrm{B}} n_{\mathrm{AB}}.$$
So, we have now described a simple model with two reactions. This definition is independent of the Gillespie algorithm. We will now describe how to apply the Gillespie algorithm to this system.
In the algorithm, we advance forward in time in two steps: calculating the time to the next reaction, and determining which of the possible reactions the next reaction is. Reactions are assumed to be completely random, so if the reaction rate at a time t is $R_{\mathrm{TOT}}$, then the time, δt , until the next reaction occurs is a random number drawn from an exponential distribution with mean $1/R_{\mathrm{TOT}}$. Thus, we advance time from t to t + δt .
The probability that this reaction is an A molecule binding to a B molecule is simply the fraction of the total rate due to this type of reaction, i.e.

$$P(\mathrm{A} + \mathrm{B} \rightarrow \mathrm{AB}) = k_{\mathrm{D}} n_{\mathrm{A}} n_{\mathrm{B}} / R_{\mathrm{TOT}}.$$
The probability that the next reaction is an AB dimer dissociating is just 1 minus that. So with these two probabilities we either form a dimer by reducing $n_{\mathrm{A}}$ and $n_{\mathrm{B}}$ by one and increasing $n_{\mathrm{AB}}$ by one, or we dissociate a dimer and increase $n_{\mathrm{A}}$ and $n_{\mathrm{B}}$ by one and decrease $n_{\mathrm{AB}}$ by one.
Now we have both advanced time to t + δt and performed a single reaction. The Gillespie algorithm just repeats these two steps as many times as needed to simulate the system for however long we want (i.e., for as many reactions as required). The result of a Gillespie simulation that starts with $n_{\mathrm{A}} = n_{\mathrm{B}} = 10$ and $n_{\mathrm{AB}} = 0$ at t = 0, and where $k_{\mathrm{D}} = 2$ and $k_{\mathrm{B}} = 1$, is shown at the right. For these parameter values, on average there are 8 $n_{\mathrm{AB}}$ dimers and 2 each of A and B, but due to the small numbers of molecules, fluctuations around these values are large. The Gillespie algorithm is often used to study systems where these fluctuations are important.
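As an illustrative sketch, the dimerization example above can be simulated in a few lines of Python; the function and variable names are hypothetical, and numpy's random generator stands in for any uniform/exponential sampler:

```python
import numpy as np

def simulate_dimerization(n_A=10, n_B=10, n_AB=0, k_D=2.0, k_B=1.0,
                          t_end=10.0, seed=0):
    """Gillespie simulation of A + B <-> AB with the parameters
    used in the example in the text."""
    rng = np.random.default_rng(seed)
    t, history = 0.0, [(0.0, n_A, n_B, n_AB)]
    while t < t_end:
        r_form = k_D * n_A * n_B     # rate of dimer formation
        r_diss = k_B * n_AB          # rate of dimer dissociation
        r_tot = r_form + r_diss
        if r_tot == 0.0:
            break                    # no reaction can occur any more
        t += rng.exponential(1.0 / r_tot)          # time to next reaction
        if rng.random() < r_form / r_tot:
            n_A, n_B, n_AB = n_A - 1, n_B - 1, n_AB + 1   # A + B -> AB
        else:
            n_A, n_B, n_AB = n_A + 1, n_B + 1, n_AB - 1   # AB -> A + B
        history.append((t, n_A, n_B, n_AB))
    return history

trajectory = simulate_dimerization()
```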
That was just a simple example, with two reactions. More complex systems with more reactions are handled in the same way. All reaction rates must be calculated at each time step, and one chosen with probability equal to its fractional contribution to the rate. Time is then advanced as in this example. | https://en.wikipedia.org/wiki/Gillespie_algorithm |
A Gilman reagent is a diorganocopper compound with the formula Li[CuR 2 ], where R is an alkyl or aryl group. They are colorless solids. [ citation needed ]
These reagents are useful because, unlike related Grignard reagents and organolithium reagents , they react with organic halides to replace the halide group with an R group (the Corey–House reaction ). Such displacement reactions allow for the synthesis of complex products from simple building blocks. [ 1 ] [ 2 ] Lewis acids can be used to modify the reagent. [ 2 ]
These reagents were discovered by Henry Gilman and coworkers. [ 3 ] Lithium dimethylcopper (CH 3 ) 2 CuLi can be prepared by adding copper(I) iodide to methyllithium in tetrahydrofuran at −78 °C. In the reaction depicted below, [ 4 ] the Gilman reagent is a methylating reagent reacting with an alkyne in a conjugate addition , and the resulting carbanion is trapped intramolecularly by the ester group to form a cyclic enone .
Lithium dimethylcuprate exists as a dimer in diethyl ether forming an 8-membered ring. Similarly, lithium diphenylcuprate crystallizes as a dimeric etherate, [{Li(OEt 2 )}(CuPh 2 )] 2 . [ 5 ]
If the Li + ion is complexed with the crown ether 12-crown-4 , the resulting diorganylcuprate anions adopt a linear coordination geometry at copper. [ 6 ]
For the 'higher order cyanocuprate' Li 2 CuCN(CH 3 ) 2 , Lipshutz and coworkers have claimed that the cyanide ligand is coordinated to Li and π-bound to Cu. [ 7 ] However, the existence of 'mixed higher order organocuprates' has been disputed by Bertz and coworkers, who rejoined that the cyano ligand is actually bound solely to the lithium atom, and that such a structure could still explain the enhanced reactivity of cuprate prepared from CuCN. [ 8 ] [ 9 ] To date, no crystallographic evidence for the existence of 'mixed higher order cuprates' ([R 2 CuX] 2– , X ≠ R) has been obtained. On the other hand, a homoleptic higher order cuprate in the form of a [Ph 3 Cu] 2– moiety has been observed in Li 3 Cu 2 Ph 5 (SMe 2 ) 4 , prepared by Olmstead and Power. [ 10 ]
More generally useful than the Gilman reagents are the so-called mixed cuprates with the formula [RCuX] − and [R 2 CuX] 2− (see above for the controversy over the existence of the latter). Such compounds are often prepared by the addition of the organolithium reagent to copper(I) halides and cyanide. These mixed cuprates are more stable and more readily purified. [ 11 ] One problem addressed by mixed cuprates is the economical use of the alkyl group. Thus, in some applications, the mixed cuprate with the formula Li 2 [Cu(2-thienyl)(CN)R] is prepared by combining thienyllithium and cuprous cyanide, followed by the organic group to be transferred. In this higher order mixed cuprate, both the cyanide and thienyl groups do not transfer; only the R group does. [ 12 ]
The Gilman test is a chemical test for the detection of Grignard reagents and organolithium reagents . [ 1 ] [ 2 ]
A 0.5 mL sample is added to a 1% solution of Michler's ketone in benzene or toluene . To this solution is added 1 mL of water for hydrolysis to take place, and then several drops of 0.2% iodine in glacial acetic acid . If the color of the resulting solution becomes a greenish-blue, then the original sample did contain the organometallic species.
In finite group theory , a mathematical discipline, the Gilman–Griess theorem , proved by Robert H. Gilman and Robert L. Griess , classifies the finite simple groups of characteristic 2 type with e ( G ) ≥ 4 that have a "standard component", which covers one of the three cases of the trichotomy theorem . [ 1 ]
This algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gilman–Griess_theorem |
Gimbal lock is the loss of one degree of freedom in a multi-dimensional mechanism at certain alignments of the axes. In a three-dimensional three- gimbal mechanism, gimbal lock occurs when the axes of two of the gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.
The term can be misleading in the sense that none of the individual gimbals is actually restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes, there is no gimbal available to accommodate rotation about one axis, leaving the suspended object effectively locked (i.e. unable to rotate) around that axis.
The problem can be generalized to other contexts, where a coordinate system loses definition of one of its variables at certain values of the other variables.
A gimbal is a ring that is suspended so it can rotate about an axis. Gimbals are typically nested one within another to accommodate rotation about multiple axes.
They appear in gyroscopes and in inertial measurement units to allow the inner gimbal's orientation to remain fixed while the outer gimbal suspension assumes any orientation. In compasses and flywheel energy storage mechanisms they allow objects to remain upright. They are used to orient thrusters on rockets. [ 1 ]
Some coordinate systems in mathematics behave as if they were real gimbals used to measure the angles, notably Euler angles .
For cases of three or fewer nested gimbals, gimbal lock inevitably occurs at some point in the system due to properties of covering spaces .
While only two specific orientations produce exact gimbal lock, practical mechanical gimbals encounter difficulties near those orientations. When a set of gimbals is close to the locked configuration, small rotations of the gimbal platform require large motions of the surrounding gimbals. Although the ratio is infinite only at the point of gimbal lock, the practical speed and acceleration limits of the gimbals—due to inertia (resulting from the mass of each gimbal ring), bearing friction, the flow resistance of air or other fluid surrounding the gimbals (if they are not in a vacuum), and other physical and engineering factors—limit the motion of the platform close to that point.
Gimbal lock can occur in gimbal systems with two degrees of freedom such as a theodolite with rotations about an azimuth (horizontal angle) and elevation (vertical angle). These two-dimensional systems can gimbal lock at zenith and nadir , because at those points azimuth is not well-defined, and rotation in the azimuth direction does not change the direction the theodolite is pointing.
Consider tracking a helicopter flying towards the theodolite from the horizon. The theodolite is a telescope mounted on a tripod so that it can move in azimuth and elevation to track the helicopter. The helicopter flies towards the theodolite and is tracked by the telescope in elevation and azimuth. The helicopter flies immediately above the tripod (i.e. it is at zenith) when it changes direction and flies at 90 degrees to its previous course. The telescope cannot track this maneuver without a discontinuous jump in one or both of the gimbal orientations. There is no continuous motion that allows it to follow the target. It is in gimbal lock. So there is an infinity of directions around zenith for which the telescope cannot continuously track all movements of a target. [ 2 ] Note that even if the helicopter does not pass through zenith, but only near zenith, so that gimbal lock does not occur, the system must still move exceptionally rapidly to track it, as it rapidly passes from one bearing to the other. The closer to zenith the nearest point is, the faster this must be done, and if it actually goes through zenith, the limit of these "increasingly rapid" movements becomes infinitely fast, namely discontinuous.
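The near-zenith difficulty can be made concrete numerically. The following sketch (illustrative values only, not from the source) computes the peak azimuth slew rate needed to track a straight, level flyover passing at various horizontal distances from the point directly above the theodolite; the required rate grows without bound as the track approaches the zenith:

```python
import numpy as np

# Target flies a straight, level path at speed v, passing a horizontal
# distance d from the point directly above the theodolite.  The peak
# azimuth slew rate is roughly v/d, which diverges as d -> 0.
v = 50.0                               # target speed (m/s); illustrative value
for d in (50.0, 5.0, 0.5):             # miss distance from the zenith point (m)
    t = np.linspace(-2.0, 2.0, 4001)   # time around closest approach (s)
    azimuth = np.arctan2(d, -v * t)    # horizontal bearing to the target
    peak_rate = np.max(np.abs(np.gradient(azimuth, t)))
    print(f"d = {d:5.1f} m  ->  peak azimuth rate ~ {np.degrees(peak_rate):8.1f} deg/s")
```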
To recover from gimbal lock the user has to go around the zenith – explicitly: reduce the elevation, change the azimuth to match the azimuth of the target, then change the elevation to match the target.
Mathematically, this corresponds to the fact that spherical coordinates do not define a coordinate chart on the sphere at zenith and nadir. Alternatively, the corresponding map T 2 → S 2 from the torus T 2 to the sphere S 2 (given by the point with given azimuth and elevation) is not a covering map at these points.
Consider a case of a level-sensing platform on an aircraft flying due north with its three gimbal axes mutually perpendicular (i.e., roll , pitch and yaw angles each zero). If the aircraft pitches up 90 degrees, the aircraft and platform's yaw axis gimbal becomes parallel to the roll axis gimbal, and changes about yaw can no longer be compensated for.
This problem may be overcome by use of a fourth gimbal, actively driven by a motor so as to maintain a large angle between roll and yaw gimbal axes. Another solution is to rotate one or more of the gimbals to an arbitrary position when gimbal lock is detected and thus reset the device.
Modern practice is to avoid the use of gimbals entirely. In the context of inertial navigation systems , that can be done by mounting the inertial sensors directly to the body of the vehicle (this is called a strapdown system) [ 3 ] and integrating sensed rotation and acceleration digitally using quaternion methods to derive vehicle orientation and velocity. Another way to replace gimbals is to use fluid bearings or a flotation chamber. [ 4 ]
A well-known gimbal lock incident happened in the Apollo 11 Moon mission. On this spacecraft, a set of gimbals was used on an inertial measurement unit (IMU). The engineers were aware of the gimbal lock problem but had declined to use a fourth gimbal. [ 5 ] Some of the reasoning behind this decision is apparent from the following quote:
The advantages of the redundant gimbal seem to be outweighed by the equipment simplicity, size advantages, and corresponding implied reliability of the direct three degree of freedom unit.
They preferred an alternate solution using an indicator that would be triggered when near to 85 degrees pitch.
Near that point, in a closed stabilization loop, the torque motors could theoretically be commanded to flip the gimbal 180 degrees instantaneously. Instead, in the LM , the computer flashed a "gimbal lock" warning at 70 degrees and froze the IMU at 85 degrees.
Rather than try to drive the gimbals faster than they could go, the system simply gave up and froze the platform. From this point, the spacecraft would have to be manually moved away from the gimbal lock position, and the platform would have to be manually realigned using the stars as a reference. [ 6 ]
After the Lunar Module had landed, Mike Collins aboard the Command Module joked "How about sending me a fourth gimbal for Christmas?"
In robotics, gimbal lock is commonly referred to as "wrist flip", due to the use of a "triple-roll wrist" in robotic arms , where three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point.
An example of a wrist flip, also called a wrist singularity, is when the path through which the robot is traveling causes the first and third axes of the robot's wrist to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process.
The importance of avoiding singularities in robotics has led the American National Standard for Industrial Robots and Robot Systems – Safety Requirements to define it as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities". [ 7 ]
The problem of gimbal lock appears when one uses Euler angles in applied mathematics; developers of 3D computer programs , such as 3D modeling , embedded navigation systems , and video games must take care to avoid it.
In formal language, gimbal lock occurs because the map from Euler angles to rotations (topologically, from the 3-torus T 3 to the real projective space RP 3 , which is the same as the space of rotations for three-dimensional rigid bodies, formally named SO(3) ) is not a local homeomorphism at every point, and thus at some points the rank (degrees of freedom) must drop below 3, at which point gimbal lock occurs. Euler angles provide a means for giving a numerical description of any rotation in three-dimensional space using three numbers, but not only is this description not unique, but there are some points where not every change in the target space (rotations) can be realized by a change in the source space (Euler angles). This is a topological constraint – there is no covering map from the 3-torus to the 3-dimensional real projective space; the only (non-trivial) covering map is from the 3-sphere, as in the use of quaternions .
To make a comparison, all the translations can be described using three numbers $x$, $y$, and $z$, as the succession of three consecutive linear movements along three perpendicular axes $X$, $Y$ and $Z$. The same holds true for rotations: all the rotations can be described using three numbers $\alpha$, $\beta$, and $\gamma$, as the succession of three rotational movements around three axes that are perpendicular one to the next. This similarity between linear coordinates and angular coordinates makes Euler angles very intuitive , but unfortunately they suffer from the gimbal lock problem.
A rotation in 3D space can be represented numerically with matrices in several ways. One of these representations is:

$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
An example worth examining happens when $\beta = \tfrac{\pi}{2}$. Knowing that $\cos\tfrac{\pi}{2} = 0$ and $\sin\tfrac{\pi}{2} = 1$, the above expression becomes equal to:

$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Carrying out matrix multiplication :

$$R = \begin{pmatrix} 0 & 0 & 1 \\ \sin\alpha\cos\gamma + \cos\alpha\sin\gamma & -\sin\alpha\sin\gamma + \cos\alpha\cos\gamma & 0 \\ -\cos\alpha\cos\gamma + \sin\alpha\sin\gamma & \cos\alpha\sin\gamma + \sin\alpha\cos\gamma & 0 \end{pmatrix}$$
And finally, using the angle-sum trigonometric formulas :

$$R = \begin{pmatrix} 0 & 0 & 1 \\ \sin(\alpha+\gamma) & \cos(\alpha+\gamma) & 0 \\ -\cos(\alpha+\gamma) & \sin(\alpha+\gamma) & 0 \end{pmatrix}$$
Changing the values of $\alpha$ and $\gamma$ in the above matrix has the same effect: the rotation angle $\alpha + \gamma$ changes, but the rotation axis remains in the $Z$ direction: the last column and the first row of the matrix won't change. The only solution for $\alpha$ and $\gamma$ to recover different roles is to change $\beta$.
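This loss of a degree of freedom can be checked numerically. The following sketch (a hypothetical verification using the x-y-z convention above) shows that at β = π/2 two different (α, γ) pairs with the same sum α + γ produce the same rotation matrix:

```python
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def Rz(c):
    return np.array([[np.cos(c), -np.sin(c), 0],
                     [np.sin(c),  np.cos(c), 0],
                     [0, 0, 1]])

beta = np.pi / 2
R1 = Rx(0.3) @ Ry(beta) @ Rz(0.5)   # alpha = 0.3, gamma = 0.5
R2 = Rx(0.7) @ Ry(beta) @ Rz(0.1)   # different angles, same alpha + gamma = 0.8
print(np.allclose(R1, R2))          # True: only alpha + gamma matters here
```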
It is possible to imagine an airplane rotated by the above-mentioned Euler angles using the X-Y-Z convention. In this case, the first angle, $\alpha$, is the pitch. Yaw is then set to $\tfrac{\pi}{2}$, and the final rotation, by $\gamma$, is again the airplane's pitch. Because of gimbal lock, it has lost one of its degrees of freedom, in this case the ability to roll.
It is also possible to choose another convention for representing a rotation with a matrix using Euler angles than the X-Y-Z convention above, and also choose other variation intervals for the angles, but in the end there is always at least one value for which a degree of freedom is lost.
The gimbal lock problem does not make Euler angles "invalid" (they always serve as a well-defined coordinate system), but it makes them unsuited for some practical applications.
The cause of gimbal lock is the representation of orientation in calculations as three axial rotations based on Euler angles. A potential solution therefore is to represent the orientation in some other way. This could be as a rotation matrix , a quaternion (see quaternions and spatial rotation ), or a similar orientation representation that treats the orientation as a value rather than three separate and related values. Given such a representation, the user stores the orientation as a value. To quantify angular changes produced by a transformation, the orientation change is expressed as a delta angle/axis rotation. The resulting orientation must be re-normalized to prevent the accumulation of floating-point error in successive transformations. For matrices, re-normalizing the result requires converting the matrix into its nearest orthonormal representation . For quaternions, re-normalization requires performing quaternion normalization . | https://en.wikipedia.org/wiki/Gimbal_lock |
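As a minimal sketch of this approach (illustrative function names; the quaternion product itself is the standard Hamilton product), an orientation stored as a quaternion can be updated by a delta angle/axis rotation and re-normalized at each step:

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def apply_delta(orientation, axis, angle):
    """Rotate `orientation` by a delta angle/axis rotation and
    re-normalize to counter accumulated floating-point error."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    delta = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    updated = quat_multiply(delta, orientation)
    return updated / np.linalg.norm(updated)   # quaternion normalization

q = np.array([1.0, 0.0, 0.0, 0.0])             # identity orientation
q = apply_delta(q, axis=[0, 0, 1], angle=0.01) # small yaw increment
```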
ASTRO-C , renamed Ginga (Japanese for 'galaxy'), was an X-ray astronomy satellite launched from the Kagoshima Space Center on 5 February 1987 using the M-3SII launch vehicle. The primary instrument for observations was the Large Area Counter (LAC). Ginga was the third Japanese X-ray astronomy mission, following Hakucho and Tenma (the earlier Hinotori satellite also carried X-ray sensors, but it is better regarded as a heliophysics mission than an X-ray astronomy mission). Ginga reentered the Earth's atmosphere on 1 November 1991.
This article about one or more spacecraft of Japan is a stub . You can help Wikipedia by expanding it .
This article about a specific observatory, telescope or astronomical instrument is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Ginga_(satellite) |
GingerMaster is malware that affects Android operating system version 2.3. It was first detected in August 2011. [ 1 ]
GingerMaster is Android malware that contains a root exploit packaged within an infected app. [ 2 ] [ 3 ] GingerMaster's root exploit is the "KillingInTheNameOfGingerBreakzegRush". [ 4 ]
GingerMaster poses as a normal application on the user's phone; once the application is launched on an Android device, it acquires root privileges through GingerBreak and then accesses sensitive data. [ 5 ] [ 4 ] Once GingerMaster has root access, it will try to install a root shell for future malicious use. [ 4 ]
GingerMaster steals data such as the device identifier, phone number, and SIM card number.
This malware -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GingerMaster |
Ginlo is a German GDPR -compliant encrypted messaging service for instant messaging , voice calls , and video calls . It allows account registration without cell phone number. [ 2 ]
The name is an anagram of the word login . The servers are hosted in Germany.
Ginlo was founded by the founder of GMX in 2017 under the name Brabbler. In 2019, the company bought the messenger SIMSme from Deutsche Post . Brabbler later ran out of money, and its cofounder Karsten Schramm bought the product in 2020 and renamed it Ginlo. [ 2 ]
Ginlo has a private [ 3 ] and a business edition. [ 4 ]
The private version is free while the business version is paid. The app contains no ads and is completely financed by the business version revenue. [ 2 ]
Both versions can communicate with each other. [ 3 ] | https://en.wikipedia.org/wiki/Ginlo_(software) |
Ginsenoside Rb 1 (or Ginsenoside Rb1 or GRb 1 or GRb1 ) is a chemical compound belonging to the ginsenoside family.
Like other ginsenosides, it is found in the plant genus Panax ( ginseng ), and has a variety of potential health effects including anticarcinogenic , immunomodulatory , anti‐inflammatory , antiallergic , antiatherosclerotic , antihypertensive , and antidiabetic effects as well as antistress activity and effects on the central nervous system . [ 3 ]
A 1998 study by Seoul National University reported that GRb 1 and GRg 3 (ginsenosides Rb 1 and Rg 3 ) significantly attenuated glutamate-induced neurotoxicity by inhibiting the overproduction of nitric oxide synthase among some other findings regarding their neuroprotective properties. [ 4 ]
In 2002, the Laboratory for Cancer Research in Rutgers University showed that GRb 1 and GRg 1 have neuroprotective effect for spinal cord neurons , while ginsenoside Re did not exhibit any activity. GRb 1 and GRg 1 are proposed to represent potentially effective therapeutic agents for spinal cord injuries . [ 5 ]
The protection that GRg 1 (ginsenoside Rg 1 ) and GRb 1 offer against Alzheimer’s disease symptoms in mice was first published by researchers in 2015. The GRg 1 affected three metabolic pathways: the metabolism of lecithin, amino acids and sphingolipids , while GRb 1 treatment affected lecithin and amino acid metabolism. [ 6 ]
It was reported in 2017 that GRb 1 improved cardiac function and remodelling in heart failure in mice. The treatment of H-ginsenoside Rb 1 potentially attenuated cardiac hypertrophy and myocardial fibrosis . [ 7 ]
The biosynthesis of GRb 1 in Panax ginseng starts from farnesyl diphosphate (FPP), which is converted to squalene with squalene synthase (SQS), then to 2,3-oxidosqualene with squalene epoxidase (SE).
The 2,3-oxidosqualene is then converted to dammarenediol-II by cyclization , with dammarenediol-II synthase (DS) as the catalyst . The dammarenediol-II is converted to protopanaxadiol and then to ginsenoside Rd.
Finally, GRb 1 is synthesized from ginsenoside Rd, catalysed by UDPG:ginsenoside Rd glucosyltransferase (UGRdGT), a biosynthetic enzyme of GRb 1 first discovered in 2005. [ 8 ] [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Ginsenoside_Rb1 |
Mean field theory gives sensible results as long as one is able to neglect fluctuations in the system under consideration. The Ginzburg criterion tells quantitatively when mean field theory is valid. It also gives the idea of an upper critical dimension , a dimensionality of the system above which mean field theory gives proper results, and the critical exponents predicted by mean field theory match exactly with those obtained by numerical methods.
If ϕ {\displaystyle \phi } is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
Quantitatively, this means that [ 1 ]

$$\langle (\delta\phi)^2 \rangle \ll \langle\phi\rangle^2,$$

where the average is taken over a correlation volume.
Using this in the Landau theory , which is identical to the mean field theory for the Ising model , the value of the upper critical dimension comes out to be 4. If the dimension of the space is greater than 4, the mean-field results are good and self-consistent. But for dimensions less than 4, the predictions are less accurate. For instance, in one dimension, the mean field approximation predicts a phase transition at finite temperatures for the Ising model, whereas the exact analytic solution in one dimension has none (except at $T = 0$ and $T \to \infty$).
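A standard scaling sketch shows where the upper critical dimension 4 comes from. Assuming the Landau (mean-field) exponents, with reduced temperature $t$, order parameter $\phi^2 \propto |t|$ and correlation length $\xi \propto |t|^{-1/2}$, the fluctuation strength within a correlation volume scales as $\xi^{2-d}$ in dimension $d$, so

$$\frac{\langle(\delta\phi)^2\rangle_{\xi}}{\phi^2} \;\sim\; \frac{\xi^{2-d}}{|t|} \;\sim\; |t|^{(d-4)/2},$$

which remains small as $t \to 0$ only for $d > 4$.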
In the classical Heisenberg model of magnetism, the order parameter has a higher symmetry, and it exhibits strong directional fluctuations, which are more important than the size fluctuations. These directional fluctuations dominate over the Ginzburg temperature interval in which fluctuations modify the mean-field description, thus replacing the criterion by another, more relevant one.
The Ginzburg–Landau equation , named after Vitaly Ginzburg and Lev Landau , describes the nonlinear evolution of small disturbances near a finite wavelength bifurcation from a stable to an unstable state of a system. At the onset of finite wavelength bifurcation, the system becomes unstable for a critical wavenumber $k_c$ which is non-zero. In the neighbourhood of this bifurcation, the evolution of disturbances is characterised by the particular Fourier mode for $k_c$ with slowly varying amplitude $A$ (more precisely the real part of $A$). The Ginzburg–Landau equation is the governing equation for $A$. The unstable modes can either be non-oscillatory (stationary) or oscillatory. [ 1 ] [ 2 ]
For non-oscillatory bifurcation, $A$ satisfies (in suitably rescaled variables) the real Ginzburg–Landau equation

$$\partial_t A = A + \nabla^2 A - |A|^2 A,$$

which was first derived by Alan C. Newell and John A. Whitehead [ 3 ] and by Lee Segel [ 4 ] in 1969. For oscillatory bifurcation, $A$ satisfies the complex Ginzburg–Landau equation

$$\partial_t A = A + (1 + i\alpha)\nabla^2 A - (1 + i\beta)|A|^2 A,$$

which was first derived by Keith Stewartson and John Trevor Stuart in 1971. [ 5 ] Here $\alpha$ and $\beta$ are real constants.
When the problem is homogeneous, i.e., when $A$ is independent of the spatial coordinates, the Ginzburg–Landau equation reduces to the Stuart–Landau equation . The Swift–Hohenberg equation likewise reduces to the Ginzburg–Landau equation near onset.
Substituting $A(\mathbf{x},t) = R e^{i\Theta}$, where $R = |A|$ is the amplitude and $\Theta = \arg(A)$ is the phase, one obtains the following equations

$$\partial_t R = R + \nabla^2 R - R|\nabla\Theta|^2 - \alpha\left(2\nabla R\cdot\nabla\Theta + R\nabla^2\Theta\right) - R^3,$$
$$R\,\partial_t\Theta = \alpha\left(\nabla^2 R - R|\nabla\Theta|^2\right) + 2\nabla R\cdot\nabla\Theta + R\nabla^2\Theta - \beta R^3.$$

If we substitute $A = f(k)e^{i\mathbf{k}\cdot\mathbf{x}}$ in the real equation without the time derivative term, we obtain

$$f(k) = \sqrt{1 - k^2}, \qquad k^2 \le 1.$$

This solution is known to become unstable due to the Eckhaus instability for wavenumbers $k^2 > 1/3$.
Once again, let us look for steady solutions, but with an absorbing boundary condition $A = 0$ at some location. In a semi-infinite, 1D domain $0 \le x < \infty$, the solution is given by

$$A(x) = e^{ia} \tanh\left(\frac{x}{\sqrt{2}}\right),$$

where $a$ is an arbitrary real constant, reflecting the phase invariance of the equation. Similar solutions can be constructed numerically in a finite domain.
The traveling wave solution is given by

$$A = \sqrt{1 - k^2}\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}, \qquad \omega = \beta + (\alpha - \beta)k^2.$$

The group velocity of the wave is given by $d\omega/dk = 2(\alpha - \beta)k$. The above solution becomes unstable due to the Benjamin–Feir instability for wavenumbers $k^2 > (1 + \alpha\beta)/(2\beta^2 + \alpha\beta + 3)$.
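As a numerical illustration of the Benjamin–Feir instability (an illustrative sketch, not from the sources; parameter values and discretization are assumptions), a near-uniform initial condition can be evolved under the 1D complex Ginzburg–Landau equation with a simple explicit finite-difference scheme:

```python
import numpy as np

# Explicit finite-difference integration of the 1D complex
# Ginzburg-Landau equation  A_t = A + (1 + i*alpha) A_xx - (1 + i*beta)|A|^2 A
# on a periodic domain.
alpha, beta = 2.0, -1.0        # Benjamin-Feir-unstable choice: 1 + alpha*beta < 0
L, N, dt = 200.0, 512, 0.01
dx = L / N
rng = np.random.default_rng(1)
A = 1.0 + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

for _ in range(20000):
    lap = (np.roll(A, -1) - 2 * A + np.roll(A, 1)) / dx**2
    A = A + dt * (A + (1 + 1j * alpha) * lap - (1 + 1j * beta) * np.abs(A)**2 * A)
# For 1 + alpha*beta < 0 the uniform wave destabilizes and amplitude
# turbulence develops; for 1 + alpha*beta > 0 it persists.
```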
Hocking–Stewartson pulse refers to a quasi-steady, 1D solution of the complex Ginzburg–Landau equation, obtained by Leslie M. Hocking and Keith Stewartson in 1972. [ 6 ] The solution is given by
where the four real constants in the above solution satisfy
The coherent structure solutions are obtained by assuming $A = e^{i\mathbf{k}\cdot\mathbf{x} - \omega t}\, B(\boldsymbol{\xi}, t)$ where $\boldsymbol{\xi} = \mathbf{x} + \mathbf{u}t$. This leads to
where $\mathbf{v} = \mathbf{u} + (1+i\alpha)\mathbf{k}$ and $\lambda = 1 + i\omega - (1+i\alpha)k^2$.
In physics , Ginzburg–Landau theory , often called Landau–Ginzburg theory , named after Vitaly Ginzburg and Lev Landau , is a mathematical physical theory used to describe superconductivity . In its initial form, it was postulated as a phenomenological model which could describe type-I superconductors without examining their microscopic properties. A well-known GL-type superconductor is YBCO , and cuprates generally are of this type. [ 1 ]
Later, a version of Ginzburg–Landau theory was derived from the Bardeen–Cooper–Schrieffer microscopic theory by Lev Gor'kov , [ 2 ] thus showing that it also appears in some limit of microscopic theory and giving microscopic interpretation of all its parameters. The theory can also be given a general geometric setting, placing it in the context of Riemannian geometry , where in many cases exact solutions can be given. This general setting then extends to quantum field theory and string theory , again owing to its solvability, and its close relation to other, similar systems.
Based on Landau 's previously established theory of second-order phase transitions , Ginzburg and Landau argued that the free energy density $f_s$ of a superconductor near the superconducting transition can be expressed in terms of a complex order parameter field $\psi(r) = |\psi(r)|e^{i\phi(r)}$, where the quantity $|\psi(r)|^2$ is a measure of the local density of superconducting electrons $n_s(r)$ analogous to a quantum mechanical wave function . [ 2 ] While $\psi(r)$ is nonzero below a phase transition into a superconducting state, no direct interpretation of this parameter was given in the original paper. Assuming smallness of $|\psi|$ and smallness of its gradients , the free energy density has the form of a field theory and exhibits U(1) gauge symmetry:

$$f_s = f_n + \alpha(T)|\psi|^2 + \frac{1}{2}\beta(T)|\psi|^4 + \frac{1}{2m^*}\left|\left(-i\hbar\nabla - \frac{e^*}{c}\mathbf{A}\right)\psi\right|^2 + \frac{\mathbf{B}^2}{8\pi},$$
where
The total free energy is given by $F = \int f_s \, d^3r$. By minimizing $F$ with respect to variations in the order parameter $\psi$ and the vector potential $\mathbf{A}$, one arrives at the Ginzburg–Landau equations

$$\alpha\psi + \beta|\psi|^2\psi + \frac{1}{2m^*}\left(-i\hbar\nabla - \frac{e^*}{c}\mathbf{A}\right)^2\psi = 0$$

$$\nabla\times\mathbf{B} = \frac{4\pi}{c}\mathbf{J}\,; \qquad \mathbf{J} = \frac{e^*}{m^*}\operatorname{Re}\left\{\psi^*\left(-i\hbar\nabla - \frac{e^*}{c}\mathbf{A}\right)\psi\right\},$$

where $\mathbf{J}$ denotes the dissipation -free electric current density and Re the real part . The first equation, which bears some similarities to the time-independent Schrödinger equation but is principally different due to a nonlinear term, determines the order parameter $\psi$. The second equation then provides the superconducting current.
Consider a homogeneous superconductor where there is no superconducting current; the equation for $\psi$ then simplifies to:

$$\alpha\psi + \beta|\psi|^2\psi = 0.$$

This equation has a trivial solution: $\psi = 0$. This corresponds to the normal conducting state, that is, for temperatures above the superconducting transition temperature, $T > T_c$.
Below the superconducting transition temperature, the above equation is expected to have a non-trivial solution (that is, $\psi \neq 0$). Under this assumption the equation above can be rearranged into:

$$|\psi|^2 = -\frac{\alpha}{\beta}.$$

When the right-hand side of this equation is positive, there is a nonzero solution for $\psi$ (remember that the magnitude of a complex number can be positive or zero). This can be achieved by assuming the following temperature dependence of $\alpha$: $\alpha(T) = \alpha_0(T - T_c)$ with $\alpha_0/\beta > 0$. Above the transition temperature $\alpha(T)$ is positive, so only the trivial solution $\psi = 0$ exists; below it $\alpha(T)$ is negative and $|\psi|^2 = -\alpha/\beta > 0$.
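A quick numerical check of this minimization (an illustrative sketch with made-up parameter values) confirms that below $T_c$ the free-energy minimum sits at $|\psi|^2 = -\alpha/\beta$:

```python
import numpy as np

# Homogeneous free-energy density  f - f_n = alpha*|psi|^2 + (beta/2)*|psi|^4
# with alpha = alpha0*(T - Tc).  Below Tc the minimum is at |psi|^2 = -alpha/beta.
alpha0, beta, Tc = 1.0, 1.0, 10.0   # illustrative parameter values
for T in (12.0, 8.0):               # one temperature above Tc, one below
    alpha = alpha0 * (T - Tc)
    psi = np.linspace(0.0, 2.0, 100001)
    f = alpha * psi**2 + 0.5 * beta * psi**4
    psi_min = psi[np.argmin(f)]
    expected = np.sqrt(max(0.0, -alpha / beta))
    print(f"T = {T}: minimizing |psi| = {psi_min:.3f}, predicted {expected:.3f}")
```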
In Ginzburg–Landau theory the electrons that contribute to superconductivity were proposed to form a superfluid . [ 3 ] In this interpretation, | ψ | 2 indicates the fraction of electrons that have condensed into a superfluid. [ 3 ]
The Ginzburg–Landau equations predicted two new characteristic lengths in a superconductor. The first characteristic length was termed coherence length , ξ . For $T > T_c$ (normal phase), it is given by

$$\xi = \sqrt{\frac{\hbar^2}{2m^*|\alpha|}},$$

while for $T < T_c$ (superconducting phase), where it is more relevant, it is given by

$$\xi = \sqrt{\frac{\hbar^2}{4m^*|\alpha|}}.$$
It sets the exponential law according to which small perturbations of the density of superconducting electrons recover their equilibrium value $\psi_0$. Thus this theory characterized all superconductors by two length scales. The second one is the penetration depth, λ . It was previously introduced by the London brothers in their London theory . Expressed in terms of the parameters of the Ginzburg–Landau model it is

$$\lambda = \sqrt{\frac{m^* c^2}{4\pi (e^*)^2 \psi_0^2}},$$

where $\psi_0$ is the equilibrium value of the order parameter in the absence of an electromagnetic field. The penetration depth sets the exponential law according to which an external magnetic field decays inside the superconductor.
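Both lengths diverge in the same way as $T \to T_c$, so their ratio (the parameter κ discussed next) is temperature-independent near the transition. This can be illustrated numerically (constants set to 1; the values are assumptions for illustration only):

```python
import numpy as np

# Temperature dependence of the two Ginzburg-Landau lengths below Tc,
# using alpha(T) = alpha0*(T - Tc).  Units and physical constants are set
# to 1; the point is only the common (Tc - T)^(-1/2) divergence.
alpha0, beta, Tc = 1.0, 1.0, 10.0
for T in (9.9, 9.99, 9.999):
    abs_alpha = alpha0 * (Tc - T)
    xi = 1.0 / np.sqrt(abs_alpha)      # coherence length (up to constants)
    lam = np.sqrt(beta / abs_alpha)    # penetration depth (up to constants)
    print(f"T = {T}: xi ~ {xi:7.2f}, lambda ~ {lam:7.2f}, kappa ~ {lam/xi:.3f}")
```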
The original idea on the parameter κ belongs to Landau. The ratio κ = λ / ξ is presently known as the Ginzburg–Landau parameter. It has been proposed by Landau that Type I superconductors are those with 0 < κ < 1/ √ 2 , and Type II superconductors those with κ > 1/ √ 2 .
The phase transition from the normal state is of second order for Type II superconductors, taking into account fluctuations, as demonstrated by Dasgupta and Halperin, while for Type I superconductors it is of first order, as demonstrated by Halperin, Lubensky and Ma. [ 4 ]
In the original paper Ginzburg and Landau observed the existence of two types of superconductors, depending on the energy of the interface between the normal and superconducting states. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors , superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value H c . Depending on the geometry of the sample, one may obtain an intermediate state [ 5 ] consisting of a baroque pattern [ 6 ] of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors , raising the applied field past a critical value H c 1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength H c 2 , superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized . Most pure elemental superconductors, except niobium and carbon nanotubes , are Type I, while almost all impure and compound superconductors are Type II.
The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957. He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in a triangular lattice of quantized tubes of flux vortices . [ 7 ]
The Ginzburg–Landau functional can be formulated in the general setting of a complex vector bundle over a compact Riemannian manifold . [ 8 ] This is the same functional as given above, transposed to the notation commonly used in Riemannian geometry. In multiple interesting cases, it can be shown to exhibit the same phenomena as the above, including Abrikosov vortices (see discussion below).
For a complex vector bundle $E$ over a Riemannian manifold $M$ with fiber $\mathbb{C}^n$, the order parameter $\psi$ is understood as a section of the vector bundle $E$. The Ginzburg–Landau functional is then a Lagrangian for that section:

$$\mathcal{L}(\psi, A) = \int_M \left( \vert F\vert^2 + \vert D\psi\vert^2 + \frac{1}{4}\left(\sigma^2 - \vert\psi\vert^2\right)^2 \right) dvol_g .$$

The notation used here is as follows. The fibers $\mathbb{C}^n$ are assumed to be equipped with a Hermitian inner product $\langle\cdot,\cdot\rangle$ so that the square of the norm is written as $\vert\psi\vert^2 = \langle\psi,\psi\rangle$. The phenomenological parameters $\alpha$ and $\beta$ have been absorbed so that the potential energy term is a quartic mexican hat potential ; i.e., exhibiting spontaneous symmetry breaking , with a minimum at some real value $\sigma \in \mathbb{R}$. The integral is explicitly over the volume form

$$dvol_g = \sqrt{\vert g\vert}\; dx^1 \wedge \cdots \wedge dx^m,$$

for an $m$-dimensional manifold $M$ with determinant $\vert g\vert$ of the metric tensor $g$.
The $D = d + A$ is the connection one-form and $F$ is the corresponding curvature 2-form (this is not the same as the free energy $F$ given up top; here, $F$ corresponds to the electromagnetic field strength tensor ). The $A$ corresponds to the vector potential , but is in general non-Abelian when $n > 1$, and is normalized differently. In physics, one conventionally writes the connection as $d - ieA$ for the electric charge $e$ and vector potential $A$; in Riemannian geometry, it is more convenient to drop the $e$ (and all other physical units) and take $A = A_\mu dx^\mu$ to be a one-form taking values in the Lie algebra corresponding to the symmetry group of the fiber. Here, the symmetry group is SU(n) , as that leaves the inner product $\langle\cdot,\cdot\rangle$ invariant; so here, $A$ is a form taking values in the algebra $\mathfrak{su}(n)$.
The curvature $F$ generalizes the electromagnetic field strength to the non-Abelian setting, as the curvature form of an affine connection on a vector bundle . It is conventionally written as

$$F = dA + A \wedge A.$$

That is, each $A_\mu$ is an $n\times n$ skew-symmetric matrix. (See the article on the metric connection for additional articulation of this specific notation.) To emphasize this, note that the first term of the Ginzburg–Landau functional, involving the field-strength only, is

$$\mathrm{YM}(A) = \int_M \vert F\vert^2 \, dvol_g ,$$

which is just the Yang–Mills action on a compact Riemannian manifold.
The Euler–Lagrange equations for the Ginzburg–Landau functional are the Yang–Mills equations [ 9 ]

$$D^*D\psi = \frac{1}{2}\left(\sigma^2 - \vert\psi\vert^2\right)\psi$$

and

$$D^*F = -\operatorname{Re}\langle D\psi, \psi\rangle,$$

where $D^*$ is the adjoint of $D$, analogous to the codifferential $\delta = d^*$. Note that these are closely related to the Yang–Mills–Higgs equations .
In string theory , it is conventional to study the Ginzburg–Landau functional for the manifold $M$ being a Riemann surface , and taking $n = 1$; i.e., a line bundle . [ 10 ] The phenomenon of Abrikosov vortices persists in these general cases, including $M = \mathbb{R}^2$, where one can specify any finite set of points where $\psi$ vanishes, including multiplicity. [ 11 ] The proof generalizes to arbitrary Riemann surfaces and to Kähler manifolds . [ 12 ] [ 13 ] [ 14 ] [ 15 ] In the limit of weak coupling, it can be shown that $\vert\psi\vert$ converges uniformly to 1, while $D\psi$ and $dA$ converge uniformly to zero, and the curvature becomes a sum over delta-function distributions at the vortices. [ 16 ] The sum over vortices, with multiplicity, just equals the degree of the line bundle; as a result, one may write a line bundle on a Riemann surface as a flat bundle, with N singular points and a covariantly constant section.
When the manifold is four-dimensional, possessing a spin c structure , then one may write a very similar functional, the Seiberg–Witten functional , which may be analyzed in a similar fashion, and which possesses many similar properties, including self-duality. When such systems are integrable , they are studied as Hitchin systems .
When the manifold $M$ is a Riemann surface $M = \Sigma$, the functional can be re-written so as to explicitly show self-duality. One achieves this by writing the exterior derivative as a sum of Dolbeault operators $d = \partial + \overline{\partial}$. Likewise, the space $\Omega^1$ of one-forms over a Riemann surface decomposes into a space that is holomorphic, and one that is anti-holomorphic: $\Omega^1 = \Omega^{1,0} \oplus \Omega^{0,1}$, so that forms in $\Omega^{1,0}$ are holomorphic in $z$ and have no dependence on $\overline{z}$; and vice-versa for $\Omega^{0,1}$. This allows the vector potential to be written as $A = A^{1,0} + A^{0,1}$ and likewise $D = \partial_A + \overline{\partial}_A$ with $\partial_A = \partial + A^{1,0}$ and $\overline{\partial}_A = \overline{\partial} + A^{0,1}$.
For the case of $n = 1$, where the fiber is $\mathbb{C}$ so that the bundle is a line bundle , the field strength can similarly be written as

$$F = \overline{\partial} A^{1,0} + \partial A^{0,1},$$

the $(2,0)$ and $(0,2)$ components vanishing identically on a one-complex-dimensional manifold. Note that in the sign convention being used here, both $A^{1,0}, A^{0,1}$ and $F$ are purely imaginary ( viz. U(1) is generated by $e^{i\theta}$, so derivatives are purely imaginary). The functional can then be rewritten in a manifestly self-dual form. The integral is understood to be over the volume form of the surface, so that the integral of the volume form itself is the total area of the surface $\Sigma$. The $*$ is the Hodge star , as before. The degree $\deg L$ of the line bundle $L$ over the surface $\Sigma$ is, in this convention,

$$\deg L = \frac{i}{2\pi} \int_{\Sigma} F = c_1(L)[\Sigma],$$

the evaluation of the first Chern class $c_1(L) \in H^2(\Sigma)$ on the fundamental class.
The Lagrangian is minimized (stationary) when $\psi, A$ solve the Ginzburg–Landau equations. Note that these are both first-order differential equations, manifestly self-dual. Integrating the second of these, one quickly finds that a non-trivial solution must obey a bound relating the degree of the line bundle to the total area of the surface. Roughly speaking, this bound can be interpreted as an upper limit on the density of the Abrikosov vortices. One can also show that the solutions are bounded; one must have $|\psi| \leq \sigma$.
In particle physics , any quantum field theory with a unique classical vacuum state and a potential energy with a degenerate critical point is called a Landau–Ginzburg theory. The generalization to N = (2,2) supersymmetric theories in 2 spacetime dimensions was proposed by Cumrun Vafa and Nicholas Warner in November 1988; [ 17 ] in this generalization one imposes that the superpotential possess a degenerate critical point. The same month, together with Brian Greene they argued that these theories are related by a renormalization group flow to sigma models on Calabi–Yau manifolds . [ 18 ] In his 1993 paper "Phases of N = 2 theories in two-dimensions", Edward Witten argued that Landau–Ginzburg theories and sigma models on Calabi–Yau manifolds are different phases of the same theory. [ 19 ] A construction of such a duality was given by relating the Gromov–Witten theory of Calabi–Yau orbifolds to an analogous Landau–Ginzburg theory, known as FJRW theory. [ 20 ] Witten's sigma models were later used to describe the low-energy dynamics of 4-dimensional gauge theories with monopoles, as well as brane constructions. [ 21 ] | https://en.wikipedia.org/wiki/Ginzburg–Landau_theory
Giordano Bruno ( / dʒ ɔːr ˈ d ɑː n oʊ ˈ b r uː n oʊ / jor- DAH -noh BROO -noh , Italian: [dʒorˈdaːno ˈbruːno] ; Latin : Iordanus Brunus Nolanus ; born Filippo Bruno ; January or February 1548 – 17 February 1600) was an Italian philosopher , poet , alchemist , astrologer , cosmological theorist, and esotericist . [ 1 ] [ 2 ] He is known for his cosmological theories, which conceptually extended to include the then-novel Copernican model . He practiced Hermeticism and gave a mystical stance to exploring the universe. He proposed that the stars were distant suns surrounded by their own planets ( exoplanets ), and he raised the possibility that these planets might foster life of their own, a cosmological position known as cosmic pluralism . He also insisted that the universe is infinite and could have no center.
Bruno was tried for heresy by the Roman Inquisition on charges of denial of several core Catholic doctrines, including eternal damnation , the Trinity , the divinity of Christ , the virginity of Mary , and transubstantiation . Bruno's pantheism was not taken lightly by the church, [ 3 ] nor was his teaching of metempsychosis regarding the reincarnation of the soul . The Inquisition found him guilty, and he was burned at the stake in Rome's Campo de' Fiori in 1600. After his death, he gained considerable fame, being particularly celebrated by 19th- and early 20th-century commentators who regarded him as a martyr for science. Some historians are of the opinion his heresy trial was not a response to his cosmological views but rather a response to his religious and afterlife views, [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] while others find the main reason for Bruno's death was indeed his cosmological views. [ 9 ] [ 10 ] [ 11 ] Bruno's case is still considered a landmark in the history of free thought and the emerging sciences. [ 12 ] [ 13 ]
In addition to cosmology, Bruno also wrote extensively on the art of memory , a loosely organized group of mnemonic techniques and principles. Historian Frances Yates argues that Bruno was deeply influenced by the presocratic Empedocles , Neoplatonism , Renaissance Hermeticism, and Book of Genesis -like legends surrounding the Hellenistic conception of Hermes Trismegistus . [ 14 ] Other studies of Bruno have focused on his qualitative approach to mathematics and his application of the spatial concepts of geometry to language. [ 15 ] [ 16 ]
Born Filippo Bruno in Nola (a comune in the modern-day province of Naples , in the Southern Italian region of Campania , then part of the Kingdom of Naples ) in 1548, he was the son of Giovanni Bruno (1517 – c. 1592), a soldier, and Fraulissa Savolino (1520–?). In his youth he was sent to Naples to be educated. He was tutored privately at the Augustinian monastery there, and attended public lectures at the Studium Generale . [ 17 ] At the age of 17, he entered the Dominican Order at the monastery of San Domenico Maggiore in Naples, taking the name Giordano, after Giordano Crispo, his metaphysics tutor. He continued his studies there, completing his novitiate, and was ordained a priest in 1572 at age 24. During his time in Naples, he became known for his skill with the art of memory and on one occasion traveled to Rome to demonstrate his mnemonic system before Pope Pius V and Cardinal Rebiba . In his later years, Bruno claimed that the Pope accepted his dedication to him of the lost work On The Ark of Noah at this time. [ 18 ]
While Bruno was distinguished for outstanding ability, his taste for free thinking and forbidden books soon caused him difficulties. Given the controversy he caused in later life, it is surprising that he was able to remain within the monastic system for eleven years. In his testimony to Venetian inquisitors during his trial many years later, he says that proceedings were twice taken against him for having cast away images of the saints, retaining only a crucifix , and for having recommended controversial texts to a novice. [ 19 ] Such behavior could perhaps be overlooked, but Bruno's situation became much more serious when he was reported to have defended the Arian heresy , and when a copy of the banned writings of Erasmus , annotated by him, was discovered hidden in the monastery latrine . When he learned that an indictment was being prepared against him in Naples he fled, shedding his religious habit , at least for a time. [ 20 ]
Bruno first went to the Genoese port of Noli , then to Savona , Turin and finally to Venice , where he published his lost work On the Signs of the Times with the permission (so he claimed at his trial) of the Dominican Remigio Nannini Fiorentino . From Venice he went to Padua , where he met fellow Dominicans who convinced him to wear his religious habit again. From Padua he went to Bergamo and then across the Alps to Chambéry and Lyon . His movements after this time are obscure. [ 21 ]
In 1579, Bruno arrived in Geneva . During his Venetian trial, he told inquisitors that while in Geneva he told the Marchese de Vico of Naples, who was notable for helping Italian refugees in Geneva, "I did not intend to adopt the religion of the city. I desired to stay there only that I might live at liberty and in security." [ 23 ] Bruno had a pair of breeches made for himself, and the Marchese and others apparently made Bruno a gift of a sword, hat, cape and other necessities for dressing himself; in such clothing Bruno could no longer be recognized as a priest. Things apparently went well for Bruno for a time, as he entered his name in the Rector's Book of the University of Geneva in May 1579. [ 24 ] But in keeping with his personality he could not long remain silent. In August he published an attack on the work of Antoine de La Faye [ fr ] , a distinguished professor. Bruno and the printer, Jean Bergeon, were promptly arrested. [ 25 ] Rather than apologizing, Bruno insisted on continuing to defend his publication. He was refused the right to take sacrament . [ 26 ] Though this right was soon restored, he left Geneva. [ 27 ]
He went to France, arriving first in Lyon , and thereafter settling for a time (1580–1581) in Toulouse , where he took his doctorate in theology and was elected by students to lecture in philosophy. [ 28 ] He also attempted at this time to return to Catholicism, but was denied absolution by the Jesuit priest he approached. [ 29 ] When religious strife broke out in the summer of 1581, he moved to Paris. [ 30 ] There he held a cycle of thirty lectures on theological topics and also began to gain fame for his prodigious memory. [ 31 ] His talents attracted the benevolent attention of the king Henry III ; Bruno subsequently reported:
"I got me such a name that King Henry III summoned me one day to discover from me if the memory which I possessed was natural or acquired by magic art. I satisfied him that it did not come from sorcery but from organized knowledge; and, following this, I got a book on memory printed, entitled The Shadows of Ideas , which I dedicated to His Majesty. Forthwith he gave me an Extraordinary Lectureship with a salary." [ 32 ]
In Paris, Bruno enjoyed the protection of his powerful French patrons. During this period, he published several works on mnemonics, including De umbris idearum ( On the Shadows of Ideas , 1582), Ars memoriae [ it ] ( The Art of Memory , 1582), and Cantus circaeus ( Circe's Song , 1582; described at Circe in the arts § Reasoning beasts ). All of these were based on his mnemonic models of organized knowledge and experience, as opposed to the simplistic logic-based mnemonic techniques of Petrus Ramus then becoming popular. [ citation needed ] Bruno also published a comedy summarizing some of his philosophical positions, titled Il Candelaio ( The Candlemaker , 1582). In the 16th century dedications were, as a rule, approved beforehand, and hence were a way of placing a work under the protection of an individual. Given that Bruno dedicated various works to the likes of King Henry III, Sir Philip Sidney , Michel de Castelnau (French Ambassador to England), and possibly Pope Pius V , it is apparent that this wanderer had risen sharply in status and moved in powerful circles. [ citation needed ]
In April 1583, Bruno went to England with letters of recommendation from Henry III as a guest of the French ambassador, Michel de Castelnau . Bruno lived at the French embassy with the lexicographer John Florio . There he became acquainted with the poet Philip Sidney (to whom he dedicated two books) and other members of the Hermetic circle around John Dee , though there is no evidence that Bruno ever met Dee himself. He also lectured at Oxford , and unsuccessfully sought a teaching position there. His views were controversial, notably with John Underhill , Rector of Lincoln College and subsequently bishop of Oxford, and George Abbot , who later became Archbishop of Canterbury . Abbot mocked Bruno for supporting "the opinion of Copernicus that the earth did go round, and the heavens did stand still; whereas in truth it was his own head which rather did run round, and his brains did not stand still", [ 33 ] and found Bruno had both plagiarized and misrepresented Ficino 's work, leading Bruno to return to the continent. [ 34 ]
Nevertheless, his stay in England was fruitful. During that time Bruno completed and published some of his most important works, the six "Italian Dialogues", including the cosmological tracts La cena de le ceneri ( The Ash Wednesday Supper , 1584), De la causa, principio et uno ( On Cause, Principle and Unity , 1584), De l'infinito, universo et mondi ( On the Infinite, Universe and Worlds , 1584) as well as Lo spaccio de la bestia trionfante ( The Expulsion of the Triumphant Beast , 1584) and De gli eroici furori [ it ] ( On the Heroic Frenzies , 1585). Some of these were printed by John Charlewood . Some of the works that Bruno published in London, notably The Ash Wednesday Supper , appear to have given offense. Once again, Bruno's controversial views and tactless language lost him the support of his friends. John Bossy has advanced the theory that, while staying in the French Embassy in London, Bruno was also spying on Catholic conspirators, under the pseudonym "Henry Fagot", for Sir Francis Walsingham , Queen Elizabeth 's Secretary of State. [ 35 ]
Bruno is sometimes cited as being the first to propose that the universe is infinite, which he did during his time in England, but an English scientist , Thomas Digges , put forth this idea in a published work in 1576, some eight years earlier than Bruno. [ 36 ] An infinite universe and the possibility of alien life had also been suggested earlier by the German Catholic Cardinal Nicholas of Cusa in "On Learned Ignorance", published in 1440, and Bruno attributed his understanding of multiple worlds to this earlier scholar, whom he called "the divine Cusanus". [ 37 ]
In October 1585, Castelnau was recalled to France, and Bruno went with him. [ 38 ] In Paris, Bruno found a tense political situation. Moreover, his 120 theses against Aristotelian natural science soon put him in ill favor. In 1586, following a violent quarrel over these theses, he left France for Germany. [ 39 ]
In Germany he failed to obtain a teaching position at Marburg , but was granted permission to teach at Wittenberg , where he lectured on Aristotle for two years. [ 40 ] However, with a change of intellectual climate there, he was no longer welcome, and went in 1588 to Prague , where he obtained 300 taler from Rudolf II , but no teaching position. [ 41 ] He went on to serve briefly as a professor in Helmstedt , but had to flee again in 1590 when he was excommunicated by the Lutherans . [ 42 ]
During this period he produced several Latin works, dictated to his friend and secretary Girolamo Besler, including De Magia ( On Magic ), Theses De Magia ( Theses on Magic ) and De Vinculis in Genere ( A General Account of Bonding ). All these were apparently transcribed or recorded by Besler (or Bisler) between 1589 and 1590. [ 43 ] He also published De Imaginum, Signorum, Et Idearum Compositione ( On the Composition of Images, Signs and Ideas , 1591).
In 1591 he was in Frankfurt , where he received an invitation from the Venetian patrician Giovanni Mocenigo , who wished to be instructed in the art of memory, [ 44 ] and also heard of a vacant chair in mathematics at the University of Padua . At the time the Inquisition seemed to be losing some of its strictness, and because the Republic of Venice was the most liberal state in the Italian Peninsula , Bruno was lulled into making the fatal mistake of returning to Italy. [ 45 ]
He went first to Padua , where he taught briefly, and applied unsuccessfully for the chair of mathematics, which was given instead to Galileo Galilei one year later. Bruno accepted Mocenigo's invitation and moved to Venice in March 1592. [ 46 ] For about two months he served as an in-house tutor to Mocenigo, to whom he let slip some of his heterodox ideas. [ 47 ] Mocenigo denounced him to the Venetian Inquisition , which had Bruno arrested on 22 May 1592. [ 48 ] Among the numerous charges of blasphemy and heresy brought against him in Venice, based on Mocenigo's denunciation, was his belief in the plurality of worlds , as well as accusations of personal misconduct. [ 49 ] Bruno defended himself skillfully, stressing the philosophical character of some of his positions, denying others and admitting that he had had doubts on some matters of dogma. The Roman Inquisition, however, asked for his transfer to Rome. [ 50 ] After several months of argument, the Venetian authorities reluctantly consented and Bruno was sent to Rome in January 1593. [ 51 ]
During the seven years of his trial in Rome, Bruno was held in confinement, lastly in the Tower of Nona . Some important documents about the trial are lost, but others have been preserved, among them a summary of the proceedings that was rediscovered in 1940. [ 52 ] The numerous charges against Bruno, based on some of his books as well as on witness accounts, included blasphemy, immoral conduct, and heresy in matters of dogmatic theology, and involved some of the basic doctrines of his philosophy and cosmology. Luigi Firpo has speculated about the precise list of charges made against Bruno by the Roman Inquisition. [ 53 ]
Bruno defended himself as he had in Venice, insisting that he accepted the Church's dogmatic teachings, but trying to preserve the basis of his cosmological views. In particular, he held firm to his belief in the plurality of worlds, although he was admonished to abandon it. His trial was overseen by the Inquisitor Cardinal Bellarmine , who demanded a full recantation, which Bruno eventually refused. On 20 January 1600, Pope Clement VIII declared Bruno a heretic, and the Inquisition issued a sentence of death. According to the correspondence of Gaspar Schopp of Breslau , he is said to have made a threatening gesture towards his judges and to have replied: Maiori forsan cum timore sententiam in me fertis quam ego accipiam ("Perhaps you pronounce this sentence against me with greater fear than I receive it"). [ 54 ]
He was turned over to the secular authorities. On 17 February 1600, in the Campo de' Fiori (a central Roman market square), naked, with his "tongue imprisoned because of his wicked words", he was burned alive at the stake . [ 55 ] [ 56 ] His ashes were thrown into the Tiber river.
All of Bruno's works were placed on the Index Librorum Prohibitorum in 1603. The inquisition cardinals who judged Giordano Bruno were Cardinal Bellarmino (Bellarmine) , Cardinal Madruzzo (Madruzzi) , Camillo Cardinal Borghese (later Pope Paul V ), Domenico Cardinal Pinelli, Pompeio Cardinal Arrigoni, Cardinal Sfondrati , Pedro Cardinal De Deza Manuel and Cardinal Santorio (Archbishop of Santa Severina, Cardinal-Bishop of Palestrina). [ 57 ]
The measures taken to prevent Bruno continuing to speak have resulted in his becoming a symbol for free thought and free speech in present-day Rome, where an annual memorial service takes place close to the spot where he was executed. [ 58 ]
The earliest likeness of Bruno is an engraving published in 1715 [ 59 ] and cited by Salvestrini as "the only known portrait of Bruno". Salvestrini suggests that it is a re-engraving made from a now lost original. [ 22 ] This engraving has provided the source for later images.
The records of Bruno's imprisonment by the Venetian inquisition in May 1592 describe him as a man "of average height, with a hazel-coloured beard and the appearance of being about forty years of age". Alternately, a passage in a work by George Abbot indicates that Bruno was of diminutive stature: "When that Italian Didapper, who intituled himself Philotheus Iordanus Brunus Nolanus, magis elaboratae Theologiae Doctor, &c. with a name longer than his body...". [ 60 ]
In the first half of the 15th century, Nicholas of Cusa challenged the then widely accepted philosophies of Aristotelianism , envisioning instead an infinite universe whose center was everywhere and circumference nowhere, and moreover teeming with countless stars. [ 61 ] He also predicted that neither were the rotational orbits circular nor were their movements uniform. [ 62 ]
In the second half of the 16th century, the theories of Copernicus (1473–1543) began diffusing through Europe. Copernicus conserved the idea of planets fixed to solid spheres, but considered the apparent motion of the stars to be an illusion caused by the rotation of the Earth on its axis; he also preserved the notion of an immobile center, but it was the Sun rather than the Earth. Copernicus also argued the Earth was a planet orbiting the Sun once every year. However he maintained the Ptolemaic hypothesis that the orbits of the planets were composed of perfect circles— deferents and epicycles —and that the stars were fixed on a stationary outer sphere. [ 63 ]
Despite the widespread publication of Copernicus' work De revolutionibus orbium coelestium , during Bruno's time most educated Catholics subscribed to the Aristotelian geocentric view that the Earth was the center of the universe , and that all heavenly bodies revolved around it. [ 64 ] The ultimate limit of the universe was the primum mobile , whose diurnal rotation was conferred upon it by a transcendental God, not part of the universe (although, as the kingdom of heaven , adjacent to it [ 65 ] ), a motionless prime mover and first cause . The fixed stars were part of this celestial sphere, all at the same fixed distance from the immobile Earth at the center of the sphere. Ptolemy had numbered these at 1,022, grouped into 48 constellations . The planets were each fixed to a transparent sphere. [ 66 ]
Few astronomers of Bruno's time accepted Copernicus's heliocentric model . Among those who did were the Germans Michael Maestlin (1550–1631), Christoph Rothmann , Johannes Kepler (1571–1630); the Englishman Thomas Digges (c. 1546–1595), author of A Perfit Description of the Caelestial Orbes ; and the Italian Galileo Galilei (1564–1642).
In 1584, Bruno published two important philosophical dialogues ( La Cena de le Ceneri and De l'infinito universo et mondi ) in which he argued against the planetary spheres ( Christoph Rothmann did the same in 1586 as did Tycho Brahe in 1587) and affirmed the Copernican principle.
In particular, to support the Copernican view and oppose the objection according to which the motion of the Earth would be perceived by means of the motion of winds, clouds etc., in La Cena de le Ceneri Bruno anticipates some of the arguments of Galilei on the relativity principle. [ 67 ] Note that he also uses the example now known as Galileo's ship .
Theophilus – [...] air through which the clouds and winds move are parts of the Earth, [...] to mean under the name of Earth the whole machinery and the entire animated part, which consists of dissimilar parts; so that the rivers, the rocks, the seas, the whole vaporous and turbulent air, which is enclosed within the highest mountains, should belong to the Earth as its members, just as the air [does] in the lungs and in other cavities of animals by which they breathe, widen their arteries, and other similar effects necessary for life are performed. The clouds, too, move through accidents in the body of the Earth and are in its bowels as are the waters. [...] With the Earth move [...] all things that are on the Earth. If, therefore, from a point outside the Earth something were thrown upon the Earth, it would lose, because of the latter's motion, its straightness as would be seen on the ship [...] moving along a river, if someone on point C of the riverbank were to throw a stone along a straight line, and would see the stone miss its target by the amount of the velocity of the ship's motion. But if someone were placed high on the mast of that ship, move as it may however fast, he would not miss his target at all, so that the stone or some other heavy thing thrown downward would not come along a straight line from the point E which is at the top of the mast, or cage, to the point D which is at the bottom of the mast, or at some point in the bowels and body of the ship. Thus, if from the point D to the point E someone who is inside the ship would throw a stone straight up, it would return to the bottom along the same line however far the ship moved, provided it was not subject to any pitch and roll." [ 68 ]
Bruno's infinite universe was filled with a substance—a "pure air", aether , or spiritus —that offered no resistance to the heavenly bodies which, in Bruno's view, rather than being fixed, moved under their own impetus (momentum). Most dramatically, he completely abandoned the idea of a hierarchical universe.
The universe is then one, infinite, immobile... It is not capable of comprehension and therefore is endless and limitless, and to that extent infinite and indeterminable, and consequently immobile. [ 69 ]
Bruno's cosmology distinguishes between "suns" which produce their own light and heat, and have other bodies moving around them; and "earths" which move around suns and receive light and heat from them. [ 70 ] Bruno suggested that some, if not all, of the objects classically known as fixed stars are in fact suns. [ 70 ] According to astrophysicist Steven Soter , he was the first person to grasp that "stars are other suns with their own planets." [ 71 ]
Bruno wrote that other worlds "have no less virtue nor a nature different from that of our Earth" and, like Earth, "contain animals and inhabitants". [ 72 ]
During the late 16th century, and throughout the 17th century, Bruno's ideas were held up for ridicule, debate, or inspiration. Margaret Cavendish , for example, wrote an entire series of poems against "atoms" and "infinite worlds" in Poems and Fancies in 1664. Bruno's true, if partial, vindication would have to wait for the implications and impact of Newtonian cosmology.
Bruno's overall contribution to the birth of modern science is still controversial. Some scholars follow Frances Yates in stressing the importance of Bruno's ideas about the universe being infinite and lacking geocentric structure as a crucial crossing point between the old and the new. Others see in Bruno's idea of multiple worlds instantiating the infinite possibilities of a pristine, indivisible One, [ 73 ] a forerunner of Everett 's many-worlds interpretation of quantum mechanics. [ 74 ]
While many academics note Bruno's theological position as pantheism , several have described it as pandeism , and some also as panentheism . [ 75 ] [ 76 ] Physicist and philosopher Max Bernhard Weinstein in his Welt- und Lebensanschauungen, Hervorgegangen aus Religion, Philosophie und Naturerkenntnis ("World and Life Views, Emerging From Religion, Philosophy and Nature"), wrote that the theological model of pandeism was strongly expressed in the teachings of Bruno, especially with respect to the vision of a deity for which "the concept of God is not separated from that of the universe." [ 77 ] However, Otto Kern takes exception to what he considers Weinstein's overbroad assertions that Bruno, as well as other historical philosophers such as John Scotus Eriugena , Nicholas of Cusa , Mendelssohn , and Lessing , were pandeists or leaned towards pandeism. [ 78 ] Discover editor Corey S. Powell also described Bruno's cosmology as pandeistic, writing that it was "a tool for advancing an animist or Pandeist theology", [ 79 ] and this assessment of Bruno as a pandeist was agreed with by science writer Michael Newton Keas, [ 80 ] and The Daily Beast writer David Sessions. [ 81 ]
The Vatican has published few official statements about Bruno's trial and execution. In 1942, Cardinal Giovanni Mercati , who discovered a number of lost documents relating to Bruno's trial, stated that the Church was perfectly justified in condemning him. [ citation needed ] On the 400th anniversary of Bruno's death, in 2000, Cardinal Angelo Sodano declared Bruno's death to be a "sad episode" but, despite his regret, he defended Bruno's prosecutors, maintaining that the Inquisitors "had the desire to serve freedom and promote the common good and did everything possible to save his life". [ 82 ] In the same year, Pope John Paul II made a general apology for "the use of violence that some have committed in the service of truth". [ 83 ]
Some authors have characterized Bruno as a "martyr of science", suggesting parallels with the Galileo affair which began around 1610. [ 84 ] "It should not be supposed," writes A. M. Paterson of Bruno and his "heliocentric solar system", that he "reached his conclusions via some mystical revelation ... His work is an essential part of the scientific and philosophical developments that he initiated." [ 85 ] Paterson echoes Hegel in writing that Bruno "ushers in a modern theory of knowledge that understands all natural things in the universe to be known by the human mind through the mind's dialectical structure". [ 86 ]
Ingegno writes that Bruno embraced the philosophy of Lucretius , "aimed at liberating man from the fear of death and the gods." [ 87 ] Characters in Bruno's Cause, Principle and Unity desire "to improve speculative science and knowledge of natural things," and to achieve a philosophy "which brings about the perfection of the human intellect most easily and eminently, and most closely corresponds to the truth of nature." [ 88 ]
Other scholars oppose such views, and claim Bruno's martyrdom to science to be exaggerated, or outright false. For Yates, while "nineteenth century liberals" were thrown "into ecstasies" over Bruno's Copernicanism, "Bruno pushes Copernicus' scientific work back into a prescientific stage, back into Hermeticism, interpreting the Copernican diagram as a hieroglyph of divine mysteries." [ 89 ]
According to historian Mordechai Feingold, "Both admirers and critics of Giordano Bruno basically agree that he was pompous and arrogant, highly valuing his opinions and showing little patience with anyone who even mildly disagreed with him." Discussing Bruno's experience of rejection when he visited Oxford University, Feingold suggests that "it might have been Bruno's manner, his language and his self-assertiveness, rather than his ideas" that caused offence. [ 90 ]
In his Lectures on the History of Philosophy , Hegel writes that Bruno's life represented "a bold rejection of all Catholic beliefs resting on mere authority." [ 91 ]
Alfonso Ingegno states that Bruno's philosophy "challenges the developments of the Reformation, calls into question the truth-value of the whole of Christianity, and claims that Christ perpetrated a deceit on mankind ... Bruno suggests that we can now recognize the universal law which controls the perpetual becoming of all things in an infinite universe." [ 92 ] A. M. Paterson says that, while we no longer have a copy of the official papal condemnation of Bruno, his heresies included "the doctrine of the infinite universe and the innumerable worlds" and his beliefs "on the movement of the earth". [ 93 ]
Michael White notes that the Inquisition may have pursued Bruno early in his life on the basis of his opposition to Aristotle , interest in Arianism , reading of Erasmus , and possession of banned texts. [ 94 ] White considers that Bruno's later heresy was "multifaceted" and may have rested on his conception of infinite worlds. "This was perhaps the most dangerous notion of all ... If other worlds existed with intelligent beings living there, did they too have their visitations? The idea was quite unthinkable." [ 94 ]
Frances Yates rejects what she describes as the "legend that Bruno was prosecuted as a philosophical thinker, was burned for his daring views on innumerable worlds or on the movement of the earth." Yates however writes that "the Church was ... perfectly within its rights if it included philosophical points in its condemnation of Bruno's heresies" because "the philosophical points were quite inseparable from the heresies." [ 95 ]
According to the Stanford Encyclopedia of Philosophy , "in 1600 there was no official Catholic position on the Copernican system, and it was certainly not a heresy. When [...] Bruno [...] was burned at the stake as a heretic, it had nothing to do with his writings in support of Copernican cosmology." [ 96 ]
The website of the Vatican Apostolic Archive , discussing a summary of legal proceedings against Bruno in Rome, states:
In the same rooms where Giordano Bruno was questioned, for the same important reasons of the relationship between science and faith, at the dawning of the new astronomy and at the decline of Aristotle's philosophy, sixteen years later, Cardinal Bellarmino , who then contested Bruno's heretical theses, summoned Galileo Galilei, who also faced a famous inquisitorial trial, which, luckily for him, ended with a simple abjuration. [ 97 ]
Galileo ultimately recanted his views and agreed to house arrest, and so was spared from being burned at the stake, while Bruno held his positions until death. [ 98 ] The concepts of exoplanets , the idea that the universe extends infinitely , and the notion that the Solar System holds no cosmic importance or central place, would later become major concepts in the field of cosmology , as well as philosophy. [ 99 ] [ 100 ]
Following the 1870 Capture of Rome by the newly created Kingdom of Italy and the end of the Church's temporal power over the city, the erection of a monument to Bruno on the site of his execution became feasible. The monument was sharply opposed by the clerical party, but was finally erected by the Rome Municipality and inaugurated in 1889. [ 102 ]
A statue of a stretched human figure standing on its head, designed by Alexander Polzin and depicting Bruno's death at the stake, was placed in Potsdamer Platz station in Berlin on 2 March 2008. [ 103 ] [ 104 ]
Retrospective iconography of Bruno shows him with a Dominican cowl but not tonsured . Edward Gosselin has suggested that it is likely Bruno kept his tonsure at least until 1579, and it is possible that he wore it again thereafter. [ 105 ]
An idealized animated version of Bruno appears in the first episode of the 2014 television series Cosmos: A Spacetime Odyssey . In this depiction, Bruno is shown with a more modern look, without tonsure, wearing clerical robes, and without his hood. Cosmos presents Bruno as an impoverished philosopher who was ultimately executed due to his refusal to recant his belief in other worlds, a portrayal that was criticized by some as simplistic or historically inaccurate. [ 106 ] [ 107 ] [ 108 ] Corey S. Powell, of Discover magazine, says of Bruno, "A major reason he moved around so much is that he was argumentative, sarcastic, and drawn to controversy ... He was a brilliant, complicated, difficult man." [ 106 ]
Bruno and his theory of "the coincidence of contraries" ( coincidentia oppositorum ) play an important role in James Joyce 's 1939 novel Finnegans Wake . Joyce wrote in a letter to his patroness, Harriet Shaw Weaver , "His philosophy is a kind of dualism – every power in nature must evolve an opposite in order to realise itself and opposition brings reunion". [ 113 ] Amongst his numerous allusions to Bruno in his novel, including his trial and torture, Joyce plays upon Bruno's notion of coincidentia oppositorum through applying his name to word puns such as "Browne and Nolan" (the name of Dublin printers) and "brownesberrow in nolandsland". [ 114 ]
In 1973 the biographical drama Giordano Bruno was released, an Italian/French movie directed by Giuliano Montaldo , starring Gian Maria Volonté as Bruno. [ 115 ]
Bruno is a major character in the four-novel Aegypt sequence (1987–2007) by John Crowley . Historical episodes from Bruno's life are fictionalized in the novels, and his philosophical ideas are key to the novels’ themes. [ 116 ]
The Last Confession (2000) by Morris West is an unfinished, posthumously published fictional autobiography of Bruno, ostensibly written shortly before Bruno's execution. [ 117 ]
In the 2008 novel Children of God by Mary Doria Russell , several characters travel on an interstellar spaceship named Giordano Bruno . [ 118 ]
Bruno features as the hero of the Giordano Bruno series (2010–2023) of historical crime novels by S. J. Parris (a pseudonym of Stephanie Merritt ). [ 119 ]
Hans Werner Henze set his large-scale cantata for orchestra, choir and four soloists, Novae de infinito laudes , to Italian texts by Bruno; it was recorded in 1972 at the Salzburg Festival and reissued on CD (Orfeo C609 031B). [ 120 ]
The Italian composer Francesco Filidei wrote an opera, based on a libretto by Stefano Busellato, titled Giordano Bruno . The premiere took place on 12 September 2015 at the Casa da Música in Porto, Portugal. [ 121 ] [ 122 ] [ 123 ] [ 124 ]
The 2016 song "Roman Sky" by heavy metal band Avenged Sevenfold focuses on the death of Bruno. [ 125 ]
Bruno is the central character in Roger Doyle ’s Heresy – an electronic opera (2017). [ 126 ]
A season 17 episode of the American TV series Pawn Stars features Chumlee tracking down a first-edition copy of a book by Bruno in Rome. [ 127 ]
The Doctor Who serial The Ribos Operation (1978) features a character named "Binro the Heretic", who was ostracized by his people for claiming the stars were not ice crystals, but other suns. The BBC has explicitly drawn a connection between the two. [ 128 ]
The Giordano Bruno Foundation (German: Giordano-Bruno-Stiftung) is a non-profit foundation based in Germany that pursues the "Support of Evolutionary Humanism ". It was founded by entrepreneur Herbert Steffen in 2004. The Giordano Bruno Foundation is critical of religious fundamentalism and nationalism. [ 129 ]
The SETI League makes an annual award honoring the memory of Giordano Bruno to a deserving person or persons who have made a significant contribution to the practice of SETI (the search for extraterrestrial intelligence). The award was proposed by sociologist Donald Tarter in 1995 on the 395th anniversary of Bruno's death. The trophy presented is called a Bruno. [ 130 ]
The 22 km impact crater Giordano Bruno on the far side of the Moon is named in his honor, as are the main belt Asteroids 5148 Giordano and 13223 Cenaceneri ; the latter is named after his philosophical dialogue La Cena de le Ceneri ("The Ash Wednesday Supper") (see above). [ citation needed ] | https://en.wikipedia.org/wiki/Giordano_Bruno |
Giovanni is a Web interface that allows users to analyze NASA 's gridded data from various satellite and surface observations.
Giovanni lets researchers examine data on atmospheric chemistry, atmospheric temperature, water vapor and clouds , atmospheric aerosols , precipitation , and ocean chlorophyll and surface temperature . The primary data consist of global gridded data sets with reduced spatial resolution. Basic analytical functions performed by Giovanni are carried out by the Grid Analysis and Display System ( GrADS ).
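For readers who want to reproduce this style of analysis offline, a rough sketch follows; it uses the xarray library rather than GrADS, and the filename, variable name, and coordinate names are hypothetical placeholders (actual NASA product layouts differ):

```python
import xarray as xr

# Hypothetical gridded monthly product; the names below are assumptions.
ds = xr.open_dataset("aerosol_optical_depth_monthly.nc")

# Subset a latitude/longitude box and average over space, leaving a
# time series: the kind of area-averaged analysis Giovanni exposes
# through its web interface.
box = ds["aod"].sel(lat=slice(-10, 10), lon=slice(70, 100))
series = box.mean(dim=["lat", "lon"])
series.plot()  # plotting requires matplotlib
```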
Giovanni is an acronym for GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure.
It allows access to data from multiple remote sites, supports multiple data formats including Hierarchical Data Format (HDF), HDF-EOS , network Common Data Form ( netCDF ), GRIdded Binary ( GRIB ), and binary, and multiple plot types including area, time, Hovmoller, and image animation. | https://en.wikipedia.org/wiki/Giovanni_(meteorology) |
Giovanni Girolamo Saccheri ( Italian pronunciation: [dʒoˈvanni dʒiˈrɔːlamo sakˈkɛːri] ; 5 September 1667 – 25 October 1733) was an Italian Jesuit priest, scholastic philosopher , and mathematician . He is considered the forerunner of non-Euclidean geometry . [ 2 ] [ 3 ]
The son of a lawyer , Saccheri was born in Sanremo , Genoa (now Italy) on September 5, 1667. [ 4 ] From his youth he showed extreme precociousness and a spirit of inquiry. [ 2 ] He entered the Jesuit novitiate in 1685. He studied philosophy and theology at the Jesuit College of Brera in Milan. [ 5 ]
His mathematics teacher at the Brera college was Tommaso Ceva , who introduced him to his brother Giovanni . [ 4 ] Ceva convinced Saccheri to devote himself to mathematical research and became the young man's mentor . Saccheri was in close scientific communion with both brothers. He used Ceva's ingenious methods in his first published work (1693), which gave solutions to six geometric problems proposed by the Sicilian mathematician Ruggero Ventimiglia (1670–1698). [ 6 ]
Saccheri was ordained as a priest in March 1694. He taught philosophy at the University of Turin from 1694 to 1697 and philosophy, theology and mathematics at the University of Pavia from 1697 until his death. [ 3 ] He published several works including Quaesita geometrica (1693), Logica demonstrativa (1697), and Neo-statica (1708). Saccheri died in Milan on 25 October 1733. [ 4 ]
The Logica demonstrativa , reissued in Turin in 1701 and in Cologne in 1735, gives Saccheri the right to an eminent place in the history of modern logic. [ 7 ] According to Thomas Heath “ Mill ’s account of the true distinction between real and nominal definitions was fully anticipated by Saccheri.” [ 8 ]
Saccheri is primarily known today for his last publication, in 1733 shortly before his death. Now considered an early exploration of non-Euclidean geometry , Euclides ab omni naevo vindicatus ( Euclid Freed of Every Flaw ) languished in obscurity until it was rediscovered by Eugenio Beltrami , in the mid-19th century. [ 9 ]
The intent of Saccheri's work was ostensibly to establish the validity of Euclid by means of a reductio ad absurdum proof of any alternative to Euclid 's parallel postulate . To do so, he assumed that the parallel postulate was false and attempted to derive a contradiction. [ 3 ]
Since Euclid's postulate is equivalent to the statement that the sum of the internal angles of a triangle is 180°, he considered both alternative hypotheses: that the angles add up to more than 180°, and that they add up to less than 180°.
The first led to the conclusion that straight lines are finite, contradicting Euclid's second postulate. So Saccheri correctly rejected it. However, the principle is now accepted as the basis of elliptic geometry , where both the second and fifth postulates are rejected.
The second possibility turned out to be harder to refute. In fact he was unable to derive a logical contradiction and instead derived many non-intuitive results; for example that triangles have a maximum finite area and that there is an absolute unit of length. He finally concluded that: "the hypothesis of the acute angle is absolutely false; because it is repugnant to the nature of straight lines". Today, his results are theorems of hyperbolic geometry . [ 10 ]
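A quick numerical illustration of the "obtuse" hypothesis that Saccheri rejected: on a sphere, the model of elliptic geometry, the angle sum of a triangle exceeds 180° by the triangle's area divided by the squared radius (Girard's spherical-excess theorem). The sketch below, a modern illustration rather than anything from Saccheri's text, checks this for the octant triangle on the unit sphere:

```python
import math

# Octant triangle on the unit sphere: vertices at (1,0,0), (0,1,0), (0,0,1).
# Each of its three angles is a right angle, so the angle sum is 270 degrees.
angle_sum = 3 * (math.pi / 2)
print(math.degrees(angle_sum))  # 270.0, i.e. more than 180

# Girard's theorem: area = spherical excess * R^2 (here R = 1).
area = angle_sum - math.pi
print(math.isclose(area, 4 * math.pi / 8))  # True: one eighth of the sphere
```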
There is some debate over whether Saccheri really meant that conclusion: he published the work in the final year of his life, he came extremely close to discovering non-Euclidean geometry, and he was a trained logician. Some believe Saccheri concluded as he did only to avoid the criticism that might come from seemingly-illogical aspects of hyperbolic geometry.
One tool that Saccheri developed in his work (now called a Saccheri quadrilateral ) has a precedent in the 11th-century Persian polymath Omar Khayyám 's Discussion of Difficulties in Euclid ( Risâla fî sharh mâ ashkala min musâdarât Kitâb 'Uglîdis ). Khayyam, however, made no significant use of the quadrilateral, whereas Saccheri explored its consequences deeply. [ 11 ] | https://en.wikipedia.org/wiki/Giovanni_Girolamo_Saccheri |
In mathematical logic , System U and System U − are pure type systems , i.e. special forms of a typed lambda calculus with an arbitrary number of sorts , axioms and rules (or dependencies between the sorts).
System U was proved inconsistent by Jean-Yves Girard in 1972 [ 1 ] (and the question of consistency of System U − was formulated). This result led to the realization that Martin-Löf 's original 1971 type theory was inconsistent, as it allowed the same "Type in Type" behaviour that Girard's paradox exploits.
System U is defined [ 2 ] : 352 as a pure type system with three sorts $\{\ast, \square, \triangle\}$; two axioms $\{\ast : \square,\ \square : \triangle\}$; and five rules $\{(\ast, \ast),\ (\square, \ast),\ (\square, \square),\ (\triangle, \ast),\ (\triangle, \square)\}$.
System U$^{-}$ is defined the same with the exception of the $(\triangle, \ast)$ rule.
The sorts $\ast$ and $\square$ are conventionally called "Type" and " Kind ", respectively; the sort $\triangle$ doesn't have a specific name. The two axioms describe the containment of types in kinds ($\ast : \square$) and kinds in $\triangle$ ($\square : \triangle$). Intuitively, the sorts describe a hierarchy in the nature of the terms.
The rules govern the dependencies between the sorts: $(\ast, \ast)$ says that values may depend on values ( functions ), $(\square, \ast)$ allows values to depend on types ( polymorphism ), $(\square, \square)$ allows types to depend on types ( type operators ), and so on.
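To make the $(\square, \ast)$ rule concrete, here is a standard pure-type-system derivation step (textbook material, not a derivation taken from the cited source). Since $\ast : \square$ and $\alpha : \ast \vdash \alpha \to \alpha : \ast$, the product rule with $(s_1, s_2) = (\square, \ast)$ forms the polymorphic type of the identity function:

$$\frac{\vdash \ast : \square \qquad \alpha : \ast \vdash \alpha \to \alpha : \ast \qquad (\square, \ast) \in R}{\vdash \Pi\alpha : \ast.\ \alpha \to \alpha \;:\; \ast}$$

so that $\lambda\alpha : \ast.\ \lambda x : \alpha.\ x$ inhabits $\Pi\alpha : \ast.\ \alpha \to \alpha$, which is ordinary System F polymorphism; the $(\triangle, \square)$ and $(\triangle, \ast)$ rules repeat the same pattern one level up, quantifying over kinds.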
The definitions of System U and U$^{-}$ allow the assignment of polymorphic kinds to generic constructors in analogy to polymorphic types of terms in classical polymorphic lambda calculi, such as System F . An example of such a generic constructor might be [ 2 ] : 353 (where $k$ denotes a kind variable)

$$\lambda k : \square.\ \lambda\alpha : k \to k.\ \lambda\beta : k.\ \alpha(\alpha\beta) \;:\; \Pi k : \square.\ (k \to k) \to k \to k.$$
This mechanism is sufficient to construct a term with the type $(\forall p : \ast.\ p)$ (equivalent to the type $\bot$), which implies that every type is inhabited . By the Curry–Howard correspondence , this is equivalent to all logical propositions being provable, which makes the system inconsistent.
Girard's paradox is the type-theoretic analogue of Russell's paradox in set theory . | https://en.wikipedia.org/wiki/Girard's_paradox |
A girder ( / ˈ ɡ ɜːr d ər / ) is a beam used in construction . [ 1 ] It is the main horizontal support of a structure which supports smaller beams. Girders often have an I-beam cross section composed of two load-bearing flanges separated by a stabilizing web , but may also have a box shape, Z shape, or other forms. Girders are commonly used to build bridges.
A girt is a vertically aligned girder placed to resist shear loads.
Small steel girders are rolled into shape. Larger girders (1 m/3 feet deep or more) are made as plate girders , welded or bolted together from separate pieces of steel plate . [ 2 ]
The Warren type girder replaces the solid web with an open latticework truss between the flanges. This arrangement combines strength with economy of materials, minimizing weight and thereby reducing loads and expense. Patented in 1848 by its designers James Warren and Willoughby Theobald Monzani, its structure consists of longitudinal members joined only by angled cross-members, forming alternately inverted equilateral triangle -shaped spaces along its length, ensuring that no individual strut , beam, or tie is subject to bending or torsional straining forces, but only to tension or compression . It is an improvement [ citation needed ] over the Neville truss, which uses a spacing configuration of isosceles triangles .
| https://en.wikipedia.org/wiki/Girder
The Girdler sulfide ( GS ) process , also known as the Geib–Spevack ( GS ) process , [ 1 ] is an industrial production method for extracting heavy water (deuterium oxide, D2O) from natural water. Heavy water is used in particle research, in deuterium NMR spectroscopy, in deuterated solvents for proton NMR spectroscopy, in heavy water nuclear reactors (as a coolant and moderator ), and in deuterated drugs .
In 1943, Karl-Hermann Geib and Jerome S. Spevack independently invented the process. [ 2 ] The process is named after the Girdler Company, which constructed the first American plant to implement it.
The method is an isotopic exchange process in which isotopes of hydrogen are swapped between hydrogen sulfide (H2S) and water (H2O, also known as "light" water), producing heavy water over several steps. This process is highly energy-intensive. [ 3 ]
Until its closure in 1997, the Bruce Heavy Water Plant in Ontario (located on the same site as Douglas Point and the Bruce Nuclear Generating Station ) was the world's largest heavy water production plant, with a peak capacity of 1,600 tonnes per year (800 tonnes per year per full plant, with two fully operational plants at its peak). It used the Girdler sulfide process to produce heavy water, and required, by mass, 340,000 units of feed water to produce 1 unit of heavy water. [ 4 ]
The first such facility of India's Heavy Water Board to use the Girdler process is at Rawatbhata near Kota, Rajasthan. This was followed by a larger plant at Manuguru, Andhra Pradesh. Other plants exist in the United States and Romania for example. [ 5 ] Romania, India and the former supplier of much of the world's heavy water demand, Canada, all have operating heavy water reactors with two at Cernavoda Nuclear Power Plant in Romania making up the country's entire fleet and several each in India (mostly IPHWR ) and Canada (exclusively CANDU ).
Each of a number of steps consists of two sieve tray columns. One column is maintained at 30 °C (86 °F) and is called the 'cold tower'; the other, at 130 °C (266 °F), is called the 'hot tower'. The enrichment process is based on the difference in the equilibrium constant between 30 °C and 130 °C.
The process of interest is the equilibrium reaction

$$\mathrm{H_2O + HDS \;\rightleftharpoons\; HDO + H_2S}.$$

At 30 °C, the equilibrium constant K = 2.33, while at 130 °C, K = 1.82. This difference is exploited for enriching deuterium in heavy water. [ 6 ]
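A back-of-the-envelope sketch of what these two constants imply (the natural-abundance figure is an assumed round value, and the ideal-cascade model below ignores flow rates, reflux and losses, so the numbers are purely illustrative):

```python
import math

K_cold, K_hot = 2.33, 1.82

# Idealized single-stage separation factor of the dual-temperature
# scheme: the ratio of the cold- and hot-tower equilibrium constants.
alpha = K_cold / K_hot
print(f"stage separation factor: {alpha:.3f}")  # about 1.28

# Ideal stages needed to lift deuterium from natural abundance
# (assumed ~0.0156 mol %) to the ~15 % reached before distillation.
x_feed, x_product = 0.000156, 0.15
stages = math.log(x_product / x_feed) / math.log(alpha)
print(f"ideal stages: {stages:.0f}")  # roughly 28 in this crude model
```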
Hydrogen sulfide gas is circulated in a closed loop between the cold tower and the hot tower (although these can be separate towers, they can also be separate sections of one tower, with the cold section at the top). Demineralised and deaerated water is fed to the cold tower, where deuterium migration preferentially takes place from the hydrogen sulfide gas to the liquid water. Normal water is fed to the hot tower, where deuterium transfer takes place from the liquid water to the hydrogen sulfide gas. In cascade systems, the same water is used for both inputs.

The mechanism for this is the difference in the equilibrium constant: in the cold tower, the deuterium concentration in the hydrogen sulfide is lowered and the concentration in the water raised, while the deuterium in the hot loop slightly prefers to be in the hydrogen sulfide, resulting in excess deuterium in the hydrogen sulfide relative to the cold tower. For n moles of deuterium per mole of protium in the hot tower input water, there are n / 1.82 moles per mole of protium in the hydrogen sulfide. In the cold tower, part of this deuterium is transferred to the cold tower input water, in accordance with the equilibrium constant. At the input to the cold tower, the ratio of products to reactants in the above equation is 1.82, since both input streams have equal concentrations of deuterium; the chemical equilibrium forces more deuterium into the water to correct the ratio.

Ideally, for equal amounts of water and hydrogen sulfide, the cold tower should output water with 12% more deuterium than it entered. Enriched water is output from the cold tower, while depleted water is output from the hot tower.
An appropriate cascade system accomplishes enrichment: enriched water is fed into another separation unit and is further enriched.
Normally in this process, water is enriched to 15–20% D 2 O. Further enrichment to "reactor-grade" heavy water (> 99% D 2 O) is done in another process, e.g. distillation . [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Girdler_sulfide_process |
Girls Gotta Eat is a podcast started in 2018 and hosted by Rayna Greenberg and Ashley Hesseltine . [ 1 ] [ 2 ] [ 3 ]
Much of the podcast is centered around handing out dating advice to the show's listeners. [ 4 ]
The podcast is known for its live shows across the United States and its "Is This Weird?" segment. [ 4 ]
Girls Gotta Eat has gotten in excess of 100 million downloads since its 2018 debut. [ 5 ]
In June 2022, Greenberg and Hesseltine began Vibes Only, a sex toy company with its own mobile app. [ 5 ]
Elite Daily described the show thusly: "The two regularly share advice and hear from listeners about everything from f*ckboys to masturbation habits, and their no-holds-barred vibe has gained them a dedicated following of 'snackheads'" [ 5 ] (i.e. fans). [ 6 ]
Hannah Orenstein of Elite Daily said "Equal parts raunchy and insightful, Girls Gotta Eat feels extra special because it acknowledges the most important love of all is the kind you have for yourself — meaningful romance and sexual escapades are the cherries on top." [ 7 ]
| https://en.wikipedia.org/wiki/Girls_Gotta_Eat
In probability theory , Girsanov's theorem or the Cameron-Martin-Girsanov theorem explains how stochastic processes change under changes in measure . The theorem is especially important in the theory of financial mathematics as it explains how to convert from the physical measure , which describes the probability that an underlying instrument (such as a share price or interest rate ) will take a particular value or values, to the risk-neutral measure which is a very useful tool for evaluating the value of derivatives on the underlying.
Results of this type were first proved by Cameron and Martin in the 1940s and by Igor Girsanov in 1960. They have subsequently been extended to more general classes of processes, culminating in the general form of Lenglart (1977).
Girsanov's theorem is important in the general theory of stochastic processes since it enables the key result that if Q is a measure that is absolutely continuous with respect to P then every P -semimartingale is a Q -semimartingale.
We state the theorem first for the special case when the underlying stochastic process is a Wiener process . This special case is sufficient for risk-neutral pricing in the Black–Scholes model .
Let $\{W_t\}$ be a Wiener process on the Wiener probability space $\{\Omega, \mathcal{F}, P\}$. Let $X_t$ be a measurable process adapted to the natural filtration of the Wiener process $\{\mathcal{F}_t^W\}$; we assume that the usual conditions have been satisfied.
Given an adapted process $X_t$, define

$$Z_t = \mathcal{E}(X)_t,$$

where $\mathcal{E}(X)$ is the stochastic exponential of $X$ with respect to $W$, i.e.

$$\mathcal{E}(X)_t = \exp\left(X_t - \tfrac{1}{2}[X]_t\right),$$

and $[X]_t$ denotes the quadratic variation of the process $X$.
If $Z_t$ is a martingale then a probability measure $Q$ can be defined on $\{\Omega, \mathcal{F}\}$ such that the Radon–Nikodym derivative satisfies

$$\left.\frac{dQ}{dP}\right|_{\mathcal{F}_t} = Z_t = \mathcal{E}(X)_t.$$

Then for each $t$ the measure $Q$ restricted to the unaugmented sigma fields $\mathcal{F}_t^o$ is equivalent to $P$ restricted to $\mathcal{F}_t^o$.
Furthermore, if $Y_t$ is a local martingale under $P$ then the process

$$\tilde{Y}_t = Y_t - [Y, X]_t$$

is a $Q$ local martingale on the filtered probability space $\{\Omega, \mathcal{F}, Q, \{\mathcal{F}_t^W\}\}$.
If $X$ is a continuous process and $W$ is a Brownian motion under measure $P$ then

$$\tilde{W}_t = W_t - [W, X]_t$$

is a Brownian motion under $Q$. The fact that $\tilde{W}_t$ is continuous is trivial; by Girsanov's theorem it is a $Q$ local martingale, and by computing its quadratic variation

$$[\tilde{W}]_t = [W]_t = t,$$

it follows by Lévy's characterization of Brownian motion that this is a $Q$ Brownian motion.
In many common applications, the process $X$ is defined by

$$X_t = \int_0^t Y_s\, dW_s.$$

For $X$ of this form, a sufficient condition for $\mathcal{E}(X)$ to be a martingale is Novikov's condition, which requires that

$$E\left[\exp\left(\tfrac{1}{2}\int_0^T Y_s^2\, ds\right)\right] < \infty.$$
The stochastic exponential $\mathcal{E}(X)$ is the process $Z$ which solves the stochastic differential equation

$$dZ_t = Z_t\, dX_t, \qquad Z_0 = 1.$$
The measure $Q$ constructed above is not equivalent to $P$ on $\mathcal{F}_\infty$, as this would only be the case if the Radon–Nikodym derivative were a uniformly integrable martingale, which the exponential martingale described above is not. On the other hand, as long as Novikov's condition is satisfied the measures are equivalent on $\mathcal{F}_T$.
Additionally, combining the above observations, in this case the process

$$\tilde{W}_t = W_t - \int_0^t Y_s\, ds$$

for $t \in [0, T]$ is a $Q$ Brownian motion. This was Igor Girsanov's original formulation of the above theorem.
This theorem can be used to show that in the Black–Scholes model the unique risk-neutral measure (i.e. the measure in which the fair value of a derivative is the discounted expected value), $Q$, is specified by

$$\frac{dQ}{dP} = \mathcal{E}\!\left(-\int_0^{\cdot} \frac{\mu - r}{\sigma}\, dW_s\right)_T,$$

where $\mu$ is the drift of the underlying, $r$ the risk-free rate, and $\sigma$ the volatility, so that $(\mu - r)/\sigma$ is the market price of risk.
Another application of this theorem, also given in the original paper of Igor Girsanov, is for stochastic differential equations . Specifically, let us consider the equation

$$dX_t = \mu(t, X_t)\, dt + dW_t,$$

where $W_t$ denotes a Brownian motion. Here $\mu$ is a fixed deterministic function. We assume that this equation has a unique strong solution on $[0, T]$. In this case Girsanov's theorem may be used to compute functionals of $X_t$ directly in terms of a related functional for Brownian motion. More specifically, we have for any bounded functional $\Phi$ on continuous functions $C([0, T])$ that
$$E \, \Phi(X) = E\left[\Phi(W) \exp\left(\int_0^T \mu(s, W_s) \, dW_s - \frac{1}{2}\int_0^T \mu(s, W_s)^2 \, ds\right)\right].$$
This follows by applying Girsanov's theorem, and the above observation, to the martingale process
$$Y_t = \int_0^t \mu(s, W_s) \, dW_s.$$
In particular, with the notation above, the process
$$\tilde{W}_t = W_t - \int_0^t \mu(s, W_s) \, ds$$
is a Q Brownian motion. Rewriting this in differential form as
$$dW_t = d\tilde{W}_t + \mu(t, W_t) \, dt,$$
we see that the law of $W_t$ under $Q$ solves the equation defining $X_t$, as $\tilde{W}_t$ is a $Q$ Brownian motion. In particular, the right-hand side may be written as $E_Q[\Phi(W)]$, where $Q$ is the measure defined with respect to the process $Y$, so the result is just the statement of Girsanov's theorem.
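The identity above can be checked numerically; the sketch below uses the illustrative choices $\mu(s, x) = \sin x$ and $\Phi(X) = \tanh(X_T)$ (assumptions made for the demonstration, not taken from the original paper) and compares a direct Euler–Maruyama simulation of $X$ with reweighted plain Brownian paths.

```python
# Monte Carlo check of E[Phi(X)] = E[Phi(W) exp(int mu dW - 1/2 int mu^2 ds)]
# for the illustrative choices mu(s, x) = sin(x) and Phi(X) = tanh(X_T).
import numpy as np

rng = np.random.default_rng(1)
T, n, paths = 1.0, 500, 200_000
dt = T / n
mu = np.sin

X = np.zeros(paths)           # Euler-Maruyama solution of dX = mu(X) dt + dW
W = np.zeros(paths)           # plain Brownian motion
stoch_int = np.zeros(paths)   # Ito integral of mu(s, W_s) dW_s (left endpoint)
time_int = np.zeros(paths)    # time integral of mu(s, W_s)^2 ds

for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(paths)
    X += mu(X) * dt + dW
    m = mu(W)                 # evaluate before updating W (Ito convention)
    stoch_int += m * dW
    time_int += m ** 2 * dt
    W += dW

lhs = np.tanh(X).mean()                                         # E[Phi(X)]
rhs = (np.tanh(W) * np.exp(stoch_int - 0.5 * time_int)).mean()  # weighted side
print(lhs, rhs)               # the two estimates agree up to Monte Carlo error
```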
A more general form of this application is that if both
$$dX_t = \mu(X_t, t) \, dt + \sigma(X_t, t) \, dW_t,$$
$$dY_t = \big(\mu(Y_t, t) + \nu(Y_t, t)\big) \, dt + \sigma(Y_t, t) \, dW_t,$$
admit unique strong solutions on $[0, T]$, then for any bounded functional $\Phi$ on $C([0, T])$, we have that
$$E \, \Phi(X) = E\left[\Phi(Y) \exp\left(-\int_0^T \frac{\nu(Y_s, s)}{\sigma(Y_s, s)} \, dW_s - \frac{1}{2}\int_0^T \frac{\nu(Y_s, s)^2}{\sigma(Y_s, s)^2} \, ds\right)\right].$$ | https://en.wikipedia.org/wiki/Girsanov_theorem
In architecture or structural engineering , a girt , also known as a sheeting rail , is a horizontal structural member in a framed wall . Girts provide lateral support to the wall panel, primarily to resist wind loads . [ citation needed ]
A comparable element in roof construction is a purlin .
The girt is commonly used as a stabilizing element to the primary structure (e.g. column, post). Wall cladding fastened to the girt, or a discrete bracing system which includes the girt, can provide shear resistance, in the plane of the wall, along the length of the primary member. Since the girts are normally fastened to, or near, the exterior flange of a column, stability braces may be installed at a girt to resist rotation of the unsupported, inner flange of the primary member. The girt system must be competent and adequately stiff to provide the required stabilizing resistance in addition to its role as a wall panel support. [ citation needed ]
Girts are stabilized by (sag) rods/angles/straps and by the wall cladding. Stabilizing rods are discrete brace members to prevent rotation of an unsupported flange of the girt. Sheet metal wall panels are usually considered to provide lateral bracing to the connected (typically exterior) flange along the length of the girt. Under restricted circumstances, [ 1 ] sheet metal wall panels are also capable of providing rotational restraint to the girt section. [ citation needed ]
In general: Girt supports panel, panel stabilizes girt; Column supports girt, girt stabilizes column. The building designer should be knowledgeable in the complexities of this interactive design condition to ensure competent design of the complete structure. [ citation needed ]
| https://en.wikipedia.org/wiki/Girt
GitHub ( /ˈɡɪthʌb/ ) is a proprietary developer platform that allows developers to create, store, manage, and share their code. It uses Git to provide distributed version control and GitHub itself provides access control , bug tracking , software feature requests, task management , continuous integration , and wikis for every project. [ 8 ] Headquartered in California , GitHub, Inc. has been a subsidiary of Microsoft since 2018. [ 9 ]
It is commonly used to host open source software development projects. [ 10 ] As of January 2023, GitHub reported having over 100 million developers [ 11 ] and more than 420 million repositories , [ 12 ] including at least 28 million public repositories. [ 13 ] It is the world's largest source code host as of June 2023. Over five billion developer contributions were made to more than 500 million open source projects in 2024. [ 14 ]
The development of the GitHub platform began on October 19, 2005. [ 15 ] [ 16 ] [ 17 ] The site was launched in April 2008 by Tom Preston-Werner , Chris Wanstrath , P. J. Hyett and Scott Chacon after it had been available for a few months as a beta release . [ 18 ] Its name was chosen as a compound of Git and hub . [ 19 ]
GitHub, Inc. was originally a flat organization with no middle managers, instead relying on self-management . [ 20 ] Employees could choose to work on projects that interested them ( open allocation ), but the chief executive set salaries. [ 21 ]
In 2014, the company added a layer of middle management in response to harassment allegations against its co-founder and then-CEO, Thomas Preston-Werner, and his wife, Theresa Preston-Werner. As a result of the scandal, Tom Preston-Werner resigned from his position as CEO. [ 22 ] Co-founder and Product lead, Chris Wanstrath , became CEO. Julio Avalos , then General Counsel and Administrative Officer, assumed control over GitHub's business operations and day-to-day management. [ 23 ]
GitHub was a bootstrapped start-up business , which in its first years provided enough revenue to be funded solely by its three founders and start taking on employees. [ 24 ]
In July 2012, four years after the company was founded, Andreessen Horowitz invested $100 million in venture capital [ 8 ] with a $750 million valuation. [ 25 ]
In July 2015 GitHub raised another $250 million (~$314 million in 2023) of venture capital in a series B round . The lead investor was Sequoia Capital , and other investors were Andreessen Horowitz , Thrive Capital , IVP (Institutional Venture Partners) and other venture capital funds. [ 26 ] [ 27 ] The company was then valued at approximately $2 billion. [ 28 ]
As of 2023, GitHub was estimated to generate $1 billion in revenue annually. [ 2 ]
The GitHub service was developed by Chris Wanstrath , P. J. Hyett , Tom Preston-Werner , and Scott Chacon using Ruby on Rails , and started in February 2008. The company, GitHub, Inc., was formed in 2007 and is located in San Francisco. [ 29 ]
On February 24, 2009, GitHub announced that within the first year of being online, GitHub had accumulated over 46,000 public repositories, 17,000 of which were formed in the previous month. At that time, about 6,200 repositories had been forked at least once, and 4,600 had been merged.
That same year, the site was used by over 100,000 users, according to GitHub, [ 30 ] and had grown to host 90,000 unique public repositories, 12,000 having been forked at least once, for a total of 135,000 repositories. [ 31 ]
In 2010, GitHub was hosting 1 million repositories. [ 32 ] A year later, this number doubled. [ 33 ] ReadWriteWeb reported that GitHub had surpassed SourceForge and Google Code in total number of commits for the period of January to May 2011. [ 34 ] On January 16, 2013, GitHub passed the 3 million users mark and was then hosting more than 5 million repositories. [ 35 ] By the end of the year, the number of repositories was twice as great, reaching 10 million repositories. [ 36 ]
In 2015, GitHub opened an office in Japan, its first outside of the U.S. [ 37 ]
On February 28, 2018, GitHub fell victim to the third-largest distributed denial-of-service (DDoS) attack in history, with incoming traffic reaching a peak of about 1.35 terabits per second. [ 38 ]
On June 19, 2018, GitHub expanded its GitHub Education program by offering free education bundles to all schools. [ 39 ] [ 40 ]
From 2012, Microsoft became a significant user of GitHub, using it to host open-source projects and development tools such as .NET Core , Chakra Core , MSBuild , PowerShell , PowerToys , Visual Studio Code , Windows Calculator , Windows Terminal and the bulk of its product documentation (now to be found on Microsoft Docs ). [ 42 ] [ 43 ]
On June 4, 2018, Microsoft announced its intent to acquire GitHub for US$7.5 billion (~$8.96 billion in 2023). The deal closed on October 26, 2018. [ 44 ] GitHub continued to operate independently as a community, platform and business. [ 45 ] Under Microsoft, the service was led by Xamarin 's Nat Friedman , reporting to Scott Guthrie , executive vice president of Microsoft Cloud and AI. Nat Friedman resigned November 3, 2021; he was replaced by Thomas Dohmke. [ 46 ]
Developers Kyle Simpson, a JavaScript trainer and author, and Rafael Laguna, CEO of Open-Xchange, expressed concerns over Microsoft's purchase, citing uneasiness over Microsoft's handling of previous acquisitions, such as Nokia's mobile business and Skype . [ 47 ] [ 48 ]
This acquisition was in line with Microsoft's business strategy under CEO Satya Nadella , which has seen a larger focus on cloud computing services, alongside the development of and contributions to open-source software. [ 9 ] [ 43 ] [ 49 ] Harvard Business Review argued that Microsoft was intending to acquire GitHub to get access to its user base, so it can be used as a loss leader to encourage the use of its other development products and services. [ 50 ]
Concerns over the sale bolstered interest in competitors: Bitbucket (owned by Atlassian ), GitLab and SourceForge (owned by Slashdot ) reported that they had seen spikes in new users intending to migrate projects from GitHub to their respective services. [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ]
In September 2019, GitHub acquired Semmle , a code analysis tool. [ 56 ] In February 2020, GitHub launched in India under the name GitHub India Private Limited. [ 57 ] In March 2020, GitHub announced that it was acquiring npm , a JavaScript packaging vendor, for an undisclosed sum of money. [ 58 ] The deal was closed on April 15, 2020. [ 59 ]
In early July 2020, the GitHub Archive Program was established to archive its open-source code in perpetuity. [ 60 ]
GitHub's mascot is an anthropomorphized "octocat" with five octopus-like arms . [ 61 ] [ 62 ] The character was created by graphic designer Simon Oxley as clip art to sell on iStock , [ 63 ] a website that enables designers to market royalty-free digital images . The illustration GitHub chose was a character that Oxley had named Octopuss. [ 63 ] Since GitHub wanted Octopuss for their logo (a use that the iStock license disallows), they negotiated with Oxley to buy exclusive rights to the image. [ 63 ]
GitHub renamed Octopuss to Octocat, [ 63 ] and trademarked the character along with the new name. [ 61 ] Later, GitHub hired illustrator Cameron McEfee to adapt Octocat for different purposes on the website and promotional materials; McEfee and various GitHub users have since created hundreds of variations of the character, which are available on The Octodex . [ 64 ] [ 65 ]
Projects on GitHub can be accessed and managed using the standard Git command-line interface; all standard Git commands work with it. GitHub also allows users to browse public repositories on the site. Multiple desktop clients and Git plugins are also available. In addition, the site provides social networking -like functions such as feeds, followers, wikis (using wiki software called Gollum ), and a social network graph to display how developers work on their versions (" forks ") of a repository and what fork (and branch within that fork) is newest.
Anyone can browse and download public repositories, but only registered users can contribute content to repositories. With a registered user account, users can have discussions, manage repositories, submit contributions to others' repositories, and review changes to code . GitHub began offering limited private repositories at no cost in January 2019 (limited to three contributors per project). Previously, only public repositories were free. [ 66 ] [ 67 ] [ 68 ] On April 14, 2020, GitHub made "all of the core GitHub features" free for everyone, including "private repositories with unlimited collaborators." [ 69 ]
The fundamental software that underpins GitHub is Git itself, written by Linus Torvalds , creator of Linux. The additional software that provides the GitHub user interface was written using Ruby on Rails and Erlang by GitHub, Inc. developers Wanstrath, [ 70 ] Hyett, and Preston-Werner.
The primary purpose of GitHub is to facilitate the version control and issue tracking aspects of software development. Labels, milestones, responsibility assignment, and a search engine are available for issue tracking. For version control, Git (and, by extension, GitHub) allows pull requests to propose changes to the source code. Users with the ability to review the proposed changes can see a diff of the requested changes and approve them. In Git terminology, this action is called "committing" and one instance of it is a "commit." A history of all commits is kept and can be viewed at a later time.
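As an illustration of this workflow, the following sketch drives the standard Git command-line tools from Python; the repository URL, branch name, and commit message are placeholders rather than anything prescribed by GitHub.

```python
# Sketch of a typical propose-changes flow using standard Git commands.
# Requires a local git installation; all names below are illustrative.
import subprocess

def git(*args, cwd=None):
    """Run a git command, raising an error if it fails."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

git("clone", "https://github.com/example/project.git")     # copy the repository
git("checkout", "-b", "my-feature", cwd="project")         # branch for the change
# ... edit files under project/ ...
git("add", ".", cwd="project")                             # stage the edits
git("commit", "-m", "Describe the change", cwd="project")  # record a commit
git("push", "origin", "my-feature", cwd="project")         # publish the branch
# A pull request proposing my-feature is then opened on GitHub for review.
```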
In addition, GitHub supports a range of further formats and features.
GitHub's Terms of Service do not require public software projects hosted on GitHub to meet the Open Source Definition . The terms of service state, "By setting your repositories to be viewed publicly, you agree to allow others to view and fork your repositories." [ 84 ]
GitHub Enterprise is a self-managed version of GitHub with similar functionality. It can be run on an organization's hardware or a cloud provider and has been available since November 2011. [ 85 ] In November 2020, source code for GitHub Enterprise Server was leaked online in an apparent protest against the DMCA takedown of youtube-dl . According to GitHub, the source code came from GitHub accidentally sharing the code with Enterprise customers themselves, not from an attack on GitHub servers. [ 86 ] [ 87 ]
In 2008, GitHub introduced GitHub Pages, a static web hosting service for blogs , project documentation, [ 88 ] [ 89 ] and books. [ 90 ] All GitHub Pages content is stored in a Git repository as files served to visitors verbatim or in Markdown format. GitHub Pages is integrated with the Jekyll static website and blog generator and with GitHub continuous integration pipelines. Each time the content source is updated, Jekyll regenerates the website and automatically serves it via GitHub Pages infrastructure. [ 91 ]
Like the rest of GitHub, it includes free and paid service tiers. Websites generated through this service are hosted either as subdomains of the github.io domain or can be connected to custom domains bought through a third-party domain name registrar . [ 92 ] GitHub Pages supports HTTPS encryption. [ 93 ] [ 94 ]
GitHub Actions was first announced in October 2018 at GitHub Universe as a way to automate workflows; the full general availability (GA) release followed on November 13, 2019. [ 95 ] GitHub Actions allows building continuous integration and continuous deployment (CI/CD) pipelines for testing, releasing and deploying software without the use of third-party websites/platforms. Unlike many other CI/CD tools, GitHub Actions launched with a marketplace where developers could share and reuse prebuilt actions (e.g., testing, linting, deployments); GitHub wanted to reduce reliance on third-party services and keep developers within the GitHub ecosystem. GitHub Actions provided hosted runners (Linux, Windows, macOS) that could dynamically scale, eliminating the need for self-managed build servers.
GitHub also operates a pastebin -style site called Gist , which is for code snippets , as opposed to GitHub proper, which is usually used for larger projects. [ 18 ] Tom Preston-Werner débuted the feature at a Ruby conference in 2008. [ 96 ]
Gist builds on the traditional simple concept of a pastebin by adding version control for code snippets, easy forking, and TLS encryption for private pastes. Because each "gist" is its own Git repository, multiple code snippets can be contained in a single page, and they can be pushed and pulled using Git. [ 97 ]
Unregistered users could upload Gists until March 19, 2018, when uploading Gists was restricted to logged-in users, reportedly to mitigate spamming on the page of recent Gists. [ 98 ]
Gists' URLs use hexadecimal IDs, and edits to Gists are recorded in a revision history , which can show the text difference of thirty revisions per page with an option between a "split" and "unified" view. Like repositories, Gists can be forked, "starred", i.e., publicly bookmarked, and commented on. The count of revisions, stars, and forks is indicated on the gist page. [ 99 ]
GitHub launched a new program called the GitHub Student Developer Pack to give students free access to more than a dozen popular development tools and services. GitHub partnered with Bitnami , Crowdflower , DigitalOcean , DNSimple, HackHands , Namecheap , Orchestrate, Screenhero, SendGrid , Stripe , Travis CI , and Unreal Engine to launch the program. [ 100 ]
In 2016, GitHub announced the launch of the GitHub Campus Experts program [ 101 ] to train and encourage students to grow technology communities at their universities. The Campus Experts program is open to university students 18 years and older worldwide. [ 102 ] GitHub Campus Experts are one of the primary ways that GitHub funds student-oriented events and communities. Campus Experts are given access to training, funding, and additional resources to run events and grow their communities. To become a Campus Expert, applicants must complete an online training course with multiple modules designed to develop community leadership skills.
GitHub also provides some software as a service (SaaS) integrations for adding extra features to projects.
GitHub Sponsors allows users to make monthly money donations to projects hosted on GitHub. [ 106 ] The public beta was announced on May 23, 2019, and the project accepts waitlist registrations. The Verge said that GitHub Sponsors "works exactly like Patreon " because "developers can offer various funding tiers that come with different perks, and they'll receive recurring payments from supporters who want to access them and encourage their work" except with "zero fees to use the program." Furthermore, GitHub offers incentives for early adopters during the first year: it pledges to cover payment processing costs and match sponsorship payments up to $5,000 per developer. Users can also still use similar services like Patreon and Open Collective and link to their own websites. [ 107 ] [ 108 ]
In July 2020, GitHub stored a February archive of the site [ 60 ] in an abandoned mountain mine in Svalbard , Norway, part of the Arctic World Archive and not far from the Svalbard Global Seed Vault . The archive contained the code of all active public repositories, as well as that of dormant but significant public repositories. The 21 TB of data was stored on piqlFilm archival film reels as matrix (2D) barcode ( Boxing barcode ), and is expected to last 500–1,000 years. [ 109 ] [ 110 ] [ 111 ] [ 112 ]
The GitHub Archive Program is also working with partners on Project Silica, in an attempt to store all public repositories for 10,000 years. It aims to write archives into the molecular structure of quartz glass platters, using a high-precision petahertz pulse laser, i.e. one that pulses a quadrillion (1,000,000,000,000,000) times per second. [ 112 ]
In March 2014, GitHub programmer Julie Ann Horvath alleged that founder and CEO Tom Preston-Werner and his wife, Theresa, engaged in a pattern of harassment against her that led to her leaving the company. [ 113 ] In April 2014, GitHub released a statement denying Horvath's allegations. [ 114 ] [ 115 ] [ 116 ] However, following an internal investigation, GitHub confirmed the claims. GitHub's CEO Chris Wanstrath wrote on the company blog, "The investigation found Tom Preston-Werner in his capacity as GitHub's CEO acted inappropriately, including confrontational conduct, disregard of workplace complaints, insensitivity to the impact of his spouse's presence in the workplace, and failure to enforce an agreement that his spouse should not work in the office." [ 117 ] Preston-Werner subsequently resigned from the company. [ 118 ] The firm then announced it would implement new initiatives and trainings "to make sure employee concerns and conflicts are taken seriously and dealt with appropriately." [ 118 ]
On July 25, 2019, a developer based in Iran wrote on Medium that GitHub had blocked his private repositories and prohibited access to GitHub pages. [ 119 ] Soon after, GitHub confirmed that it was now blocking developers in Iran , Crimea , Cuba , North Korea , and Syria from accessing private repositories. [ 120 ] However, GitHub reopened access to GitHub Pages days later, for public repositories regardless of location. It was also revealed that using GitHub while visiting sanctioned countries could result in similar actions occurring on a user's account. GitHub responded to complaints and the media through a spokesperson, saying:
GitHub is subject to US trade control laws, and is committed to full compliance with applicable law. At the same time, GitHub's vision is to be the global platform for developer collaboration, no matter where developers reside. As a result, we take seriously our responsibility to examine government mandates thoroughly to be certain that users and customers are not impacted beyond what is required by law. This includes keeping public repositories services, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions. [ 121 ] [ 122 ]
Developers who feel that they should not have restrictions can appeal for the removal of said restrictions, including those who only travel to, and do not reside in, those countries. GitHub has forbidden the use of VPNs and IP proxies to access the site from sanctioned countries, as purchase history and IP addresses are how they flag users, among other sources. [ 123 ]
On December 4, 2014, Russia blacklisted GitHub.com because GitHub initially refused to take down user-posted suicide manuals. [ 124 ] After a day, Russia withdrew its block, [ 125 ] and GitHub began blocking specific content and pages in Russia. [ 126 ] On December 31, 2014, India blocked GitHub.com along with 31 other websites over pro- ISIS content posted by users; [ 127 ] the block was lifted three days later. [ 128 ] On October 8, 2016, Turkey blocked GitHub to prevent email leakage of a hacked account belonging to the country's energy minister. [ 129 ]
On March 26, 2015, a large-scale DDoS attack was launched against GitHub.com that lasted for just under five days. [ 130 ] The attack, which appeared to originate from China, primarily targeted GitHub-hosted user content describing methods of circumventing Internet censorship . [ 131 ] [ 132 ] [ 133 ]
On April 19, 2020, Chinese police detained Chen Mei and Cai Wei (volunteers for Terminus 2049, a project hosted on GitHub), and accused them of "picking quarrels and provoking trouble." Cai and Chen archived news articles, interviews, and other materials published on Chinese media outlets and social media platforms that have been removed by censors in China. [ 134 ]
GitHub has a $200,000 contract with U.S. Immigration and Customs Enforcement (ICE) for the use of their on-site product GitHub Enterprise Server. This contract was renewed in 2019, despite internal opposition from many GitHub employees. In an email sent to employees, later posted to the GitHub blog on October 9, 2019, CEO Nat Friedman stated, "The revenue from the purchase is less than $200,000 and not financially material for our company." He announced that GitHub had pledged to donate $500,000 to "nonprofit groups supporting immigrant communities targeted by the current administration ." [ 135 ] In response, at least 150 GitHub employees signed an open letter re-stating their opposition to the contract, and denouncing alleged human rights abuses by ICE. As of November 13, 2019, five workers had resigned over the contract. [ 136 ] [ 137 ] [ 138 ]
The ICE contract dispute came into focus again in June 2020 due to the company's decision to abandon "master/slave" branch terminology , spurred by the George Floyd protests and Black Lives Matter movement. [ 139 ] Detractors of GitHub describe the branch renaming to be a form of performative activism and have urged GitHub to cancel their ICE contract instead. [ 140 ] An open letter from members of the open source community was shared on GitHub in December 2019, demanding that the company drop its contract with ICE and provide more transparency into how they conduct business and partnerships. The letter has been signed by more than 700 people. [ 141 ]
In January 2021, GitHub fired one of its employees after he expressed concern for colleagues following the January 6 United States Capitol attack , calling some of the rioters " Nazis ". [ 142 ] After an investigation, GitHub's COO said there were "significant errors of judgment and procedure" with the company's decision to fire the employee. As a result of the investigation, GitHub reached out to the employee, and the company's head of human resources resigned. [ 143 ] [ 144 ]
In 2023, parts of the source code of the social media platform Twitter were uploaded onto GitHub. The leak was first reported by the New York Times and was part of a legal filing Twitter submitted to the United States District Court for the Northern District of California . Twitter claimed that the postings infringed on copyright property owned by them, and asked the court for information to identify the user who posted the source code to GitHub, under the username "FreeSpeechEnthusiast". [ 145 ]
In 2012, Linus Torvalds , the original developer of Git, highly praised GitHub, stating, "The hosting of github [ sic ] is excellent. They've done a good job on that. I think GitHub should be commended enormously for making open source project hosting so easy." However, he also sharply criticized the implementation of GitHub's merging interface, saying, "Git comes with a nice pull-request generation module, but GitHub instead decided to replace it with their own totally inferior version. As a result, I consider GitHub useless for these kinds of things. It's fine for hosting, but the pull requests and the online commit editing, are just pure garbage." [ 146 ] [ 147 ] | https://en.wikipedia.org/wiki/GitHub |
Gitee ( simplified Chinese : 码云 ; traditional Chinese : 碼雲 ; pinyin : Mǎyún ) is a proprietary online forge that allows software version control using Git and is intended primarily for the hosting of open source software. It is a fork of Gitea and uses a compatible API . It was launched by Shenzhen -based OSChina in 2013. [ 2 ] [ 3 ] Gitee claims to have more than 10 million repositories and 5 million users. [ 3 ]
Gitee was chosen by the Ministry of Industry and Information Technology of the Chinese government to make an "independent, open-source [ ambiguous ] code hosting platform for China." [ 3 ]
On 18 May 2022, Gitee announced all code will be manually reviewed before public availability. [ 4 ] [ 5 ] Gitee did not specify a reason for the change, though there was widespread speculation it was ordered by the Chinese government amid increasing online censorship in China . [ 4 ] [ 6 ]
| https://en.wikipedia.org/wiki/Gitee
Giuliana Tesoro ( previously Cavaglieri ) (June 1, 1921 – September 29, 2002) was an Italian-born American chemist who earned more than 125 patents, the most notable of which involved improvements in fabric comfort, practicality, and flame resistance. [ 1 ]
Her pioneering work in textile chemistry led to significant advances in fabric safety and performance. She was born on June 1, 1921, in Venice, Italy, to Gino and Margherita Maroni Cavaglieri, during the rise of fascism and of anti-Semitic laws under Benito Mussolini. She began third grade at age six and graduated from the Liceo Classico Marco Polo high school at seventeen, in 1938. Despite the laws that barred her from higher education in Italy due to her Jewish heritage, she pursued her passion for science with unwavering determination. After obtaining an x-ray technician degree in Geneva, Switzerland, Tesoro immigrated to the United States in 1939; she went on to receive a Ph.D. in organic chemistry from Yale at age 21.
Tesoro's professional journey was marked by a series of influential roles. She began her career as a research chemist at Onyx Oil and Chemical, where she climbed the ranks to head of the organic synthesis department and, by 1955, to associate director. After her ten-year tenure there she moved to the Textile Research Institute for two years. In 1969 she accepted a position as a senior chemist at Burlington Industries, where she was appointed director of chemical research in 1971. During this time she was a prolific inventor, having been granted more than two dozen patents in 1970. Transitioning to academia, Tesoro served as a visiting professor in the Department of Mechanical Engineering at the Massachusetts Institute of Technology (MIT) from 1972 to 1976, later becoming an adjunct professor and senior research scientist. She continued her academic contributions as a research professor of polymer chemistry at Polytechnic University (now NYU Tandon School of Engineering) from 1982 until her retirement in 1996. She also lectured on polymers at conferences worldwide and served as an editor of the Textile Research Journal.
Giuliana Tesoro died on September 29, 2002, in Dobbs Ferry, New York, at the age of 81.
Beyond flame-resistant fabrics, Tesoro contributed to several advancements in textile chemistry, including improvements in fabric durability, wrinkle resistance, and the development of synthetic fibers with enhanced properties. These innovations improved both consumer products and industrial applications, making textiles safer, longer-lasting, and more functional.
Flame-resistant fabrics
One of Tesoro's most significant innovations was the development of flame-resistant fabric. Before her work, cotton and other natural fibers were highly flammable, posing serious risk, especially for workers in high-risk industries like firefighting and electrical engineering. Tesoro developed chemical treatments that made these fibers resistant to fire, drastically reducing burn injuries and deaths. This innovation has had a lasting societal impact, as flame-resistant fabrics continue to be an essential component in protective clothing, including firefighter gear, industrial uniforms, and even household textiles like mattresses and curtains. Her work laid the foundation for materials such as Kevlar and Nomex.
Tesoro's expertise was recognized through her active participation in several esteemed organizations, including the National Academy of Sciences and the National Research Council, where she contributed to committees focused on toxic materials and fire safety. She was also a founding member of the Fiber Society (1974) and held memberships in the American Chemical Society, the American Association of Textile Chemists and Colorists, and the American Association for the Advancement of Science. She won the Olney Medal from the American Association of Textile Chemists and Colorists in 1963, and then the Society of Women Engineers Achievement Award in 1978.
Giuliana Tesoro's legacy endures through the continued use and evolution of her innovations. Flame-resistant fabrics remain integral to protective clothing across various industries, continually adapting to meet modern safety standards and technological advancements. | https://en.wikipedia.org/wiki/Giuliana_Tesoro |
Giuseppe Donatiello (born 14 December 1967) is an Italian amateur astronomer . He is primarily known as the discoverer of eleven nearby dwarf galaxies in the Local Volume . [ 1 ] [ 2 ]
He has also discovered several candidate planetary nebulae [ 23 ] and participated in the discovery and analysis of several dozen stellar streams. [ 24 ] [ 25 ]
He is the principal investigator and coordinator of the National Deep Sky Research Section of the Italian Amateur Astronomers Union. [ 26 ] | https://en.wikipedia.org/wiki/Giuseppe_Donatiello
Giuseppe Resnati (born 26 August 1955) is an Italian chemist with interests in supramolecular chemistry and fluorine chemistry . He has a particular focus on self-assembly processes driven by halogen bonds , [ 3 ] chalcogen bonds , [ 4 ] and pnictogen bonds . [ 5 ] His results on attractive non-covalent interactions in which atoms act as electrophiles, thanks to the anisotropic distribution of the electron density typical of bonded atoms, prompted a systematic rationalization and categorization of many different weak bonds formed by many elements of the p- and d- blocks of the periodic table . [ 6 ]
Resnati was born in Monza , Italy . He obtained his PhD in Industrial Chemistry at the University of Milan in 1988 with Prof. Carlo Scolastico and a thesis on asymmetric synthesis via chiral sulfoxides. After a period of activity at the Italian National Research Council , in 2001 he became professor of chemistry for materials at the Politecnico di Milano .
His research interests span several areas of supramolecular and fluorine chemistry. | https://en.wikipedia.org/wiki/Giuseppe_Resnati
Glacial geoengineering is a set of proposed geoengineering methods that focus on slowing the loss of glaciers , ice sheets , and sea ice in polar regions and, in some cases, alpine areas. Proposals are motivated by concerns that feedback loops—such as ice-albedo loss, accelerated glacier flow, and permafrost methane release—could amplify climate change and trigger climate tipping points . [ 1 ] [ 2 ]
Proposed glacial geoengineering methods include regional or local solar radiation management , thinning cirrus clouds to allow more heat to escape, and deploying mechanical or engineering structures to stabilize ice. Specific strategies under investigation are stratospheric aerosol injection focused on polar regions, [ 1 ] marine cloud brightening , [ 3 ] surface albedo modification with reflective materials, [ 4 ] basal interventions such as draining subglacial water or promoting basal freezing, [ 2 ] and ice shelf protection measures including seabed curtains. [ 5 ]
Glacial geoengineering is in the early research stage and many proposals face major technical, environmental, and governance challenges. [ 3 ] Supporters argue that targeted interventions could help stabilize ice sheets, slow sea-level rise, and reduce the risk of passing irreversible thresholds in the climate system. At the same time, experts caution that the effectiveness of these methods remains highly uncertain and that interventions could produce unintended side effects. [ 2 ] Glacial geoengineering is generally considered a possible complement to, not a replacement for, efforts to reduce greenhouse gas emissions. [ 1 ] [ 3 ]
The rapid decline of Arctic sea ice has drawn attention to feedback loops that could accelerate global warming and has motivated proposals for climate intervention.
The Arctic's albedo plays a major role in regulating how much solar radiation is reflected away from Earth's surface. [ 6 ] As sea ice melts and the region's albedo decreases, less sunlight is reflected, causing additional warming. [ 6 ] This creates a positive feedback loop, known as the ice-albedo feedback loop , where rising temperatures cause further ice loss. [ 7 ] If this process continues, it could push the climate system past critical tipping points . [ 7 ]
Melting Arctic ice may also release methane, a powerful greenhouse gas stored in permafrost as methane clathrate . [ 8 ] Methane release could drive additional warming, creating another feedback loop. [ 9 ] A 3 °C rise above pre-industrial temperatures could thaw 30–85% of Arctic permafrost, risking major climate impacts. [ 9 ] [ clarification needed ] The IPCC Sixth Assessment Report projected that Arctic late-summer sea ice could largely disappear by the mid-21st century. [ 10 ] In response, climate engineering has been proposed to slow or reverse these trends. [ 11 ]
Supporters of Arctic geoengineering argue it could stabilize permafrost carbon stores and limit further warming. [ 11 ] Arctic permafrost holds an estimated 1,700 billion metric tons of carbon—about 51 times the amount of annual global fossil fuel emissions. [ 12 ] Permafrost soils across the Northern Hemisphere contain about twice as much carbon as the atmosphere, and Arctic air temperatures have risen roughly six times faster than the global average. [ 11 ] Continued ice loss could substantially accelerate global warming . [ 11 ] Arctic sea ice also helps regulate global temperatures by limiting the release of strong greenhouse gases . [ 11 ]
Proposed geoengineering strategies aim to protect existing sea ice and encourage new ice growth. Methods include reducing sunlight reaching the surface, promoting freezing, and slowing melt rates. [ 11 ] [ 13 ] Approaches include stratospheric sulfate aerosol injection, pumping seawater onto ice to thicken it, and covering ice with hollow glass spheres to enhance reflectivity. [ 13 ] [ 12 ] These methods vary widely in cost, complexity, and technical feasibility. [ 13 ]
Surface ice thickening is a proposed glacial geoengineering strategy aimed at slowing ice loss by building up the thickness of glaciers, ice sheets, or sea ice. One method involves pumping seawater onto the surface of polar ice sheets during winter, allowing it to freeze and add mass. Thickening the ice in this way could make it more resistant to melting and flow. [ 4 ] [ 3 ] The Centre for Climate Repair at Cambridge has proposed a concept where fleets of wind- and solar-powered pumps would distribute seawater across vulnerable areas to help stabilize ice sheets, [ 14 ] while the RealIce project has explored similar techniques using energy-efficient pumping technologies. [ 15 ]
Another approach focuses on increasing snowfall. Artificial snow production, a technology already common at ski resorts, could be adapted to add mass to glaciers and ice sheets. By spraying fine droplets of water into cold air, snow can be generated and deposited on the surface. Research initiatives have investigated the potential of artificial snowmaking for glacier protection, particularly in alpine regions. [ 16 ]
Surface thickening methods could be deployed either over large sections of polar ice sheets or in more targeted ways, such as reinforcing weak spots near glacier grounding lines. However, scaling these interventions across vast polar areas would require large infrastructure investments and could present environmental challenges. [ 2 ]
Basal interventions aim to slow the flow of glaciers and ice sheets by modifying conditions at their base. One proposed method is draining meltwater from beneath glaciers to reduce lubrication at the ice-bed interface. Removing this water could increase friction between the ice and bedrock, slowing glacial movement and reducing the contribution to sea-level rise. [ 4 ] [ 17 ]
Another approach involves basal freezing, where artificial cooling is used to promote the refreezing of water at the base of the ice sheet. This could increase the strength of the ice-bed connection and further stabilize glacier flow. [ 3 ] Techniques under consideration include installing thermal systems to extract heat from the bed or injecting cooled fluids to promote freezing.
Basal interventions could target key outlet glaciers or grounding lines where destabilization is occurring most rapidly. Modeling studies suggest that these methods could be effective in slowing ice sheet collapse, but the technical challenges are significant. Drilling, installing, and maintaining systems under thick ice in remote, harsh environments would require major engineering efforts. [ 2 ]
Protecting ice shelves is an important focus of glacial geoengineering proposals, as ice shelves play a key role in slowing the flow of glaciers into the ocean. Several strategies have been proposed to stabilize ice shelves and reduce the risk of rapid ice loss.
One approach involves buttressing ice shelves by constructing artificial anchors or adding material to strengthen existing grounding points. This could include placing rocks or engineered structures on the seabed where ice shelves are weak, helping to pin the ice and slow its flow. [ 4 ] [ 18 ] Studies suggest that even small changes in buttressing could have large effects on the stability of upstream glaciers.
Another proposal is to install seabed curtains or barriers to block the flow of warm ocean water toward glacier grounding lines. These flexible underwater structures would be anchored to the seabed and extend vertically to impede warm currents, which currently erode the ice from below. [ 3 ] [ 2 ] The Centre for Climate Repair at Cambridge has highlighted seabed curtains as a potentially scalable method to slow ice shelf thinning and collapse. [ 14 ] Research and engineering studies have explored designs for curtains that could withstand ocean currents while remaining flexible enough to adjust to ice movements. [ 19 ]
While modeling studies suggest that both buttressing and seabed barriers could meaningfully slow ice loss, these approaches would involve major engineering challenges. Building and maintaining structures in remote, dynamic polar environments would be technically complex and costly. Potential environmental impacts, such as changes to ocean circulation or ecosystems, would also need to be carefully considered. [ 5 ]
Stratospheric aerosol injection (SAI) concentrated in polar regions is a proposed geoengineering method to slow the melting of polar ice. It involves releasing small reflective particles, such as sulfur dioxide , into the stratosphere over high latitudes to reflect sunlight and cool the surface below. Targeting aerosols in the Arctic and Antarctic could reduce polar amplification —the faster warming of the poles compared to the rest of the planet—and help preserve sea ice and glaciers. [ 1 ] Climate model studies suggest that polar-focused SAI could reduce summer ice loss, limit sea-level rise, and have fewer global side effects than a uniform worldwide aerosol distribution . [ 1 ] [ 22 ]
One proposed strategy is to release aerosols seasonally during the polar winter, when solar energy is returning but atmospheric conditions are more stable. [ 22 ] This could maximize cooling effects while minimizing disruption to atmospheric circulation. However, even polar SAI could alter weather patterns, weaken the polar vortex, and affect ozone chemistry. While SAI shows potential to slow polar ice loss, uncertainties remain about its effectiveness, regional impacts, and governance challenges. [ 23 ]
Marine cloud brightening (MCB) is a proposed geoengineering method that would involve spraying fine seawater droplets into the atmosphere to make clouds more reflective, thereby cooling the surface below. In polar regions, MCB aims to increase the brightness of low-lying clouds over the oceans to reduce regional warming and slow ice loss. Research suggests that targeting MCB at high latitudes could help stabilize Arctic sea ice, with fewer global side effects compared to interventions applied worldwide. [ 1 ] [ 23 ] Observational studies in the Southern Ocean, where natural cloud brightening occurs, provide supporting evidence that increasing cloud droplet concentration can significantly boost cloud reflectivity and cooling potential. [ 1 ]
The Centre for Climate Repair at Cambridge has proposed developing MCB techniques specifically to "refreeze" the Arctic by restoring the reflectivity of polar clouds. [ 14 ] Other proposals suggest using fleets of unmanned vessels to continuously spray seawater into the atmosphere over targeted ocean areas. [ 24 ] Although polar MCB shows promise in models, technical challenges, potential ecological impacts, and the difficulty of achieving sufficient cloud modification at large scales remain significant obstacles. [ 23 ]
Ocean albedo modification would aim to make open ocean surfaces near the poles more reflective, reducing the amount of solar energy absorbed by the water. One idea is to generate microbubbles or apply reflective foams across the ocean surface to increase its brightness. Studies suggest that even modest increases in surface reflectivity could contribute to localized cooling and help slow the loss of sea ice. [ 25 ] [ 26 ] Proposed techniques include releasing air bubbles from ships or using surface treatments to create a whiter ocean surface . [ 27 ] However, large-scale deployment of these methods remains theoretical. Challenges include maintaining a sufficient concentration of bubbles or foam over time, potential impacts on marine ecosystems, and the difficulty of covering large ocean areas in a sustainable way. [ 28 ]
Surface albedo modification is a proposed geoengineering method aimed at slowing ice melt by increasing the reflectivity of glaciers, ice sheets, and sea ice. Techniques under study include applying bright materials, such as hollow glass microspheres or reflective geotextiles, to ice surfaces. By increasing albedo, these treatments are intended to reflect more solar radiation and reduce surface warming. [ 1 ] [ 23 ] Field experiments have demonstrated that surface treatments can raise local albedo and delay melting under controlled conditions. [ 29 ] Scaling such methods to cover the extensive areas of polar ice necessary to significantly impact global sea-level rise presents major technical and logistical challenges.
The organization Ice911 Research, later renamed the Arctic Ice Project, conducted field tests using hollow glass microspheres to increase the reflectivity of sea ice. [ 30 ] [ 31 ] Although small-scale trials indicated some increase in ice surface albedo, questions about environmental impacts, material durability, and deployment feasibility remained. The Arctic Ice Project ended operations in 2024. [ 32 ]
Surface albedo modification has also been tested on alpine glaciers. Projects in Switzerland, Austria, and elsewhere have deployed geotextile blankets over glacier surfaces to reflect sunlight and reduce seasonal melt. [ 23 ] Unlike polar-scale proposals, alpine applications are generally focused on preserving ice for tourism, water supply, and local ecosystems rather than influencing global climate.
Cirrus cloud thinning (CCT) is a proposed geoengineering method designed to reduce the warming effect of high-altitude cirrus clouds by making them thinner and shorter-lived. Unlike low clouds, which reflect sunlight and cool the surface, cirrus clouds trap outgoing infrared radiation and contribute to warming. In polar regions, especially during winter when sunlight is minimal, thinning cirrus clouds could enhance longwave radiation loss to space and promote regional cooling. [ 1 ] [ 2 ] Proposed techniques involve injecting ice-nucleating particles into the upper troposphere to encourage the growth of larger ice crystals, which fall out more rapidly, reducing cloud thickness and lifetime. [ 3 ] [ 33 ]
Modeling studies suggest that cirrus cloud thinning focused on high latitudes could support cooling of polar regions. Because it modifies the greenhouse effect rather than the reflection of sunlight, it may avoid some side effects associated with other SRM methods. However, uncertainties remain about its effectiveness, particularly concerning potential impacts on atmospheric circulation and moisture transport. [ 3 ] | https://en.wikipedia.org/wiki/Glacial_geoengineering |
A glacial refugium (plural glacial refugia ) is a geographic region which made possible the survival of flora and fauna during ice ages and allowed for post-glacial re-colonization. [ 1 ] [ 2 ] Different types of glacial refugia can be distinguished, namely nunatak , peripheral, and lowland. [ 3 ] Glacial refugia have been suggested as a major cause of floral and faunal distribution patterns in both temperate and tropical latitudes. [ 4 ] [ 5 ] [ 6 ] With respect to disjunct populations of modern-day species, especially in birds, [ 7 ] [ 8 ] doubt has been cast on the validity of such inferences, as much of the differentiation between populations observed today may have occurred before or after their restriction to refugia. [ 9 ] [ 10 ] In contrast, isolated geographic locales that host one or more critically endangered species (regarded as paleoendemics or glacial relicts ) are generally uncontested as bona fide glacial refugia. [ 11 ]
Traditionally, the identification of glacial refugia has occurred through palaeoecological analysis, which examines fossil organisms and their remains to determine the origins of modern taxa. [ 5 ] For example, paleoecological approaches have been used to reconstruct the distributions of pollen in Europe for the 13,000 years since the last glaciation. Researchers in this case ultimately established the spread of forest trees from the mountainous southern fringe of Europe, which suggests that this area served as a glacial refugium during this time. [ 12 ]
Four distinct types of glacial refugium have been identified:
One such type, the hydrothermal refugium, is created by an influx of hydrothermal waters which maintains a humid and warm microclimate; this allowed thermophilous trees like oak (Quercus), linden (Tilia), and common ash (Fraxinus excelsior) to survive the last ice age in Central Europe. [ 13 ]
A nunatak is a type of glacial refugium located on the snow-free, exposed peaks of mountains, which lie above the ice sheet during glaciations. [ 3 ] The identification of 'diversity hotspots' in areas that should have been migration regions during major glacial episodes is evidence for nunatak glacial refugia. [ 14 ] For example, the Monte Rosa mountain ranges, the Avers , and the Engadine and the Bernina are all floristically rich proposed nunatak regions, which is indicative of nunatak glacial survival. [ 14 ]
Like nunataks, peripheral glacial refugia exist within mountain systems; they differ in that they are located at the borders of mountain systems. [ 3 ] Evidence for peripheral refugia can be found along the borders of the Carpathian Mountains , Pyrenees , and European Alps , all of which were once glaciated mountain systems. For example, using the amplified fragment length polymorphism (AFLP) technique, researchers have inferred survival of Phyteuma globulariifolium in peripheral refugia in the European Alps. [ 15 ]
Lowland glacial refugia, unlike nunatak and peripheral glacial refugia, are found at low elevations rather than in mountains. [ 3 ] Situated beyond the limits of ice shields, lowland refugia have been identified for several plant and animal species. In Europe, for example, researchers using allozyme analysis have been able to confirm the continuous distribution of Zygaena exulans in between the foothills of the Pyrenees and the Alps during the last ice age. [ 16 ]
In eastern North America, lowland glacial refugia along the Atlantic and Gulf Coasts host endemic plants — some of which are rare, even endangered, and others entail the most southerly disjunct populations of plants that commonly appear only hundreds of miles to the north. Major rivers draining southward from the Appalachian Mountains are associated with a gradation of paleoendemic tree species. These range from the extinct Critchfield spruce near the outlet of the Mississippi River , to extinct-in-the-wild Franklinia along the Altamaha River , to the critically endangered Florida torreya and Florida yew at the downstream end of the Chattahoochee River system. [ 11 ] [ 17 ]
A glacial relict is a population of a species that was common in the Northern Hemisphere prior to the onset of glaciation in the late Tertiary and was forced by climate change to retreat into refugia when continental ice sheets advanced. [ 1 ] Glacial relicts are typically cold-adapted species with a distribution restricted to regions and microhabitats that allow them to survive despite climatic changes. [ 1 ] [ 2 ]
There are a wide variety of plant species which fit the category of glacial relict. The ones given here are a small selection of the much larger group.
| https://en.wikipedia.org/wiki/Glacial_relict
The Gladstone–Dale relation [ 1 ] is a mathematical relation used for optical analysis of liquids, the determination of composition from optical measurements. It can also be used to calculate the density of a liquid for use in fluid dynamics (e.g., flow visualization [ 2 ] ). The relation has also been used to calculate refractive index of glass and minerals in optical mineralogy . [ 3 ]
In the Gladstone–Dale relation, $(n-1)/\rho = \sum km$, the index of refraction ($n$) or the density ($\rho$ in g/cm³) of miscible liquids that are mixed in mass fraction ($m$) can be calculated from characteristic optical constants (the molar refractivity $k$ in cm³/g) of pure molecular end-members. For example, for any mass ($m$) of ethanol added to a mass of water, the alcohol content is determined by measuring density or index of refraction ( Brix refractometer ). Mass ($m$) per unit volume ($V$) is the density $m/V$. Mass is conserved on mixing, but the volume of 1 cm³ of ethanol mixed with 1 cm³ of water is reduced to less than 2 cm³ due to the formation of ethanol-water bonds. The plot of volume or density versus molecular fraction of ethanol in water is a quadratic curve. However, the plot of index of refraction versus molecular fraction of ethanol in water is linear, and the weight fraction equals the fractional density. [ 4 ]
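As a numerical illustration (the pure-liquid data below are approximate textbook values, and the mixture density is an assumption made for the example), the relation predicts the index of an ethanol-water mixture from the mass fractions of the end-members:

```python
# Gladstone-Dale mixing sketch: n = 1 + rho_mix * sum(k_i * m_i), with m_i the
# mass fractions. The k values are derived from approximate pure-liquid data
# (n, rho) and are illustrative, not authoritative.

def gd_constant(n: float, rho: float) -> float:
    """Specific refractivity k = (n - 1) / rho, in cm^3/g."""
    return (n - 1.0) / rho

k_water = gd_constant(1.333, 0.998)     # ~0.334 cm^3/g
k_ethanol = gd_constant(1.361, 0.789)   # ~0.458 cm^3/g

def mixture_index(m_ethanol: float, rho_mix: float) -> float:
    """Predicted index of an ethanol-water mixture of measured density rho_mix."""
    m_water = 1.0 - m_ethanol
    return 1.0 + rho_mix * (k_water * m_water + k_ethanol * m_ethanol)

# 40% ethanol by mass, with an assumed mixture density of ~0.935 g/cm^3:
print(round(mixture_index(0.40, 0.935), 3))   # ~1.359
```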
In the 1900s, the Gladstone–Dale relation was applied to glass, synthetic crystals and minerals . Average values for the refractivity of oxides such as MgO or SiO 2 give good to excellent agreement between the calculated and measured average indices of refraction of minerals. [ 3 ] However, specific values of refractivity are required to deal with different structure-types, [ 5 ] and the relation required modification to deal with structural polymorphs and the birefringence of anisotropic crystal structures.
In recent optical crystallography, Gladstone–Dale constants for the refractivity of ions were related to the inter-ionic distances and angles of the crystal structure . The ionic refractivity depends on 1/ d 2 , where d is the inter-ionic distance, indicating that a particle-like photon refracts locally due to the electrostatic Coulomb force between ions. [ 6 ]
The Gladstone–Dale relation can be expressed as an equation of state by re-arranging the terms to $(n-1)V = \sum kdm$, or
$$(n-1)/d = \mathrm{constant},$$ [ 7 ]
where $n$ is the index of refraction, $d$ is the density, and the constant is the Gladstone–Dale constant.
The macroscopic values ( n ) and ( V ) determined on bulk material are now calculated as a sum of atomic or molecular properties. Each molecule has a characteristic mass (due to the atomic weights of the elements) and atomic or molecular volume that contributes to the bulk density, and a characteristic refractivity due to a characteristic electric structure that contributes to the net index of refraction.
The refractivity of a single molecule is the refractive volume $k(\mathrm{MW})/N_A$ in nm³, where MW is the molecular weight and $N_A$ is the Avogadro constant . To calculate the optical properties of materials using the polarizability or refractivity volumes in nm³, the Gladstone–Dale relation competes with the Kramers–Kronig relation and Lorentz–Lorenz relation but differs in optical theory. [ 8 ]
The index of refraction ( n ) is calculated from the change of angle of a collimated monochromatic beam of light from vacuum into liquid using Snell's law for refraction . Using the theory of light as an electromagnetic wave, [ 9 ] light takes a straight-line path through water at reduced speed ( v ) and wavelength ( λ ). The ratio v / λ is a constant equal to the frequency ( ν ) of the light, as is the quantized (photon) energy using the Planck constant and E = hν . Compared to the constant speed of light in vacuum ( c ), the index of refraction of water is n = c / v .
The Gladstone–Dale term ( n − 1) is the non-linear optical path length or time delay. Using Isaac Newton 's theory of light as a stream of particles refracted locally by (electric) forces acting between atoms, the optical path length is due to refraction at constant speed by displacement about each atom. For light passing through 1 m of water with n = 1.33 , the light travels an extra 0.33 m compared to light that travels 1 m in a straight line in vacuum. As the speed of light is a ratio (distance per unit time in m/s), the light likewise takes an extra 0.33 s to traverse a column of water that light crosses in 1 s in vacuum.
Mandarino, in his review of the Gladstone–Dale relationship in minerals, proposed the concept of the compatibility index for comparing the physical and optical properties of minerals. This compatibility index is a required calculation for approval as a new mineral species (see IMA guidelines).
The compatibility index ( CI ) is defined as follows: {\displaystyle \mathrm {CI} _{\text{meas}}=(1-\mathrm {KPD} _{\text{meas}}/\mathrm {KC} )\quad \mathrm {CI} _{\text{calc}}=(1-\mathrm {KPD} _{\text{calc}}/\mathrm {KC} )}
where KPD is the Gladstone-Dale constant derived from the physical properties (index of refraction and density, measured or calculated), and KC is the constant derived from the chemical composition. [ 10 ]
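A sketch of the compatibility-index arithmetic; the KPD and KC values below are hypothetical and serve only to show the calculation:

```python
def compatibility_index(kpd, kc):
    """Mandarino's compatibility index, CI = 1 - KPD/KC."""
    return 1 - kpd / kc

# Hypothetical values: KPD from measured n and density, KC from composition
print(round(compatibility_index(0.208, 0.204), 3))  # ~ -0.02
```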
The Gladstone–Dale relation requires a particle model of light because the continuous wave-front required by wave theory cannot be maintained if light encounters atoms or molecules that maintain a local electric structure with a characteristic refractivity. Similarly, the wave theory cannot explain the photoelectric effect or absorption by individual atoms and one requires a local particle of light (see Wave–particle duality ).
A local model of light consistent with these electrostatic refraction calculations occurs if the electromagnetic energy is restricted to a finite region of space. An electric-charge monopole must occur perpendicular to dipole loops of magnetic flux, but if local mechanisms for propagation are required, a periodic oscillatory exchange of electromagnetic energy occurs with transient mass. In the same manner, a change of mass occurs as an electron binds to a proton. This local photon has zero rest mass and no net charge, but has wave properties with spin-1 symmetry on trace over time. In this modern version of Newton's corpuscular theory of light, the local photon acts as a probe of the molecular or crystal structure. [ 11 ] | https://en.wikipedia.org/wiki/Gladstone–Dale_relation |
In mathematical analysis , Glaeser's continuity theorem is a characterization of the continuity of the derivative of the square roots of functions of class C 2 {\displaystyle C^{2}} . It was introduced in 1963 by Georges Glaeser , [ 1 ] and was later simplified by Jean Dieudonné . [ 2 ]
The theorem states: let {\displaystyle f\ :\ U\rightarrow \mathbb {R} _{0}^{+}} be a function of class {\displaystyle C^{2}} in an open set U contained in {\displaystyle \mathbb {R} ^{n}} ; then {\displaystyle {\sqrt {f}}} is of class {\displaystyle C^{1}} in U if and only if its partial derivatives of first and second order vanish at the zeros of f .
Glagolitic numerals are a numeral system derived from the Glagolitic script , generally agreed to have been created in the 9th century by Saint Cyril . They are similar to Cyrillic numerals , except that numeric values are assigned according to the native alphabetic order of the Glagolitic alphabet . [ 1 ] [ 2 ] Use of Glagolitic script and numerals declined through the Middle Ages and by the 17th century Glagolitic was used almost only in religious writings. It is unclear if the use of Glagolitic numerals persisted as long as the use of Glagolitic script. [ 3 ]
The system is a decimal alphabetic numeral system , with values assigned in alphabetical order, so Ⰰ [ ɑ ] = 1, Ⰱ [ b ] = 2, and so forth. Glyphs for the ones, tens, and hundreds values are combined additively to form numbers; for example, ⰗⰑⰂ is 500 + 80 + 3, or 583. Numbers are written from left to right, with the highest value at the left. As with Cyrillic numerals, the ordinary sign order is reversed between 11 and 19, so those numbers are typically written with the ones digit before the glyph for 10; for example, ⰅⰊ is 6 + 10, making 16. This reflects the Slavic lexical numerals for the teens. [ 4 ] [ 3 ]
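The additive scheme, including the teens reversal, can be sketched as follows; the value table is only the handful of glyph values quoted in this article, not the full alphabet:

```python
# Partial glyph-value table, taken from the examples in this article
VALUES = {1: "Ⰰ", 2: "Ⰱ", 3: "Ⰲ", 6: "Ⰵ", 10: "Ⰺ", 80: "Ⱁ", 500: "Ⱇ"}

def to_glagolitic(n):
    """Additive encoding, highest value first; 11-19 put the ones before 10."""
    hundreds, tens, ones = (n // 100) * 100, (n % 100 // 10) * 10, n % 10
    if 11 <= n % 100 <= 19:          # teens reversal, e.g. 16 = 6 then 10
        parts = [hundreds, ones, 10]
    else:
        parts = [hundreds, tens, ones]
    return "".join(VALUES[p] for p in parts if p)

print(to_glagolitic(583))  # ⰗⰑⰂ = 500 + 80 + 3
print(to_glagolitic(16))   # ⰅⰊ = 6 + 10
```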
For numbers greater than 999, there is conflicting evidence. As the earliest version of the Glagolitic alphabet had 36 characters, there are indications of the use of Glagolitic letters for 1000 through 9000, [ 3 ] [ 5 ] although the validity of 3000 and greater is questioned. [ 6 ] There is also evidence of the use of a thousands sign, similar to the lower-left keraia in Greek numerals or the Cyrillic thousands sign to mark numbers greater than 999. [ 3 ]
To distinguish numbers from text, numerals are typically set apart with dots or a mark is placed over the numbers. [ 3 ] For example, the Missale Romanum Glagolitice printed in 1483, uses both dots around and a titlo over letters in places to indicate a number, [ 7 ] as does the Vinodol statute .
Example: ( ·Ⱍ҃·Ⱄ҃·Ⱁ҃· ) – 1280
As noted earlier, the letters associated with number values greater than 999 are uncertain, and different authors have inferred different values (and different orders) for letters towards the end of the Glagolitic alphabet. [ 6 ] [ 8 ] | https://en.wikipedia.org/wiki/Glagolitic_numerals |
In number theory , Glaisher's theorem is an identity useful to the study of integer partitions . Proved in 1883 [ 1 ] by James Whitbread Lee Glaisher , it states that the number of partitions of an integer n {\displaystyle n} into parts not divisible by d {\displaystyle d} is equal to the number of partitions in which no part is repeated d {\displaystyle d} or more times. This generalizes a result established in 1748 by Leonhard Euler for the case d = 2 {\displaystyle d=2} .
Formally, the theorem states that the number of partitions of an integer {\displaystyle n} into parts not divisible by {\displaystyle d} equals the number of partitions of the form {\displaystyle n=\lambda _{1}+\cdots +\lambda _{k}} where {\displaystyle \lambda _{i}\geq \lambda _{i+1}} and {\displaystyle \lambda _{i}\geq \lambda _{i+d-1}+1} , i.e. partitions in which no part is repeated d or more times.
When d = 2 {\displaystyle d=2} this becomes the special case known as Euler's theorem, that the number of partitions of n {\displaystyle n} into distinct parts is equal to the number of partitions of n {\displaystyle n} into odd parts.
In the following examples, we use the multiplicity notation of partitions. For example, 1 4 2 1 3 2 {\displaystyle 1^{4}2^{1}3^{2}} is a notation for the partition 1 + 1 + 1 + 1 + 2 + 3 + 3.
Among the 15 partitions of the number 7, there are 5, shown in bold below, that contain only odd parts (i.e. only odd numbers):
7 , 6 1 1 1 , 5 1 2 1 , 5 1 1 2 , 4 1 3 1 , 4 1 2 1 1 1 , 4 1 1 3 , 3 2 1 1 , 3 1 2 2 , 3 1 2 1 1 2 , 3 1 1 4 , 2 3 1 1 , 2 2 1 3 , 2 1 1 5 , 1 7 {\displaystyle \mathbf {7} ,6^{1}1^{1},5^{1}2^{1},\mathbf {5^{1}1^{2}} ,4^{1}3^{1},4^{1}2^{1}1^{1},4^{1}1^{3},\mathbf {3^{2}1^{1}} ,3^{1}2^{2},3^{1}2^{1}1^{2},\mathbf {3^{1}1^{4}} ,2^{3}1^{1},2^{2}1^{3},2^{1}1^{5},\mathbf {1^{7}} }
If we count now the partitions of 7 with distinct parts (i.e. where no number is repeated), we also obtain 5:
7 , 6 1 1 1 , 5 1 2 1 , 5 1 1 2 , 4 1 3 1 , 4 1 2 1 1 1 , 4 1 1 3 , 3 2 1 1 , 3 1 2 2 , 3 1 2 1 1 2 , 3 1 1 4 , 2 3 1 1 , 2 2 1 3 , 2 1 1 5 , 1 7 {\displaystyle \mathbf {7} ,\mathbf {6^{1}1^{1}} ,\mathbf {5^{1}2^{1}} ,5^{1}1^{2},\mathbf {4^{1}3^{1}} ,\mathbf {4^{1}2^{1}1^{1}} ,4^{1}1^{3},3^{2}1^{1},3^{1}2^{2},3^{1}2^{1}1^{2},3^{1}1^{4},2^{3}1^{1},2^{2}1^{3},2^{1}1^{5},1^{7}}
The partitions in bold in the first and second case are not the same, and it is not obvious why their number is the same.
Among the 11 partitions of the number 6, there are 7, shown in bold below, that contain only parts not divisible by 3:
6 , 5 1 1 1 , 4 1 2 1 , 4 1 1 2 , 3 2 , 3 1 2 1 1 1 , 3 1 1 3 , 2 3 , 2 2 1 2 , 2 1 1 4 , 1 6 {\displaystyle 6,\mathbf {5^{1}1^{1}} ,\mathbf {4^{1}2^{1}} ,\mathbf {4^{1}1^{2}} ,3^{2},3^{1}2^{1}1^{1},3^{1}1^{3},\mathbf {2^{3}} ,\mathbf {2^{2}1^{2}} ,\mathbf {2^{1}1^{4}} ,\mathbf {1^{6}} }
And if we count the partitions of 6 with no part that repeats more than 2 times, we also obtain 7: 6 , 5 1 1 1 , 4 1 2 1 , 4 1 1 2 , 3 2 , 3 1 2 1 1 1 , 3 1 1 3 , 2 3 , 2 2 1 2 , 2 1 1 4 , 1 6 {\displaystyle \mathbf {6} ,\mathbf {5^{1}1^{1}} ,\mathbf {4^{1}2^{1}} ,\mathbf {4^{1}1^{2}} ,\mathbf {3^{2}} ,\mathbf {3^{1}2^{1}1^{1}} ,3^{1}1^{3},2^{3},\mathbf {2^{2}1^{2}} ,2^{1}1^{4},1^{6}}
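For small numbers the theorem is easy to verify by brute force; a minimal sketch with d = 3, reproducing the counts of the example above:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def p_d(n, d):  # partitions with no part divisible by d
    return sum(1 for p in partitions(n) if all(v % d for v in p))

def q_d(n, d):  # partitions with no part repeated d or more times
    return sum(1 for p in partitions(n) if all(p.count(v) < d for v in set(p)))

print(p_d(6, 3), q_d(6, 3))  # 7 7, as in the example
assert all(p_d(n, 3) == q_d(n, 3) for n in range(1, 20))
```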
A proof of the theorem can be obtained with generating functions . If we denote by {\displaystyle p_{d}(n)} the number of partitions with no part divisible by d , and by {\displaystyle q_{d}(n)} the number of partitions with no part repeated more than d − 1 times, then the theorem says that {\displaystyle p_{d}(n)=q_{d}(n)} for all n . The uniqueness of ordinary generating functions implies that, instead of proving this equality for each n separately, it suffices to prove that the generating functions of {\displaystyle p_{d}(n)} and {\displaystyle q_{d}(n)} are equal, i.e. that {\displaystyle \sum _{n=0}^{\infty }p_{d}(n)x^{n}=\sum _{n=0}^{\infty }q_{d}(n)x^{n}} .
Each generating function can be rewritten as an infinite product (with a method similar to the infinite product of the partition function ): {\displaystyle \sum _{n=0}^{\infty }p_{d}(n)x^{n}=\prod _{j\geq 1,\;d\nmid j}{\frac {1}{1-x^{j}}}\qquad \sum _{n=0}^{\infty }q_{d}(n)x^{n}=\prod _{j=1}^{\infty }{\frac {1-x^{dj}}{1-x^{j}}}}
If we expand the infinite product for {\displaystyle q_{d}(n)} : {\displaystyle \prod _{j=1}^{\infty }{\frac {1-x^{dj}}{1-x^{j}}}={\frac {(1-x^{d})(1-x^{2d})(1-x^{3d})\cdots }{(1-x)(1-x^{2})(1-x^{3})\cdots }}}
we see that each term in the numerator cancels with the corresponding multiple of d in the denominator. What remains after canceling all the numerator terms is exactly the infinite product for p d ( n ) {\displaystyle p_{d}(n)} .
Hence the generating functions for p d ( n ) {\displaystyle p_{d}(n)} and q d ( n ) {\displaystyle q_{d}(n)} are equal.
If instead of counting the number of partitions with distinct parts we count the number of partitions with parts differing by at least 2, a further generalization is possible. It was first discovered by Leonard James Rogers in 1894, and then independently by Ramanujan in 1913 and Schur in 1917, in what are now known as the Rogers-Ramanujan identities . The first identity states that the number of partitions of n into parts differing by at least 2 equals the number of partitions of n into parts congruent to 1 or 4 modulo 5; the second states that the number of such partitions whose smallest part is at least 2 equals the number of partitions of n into parts congruent to 2 or 3 modulo 5.
For example, there are only 3 partitions of 7, shown in bold below, into parts differing by at least 2 (note: if a number is repeated in a partition, it means a difference of 0 between two parts, hence the partition is not counted):
7 , 6 1 1 1 , 5 1 2 1 , 5 1 1 2 , 4 1 3 1 , 4 1 2 1 1 1 , 4 1 1 3 , 3 2 1 1 , 3 1 2 2 , 3 1 2 1 1 2 , 3 1 1 4 , 2 3 1 1 , 2 2 1 3 , 2 1 1 5 , 1 7 {\displaystyle \mathbf {7} ,\mathbf {6^{1}1^{1}} ,\mathbf {5^{1}2^{1}} ,5^{1}1^{2},4^{1}3^{1},4^{1}2^{1}1^{1},4^{1}1^{3},3^{2}1^{1},3^{1}2^{2},3^{1}2^{1}1^{2},3^{1}1^{4},2^{3}1^{1},2^{2}1^{3},2^{1}1^{5},1^{7}}
And there are also only 3 partitions of 7 involving only the parts 1, 4, 6:
7 , 6 1 1 1 , 5 1 2 1 , 5 1 1 2 , 4 1 3 1 , 4 1 2 1 1 1 , 4 1 1 3 , 3 2 1 1 , 3 1 2 2 , 3 1 2 1 1 2 , 3 1 1 4 , 2 3 1 1 , 2 2 1 3 , 2 1 1 5 , 1 7 {\displaystyle 7,\mathbf {6^{1}1^{1}} ,5^{1}2^{1},5^{1}1^{2},4^{1}3^{1},4^{1}2^{1}1^{1},\mathbf {4^{1}1^{3}} ,3^{2}1^{1},3^{1}2^{2},3^{1}2^{1}1^{2},3^{1}1^{4},2^{3}1^{1},2^{2}1^{3},2^{1}1^{5},\mathbf {1^{7}} }
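The same kind of brute-force check (with the partition generator of the earlier sketch) confirms the first identity for n = 7:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def gap_at_least_2(p):  # successive parts differ by at least 2
    return all(a - b >= 2 for a, b in zip(p, p[1:]))

n = 7
print(sum(1 for p in partitions(n) if gap_at_least_2(p)))                # 3
print(sum(1 for p in partitions(n) if all(v % 5 in (1, 4) for v in p)))  # 3
```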
For an example of the second statement of the Rogers-Ramanujan identities, we consider partitions of 7 with the further restriction of the smallest part at least 2, and there are only 2, shown in bold below:
7 , 6 1 1 1 , 5 1 2 1 , 5 1 1 2 , 4 1 3 1 , 4 1 2 1 1 1 , 4 1 1 3 , 3 2 1 1 , 3 1 2 2 , 3 1 2 1 1 2 , 3 1 1 4 , 2 3 1 1 , 2 2 1 3 , 2 1 1 5 , 1 7 {\displaystyle \mathbf {7} ,6^{1}1^{1},\mathbf {5^{1}2^{1}} ,5^{1}1^{2},4^{1}3^{1},4^{1}2^{1}1^{1},4^{1}1^{3},3^{2}1^{1},3^{1}2^{2},3^{1}2^{1}1^{2},3^{1}1^{4},2^{3}1^{1},2^{2}1^{3},2^{1}1^{5},1^{7}}
And there are also only 2 partitions of 7 involving only the parts 2, 3, 7:
7 , 6 1 1 1 , 5 1 2 1 , 5 1 1 2 , 4 1 3 1 , 4 1 2 1 1 1 , 4 1 1 3 , 3 2 1 1 , 3 1 2 2 , 3 1 2 1 1 2 , 3 1 1 4 , 2 3 1 1 , 2 2 1 3 , 2 1 1 5 , 1 7 {\displaystyle \mathbf {7} ,6^{1}1^{1},5^{1}2^{1},5^{1}1^{2},4^{1}3^{1},4^{1}2^{1}1^{1},4^{1}1^{3},3^{2}1^{1},\mathbf {3^{1}2^{2}} ,3^{1}2^{1}1^{2},3^{1}1^{4},2^{3}1^{1},2^{2}1^{3},2^{1}1^{5},1^{7}} | https://en.wikipedia.org/wiki/Glaisher's_theorem |
In mathematics , the Glaisher–Kinkelin constant or Glaisher's constant , typically denoted A , is a mathematical constant , related to special functions like the K -function and the Barnes G -function . The constant also appears in a number of sums and integrals , especially those involving the gamma function and the Riemann zeta function . It is named after mathematicians James Whitbread Lee Glaisher and Hermann Kinkelin .
Its approximate value is: A ≈ 1.2824271291...
Glaisher's constant plays a role both in mathematics and in physics . It appears when giving a closed-form expression for Porter's constant , which arises when estimating the efficiency of the Euclidean algorithm . It is also connected to solutions of Painlevé differential equations and the Gaudin model . [ 1 ]
The Glaisher–Kinkelin constant A can be defined via the following limit : [ 2 ] {\displaystyle A=\lim _{n\to \infty }{\frac {H(n)}{n^{n^{2}/2+n/2+1/12}\,e^{-n^{2}/4}}}}
where {\displaystyle H(n)=\prod _{i=1}^{n}i^{i}=1^{1}\cdot 2^{2}\cdot 3^{3}\cdots n^{n}} is the hyperfactorial . An analogous limit, presenting a similarity between {\displaystyle A} and {\displaystyle {\sqrt {2\pi }}} , is given by Stirling's formula as: {\displaystyle {\sqrt {2\pi }}=\lim _{n\to \infty }{\frac {n!}{n^{n+1/2}\,e^{-n}}}}
with {\displaystyle n!=\prod _{i=1}^{n}i=1\cdot 2\cdot 3\cdots n} , which shows that just as π is obtained from the approximation of the factorials , A is obtained from the approximation of the hyperfactorials.
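A minimal numerical sketch of the limit definition of A , evaluated in log space to avoid overflow (convergence is slow but visible):

```python
import math

def log_hyperfactorial(n):
    # log H(n) = sum of i*log(i) for i = 1..n
    return sum(i * math.log(i) for i in range(1, n + 1))

def glaisher_estimate(n):
    # A ~ H(n) / (n**(n**2/2 + n/2 + 1/12) * exp(-n**2/4))
    log_a = (log_hyperfactorial(n)
             - (n * n / 2 + n / 2 + 1 / 12) * math.log(n)
             + n * n / 4)
    return math.exp(log_a)

print(glaisher_estimate(10))     # already close to A
print(glaisher_estimate(2000))   # ~1.2824271..., cf. A = 1.2824271291...
```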
Just as the factorials can be extended to the complex numbers by the gamma function such that Γ ( n ) = ( n − 1 ) ! {\displaystyle \Gamma (n)=(n-1)!} for positive integers n , the hyperfactorials can be extended by the K-function [ 3 ] with K ( n ) = H ( n − 1 ) {\displaystyle K(n)=H(n-1)} also for positive integers n , where:
This gives: [ 1 ]
A related function is the Barnes G -function which is given by
and for which a similar limit exists: [ 2 ]
The Glaisher-Kinkelin constant also appears in the evaluation of the K-function and Barnes-G function at half and quarter integer values such as: [ 1 ] [ 4 ]
with {\displaystyle G} being Catalan's constant and {\displaystyle \varpi ={\frac {\Gamma (1/4)^{2}}{2{\sqrt {2\pi }}}}} being the lemniscate constant .
Similar to the gamma function , there exists a multiplication formula for the K-Function. It involves Glaisher's constant: [ 5 ]
The logarithm of G ( z + 1) has the following asymptotic expansion , as established by Barnes: [ 6 ]
The Glaisher-Kinkelin constant is related to the derivatives of the Euler-constant function : [ 5 ] [ 7 ]
A {\displaystyle A} also is related to the Lerch transcendent : [ 8 ]
Glaisher's constant may be used to give values of the derivative of the Riemann zeta function as closed-form expressions, such as: [ 2 ] [ 9 ] {\displaystyle \zeta '(-1)={\frac {1}{12}}-\ln A} and {\displaystyle \zeta '(2)={\frac {\pi ^{2}}{6}}\left(\gamma +\ln 2\pi -12\ln A\right)}
where γ is the Euler–Mascheroni constant .
The above formula for ζ ′ ( 2 ) {\displaystyle \zeta '(2)} gives the following series: [ 2 ]
which directly leads to the following product found by Glaisher :
Similarly it is
which gives:
An alternative product formula, defined over the prime numbers , reads: [ 10 ]
Another product is given by: [ 5 ]
A series involving the cosine integral is: [ 11 ]
Helmut Hasse gave another series representation for the logarithm of Glaisher's constant, following from a series for the Riemann zeta function: [ 8 ]
The following are some definite integrals involving Glaisher's constant: [ 1 ]
the latter being a special case of: [ 12 ]
We further have: [ 13 ] {\displaystyle \int _{0}^{\infty }{\frac {(1-e^{-x/2})(x\coth {\tfrac {x}{2}}-2)}{x^{3}}}dx=3\ln A-{\frac {1}{3}}\ln 2-{\frac {1}{8}}} and {\displaystyle \int _{0}^{\infty }{\frac {(8-3x)e^{x}-8e^{x/2}-x}{4x^{2}e^{x}(e^{x}-1)}}dx=3\ln A-{\frac {7}{12}}\ln 2+{\frac {1}{2}}\ln \pi -1} A double integral is given by: [ 8 ]
The Glaisher-Kinkelin constant can be viewed as the first constant in a sequence of infinitely many so-called generalized Glaisher constants or Bendersky constants . [ 1 ] They emerge from studying the following product: {\displaystyle \prod _{m=1}^{n}m^{m^{k}}=1^{1^{k}}\cdot 2^{2^{k}}\cdot 3^{3^{k}}\cdots n^{n^{k}}} Setting {\displaystyle k=0} gives the factorial {\displaystyle n!} , while choosing {\displaystyle k=1} gives the hyperfactorial {\displaystyle H(n)} .
Defining the following function {\displaystyle P_{k}(n)=\left({\frac {n^{k+1}}{k+1}}+{\frac {n^{k}}{2}}+{\frac {B_{k+1}}{k+1}}\right)\ln n-{\frac {n^{k+1}}{(k+1)^{2}}}+k!\sum _{j=1}^{k-1}{\frac {B_{j+1}}{(j+1)!}}{\frac {n^{k-j}}{(k-j)!}}\left(\ln n+\sum _{i=1}^{j}{\frac {1}{k-i+1}}\right)} with the Bernoulli numbers {\displaystyle B_{k}} (and using {\displaystyle B_{1}=0} ), one may approximate the above products asymptotically via {\displaystyle \exp(P_{k}(n))} .
For {\displaystyle k=0} we get Stirling's approximation without the factor {\displaystyle {\sqrt {2\pi }}} , as {\displaystyle \exp(P_{0}(n))=n^{n+{\frac {1}{2}}}e^{-n}} . For {\displaystyle k=1} we obtain {\displaystyle \exp(P_{1}(n))=n^{{\tfrac {n^{2}}{2}}+{\tfrac {n}{2}}+{\tfrac {1}{12}}}\,e^{-{\tfrac {n^{2}}{4}}}} , as in the limit definition of {\displaystyle A} .
This leads to the following definition of the generalized Glaisher constants:
which may also be written as:
This gives A 0 = 2 π {\displaystyle A_{0}={\sqrt {2\pi }}} and A 1 = A {\displaystyle A_{1}=A} and in general: [ 1 ] [ 14 ] [ 15 ]
with the harmonic numbers H k {\displaystyle H_{k}} and H 0 = 0 {\displaystyle H_{0}=0} .
Because of the formula
for m > 0 {\displaystyle m>0} , there exist closed form expressions for A k {\displaystyle A_{k}} with even k = 2 m {\displaystyle k=2m} in terms of the values of the Riemann zeta function such as: [ 1 ]
For odd k = 2 m − 1 {\displaystyle k=2m-1} one can express the constants A k {\displaystyle A_{k}} in terms of the derivative of the Riemann zeta function such as:
The numerical values of the first few generalized Glaisher constants are given below: | https://en.wikipedia.org/wiki/Glaisher–Kinkelin_constant |
Glasdegib , sold under the brand name Daurismo , is a medication for the treatment of newly-diagnosed acute myeloid leukemia (AML) in adults older than 75 years or those who have comorbidities that preclude use of intensive induction chemotherapy . [ 4 ] [ 5 ] [ 6 ] It is taken by mouth and is used in combination with low-dose cytarabine . [ 5 ]
The recommended dose of glasdegib is 100 mg orally once daily on days 1 to 28 in combination with cytarabine 20 mg subcutaneously twice daily on days 1 to 10 of each 28-day cycle in the absence of unacceptable toxicity or loss of disease control. [ 5 ]
The most common adverse reactions are anemia, fatigue, hemorrhage, febrile neutropenia, musculoskeletal pain, nausea, edema, thrombocytopenia, dyspnea, decreased appetite, dysgeusia, mucositis, constipation, and rash. [ 4 ]
It is a small molecule inhibitor of sonic hedgehog , which is a protein overexpressed in many types of cancer. It inhibits the sonic hedgehog receptor smoothened (SMO), as do most drugs in its class. [ 7 ]
Glasdegib was approved for medical use in the United States in December 2018. [ 4 ] [ 5 ] [ 8 ] [ 9 ] [ 10 ]
FDA approval was based on a multicenter, open-label, randomized study (BRIGHT AML 1003, NCT01546038) that included 115 subjects with newly-diagnosed AML who met at least one of the following criteria: a) age 75 years or older, b) severe cardiac disease, c) baseline Eastern Cooperative Oncology Group performance status of 2, or d) baseline serum creatinine >1.3 mg/dL. [ 4 ] Subjects were randomized 2:1 to receive glasdegib, 100 mg daily, with LDAC 20 mg subcutaneously twice daily on days 1 to 10 of a 28-day cycle (N=77) or LDAC alone (N=38) in 28-day cycles until disease progression or unacceptable toxicity. [ 4 ] The trial was conducted in the United States, Canada, and Europe. [ 11 ]
Efficacy was established based on an improvement in overall survival (date of randomization to death from any cause). [ 4 ] With a median follow-up of 20 months, median survival was 8.3 months (95% CI: 4.4, 12.2) for the glasdegib + LDAC arm and 4.3 months (95% CI: 1.9, 5.7) for the LDAC alone arm and HR of 0.46 (95% CI: 0.30, 0.71; p=0.0002). [ 4 ]
Glasdegib was granted priority review and orphan drug designation by the U.S. Food and Drug Administration (FDA). [ 4 ] [ 12 ] It was granted orphan drug designation by the European Medicines Agency (EMA) in October 2017. [ 13 ]
Glasdegib was approved for medical use in the European Union in June 2020. [ 2 ] | https://en.wikipedia.org/wiki/Glasdegib
The Glaser coupling is a type of coupling reaction . It is one of the oldest coupling reactions and is based on copper compounds like copper(I) chloride or copper(I) bromide , together with an additional oxidant like air. The base used in the original publication is ammonia, and the solvent is water or an alcohol.
The reaction was first reported by Carl Andreas Glaser [ de ] in 1869. [ 1 ] [ 2 ] He suggested the following process on his way to diphenylbutadiyne :
In the related Eglinton reaction two terminal alkynes are coupled by a copper(II) salt such as cupric acetate . [ 3 ]
The oxidative coupling of alkynes has been used to synthesize a number of natural products . The stoichiometry is represented by this highly simplified scheme: [ 4 ]
Such reactions proceed via copper(I)-alkyne complexes.
This methodology was used in the synthesis of cyclooctadecanonaene . [ 5 ] Another example is the synthesis of diphenylbutadiyne from phenylacetylene . [ 6 ]
The Hay coupling is a variant of the Glaser coupling. It relies on the TMEDA complex of copper(I) chloride to activate the terminal alkyne. Oxygen (air) is used in the Hay variant to oxidize catalytic amounts of Cu(I) to Cu(II) throughout the reaction, as opposed to the stoichiometric amount of Cu(II) used in the Eglinton variant. [ 7 ] The Hay coupling of trimethylsilylacetylene gives the butadiyne derivative. [ 8 ]
In 1882 Adolf von Baeyer used the method to prepare 1,4-bis(2-nitrophenyl)butadiyne, en route to indigo dye . [ 9 ] [ 10 ]
Shortly afterwards, Baeyer reported a different route to indigo, now known as the Baeyer–Drewson indigo synthesis . | https://en.wikipedia.org/wiki/Glaser_coupling |
Glass-ceramics are polycrystalline materials produced through the controlled crystallization of a base glass, producing a fine, uniform dispersion of crystals throughout the bulk material. Crystallization is accomplished by subjecting suitable glasses to a carefully regulated heat-treatment schedule, resulting in the nucleation and growth of crystal phases. In many cases the crystallization process can proceed to near completion, but a small residual glass phase often remains. [ 1 ]
Glass-ceramic materials share many properties with both glasses and ceramics . Glass-ceramics have an amorphous phase and one or more crystalline phases and are produced by a so-called "controlled crystallization" in contrast to a spontaneous crystallization, which is usually not wanted in glass manufacturing. Glass-ceramics have the fabrication advantage of glass, as well as special properties of ceramics. When used for sealing, some glass-ceramics do not require brazing but can withstand brazing temperatures up to 700 °C. [ 2 ]
Glass-ceramics usually have between 30% [m/m ] and 90% [m/m] crystallinity and yield an array of materials with interesting properties like zero porosity , high strength, toughness, translucency or opacity , pigmentation , opalescence , low or even negative thermal expansion , high temperature stability, fluorescence , machinability, ferromagnetism , resorbability or high chemical durability, biocompatibility , bioactivity , ion conductivity, superconductivity , isolation capabilities, low dielectric constant and loss, corrosion resistance, [ 3 ] high resistivity and break-down voltage. These properties can be tailored by controlling the base-glass composition and by controlled heat treatment/crystallization of base glass. In manufacturing, glass-ceramics are valued for having the strength of ceramic but the hermetic sealing properties of glass.
Glass-ceramics are mostly produced in two steps: First, a glass is formed by a glass-manufacturing process, after which the glass is cooled down. Second, the glass is put through a controlled heat treatment schedule. In this heat treatment the glass partly crystallizes . In most cases nucleation agents are added to the base composition of the glass-ceramic. These nucleation agents aid and control the crystallization process. Because there is usually no pressing and sintering, glass-ceramics have no pores, unlike sintered ceramics .
A wide variety of glass-ceramic systems exist, e.g., the Li 2 O × Al 2 O 3 × n SiO 2 system (LAS system), the MgO × Al 2 O 3 × n SiO 2 system (MAS system), and the ZnO × Al 2 O 3 × n SiO 2 system (ZAS system).
Réaumur , a French chemist, made early attempts to produce polycrystalline materials from glass, demonstrating that if glass bottles were packed into a mixture of sand and gypsum, and subjected to red heat for several days, the glass bottles turned opaque and porcelain-like. Although Réaumur was successful in the conversion of glass to a polycrystalline material, he was unsuccessful in achieving the control of the crystallization process, which is a key step in producing true practical glass ceramics with the improved properties mentioned above. [ 3 ]
The discovery of glass-ceramics is credited to a man named Donald Stookey , a renowned glass scientist who worked at Corning Inc. for 47 years. [ 4 ] [ 5 ] The first iteration stemmed from a glass material, Fotoform, which was also discovered by Stookey while he was searching for a photo-etch-able material to be used in television screens. [ 6 ] Soon after the beginning of Fotoform, the first ceramic material was discovered when Stookey overheated a Fotoform plate in a furnace at 900 degrees Celsius and found an opaque, milky-white plate inside the furnace rather than the molten mess that was expected. [ 4 ] While examining the new material, which Stookey aptly named Fotoceram , he took note that it was much stronger than the Fotoform that it was created from as it survived a short fall onto concrete. [ 6 ]
In the late 1950s two more glass-ceramic materials would be developed by Stookey , one found use as the radome in the nose cone of missiles, [ 7 ] while the other led to the line of consumer kitchenware known as Corningware . [ 5 ] Corning executives announced Stookey 's discovery of the latter "new basic material" called Pyroceram which was touted as light, durable, capable of being an electrical insulator and yet thermally shock resistant. At the time, there were only few materials which offered the specific combination of characteristics that Pyroceram did and the material was rolled out as the Corningware kitchen line August 7, 1958. [ 8 ]
Some of the success that Pyroceram brought inspired Corning to put an effort towards strengthening glass, which became a program by Corning 's technical directors titled Project Muscle. [ 8 ] A lesser-known "ultrastrong" glass-ceramic material developed in 1962, called Chemcor (now known as Gorilla Glass ), was produced by Corning 's glass team as a result of the Project Muscle effort. [ 8 ] Chemcor was even used to innovate the Pyroceram line of products: in 1961 Corning launched Centura Ware , a new line of Pyroceram that was lined with a glass laminate (invented by John MacDowell ) and treated with the Chemcor process. [ 8 ] Stookey continued to forge ahead in the discovery of the properties of glass-ceramics, finding out how to make the material transparent in 1966. [ 8 ] However, for fear of cannibalizing Pyrex sales, Corning did not release a product based on this innovation until the late 1970s, under the name Visions . [ 8 ]
The key to engineering a glass-ceramic material is controlling the nucleation and growth of crystals in the base glass. The amount of crystallinity will vary depending on the amount of nuclei present and the time and temperature at which the material is heated. [ 9 ] [ 4 ] It is important to understand the types of nucleation occurring in the material, whether it is homogeneous or heterogeneous.
Homogeneous nucleation is a process resulting from the inherent thermodynamic instability of a glassy material. [ 4 ] When enough thermal energy is applied to the system, the metastable glassy phase begins to return to the lower-energy, crystalline state. [ 9 ] The term "homogeneous" is used here because the formation of nuclei comes from the base glass without any second phases or surfaces promoting their formation.
The rate of homogeneous nucleation in a condensed system can be described with the following equation, proposed by Becker in 1938: {\displaystyle I=A\exp \left(-{\frac {Q+F^{*}}{kT}}\right)}
where Q is the activation energy for diffusion across the phase boundary, A is a constant, k is the Boltzmann constant, T is the absolute temperature, and {\displaystyle F^{*}} is the maximum activation energy for formation of a stable nucleus, given by: {\displaystyle F^{*}={\frac {16\pi \,\Delta f_{s}^{3}}{3\,\Delta f_{v}^{2}}}}
where {\displaystyle \Delta f_{v}} is the change of free energy per unit volume resulting from the transformation from one phase to the other, and {\displaystyle \Delta f_{s}} can be equated with the interfacial tension.
Heterogeneous nucleation is a term used when a nucleating agent is introduced into the system to aid and control the crystallization process. [ 4 ] The presence of this nucleating agent, in the form of an additional phase or surface, can act as a catalyst for nucleation and is particularly effective if there is epitaxy between the nucleus and the substrate. [ 4 ] There are a number of metals that can act as nucleating agents in glass because they can exist in the glass in the form of particle dispersion of colloidal dimensions. Examples include copper, metallic silver, and platinum. It was suggested by Stookey in 1959 that the effectiveness of metallic nucleation catalysts relates to the similarities between the crystal structures of the metals and the phase being nucleated.
The most important feature of heterogeneous nucleation is that the interfacial tension between the heterogeneity and the nucleated phase is minimized. This means that the influence the catalyzing surface has on the rate of nucleation is determined by the contact angle at the interface. Based on this, Turnbull and Vonnegut (1952) modified the equation for the homogeneous nucleation rate to give an expression for the heterogeneous nucleation rate: {\displaystyle I=A\exp \left(-{\frac {f(\theta )F^{*}}{kT}}\right)}
If the activation energy for diffusion is included, as suggested by Stookey (1959a), the equation becomes: {\displaystyle I=A\exp \left(-{\frac {Q+f(\theta )F^{*}}{kT}}\right)}
From these equations, heterogeneous nucleation can be described in terms of the same parameters as homogeneous nucleation, together with a shape factor {\displaystyle f(\theta )} , which is a function of the contact angle θ and is given by: {\displaystyle f(\theta )={\frac {(2+\cos \theta )(1-\cos \theta )^{2}}{4}}}
if the nucleus has the form of a spherical cap. [ 3 ]
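A small sketch of how the contact angle enters through this shape factor, and how good wetting reduces the effective nucleation barrier f ( θ ) F *:

```python
import math

def shape_factor(theta_deg):
    """f(theta) = (2 + cos t)(1 - cos t)^2 / 4 for a spherical-cap nucleus."""
    c = math.cos(math.radians(theta_deg))
    return (2 + c) * (1 - c) ** 2 / 4

for theta in (30, 60, 90, 180):
    print(theta, round(shape_factor(theta), 4))
# 180 deg recovers f = 1 (no catalytic benefit); good wetting (small angles)
# cuts the effective barrier f(theta)*F* dramatically
```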
In addition to nucleation, crystal growth is also required for the formation of glass ceramics. The crystal growth process is of considerable importance in determining the morphology of the produced glass ceramic composite material. Crystal growth is primarily dependent on two factors. First, it is dependent upon the rate at which the disordered structure can be re-arranged into a periodic lattice with longer-range order. Second, it is dependent upon the rate at which energy is released in the phase transformation (essentially the rate of cooling at the interface). [ 3 ]
Glass-ceramics are used in medical applications due to their unique interaction, or lack thereof, with human body tissue. Bioceramics are typically placed into the following groups based on their biocompatibility: biopassive (bioinert), bioactive , or resorbable ceramics. [ 9 ]
Biopassive (bioinert) ceramics are, as the name suggests, characterized by the limited interaction the material has with the surrounding biological tissue. [ 9 ] Historically, these were the "first generation" biomaterials used as replacements for missing or damaged tissues. [ 9 ] One problem resulting from using inert biomaterials was the body's reaction to the foreign object; it was found that a phenomenon known as "fibrous encapsulation" would occur, where tissues would grow around the implant in an attempt to isolate the object from the rest of the body. [ 9 ] This occasionally caused a variety of problems such as necrosis or sequestration of the implant. [ 9 ] Two commonly used bioinert materials are alumina (Al2O3) and zirconia (ZrO2). [ 9 ]
Bioactive materials have the ability to form bonds and interfaces with natural tissues. [ 9 ] In the case of bone implants, two properties known as osteoconduction and osteoinduction play an important role in the success and longevity of the implant. [ 9 ] Osteoconduction refers to a material's ability to permit bone growth on the surface and into the pores and channels of the material. [ 9 ] [ 10 ] Osteoinduction is a term used when a material stimulates existing cells to proliferate, causing new bone to grow independently of the implant. [ 9 ] [ 10 ] In general, the bioactivity of a material is a result of a chemical reaction, typically dissolution of the implanted material. [ 9 ] Calcium phosphate ceramics and bioactive glasses are commonly used as bioactive materials as they exhibit this dissolution behavior when introduced to living body tissue. [ 9 ] One engineering goal relating to these materials is that the dissolution rate of the implant be closely matched to the growth rate of new tissue, leading to a state of dynamic equilibrium. [ 9 ]
Resorbable ceramics are similar to bioactive ceramics in their interaction with the body, but the main difference lies in the extent to which the dissolution occurs. Resorbable ceramics are intended to gradually dissolve entirely, all the while new tissue grows in its stead. [ 9 ] The architecture of these materials has become quite complex, with foam-like scaffolds being introduced to maximize the interfacial area between the implant and body tissue. [ 10 ] One issue that arises from using highly porous materials for bioactive/resorbable implants is the low mechanical strength, especially in load-bearing areas such as the bones in the legs. [ 10 ] An example of a resorbable material that has seen some success is tricalcium phosphate (TCP), however, it too falls short in terms of mechanical strength when used in high-stress areas. [ 9 ]
The commercially most important system is the Li 2 O × Al 2 O 3 × n SiO 2 system (LAS system). [ citation needed ] The LAS system mainly refers to a mix of lithium , silicon , and aluminum oxides with additional components, e.g., glass-phase-forming agents such as Na 2 O, K 2 O and CaO and refining agents. As nucleation agents most commonly zirconium(IV) oxide in combination with titanium(IV) oxide is used. This important system was studied first and intensively by Hummel, [ 11 ] and Smoke. [ 12 ]
After crystallization the dominant crystal phase in this type of glass-ceramic is a high-quartz solid solution (HQ s.s.). If the glass-ceramic is subjected to a more intense heat treatment, the HQ s.s. transforms into a keatite solid solution (K s.s., sometimes incorrectly called beta- spodumene ). This transition is non-reversible and reconstructive, meaning that bonds in the crystal lattice are broken and newly arranged; however, the two crystal phases have very similar structures, as Li was able to show. [ 13 ]
An interesting property of these glass-ceramics is their thermomechanical durability. Glass-ceramic from the LAS system is a mechanically strong material and can sustain repeated and quick temperature changes up to 800–1000 °C. The dominant crystalline phase of the LAS glass-ceramics, HQ s.s., has a strongly negative coefficient of thermal expansion (CTE); the keatite solid solution still has a negative CTE, but a much higher one than HQ s.s. These negative CTEs of the crystalline phases contrast with the positive CTE of the residual glass. Adjusting the proportion of these phases offers a wide range of possible CTEs in the finished composite. For most of today's applications a low or even zero CTE is desired; a negative CTE is also possible, meaning that, in contrast to most materials, such a glass-ceramic contracts when heated. At a certain point, generally between 60% [m/m] and 80% [m/m] crystallinity, the two coefficients balance such that the glass-ceramic as a whole has a thermal expansion coefficient very close to zero. In addition, when an interface between materials will be subject to thermal fatigue , glass-ceramics can be adjusted to match the CTE of the material to which they will be bonded.
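The balancing of the phase CTEs can be illustrated with a simple volume-weighted rule-of-mixtures estimate; the phase CTE values below are hypothetical, chosen only to show how a zero-CTE composition can fall inside the quoted 60–80% crystallinity window:

```python
def composite_cte(alpha_crystal, alpha_glass, crystallinity):
    """Simplified rule-of-mixtures estimate of the composite CTE."""
    return crystallinity * alpha_crystal + (1 - crystallinity) * alpha_glass

alpha_c = -1.0e-6   # 1/K, hypothetical negative CTE of the HQ s.s. phase
alpha_g = +3.0e-6   # 1/K, hypothetical positive CTE of the residual glass

x_zero = alpha_g / (alpha_g - alpha_c)     # crystallinity giving zero CTE
print(f"zero-CTE crystallinity ~ {x_zero:.0%}")   # 75%
print(composite_cte(alpha_c, alpha_g, x_zero))    # ~0.0
```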
Originally developed for use in the mirrors and mirror mounts of astronomical telescopes , LAS glass-ceramics have become known and entered the domestic market through its use in glass-ceramic cooktops , as well as cookware and bakeware or as high-performance reflectors for digital projectors.
One particularly notable use of glass-ceramics is in the processing of ceramic matrix composites . For many ceramic matrix composites typical sintering temperatures and times cannot be used, as the degradation and corrosion of the constituent fibres becomes more of an issue as temperature and sintering time increase.
One example of this is SiC fibres, which can start to degrade via pyrolysis at temperatures above 1470K. [ 14 ] One solution to this is to use the glassy form of the ceramic as the sintering feedstock rather than the ceramic, as unlike the ceramic the glass pellets have a softening point and will generally flow at much lower pressures and temperatures. This allows the use of less extreme processing parameters, making the production of many new technologically important fibre-matrix combinations by sintering possible.
Glass-ceramic from the LAS-System is a mechanically strong material and can sustain repeated and quick temperature changes, and its smooth glass-like surface is easy to clean, therefore it is often used as a cooktop surface.
The material has a very low heat conduction coefficient , which means that it stays cool outside the cooking area. It can be made nearly transparent (15–20% loss in a typical cooktop) for radiation in the infrared wavelengths . In the visible range glass-ceramics can be transparent, translucent or opaque and even colored by coloring agents.
However, glass-ceramic is not totally unbreakable. Because it is still a brittle material, as glass and ceramics are, it can break; in particular, it is less robust than traditional cooktops made of steel or cast iron. There have been instances where users reported damage to their cooktops when the surface was struck with a hard or blunt object (such as a can falling from above or other heavy items).
Today, there are two major types of electric stoves with cooktops made of glass-ceramic: those with radiant heating elements or halogen lamps below the glass-ceramic surface, and induction stoves, which heat ferromagnetic cookware directly through the cooktop.
This technology is not entirely new, as glass-ceramic ranges were first introduced in the 1970s using Corningware tops instead of the more durable material used today. These first generation smoothtops were problematic and could only be used with flat-bottomed cookware as the heating was primarily conductive rather than radiative. [ 15 ]
Compared to conventional kitchen stoves, glass-ceramic cooktops are relatively simple to clean, due to their flat surface. However, glass-ceramic cooktops can be scratched very easily, so care must be taken not to slide the cooking pans over the surface. If food with a high sugar content (such as jam) spills, it should never be allowed to dry on the surface, otherwise damage will occur. [ 16 ]
For best results and maximum heat transfer, all cookware should be flat-bottomed and matched to the same size as the burner zone.
Some well-known brands of glass-ceramics are Pyroceram , Ceran, Eurokera, Zerodur , and Macor . Nippon Electric Glass is a leading worldwide manufacturer of glass-ceramics, whose related products in this area include FireLite and NeoCeram , ceramic glass materials for architectural and high-temperature applications respectively. Keralite , manufactured by Vetrotech Saint-Gobain, is a specialty glass-ceramic fire- and impact-safety-rated material for use in fire-rated applications. [ 17 ] Glass-ceramics manufactured in the Soviet Union / Russia are known under the name Sitall . Macor is a white, odorless, porcelain-like glass-ceramic material and was developed originally by Corning Inc. to minimize heat transfer during crewed spaceflight. [ 18 ] StellaShine , launched in 2016 by Nippon Electric Glass Co. , is a heat-resistant glass-ceramic material with a thermal shock resistance of up to 800 degrees Celsius, developed as an addition to Nippon 's line of heat-resistant cooking range plates along with materials like Neoceram . [ 19 ] KangerTech is an e-cigarette manufacturer, founded in Shenzhen, China, that produces glass-ceramic materials and other special hardened-glass products such as vaporizer modification tanks. [ 20 ]
The same class of material is also used in Visions and CorningWare glass-ceramic cookware, allowing it to be taken from the freezer directly to the stovetop or oven with no risk of thermal shock while maintaining the transparent look of glassware. [ 21 ] | https://en.wikipedia.org/wiki/Glass-ceramic |
A glass-ceramic-to-metal seal is a type of mechanical seal which binds glass-ceramic and metal surfaces. They are related to glass-to-metal seals , and like them are hermetic (airtight).
Glass-ceramics are polycrystalline ceramic materials prepared by the controlled crystallization of suitable glasses, normally silicates . Depending on the starting glass composition and the heat-treatment schedule adopted, glass-ceramics can be prepared with tailored thermal expansion characteristics. This makes them ideal for sealing to a variety of different metals, ranging from low expansion tungsten (W) or molybdenum (Mo) to high expansion stainless steels and nickel -based superalloys .
Glass-ceramic-to-metal seals offer superior properties over their glass equivalents including more refractory behaviour, in addition to their ability to seal successfully to many different metals and alloys. They have been used in electrical feed-through seals for such applications as vacuum interrupter envelopes and pyrotechnic actuators , in addition to many applications where a higher temperature capability than is possible with glass-to-metal seals is required, including solid oxide fuel cells .
In the formation of a glass-ceramic-to-metal seal, the parts to be joined are first heated, normally under inert atmosphere, in order to melt the glass and allow it to wet and flow into the metal parts, in much the same way as when preparing a more conventional glass-to-metal seal . The temperature is then normally reduced into a temperature regime where many microscopic nuclei are formed in the glass. The temperature is then raised again into a regime where the major crystalline phases can form and grow to create the polycrystalline ceramic material with thermal expansion characteristics matched to that of the particular metal parts.
The white opaque "glue" between the panel and the funnel of a colour TV cathode ray tube is a devitrified solder glass based on the system PbO – ZnO – B 2 O 3 . While this is a glass-ceramic-to-glass seal, the basic patent of S.A. Claypoole considers glass-ceramic-to-metal seals as well. | https://en.wikipedia.org/wiki/Glass-ceramic-to-metal_seals
Glass-filled polymer (or glass-filled plastic ), is a mouldable composite material . It comprises short glass fibers in a matrix of a polymer material. It is used to manufacture a wide range of structural components by injection or compression moulding . [ 1 ] It is an ideal glass alternative that offers flexibility in the part, chemical resistance, shatter resistance and overall better durability. [ 2 ]
Either thermoplastic or thermosetting polymers may be used. One of the most widely used thermoplastics is a polyamide polymer nylon . [ 3 ]
The first mouldable composite was Bakelite . This used wood flour fibres in phenolic resin as the thermoset polymer matrix . As the fibres were only short this material had relatively low bulk strength, but still improved surface hardness and good mouldability.
A wide range of polymers are now produced in glass-filled varieties, including polyamide (nylon), acetal homopolymers and copolymers , polyester , polyphenylene oxide (PPO / Noryl ), polycarbonate , and polyethersulphone . [ 4 ]
Bulk moulding compound is a pre-mixed material of resin and fibres supplied for moulding. Some are thermoplastic or thermosetting, others are chemically cured and are mixed with a catalyst (polyester) or hardener (epoxy) before moulding.
Compared to the native polymer, glass-filled materials have improved mechanical properties of rigidity , strength and may also have improved surface hardness .
Bulk glass filled materials are considered distinct from fibreglass or fibre-reinforced plastic materials. [ 5 ] These use a substrate of fabric sheets made from long fibres, draped to shape in a mould and then impregnated with resin. They are usually moulded into shapes made of large but thin sheets. Filled materials, in contrast, are used for applications that are thicker or of varying section and not usually as large as sheet materials. | https://en.wikipedia.org/wiki/Glass-filled_polymer
Glass-to-metal seals are a type of mechanical seal which joins glass and metal surfaces. They are very important elements in the construction of vacuum tubes , electric discharge tubes , incandescent light bulbs , glass-encapsulated semiconductor diodes , reed switches , glass windows in metal cases, and metal or ceramic packages of electronic components .
Properly done, such a seal is hermetic (capable of supporting a vacuum , providing good electrical insulation , and offering special optical properties, e.g. for UV lamps). To achieve such a seal, two properties must hold: the molten glass must wet the metal surface and bond to it, and the thermal expansion of the glass and the metal must be matched closely enough that the seal survives cooling and temperature cycling.
Consider, for example, a metal wire sealed into a glass bulb: the metal-glass contact can break if the coefficients of thermal expansion (CTE) are not well matched. If the CTE of the metal is larger than the CTE of the glass, the seal is likely to break upon cooling: as the temperature falls, the metal wire shrinks more than the glass, exerting a strong tensile force on the glass, which finally leads to breakage. On the other hand, if the CTE of the glass is larger than the CTE of the metal wire, the seal tightens upon cooling, since a compressive force is applied to the glass.
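A rough order-of-magnitude sketch of why the mismatch matters, treating the glass as a thin elastic layer fully constrained by the metal; all values are hypothetical, and real seal design also accounts for geometry, stress relaxation, and the compliance of the metal:

```python
def seal_stress(delta_alpha, delta_t, e_glass):
    """Crude elastic estimate: sigma ~ E * delta_alpha * delta_T."""
    return e_glass * delta_alpha * delta_t

# 1 ppm/K CTE mismatch, cooling 480 K from the set point, E_glass ~ 64 GPa
sigma = seal_stress(1e-6, 480.0, 64e9)
print(f"{sigma / 1e6:.0f} MPa")  # ~31 MPa, comparable to the tensile
                                 # strength of ordinary glass
```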
Because of these requirements, and especially the strong need to match the CTEs of both materials, only a few companies offer specialty glass for glass-metal sealing, such as SCHOTT AG and Morgan Advanced Materials .
Glass and metal can bond together by purely mechanical means, which usually gives weaker joints, or by chemical interaction, where the oxide layer on the metal surface forms a strong bond with the glass (glass itself is composed of about 73% silicon dioxide (SiO 2 )). Acid-base reactions are the main cause of the glass-metal interaction in the presence of metal oxides on the surface of the metal. After complete dissolution of the surface oxides into the glass, further progress of the interaction depends on the oxygen activity at the interface. The oxygen activity can be increased by the diffusion of molecular oxygen through defects such as cracks.
Also, reduction of the thermodynamically less stable components in the glass (which releases oxygen ions) can increase the oxygen activity at the interface. In other words, redox reactions are the main cause of the glass-metal interaction in the absence of metal oxides on the surface of the metal. [ 1 ]
For achieving a vacuum-tight seal, the seal must not contain bubbles. The bubbles are most commonly created by gases escaping the metal at high temperature; degassing the metal before its sealing is therefore important, especially for nickel and iron and their alloys. This is achieved by heating the metal in vacuum or sometimes in hydrogen atmosphere or in some cases even in air at temperatures above those used during the sealing process. Oxidizing of the metal surface also reduces gas evolution. Most of the evolved gas is produced due to the presence of carbon impurities in the metals; these can be removed by heating in hydrogen. [ 2 ]
The glass-oxide bond is stronger than glass-metal. The oxide forms a layer on the metal surface, with the proportion of oxygen changing from zero in the metal to the stoichiometry of the oxide and the glass itself. A too-thick oxide layer tends to be porous on the surface and mechanically weak, flaking, compromising the bond strength and creating possible leakage paths along the metal-oxide interface. Proper thickness of the oxide layer is therefore critical.
Metallic copper does not bond well to glass. Copper(I) oxide , however, is wetted by molten glass and partially dissolves in it, forming a strong bond. The oxide also bonds well to the underlying metal. But copper(II) oxide causes weak joints that may leak and its formation must be prevented.
For bonding copper to glass, the surface needs to be properly oxidized. The oxide layer has to have the right thickness: too little oxide would not provide enough material for the glass to anchor to, while too much oxide would cause the oxide layer itself to fail, and in both cases the joint would be weak and possibly non-hermetic. To improve the bonding to glass, the oxide layer should be borated; this is achieved by e.g. dipping the hot part into a concentrated solution of borax and then heating it again for a certain time. This treatment stabilizes the oxide layer by forming a thin protective layer of sodium borate on its surface, so the oxide does not grow too thick during subsequent handling and joining. The layer should have a uniform deep red to purple sheen. [ 3 ] [ 4 ] The boron oxide from the borated layer diffuses into the glass and lowers its melting point. The oxidation occurs as oxygen diffuses through the molten borate layer and forms copper(I) oxide, while the formation of copper(II) oxide is inhibited. [ 2 ]
The copper-to-glass seal should look brilliant red, almost scarlet; pink, sherry and honey colors are also acceptable. Too thin an oxide layer appears light, up to the color of metallic copper, while too thick oxide looks too dark.
Oxygen-free copper has to be used if the metal comes in contact with hydrogen (e.g. in a hydrogen-filled tube or during handling in the flame). Normally, copper contains small inclusions of copper(I) oxide . Hydrogen diffuses through the metal and reacts with the oxide, reducing it to copper and yielding water. The water molecules however can not diffuse through the metal, are trapped in the location of the inclusion, and cause embrittlement .
As copper(I) oxide bonds well to the glass, it is often used for combined glass-metal devices. The ductility of copper can be used for compensation of the thermal expansion mismatch in e.g. the knife-edge seals. For wire feed throughs, dumet wire – nickel-iron alloy plated with copper – is frequently used. Its maximum diameter is however limited to about 0.5 mm due to its thermal expansion.
Copper can be sealed to glass without the oxide layer, but the resulting joint is less strong.
Platinum has similar thermal expansion as glass and is well-wetted with molten glass. It however does not form oxides, so its bond strength is lower. The seal has metallic color and limited strength.
Like platinum, gold does not form oxides that could assist in bonding. Glass-gold bonds are therefore metallic in color and weak. Gold tends to be used for glass-metal seals only rarely. Special compositions of soda-lime glasses that match the thermal expansion of gold, containing tungsten trioxide and oxides of lanthanum, aluminum and zirconium, exist. [ 5 ]
Silver forms a thin layer of silver oxide on its surface. This layer dissolves in molten glass and forms silver silicate , facilitating a strong bond. [ 6 ]
Nickel can bond with glass either as a metal, or via the nickel(II) oxide layer. The metal joint has metallic color and inferior strength. The oxide-layer joint has characteristic green-grey color. Nickel plating can be used in similar way as copper plating, to facilitate better bonding with the underlying metal. [ 3 ]
Iron is only rarely used for feedthroughs, but frequently gets coated with vitreous enamel , where the interface is also a glass-metal bond. The bond strength is also governed by the character of the oxide layer on its surface. A presence of cobalt in the glass leads to a chemical reaction between the metallic iron and cobalt oxide , yielding iron oxide dissolved in glass and cobalt alloying with the iron and forming dendrites , growing into the glass and improving the bond strength. [ 6 ]
Iron can not be directly sealed to lead glass , as it reacts with the lead oxide and reduces it to metallic lead. For sealing to lead glasses, it has to be copper-plated or an intermediate lead-free glass has to be used. Iron is prone to creating gas bubbles in glass due to the residual carbon impurities; these can be removed by heating in wet hydrogen. Plating with copper, nickel or chromium is also advised. [ 2 ]
Chromium is a highly reactive metal present in many iron alloys. Chromium may react with glass, reducing the silicon and forming crystals of chromium silicide growing into the glass and anchoring together the metal and glass, improving the bond strength. [ 6 ]
Kovar , an iron-nickel-cobalt alloy, has low thermal expansion similar to high-borosilicate glass and is frequently used for glass-metal seals especially for the application in x-ray tubes or glass lasers. It can bond to glass via the intermediate oxide layer of nickel(II) oxide and cobalt(II) oxide ; the proportion of iron oxide is low due to its reduction with cobalt. The bond strength is highly dependent on the oxide layer thickness and character. [ 4 ] [ 6 ] The presence of cobalt makes the oxide layer easier to melt and dissolve in the molten glass. A grey, grey-blue or grey-brown color indicates a good seal. A metallic color indicates lack of oxide, while black color indicates overly oxidized metal, in both cases leading to a weak joint. [ 2 ]
Molybdenum bonds to the glass via the intermediate layer of molybdenum(IV) oxide . Due to its low thermal expansion coefficient, matched to glass, molybdenum, like tungsten, is often used for glass-metal bonds especially in conjunction with aluminium-silicate glass. Its high electrical conductivity makes it superior over nickel-cobalt-iron alloys. It is favored by the lighting industry as feedthroughs for lightbulbs and other devices. Molybdenum oxidizes much faster than tungsten and quickly develops a thick oxide layer that does not adhere well, its oxidation should be therefore limited to just yellowish or at most blue-green color. The oxide is volatile and evaporates as a white smoke above 700 °C; excess oxide can be removed by heating in inert gas (argon) at 1000 °C. Molybdenum strips are used instead of wires where higher currents (and higher cross-sections of the conductors) are needed. [ 2 ]
Tungsten bonds to the glass via the intermediate layer of tungsten(VI) oxide . A properly formed bond has characteristic coppery/orange/brown-yellow color in lithium-free glasses; in lithium-containing glasses the bond is blue due to formation of lithium tungstate . Due to its low thermal expansion coefficient, matched to glass, tungsten is frequently used for glass-metal bonds. Tungsten forms satisfying bonds with glasses with similar thermal expansion coefficient such as high-borosilicate glass . The surface of both the metal and glass should be smooth, without scratches. [ 4 ] Tungsten has the lowest expansion coefficient of metals and the highest melting point.
304 Stainless steel forms bonds with glass via an intermediate layer of chromium(III) oxide and iron(III) oxide . Further reactions of chromium, forming chromium silicide dendrites, are possible. The thermal expansion coefficient of steel is, however, fairly different from that of glass; as with copper, this can be alleviated by using knife-edge (Houskeeper) seals. [ 4 ]
Zirconium wire can be sealed to glass with only a little treatment – rubbing with abrasive paper and brief heating in a flame. Zirconium is used in applications demanding chemical resistance or lack of magnetism. [ 2 ]
Titanium , like zirconium, can be sealed to some glasses with only a little treatment. [ 2 ]
Indium and some of its alloys can be used as a solder capable of wetting glass, ceramics, and metals and joining them together. [ 7 ] Indium has a low melting point and is very soft; the softness allows it to deform plastically and absorb the stresses from thermal expansion mismatches. Due to its very low vapor pressure, indium finds use in glass-metal seals in vacuum technology [ 8 ] and cryogenic applications. [ 9 ]
Gallium is a soft metal with a melting point of 30 °C. It readily wets glasses and most metals and can be used for seals that can be assembled and disassembled with just slight heating. It can be used as a liquid seal up to high temperatures, or even at lower temperatures when alloyed with other metals (e.g. as galinstan ). [ 8 ]
Mercury is a metal that is liquid at normal temperature and does not wet glass. It provided the earliest glass-to-metal seal and is still in use for liquid seals, e.g. around rotary shafts.
The first technological use of a glass-to-metal seal was the encapsulation of the vacuum in the barometer by Torricelli . The liquid mercury wets the glass and thus provides for a vacuum tight seal. Liquid mercury was also used to seal the metal leads of early mercury arc lamps into the fused silica bulbs.
A less toxic and more expensive alternative to mercury is gallium .
Mercury and gallium seals can be used for vacuum-sealing rotary shafts.
The next step was to use thin platinum wire . Platinum is easily wetted by glass and has a similar coefficient of thermal expansion to typical soda-lime and lead glass . It is also easy to work with because of its resistance to oxidation and its high melting point. This type of seal was used in scientific equipment throughout the 19th century and also in early incandescent lamps and radio tubes.
In 1911 the Dumet -wire seal was invented which is still [ when? ] the common practice to seal copper leads through soda-lime or lead glass .
If copper is properly oxidised before it is wetted by molten glass, a vacuum-tight seal of good mechanical strength can be obtained. After the copper is oxidized, it is often dipped in a borax solution, as borating helps prevent over-oxidation when the copper is reintroduced to a flame. Simple copper wire is not usable because its CTE is much higher than that of the glass; on cooling, a strong tensile force acts on the glass-to-metal interface and breaks it.
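For a sense of the magnitudes involved, the mismatch stress can be estimated with the common first-order approximation σ ≈ E·Δα·ΔT. The following sketch is an order-of-magnitude illustration only, using typical textbook material constants (assumed values, not taken from this article); real seal stresses depend strongly on geometry and on stress relaxation above the glass transformation temperature.

```python
# Order-of-magnitude estimate of the thermal mismatch stress at a
# glass-to-metal interface: sigma ~ E * (alpha_metal - alpha_glass) * dT.
# All constants are typical textbook values, assumed for illustration.
E_GLASS = 70e9        # Pa, Young's modulus of soda-lime glass
ALPHA_CU = 16.5e-6    # 1/K, thermal expansion of copper
ALPHA_GLASS = 9.0e-6  # 1/K, thermal expansion of soda-lime glass
DELTA_T = 500.0       # K, cooling from the glass "set point" to room temperature

sigma = E_GLASS * (ALPHA_CU - ALPHA_GLASS) * DELTA_T
print(f"mismatch stress ~ {sigma / 1e6:.0f} MPa")  # roughly 260 MPa

# Glass typically fails in tension well below ~100 MPa, so a plain solid
# copper wire sealed into soda-lime glass cracks on cooling -- the problem
# that Dumet wire and thin-walled tube seals are designed to work around.
```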
Glass and glass-to-metal interfaces are especially sensitive to tensile stress. Dumet wire is a copper-clad wire (25% copper by weight) with a core of nickel-iron alloy 42 (42% nickel by weight). [ 10 ] The low-CTE core makes it possible to produce a wire with a radial CTE lower than the linear CTE of the glass, so that the glass-to-metal interface is under a low compression stress. It is not possible to adjust the axial thermal expansion of the wire as well: because of the much higher mechanical strength of the nickel-iron core compared to the copper, the axial CTE of the wire is about the same as that of the core. Therefore, a shear stress builds up, which is limited to a safe value by the low tensile strength of the copper. This is also the reason why Dumet is only useful for wire diameters below about 0.5 mm. [ clarification needed ] [ citation needed ]
In a typical Dumet seal through the base of a vacuum tube, a short piece of Dumet wire is butt-welded to a nickel wire at one end and a copper wire at the other. When the base is pressed from lead glass, the Dumet wire and a short part of the nickel and copper wires are enclosed in the glass. The nickel wire and the glass around the Dumet wire are then heated by a gas flame, and the glass seals to the Dumet wire.
The nickel and copper do not seal vacuum-tight to the glass but are mechanically supported by it. The butt welding also avoids problems with gas leakage at the interface between the core wire and the copper cladding.
Another possibility to avoid a strong tensile stress when sealing copper through glass is the use of a thin-walled copper tube instead of a solid wire. Here a shear stress builds up in the glass-to-metal interface, which is limited by the low tensile strength of the copper combined with a low tensile stress. The copper tube is insensitive to high electric current compared to a Dumet seal, because on heating the tensile stress converts into a compressive stress, which is again limited by the tensile strength of the copper. It is also possible to lead an additional solid copper wire through the copper tube. In a later variant, only a short section of the copper tube has a thin wall, and the tube is prevented from shrinking on cooling by a ceramic tube inside it.
If large parts of copper are to be fitted to glass, like the water-cooled copper anode of a high-power radio transmitter tube or an x-ray tube, the Houskeeper knife-edge seal has historically been used. Here the end of a copper tube is machined to a sharp knife edge, a technique invented by O. Kruh in 1917. In the method described by W.G. Houskeeper, the outside or the inside of the copper tube right up to the knife edge is wetted with glass and connected to the glass tube . [ 11 ] In later descriptions the knife edge is just wetted several millimeters deep with glass, usually deeper on the inside, and then connected to the glass tube .
If copper is sealed to glass, it is an advantage to obtain a thin, bright red layer containing Cu 2 O between the copper and the glass. This is done by borating. In the method of W.J. Scott, a copper-plated tungsten wire is immersed for about 30 s in chromic acid, washed thoroughly in running tap water, dipped into a saturated solution of borax, and heated to bright red heat in the oxidizing part of a gas flame, possibly followed by quenching in water and drying. Another method is to oxidize the copper slightly in a gas flame, dip it into borax solution, and let it dry. The surface of the borated copper is black when hot and turns dark wine red on cooling.
It is also possible to make a bright seal between copper and glass, in which the blank copper surface remains visible through the glass, but this gives less adherence than the seal with the red Cu 2 O-containing layer. If glass is melted onto copper in a reducing hydrogen atmosphere, the seal is extremely weak. If copper is to be heated in a hydrogen-containing atmosphere, e.g. a gas flame, it needs to be oxygen-free to prevent hydrogen embrittlement. Copper intended as an electrical conductor is not necessarily oxygen-free; it contains particles of Cu 2 O which react with hydrogen diffusing into the copper to form H 2 O, which cannot diffuse out of the copper and thus causes embrittlement. The copper usually used in vacuum applications is therefore the very pure OFHC (oxygen-free high-conductivity) grade, which is free of both Cu 2 O and deoxidising additives that might evaporate at high temperature in vacuum.
In the copper disc seal, as proposed by W.G. Houskeeper, the end of a glass tube is closed by a round copper disc. An additional ring of glass on the opposite side of the disc increases the possible thickness of the disc to more than 0.3 mm. The best mechanical strength is obtained if both sides of the disc are fused to the same type of glass tube and both tubes are under vacuum. The disc seal is of special practical interest because it is a simple method of making a seal to low-expansion borosilicate glass without the need for special tools or materials. The keys to success are proper borating, heating the joint to a temperature as close to the melting point of the copper as possible, and slowing down the cooling, at least by packing the assembly into glass wool while it is still red hot.
In a matched seal the thermal expansion of metal and glass is matched. Copper-plated tungsten wire can be used to seal through borosilicate glass with a low coefficient of thermal expansion which is matched by tungsten. The tungsten is electrolytically copper plated and heated in hydrogen atmosphere to fill cracks in the tungsten and to get a proper surface to easily seal to glass. The borosilicate glass of usual laboratory glassware has a lower coefficient of thermal expansion than tungsten, thus it is necessary to use an intermediate sealing glass to get a stress-free seal.
There are combinations of glass and iron-nickel-cobalt alloys ( Kovar ) where even the non-linearity of the thermal expansion is matched. These alloys can be directly sealed to glass, but then the oxidation is critical. Also, their low electrical conductivity is a disadvantage. Thus, they are often gold plated. It is also possible to use silver plating, but then an additional gold layer is necessary as an oxygen diffusion barrier to prevent the formation of iron oxide.
While there are Fe-Ni alloys which match the thermal expansion of tungsten at room temperature, they are not useful for sealing to glass because their thermal expansion increases too strongly at higher temperatures.
Reed switches use a matched seal between an iron-nickel alloy (NiFe 52) and a matched glass. The glass of reed switches is usually green: its iron content gives it high absorption in the near infrared, and the sealing of reed switches is done by heating with infrared radiation.
The electrical connections of high-pressure sodium vapour lamps, the light yellow lamps for street lighting, are made of niobium alloyed with 1% of zirconium. [ 12 ]
Historically, some television cathode ray tubes were made using ferric steel for the funnel and glass matched in expansion to ferric steel. The steel plate used had a diffusion layer at the surface enriched with chromium, made by heating the steel together with chromium oxide in an HCl-containing atmosphere. In contrast to copper, pure iron does not bond strongly to silicate glass. Also, technical iron contains some carbon, which forms bubbles of CO when it is sealed to glass under oxidizing conditions. Both are a major source of problems for the technical enamel coating of steel and make direct seals between iron and glass unsuitable for high-vacuum applications. The oxide layer formed on chromium-containing steel can seal vacuum-tight to glass, and the chromium reacts strongly with carbon. Silver-plated iron was used in early microwave tubes.
It is possible to make matched seals between copper or austenitic steel and glass, but silicate glass with such a high thermal expansion is especially fragile and has a low chemical durability.
Another widely used method of sealing through glass with a low coefficient of thermal expansion is the use of strips of thin molybdenum foil. This can be done without matched coefficients of thermal expansion, but then the edges of the strip also have to be knife-sharp. The disadvantage here is that the tip of the edge, which is a local point of high tensile stress, reaches through the wall of the glass container . This can lead to small gas leaks. In the tube-to-tube knife-edge seal, the edge is either outside, inside, or buried within the glass wall.
Another possibility of seal construction is the compression seal. This type of glass-to-metal seal can be used to feed through the wall of a metal container. Here the wire is usually matched to the glass which is inside of the bore of a strong metal part with higher coefficient of thermal expansion. Compression seals can withstand extremely high pressures [ a ] and physical stress such as mechanical and thermal shock. [ 13 ]
Silver chloride , which melts at 457 °C, bonds to glass, metals and other materials and has been used for vacuum seals. Although it can be a convenient way to seal metal into glass, it is not a true glass-to-metal seal but rather a combination of a glass-to-silver-chloride bond and a silver-chloride-to-metal bond; an inorganic alternative to wax or glue bonds.
The mechanical design of a glass-to-metal seal also has an important influence on its reliability. In practical glass-to-metal seals, cracks usually start at the edge of the interface between glass and metal, either inside or outside the glass container. If the metal and the surrounding glass are symmetric, the crack propagates at an angle away from the axis. So, if the glass envelope of the metal wire extends far enough from the wall of the container, the crack will not go through the wall but will reach the surface on the same side where it started, and the seal will not leak despite the crack.
Another important aspect is the wetting of the metal by the glass. If the thermal expansion of the metal is higher than that of the glass, as in the Houskeeper seal, a high contact angle (poor wetting) means that there is a high tensile stress in the surface of the glass near the metal. Such seals usually break inside the glass and leave a thin layer of glass on the metal. If the contact angle is low (good wetting), the surface of the glass is everywhere under compressive stress, like an enamel coating. Ordinary soda-lime glass does not flow on copper at temperatures below the melting point of the copper and thus does not give a low contact angle. The solution is to cover the copper with a solder glass which has a low melting point and does flow on copper, and then to press the soft soda-lime glass onto the copper. The solder glass must have a coefficient of thermal expansion equal to or slightly lower than that of the soda-lime glass. Classically, glasses with a high lead content are used, but it is also possible to substitute multi-component glasses, e.g. based on the system Li2O–Na2O–K2O–CaO–SiO2–B2O3–ZnO–TiO2–BaO–Al2O3. | https://en.wikipedia.org/wiki/Glass-to-metal_seal
The Glass Pavilion , designed by Bruno Taut and built in 1914, was a prismatic glass dome structure at the Cologne Deutscher Werkbund Exhibition . [ 1 ] [ 2 ] The structure was a brightly colored landmark of the exhibition, constructed using concrete and glass. [ 1 ] [ 2 ] The dome had a double glass outer layer with colored glass prisms on the inside and reflective glass on the outside. The facade had inlaid colored glass plates that acted as mirrors. [ 3 ] Taut described his "little temple of beauty" as "reflections of light whose colors began at the base with a dark blue and rose up through moss green and golden yellow to culminate at the top in a luminous pale yellow." [ 3 ]
The Glass Pavilion is Taut's single best-known architectural achievement. [ 1 ] [ 2 ] [ 3 ] He built it for the German glass industry association specifically for the 1914 exhibition. [ 1 ] [ 2 ] [ 3 ] The association financed the structure, which was considered a house of art . [ 4 ] The purpose of the building was to demonstrate the potential of different types of glass for architecture. [ 1 ] It also indicated how the material might be used to orchestrate human emotions and assist in the construction of a spiritual utopia. The structure was made at the time when expressionism was most fashionable in Germany, and it is sometimes referred to as an expressionist-style building. [ 1 ] [ 2 ] [ 3 ] The only known photographs of the building were made in 1914, and these black-and-white images convey only a limited impression of the actual work. [ 1 ] The building was destroyed soon after the exhibition, since it was an exhibition building only and not built for practical use. [ 1 ] [ 3 ]
The Glass Pavilion was a pineapple-shaped, multi-faceted polygonal structure with rhombic facets. [ 1 ] [ 2 ] It had a fourteen-sided base; the exterior walls, devoid of rectangles, were constructed of thick glass bricks. Each part of the cupola was designed to recall the complex geometry of nature. [ 1 ] [ 2 ] The structure stood on a concrete plinth, its entrance reached by two flights of steps (one on either side of the building), which gave the pavilion a temple-like quality. Taut's Glass Pavilion was the first building of importance made of glass bricks. [ 3 ] [ 5 ]
There were glass-treaded metal staircases inside that led to an upper projection room that showed a kaleidoscope of colors. [ 2 ] [ 4 ] Between the staircases was a seven-tiered cascading waterfall with underwater lighting, which created the sensation of descending to the lower level "as if through sparkling water". [ 1 ] [ 2 ] [ 4 ] The interior had prisms producing colored rays from the outside sunlight. [ 1 ] [ 4 ] The floor-to-ceiling colored glass walls were mosaic. [ 1 ] [ 4 ] All this had the effect of a large crystal producing a great variety of colors. [ 1 ] [ 4 ]
The frieze of the Glass Pavilion was inscribed with aphoristic poems about glass by the anarcho-socialist writer Paul Scheerbart . [ 2 ] Examples of these were "Colored glass destroys hatred" and "Without a glass palace, life is a burden". [ 2 ] Scheerbart's ideas also inspired the ritualistic composition of the interior. For Scheerbart, bringing in the light of the moon and the stars brought in different positive feelings, which would lead to a whole new culture.
Paul Scheerbart published a book in 1914 called Glasarchitektur ("Glass Architecture") and dedicated it to Taut. [ 2 ] Taut in 1914 founded a magazine called Frühlicht ("Dawn's Light") for his Expressionist devotees. [ 1 ] It emphasized the iconography of glass, which is also represented by his Glass Pavilion. [ 1 ] This philosophy can be traced back to accounts of Solomon's Temple . [ 1 ] A note on an early drawing of the Glass Pavilion says Taut made it in the spirit of a Gothic cathedral. [ 1 ]
Taut called on architects to follow the contemporary Expressionist painters in seeking a new artistic spirit; he wanted to create a building with a different structure, similar to Gothic cathedrals. Taut said that his building had no real function: it was intended to provoke something in the viewer rather than to serve as a practical building.
The Glass Pavilion or "Glashaus" was one of the first exhibition buildings designed as a mechanism to create vivid experiences, where people would be able to feel, touch, and primarily see.
The goal of this functionless building was that architecture would include the other arts of painting and sculpture to achieve a new, unified expression.
"The longing for purity and clarity, for glowing lightness and crystalline exactness, for immaterial lightness and infinite liveliness found a means of its fulfillment in glass—the most ineffable, most elementary, most flexible and most changeable of materials, richest in meaning and inspiration, fusing with the world like no other. This least fixed of materials transforms itself with every change of atmosphere. It is infinitely rich in elations, mirroring what is above, below, and what is below, above. It is animated, full of spirit and alive ... It is an example of a transcendent passion to build, functionless, free, satisfying no practical demands—and yet a functional building, soulful, awakening spiritual inspirations—an ethical functional building" -Behne
This building was made into an installation, a symbol, a mystical sign and a start for a new world view and future architecture. | https://en.wikipedia.org/wiki/Glass_Pavilion |
Glass batch calculation or glass batching is used to determine the correct mix of raw materials (batch) for a glass melt.
The raw materials mixture for glass melting is termed "batch". The batch must be measured properly to achieve a given, desired glass formulation. This batch calculation is based on the common linear regression equation:
N_B = (B^T · B)^{-1} · B^T · N_G
with N_B and N_G being the 1-column molarity matrices (column vectors) of the batch and glass components respectively, and B being the batching matrix . [ 1 ] [ 2 ] [ 3 ] The symbol " T " stands for the matrix transpose operation, " −1 " indicates matrix inversion , and the sign "·" means the scalar product . From the molarity matrices N, percentages by weight (wt%) can easily be derived using the appropriate molar masses .
An example batch calculation may be demonstrated here. The desired glass composition in wt% is: 67 SiO 2 , 12 Na 2 O , 10 CaO , 5 Al 2 O 3 , 1 K 2 O , 2 MgO , 3 B 2 O 3 , and as raw materials are used sand , trona , lime , albite , orthoclase , dolomite , and borax . The formulas and molar masses of the glass and batch components are listed in the following table:
The batching matrix B indicates the relation of the molarity in the batch (columns) and in the glass (rows). For example, the batch component SiO 2 adds 1 mol SiO 2 to the glass, therefore, the intersection of the first column and row shows "1". Trona adds 1.5 mol Na 2 O to the glass; albite adds 6 mol SiO 2 , 1 mol Na 2 O, and 1 mol Al 2 O 3 , and so on. For the example given above, the complete batching matrix is listed below. The molarity matrix N G of the glass is simply determined by dividing the desired wt% concentrations by the appropriate molar masses, e.g., for SiO 2 67/60.0843 = 1.1151.
The resulting molarity matrix of the batch, N B , is given here. After multiplication with the appropriate molar masses of the batch ingredients one obtains the batch mass fraction matrix M B :
N_B = (0.82087, 0.08910, 0.12870, 0.03842, 0.01062, 0.04962, 0.02155)^T and M_B = (49.321, 20.138, 12.881, 20.150, 5.910, 9.150, 8.217)^T, or, normalized to 100%: M_B = (39.216, 16.012, 10.242, 16.022, 4.699, 7.276, 6.533)^T
The matrix M B , normalized to sum up to 100% as seen above, contains the final batch composition in wt%: 39.216 sand, 16.012 trona, 10.242 lime, 16.022 albite, 4.699 orthoclase, 7.276 dolomite, 6.533 borax. If this batch is melted to a glass, the desired composition given above is obtained. [ 4 ] During glass melting, carbon dioxide (from trona, lime, dolomite) and water (from trona, borax) evaporate.
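As a cross-check, the worked example can be reproduced in a few lines of numpy. The batching matrix and molar masses below are transcribed from the example above (molar masses rounded, so the last digit may differ); np.linalg.lstsq computes the same least-squares solution as the normal-equations formula, and also handles the non-square cases discussed further below.

```python
import numpy as np

# Rows: SiO2, Na2O, CaO, Al2O3, K2O, MgO, B2O3 (glass oxides)
# Columns: sand, trona, lime, albite, orthoclase, dolomite, borax
B = np.array([
    [1.0, 0.0, 0.0, 6.0, 6.0, 0.0, 0.0],  # SiO2
    [0.0, 1.5, 0.0, 1.0, 0.0, 0.0, 1.0],  # Na2O
    [0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0],  # CaO
    [0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0],  # Al2O3
    [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],  # K2O
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],  # MgO
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0],  # B2O3
])

glass_wt = np.array([67.0, 12.0, 10.0, 5.0, 1.0, 2.0, 3.0])  # desired wt%
M_oxide = np.array([60.084, 61.979, 56.077, 101.961, 94.196, 40.304, 69.620])
M_batch = np.array([60.084, 226.03, 100.087, 524.45, 556.66, 184.40, 381.37])

N_G = glass_wt / M_oxide                      # oxide molarities N_G
N_B = np.linalg.lstsq(B, N_G, rcond=None)[0]  # batch molarities N_B
M_B = N_B * M_batch                           # masses of the raw materials
batch_wt = 100.0 * M_B / M_B.sum()            # batch normalized to 100 wt%

names = ["sand", "trona", "lime", "albite", "orthoclase", "dolomite", "borax"]
for name, w in zip(names, batch_wt):
    print(f"{name:10s} {w:7.3f} wt%")  # ~39.22, 16.01, 10.24, 16.02, 4.70, 7.28, 6.53
```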
A simple glass batch calculation can be found at the website of the University of Washington. [ 5 ]
If the number of glass and batch components is not equal, if it is impossible to exactly obtain the desired glass composition using the selected batch ingredients, or if the matrix equation is not solvable for other reasons (i.e., the rows/columns are linearly dependent ), the batch composition must be determined by optimization techniques . | https://en.wikipedia.org/wiki/Glass_batch_calculation
Glass beads composed of soda lime glass are essential for providing retroreflectivity in many kinds of road surface markings . [ 1 ] Retroreflectivity occurs when incident light from vehicles is refracted within glass beads that are embedded in road surface markings and then reflected back into the driver's field of view. [ 2 ] In North America, approximately 227 million kilograms (500 million lb) of glass beads are used for road surface markings annually. [ 3 ] Roughly 520 kilograms (1,150 lb) of glass beads are used per mile during remarking of a five-lane highway system, [ 4 ] and road remarking can occur every two to five years. [ 4 ] In the United States, the massive demand for glass beads has led to imports from countries with outdated manufacturing regulations and techniques.
These techniques include the use of heavy metals such as arsenic , antimony , and lead during the manufacturing process as decolorizers and fining agents . It has been found that the heavy metals become incorporated into the bead's glass matrix and may leach under environmental conditions that roads experience. [ 5 ]
The synthesis of these beads begins when calcium carbonate is heated to anywhere from 800 to 1300 °C. This heating causes a decomposition reaction which forms solid calcium oxide and releases carbon dioxide gas.
Similarly, sodium carbonate decomposes to sodium oxide and releases carbon dioxide gas.
Sodium oxide is then reacted with silica to produce sodium silicate liquid glass.
Lastly, to complete the general structure of the soda-lime glass, calcium oxide is dissolved in solution with sodium silicate glass, which ultimately reduces the softening temperature of the glass. [ 6 ] Additional metals and ions are added to this melted glass to improve its properties, and the compound is then sprayed and formed into beads using either the direct or indirect method.
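Written out, the reaction sequence described above takes the following standard forms (reconstructed here for clarity; the source article's own equations did not survive extraction):

CaCO₃ → CaO + CO₂↑
Na₂CO₃ → Na₂O + CO₂↑
Na₂O + SiO₂ → Na₂SiO₃ (sodium silicate glass, into which the CaO is then dissolved)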
Overall, the percent composition of major compounds found in the final glass bead product is shown below. [ 3 ]
In addition to these primary components of soda-lime glass, manufacturers include the heavy metals arsenic, antimony , and lead to refine and improve the properties of the glass bead. Lead in the form of PbO is added to increase the durability of the glass to withstand harsh road conditions. [ 8 ] Arsenic and antimony are used as fining agents that facilitate the removal of gas bubbles from the molten mixture. [ 9 ] Carbon dioxide produced by the decomposition of calcium carbonate and sodium carbonate is removed to obtain the required retroreflective properties of the glass. In addition, both arsenic and antimony are used as decolorizers. Having a colorless glass is crucial to maximizing retroreflectivity. Arsenic in its inorganic form assists in the decolorization of the glass by controlling iron's oxidation state. [ 3 ] Arsenic oxidizes ferrous oxide to its less colorful counterpart, ferric oxide .
Antimony in the form of Sb 2 O 5 performs a similar reaction as arsenic, oxidizing ferrous oxide to ferric oxide.
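As redox equations, these decolorizing reactions are commonly written as follows (a standard textbook reconstruction, not quoted from the cited sources):

As₂O₅ + 2 FeO → As₂O₃ + Fe₂O₃
Sb₂O₅ + 2 FeO → Sb₂O₃ + Fe₂O₃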
While these three heavy metals can typically be found in both domestic and imported glass beads, they vary in concentration. According to the US Environmental Protection Agency , the Resource Conservation and Recovery Act limits the levels of heavy metal content in accordance with their toxicity . [ 11 ] Due to increasing demands for marked roads, however, the majority of glass beads used in the U.S. are imported from countries with little to no regulation on heavy metal content. For example, beads obtained from North America contain approximately 15 mg of arsenic per kg of beads, while some from China have concentrations of up to 1000 mg/kg. [ 3 ] Imported bead concentrations of each of these metals are listed in the table below.
Environmental conditions can cause degradation of glass beads, leading to the release of incorporated heavy metals into the environment. [ 3 ] While abrasion may dislodge these beads from the road marking itself, the reaction of the beads with an aqueous environment vastly accelerates their decomposition and heavy metal release.
There are three reactions involved in the corrosion of silicon dioxide. The first is an ion exchange reaction, in which mobile ions in solution are exchanged for ions of similar charge on the solid. In particular, the glass acts as a cation exchange material: its negatively charged structural backbone allows the replacement of positively charged cations. [ 12 ] In the degradation of soda-lime beads, the ions associated with the silicon-oxygen network (e.g. Na⁺, Ca²⁺, K⁺, Mg²⁺) are replaced by hydrogen ions.
In addition to this reaction, a hydroxyl ion can attack the Si–O bond, causing dissolution of the SiO₂ matrix and creating silanol and non-bridging oxygen groups.
As dissolution occurs, the non-bridging oxygen groups can abstract hydrogen ions from solution.
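Schematically, the three corrosion reactions can be written as follows, where ≡Si denotes a silicon atom bound into the glass network (a standard schematic reconstruction, not quoted from the cited sources):

≡Si–O⁻ Na⁺ + H⁺ → ≡Si–OH + Na⁺ (ion exchange)
≡Si–O–Si≡ + OH⁻ → ≡Si–OH + ⁻O–Si≡ (network dissolution)
≡Si–O⁻ + H₂O → ≡Si–OH + OH⁻ (protonation of non-bridging oxygen)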
An increase in the concentration of hydroxyl ions comes with increased alkalinity of the aqueous solution. This increase in pH has been shown, in various column leaching studies, to increase the reduction potential and the dissolved organic carbon (DOC) concentration of the solution. This ultimately leads to increased mobility of many metals, including arsenic, copper , and nickel .
The mobility of these heavy metals is therefore affected by the presence of alkali oxides. The Na⁺, Ca²⁺, Mg²⁺, and K⁺ ions can associate with the tetrahedral networks of silicon and oxygen, forming a trigonal antiprism network. In a trigonal antiprism formation, the ions coordinate with three oxygen atoms at a distance of 2.3 angstroms and another three oxygen atoms at a nonbonding distance of 3 angstroms. As the concentration of alkali oxides in the glass beads increases, the probability of chemical attack increases because the glass network becomes more open and accessible. [ 3 ]
During both routine road marking removal and harsh environmental conditions, these glass beads can degrade and leach incorporated heavy metals. Although the exact mechanism of heavy metal incorporation into the glass beads is unknown, the current literature hypothesizes that the heavy metals are associated with alkali and alkaline earth metals on the surface of glass beads. Environmental conditions relevant to road surfaces such as pH, different salts , and ionic strength strongly influence the leaching process. In particular, pH determines the speciation of the heavy metal, which is critical for solubility in the aqueous phase. The following graphs show the speciation of heavy metals as a function of pH. [ 3 ]
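Because the speciation graphs themselves are not reproduced here, the pH dependence can be illustrated with a short calculation. The sketch below computes the fractions of the four protonation states of arsenate (H3AsO4) from the standard polyprotic-acid speciation formula; the pKa values are common textbook figures assumed for illustration, not taken from the cited study.

```python
# Fractions of the protonation states of arsenate (H3AsO4) vs. pH.
# pKa values are common textbook figures (assumed; verify before use).
PKAS = [2.19, 6.94, 11.5]

def speciation(pH, pKas=PKAS):
    """Return fractions [H3AsO4, H2AsO4-, HAsO4 2-, AsO4 3-] at a given pH."""
    h = 10.0 ** -pH
    # Each successive deprotonation multiplies the population by Ka / [H+].
    terms = [1.0]
    for pKa in pKas:
        terms.append(terms[-1] * (10.0 ** -pKa) / h)
    total = sum(terms)
    return [t / total for t in terms]

for pH in (2, 4, 7, 9, 12):
    print(pH, [round(f, 3) for f in speciation(pH)])
# Near-neutral runoff (pH ~ 7) is dominated by the mobile anions
# H2AsO4- and HAsO4 2-, the forms most relevant to leaching.
```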
Few U.S. states have regulations on leached concentrations of heavy metals. For example, New Jersey limits arsenic to 3 μg/L, lead to 65 μg/L, and antimony to 78 μg/L. In studies that subjected batches of glass beads to environmental conditions in a lab setting, 96% of the leached concentrations of arsenic exceeded 3 μg/L, 75% of leached lead exceeded 65 μg/L, and 27% of the leached concentrations of antimony exceeded the criterion of 78 μg/L. [ 13 ] The following graphs show the total concentrations of heavy metals leached from glass beads after 160 days as a function of pH, salt type, and ionic strength. [ 3 ]
Once the arsenic is mobilized in aqueous form, humic substances interact with it. It has been shown that, particularly under acidic conditions, humic acids contribute immensely to the retention of arsenic in the soil matrix . [ 14 ] While an exact mechanism for this has not been confirmed, it has been hypothesized that humic acids act as anion exchange moieties, potentially through the interaction of amines within the humic material with arsenic. This is only likely if the amine is quaternary, which is consistent with the low-pH observation, as similar resins are used to separate As(III) and As(V). Another possible mechanism of arsenic's interaction with humic substances is through metal complexes. Potentially, arsenic adsorption could occur via a humic-acid-metal-As bridging ligand , or arsenic could be adsorbed to clay that is itself bound to the humic acid. [ 15 ]
Lead, on the other hand, has been shown to bind increasingly to humic substances with increasing pH and decreasing ionic strength. Research has indicated that monodentate lead binds to a relatively high degree to carboxylic-type groups present in humic materials. There is also evidence of bidentate lead binding to phenolic-type groups in the ortho position in humic material when concentrations of lead are high, as is the case for soils near marked roads. [ 16 ]
In the case of antimony, qualitative studies of its association with humic substances are scarce and rarely conclusive. It has been shown in many cases, however, that pH has little influence on these interactions. One study indicated that organic ligands possessing carboxylic or hydroxyl groups create stable bidentate chelates with its Sb(III) and Sb(V) species. Another indicated that Sb(III), when bound to humic material, is easily oxidized and can be released back into aqueous solution as Sb(OH)₆⁻, suggesting that Sb(V) is more commonly bound to humic material. The details of how this binding occurs mechanistically remain relatively unresolved, but knowledge of the primary form of its binding is important for furthering this research. [ 17 ]
Retroreflectivity is essential to safe driving conditions. While metals are necessary to achieve it, there are other, non-toxic metals that can achieve the same results. These may include zirconium , tungsten , titanium , and barium . [ 18 ] The amount of these metals that can be incorporated into the glass varies based on its country of origin and the regulations in force there, but further research on alternatives to heavy metal usage in road markings would assist in reducing heavy metal leachate in roadside soils. | https://en.wikipedia.org/wiki/Glass_bead_road_surface_marking