| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
16,266,461 | https://en.wikipedia.org/wiki/Montreal%20Laboratory | The Montreal Laboratory was a program established by the National Research Council of Canada during World War II to undertake nuclear research in collaboration with the United Kingdom, and to absorb some of the scientists and work of the Tube Alloys nuclear project in Britain. It became part of the Manhattan Project, and designed and built some of the world's first nuclear reactors.
After the Fall of France, some French scientists escaped to Britain with their stock of heavy water. They were temporarily installed in the Cavendish Laboratory at the University of Cambridge, where they worked on reactor design. The MAUD Committee was uncertain whether this was relevant to the main task of Tube Alloys, that of building an atomic bomb, although there remained a possibility that a reactor could be used to breed plutonium, which might be used in one. It therefore recommended that they be relocated to the United States, and co-located with the Manhattan Project's reactor effort. Due to American concerns about security (many of the scientists were foreign nationals) and patent claims by the French scientists and Imperial Chemical Industries (ICI), it was decided to relocate them to Canada instead.
The Canadian government agreed to the proposal, and the Montreal Laboratory was established in a house belonging to McGill University; it moved to permanent accommodation at the Université de Montréal in March 1943. The first eight laboratory staff arrived in Montreal at the end of 1942. These were Bertrand Goldschmidt and Pierre Auger from France, George Placzek from Czechoslovakia, S. G. Bauer from Switzerland, Friedrich Paneth and Hans von Halban from Austria, and R. E. Newell and F. R. Jackson from Britain. The Canadian contingent included George Volkoff, Bernice Weldon Sargent and George Laurence, and promising young Canadian scientists such as J. Carson Mark, Phil Wallace and Leo Yaffe.
Although Canada was a major source of uranium ore and heavy water, these were controlled by the Americans. Anglo-American cooperation broke down, denying the Montreal Laboratory scientists access to the materials they needed to build a reactor. In 1943, the Quebec Agreement merged Tube Alloys with the American Manhattan Project. The Americans agreed to help build the reactor. Scientists who were not British subjects left, and John Cockcroft became the new director of the Montreal Laboratory in May 1944. The Chalk River Laboratories opened in 1944, and the Montreal Laboratory was closed in July 1946. Two reactors were built at Chalk River. The small ZEEP went critical on 5 September 1945, and the larger NRX on 21 July 1947. NRX was for a time the most powerful research reactor in the world.
Early nuclear research in Canada
Canada has a long history of involvement with nuclear research, dating back to the pioneering work of Ernest Rutherford at McGill University in 1899. In 1940, George Laurence of the National Research Council (NRC) began experiments in Ottawa to measure neutron capture and nuclear fission in uranium to demonstrate the feasibility of a nuclear reactor. For that purpose, he obtained uranium dioxide, packed in paper bags, from the Eldorado Mine at Port Radium in the Northwest Territories. For a neutron moderator, he used carbon in the form of petroleum coke. This was placed with the bags of uranium oxide in a large wooden bin lined with paraffin wax, another neutron moderator. A neutron source was added and a Geiger counter used to measure radioactivity.
The experiments continued into 1942, but were ultimately unsuccessful; the problems posed by impurities in the coke and uranium oxide had not been fully appreciated, and as a result too many neutrons were captured. But Laurence's efforts attracted some attention, and in the summer of 1940 he was visited by R. H. Fowler, the British scientific liaison officer in Canada. This was followed by a visit from John Cockcroft of the British Tizard Mission to the United States in the autumn. They brought news of similar research being carried out under the supervision of the MAUD Committee in Britain and the National Defense Research Committee (NDRC) in the United States.
Fowler became the channel of communication between the NDRC and its counterparts in Britain and Canada. Through him, Laurence obtained an introduction to Lyman J. Briggs, the chairman of the NDRC's S-1 Uranium Committee, who supplied copies of American studies. On returning to England, Cockcroft arranged through Lord Melchett for Laurence to receive a $5,000 grant to continue his research. This payment was made by Imperial Chemical Industries (ICI) through a Canadian subsidiary. It had the desired side effect of impressing the Canadian authorities with the importance of Laurence's work.
French connection
Laurence had chosen to use carbon instead of heavy water because it was cheaper and more readily available. A team of scientists in France that included Hans von Halban, Lew Kowarski, and Francis Perrin had been conducting similar experiments since 1939. By 1940, they had decided to use heavy water as a moderator, and through the French Minister of Armaments obtained a stock of it from the Norsk Hydro hydroelectric station at Vemork in Norway. After the Fall of France, they had escaped to Britain with their stock of heavy water. They were temporarily installed in the Cavendish Laboratory at the University of Cambridge but, believing that Britain would soon fall as well, were eager to relocate to the United States or Canada.
Canada was an alternative source of heavy water. Cominco had been involved in heavy water research since 1934, and produced it at its smelting plant in Trail, British Columbia. On 26 February 1941, the NRC inquired about its ability to produce heavy water. This was followed on 23 July by a letter from Hugh Taylor, a British-born scientist working at Princeton University, on behalf of the Office of Scientific Research and Development (OSRD). Taylor offered an NDRC contract to produce heavy water, for which the NDRC was prepared to pay $5 per pound for low-grade and $10 for high-grade heavy water. At the time it was selling for up to $1,130 per pound.
Cominco's president, Selwyn G. Blaylock, was cautious. There might be no post-war demand for heavy water, and the patent on the process was held by Albert Edgar Knowles, so a profit-sharing agreement would be required. In response, Taylor offered $20,000 for plant modifications. There the matter rested until 6 December, when Blaylock had a meeting with the British physicist G. I. Higson, who informed him that Taylor had become discouraged with Cominco, and had decided to find another source of heavy water. Blaylock invited Taylor to visit Trail, which he did from 5 to 8 January 1942. The two soon found common ground. Blaylock agreed to produce heavy water at Trail, and quickly secured approval from the chairman of the board, Sir Edward Beatty. A contract was signed on 1 August 1942. The heavy water project became known as the P-9 Project in October 1942.
The French scientists made good progress on the design of an aqueous homogeneous reactor, but there were doubts that their work was relevant to the main task of the British Tube Alloys project, that of building an atomic bomb, and resources were tightly controlled in wartime Britain. There was a possibility that a reactor could be used to breed plutonium, but its use in a bomb seemed a remote possibility. The MAUD Committee therefore felt that they should relocate to America. It made sense to pool resources, and America had advantages, notably access to materials such as heavy water. American scientists such as Henry D. Smyth, Harold Urey and Hugh Taylor urged that the Cambridge team be sent to America. On the other hand, American officials had concerns about security, since only one of the six senior scientists in the Cambridge group was British, and about French patent claims. These included patents on controlling nuclear chain reactions, enriching uranium, and using deuterium as a neutron moderator. There were also two patent applications in conjunction with Egon Bretscher and Norman Feather on the production and use of plutonium. George Thomson, the chairman of the MAUD Committee, suggested a compromise: relocating the team to Canada.
Establishment
The next step was to broach the matter with the Canadians. The Lord President, Sir John Anderson, as the minister responsible for Tube Alloys, wrote to the British High Commissioner to Canada, Malcolm MacDonald, who had been involved in Tube Alloys negotiations with Canada regarding Eldorado's uranium mine at Port Radium and its refinery in Port Hope, Ontario. On 19 February 1942, MacDonald, Thomson and Wallace Akers, the director of Tube Alloys, met with C. J. Mackenzie, the president of the NRC, who enthusiastically supported the proposal. The following day he took them to see C. D. Howe, the Minister of Munitions and Supply.
Howe cabled Anderson expressing the Canadian government's agreement in principle, but requesting a more detailed appraisal of the cost of the proposed laboratory. Sir John Anderson replied that he envisaged a laboratory with about 30 scientists and 25 laboratory assistants, of whom 22 scientists and 6 laboratory assistants would be sent from Britain. The estimated running cost was £60,000 per annum. He agreed that the costs and salaries would be divided between the British and Canadian governments, but the British share would come from a billion-dollar war gift from Canada. The Canadians found this acceptable. Howe and Mackenzie then travelled to London to finalise arrangements for the laboratory's governance. It was agreed that it would be run by a Policy Committee consisting of Howe and MacDonald and be administered by and funded through the NRC, with research directed by a Technical Committee chaired by Halban.
The Canadians decided that the new laboratory should be located in Montreal, where housing accommodation was easier to find than in wartime Ottawa. They hoped to have everything ready by 1 January 1943, but negotiations for laboratory space fell through. A search then commenced for an alternative location. Bertrand Goldschmidt, a French scientist who was already in Canada, ran into Henri Laugier, a French biologist who had been president of the Centre national de la recherche scientifique before the Fall of France, when he had escaped to Canada. Laugier suggested that they acquire some unused wings of a new building at the Université de Montréal, where he was now teaching. These had been earmarked for a medical school, but had never been equipped due to a lack of funds. The space was acquired, but considerable work was required to convert it into a laboratory, and it could not be made ready before mid-February 1943. Ernest Cormier, the university architect, drew up the plans.
The first eight staff arrived in Montreal at the end of 1942. These were Goldschmidt and Pierre Auger from France, George Placzek from Czechoslovakia, S. G. Bauer from Switzerland, Friedrich Paneth and Halban from Austria, and R. E. Newell and F. R. Jackson from Britain. The Battle of the Atlantic was still raging, and men and equipment, which travelled separately, were at risk from German U-boats. The scientists occupied a house at 3470 Simpson Street in downtown Montreal that belonged to McGill University. This soon became so crowded that bathrooms were used for offices, with the bath tubs used to store papers and books. They were relieved to move to the more spacious accommodation at the Université de Montréal in March. The laboratory grew to over 300 staff, about half of whom were Canadians recruited by Laurence.
Placzek became head of the theoretical physics division. Kowarski was designated to be the head of the experimental physics division, but there was a personality clash with Halban, and Kowarski did not wish to accept what he saw as a subordinate position under Halban. At this point, many other scientists said that they would not go without Kowarski, but Sir Edward Appleton, the permanent secretary of the British Department of Scientific and Industrial Research, of which Tube Alloys was a part, managed to persuade them to go. Kowarski remained at Cambridge, where he worked for James Chadwick. Auger became head of the experimental physics division instead. Paneth became head of the chemistry division. Two other scientists who had escaped from France joined the laboratory: the French chemist Jules Guéron, who had been working for Free France at Cambridge, and Bruno Pontecorvo, an Italian scientist who had worked with Enrico Fermi in Italy before the war.
For the Canadian contingent, Laurence and Mackenzie set out to recruit some top nuclear physicists, of whom there were few in Canada. The first was George Volkoff at the University of British Columbia, who had worked with Robert Oppenheimer on the physics of neutron stars. They also tried to recruit Harry Thode from McMaster University, but found that Harold Urey from the Manhattan Project's SAM Laboratories was also interested in Thode's expertise in testing heavy water with mass spectrography, and had made a more attractive offer. A compromise was reached whereby Thode did work for the Montreal Laboratory, but remained at McMaster University. Promising young Canadian scientists were also recruited, including J. Carson Mark, Phil Wallace and Leo Yaffe.
Research
The Montreal Laboratory investigated multiple avenues of reactor development. One was a homogeneous reactor, in which a uranium compound was dissolved in heavy water to form a slurry, or a "mayonnaise" as the Montreal team called it. This offered various advantages for cooling, control and the ability to draw off the plutonium that was produced. Paneth, Goldschmidt and others experimented with methods of preparing such a uranium compound, but none could be found with the required density. They considered using enriched uranium, but it was unavailable. Attention then turned to a heterogeneous reactor, in which a lattice of uranium metal rods was immersed in heavy water. While much less heavy water would be required, there was a danger that the water would decompose into deuterium and oxygen, a potentially explosive combination. There was great interest in breeder reactors, which could breed plutonium from uranium or uranium-233 from thorium, as it was believed that uranium was scarce. A process was devised for separating the uranium from thorium.
To build a working nuclear reactor, the Montreal Laboratory depended on the Americans for heavy water from Trail, which was under American contract, but this was not forthcoming. An American request for Halban to come to New York to discuss heavy water with Fermi and Urey was turned down by the British, and the Americans brought cooperation to a standstill. By June 1943 work at the Montreal Lab had come to a halt. Morale was low and the Canadian Government proposed cancelling the project. The British government seriously considered going it alone on developing nuclear weapons, despite the cost and the expected length of the project. In August 1943, Canadian Prime Minister Mackenzie King hosted the Quebec Conference, at which Winston Churchill and Franklin D. Roosevelt came together, and agreed to resume cooperation. The Quebec Agreement subsumed Tube Alloys into the Manhattan Project, and established the Combined Policy Committee, on which Canada was represented by Howe, to control the Manhattan Project.
While some aspects of cooperation resumed quickly, it took longer to finalize the details with respect to the Montreal Laboratory. Brigadier General Leslie Groves (the director of the Manhattan Project), Chadwick (now the head of the British Mission to the Manhattan Project), and Mackenzie negotiated recommendations, which were approved by the Combined Policy Committee on 13 April 1944. A final agreement was spelt out on 20 May. Under it, the Americans would assist with the construction of a heavy water reactor in Canada, and would provide technical assistance with matters such as corrosion and the effects of radiation on materials. They would not provide details about plutonium or plutonium chemistry, although irradiated uranium slugs would be made available for the British to work it out for themselves. The Americans had already built their own heavy water reactor, Chicago Pile-3, which went critical in May 1944. The September 1944 Hyde Park Agreement extended both commercial and military cooperation into the post-war period.
Hans von Halban had proved to be an unfortunate choice as he was a poor administrator, and did not work well with Mackenzie or the NRC. The Americans saw him as a security risk, and objected to the French atomic patents claimed by the Paris Group (in association with ICI). In April 1944 a Combined Policy Committee meeting at Washington agreed that Canada would build a heavy water reactor. Scientists who were not British subjects would leave, and Cockcroft became the new director of the Montreal Laboratory in May 1944. E. W. R. Steacie became assistant director and head of the Chemistry division when Paneth left. Volkoff eventually succeeded Placzek as head of the Theoretical Physics division. Halban remained as head of the nuclear physics division.
After the Liberation of Paris in August 1944, the French scientists wanted to go home. Auger had already returned to London to join the French Scientific Mission in April 1944. Halban returned on a visit to London and Paris in November 1944, where he saw Frédéric Joliot-Curie for the first time since leaving France. While he maintained that he did not divulge any nuclear secrets to his previous boss (although he had discussed patent rights), Halban was not allowed to work or to leave North America for a year, although he left the Montreal Laboratory in April 1945. In 1946 he settled in England. B. W. Sargent then became head of the nuclear physics division. Cockcroft arranged for Goldschmidt, Guéron and Kowarski to remain until June 1945, later extended until the end of 1945. Goldschmidt was willing to stay longer, and Cockcroft wanted to keep him, but Groves insisted that he should go, and, in the interest of Allied harmony, he did. All the French scientists had left by January 1946.
On 24 August 1944, the decision was taken to build a small reactor to test the group's calculations relating to such matters as lattice dimensions, sheathing materials, and control rods, before proceeding with the full-scale NRX reactor. With Halban gone, Kowarski joined the laboratory, and was given responsibility for the small reactor, which he named ZEEP, for Zero Energy Experimental Pile. He was assisted in the design by Charles Watson-Munro from New Zealand, and George Klein and Don Nazzer from Canada. Building reactors in downtown Montreal was out of the question; the Canadians selected, and Groves approved, a site at Chalk River, Ontario, on the south bank of the Ottawa River northwest of Ottawa.
The Americans fully supported the reactor project with information and visits. Groves loaned the Montreal Laboratory heavy water and pure uranium metal for the reactor, and samples of pure and irradiated uranium and thorium to develop the extraction process. The irradiated materials came from the Manhattan Project's X-10 Graphite Reactor at the Clinton Engineer Works at Oak Ridge, Tennessee. Some machined pure uranium rods were sold outright to Canada. He also supplied instruments, drawings and technical information, provided expertise from American scientists, and opened a liaison office in Montreal headed by Major H. S. Benbow. The American physicist William Weldon Watson from the Metallurgical Laboratory and chemist John R. Huffman from the SAM Laboratories were assigned to it. They were succeeded by George Weil in November 1945. Benbow was succeeded by Major P. Firmin in December 1945, who in turn was replaced by Colonel A. W. Nielson in February 1946.
The Chalk River Laboratories opened in 1944, and the Montreal Laboratory was closed in July 1946. ZEEP went critical on 5 September 1945, becoming the first operating nuclear reactor outside the United States. Fuelled with uranium metal and moderated by heavy water, it could operate continuously at 3.5 W, or for brief periods at 30 to 50 W. The larger NRX followed on 21 July 1947. With five times the neutron flux of any other reactor, it was the most powerful research reactor in the world. Originally designed in July 1944 with an output of 8 MW, the power was raised to 10 MW through design changes such as replacing uranium rods clad in stainless steel and cooled by heavy water with aluminium-clad rods cooled by light water.
By the end of 1946, the Montreal Laboratory was estimated to have cost US$22,232,000, excluding the cost of the heavy water. The NRX reactor provided Britain, the United States and Canada with a source of fissile plutonium and uranium-233. It also provided a means of efficiently producing medical isotopes like phosphorus-32, research facilities that for a time were superior to those in the United States, and a wealth of technical information related to reactor design and operation. With the passage of the Canadian Atomic Energy Act of 1946, the responsibility for the Chalk River Laboratories passed to the Atomic Energy Control Board.
Atomic spies
On 5 September 1945, Igor Gouzenko, a cypher clerk at the Soviet Union's embassy in Ottawa, and his family defected to Canada. He brought with him copies of cables detailing Soviet intelligence (GRU) espionage activities in Canada. Agents included Alan Nunn May, who secretly supplied tiny samples of uranium-233 and uranium-235 to GRU agent Pavel Angelov in July 1945; Fred Rose, a member of parliament; and NRC scientists Israel Halperin, Edward Mazerall and Durnford Smith. Pontecorvo, who defected to the Soviet Union in 1950, has long been suspected of having been involved in espionage. No evidence that he was a Soviet agent has ever been established, but the GRU obtained samples of uranium and blueprints of the NRX, for which Nunn May could not have been the source, and Pontecorvo remains the prime suspect. When the spy ring became public knowledge in February 1946, the Americans became more cautious about sharing information with Britain and Canada.
Cooperation ends
The Montreal Laboratory had been a fruitful and successful international venture, although the Canadians had on occasion been resentful of British actions that were perceived as high-handed and insensitive. One such action came in November 1945 when the British government suddenly announced that Cockcroft had been appointed the head of the new Atomic Energy Research Establishment in Britain without any prior consultation and at a time when the NRX reactor was still under construction. Cockcroft did not depart Canada until September 1946, but it was a sure sign of waning British interest in collaboration with Canada. The British suggested he be replaced by the British physicist Bennett Lewis, who was eventually appointed, but only after the Canadian-born Walter Zinn turned the job down.
Anglo-American cooperation did not long survive the war. Roosevelt died on 12 April 1945, and the Hyde Park Agreement was not binding on subsequent administrations. The Special Relationship between Britain and the United States "became very much less special". The British government had trusted that America would share nuclear technology, which the British considered a joint discovery. On 9 November 1945, Mackenzie King and British Prime Minister Clement Attlee went to Washington, D.C., to confer with President Harry Truman about future cooperation in nuclear weapons and nuclear power. A Memorandum of Intention that replaced the Quebec Agreement made Canada a full partner. The three leaders agreed that there would be full and effective cooperation, but British hopes for a resumption of cooperation on nuclear weapons were in vain. The Americans soon made it clear that cooperation was restricted to basic scientific research.
At the Combined Policy Committee meeting in February 1946, without prior consultation with Canada, the British announced their intention to build a graphite-moderated nuclear reactor in the United Kingdom. An outraged Howe told Canadian ambassador Lester B. Pearson to inform the committee that nuclear cooperation between Britain and Canada was at an end. The Canadians had been given what they deemed assurances that the Chalk River Laboratories would be a joint enterprise, and regarded the British decision as a breach of faith. Anglo-American cooperation largely ended in April 1946 when Truman declared that the United States would not assist Britain in the design, construction or operation of a plutonium production reactor. The Americans had agreed that such a facility could be built in Canada, but the British were not willing to be dependent on Canada for the supply of fissile material.
Notes
References
External links
National Research Council (Canada)
Nuclear research institutes
Research institutes in Canada
Nuclear history of the United Kingdom
History of the Manhattan Project
Nuclear technology in Canada
Université de Montréal
History of Montreal
Canada in World War II
1942 establishments in Canada
1946 disestablishments in Canada
United Kingdom–United States relations
Canada–United Kingdom relations
Canada–United States relations | Montreal Laboratory | ["Engineering"] | 4,997 | ["Nuclear research institutes", "Nuclear organizations"] |
16,267,934 | https://en.wikipedia.org/wiki/Actinorhizal%20plant | Actinorhizal plants are a group of angiosperms characterized by their ability to form a symbiosis with the nitrogen-fixing actinomycete Frankia. This association leads to the formation of nitrogen-fixing root nodules.
Actinorhizal plants are distributed within three clades, and are characterized by nitrogen fixation. They are distributed globally, and are pioneer species in nitrogen-poor environments. Their symbiotic relationships with Frankia evolved independently over time, and the symbiosis occurs in the root nodule infection site.
Classification
Actinorhizal plants are dicotyledons distributed within 3 orders, 8 families and 26 genera, of the angiosperm clade.
All nitrogen-fixing plants are classified under the "Nitrogen-Fixing Clade", which consists of the three actinorhizal plant orders, as well as the order Fabales. The most well-known nitrogen-fixing plants are the legumes, but they are not classified as actinorhizal plants. The actinorhizal species are either trees or shrubs, except for those in the genus Datisca, which are herbs. Actinorhizal species common in temperate regions include alder, bayberry, sweetfern, avens, mountain misery and coriaria. Some species of Elaeagnus and the sea-buckthorns produce edible fruit. What characterizes an actinorhizal plant is the symbiotic relationship it forms with the bacterium Frankia, which infects the roots of the plant. This relationship is responsible for the nitrogen-fixing qualities of the plants, and is what makes them important to nitrogen-poor environments.
Distribution and ecology
Actinorhizal plants are found on all continents except for Antarctica. Their ability to form nitrogen-fixing nodules confers a selective advantage in poor soils, making them pioneer species in places where available nitrogen is scarce, such as moraines, volcanic flows or sand dunes. Being among the first species to colonize these disturbed environments, actinorhizal shrubs and trees play a critical role, enriching the soil and enabling the establishment of other species in an ecological succession. Actinorhizal plants like alders are also common in riparian forests.
They are also major contributors to nitrogen fixation in broad areas of the world, and are particularly important in temperate forests. The nitrogen fixation rates measured for some alder species are as high as 300 kg of N2/ha/year, close to the highest rate reported in legumes.
Evolutionary origin
No fossil records are available concerning nodules, but fossil pollen of plants similar to modern actinorhizal species has been found in sediments deposited 87 million years ago. The origin of the symbiotic association remains uncertain. The ability to associate with Frankia is a polyphyletic character and has probably evolved independently in different clades. Nevertheless, actinorhizal plants and legumes, the two major nitrogen-fixing groups of plants, share a relatively close ancestor, as they are all part of a clade within the rosids often called the nitrogen-fixing clade. This ancestor may have developed a "predisposition" to enter into symbiosis with nitrogen-fixing bacteria, and this led to the independent acquisition of symbiotic abilities by ancestors of the actinorhizal and legume species. The genetic program used to establish the symbiosis has probably recruited elements of the arbuscular mycorrhizal symbiosis, a much older and widely distributed symbiotic association between plants and fungi.
The symbiotic nodules
As in legumes, nodulation is favored by nitrogen deprivation and is inhibited by high nitrogen concentrations. Depending on the plant species, two mechanisms of infection have been described. The first is observed in casuarinas or alders and is called root hair infection. In this case the infection begins with the intracellular penetration of a root hair by a Frankia hypha, and is followed by the formation of a primitive symbiotic organ known as a prenodule. The second mechanism is called intercellular entry and is well described in Discaria species. In this case the bacteria penetrate the root extracellularly, growing between epidermal cells and then between cortical cells. Later on Frankia becomes intracellular, but no prenodule is formed. In both cases the infection leads to cell divisions in the pericycle and the formation of a new organ consisting of several lobes anatomically similar to a lateral root. Cortical cells of the nodule are invaded by Frankia filaments coming from the site of infection or the prenodule. Actinorhizal nodules generally have indeterminate growth; new cells are therefore continually produced at the apex and successively become infected. Mature cells of the nodule are filled with bacterial filaments that actively fix nitrogen. No equivalent of the rhizobial Nod factors has been found, but several genes known to participate in the formation and functioning of legume nodules (coding for haemoglobin and other nodulins) are also found in actinorhizal plants, where they are thought to play similar roles. The lack of genetic tools in Frankia and in actinorhizal species was the main factor behind the poor understanding of this symbiosis, but the recent sequencing of three Frankia genomes and the development of RNAi and genomic tools in actinorhizal species should lead to a far better understanding in the coming years.
Notes
References
External links
Frankia and Actinorhizal plant Website
Biogeochemical cycle
Cycle
Nitrogen cycle
Soil biology
Symbiosis | Actinorhizal plant | ["Chemistry", "Biology"] | 1,194 | ["Behavior", "Symbiosis", "Biological interactions", "Biogeochemical cycle", "Nitrogen cycle", "Biogeochemistry", "Soil biology", "Metabolism"] |
16,268,701 | https://en.wikipedia.org/wiki/Floor%20control | In computer networking, floor control allows users of networked multimedia applications to utilize and share resources such as remote devices, distributed data sets, telepointers, or continuous media such as video and audio without access conflicts. Floors are temporary permissions granted dynamically to collaborating users in order to mitigate race conditions and guarantee mutually exclusive resource usage.
In floor control, a user who wishes to speak makes a request (through their user equipment unit (UE)) for the right to speak, and then waits for a response that either grants or denies the user's request. In accordance with early PoC proposals, the floor is granted only for talk burst on a first received basis, and no queuing of floor control messages is performed.
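Sketched below is the first-received, no-queuing grant policy described above. This is a minimal illustration only: the class, method and message names are assumptions made for the sketch, not identifiers from any PoC or floor-control standard.

```python
# Minimal sketch of a first-received, no-queuing floor control policy.
# All names are illustrative; real protocols define their own messages.

class Floor:
    """A temporary, mutually exclusive permission to use a shared resource."""

    def __init__(self) -> None:
        self.holder: str | None = None  # UE currently holding the floor

    def request(self, ue_id: str) -> str:
        # Grant on a first-received basis; deny (rather than queue)
        # any request arriving while the floor is already held.
        if self.holder is None:
            self.holder = ue_id
            return "GRANT"
        return "DENY"

    def release(self, ue_id: str) -> None:
        # Only the current holder may release the floor.
        if self.holder == ue_id:
            self.holder = None


floor = Floor()
print(floor.request("UE-1"))  # GRANT: floor was free
print(floor.request("UE-2"))  # DENY: floor held, and no queuing is performed
floor.release("UE-1")
print(floor.request("UE-2"))  # GRANT: floor free again
```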
References
Teleconferencing | Floor control | ["Technology"] | 158 | ["Computing stubs", "Computer network stubs"] |
16,268,988 | https://en.wikipedia.org/wiki/Keith%20R.%20Jennings | Keith Robert Jennings is a British chemist known for his contributions to mass spectrometry.
Early life and education
1956 Ph.D. Queen’s College Oxford
Research interests
Structural studies on proteins of significant biological interest
Awards
1985 Thomson Medal for International Service to Mass Spectrometry
1995 Distinguished Contribution in Mass Spectrometry Award
1998 Aston Medal awarded by the British Mass Spectrometry Society
1998 Field and Franklin Award for Outstanding Achievement in Mass Spectrometry
References
External links
Information from University of Warwick website
British chemists
Mass spectrometrists
Alumni of the Queen's College, Oxford
Academics of the University of Warwick
Academics of the University of Sheffield
Living people
1932 births
Thomson Medal recipients | Keith R. Jennings | ["Physics", "Chemistry"] | 140 | ["Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists"] |
16,269,602 | https://en.wikipedia.org/wiki/Abstract%20additive%20Schwarz%20method | In mathematics, the abstract additive Schwarz method, named after Hermann Schwarz, is an abstract version of the additive Schwarz method for boundary value problems on partial differential equations, formulated only in terms of linear algebra without reference to domains, subdomains, etc. Many if not all domain decomposition methods can be cast as an abstract additive Schwarz method, which is often the first and most convenient approach to their analysis.
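As a concrete illustration (a standard textbook formulation, not taken from this article): given a linear system Au = f, restriction matrices R_i mapping onto the chosen subspaces, and local matrices A_i = R_i A R_i^T, the abstract additive Schwarz preconditioner and the associated iteration are

```latex
% Abstract additive Schwarz preconditioner for the system A u = f,
% built from restrictions R_i and local solves with A_i = R_i A R_i^T:
M^{-1} = \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i,
\qquad
u^{(k+1)} = u^{(k)} + M^{-1}\bigl(f - A u^{(k)}\bigr).
```

Each term applies a solve on one subspace and the corrections are simply summed, which is what makes the method "additive" and naturally parallel.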
References
Domain decomposition methods | Abstract additive Schwarz method | ["Mathematics"] | 85 | ["Applied mathematics", "Applied mathematics stubs"] |
16,270,055 | https://en.wikipedia.org/wiki/R136b | R136b is a blue supergiant star in the R136 cluster in the Large Magellanic Cloud. It is one of the most massive and most luminous stars known. It is found in the dense R136 open cluster at the centre of NGC 2070 in the Tarantula Nebula.
R136b has the spectral type of a Wolf–Rayet star, with strong emission lines. Although it shows enhanced helium and nitrogen at its surface, it is still a very young star, still burning hydrogen in its core via the CNO cycle, and effectively still a main sequence object. Other studies classify the spectrum as that of a hot supergiant with emission lines of ionised nitrogen and helium, still considering it to be a young star at the core-hydrogen-burning stage, with the unusual spectrum caused by strong convection and stellar winds.
References
Stars in the Large Magellanic Cloud
Extragalactic stars
Tarantula Nebula
O-type supergiants
Dorado
Large Magellanic Cloud | R136b | ["Astronomy"] | 204 | ["Dorado", "Constellations"] |
16,270,380 | https://en.wikipedia.org/wiki/Omicron%20Draconis | Omicron Draconis (Latinised as ο Draconis, abbreviated to ο Dra) is a giant star in the constellation Draco located 322.93 light years from the Earth. Its path in the night sky is circumpolar for latitudes greater than 31° north, meaning that from those latitudes the star never sets.
This is a single-lined spectroscopic binary system, but the secondary has been detected using interferometry. It is an RS Canum Venaticorum variable system with eclipses. The total amplitude of variation is only a few hundredths of a magnitude. The secondary star is similar to the Sun, presumably a main sequence star, while the primary is a giant star 25 times larger than the Sun and two hundred times more luminous.
Identities as pole star
Omicron Draconis can be considered the north pole star of Mercury, as it is the closest star to Mercury's north celestial pole. It is also currently the Moon's north pole star, a status that recurs once every 18.6 years, because the precession of the Moon's rotational axis periodically changes which star lies closest to the lunar celestial pole.
References
External links
Starry Night Pro, Version 5.8.4 (2004). Imaginova. www.starrynight.com
Draconis, Omicron
Draco (constellation)
G-type giants
RS Canum Venaticorum variables
Eclipsing binaries
Draconis, 47
092512
7125
175306
Durchmusterung objects | Omicron Draconis | [
"Astronomy"
] | 324 | [
"Constellations",
"Draco (constellation)"
] |
16,270,616 | https://en.wikipedia.org/wiki/Glossary%20of%20environmental%20science | This is a glossary of environmental science.
Environmental science is the study of interactions among physical, chemical, and biological components of the environment. Environmental science provides an integrated, quantitative, and interdisciplinary approach to the study of environmental systems.
0-9
1-in-100 flood – a flood with a 1 in 100 chance of occurring in any given year (used as a safety requirement in the construction industry).
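As a worked illustration of the entry above (standard probability, not part of the original glossary): a 1-in-100-year event is not one that occurs exactly once a century; over n years the chance of at least one occurrence is

```latex
% Probability of at least one 1-in-100 flood in n years:
P(n) = 1 - (1 - 0.01)^{n},
\qquad P(30) \approx 26\%, \qquad P(100) \approx 63\%.
```

so even over a 30-year mortgage the risk is roughly one in four.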
20/30/10 standard - 20 mg/L Biochemical Oxygen Demand (BOD), 30 mg/L Suspended Solids (SS), 10 units of E. coli: the water quality standard for greywater use in toilets, laundry and surface irrigation.
5Rs - (sustainability) reduce, remanufacture, reuse, recycle, recover.
A
abiotic component - any non-living chemical or physical part of the environment that affects living organisms and the functioning of ecosystems, such as the atmosphere and water resources (see also biotic).
absorption pit (soakaway) – a hole dug in permeable ground and filled with broken stones or granular material and usually covered with earth allowing collected water to soak into the ground.
absorption - one substance taking in another, either physically or chemically.
acclimation - the process of an organism adjusting to chronic change in its environment.
acid mine drainage - the outflow of acidic water from metal mines or coal mines.
acid rain - rain or other forms of precipitation that is unusually acidic.
adaptation - a characteristic of an organism that has been favoured by natural selection.
adaptive radiation - closely related species that look very different, as a result of having adapted to widely different ecological niches.
additionality - (of biodiversity offsets) where the conservation outcomes delivered by a biodiversity offset are demonstrably new and would not have resulted without the offset.
adsorption - one substance taking up another at its surface.
aerobic - requiring air or oxygen; used in reference to decomposition processes that occur in the presence of oxygen.
aerosols - solid or liquid particles suspended within the atmosphere.
affluenza - as defined in the book of the same name 1. the bloated, sluggish and unfulfilled feeling that results from efforts to keep up with the Joneses. 2. an epidemic of stress, overwork, waste and indebtedness caused by dogged pursuit of the Australian dream. 3. an unsustainable addiction to economic growth. The traditional Western environmentally unfriendly high consumption life-style: a play on the words affluence and influenza cf. froogle, freegan.
afforestation - planting new forests on lands that have not been recently forested.
agroforestry - (sustainability) an ecologically based farming system, that, through the integration of trees in farms, increases social, environmental and economic benefits to land users.
air pollution - the modification of the natural characteristics of the atmosphere by a chemical, particulate matter, or biological agent.
albedo - reflectance; the ratio of light from the Sun that is reflected by the Earth's surface, to the light received by it. Unreflected light is converted to infrared radiation (heat), which causes atmospheric warming (see "radiative forcing"). Thus, surfaces with a high albedo, like snow and ice, generally contribute to cooling, whereas surfaces with a low albedo, like forests, generally contribute to warming. Changes in land use that significantly alter the characteristics of land surfaces can alter the albedo.
algal bloom - the rapid and excessive growth of algae; generally caused by high nutrient levels combined with other favourable conditions. Blooms can deoxygenate the water leading to the loss of wildlife.
alien species - see introduced species.
alloy - composite blend of materials made under special conditions. Metal alloys like brass and bronze are well known but there are also many plastic alloys.
alternative fuels - fuels like ethanol and compressed natural gas that produce fewer emissions than the traditional fossil fuels.
anaerobic digestion - the biological degradation of organic materials in the absence of oxygen to yield methane gas (that may be combusted to produce energy) and stabilised organic residues (that may be used as a soil additive).
anaerobic - not requiring air or oxygen; used in reference to decomposition processes that occur in the absence of oxygen.
ancient forest - see old growth forest.
anoxic - with abnormally low levels of oxygen.
anthropogenic - man-made, not natural.
anthroposophy - spiritual philosophy based on the teachings of Rudolf Steiner (25 February 1861 – 30 March 1925) which postulates the existence of an objective, intellectually comprehensible spiritual world accessible to direct experience through inner development - more specifically through cultivating conscientiously a form of thinking independent of sensory experience. Steiner was the initiator of biodynamic gardening.
application efficiency - (sustainability) the efficiency of watering after losses due to runoff, leaching, evaporation, wind etc.
appropriated carrying capacity - another name for the Ecological Footprint, but often used in referring to the imported ecological capacity of goods from overseas.
aquaculture - the cultivation of aquatic organisms under controlled conditions.
aquifer – a bed or layer yielding water for wells and springs etc.; an underground geological formation capable of receiving, storing and transmitting large quantities of water. Aquifer types include: confined (sealed and possibly containing “fossil” water); unconfined (capable of receiving inflow); and Artesian (an aquifer in which the hydraulic pressure will cause the water to rise above the upper confining layer).
arable land - land that can be used for growing crops.
atmosphere – general name for the layer of gases around a material body; the Earth's atmosphere consists, from the ground up, of the troposphere (which includes the planetary boundary layer or peplosphere, the lowest layer), stratosphere, mesosphere, ionosphere (or thermosphere), exosphere and magnetosphere.
autotroph - an organism that produces complex organic compounds from simple inorganic molecules using energy from light or inorganic chemical reactions.
available water capacity – that proportion of soil water that can be readily absorbed by plant roots.
avoidance – (sustainability) the first step in the waste hierarchy where waste generation is prevented (avoided).
B
backflow - movement of water back to source e.g. contaminated water in a plumbing system.
baffle - (landscape design) an obstruction to trap debris in drainage water.
bagasse - the fibrous residue of sugar cane milling used as a fuel to produce steam in sugar mills.
baseload - the steady and reliable supply of energy through the grid. This is punctuated by bursts of higher demand known as “peak-load”. Supply companies must be able to respond instantly to extreme variation in demand and supply, especially during extreme conditions. Gas generators can react quickly while coal is slow but provides the steady "baseload". Renewable energies are generally not available on demand in this way.
batters - (landscape design) the slope of earthworks such as drainage channels.
best practice - a process, or innovative use of technology, equipment or resources or other measurable factors that have a proven record of success.
bioaccumulation - the accumulation of a substance, such as a toxic chemical, in the tissues of a living organism.
biocapacity - a measure of the biological productivity of an area. This may depend on natural conditions or human inputs like farming and forestry practices; the area needed to support the consumption of a defined population.
biocoenosis (alternatively, biocoenose or biocenose) – all the interacting organisms living together in a specific habitat (or biotope).
biodegradable - capable of being decomposed through the action of organisms, especially bacteria.
biodiversity - the variety of life in all its forms, levels and combinations; includes ecosystem diversity, species diversity, and genetic diversity.
biodiversity banking - a market system for biodiversity offsetting that turns offsets into assets that can be traded.
biodiversity credit - a certificate that represents a measured and evidence-based unit of positive biodiversity outcome that is durable and additional to what would otherwise have occurred.
biodiversity offset - measurable conservation outcomes that result from actions designed to compensate for significant residual impacts on biodiversity, arising from a project and persisting after appropriate avoidance, minimization and restoration measures have been taken. The goal of biodiversity offsets is to achieve "no net loss" while adhering to a "like- for-like" principle, where offsets conserve the same biodiversity values that a project impacts.
bioelement - an element required by a living organism.
bioenergy - used in different senses: in its most narrow sense it is a synonym for biofuel, fuel derived from biological sources. In its broader sense it encompasses also biomass, the biological material used as a biofuel, as well as the social, economic, scientific and technical fields associated with using biological sources for energy.
biofuel - the fuel produced by the chemical and/or biological processing of biomass. Biofuel will either be a solid (e.g. charcoal), liquid (e.g. ethanol) or gas (e.g. methane).
biogas - landfill gas and sewage gas, also called biomass gas.
biogeochemical cycle - a circuit or pathway by which a chemical element or molecule moves through both biotic ("bio-") and abiotic ("geo-") parts of an ecosystem; the movement of chemical elements between organisms and the non-living components of the atmosphere, aquatic systems and soils.
biological oxygen demand (BOD) - a chemical procedure for determining how fast biological organisms use up oxygen in a body of water.
biological pest control - a method of controlling pests (including insects, mites, weeds and plant diseases) that relies on predation, parasitism, herbivory, or other natural mechanisms.
biological productivity - (bioproductivity) the capacity of a given area to produce biomass; different ecosystems (i.e. pasture, forest, etc.) will have different levels of bioproductivity. Biological productivity is determined by dividing the total biological production (how much is grown and living) by the total area available.
biologically productive land - land that is fertile enough to support forests, agriculture and/or animal life. All of the biologically productive land of a country comprises its biological capacity. Arable land is typically the most productive area.
biomass - the materials derived from photosynthesis (fossilised materials may or may not be included) such as forest, agricultural crops, wood and wood wastes, animal wastes, livestock operation residues, aquatic plants, and municipal and industrial wastes; the quantity of organic material present in unit area at a particular time mostly expressed as tons of dry matter per unit area; organic matter that can be used as fuel.
biome - a climatic and geographically defined area of ecologically similar communities of plants, animals, and soil organisms, often referred to as ecosystems.
biophysical - the living and non-living components and processes of the ecosphere. Biophysical measurements of nature quantify the ecosphere in physical units such as cubic metres, kilograms or joules.
bioregion - (ecoregion) an area comprising a natural ecological community and bounded by natural borders.
bioremediation - a process using organisms to remove or neutralise contaminants (e.g. petrol), mostly in soil or water.
biosolids - nutrient-rich organic materials derived from wastewater solids (sewage sludge) that have been stabilised through processing.
biosphere - the part of the Earth, including air, land, surface rocks, and water, within which life occurs, and which biotic processes in turn alter or transform; the zone at the surface of the Earth occupied by living organisms; the combination of all ecosystems on Earth, maintained by the energy of the Sun; the interface between the hydrosphere, geosphere and atmosphere.
biotic potential - the maximum reproductive capacity of a population under optimum environmental conditions.
biotic - relating to, produced by, or caused by living organisms. (see also abiotic).
birth rate - number of people born as a percentage of the total population in any given period of time; number of live births per 1000 people.
blackwater - household wastewater that contains solid waste i.e. toilet discharge.
bluewater - collectible water from rainfall; the water that falls on roofs and hard surfaces usually flowing into rivers and the sea and recharging the ground water. In nature the global average proportion of total rainfall that is blue water is about 40%. Blue water productivity in the garden can be increased by improving irrigation techniques, soil water storage, moderating the climate, using garden design and water-conserving plantings; also safe use of grey water.
boreal - northern; cold temperate Northern Hemisphere forests that grow where there is a mean annual temperature < 0 °C.
broad-acre farm - commercial farm covering a large area; usually a mixed farm in dryland conditions.
brownfield - a term often used to describe land previously used for industrial or commercial purposes with known or suspected pollution including soil contamination due to hazardous waste.
Brundtland Commission Report - a UN report, Our Common Future, published in 1987 and dealing with sustainable development and the policies required to achieve it, which the report characterizes as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs."
C
C3 & C4 plants – C4 plants comprise about 5% of all plants, are most abundant in hot and arid conditions, and include crops like sugar cane and maize. During photosynthesis they form molecules with 4 carbon atoms, and their photosynthesis saturates at current levels of CO2. C3 plants, the other 95%, photosynthesise to form 3-carbon molecules and increase their rate of photosynthesis as CO2 levels increase.
calorie – a basic measure of energy that has been replaced by the SI unit, the joule; in physics it approximates the energy needed to increase the temperature of 1 gram of water by 1 °C, which is about 4.184 joules. The Calories used in food ratings and nutrition (spelled with a capital C) are 'big C' Calories, i.e. kcal.
calorific value – the energy content of a fuel measured as the heat released on complete combustion.
cancer – a group of diseases in which cells are aggressive (grow and divide without respect to normal limits), invasive (invade and destroy adjacent tissues), and sometimes metastatic (spread to other locations in the body).
capillary action (wicking) – water drawn through a medium by surface tension.
car pooling – giving people lifts to help reduce emissions and traffic.
carbon budget – a measure of carbon inputs and outputs for a particular activity.
carbon credit – a market-driven way of reducing the impact of greenhouse gas emissions; it allows an agent to benefit financially from an emission reduction. There are two forms of carbon credit: those that are part of national and international trade, and those that are purchased by individuals. Internationally, to achieve Kyoto Protocol objectives, ‘caps’ (limits) on participating countries' emissions are established. To meet these limits countries, in turn, set ‘caps’ (allowances or credits: 1 convertible and transferable credit = 1 tonne of CO2-e emissions) for operators. Operators that meet the agreed ‘caps’ can then sell unused credits to operators who exceed theirs. Operators can thus choose the most cost-effective way of reducing emissions. Individual carbon credits would operate in a similar way cf. carbon offset.
carbon cycle – the biogeochemical cycle by which carbon is exchanged between the biosphere, geosphere, hydrosphere, and atmosphere of the Earth.
Carbon Dioxide Equivalent (CO2-e) – the unit used to measure the impacts of releasing (or avoiding the release of) the seven different greenhouse gases; it is obtained by multiplying the mass of the greenhouse gas by its global warming potential. For example, this factor would be 21 for methane and 310 for nitrous oxide. A short worked sketch follows the carbon equivalent entry below.
carbon dioxide – a gas with the chemical formula CO2; the most abundant greenhouse gas emitted from fossil fuels.
carbon equivalent (C-e) – obtained by multiplying the CO2-e figure by the factor 12/44.
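A minimal sketch of the arithmetic in the CO2-e and C-e entries above, using the global warming potentials quoted in this glossary (21 for methane, 310 for nitrous oxide); the function names and the sample emission figures are illustrative assumptions.

```python
# Sketch of the CO2-e and C-e calculations described above.
# GWP values are those quoted in this glossary; sample masses are made up.
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}

def co2_equivalent(masses_t: dict[str, float]) -> float:
    """Tonnes of each gas -> total emissions in tonnes of CO2-e."""
    return sum(mass * GWP[gas] for gas, mass in masses_t.items())

def carbon_equivalent(co2e_t: float) -> float:
    """Tonnes of CO2-e -> tonnes of carbon equivalent (C-e)."""
    return co2e_t * 12 / 44  # mass ratio of carbon (12) to CO2 (44)

emissions = {"CO2": 100.0, "CH4": 2.0, "N2O": 0.5}
co2e = co2_equivalent(emissions)                # 100 + 42 + 155 = 297.0 t CO2-e
print(co2e, round(carbon_equivalent(co2e), 1))  # 297.0 81.0
```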
carbon footprint – a measure of the carbon emissions that are emitted over the full life cycle of a product or service, usually expressed as grams of CO2-e.
carbon labelling – use of product labels that display greenhouse emissions associated with goods (www.carbontrustcertification.com for product carbon footprint methodology).
carbon neutral – activities where net carbon inputs and outputs are the same. For example, assuming a constant amount of vegetation on the planet, burning wood will add carbon to the atmosphere in the short term but this carbon will cycle back into new plant growth.
carbon pool – a storage reservoir of carbon.
carbon sink – any carbon storage system that causes a net removal of greenhouse gases from the atmosphere.
carbon source – opposite of carbon sink; a net source of carbon for the atmosphere.
carbon stocks – the quantity of carbon held within a carbon pool at a specified time.
carbon taxes – a surcharge on fossil fuels that aims to reduce carbon dioxide emissions.
carcinogen – a substance, radionuclide or radiation that is an agent directly involved in the promotion of cancer or in the facilitation of its propagation.
carrying capacity – the maximum population that an ecosystem can sustain cf. biocapacity.
catchment area – the area that is the source of water for a water supply whether a dam or rainwater tank.
cell – (biology) the structural and functional unit of all known living organisms; the smallest unit of an organism that is classified as living.
CFC – chlorofluorocarbon. CFCs are potent greenhouse gases which are not regulated by the Kyoto Protocol since they are covered by the Montreal Protocol.
chlorinated hydrocarbon – see organochloride
chlorofluorocarbons – one of the more widely known family of haloalkanes.
circular metabolism – a system in which wastes, especially water and materials, are reused and recycled cf. linear metabolism.
Class A pan – (water management) an open pan used as a standard for measuring water evaporation.
cleaner production – the continual effort to prevent pollution, reduce the use of energy, water and material resources and minimise waste – all without reducing production capacity.
clearcutting – a forestry or logging practice in which most or all trees in a forest sector are felled.
climate change – a change in weather over time and/or region, usually relating to changes in temperature, wind patterns and rainfall; although it may be natural or anthropogenic, common discourse carries the assumption that recent climate change is anthropogenic.
climate – the general variations of weather in a region over long periods of time; the "average weather" cf. weather.
cogeneration – the simultaneous production of electricity and useful heat from the combustion of the same fuel source.
cohousing – clusters of houses having shared dining halls and other spaces, encouraging stronger social ties while reducing the material and energy needs of the community.
coir – fibre of the coconut.
commercial and industrial waste – (waste management) solid waste generated by the business sector as well as that created by State and Federal government, schools and tertiary institutions. Does not include that from the construction and demolition industry.
commingled materials – (waste management) materials mixed together, such as plastic bottles, glass, and metal containers. Commingled recyclable materials require sorting after collection before they can be recycled.
comparative risk assessment – a methodology which uses science, policy, economic analysis and stakeholder participation to identify and address areas of greatest environmental risk; a method for assessing environmental management priorities. The US EPA (www.epa.gov/seahome/comprisk.html) offers free software which contains the history and methodology of comparative risk, as well as many case studies.
compensation point – the point where the amount of energy produced by photosynthesis equals the amount of energy released by respiration.
complex system – a system composed of many components that may interact with each other.
compost – the aerobically decomposed remnants of organic matter.
composting – the biological decomposition of organic materials in the presence of oxygen that yields carbon dioxide, heat, and stabilised organic residues that may be used as a soil additive.
confined aquifer – an aquifer that has the water table above its upper boundary and is typically found below unconfined aquifers.
conspicuous consumption – the lavish spending on goods and services that are acquired mainly for the purpose of displaying income or wealth rather than to satisfy basic needs of the consumer.
construction and demolition waste – (waste management) includes waste from residential, civil, and commercial construction and demolition activities, such as fill material (e.g. soil), asphalt, bricks and timber. C&D waste excludes construction waste which is included in the municipal waste stream. C&D waste does not generally include waste from the commercial and industrial waste stream.
consumer democracy – using your economic capacity to promote your values.
consumer – an organism, human being, or industry that maintains itself by transforming a high-quality energy source into a lower one cf. producer, primary production.
consumption (ecology) – the use of resources by a living system, the inflow and degradation of energy that is used for system activity.
consumption (economics) – part of disposable income (income after taxes paid and payments received) that is not saved, essentially the goods and services used by households; this includes purchased commodities at the household level (such as food, clothing, and utilities), the goods and services paid for by government (such as defence, education, social services and health care), and the resources consumed by businesses to increase their assets (such as business equipment and housing).
contour ploughing (contour farming) – the farming practice of ploughing across a slope following its contours. The rows formed slow water run-off during rainstorms, so that the soil is not washed away and the water can percolate into the soil.
controlled burning – a technique sometimes used in forest management, farming, prairie restoration or greenhouse gas abatement.
Convention on the International Trade in Endangered Species (CITES) – an international agreement among 167 governments aiming to ensure that cross-border trade in wild animals and plants does not threaten their survival. The species covered by CITES are listed in three Appendices, according to the degree of protection they need (see: http://www.cites.org).
Corporate Social Responsibility – integration of social and environmental policies into day-to-day corporate business.
covenants – formal agreements or contracts, often between government and industry sectors. The national packaging covenant and sustainability covenants are examples of voluntary covenants with a regulatory underpinning. Land covenants protect land for wildlife into the future.
critical load – a concept in pollution studies hypothesizing that there exist quantitative thresholds for one or more pollutants above which significant detrimental effects on ecological systems (e.g. the eutrophication of natural waterways) will occur, and/or conversely below which they are not known to occur.
crop coefficient (Kc) – (water management) a variable used to calculate the evapotranspiration of a plant crop based on that of a reference crop.
crop evapotranspiration (ETc) – (water management) is the crop water use – the daily water withdrawal.
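These two quantities are commonly combined through the standard reference-crop relationship ETc = Kc × ET0; a minimal Python sketch (the function name and values are illustrative only):

    # Crop evapotranspiration ETc from a crop coefficient Kc and a
    # reference-crop evapotranspiration ET0 (mm/day), assuming the
    # standard relationship ETc = Kc * ET0.
    def crop_evapotranspiration(kc, et0_mm_per_day):
        return kc * et0_mm_per_day

    print(crop_evapotranspiration(1.15, 5.0))  # 5.75 mm/day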
crop rotation (crop sequencing) – the practice of growing a series of dissimilar types of crops in the same space in sequential seasons for various benefits such as to avoid the buildup of pathogens and pests that often occurs when one species is continuously cropped.
crude oil – naturally occurring mixture of hydrocarbons under normal temperature and pressure.
cullet – crushed glass that is suitable for recycling by glass manufacturers.
cultural eutrophication - the process that speeds up natural eutrophication because of human activity.
cultural services – the non-material benefits of ecosystems including refreshment, spiritual enrichment, knowledge, artistic satisfaction.
culture jamming – altering existing mass media to criticise itself (e.g. defacing advertisements with an alternative message). Public activism opposing commercialism as little more than propaganda for established interests, and the attempt to find alternative expression.
culvert – drain that passes under a road or pathway, may be a pipe or other conduit.
cut and fill – removing earth from one place to another, usually mechanically.
cyanobacteria (Cyanophyta or blue-green algae) – a phylum of bacteria that obtain their energy through photosynthesis.
cyclone – an intense low pressure weather system; mid-latitude cyclones are atmospheric circulations that rotate clockwise in the Southern Hemisphere and anti-clockwise in the Northern Hemisphere and are generally associated with stronger winds, unsettled conditions, cloudiness and rainfall. Tropical cyclones (called hurricanes in the North Atlantic and north-east Pacific, and typhoons in the north-west Pacific) cause storm surges in coastal areas.
D
DDT - a chlorinated hydrocarbon used as a pesticide that is a persistent organic pollutant.
debt-for-nature swap - a financial transaction in which a portion of a developing nation's foreign debt is forgiven in exchange for local investments in conservation measures.
decomposers – consumers, mostly microbial, that change dead organic matter into minerals and heat.
deforestation - the conversion of forested areas to non-forest land for agriculture, urban use, development, or wasteland.
dematerialisation – decreasing the consumption of materials and resources while maintaining quality of life.
desalination – producing potable or recyclable water by removing salts from salty or brackish water. This is done by three methods: distillation or freezing; reverse osmosis or electrodialysis using membranes; and ion exchange. At present, all these methods are energy intensive.
desert – an area that receives an average annual precipitation of less than 250 mm (10 in), or an area in which more water is lost than falls as precipitation.
desertification - the degradation of land in arid, semi arid and dry sub-humid areas resulting from various climatic variations, but primarily from human activities.
detritivore (detritus feeder) - animals and other organisms that consume detritus (decomposing organic material), and in doing so contribute to decomposition and the recycling of nutrients.
detritus - non-living particulate organic material (as opposed to dissolved organic material).
developing countries – development of a country is measured using a mix of economic factors (income per capita, GDP, degree of modern infrastructure (both physical and institutional), degree of industrialisation, proportion of economy devoted to agriculture and natural resource extraction) and social factors (life expectancy, the rate of literacy, poverty). The UN-produced Human Development Index (HDI) is a compound indicator of the above statistics. There is a strong correlation between low income and high population growth, both within and between countries. In developing countries, there is low per capita income, widespread poverty, and low capital formation. In developed countries there is continuous economic growth and a relatively high standard of living. The term is value-laden and prescriptive, as it implies a natural transition from "undeveloped" to "developed" when such transitions can instead be imposed. Although poverty and physical deprivation are clearly undesirable, it does not follow that it is therefore desirable for "undeveloped" economies to move towards affluent Western-style "developed" free market economies. The terms "industrialised" and "non-industrialised" are no different in this assumption.
dfE – design for the environment; dfE considers 'cradle to grave' costs and benefits associated with material acquisition, manufacture, use, and disposal.
dfM – design for manufacturing; designing products in such a way that they are easy to manufacture.
dfS – design for sustainability; an integrated design approach aiming to achieve both environmental quality and economic efficiency through the redesign of industrial systems.
dfX – design for X, where X may be assembly, disassembly, re-use or recycling.
dieback – (arboriculture) a condition in trees or woody plants in which peripheral parts are killed, either by parasites or due to conditions such as acid rain.
dietary energy supply – food available for human consumption, usually expressed in kilocalories per person per day.
dioxin - any one of a number of chemical compounds that are persistent organic pollutants and are carcinogenic.
distributed water – (water management) purchased water supplied to a user; this is usually through a reticulated mains system (but also through pipes and open channels, irrigation systems supplied to farms).
diversion rate – (waste disposal) the proportion of a potentially recyclable material that has been diverted out of the waste disposal stream and therefore not directed to landfill.
divertible resource – (water management) the proportion of water runoff and recharge that can be accessed for human use.
downcycling – (waste management) recycling in which the quality of an item is diminished with each recycling.
downstream – those processes occurring after a particular activity e.g. the transport of a manufactured product from a factory to the wholesale or retail outlet cf. upstream.
drainage – (water management) that part of irrigation or rainfall that runs off an area or is lost to deep percolation.
drawdown – (water management) drop in water level, generally applied to wells or bores.
dredging - (water management) the repositioning of soil from an aquatic environment, using specialized equipment, in order to initiate infrastructural and/or ecological improvements.
drift net - a type of fishing net used in oceans, coastal seas and freshwater lakes.
drinking water (potable water) – water fit for human consumption in accordance with World Health Organisation guidelines.
drip irrigation – (water management) a drip hose placed near the plant roots so minimising deep percolation and evaporation.
driver – (ecology) any natural or human-induced factor that directly or indirectly causes a change in an ecosystem. A direct driver is one that unequivocally influences ecosystem processes and that can be measured.
drop-off centre – (waste management) a location where discarded materials can be left for recycling.
drought – an acute water shortage relative to availability, supply and demand in a particular region. An extended period of months or years when a region notes a deficiency in its water supply. Generally, this occurs when a region receives consistently below average precipitation.
dryland salinity - (water management) the accumulation of salts in soils, soil water and groundwater; may be natural or induced by land clearing.
E
eco- - a prefix now added to many words indicating a general consideration for the environment e.g. ecohousing, ecolabel, ecomaterial.
eco-asset – a biological asset that provides financial value to private land owners when maintained in or restored to its natural state.
ecolabel - a seal or logo indicating that a product has met certain environmental or social standards.
ecological deficit - the amount by which the Ecological Footprint of a country or region exceeds the ecological capacity of that region.
Ecological Footprint (Eco-footprint, Footprint)– a measure of the area of biologically productive land and water needed to produce the resources and absorb the wastes of a population using the prevailing technology and resource management schemes; a measure of the consumption of renewable natural resources by a human population, be it that of a country, a region or the whole world given as the total area of productive land or sea required to produce all the crops, meat, seafood, wood and fibre it consumes, to sustain its energy consumption and to give space for its infrastructure.
ecological niche - the habitat of a species or population within its ecosystem.
ecological succession - the more-or-less predictable and orderly changes in the composition or structure of an ecological community with time.
ecological sustainability - the capacity of ecosystems to maintain their essential processes and function and to retain their biological diversity without impoverishment.
ecologically sustainable development - using, conserving and enhancing the human community's resources so that ecological processes, on which all life depends, can be maintained and enriched into the future.
ecology - the scientific study of living organisms and their relationships to one another and their environment; the scientific study of the processes regulating the distribution and abundance of organisms; the study of the design of ecosystem structure and function.
ecoregion - (bioregion) the next smallest ecologically and geographically defined area beneath realm (or ecozone).
ecosystem boundary – the spatial delimitation of an ecosystem usually based on discontinuities of organisms and the physical environment.
ecosystem services - the role played by organisms, without charge, in creating a healthy environment for human beings, from production of oxygen to soil formation, maintenance of water quality and much more. These services are now generally divided into four groups, supporting, provisioning, regulating and cultural.
ecosystem - a dynamic complex of plant, animal and microorganism communities and their non-living environment all interacting as a functional unit.
e-cycling – recycling electronic waste.
effective rainfall – the volume of rainfall passing into the soil; that part of rainfall available for plant use after runoff, leaching, evaporation and foliage interception.
effluent - a discharge or emission of liquid, gas or other waste product.
El Niño - a warm water current which periodically flows southwards along the coast of Ecuador and Peru in South America, replacing the usually cold northwards flowing current; occurs once every five to seven years, usually during the Christmas season (the name refers to the Christ child); the opposite phase of an El Niño is called a La Niña.
embodied energy - the energy expended over the entire life cycle of a good or service cf. emergy.
emergent property – a property that is not evident in the individual components of an object or system.
emergy – "energy memory"; all the available energy that was used, directly and indirectly, in the work of making a product, expressed in units of one type of available energy (the work previously done to provide a product or service); the energy of one type required to make energy of another.
emission standard - a level of emissions that, under law, may not be exceeded.
emissions intensity – emissions expressed as quantity per monetary unit.
emissions trading – see carbon trading.
emissions - substances such as gases or particles discharged into the atmosphere as a result of natural processes or human activities, including those from chimneys, elevated point sources, and tailpipes of motor vehicles.
endangered species – a species which is at risk of becoming extinct because it is either few in number, or threatened by changing environmental or predation parameters.
energetics – the study of how energy flows within an ecosystem: the routes it takes, rates of flow, where it is stored and how it is used.
energy - a property of all systems which can be turned into heat and measured in heat units.
* available energy – energy with the potential to do work (exergy);
* delivered energy – energy delivered to and used by a household, usually gas and electricity;
* direct energy - the energy being currently used, used mostly at home (delivered energy) and for fuels used mainly for transport;
* embodied energy - the energy expended over the entire life cycle of a good or service OR the energy involved in the extraction of basic materials, processing/manufacture, transport and disposal of a product OR the energy required to provide a good or service;
* geothermal energy – heat emitted from within the Earth’s crust as hot water or steam and used to generate electricity after transformation;
* hydro energy – potential and kinetic energy of water used to generate electricity;
* indirect energy – the energy generated in, and accounted for, by the wider economy as a consequence of an agent’s actions or demands;
* kinetic energy - the energy possessed by a body because of its motion;
* nuclear energy - energy released by reactions within atomic nuclei, as in nuclear fission or fusion (also called atomic energy);
* operational energy – the energy used in carrying out a particular operation;
* potential energy – the energy possessed by a body as a result of its position or condition e.g. coiled springs and charged batteries have potential energy;
* primary energy – forms of energy obtained directly from nature, the energy in raw fuels (electricity from the grid is not primary energy), used mostly in energy statistics when compiling energy balances;
* solar energy – solar radiation used for hot water production and electricity generation (does not include passive solar energy to heat and cool buildings etc.);
* secondary energy – primary energies are transformed in energy conversion processes to more convenient secondary forms such as electrical energy and cleaner fuels;
* stationary energy – that energy that is other than transport fuels and fugitive emissions, used mostly for production of electricity but also for manufacturing and processing and in agriculture, fisheries etc.;
* tidal/ocean/wave energy – mechanical energy from water movement used to generate electricity;
* useful energy – available energy used to increase system production and efficiency;
* wind energy – kinetic energy of wind used for electricity generation using turbines.
energy accounting – measuring value by the energy input required for a good or service. A form of accounting that builds in a measure of our impact on nature (rather than being restricted to human-based items).
energy audit - a systematic gathering and analysis of energy use information that can be used to determine energy efficiency improvements. The Australian and New Zealand Standard AS/NZS 3598:2000 Energy Audits defines three levels of audit.
energy efficiency - using less energy to provide the same level of energy service.
Energy Footprint - the area required to provide or absorb the waste from coal, oil, gas, fuelwood, nuclear energy and hydropower; the Fossil Fuel Footprint is the area required to sequester the emitted CO2, taking into account absorption by the sea etc.
energy management - a program of well-planned actions aimed at reducing energy use, recurrent energy costs, and detrimental greenhouse gas emissions.
energy recovery – the productive extraction of energy, usually electricity or heat, from waste or materials that would otherwise have gone to landfill.
energy-for-land ratio - the amount of energy that can be produced per hectare of ecologically productive land. The units used are gigajoules per hectare and year, or GJ/ha/yr. For fossil fuel (calculated as CO2 assimilation) the ratio is 100 GJ/ha/yr.
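A minimal Python sketch of how such a ratio might be applied (illustrative only, using the 100 GJ/ha/yr fossil-fuel figure quoted above):

    # Ecologically productive land needed to support an annual energy
    # use, given an energy-for-land ratio in GJ per hectare per year.
    def land_required_ha(energy_gj_per_yr, ratio_gj_per_ha_yr=100.0):
        return energy_gj_per_yr / ratio_gj_per_ha_yr

    print(land_required_ha(250.0))  # 2.5 ha for 250 GJ/yr of fossil fuel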
enhanced greenhouse effect - the increase in the natural greenhouse effect resulting from increases in atmospheric concentrations of greenhouse gases due to emissions from human activities.
ENSO (El Niño–Southern Oscillation) – a suite of events that occur at the time of an El Niño; at one extreme of the cycle, when the central Pacific Ocean is warm and the atmospheric pressure over Australia is relatively high, ENSO causes drought conditions over eastern Australia cf. El Niño, Southern Oscillation.
environment - the external conditions, resources, stimuli etc. with which an organism interacts.
environmental ethics - the branch of ethics concerned with the many moral decisions that human beings make with respect to the environment.
environmental flows - river or creek water flows that are allocated for the maintenance of the waterway ecosystems.
environmental indicator - physical, chemical, biological or socio-economic measure that can be used to assess natural resources and environmental quality.
environmental impact assessment (EIA) - the assessment of the environmental consequences of a plan, policy, program, or actual projects prior to the decision to move forward with the proposed action.
environmental movement (environmentalism) - both the conservation and green movements; a diverse scientific, social, and political movement. In general terms, environmentalists advocate the sustainable management of resources and stewardship of the natural environment through changes in public policy and individual behavior. In its recognition of humanity as a participant in ecosystems, the movement is centered around ecology, health, and human rights.
environmental science - the study of interactions among physical, chemical, and biological components of the environment.
epidemiology - the study of factors affecting the health and illness of populations; it serves as the foundation and logic of interventions made in the interest of public health and preventive medicine.
erosion - the displacement of solids (sediment, soil, rock and other particles), usually by agents such as wind, water or ice, by downward or down-slope movement in response to gravity, or by living organisms.
Escherichia coli (E. coli) – a bacterium used as an indicator of faecal contamination and potential disease organisms in water.
estuary - a semi-enclosed coastal body of water with one or more rivers or streams flowing into it, and with a free connection to the open sea.
ethical consumerism - buying things that are made ethically i.e. without harm to or exploitation of humans, animals or the natural environment. This generally entails favoring products and businesses that take account of the greater good in their operations.
ethical living – adopting lifestyles, consumption and shopping habits that minimise our negative impact, and maximise our positive impact on people, the environment and the economy cf. consumer democracy, sustainable living.
eutrophication - the enrichment of waterbodies with chemical nutrients, typically compounds containing nitrogen or phosphorus, which stimulates the growth of aquatic organisms.
euxinic - (of water) with extremely low oxygen cf. anoxic.
evaporation – water converted to water vapour.
evapotranspiration (ET) – the water evaporating from the soil and transpired by plants.
e-waste - electronic waste, especially mobile phones, televisions and personal computers.
extended producer responsibility (EPR) (product take-back) - a requirement (often in law) that producers take back and accept responsibility for the responsible disposal of their products; this encourages the design of products that can be easily repaired, recycled, reused or upgraded.
external water footprint – the embodied water of imported goods cf. internal water footprint.
externality – (environmental economics) a cost or benefit arising from an activity that is not borne by the producer or supplier of the good or service and is not reflected in market prices; by-products of activities that affect the well-being of people or damage the environment. In many environmental situations the deterioration is caused by a few while the cost is borne by the whole community; examples include overfishing, pollution (e.g. greenhouse emissions that are not compensated for by taxes etc.) and the environmental cost of land-clearing. The costs (or benefits) associated with externalities do not enter standard cost accounting schemes (see economic externality).
extinction event - (mass extinction, extinction-level event, ELE) - a sharp decrease in the number of species in a relatively short period of time.
extinction - the cessation of existence of a species or group of taxa, reducing biodiversity.
extreme points of Earth - the geographical locations that are more extreme in some respect (e.g. farthest north or south, highest or lowest) than any other location on the landmasses, continents or countries.
F
fair trade - a guarantee that a fair price is paid to producers of goods or services; it includes a range of other social and environmental standards including safety standards and the right to form unions.
feedback – flow from the products of an action back to interact with the action.
feedlot (feedyard) - a type of Confined Animal Feeding Operation (CAFO) (also known as "factory farming") which is used for finishing livestock, notably beef cattle, prior to slaughter.
fertigate – apply fertiliser through an irrigation system.
fertility rate - number of live births per 1,000 women aged 15 to 44 years cf. birth rate, mortality rate.
fertilizers (also spelled fertilisers) - compounds given to plants to promote growth; they are usually applied either through the soil, for uptake by plant roots, or by foliar feeding, for uptake through leaves.
flyway - a flight path used in bird migration; flyways generally span continents and often oceans.
food chain (food webs, food networks and/or trophic networks) - the feeding relationships between species within an ecosystem.
food miles - the emissions produced and resources needed to transport food and drink around the globe.
food security - food produced in sufficient quantity to meet the full requirements of all people i.e. total global food supply equals the total global demand. For households it is the ability to purchase or produce the food they need for a healthy and active life (disposable income is a crucial issue). Women are typically gatekeepers of household food security. For national food security, the focus is on sufficient food for all people in a nation and it entails a combination of national production, imports and exports. Food security always has components of production, access and utilisation.
Footprint – (Ecological Footprint) in a very general environmental sense a "footprint" is a measure of environmental impact. However, this is usually expressed as an area of productive land (the footprint) needed to counteract the impact.
forage - the plant material (mainly plant leaves) eaten by grazing animals.
forest – land with a canopy cover greater than 30%.
fossil fuel - any hydrocarbon deposit that can be burned for heat or power, such as coal, oil and natural gas (produces carbon dioxide when burnt); fuels formed from once-living organisms that have become fossilized over geological time.
fossil water – groundwater that has remained in an aquifer for thousands or millions of years; when geologic changes seal the aquifer preventing further replenishment, the water becomes trapped inside and is then referred to as fossil water. Fossil water is a limited resource and can only be used once.
freegan - a person using alternative strategies for living based on limited participation in the conventional economy and minimal consumption of resources. Freegans embrace community, generosity, social concern, freedom, cooperation, and sharing - in opposition to materialism, moral apathy, competition, conformity, and greed. The most notorious freegan strategy is "urban foraging" or "dumpster diving". This technique involves rummaging through the garbage of retailers, residences, offices, and other facilities for useful goods. The word freegan is compounded from "free" and "vegan". cf. affluenza, froogle.
freon - DuPont's trade name for its odourless, colorless, nonflammable, and noncorrosive chlorofluorocarbon and hydrochlorofluorocarbon refrigerants, which are used in air conditioning and refrigeration systems.
freshwater - water containing no significant amounts of salt; potable water suitable for all normal uses cf. potable water.
front – (weather) the boundary between two air masses of different temperature and density, e.g. between warm and cold air masses.
froogle - a play on the word frugal; people who lead low-consumption life-styles: a person who is part of a new movement towards self-sufficiency and waste-reduction achieved by bartering goods and services especially through the internet, making their own products, soap, clothes, and breeding chickens and goats, growing their own food, baking their own bread, harvesting their own water and energy, and helping to develop a sense of community. Sometimes referring to people who have made a resolution to only buy essentials for a particular period of time cf. freegan, affluenza.
fugitive emissions - in the context of the National Greenhouse Gas Inventory, these are greenhouse gases emitted from fuel production itself, including processing, transmission, storage and distribution, and including emissions from oil and natural gas exploration, venting and flaring, as well as the mining of black coal.
full cost pricing - the pricing of commercial goods—such as electric power—that includes not only the private costs of inputs, but also the costs of the externalities required by their production and use cf. externality.
G
G8 - the Group of Eight, an international forum for the world's major industrialised democracies that emerged following the 1973 oil crisis and subsequent global recession. It includes Canada, France, Germany, Italy, Japan, Russia, the UK and the US, which together represent about 65% of the world economy.
Gaia hypothesis - an ecological hypothesis that proposes that living and nonliving parts of the earth are a complex interacting system that can be thought of as a single organism.
garden organics - organics derived from garden sources e.g. prunings, grass clippings.
gene - a locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions and/or other functional sequence regions.
gene pool - the complete set of unique alleles in a species or population.
generalist species - species able to thrive in a wide variety of environmental conditions and make use of a variety of different resources.
genetic diversity - one of the three levels of biodiversity, referring to the total number of genetic characteristics.
genetic engineering - the use of various experimental techniques to produce molecules of DNA containing new genes or novel combinations of genes, usually for insertion into a host cell for cloning; the technology of preparing recombinant DNA in vitro by cutting up DNA molecules and splicing together fragments from more than one organism; the modification of genetic material by man that would otherwise be subject to the forces of nature only.
genome – the total genetic composition of an organism
geosphere - the solid part of planet Earth, the main divisions being the crust, mantle, and liquid core. The lithosphere is the part of the geosphere that consists of the crust and upper mantle.
geothermal energy - energy derived from the natural heat of the earth contained in hot rocks, hot water, hot brine or steam.
global acres – see global hectares.
global dimming – a reduction in the amount of direct solar radiation reaching the surface of the earth due to light diffusion as a result of air pollution and increasing levels of cloud. A phenomenon of the last 30–50 years.
economic globalization - the emerging international economy characterized by free trade in goods and services, unrestricted capital flows and more limited national powers to control domestic economies.
global hectares - acres/hectares that have been adjusted according to world average biomass productivity so that they can be compared meaningfully across regions; 1 global hectare is 1 hectare of biologically productive space with world average productivity.
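As a hypothetical illustration of the adjustment in Python (the productivity factor here is assumed, not a published value):

    # Convert physical hectares to global hectares (gha) by scaling
    # with the land's biomass productivity relative to the world
    # average (1.0 = world average).
    def to_global_hectares(local_ha, relative_productivity):
        return local_ha * relative_productivity

    print(to_global_hectares(10.0, 1.3))  # 13.0 gha for land 30% above average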
global warming – the observable increase in global temperatures, considered to be caused mainly by the human-induced enhanced greenhouse effect trapping the Sun's heat in the Earth's atmosphere.
global warming potential - a system of multipliers devised to enable the warming effects of different gases to be compared.
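A minimal Python sketch of how such multipliers are applied (illustrative only; the GWP values are those quoted under greenhouse gases below):

    # CO2-equivalent of an emission: mass of the gas multiplied by its
    # global warming potential (GWP) relative to CO2.
    def co2_equivalent(mass_tonnes, gwp):
        return mass_tonnes * gwp

    print(co2_equivalent(2.0, 310.0))  # 620.0 t CO2-e for 2 t of N2O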
globalisation – the expansion of interactions to a global or worldwide scale; the increasing interdependence, integration and interaction among people and organisations from around the world. A mix of economic, social, technological, cultural, and political interrelationships.
glyphosate – the active ingredient in the herbicide Roundup™.
governance – the decision-making procedure; who makes decisions, how they are made, and with what information. The structures and processes for collective decision-making involving governmental and non-governmental actors.
Great Pacific Garbage Patch - a gyre of marine debris particles in the central North Pacific Ocean discovered between 1985 and 1988. The patch is characterized by exceptionally high relative pelagic concentrations of plastic, chemical sludge, and other debris that have been trapped by the currents of the North Pacific Gyre.
green architecture - building design that moves towards self-sufficiency sustainability by adopting circular metabolism.
green design - environmentally sustainable design.
green manure - a type of cover crop grown primarily to add nutrients and organic matter to the soil.
green power - electricity generated from clean, renewable energy sources (such as solar, wind, biomass and hydro power) and supplied through the grid.
green products and services - products or services that have a lesser or reduced effect on human health and the environment when compared with competing products or services that serve the same purpose. Green products or services may include, but are not limited to, those which contain recycled content, reduce waste, conserve energy or water, use less packaging, and reduce the amount of toxics disposed or consumed.
green purchasing - purchasing goods and services that minimise impacts on the environment and that are socially just.
Green Revolution - the ongoing transformation of agriculture that led in some places to significant increases in agricultural production between the 1940s and 1960s.
Green Star – a voluntary rating scheme for green building design covering 9 impact categories, with ratings up to 6 stars (6 stars denoting a world leader).
green waste (green organic material or green organics, sometimes referred to as "green wealth") - plant material discarded as non-putrescible waste - includes tree and shrub cuttings and prunings, grass clippings, leaves, natural (untreated) timber waste and weeds (noxious or otherwise).
green – (sustainability) like ‘eco’ - a word frequently used to indicate consideration for the environment e.g. green plumbers, green purchasing etc., sometimes used as a noun e.g. the Greens.
greenhouse effect - the process in which the emission of infrared radiation by the atmosphere warms a planet's surface; the insulating effect of atmospheric greenhouse gases (e.g. water vapor, carbon dioxide, methane) that keeps the Earth's temperature about 33 °C warmer than it would be otherwise cf. enhanced greenhouse effect.
greenhouse gases - any gas that contributes to the greenhouse effect; gaseous constituents of the atmosphere, both natural and from human activity, that absorb and re-emit infrared radiation. Water vapor (H2O) is the most abundant greenhouse gas. Greenhouse gases are a natural part of the atmosphere and include carbon dioxide (CO2), methane (CH4, persisting 9-15 yrs with a greenhouse warming potential (GWP) 22 times that of CO2), nitrous oxide (N2O, persisting 120 years with a GWP of 310), ozone (O3), hydrofluorocarbons, perfluorocarbons and sulfur hexafluoride.
greenlash – dramatic changes in the structure and dynamic behaviour of ecosystems.
greenwashing - companies that portray themselves as environmentally friendly when their business practices do not back this up. Generally applies to excessive use of green marketing and packaging when this does not take account of the total ecological footprint.
greenwater – water replenishing soil moisture, evaporating from soil, plant and other surfaces, and transpired by plants. In nature the global average amount of rainfall becoming green water is about 60%. Of the green water about 55% falls on forests, 25% on grasslands and about 20% on crops. We can increase green water productivity by rainwater harvesting, increased infiltration and runoff collection. Green water cannot be piped or drunk (cannot be sold) and is therefore generally ignored by water management authorities but it is crucial to plants in both nature and agriculture and needs careful management as an important part of the global water cycle.
greywater – household waste water that has not come into contact with toilet waste; includes water from baths, showers, bathrooms, washing machines, laundry and kitchen sinks.
gross primary productivity - total carbon assimilation.
groundwater – water found below the ground surface in soil pore spaces and in the fractures of lithologic formations; usually in porous rocks, soil, or underground aquifers.
growth – increase in size, weight, power etc.
H
habitat - an ecological or environmental area that is inhabited by a particular species.
hard waste - household garbage which is not normally accepted into rubbish bins by local councils, e.g. old stoves, mattresses.
heat – energy derived from the motion of molecules; a form of energy into which all other forms of energy may be degraded.
herbicide – a chemical that kills or inhibits the growth of a plant.
herbivory - predation in which an organism, known as a herbivore, consumes principally autotrophs such as plants, algae and photosynthesizing bacteria.
heterotroph (chemoorganotrophy) - an organism that requires organic substrates to obtain its carbon for growth and development.
hierarchy – an organisation of parts in which control from the top (generally with few parts), proceeds through a series of levels (ranks) to the bottom (generally of many parts) cf. heterarchy.
high-density polyethylene (HDPE) - a member of the polyethylene family of plastics, used to make products such as milk bottles, pipes and shopping bags. HDPE may be coloured or opaque.
homeostasis - the property of either an open system or a closed system, especially a living organism, that regulates its internal environment so as to maintain a stable, constant condition.
homoclime – a region with the same climate as the one under investigation.
horsepower (hp) – a unit of power equal to 745.7 watts.
Horton overland flow - the tendency of water to flow horizontally across land surfaces when rainfall has exceeded infiltration capacity and depression storage capacity.
house energy rating - an assessment of the energy efficiency of residential house or unit designs using a 5 star scale.
household metabolism - the passage of food, energy, water, goods, and waste through the household unit in a similar way to the metabolic activity of an organism cf. industrial metabolism.
human equivalent (He) - the approximate human daily energy requirement of 12,500 kJ (about 3.47 kWh), or the approximate human energy-generating capacity at basal metabolic rate, about 80 watts. A 100 watt light bulb therefore runs at 1.25 He.
humus - semi-persistent organic matter in soil that can no longer be recognised as tissue, lending the soil a dark brown or black colouration.
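A minimal Python sketch of the He arithmetic (illustrative only, taking 1 He ≈ 80 W as quoted above):

    # Express an appliance's power draw in human equivalents (He),
    # with 1 He taken as 80 watts at basal metabolic rate.
    def human_equivalents(watts, he_watts=80.0):
        return watts / he_watts

    print(human_equivalents(100.0))  # 1.25 He for a 100 W light bulb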
hydrocarbons - chemicals made up of carbon and hydrogen that are found in raw materials such as petroleum, coal and natural gas, and derived products such as plastics.
hydroelectric power - the electrical power generated using the power of falling water.
hydrological cycle (water cycle) - the continuous natural cycle of water: evaporation and transpiration into the atmosphere, condensation and precipitation (rain and snow), and flow back to the ocean (e.g. via rivers).
hydrosphere - all the Earth's water; this would include water found in the sea, streams, lakes and other waterbodies, the soil, groundwater, and in the air.
I
incineration - combustion (by chemical oxidation) of waste material to treat or dispose of that waste material.
indicator species - any biological species that defines a trait or characteristic of the environment.
indicators – quantitative markers for monitoring progress towards desired goals.
industrial agriculture - a form of modern farming that involves industrialized production of livestock, poultry, fish, and crops.
industrial ecology (term introduced by Harry Zvi Evan, 1973) - the observation that nature produces no waste and therefore provides an example of sustainable waste management. Natural Capitalism espouses industrial ecology as one of its four pillars, together with energy conservation, material conservation, and redefinition of commodity markets and product stewardship in terms of a service economy.
Industrial Revolution - a period in the late 18th and early 19th centuries when major changes in agriculture, manufacturing, and transportation had a profound effect on socioeconomic and cultural conditions.
infiltration - the process by which water on the ground surface enters the soil and moves below the topsoil to the plant roots and beyond.
insecticide - a pesticide used to control insects in all developmental forms.
in-stream - the use of freshwater where it occurs, usually within a river or stream; it includes hydroelectricity, recreation, tourism, scientific and cultural uses, ecosystem maintenance, and dilution of waste.
integrated pest management (IPM) - an ecologically based pest control strategy that uses an array of complementary methods: natural predators and parasites, pest-resistant varieties, cultural practices, biological controls, various physical techniques, and the strategic use of pesticides. IPM attempts to minimise chemical use by combining several pest control options; its goal is not to eliminate all pests but to reduce pest populations to acceptable levels while disrupting natural mortality factors as little as possible.
integrated product life-cycle management - management of all phases of goods and services to be environmentally friendly and sustainable.
intercropping - the agricultural practice of cultivating two or more crops in the same space at the same time.
inter-generational equity – the intention to leave the world in the best possible condition for future generations.
Intergovernmental Panel on Climate Change (IPCC) - the IPCC was established in 1988 by the World Meteorological Organization and the UN Environment Programme to provide the scientific and technical foundation for the United Nations Framework Convention on Climate Change (UNFCCC), primarily through the publication of periodic assessment reports.
internal water footprint – the water embodied in goods produced within a country (although these may be subsequently exported) cf. external water footprint.
intrinsic value – the value of something that is independent of its utility.
irrigation index (Ii) – an efficiency indicator showing the degree of match between applied and used water. The ideal rating is 1; an Ii of 1.5 means an oversupply of water by 50%.
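A minimal Python sketch of the index (the function name and units are illustrative):

    # Irrigation index: applied water over water actually used by the
    # crop; 1.0 is an ideal match, 1.5 means 50% oversupply.
    def irrigation_index(applied_ml, used_ml):
        return applied_ml / used_ml

    print(irrigation_index(15.0, 10.0))  # 1.5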
irrigation scheduling – watering plants according to their needs.
irrigation – the artificial application of water to soil or land to assist the growing of crops; an important component of agriculture developed independently across many cultures.
ISO 14001 - the international standard for companies seeking to certify their environmental management system. The International Organization for Standardization (ISO) 14001 standard, first published in 1996, specifies the requirements for an environmental management system in organizations (companies and institutions), with the goals of minimising harmful effects on the environment and continual improvement of environmental performance.
J
joule (J) – the basic unit of energy; the equivalent of 1 watt of power radiated or dissipated for 1 second. Natural gas consumption is usually measured in megajoules (MJ), where 1 MJ = 1,000,000 J. On large accounts it may be measured in gigajoules (GJ), where 1 GJ = 1,000,000,000 J.
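The unit conversions in a minimal Python sketch (illustrative only):

    # Convert joules to the megajoule and gigajoule quantities used on
    # gas accounts: 1 MJ = 1e6 J, 1 GJ = 1e9 J.
    def to_megajoules(joules):
        return joules / 1_000_000

    def to_gigajoules(joules):
        return joules / 1_000_000_000

    print(to_megajoules(2.5e9), to_gigajoules(2.5e9))  # 2500.0 MJ, 2.5 GJ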
K
kerbside collection - collection of household recyclable materials (separated or co-mingled) that are left at the kerbside for collection by local council services.
keystone species - a species that has a disproportionate effect on its environment relative to its abundance, affecting many other organisms in an ecosystem and helping to determine the types and numbers of various other species in a community.
Kyoto Protocol - an international agreement adopted in December 1997 in Kyoto, Japan. The Protocol sets binding emission targets for developed countries that would reduce their emissions on average 5.2 percent below 1990 levels.
L
land use, land-use change and forestry (LULUCF) - land uses and land-use changes can act either as sinks or as emission sources. It is estimated that approximately one-fifth of global emissions result from LULUCF activities. The Kyoto Protocol allows parties to receive emissions credit for certain LULUCF activities that reduce net emissions.
landfill (dump or tip, historically a midden) - a site for the disposal of waste materials by burial, the oldest form of waste treatment; solid waste disposal in which refuse is buried between layers of soil, a method often used to reclaim low-lying ground. The word is sometimes used as a noun to refer to the waste itself.
landfill gas – the gas emitted from biodegrading waste in landfill, including CO2, CH4, and small amounts of nitrogen and oxygen, with traces of toluene, benzene and vinyl chloride.
landfill levy - a levy applied at differential rates to municipal, commercial and industrial, and prescribed wastes disposed to licensed landfills, the levies being used to foster the environmentally sustainable use of resources and best practice in waste management.
landfill prohibition - the banning of a certain material or product type from disposal to landfill; occurs occasionally, for example, where a preferable waste management option is available.
land use planning - a branch of public policy which encompasses various disciplines which seek to order and regulate the use of land in an efficient and ethical way.
leachate (waste) - the mixture of water and dissolved solids (possibly toxic) that accumulates as water passes through waste and collects at the bottom of a landfill site.
leaching – the movement of chemicals in the upper layers of soil into lower layers or into groundwater by being dissolved in water.
leaf area index (LAI) – the ratio of photosynthetic leaf area to ground area covered (optimal for photosynthesis = 3-5). LAI is often optimised by shifts in leaf angle, a form of solar tracking.
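A minimal Python sketch of the ratio (illustrative values):

    # Leaf area index: one-sided photosynthetic leaf area per unit of
    # ground area covered (dimensionless).
    def leaf_area_index(leaf_area_m2, ground_area_m2):
        return leaf_area_m2 / ground_area_m2

    print(leaf_area_index(40.0, 10.0))  # 4.0, within the 3-5 optimum quoted above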
level (scale, context or framework) – a context, frame of reference or degree of organisation within an integrated system. A level may or may not be spatially delimited.
life cycle (of a product) - all stages of a product's development, from raw materials and manufacturing through to consumption and ultimate disposal.
Life Cycle Analysis (LCA) - an objective process to evaluate the environmental impacts associated with a product, process, or activity. A means of identifying resource use and waste released to the environment, and to assess management options.
life support systems - according to the World Conservation Union (IUCN), the biophysical processes "that sustain the productivity, adaptability and capacity for renewal of lands, waters, and / or the biosphere as a whole."
lilacwater – recycled water that is unsuitable for drinking.
linear low-density polyethylene - a member of the polyolefin family of plastics. It is a strong and flexible plastic and usually used in film for packaging, bags and for industrial products such as pressure pipe.
linear metabolism - direct conversion of resources into wastes that are often sent directly to landfill.
lithosphere - the solid outermost shell of a rocky planet.
loam - a soil composed of sand, silt, and clay in relatively even concentration (about 40-40-20% concentration respectively), considered ideal for gardening and agricultural uses.
locally existing capacity - the total ecological production that is found within a country's territories. It is usually expressed in hectares based on world average productivity.
low-density polyethylene - a member of the polyolefin family of plastics. It is a flexible material and usually used as film for packaging or as bags.
low entropy energy - high-quality energy; energy that is concentrated and available. Electricity is considered the energy carrier with the lowest entropy (i.e. highest quality) as it can be transformed into mechanical energy at efficiency rates well above 90%. In contrast, fossil fuel chemical energy can only be converted into mechanical energy at a typical efficiency rate of 25% (cars) to 50% (modern power plants). The chemical energy of biomass is of lower quality.
M
magma - molten rock that forms beneath the surface of the Earth (or any other terrestrial planet), often collecting in a magma chamber and sometimes ejected by volcanoes.
manure - organic matter used as fertilizer in agriculture.
market benefits - benefits of a climate policy that can be measured in terms of avoided market impacts such as changes in resource productivity (e.g., lower agricultural yields, scarcer water resources) and damages to human-built environment (e.g., coastal flooding due to sea-level rise).
material flow – the cycling of materials, which is driven by the flow of energy.
material identification - words, numbers or symbols used to designate composition of components of a product or packaging. Note: a material identification symbol does not indicate whether an item can be recycled.
materials recovery facility (MRF) - a centre for the reception and transfer of materials recovered from the waste stream. At an MRF, materials are also sorted by type and treated (e.g. cleaned, compressed).
Mauna Loa record - the record of measurement of atmospheric CO2 concentrations taken at Mauna Loa Observatory, Mauna Loa, Hawaii, since March 1958. This record shows the continuing increase in average annual atmospheric CO2 concentrations.
maximum soil water deficit – the amount of water stored in the soil that is readily available to plants.
megadiverse countries – the 17 countries that are home to the largest fraction of wild species (Australia is one such).
microorganism – an organism visible only through a microscope.
Middle East – the region comprising Bahrain, Islamic Rep. Iran, Iraq, Israel, Jordan, Kuwait, Lebanon, Oman, Qatar, Saudi Arabia, Syria, the United Arab Emirates and Yemen.
mitigation hierarchy - a tool that aims to help management of biodiversity risk and is commonly applied in Environmental Impact Assessments. It includes a hierarchy of steps (but is not limited to): avoidance, minimisation, rehabilitation, restoration, and offset.
mobile garbage bin - a wheeled kerbside container for the collection of garbage or other materials.
monoculture - the practice of producing or growing one single crop over a wide area.
Montreal Protocol – an international treaty signed in 1987, designed to protect the ozone layer by phasing out the production of numerous substances responsible for ozone depletion, especially CFCs.
mortality rate – generally understood as the total number of deaths per 1,000 people of a given age group.
mulch - any composted or non-composted organic material, excluding plastic, that is suitable for placing on soil surfaces to restrict moisture loss from the soil and to provide a source of nutrients to the soil.
municipal waste - solid waste generated from domestic premises (garbage and hard waste) and council activities such as street sweeping, litter and street tree lopping. Also includes waste dropped at transfer stations and construction waste from owner/occupier renovations.
N
National Packaging Covenant - a self-regulatory agreement between packaging industries and government.
natural capital - the existing air, water, land and energy resources from which all resources derive; natural resources and ecological processes that are the equivalent of financial capital. Its main functions include resource production (such as fish, timber or cereals), waste assimilation (such as CO2 absorption and sewage decomposition), and life support services (UV protection, biodiversity, water cleansing, climate stability); the environmental services that must be maintained so that human development can be sustainable.
natural resources - naturally occurring substances that are considered valuable in their relatively unmodified (natural) form.
natural selection - the process by which favorable heritable traits become more common in successive generations of a population of reproducing organisms, and unfavorable heritable traits become less common.
nature-positive - a global societal goal to halt and reverse nature loss by 2030, measured from a 2020 baseline, and with the aim of achieving full recovery by 2050.
neighbourhood environment improvement plan - plans developed by a local community including residents, special interest groups, local government, local industry and government agencies.
nematocide – a chemical that kills nematodes.
net primary production - the energy or biomass content of plant material that has accumulated in an ecosystem over a period of time through photosynthesis. It is the amount of energy left after subtracting the respiration of primary producers (mostly plants) from the total amount of solar energy that is fixed biologically; gross primary productivity minus respiratory losses (this is the carbon gain).
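A minimal Python sketch of the subtraction (illustrative units of g C/m2/yr):

    # Net primary production: gross primary production minus the
    # respiration of the primary producers themselves.
    def net_primary_production(gpp, autotrophic_respiration):
        return gpp - autotrophic_respiration

    print(net_primary_production(120.0, 60.0))  # 60.0 g C/m2/yr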
nickel cadmium batteries - batteries typically used in appliances such as power tools and mobile phones. Cadmium is a heavy metal that poses risk to human and ecosystem health.
noise pollution (environmental noise) - displeasing human or machine created sound that disrupts the activity or happiness of human or animal life.
nonpoint source pollution - water pollution affecting a water body from diffuse sources, rather than a point source which discharges to a water body at a single location.
no-till farming - considered a kind of conservation tillage system and is sometimes called zero tillage.
no net loss - biodiversity policies that aim to neutralise biodiversity loss, defined relative to an appropriate reference scenario; it is the point at which project-related impacts on biodiversity are balanced by measures taken to avoid and minimise the project’s impacts.
non-ferrous metals - those metals that contain little or no iron, e.g. copper, brass and bronze.
Non Government Organisation (NGO) - a not-for-profit or community-based organisation.
nutrients – chemicals required for the growth of organisms. Phosphorus, nitrogen and potassium are major plant nutrients but there are also many trace elements, elements that are needed in small quantities for the growing and developing of animal and plant life.
O
ocean acidification - the ongoing reduction in the pH of the Earth's oceans, caused by their uptake of anthropogenic carbon dioxide from the atmosphere.
Oceania - the islands of the southern, western, and central Pacific Ocean, including Melanesia, Micronesia, and Polynesia. Sometimes extended to encompass Australia, New Zealand, and Maritime Southeast Asia.
old growth forest - an area of forest that has attained great age and so exhibits unique biological features; forest dominated by mature trees, with little or no evidence of disturbance such as logging, ground clearing or building.
omnivore - a species of animal that eats both plants and animals as its primary food source.
open-pit mining (opencast mining, open-cut mining) - a method of extracting rock or minerals from the earth by their removal from an open pit or borrow.
organic agriculture - a holistic production management system that avoids the use of synthetic fertilisers, pesticides and GM organisms, minimises pollution of air, soil and water, and optimises the health and productivity of interdependent communities of plants, animals and people.
organic gardening – gardening that follows, in general principle, the philosophy of organic agriculture.
organic – derived from a living organism.
organics - plant or animal matter originating from domestic or industrial sources, e.g. grass clippings, tree prunings, food waste.
overshoot - growth beyond an area's carrying capacity; ecological deficit occurs when human consumption and waste production exceed the capacity of the Earth to create new resources and absorb waste. During overshoot, natural capital is being liquidated to support current use so the Earth's ability to support future life declines.
P
patterns in nature - visible regularities of form found in the natural world.
pay-by-weight systems - financial approaches to managing waste that charge prices according to the quantity of waste collected, rather than a price per pick-up or fixed annual charge, as typically applied to households for kerbside services. Pay-by-weight systems may provide an incentive to reduce waste generation.
per capita consumption - the average amount of commodity used per person.
Persistent organic pollutants (POPs) - organic compounds that are resistant to environmental degradation through chemical, biological, and photolytic processes.
pervious surface – one which can be penetrated by air and water.
pesticide - any substance or mixture of substances intended for preventing, destroying or controlling any pest. This includes substances intended for use as a plant growth regulator, defoliant, desiccant, or agent for thinning fruit or preventing the premature fall of fruit, and substances applied to crops either before or after harvest to protect the commodity from deterioration during storage and transport. (Food and Agriculture Organization of the United Nations, 2003).
photosynthesis – the transformation of radiant energy to chemical energy by plants; the manufacture by plants of carbohydrates from carbon dioxide and water. The reaction is driven by energy from sunlight, catalysed by chlorophyll and releases oxygen as a byproduct. The capture of the Sun's energy (primary production) to power all life on Earth (consumption).
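The process described above can be summarised by the standard overall equation for oxygenic photosynthesis, shown here for reference:

```latex
\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
\;\xrightarrow{\text{light, chlorophyll}}\;
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]
```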
photovoltaic - the direct conversion of light into electricity.
phytoplankton – plant plankton; cf. plankton.
plankton – mostly microscopic animal and plant life suspended in water and a valuable food source for animals cf. Phytoplankton.
plant quality - a standard of plant appearance or yield.
plastic - One of many high-polymeric substances, including both natural and synthetic products, but excluding rubbers. At some stage in its manufacture every plastic is capable of flowing, under heat and pressure, if necessary, into the desired final shape.
Polluter Pays Principle (PPP) - the principle that producers of pollution should in some way compensate others for the effects of their pollution.
polyethylene terephthalate (PET) – a clear, tough, light and shatterproof type of plastic, used to make products such as soft drink bottles, film packaging and fabrics.
polypropylene (PP) - a member of the polyolefin family of plastics. PP is light, rigid and glossy and is used to make products such as washing machine agitators, clear film packaging, carpet fibres and housewares.
polystyrene (PS) - a member of the styrene family of plastics. PS is easy to mould and is used to make refrigerator and washing machine components. It can be foamed to make single use packaging, such as cups, meat and produce trays.
polyvinyl chloride (PVC) - a member of the vinyl family of plastics. PVC can be clear, flexible or rigid and is used to make products such as fruit juice bottles, credit cards, pipes and hoses.
postconsumer material or waste - material or product that has served its intended purpose and has been discarded for disposal or recovery. This includes returns of material from the distribution chain; waste that is collected and sorted after use; kerbside waste cf. pre-consumer waste.
potable – safe to drink.
power - the rate at which work is done; electrically, power = current × voltage (P = IV).
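As a worked illustration of the formula above (the current and voltage values are invented for the example):

```latex
\[
P = IV
\qquad
I = 10\,\mathrm{A},\; V = 230\,\mathrm{V}
\;\Rightarrow\;
P = 10 \times 230 = 2300\,\mathrm{W} = 2.3\,\mathrm{kW}
\]
```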
Precautionary Principle – where there are threats of serious irreversible environmental damage, lack of full scientific certainty should not be used as a reason for introducing measures to prevent that degradation (Rio Declaration).
precipitation – (weather) any liquid or solid water particles that fall from the atmosphere to the Earth's surface; includes drizzle, rain, snow, snow pellets, ice crystals, ice pellets and hail.
preconsumer material or waste - material diverted to the waste stream during a manufacturing process; waste from manufacture and production.
pre-industrial - for the purposes of the IPCC this is defined as 1750.
prescribed waste and prescribed industrial waste - Those wastes listed in the Environment Protection (Prescribed Waste) Regulations 1998 and subject to requirements under the industrial waste management policy 2000. Prescribed wastes carry special handling, storage, transport and often licensing requirements, and attract substantially higher disposal levies than non-prescribed solid wastes.
primary productivity - the rate at which energy is fixed by plants.
producer responsibility – the legal responsibilities of producers/manufacturers for the full life of their products.
producer – (ecology) a plant that is able to produce its own food from inorganic substances; (energetics) an organism or process that generates concentrated energy from sunlight beyond its own needs.
product stewardship – the principle of shared responsibility by all sectors involved in the manufacture, distribution, use and disposal of products for the consequences of these activities; manufacturing responsibility extending to the entire life of the product.
product – a thing produced by labour; mostly the material items we buy in shops; (ecology) the results of photosynthesis.
productivity (ecology) - the rate at which radiant energy is used by producers to form organic substances as food for consumers.
provisioning services – one of the major ecosystem services: the products obtained from ecosystems e.g. genetic resources, food, fibre and fresh water.
pyrolysis - advanced thermal technology involving the thermal decomposition of organic compounds in the complete absence of oxygen under pressure and at elevated temperature.
R
radiative forcing - changes in the energy balance of the earth-atmosphere system in response to a change in factors such as greenhouse gases, land-use change, or solar radiation. Positive radiative forcing increases the temperature of the lower atmosphere, which in turn increases temperatures at the Earth's surface. Negative radiative forcing cools the lower atmosphere. Radiative forcing is most commonly measured in units of watts per square meter (W/m2).
rain garden – an engineered area for the collection, infiltration and evapotranspiration of rainwater runoff, mostly from impervious surfaces; it reduces rain runoff by allowing stormwater to soak into the ground (as opposed to flowing into storm drains and surface waters, which can cause erosion, water pollution, flooding, and diminished groundwater). They can also absorb water contaminants that would otherwise end up in water bodies. The terminology arose in Maryland, USA in the 1990s as a more marketable expression for bioremediation.
rainwater harvesting – collecting rainwater either in storages or the soil mostly close to where it falls; the attempt to increase rainwater productivity by storing it in pondages, wetlands etc., and helping to avoid the need for infrastructure to bring water from elsewhere. Practiced on a large scale upstream this reduces available water downstream.
rangeland – a region where grazing or browsing livestock is the main land use.
raw materials - materials that are extracted from the ground and processed e.g. bauxite is processed into aluminium.
reclaimed water - water taken from a waste (effluent) stream and purified to a level suitable for further use.
recovered material – (waste) material that would have otherwise been disposed of as waste or used for energy recovery, but has instead been collected and recovered (reclaimed) as a material input thus avoiding the use of new primary materials.
recovery rate – (waste) the recovery rate is the percentage of materials consumed that is recovered for recycling.
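A minimal sketch of the recovery-rate calculation defined above; the function name and the tonnages are illustrative, not figures from the source:

```python
# Recovery rate: percentage of materials consumed that is recovered for
# recycling. The tonnages passed in below are invented example values.
def recovery_rate(tonnes_recovered: float, tonnes_consumed: float) -> float:
    """Percentage of materials consumed that is recovered for recycling."""
    if tonnes_consumed <= 0:
        raise ValueError("tonnes_consumed must be positive")
    return 100.0 * tonnes_recovered / tonnes_consumed

print(recovery_rate(45.0, 120.0))  # -> 37.5 (per cent)
```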
recyclables – strictly, all materials that may be recycled, but this may include the recyclable containers and paper/cardboard component of kerbside waste (excluding garden organics).
recycled content - proportion, by mass, of recycled material in a product or packaging. Only pre-consumer and post-consumer materials are considered as recycled content.
recycled material – see recovered material.
recycled water – treated stormwater, greywater or blackwater suitable for uses like toilet flushing, irrigation, industry etc. It is non-drinking water and is indicated using a lilac non-drinking label.
recycling - a wide range of activities, including collection, sorting, reprocessing and manufacture of products into new goods.
reforestation – the direct human conversion of non-forested land to forested land through planting, seeding or promotion of natural seed sources, on land that was once forested but no longer so. According to the language of the Kyoto Protocol, for the first commitment period (2008–2012), reforestation activities are limited to reforestation occurring on lands that did not contain forest at the start of 1990; replanting of forests on lands that have recently been harvested.
regulating services – (sustainability) the benefits obtained from the regulation of ecosystem processes including, for example, the regulation of climate, water or disease.
renewable energy - any source of energy that can be used without depleting its reserves. These sources include sunlight (solar energy) and other sources such as wind, wave, biomass, geothermal and hydro energy.
renewable energy certificates - Market trading mechanisms created through the Renewable Energy (Electricity) Act 2000 in connection with the commonwealth government's mandatory renewable energy target. The certificates provide a 'premium' revenue stream for electricity generated from renewable sources.
reprocessing – (waste) changing the physical structure and properties of a waste material that would otherwise have been sent to landfill, in order to add financial value to the processed material; this may involve a range of technologies including composting, anaerobic digestion and energy from waste technologies such as pyrolysis, gasification and incineration.
residual waste – (waste) waste that remains after the separation of recyclable materials (including green waste).
resource flow - the totality of changes in multiple resource stocks, or at least any pair of them, over a specified period of time.
resource intensity – ratio of resource consumption relative to its economic or physical output; for example, litres of water used per dollar spent, or litres of water used per tonne of aluminium produced. At the national level, energy intensity is the ratio of total primary energy consumption of the country to either the gross domestic product, or the physical output (total goods produced).
resource productivity – the output obtained for a given resource input.
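A short sketch showing that resource intensity and resource productivity, as defined in the two entries above, are reciprocals of one another; the water and output figures are invented for the example:

```python
# Illustrative values only: water input and physical output of a process.
litres_used = 5000.0   # resource input (litres of water)
tonnes_out = 2.0       # physical output (tonnes of product)

intensity = litres_used / tonnes_out      # resource intensity, L per tonne
productivity = tonnes_out / litres_used   # resource productivity, tonnes per L

print(intensity)      # 2500.0 litres per tonne
print(productivity)   # 0.0004 tonnes per litre
# One measure is the inverse of the other.
assert abs(intensity * productivity - 1.0) < 1e-12
```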
resource recovery – (waste) the process of obtaining matter or energy from discarded materials.
resource stock - the total amount of a resource often related to resource flow (the amount of resources harvested or used per unit of time). To harvest a resource stock sustainably, the harvest must not exceed the net production of the stock. Stocks are measured in mass, volume, or energy and flows in mass, volume, or energy per unit of time.
respiration – (biology) uptake by a living organism of oxygen from the air (or water), which is then used to oxidise organic matter or food. The outputs of this oxidation are usually CO2 and H2O; the metabolic process by which organisms meet their internal energy needs and release CO2.
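The oxidation described above can be summarised by the standard overall equation for the aerobic respiration of glucose, the net reverse of the photosynthesis equation given earlier:

```latex
\[
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\;\longrightarrow\;
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy (ATP and heat)}
\]
```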
retail therapy – using shopping to obtain a ‘lift’ to make up for other things lacking in our lives.
retrofit - to replace existing items with updated items.
reuse - the second pillar of the waste hierarchy - recovering value from a discarded resource without reprocessing or remanufacture, e.g. clothes sold through opportunity shops strictly represent a form of re-use rather than recycling.
risk – the probability of a (negative) occurrence.
S
salinisation – (ecology) the process by which land becomes salt-affected.
salinity – (ecology) salt in water and soils, generally in the context of human activity such as clearing and planting for annual crops rather than perennial trees and shrubs. Can make soils infertile.
scale – the physical dimensions, in either space or time, of phenomena or events; cf. a level which may or may not have a scale.
sectors – (economics) economic groupings used to generalise patterns of expenditure and use.
sediment – (ecology) soil or other particles that settle to the bottom of water bodies.
self-organisation – the process by which systems use energy to develop structure and organisation.
sentinel indicator – (ecology) an indicator that captures the essence of the process of change affecting a broad area of interest and which is also easily communicated.
septic sewage – sewage in which anaerobic respiration is taking place characterised by a blackish colour and smell of hydrogen sulphide.
septic tank - a type of sedimentation tank in which the sludge is retained long enough for the organic content to undergo anaerobic digestion. Typically used for receiving the sewage from houses and other premises that are too isolated for connection to a sewer.
sequestration – (global warming) the removal of carbon dioxide from the Earth's atmosphere and storage in a sink, as when trees absorb CO2 in photosynthesis and store it in their tissues.
sewage - water and raw effluent disposed through toilets, kitchens and bathrooms. Includes water-borne wastes from domestic uses of water from households, or similar uses in trade or industry.
sewer - a pipe conveying sewage.
sewerage - a system of pipes and mechanical appliances for the collection and transportation of domestic and industrial sewage.
sewerage system – sewage system infrastructure: the network of pipes, pumping stations and treatment plants used to collect, transport, treat and discharge sewage.
sewer-mining - tapping directly into a sewer (either before or after a sewage treatment plant) and extracting wastewater for treatment and use.
shredder flock - the residue from shredded car bodies, whitegoods and the like.
Silent Spring - environmental science book by Rachel Carson published in 1962 that inspired the environmental movement and later led to the creation of the U.S. Environmental Protection Agency in 1970.
simple living - a lifestyle individuals may pursue for a variety of motivations, such as spirituality, health, or ecology. Others may choose simple living for reasons of social justice or a rejection of consumerism. Some may emphasise an explicit rejection of "westernised values", while others choose to live more simply for reasons of personal taste, a sense of fairness or for personal economy. Simple living as a concept is distinguished from the simple lifestyles of those living in conditions of poverty in that its proponents are consciously choosing not to focus on wealth directly tied to money or cash-based economics.
sinks - processes or places that remove or store gases, solutes or solids; any process, activity or mechanism that results in the net removal of greenhouse gases, aerosols, or precursors of greenhouse gases from the atmosphere.
Slow Food – the slow food movement was founded in Italy in 1986 by Carlo Petrini as a response to the negative impact of multinational food industries. Slow Food is a counteracting force to fast food as it encourages using local seasonal produce, restoring time-honoured methods of production and preparation, and sharing food at communal tables. Slow Food encourages environmentally sustainable production, ethical treatment of animals and social justice. Gatherings of Slow Food supporters are called convivia; Victoria has 11 of these. Slow Food members seek to defend biodiversity in our food supply, to better appreciate how our lives can be improved by understanding the sensation of taste, and to celebrate the connection between plate and planet.
sludge - waste in a state between liquid and solid.
sodicity – (ecology) a measure of the sodium content of soil. Sodic soils are dispersible and are thus vulnerable to erosion.
sodification - the build-up in soils of sodium relative to potassium and magnesium in the composition of the exchangeable cations of the clay fraction.
soil acidification - a reduction in soil pH. Acidification can result in poorly structured or hard-setting topsoils that cannot support sufficient vegetation to prevent erosion.
soil bulk density – the relative density of a soil, measured by dividing the dry weight of a soil by its volume.
soil compaction – the degree of compression of soil. Heavy compaction can impede plant growth.
soil conditioner - any composted or non-composted material of organic origin that is produced or distributed for adding to soils; it includes 'soil amendment', 'soil additive', 'soil improver' and similar materials, but excludes polymers that do not biodegrade, such as plastics, rubbers, and coatings.
soil moisture deficit – the volume of water needed to raise the soil water content of the root zone to field capacity.
soil organic carbon (SOC) – the total organic carbon of a soil exclusive of carbon from undecayed plant and animal residue.
soil organic matter (SOM) – the organic fraction of the soil exclusive of undecayed plant and animal residues.
soil structure – the way soil particles are aggregated into aggregates or "crumbs"; important for the passage of air and water.
soil water storage – the total amount of water stored in the soil in the plant root zone.
solar energy - the radiant energy of the Sun, which can be converted into other forms of energy, such as heat or electricity.
solar power - electricity generated from solar radiation.
solid industrial waste - solid waste generated from commercial, industrial or trade activities, including waste from factories, offices, schools, universities, State and Federal government operations and commercial construction and demolition work. Excludes wastes that are prescribed under the Environment Protection Act 1970 and quarantine wastes.
solid inert waste - hard waste and dry vegetative material which has a negligible activity or effect on the environment, such as demolition material, concrete, bricks, plastic, glass, metals and shredded tyres.
solid waste - non-hazardous, non-prescribed solid waste materials ranging from municipal garbage to industrial waste; generally: domestic and municipal; commercial and industrial; construction and demolition; other.
source separation – (waste) separation of recyclable material from other waste at the point and time the waste is generated, i.e. at its source. This includes separation of recyclable material into its component categories, e.g. paper, glass, aluminium, and may include further separation within each category, e.g. paper into computer paper, office whites and newsprint; the practice of segregating materials into discrete materials streams prior to collection by or delivery to reprocessing facilities.
specialist species – those that can only thrive in a narrow range of environmental conditions and/or have a limited diet.
specific heat capacity – the amount of energy needed to increase the temperature of 1 kg of a substance by 1°C. It can be considered a measure of resistance to an increase in temperature and is important for energy saving; see the worked example after this list.
stakeholders - parties having an interest in a particular project or outcome.
State Environment Protection Policies - statutory instruments under the Environment Protection Act 1970 that identify beneficial uses of the environment that are to be protected, establish environmental indicators and objectives and define attainment programs to implement the policies.
State of the Environment reporting - a scientific assessment of environmental conditions, focusing on the impacts of human activities, their significance for the environment and social responses to the identified trends.
steady state – a constant pattern, e.g. a balance of inflows and outflows.
stormwater – rainfall that accumulates in natural or artificial systems after heavy rain; surface run-off or water sent to (stormwater) drains during heavy rain.
Strategic Environmental Assessment (SEA) - a system of incorporating environmental considerations into policies, plans and programs, especially in the EU.
sullage – domestic waste water from baths, basins, showers, laundries, kitchens and floor waste (but not from toilets).
Superfund – a United States federal government program designed to fund the cleanup of sites contaminated with hazardous substances and pollutants. It was established as the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA).
supporting services – (sustainability) ecosystem services that are necessary for the production of all other ecosystem services, e.g. biomass production, production of atmospheric oxygen, soil formation, nutrient and water cycling.
surface runoff – that part of rainfall passing out of an area into the drainage system.
suspended solids (SS) – solid particles suspended in water; used as an indicator of water quality.
sustainability - the Brundtland definition is 'Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs'.
sustainability covenant - under Section 49 of the Environment Protection Act 1970, a Sustainability Covenant is an agreement in which a person or body undertakes to increase the resource use efficiency and/or reduce the ecological impacts of activities, products, services and production processes. Parties can voluntarily enter into such agreements with EPA, or can be required to if they are declared by the Governor in Council, on the recommendation of EPA, to have potential for significant impact on the environment.
sustainability science - the multidisciplinary scientific study of sustainability, focusing especially on the quantitative dynamic interactions between nature and society. Its objective is a deeper and more fundamental understanding of the rapidly growing inter-dependence of the nature-society system and the intention to make this sustainable. It critically examines the tools used by sustainability accounting and the methods of sustainability governance.
sustainability triangle – a graphic indication of the action needed to stabilise CO2 levels below about 500 ppm. It shows stabilisation 'wedges' indicating savings made per year by the use of a particular strategy.
sustainable consumption (sustainable resource use) - a change to society's historical patterns of consumption and behaviour that enables consumers to satisfy their needs with better performing products or services that use fewer resources, cause less pollution and contribute to social progress worldwide.
sustainable development – see sustainability.
swale – an open channel transporting surface run-off to a drainage system, usually grassed; a swale promotes infiltration, the filtration of sediment by plants and ornamental interest.
system – a set of parts organised into a whole, usually processing a flow of energy.
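As a worked example of the specific heat capacity entry above, using the well-known value for water (about 4186 J per kg per °C; the masses and temperatures here are illustrative, not from the glossary's source):

```latex
\[
Q = m\,c\,\Delta T
\qquad
m = 1\,\mathrm{kg},\;
c \approx 4186\,\mathrm{J\,kg^{-1}\,^{\circ}C^{-1}},\;
\Delta T = 1\,^{\circ}\mathrm{C}
\;\Rightarrow\;
Q \approx 4186\,\mathrm{J}
\]
```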
T
take-back - a concept commonly associated with product stewardship, placing responsibility on brand-owners, retailers, manufacturers or other supply chain partners to accept products returned by consumers once they have reached the end of their useful life. Products may then be recycled, treated or sent to landfill.
technosphere – synthetic and composite components and materials formed by human activity. True technosphere materials, like plastics, are not biodegradable.
temperate – with moderate temperatures, weather, or climate; neither hot nor cold; mean annual temperature between 0 and 20°C.
thermal mass – (architecture) any mass that can absorb and store heat and can therefore be used to buffer temperature change. Concrete, bricks and tiles need a lot of heat energy to change their temperature and therefore have high thermal mass; timber has low thermal mass.
third pipe system – a third pipe, in addition to the standard water supply pipe and sewer disposal pipe, which carries recycled water for irrigation purposes.
threshold – (ecology) a point that, when crossed, can bring rapid and sometimes unpredictable change in a trend. An example would be the sudden altering of ocean currents due to the melting of ice at the poles.
topsoil – mostly fertile surface soil moved or introduced to topdress gardens, roadbanks, lawns etc.
total energy use – as applied in this book, the total of combined direct and indirect energy use.
total fertility rate – the number of children that, on average, a woman would have in her lifetime at present age-specific fertility rates. Calculated as the average number of children born per woman of every given age in a particular year and totalled for all ages.
total water use - in water accounting: distributed water use + self-extracted water use + reuse water use cf. water consumption; here used to mean total direct and indirect water use.
town water – water supplied by government or private enterprise and known as the mains or reticulated water supply.
transfer station – (waste) a facility allowing drop-off and consolidation of garbage and a wide range of recyclable materials. Transfer stations have become an integral part of municipal waste management, playing an important role in materials recovery and improving transportation economics associated with municipal waste disposal.
transgenic plant – a plant into which genetic material has been transferred by genetic engineering.
Triple Bottom Line – a form of sustainability accounting going beyond the financial 'bottom line' to consider the social and environmental as well as economic consequences of an organisation's activity; generally included with economic accounts. Term coined by John Elkington in 1994.
tropical – occurring in the tropics (the region on either side of the equator); hot and humid with a mean annual temperature greater than 20°C.
turbine - a machine for converting the heat energy in steam or high temperature gas into mechanical energy. In a turbine, a high velocity flow of steam or gas passes through successive rows of radial blades fastened to a central shaft.
U
United Nations - an international organisation based in New York and formed to promote international peace, security, and cooperation under a charter signed by 51 founding countries in San Francisco in 1945.
United Nations Framework Convention on Climate Change (UNFCCC) – the UNFCCC and the Convention on Biological Diversity (CBD) were established at the 1992 U.N. Conference on Environment and Development in Rio de Janeiro, Brazil. The Kyoto Protocol was then formulated by the UNFCCC and sets specific timelines and timetables for reducing industrialised nations' GHG emissions and allows some international trading in carbon credits.
upstream – those processes necessary before a particular activity is completed, e.g. for a manufactured product this would be the extraction and transport of materials etc. needed prior to the process of manufacture; cf. downstream.
urban heat island - the tendency for urban areas to have warmer air temperatures than the surrounding rural landscape, due to the low albedo of streets, sidewalks, parking lots, and buildings. These surfaces absorb solar radiation during the day and release it at night, resulting in higher night temperatures.
urban metabolism – the functional flow of materials and energy required by cities.
V
veloway - cycle track; cycleway; contrasts with freeway.
vinyl - a type of plastic (usually PVC) used to make products such as fruit juice bottles, credit cards, pipes and hoses.
virtual water - the volume of water required to produce a commodity or service. The term was coined by Professor J.A. Allan of the University of London in the early 1990s, though the concept is now more widely known as embedded (embodied) water.
visual waste audit - observing, estimating and recording data on waste streams and practices without physical weighing.
volatile organic compound (VOC) – molecules containing carbon and differing proportions of other elements such as hydrogen, oxygen, fluorine and chlorine. With sunlight and heat they form ground-level ozone.
volt - the unit of potential difference between two points is the volt (V) (commonly called voltage). One thousand volts equals 1 kilovolt (kV).
W
waste - any material (liquid, solid or gaseous) that is produced by domestic households and commercial, institutional, municipal or industrial organisations, and which cannot be collected and recycled in any way for further use. For solid wastes, this involves materials that currently go to landfills, even though some of the material is potentially recyclable.
waste analysis - the quantifying of different waste streams, recording and detailing of it as a proportion of the total waste stream, determining its destination and recording details of waste practices.
waste assessment - observing, measuring, and recording data and collecting and analysing waste samples. Some practitioners consider an assessment to be one where observations are carried out visually, without sorting and measuring individual streams (see visual waste audit).
waste audit - see waste assessment.
waste avoidance – the primary pillar of the waste hierarchy; avoidance works on the principle that the greatest gains result from efficiency-centred actions that remove or reduce the need to consume materials in the first place, but deliver the same outcome.
waste factors - (used in round-wood calculations) give the ratio of one cubic metre of round wood used per cubic metre (or tonne) of product.
waste generation - generation of unwanted materials including recyclables as well as garbage. Waste generation = materials recycled + waste to landfill.
waste hierarchy (waste management hierarchy) – a concept promoting waste avoidance ahead of recycling and disposal, often referred to in community education campaigns as 'reduce, reuse, recycle'. The waste hierarchy is recognised in the Environment Protection Act 1970, promoting management of wastes in the order of preference: avoidance, reuse, recycling, recovery of energy, treatment, containment, disposal.
waste management - practices and procedures that relate to how waste is dealt with.
waste minimisation - techniques to keep waste generation at a minimum level in order to divert materials from landfill and thereby reduce the requirement for waste collection, handling and disposal to landfill; recycling and other efforts made to reduce the amount of waste going into the waste stream.
waste reduction - measures to reduce the amount of waste generated by an individual, household or organisation.
waste stream - waste materials that are either of a particular type (e.g. 'timber waste stream') or produced by a particular source (e.g. 'C&I waste stream').
waste treatment - where some additional processing is undertaken of a particular waste. This may be done to reduce its toxicity, or to increase its degradability or compostability.
wastewater - used water; generally not suitable for drinking.
water consumption - in water accounting: distributed water use + self-extracted water use + reuse water use - distributed water supplied to other users - in-stream use (where applicable).
water cycle (hydrological cycle) - the passage of water between the oceans and waterbodies, land and atmosphere.
water entitlement - the entitlement, as defined in a statutory water plan, to a share of water from a water source.
Water Footprint - the total volume of freshwater that is required in a given period to perform a particular task or to produce the goods and services consumed at any level of the action hierarchy. Country water footprint is a concept introduced by Hoekstra in 2002 as a consumption-based indicator of water use in a country – the volume of water needed to produce the goods and services consumed by the inhabitants of a country.
water harvesting – see rainwater harvesting.
water intensity - volume of water used per unit of production or service delivery; this is generally further reduced to monetary unit return per given volume of water used. Essentially equivalent to water productivity.
water neutral – a scientifically based calculator for individuals, to be extended to cover the construction industry, the food and beverage sector and other corporations or organisations. The water offset calculators aimed at business and other organisations are being developed and will be launched with the Individual Water Offset Calculator.
water productivity – the efficiency of outcomes for the amount of water used; the quantity of water required to produce a given outcome. WP-field relates to crop output, e.g. kg of wheat produced per m3 of water. WP-basin relates to water productivity in the widest possible sense, including crop, fishery yield, environmental services etc. Increasing WP means obtaining increasing value from the available water.
water quality - the microbiological, biological, physical and chemical characteristics of water.
water resources - water in various forms, such as groundwater, surface water, snow and ice, at present in the land phase of the hydrological cycle—some parts may be renewable seasonally, but others may be effectively mined.
water restrictions - mandatory staged restrictions on the use of water, which are relative to water storage levels.
water trading - transactions involving water access entitlements or water allocations assigned to water access entitlements.
water treatment - the process of converting raw untreated water to a public water supply safe for human consumption; can involve, variously, screening, initial disinfection, clarification, filtration, pH correction and final disinfection.
water table – the upper level of water in saturated ground.
watershed – a water catchment area (North America) or drainage divide (non-American usage).
weather - the hourly/daily change in atmospheric conditions which over a longer period constitute the climate of a region; cf. climate.
weathering - the breaking down of rocks, soil, and minerals as well as wood and artificial materials through contact with the Earth's atmosphere, water, and biological organisms.
well-being – a context-dependent physical and mental condition determined by the presence of basic material for a good life, freedom and choice, health, good social relations, and security.
wetlands - areas of permanent or intermittent inundation, whether natural or artificial, with water that is static or flowing, fresh, brackish or salt, including areas of marine water not exceeding 6 m at low tide (adapted from the definition of the Ramsar Convention on Wetlands of International Importance). Engineered wetlands are becoming more frequent and are sometimes called constructed wetlands. In urban areas wetlands are sometimes referred to as the kidneys of a city.
whitegoods - household electrical appliances like refrigerators, washing machines, clothes dryers, and dishwashers.
wind energy - the kinetic energy present in the motion of the wind. Wind energy can be converted to mechanical or electrical energy. A traditional mechanical windmill can be used for pumping water or grinding grain. A modern electrical wind turbine converts the force of the wind to electrical energy for consumption on-site and/or export to the electricity grid.
wind turbines – see wind energy.
work – physical or mental effort; a force exerted for a distance; an energy transformation process which results in a change of concentration or form of energy.
Z
zero waste – turning waste into resource; the redesign of resource-use so that waste can ultimately be reduced to zero; ensuring that by-products are used elsewhere and goods are recycled, in emulation of the cycling of wastes in nature.
See also
Environmental science
Climate change acronyms
Glossary of climate change
List of environmental issues
List of sustainability topics
References
External links
Environmental Terminology Discovery Service — EEA
(multilingual environmental glossary in 28 languages: ar, bg, cs, da, de, el, en, es, et, eu, fi, fr, hu, is, it, lt, lv, mt, nl, no, pl, pt, ro, ru, sk, sl, sv, tr)
Environmental science
Wikipedia glossaries using unordered lists | Glossary of environmental science | [
"Environmental_science"
] | 22,439 | [
"nan"
] |
16,270,868 | https://en.wikipedia.org/wiki/Eta1%20Doradus |
Eta1 Doradus, Latinized from η1 Doradus, is a star in the southern constellation of Dorado. It is visible to the naked eye as a dim, white-hued star with an apparent visual magnitude of 5.72. This object is located approximately 335 light years distant from the Sun, based on parallax, and is drifting further away with a radial velocity of +18 km/s. It is circumpolar south of latitude 24°S.
This object is an A-type main-sequence star with a stellar classification of A0V. It is 94 million years old with a high rotation rate, showing a projected rotational velocity of 149 km/s. The star has 2.46 times the mass of the Sun and is radiating 49 times the Sun's luminosity from its photosphere at an effective temperature of 10,325 K. It is the southern pole star of Venus.
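As a plausibility check on the quoted luminosity and temperature, the Stefan-Boltzmann law L = 4πR²σT⁴ implies a radius of roughly two solar radii, typical for an A0V star. A minimal sketch (the constants are standard nominal values; the derived radius is an illustration, not a figure from this article):

```python
import math

# Invert the Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4, for the radius.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W
R_SUN = 6.957e8          # nominal solar radius, m

L = 49 * L_SUN           # luminosity quoted in the article
T = 10_325.0             # effective temperature quoted in the article, K

R = math.sqrt(L / (4 * math.pi * SIGMA * T**4))
print(f"implied radius ~ {R / R_SUN:.1f} R_sun")  # ~2.2 R_sun
```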
References
External links
Starry Night Pro, Version 5.8.4, Imaginova, 2004. www.starrynight.com
Dorado
A-type main-sequence stars
042525
Doradus, Eta1
028909
PD-66 00493
2194
Southern pole stars | Eta1 Doradus | [
"Astronomy"
] | 261 | [
"Dorado",
"Constellations"
] |
16,271,745 | https://en.wikipedia.org/wiki/Hushing | Hushing is an ancient and historic mining method using a flood or torrent of water to reveal mineral veins. The method was applied in several ways, both in prospecting for ores, and for their exploitation. Mineral veins are often hidden below soil and sub-soil, which must be stripped away to discover the ore veins. A flood of water is very effective in moving soil as well as working the ore deposits when combined with other methods such as fire-setting.
Hushing was used during the formation and expansion of the Roman Empire from the 1st century BC on to the end of the empire. It was also widely used later, and apparently survived into modern times in places where the cost of explosives was prohibitive. It was widely used in the United States, where it was known as "booming".
A variant known as hydraulic mining, in which jets or streams of water are used to break down deposits, especially of alluvial gold and alluvial tin, is still commonly used.
History
The method is well described by Pliny the Elder in Book XXXIII of his Naturalis Historia from the 1st century AD. He distinguishes the use of the method for prospecting for ore and use during mining itself. It was used during the Roman period for hydraulic mining of alluvial gold deposits, and in opencast vein mining, for removal of rock debris, created by mechanical attack and fire-setting. He describes how tanks and reservoirs are built near the suspected veins, filled with water from an aqueduct, and the water suddenly released from a sluice-gate onto the hillside below, scouring the soil away to reveal the bedrock and any veins occurring there.
Method
The power behind a large release of water is very great, especially if it forms a single water wave, and is well known as a strong force in coastal erosion and river erosion. Such a wave could be created by a sluice gate covering one end of the reservoir, possibly a permanent fixture such as a swinging flap or a rising gate. The size of the tank controlled the height of the wave and its volume. Hushing was most effective when used on steep ground such as the brow of a hill or mountain, the force of falling water lessening as the slope becomes smaller. The rate of attack would be controlled by the water supply, and perhaps more difficult the higher the deposit to be cleared.
If veins of ore were found using the method, then hushing could also remove the rock debris created when attacking the veins. Pliny also describes the way hillsides could be undermined, and then collapsed to release the ore-bearing material. The Romans developed the method into a sophisticated way of extracting large alluvial gold deposits such as those at Las Médulas in northern Spain, and for hard rock gold veins such as those at Dolaucothi in Wales. The development of the mine at Dolaucothi shows the versatility of the method in finding and then exploiting ore deposits.
There are the remains of numerous tanks and reservoirs still to be seen at the site; one example is a small tank built for prospection on the north side of the isolated opencast north of the main mine. It was presumably built to prospect the ground to one side of the opencast for traces of the gold-bearing veins extending to the north. It failed to find the veins here, so was abandoned. It probably precedes the construction of the 7 mile long aqueduct supplying the main site, and was fed by a small leat from a tributary of the river Cothi about a mile further north up the valley. The method could be applied to any ore type, and succeeded best in hilly terrain. The Romans were well experienced in building the long aqueducts needed to supply the large volumes of water needed by the method, and construction was probably directed by army engineers.
Earlier evidence
The earlier history of the method is obscure, although there is an intriguing reference by Strabo writing ca 25 BC in his Geographica, Book IV, Chapter 6, to gold extraction in the Val d'Aosta in the Alps. He describes the problem gold miners had with a local tribe because of the great volumes of water they had taken from the local river, reducing it to a trickle and so affecting the local farmers. Whether or not they used the water for hushing remains unknown, but it seems possible because the method requires large volumes of water to be operated. Later, when the Romans assumed control of the mining operations, the locals charged them for using the water. The tribe occupied the higher mountains and controlled the water sources, and had not yet been subdued by the Romans:
"The country of the Salassi has gold mines also, which in former times, when the Salassi were powerful, they kept possession of, just as they were also masters of the passes. The Durias River was of the greatest aid to them in their mining — I mean in washing the gold; and therefore, in making the water branch off to numerous places, they used to empty the common bed completely. But although this was helpful to the Salassi in their hunt for the gold, it distressed the people who farmed the plains below them, because their country was deprived of irrigation; for, since its bed was on favourable ground higher up, the river could give the country water. And for this reason both tribes were continually at war with each other. But after the Romans got the mastery, the Salassi were thrown out of their gold-works and country too; however, since they still held possession of the mountains, they sold water to the publicans who had contracted to work the gold mines; but on account of the greediness of the publicans, the Salassi were always in disagreement with them too."
The historian Polybius, who lived from 220 to 170 BC, was writing much earlier in The Histories (Book 34), and he records that gold mining in the Alpine region was so successful that the price of gold in Italy fell by a third during this period. From his description of large nuggets, and the find being made only two feet below the ground level, with deposits reaching down to 15 feet, it is likely to have been an alluvial deposit where water methods such as hushing would have been very effective. Modern attempts to identify the mines point to one especially large ancient gold mine at Bessa in Northern Italy. It appears to have been worked intensively in pre-Roman days and continued to expand with Roman involvement. The scale of the aqueducts there seems to support Strabo's comments.
Later examples
The technique appears to have been neglected through the medieval period, because Georgius Agricola, writing in the 16th century in his De re metallica, does not mention hushing at all, although he does describe many other uses of water power, especially for washing ore and driving watermills. However, the technique was used on a large scale in the lead mines of northern Britain from at least Elizabethan times onwards. The method was described in some detail by Westgarth Forster in his book A Treatise on a Section of the Strata from Newcastle upon Tyne to the Mountain of Cross Fell in Cumberland (1809), and also in the 1842 Royal Commission on Children in Mines in relation to children being used in the lead mines of the Pennines.
The remnants of hush gullies are visible at many places in the Pennines and at other locations such as the extensive lead mines at Cwmystwyth in Ceredigion, Wales, and at the Stiperstones in Shropshire. Another notable example is the Great Dun Fell hush gully near Cross Fell, Cumbria, probably formed in the Georgian era in the search for lead and silver. This gully is about 100 feet deep, carries a small stream, and is a prominent landmark on the bleak moors. The dams used to store the water are also often visible at the head of the stream.
Although the term "hushing" was not used in south-west England, there is a reference to the technique being used at Tregardock in North Cornwall, where in around 1580 mine adventurers used the method to work a lead-silver deposit, although lives were lost in the attempt. Phil Newman, writing in 2011, states that there is possible archaeological evidence for use of the technique at two sites on Dartmoor in Devon, in the form of channels running downhill that apparently originate from contour-following leats, though he says research is needed for confirmation.
In south-eastern Lancashire hushing was used to extract limestone from the glacial boulder clay so that it could be used to make lime for agriculture, mortar, plaster and limewash. Bennett notes leases of land for this purpose in the 17th and 18th centuries and remains can still be seen at sites like Shedden Clough. Hushing for limestone seems to have been limited to the eastern side of the Pennine ridge, between Burnley and the Cliviger Gorge, and probably occurred here because of the cost of obtaining supplies from further away, as well as the suitability of the boulder clay and the availability of water supplies.
The technique was also used during alluvial gold mining in Africa, at least until the 1930s, when it was described by Griffith in his book Alluvial Mining (2nd Ed., 1960). The water outlet could be controlled by an automatic system which allowed water to flow through the sluice gate when the overflow triggered a release mechanism.
See also
Dartmoor tin-mining
Derbyshire lead mining history
Dolaucothi Gold Mines
Mining in Cornwall
Placer mining
Roman engineering
Roman technology
Roman mining
Notes
References
Oliver Davies, Roman Mines in Europe, Clarendon Press (Oxford), 1935.
Jones G. D. B., I. J. Blakey, and E. C. F. MacPherson, Dolaucothi: the Roman aqueduct, Bulletin of the Board of Celtic Studies 19 (1960): 71-84 and plates III-V.
Lewis, P. R. and G. D. B. Jones, The Dolaucothi gold mines, I: the surface evidence, The Antiquaries Journal, 49, no. 2 (1969): 244-72.
Lewis, P. R. and G. D. B. Jones, Roman gold-mining in north-west Spain, Journal of Roman Studies 60 (1970): 169-85.
Lewis, P. R., The Ogofau Roman gold mines at Dolaucothi, The National Trust Year Book 1976-77 (1977).
Annels, A and Burnham, BC, The Dolaucothi Gold Mines, University of Wales, Cardiff, 3rd Ed (1995).
Hodge, A.T. (2001). Roman Aqueducts & Water Supply, 2nd ed. London: Duckworth.
Timberlake, S, Early leats and hushing remains: suggestions and disputes for Roman mining and prospection for lead, Bulletin of the Peak District Mines Historical Society, 15 (2004), 64 ff.
External links
Royal Commission on Children in Mines describes hushing in 1842
Roman technology
Hushing in Yorkshire mines
Great Dun Fell hush gulley
Hushing in Gunnerdale, Yorkshire
Roman gold mine with numerous aqueducts
Hushing as used at Cwmystwyth mine
Remains of hushing systems in Wales by Timberlake
Shedings at Shedden Clough
Shedden Clough Hushings
Traditional mining
History of mining
Roman aqueducts outside Rome
Aqueducts in the United Kingdom
Hydraulic engineering | Hushing | [
"Physics",
"Engineering",
"Environmental_science"
] | 2,336 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
16,273,092 | https://en.wikipedia.org/wiki/Pressure%20flow%20hypothesis | The pressure flow hypothesis, also known as the mass flow hypothesis, is the best-supported theory to explain the movement of sap through the phloem of plants. It was proposed in 1930 by Ernst Münch, a German plant physiologist.
Organic molecules such as sugars, amino acids, certain hormones, and messenger RNAs are known to be transported in the phloem through the cells called sieve tube elements. According to the hypothesis, the high concentration of organic substances, particularly sugar, inside the phloem at a source such as a leaf creates a diffusion gradient (osmotic gradient) that draws water into the cells from the adjacent xylem. This creates turgor pressure, also called hydrostatic pressure, in the phloem. The hypothesis states that this is why sap in plants flows from the sugar producers (sources) to sugar absorbers (sinks).
Sugar sources and sinks
A sugar source is any part of the plant that is producing or releasing sugar. During the plant's growth period, usually during the spring, storage organs such as the roots are sugar sources, and the plant's many growing areas are sugar sinks. After the growth period, when the meristems are dormant, the leaves are sources, and storage organs are sinks. Developing seed-bearing organs (such as fruit) are always sinks.
Mechanism
While the movement of water and minerals through the xylem is usually driven by negative pressures (tension), movement through the phloem is driven by turgor pressure and an osmotic pressure gradient between the source and the sink. Cells in a sugar source actively transport sucrose molecules into the companion cells. The sucrose then diffuses through the plasmodesmata from the companion cells to the sieve tube elements. As a result, the concentration of sucrose increases in the sieve tube elements. This causes water to move into the sieve tube element by osmosis, creating pressure that pushes the sap down the tube. In sugar sinks, cells actively transport sucrose out of the sieve tube elements, first to the apoplast and then to the symplast of the sink. The phloem sugar is consumed by cellular respiration or converted into starch, which is insoluble and exerts no osmotic effect. With much of the sucrose having been removed, the water exits the phloem by osmosis or is drawn by transpiration into nearby xylem vessels, lowering the turgor pressure within the phloem. The sucrose concentration in sieve tubes is typically 10–30% in the leaves but only 0.5% in the photosynthesising cells. The gradient of sugar from source to sink causes pressure flow through the sieve tube toward the sink. The presence of sieve plates greatly increases the resistance along the pathway, thereby generating and maintaining substantial pressure gradients in the sieve elements between source and sink.
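The osmotic pressures involved can be estimated from the sucrose concentrations quoted above using the van 't Hoff relation Π = cRT. The sketch below is an order-of-magnitude illustration only; treating a 10% solution as ideally dilute, and the choice of temperature, are simplifying assumptions, not claims from the source:

```python
# Order-of-magnitude estimate of the source-to-sink osmotic pressure
# difference using the van 't Hoff relation Pi = c*R*T.
R = 8.314          # gas constant, J mol^-1 K^-1
T = 293.0          # assumed temperature, K (~20 degrees C)
M_SUCROSE = 342.3  # molar mass of sucrose, g/mol

def osmotic_pressure_mpa(percent_w_v: float) -> float:
    """van 't Hoff osmotic pressure (MPa) of a sucrose solution given % w/v."""
    grams_per_litre = percent_w_v * 10.0               # 10% w/v = 100 g/L
    mol_per_m3 = grams_per_litre / M_SUCROSE * 1000.0  # mol/L -> mol/m^3
    return mol_per_m3 * R * T / 1e6                    # Pa -> MPa

source = osmotic_pressure_mpa(10.0)  # low end of the 10-30% quoted for leaves
sink = osmotic_pressure_mpa(0.5)     # 0.5% quoted for the sink end
print(f"source ~{source:.2f} MPa, sink ~{sink:.2f} MPa, "
      f"gradient ~{source - sink:.2f} MPa")
```

Even at the low end of the quoted range, the gradient comes out near 0.7 MPa, several times atmospheric pressure, which is consistent with the idea that this pressure difference can drive bulk flow through the sieve tubes.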
The movement in phloem is multi-directional, unlike in xylem cells, where the flow is upwards only. Because of this multi-directional flow, coupled with the fact that sap cannot easily move between adjacent sieve tubes, it is not unusual for sap in adjacent sieve tubes to flow in opposite directions.
Evidence
Various evidence supports the hypothesis. Firstly, there is an excretion of solution from the phloem when the stem is cut or punctured by the stylet of an aphid, indicating that the phloem sap is under pressure. Secondly, concentration gradients of organic solutes are proven to be present between the sink and the source. Additionally, when viruses or growth chemicals are applied to an actively photosynthesising leaf, they are translocated downwards to the roots. When applied to shaded leaves, such downward translocation of chemicals does not occur, showing that diffusion is not a possible process involved in translocation.
Criticisms
The hypothesis has been criticised. Some argue that mass flow is a passive process, even though sieve tube vessels are living cells supported by companion cells; thus, the hypothesis neglects the living nature of phloem. Moreover, amino acids and sugars (examples of organic solutes) are translocated at different rates, contrary to the hypothesis's assumption that all materials being transported would travel at a uniform speed.
One criticism of the pressure flow mechanism is that it does not explain the phenomenon of bidirectional movement, i.e. simultaneous movement of different substances in opposite directions. The phenomenon of bidirectional movement has been demonstrated by applying two different substances at the same time to the phloem of a stem at two different points, and following their longitudinal movement along the stem. If the mechanism of translocation operates according to the pressure flow hypothesis, bidirectional movement in a single sieve tube is not possible. Experiments to demonstrate bidirectional movement in a single sieve tube are technically very difficult to perform. Some experiments indicate that bidirectional movement may occur in a single sieve tube, whereas others do not. The bidirectional movement of solutes in the translocation process, and the fact that translocation is heavily affected by changes in environmental conditions like temperature and by metabolic inhibitors, are two defects of the hypothesis.
Other theories
Some plants appear not to load phloem by active transport. In these cases, a mechanism known as the polymer trap mechanism was proposed by Robert Turgeon. In this model, small sugars such as sucrose move into intermediary cells through narrow plasmodesmata, where they are polymerised to raffinose and other larger oligosaccharides. As larger molecules, they are unable to move back but can proceed through wider cell wall channels (plasmodesmata) into the sieve tube element.
This symplastic phloem loading is confined mostly to plants in tropical rainforests and is seen as more primitive. The actively transported apoplastic phloem loading is viewed as more advanced, as it is found in the later-evolved plants, particularly those in temperate and arid conditions. This mechanism may therefore have allowed plants to colonise the cooler locations.
References
Plant physiology | Pressure flow hypothesis | [
"Biology"
] | 1,272 | [
"Plant physiology",
"Plants"
] |
16,275,208 | https://en.wikipedia.org/wiki/Atmospheric%20window | An atmospheric window is a region of the electromagnetic spectrum that can pass through the atmosphere of Earth. The optical, infrared and radio windows comprise the three main atmospheric windows. The windows provide direct channels for Earth's surface to receive electromagnetic energy from the Sun, and for thermal radiation from the surface to leave to space. Atmospheric windows are useful for astronomy, remote sensing, telecommunications and other science and technology applications.
In the study of the greenhouse effect, the term atmospheric window may be limited to mean the infrared window, which is the primary escape route for a fraction of the thermal radiation emitted near the surface. In other fields of science and technology, such as radio astronomy and remote sensing, the term is used as a hypernym, covering the whole electromagnetic spectrum as in the present article.
Role in Earth's energy budget
Atmospheric windows, especially the optical and infrared, affect the distribution of energy flows and temperatures within Earth's energy balance. The windows are themselves dependent upon clouds, water vapor, trace greenhouse gases, and other components of the atmosphere.
Out of an average 340 watts per square meter (W/m2) of solar irradiance at the top of the atmosphere, about 200 W/m2 reaches the surface via windows, mostly the optical and infrared. Also, of the roughly 340 W/m2 of outgoing energy, comprising reflected shortwave (105 W/m2) plus outgoing longwave radiation (235 W/m2), some 80-100 W/m2 exits to space through the infrared window, depending on cloudiness. About 40 W/m2 of this transmitted amount is emitted by the surface, while most of the remainder comes from lower regions of the atmosphere. In a complementary manner, the infrared window also transmits to the surface a portion of down-welling thermal radiation that is emitted within colder upper regions of the atmosphere.
The "window" concept is useful to provide qualitative insight into some important features of atmospheric radiation transport. Full characterization of the absorption, emission, and scattering coefficients of the atmospheric medium is needed in order to perform a rigorous quantitative analysis (typically done with atmospheric radiative transfer codes). Application of the Beer-Lambert Law may yield sufficient quantitative estimates for wavelengths where the atmosphere is optically thin. Window properties are mostly encoded within the absorption profile.
Other applications
In astronomy
Up until the 1940s, astronomers used optical telescopes to observe distant astronomical objects whose radiation reached the Earth through the optical window. After that time, the development of radio telescopes gave rise to the field of radio astronomy, which is based on the analysis of observations made through the radio window.
In telecommunications
Communications satellites greatly depend on the atmospheric windows for the transmission and reception of signals: the satellite-ground links are established at frequencies that fall within the spectral bandwidth of atmospheric windows. Shortwave radio does the opposite, using frequencies that produce skywaves rather than those that escape through the radio windows.
In remote sensing
Both active (signal emitted by satellite or aircraft, reflection detected by sensor) and passive (reflection of sunlight detected by the sensor) remote sensing techniques work with wavelength ranges contained in the atmospheric windows.
See also
Optical window
Infrared window
Radio window
Water window, for soft x-rays
References
Electromagnetic spectrum
Atmosphere of Earth | Atmospheric window | [
"Physics"
] | 643 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
16,276,631 | https://en.wikipedia.org/wiki/AM1%2A | AM1* is a semiempirical molecular orbital technique in computational chemistry. The method was developed by Timothy Clark and co-workers (in Computer-Chemie-Centrum, Universität Erlangen-Nürnberg) and published first in 2003.
AM1* is an extension of AM1 molecular orbital theory: it uses the AM1 parameters and theory unchanged for the elements H, C, N, O and F. Other elements have been parameterized using an additional set of d-orbitals in the basis set and with two-center core–core parameters, rather than the Gaussian functions used to modify the core–core potential in AM1. Additionally, for transition metal–hydrogen interactions, a distance-dependent term is used to calculate the core–core potentials rather than a constant term.
AM1* parameters are now available for H, C, N, O, F, Al, Si, P, S, Cl, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Br, Zr, Mo, Pd, Ag, I and Au.
AM1* is implemented in VAMP 10.0 and Materials Studio (Accelrys Software Inc.).
References
Semiempirical quantum chemistry methods | AM1* | [
"Chemistry"
] | 263 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Computational chemistry",
"Physical chemistry stubs",
"Semiempirical quantum chemistry methods"
] |
16,276,793 | https://en.wikipedia.org/wiki/Cell-free%20protein%20array | Cell-free protein array technology produces protein microarrays by performing in vitro synthesis of the target proteins from their DNA templates. This method of synthesizing protein microarrays overcomes the many obstacles and challenges faced by traditional methods of protein array production that have prevented widespread adoption of protein microarrays in proteomics. Protein arrays made from this technology can be used for testing protein–protein interactions, as well as protein interactions with other cellular molecules such as DNA and lipids. Other applications include enzymatic inhibition assays and screenings of antibody specificity.
Overview and background
The runaway success of DNA microarrays has generated much enthusiasm for protein microarrays. However, protein microarrays have not quite taken off as expected, even with the necessary tools and know-how from DNA microarrays being in place and ready for adaptation. One major reason is that protein microarrays are much more laborious and technically challenging to construct than DNA microarrays.
The traditional methods of producing protein arrays require the separate in vivo expression of hundreds or thousands of proteins, followed by separate purification and immobilization of the proteins on a solid surface. Cell-free protein array technology attempts to simplify protein microarray construction by bypassing the need to express the proteins in bacteria cells and the subsequent need to purify them. It takes advantage of available cell-free protein synthesis technology which has demonstrated that protein synthesis can occur without an intact cell as long as cell extracts containing the DNA template, transcription and translation raw materials and machinery are provided. Common sources of cell extracts used in cell-free protein array technology include wheat germ, Escherichia coli, and rabbit reticulocyte. Cell extracts from other sources such as hyperthermophiles, hybridomas, Xenopus oocytes, insect, mammalian and human cells have also been used.
The target proteins are synthesized in situ on the protein microarray, directly from the DNA template, thus skipping many of the steps in traditional protein microarray production and their accompanying technical limitations. More importantly, the expression of the proteins can be done in parallel, meaning all the proteins can be expressed together in a single reaction. This ability to multiplex protein expression is a major time-saver in the production process.
Methods of synthesis
In situ methods
In the in situ method, protein synthesis is carried out on a protein array surface that is pre-coated with a protein-capturing reagent or antibody. Once the newly synthesized proteins are released from the ribosome, the tag sequence that is also synthesized at the N- or C-terminus of each nascent protein will be bound by the capture reagent or antibody, thus immobilizing the proteins to form an array. Commonly used tags include polyhistidine (His)6 and glutathione S-transferase (GST).
Various research groups have developed their own methods, each differing in approach, but these can be summarized into three main groups.
Nucleic acid programmable protein array (NAPPA)
NAPPA uses DNA template that has already been immobilized onto the same protein capture surface. The DNA template is biotinylated and is bound to avidin that is pre-coated onto the protein capture surface. Newly synthesized proteins, which are tagged with GST, are then immobilized next to the template DNA by binding to the adjacent polyclonal anti-GST capture antibody that is also pre-coated onto the capture surface. The main drawback of this method is the extra, tedious preparation steps at the beginning of the process: (1) the cloning of cDNAs into an expression-ready vector; and (2) the need to biotinylate the plasmid DNA without interfering with transcription. Moreover, the resulting protein array is not 'pure' because the proteins are co-localized with their DNA templates and capture antibodies.
Protein in situ array (PISA)
Unlike NAPPA, PISA completely bypasses DNA immobilization, as the DNA template is added as a free molecule in the reaction mixture. In 2006, another group refined and miniaturized this method by using a multiple-spotting technique to spot the DNA template and the cell-free transcription and translation mixture on a high-density protein microarray with up to 13,000 spots. This was made possible by an automated system that accurately and sequentially supplies the reagents, so that each transcription/translation reaction occurs in a small, sub-nanolitre droplet.
In situ puromycin-capture
This method is an adaptation of mRNA display technology. PCR DNA is first transcribed to mRNA, and a single-stranded DNA oligonucleotide modified with biotin and puromycin on each end is then hybridized to the 3’-end of the mRNA. The mRNAs are then arrayed on a slide and immobilized by the binding of biotin to streptavidin that is pre-coated on the slide. Cell extract is then dispensed on the slide for in situ translation to take place. When the ribosome reaches the hybridized oligonucleotide, it stalls and incorporates the puromycin molecule to the nascent polypeptide chain, thereby attaching the newly synthesized protein to the microarray via the DNA oligonucleotide. A pure protein array is obtained after the mRNA is digested with RNase. The protein spots generated by this method are very sharply defined and can be produced at a high density.
Nano-well array format
Nanowell array formats are used to express individual proteins in small-volume reaction vessels, or nanowells. This format is sometimes preferred because it avoids the need to immobilize the target protein, which might result in the potential loss of protein activity. The miniaturization of the array also conserves solution and precious compounds that might be used in screening assays. Moreover, the structural properties of individual wells help to prevent cross-contamination among chambers. In 2012, an improved NAPPA was published which used a nanowell array to prevent diffusion. Here the DNA was immobilized in the well together with an anti-GST antibody. Cell-free expression mix was then added and the wells were closed with a lid. The nascent proteins containing a GST tag were bound to the well surface, enabling a NAPPA array with higher density and almost no cross-contamination.
DNA array to protein array (DAPA)
DNA array to protein array (DAPA) is a method developed in 2007 to repeatedly produce protein arrays by 'printing' them from a single DNA template array, on demand. It starts with the spotting and immobilization of an array of DNA templates onto a glass slide. The slide is then assembled face-to-face with a second slide pre-coated with a protein-capturing reagent, and a membrane soaked with cell extract is placed between the two slides for transcription and translation to take place. The newly synthesized His-tagged proteins are then immobilized onto the second slide to form the array. In the original publication, a protein microarray copy could be generated in 18 of 20 replicate printings. Potentially the process can be repeated as often as needed, as long as the DNA is unharmed by DNases, degradation or mechanical abrasion.
Advantages
Many of the advantages of cell-free protein array technology address the limitations of cell-based expression system used in traditional methods of protein microarray production.
Rapid and cost-effective
The method avoids DNA cloning (with the exception of NAPPA) and can quickly convert genetic information into functional proteins by using PCR DNA. The reduced steps in production and the ability to miniaturize the system save on reagent consumption and cut production costs.
Improves protein availability
Many proteins, including antibodies, are difficult to express in host cells due to problems with insolubility, disulfide bonds or host cell toxicity. Cell-free protein array makes many of such proteins available for use in protein microarrays.
Enables long term storage
Unlike DNA, which is a highly stable molecule, proteins are a heterogeneous class of molecules with differing stability and physiochemical properties. Maintaining the proteins' folding and function in an immobilized state over long periods of storage is a major challenge for protein microarrays. Cell-free methods provide the option of quickly obtaining protein microarrays on demand, thus eliminating any problems associated with long-term storage.
Flexible
The method is amenable to a range of different templates: PCR products, plasmids and mRNA. Additional components can be included during synthesis to adjust the environment for protein folding, disulfide bond formation, modification or protein activity.
Limitation
Post-translational modification in proteins generated by cell-free protein synthesis is still limited compared to the traditional methods, and may not be as biologically relevant.
Applications
Protein interactions: screening for protein–protein interactions and for protein interactions with other molecules such as metabolites, lipids, DNA and small molecules; enzyme inhibition assays: high-throughput drug-candidate screening and the discovery of novel enzymes for use in biotechnology; screening of antibody specificity.
References
External links
NAPPA
PISA and DAPA
Protein arrays resource page
Molecular biology
Microarrays | Cell-free protein array | [
"Chemistry",
"Materials_science",
"Biology"
] | 1,919 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry"
] |
16,277,070 | https://en.wikipedia.org/wiki/Arformoterol | Arformoterol, sold under the brand name Brovana among others, is a medication used for the treatment of chronic obstructive pulmonary disease (COPD).
It is a long-acting β2 adrenoreceptor agonist (LABA) and it is the active (R,R)-(−)-enantiomer of formoterol. It was approved for medical use in the United States in October 2006. It is available as a generic medication.
Medical uses
Arformoterol is indicated for the maintenance treatment of bronchoconstriction in people with chronic obstructive pulmonary disease (COPD).
References
External links
Beta2-adrenergic agonists
Drugs acting on the respiratory system
Enantiopure drugs
4-Methoxyphenyl compounds
Phenols
Substituted amphetamines
Formamides
Phenylethanolamines | Arformoterol | [
"Chemistry"
] | 189 | [
"Stereochemistry",
"Enantiopure drugs"
] |
16,277,372 | https://en.wikipedia.org/wiki/Name%20calling | Name-calling is a form of argument in which insulting or demeaning labels are directed at an individual or group. This phenomenon is studied by a variety of academic disciplines such as anthropology, child psychology, and political science. It is also studied in rhetoric and a variety of other disciplines.
In politics and public opinion
Politicians sometimes resort to name-calling during political campaigns or public events with the intention of gaining advantage over, or defending themselves from, an opponent or critic. Often such name-calling takes the form of labelling an opponent as an unreliable and untrustworthy source, such as use of the term "flip-flopper".
Common misconceptions
Gratuitous verbal abuse or "name-calling" is not on its own an example of the abusive argumentum ad hominem logical fallacy. The fallacy occurs only if personal attacks are employed to devalue a speaker's argument by attacking the speaker; personal insults in the middle of an otherwise sound argument are not fallacious ad hominem attacks.
References
Harassment and bullying
Informal fallacies
Names
Pejorative terms | Name calling | [
"Biology"
] | 224 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
16,277,487 | https://en.wikipedia.org/wiki/Canter%20rhythm | Canter time, canter timing or canter rhythm is a two-beat regular rhythmic pattern of a musical instrument or in dance steps within time music. The term is borrowed from the canter horse gait, which sounds three hoof beats followed by a pause, i.e., 3 accents in time.
In waltz dances it may mark the 1st and the 4th eighths of the measure, producing a two-beat overlay over the 3/4 time. In other words, when a measure is cued as "one, two-and three", the canter rhythm marks "one" and "and". This rhythm is the basis of the Canter Waltz. In modern ballroom dancing, an example is the Canter Pivot in the Viennese Waltz.
In Vals (a style of Tango), the canter rhythm is also known as medio galope (which actually means "canter" in Spanish) and may accent beats 1 and 2 of the measure.
The Canter Waltz or Canter is a dance with waltz music characterized by the canter rhythm of steps. A 1922 dance manual describes it as follows: "The Canter Waltz has been revived and presents an opportunity to show the use of "direction" in the straight backward and forward series of walking steps. This dance is walking to waltz time but walking most quietly and gracefully. There are two steps to the three counts of music. Step forward on 1 and make the second step between the 2 and 3 count. Give the first step the accent, although the steps are almost of the same value. It may, perhaps, help the student practicing alone with the aid of the victrola to count "one-and two-and three-and", making the second step on the second "and", until able to do the step smoothly."
See also
Duple metre
Triple metre
Polyrhythm
Syncopation
References
Rhythm and meter
Waltz, Canter
Waltz | Canter rhythm | [
"Physics"
] | 395 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
16,278,208 | https://en.wikipedia.org/wiki/Polyhexanide | Polyhexanide (polyhexamethylene biguanide, PHMB) is a polymer used as a disinfectant and antiseptic. In dermatological use, it is spelled polihexanide (INN) and sold under various brand names. PHMB has been shown to be effective against Pseudomonas aeruginosa, Staphylococcus aureus, Escherichia coli, Candida albicans, Aspergillus brasiliensis, enterococci, and Klebsiella pneumoniae. Polihexanide, sold under the brand name Akantior is a medication used for the treatment of Acanthamoeba keratitis.
Products containing PHMB are used for inter-operative irrigation, pre- and post-surgery skin and mucous membrane disinfection, post-operative dressings, surgical and non-surgical wound dressings, surgical bath/hydrotherapy, chronic wounds like diabetic foot ulcer and burn wound management, routine antisepsis during minor incisions, catheterization, first aid, surface disinfection, and linen disinfection. PHMB eye drops have been used as a treatment for eyes affected by Acanthamoeba keratitis.
It is sold as a swimming pool and spa disinfectant in place of chlorine or bromine based products under the name Baquacil.
PHMB is also used as an ingredient in some contact lens cleaning products, cosmetics, personal deodorants and some veterinary products. It is also used to treat clothing (Purista), purportedly to prevent the development of unpleasant odors.
The PHMB hydrochloride salt (solution) is used in the majority of formulations.
Medical uses
Polihexanide is indicated for the treatment of Acanthamoeba keratitis in people aged 12 years of age and older.
Society and culture
Legal status
In May 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Akantior, intended for the treatment of Acanthamoeba keratitis, a severe, progressive and sight threatening corneal infection characterized by intense pain and photophobia. Acanthamoeba keratitis is a rare disease primarily affecting contact lens wearers. The applicant for this medicinal product is SIFI SPA. Polihexanide was approved for medical use in the European Union in August 2024.
Safety
In 2011, polyhexamethylene biguanide was classified as a category 2 carcinogen by the European Chemicals Agency, but it is still allowed in cosmetics in small quantities if exposure by inhalation is impossible.
Name controversy
In some sources, particularly when listed as a cosmetics ingredient (INCI), the polymer is wrongly named as polyaminopropyl biguanide.
References
Antiseptics and disinfectants
Biguanides
Polymers | Polyhexanide | [
"Chemistry",
"Materials_science"
] | 618 | [
"Polymers",
"Polymer chemistry"
] |
7,359,066 | https://en.wikipedia.org/wiki/BPO%20security | Information security has emerged as a significant concern for banks, mobile phone companies and other businesses that use call centers or business process outsourcing, or BPO. There have been instances of theft of personal data reported from call centers.
Britain's Financial Services Authority examined standards in India in April 2005, and the Banking Code Standards Board audited eight India-based call centres in 2006 that were handling more than a million calls per month from the UK. The examinations did not extend to Africa-based call centres staffed by workers of Indian origin.
The BCSB report stated that "Customer data is subject to the same level of security as in the UK. High risk and more complex processes are subject to higher levels of scrutiny than similar activities onshore."
India's NASSCOM has said that it takes breaches of security extremely seriously and will assist the police in their investigations.
Common countermeasures
There are four identifiable types of illicit activity concerning fraud emanating from call centres:
Crooks who pretend to be legitimate call centres;
Hackers who gain access to call centre information through illegal means;
Call centre agents who illegally misuse the information they have access to in call centres;
Third- and fourth-party software implementations that allow "back doors" to be accessed remotely, sometimes under the guise of security.
While items 1 and 2 are mostly subject to police action, call centres can use internal procedures to minimise risk. Such mitigation measures include but are not limited to:
Creating a paperless environment, preventing employees from writing down and removing information by ensuring that all work processes are done on the computer, without having to record anything on forms or notes.
Prohibiting the use of cellphones and cameras on the floor.
Prohibiting paper, pens and digital recording devices from being brought onto the floor.
Preventing internet access for employees on the floor.
Limiting functionality and access of personal computers or terminals used by call center agents (for example, disabling USB ports). Companies may also use data loss prevention software to block attempts to download, copy, or transmit sensitive electronic data.
Every call centre is unique, as are each organization's requirements, but in nearly all cases a biometric-based multi-modal platform can markedly improve fraud prevention. Bringing multi-modal biometrics to customers, agents and the IVR can reduce fraud losses, shorten average handle times, contain more calls within the IVR, and improve the customer experience from any phone.
See also
Globalization
Business process outsourcing in India
References
External links
BPO: In India data security cost skyrockets
India acts on call centre fraud
Offshore Outsourcing: Big Savings, Big Risk
Banking
Business process
Data security | BPO security | [
"Engineering"
] | 555 | [
"Cybersecurity engineering",
"Data security"
] |
7,359,182 | https://en.wikipedia.org/wiki/World%20Wetlands%20Day | World Wetlands Day is an environmentally related celebration which dates back to the year 1971 when several environmentalists gathered to reaffirm protection and love for wetlands, which are water ecosystems containing plant life and other organisms that bring ecological health in abundance to not only water bodies but environments as a whole. The World Wetlands Secretary Department is originally from Gland, Switzerland. The adoption of the Ramsar convention in the Iranian city of Ramsar occurred on February 2, 1971.
World Wetlands Day is celebrated on the second day of February every year, though it was not celebrated until 1997. The day highlights the influence and positive effects that wetlands have had on the world and brings communities together for the benefit of nature. It also raises global awareness of wetlands' significant role for people and for the planet. Community protectors and environmental enthusiasts come together on this day to recognise what wetlands have done not only for humans, but for all sorts of organisms in the world.
Over time, human construction has led to various ecological problems affecting wetlands. Overpopulation and construction have led to a decrease in environmental conservation. Many wetlands are being lost, and ecologists argue that humans should recognise the dilemma before a natural filter and conserver of the world is lost.
Partnership
Since 1998, the Ramsar Secretariat has partnered with the Danone Group Evian Fund for Water (based out of Paris and founded in Barcelona, Spain) for financial support. For the Ramsar Secretariat, also known as the Ramsar Convention on Wetlands of International Importance Especially as Waterfowl Habitat, this financial support has produced a variety of outreach materials, including logos, posters, factsheets, handouts and guide documents to support countries' activities organized to celebrate WWD. These materials are available for free download on the World Wetlands Day website in the three languages of the convention: English, French, and Spanish. All the materials are also available as design files so that event organizers can customize and adapt them to their local languages and contexts. A few print copies are available to countries upon request to the Secretariat.
World Wetlands Day Youth Photo Contest
Starting in 2015, a month-long Wetlands Youth Photo Contest beginning on 2 February was introduced as part of a new approach to target young people and get them involved in WWD. People aged 15 through 24 can take a picture of a wetland and upload it to the World Wetlands Day website between the months of February and March.
Since 1997 the Ramsar website has posted reports from about 100 countries of their WWD activities. In 2016 a map of events was introduced to help countries promote their activities and to facilitate reporting after WWD.
World Wetlands Day themes
Each year a theme is selected to focus attention and help raise public awareness about the value of wetlands. Countries organize a variety of events to raise awareness such as; lectures, seminars, nature walks, children's art contests, sampan races, community clean-up days, radio and television interviews, letters to newspapers, to the launch of new wetland policies, new Ramsar Sites and new programs at the national level.
The theme for World Wetlands Day in 2024 is "Wetlands and human wellbeing"; the 2023 theme was "Wetlands Restoration", focusing on the restoration of degraded wetlands.
External links
World Wetlands Day official website
Ramsar Convention official website
National CleanUp Day
References
Environmental awareness days
February observances
International observances
Wetlands | World Wetlands Day | [
"Environmental_science"
] | 697 | [
"Hydrology",
"Wetlands"
] |
7,359,494 | https://en.wikipedia.org/wiki/Laser%20accelerometer | A laser accelerometer is an accelerometer that uses a laser to measure changes in velocity/direction.
Mechanism
It employs a frame with three orthogonal input axes and multiple proof masses. Each proof mass has a predetermined blanking surface. A flexible beam supports each proof mass. The flexible beam permits movement of the proof mass on its axis.
A laser light source provides a light ray. The laser source has a transverse field characteristic with a central null intensity region. A mirror transmits a beam of light to a detector. The detector is positioned to be centered on the light ray and responds to the light's intensity to provide an intensity signal. The signal's magnitude is related to the intensity of the light ray.
The proof mass blanking surface is centrally positioned within and normal to the light ray null intensity region to provide increased blanking of the light ray in response to transverse movement of the mass on the input axis.
In response to acceleration in the direction of the input axis, the proof mass deflects the beam and moves the blanking surface in a direction transverse to the light ray to partially blank the light beam. A control responds to the intensity signal to apply a restoring force to restore the proof mass to a central position and provides an output signal proportional to the restoring force.
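The rebalance logic described above can be sketched as a simple control loop. This is a schematic illustration, not the instrument's actual electronics: the gains, the setpoint and the intensity samples are assumed values.

```python
def rebalance_loop(intensity_samples, k_p=0.8, k_i=0.1, setpoint=1.0):
    """Toy force-rebalance loop. Blanking by the proof mass lowers the
    detected intensity; the deviation from the centered setpoint drives
    a restoring force, which doubles as the output signal proportional
    to the measured acceleration."""
    integral = 0.0
    forces = []
    for intensity in intensity_samples:
        error = setpoint - intensity        # more blanking -> larger error
        integral += error
        forces.append(k_p * error + k_i * integral)
    return forces

# Example: the beam is progressively blanked, then re-centered.
print(rebalance_loop([1.0, 0.9, 0.8, 0.9, 1.0]))
```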
Applications
Accelerometers are added to many devices, including (smart) watches, phones and vehicles of all kinds. Accelerometers oriented vertically function as gravimeters, useful for mining. Other applications include medical diagnostics and satellite measurements for climate change studies.
Lasers
Basic lasers operate with a frequency range (line width) of some 500 MHz. The range is widened by small temperature changes and vibrations, and by imperfections in the laser cavity. The line width of a specialised scientific laser approaches 1 mHz.
History
2021
An accelerometer was announced that used infrared light to measure the change in distance between two micromirrors in a Fabry–Perot cavity. The proof mass is a single silicon crystal with a mass of 10–20 mg, suspended from the first mirror using flexible 1.5 μm-thick silicon nitride (Si3N4) beams. The suspension allows the proof mass to move freely, with nearly ideal translational motion. The second (concave) mirror acts as the fixed reference point. Light of a certain frequency resonates – bounces back and forth – between the two mirrors in the cavity, increasing its intensity, while other frequencies are discarded. Under acceleration, the proof mass displacement relative to the concave mirror changes the intensity of the reflected light. The change in intensity is measured by a single-frequency laser that matches the cavity's resonant frequency. The device can sense displacements under 1 femtometre (10−15 m) and detect accelerations as low as 3.2 × 10−8 g (g being the acceleration due to Earth's gravity) with uncertainty under 1%.
An accelerometer was announced with a line width of 20 Hz. The SolsTiS accelerometer has a titanium-doped sapphire cavity that is shaped in a way to encourage a narrow line width and to rapidly dissipate waste heat. The device exploits the wave qualities of atoms. The laser is divided into multiple beams. One beam strikes a diffuse rubidium gas refrigerated to around 10−7 K. This temperature is achieved by using Doppler cooling with six beams to slow and cool the atoms. The atoms split into two quantum waves. A second pulse reverses the split, while a third allows them to interfere with each other, creating an interference pattern that reflects the acceleration the waves underwent while separated. Another laser pulse detects the interference patterns in the various atoms, which reflect the amount of acceleration. Military-grade laser accelerometers drift (accumulate errors) at the rate of kilometres per day. The new devices reduce drift to 2 km per month.
See also
List of laser articles
References
External links
Laser applications
Gravity
Accelerometers
Sensors | Laser accelerometer | [
"Physics",
"Technology",
"Engineering"
] | 821 | [
"Accelerometers",
"Physical quantities",
"Acceleration",
"Measuring instruments",
"Sensors"
] |
7,359,723 | https://en.wikipedia.org/wiki/Australian%20Aboriginal%20enumeration | The Australian Aboriginal counting system was used together with message sticks sent to neighbouring clans to alert them of, or invite them to, corroborees, set-fights, and ball games. Numbers could clarify the day the meeting was to be held (in a number of "moons") and where (the number of camps' distance away). The messenger would have a message "in his mouth" to go along with the message stick.
A common misconception among non-Aboriginals is that Aboriginals did not have a way to count beyond two or three. However, Alfred Howitt, who studied the peoples of southeastern Australia, disproved this in the late nineteenth century, although the myth continues in circulation today.
The system in the table below is that used by the Wotjobaluk of the Wimmera (Howitt used this tribal name for the language called Wergaia in the AIATSIS language map). Howitt wrote that it was common among nearly all peoples he encountered in the southeast: "Its occurrence in these tribes suggests that it must have been general over a considerable part of Victoria". As can be seen in the following tables, names for numbers were based on body parts, which were counted starting from the little finger. In his manuscripts, Howitt suggests counting commenced on the left hand.
Wotjobaluk counting system
{| class="wikitable"
!Aboriginal name
!literal Translation
!Translation
!Number
|-
|Giti mŭnya
|little hand
|little finger
|1
|-
|Gaiŭp mŭnya
|from gaiŭp = one, mŭnya = hand
|the Ring finger
|2
|-
|Marŭng mŭnya
|from marung = the desert pine (Callitris verrucosa). (i.e., the middle finger being longer than the others, as the desert pine is taller than other trees in Wotjo country.)
|the middle finger
|3
|-
|Yolop-yolop mŭnya
|from yolop = to point or aim
|index finger
|4
|-
|Bap mŭnya
|from Bap = mother
|the thumb
|5
|-
|Dart gŭr
|from dart = a hollow, and gur = the forearm
|the inside of the wrist
|6
|-
|Boibŭn
|a small swelling (i.e., the swelling of the flexor muscles of the forearm)
|the forearm
|7
|-
|Bun-darti
|a hollow, referring to the hollow of the inside of the elbow joint
|inside of elbow
|8
|-
|Gengen dartchŭk
|from gengen = to tie, and dartchuk = the upper arm. This name is given also to the armlet of possum pelt which is worn around the upper arm.
|the biceps
|9
|-
|Borporŭng
|
|the point of the shoulder
|10
|-
|Jarak-gourn
|from jarak = reed, and gourn = neck, (i.e. is, the place where the reed necklace is worn.)
|throat
|11
|-
|Nerŭp wrembŭl
|from nerŭp = the butt or base of anything, and wrembŭl= ear
|earlobe
|12
|-
|Wŭrt wrembŭl
|from wŭrt = above and also behind, and wrembŭl = ear
|that part of the head just above and behind the ear
|13
|-
|Doke doke
|from doka = to move
|
|14
|-
|Det det
|hard
|crown of the head
|15
|}
A similar system, but with one more place, was described by Howitt for the Wurundjeri, speakers of the Woiwurrung language, based on information given to him by the elder William Barak. He makes it clear that counting continues once it has reached "the top of the head": "From this place the count follows the equivalents on the other side."
Other languages
See also
Wurundjeri
Alfred Howitt
References
Bibliography
Howitt, A.W. 1904. The native tribes of south-east Australia. London: McMillan and Co. Reprinted. 1996. Canberra: Aboriginal Studies Press. pp. 696–699 describe the system in Wotjobaluk, while p700-703 describe the Wurundjeri system.
Australian Aboriginal words and phrases
Numerals | Australian Aboriginal enumeration | [
"Mathematics"
] | 917 | [
"Numeral systems",
"Numerals"
] |
7,359,952 | https://en.wikipedia.org/wiki/Plateau%E2%80%93Rayleigh%20instability | In fluid dynamics, the Plateau–Rayleigh instability, often just called the Rayleigh instability, explains why and how a falling stream of fluid breaks up into smaller packets with the same total volume but less surface area per droplet. It is related to the Rayleigh–Taylor instability and is part of a greater branch of fluid dynamics concerned with fluid thread breakup. This fluid instability is exploited in the design of a particular type of ink jet technology whereby a jet of liquid is perturbed into a steady stream of droplets.
The driving force of the Plateau–Rayleigh instability is that liquids, by virtue of their surface tensions, tend to minimize their surface area. A considerable amount of work has been done recently on the final pinching profile by attacking it with self-similar solutions.
History
The Plateau–Rayleigh instability is named for Joseph Plateau and Lord Rayleigh. In 1873, Plateau found experimentally that a vertically falling stream of water will break up into drops if its length is greater than about 3.13 to 3.18 times its diameter, which he noted is close to π. Later, Rayleigh showed theoretically that a vertically falling column of non-viscous liquid with a circular cross-section should break up into drops if its length exceeded its circumference, which is indeed π times its diameter.
Theory
The explanation of this instability begins with the existence of tiny perturbations in the stream. These are always present, no matter how smooth the stream is (for example, in a liquid jet nozzle, there is vibration of the liquid stream due to friction between the nozzle and the stream). If the perturbations are resolved into sinusoidal components, we find that some components grow with time while others decay. Among those that grow with time, some grow at faster rates than others. Whether a component decays or grows, and how fast it grows, is entirely a function of its wave number (a measure of how many peaks and troughs per unit length) and the radius of the original cylindrical stream. The diagram to the right shows an exaggeration of a single component.
By assuming that all possible components exist initially in roughly equal (but minuscule) amplitudes, the size of the final drops can be predicted by determining which wave number grows the fastest. As time progresses, it is the component with the maximal growth rate that will come to dominate and will eventually be the one that pinches the stream into drops.
Although a thorough understanding of how this happens requires a mathematical development (see references), the diagram can provide a conceptual understanding. Observe the two bands shown girdling the stream—one at a peak and the other at a trough of the wave. At the trough, the radius of the stream is smaller, hence according to the Young–Laplace equation the pressure due to surface tension is increased. Likewise at the peak the radius of the stream is greater and, by the same reasoning, pressure due to surface tension is reduced. If this were the only effect, we would expect that the higher pressure in the trough would squeeze liquid into the lower-pressure region in the peak. In this way we see how the wave grows in amplitude over time.
But the Young–Laplace equation is influenced by two separate radius components. In this case one is the radius, already discussed, of the stream itself. The other is the radius of curvature of the wave itself. The fitted arcs in the diagram show these at a peak and at a trough. Observe that the radius of curvature at the trough is, in fact, negative, meaning that, according to Young–Laplace, it actually decreases the pressure in the trough. Likewise the radius of curvature at the peak is positive and increases the pressure in that region. The effect of these components is opposite the effects of the radius of the stream itself.
The two effects, in general, do not exactly cancel. One of them will have greater magnitude than the other, depending upon wave number and the initial radius of the stream. When the wave number is such that the radius of curvature of the wave dominates that of the radius of the stream, such components will decay over time. When the effect of the radius of the stream dominates that of the curvature of the wave, such components grow exponentially with time.
When all the maths is done, it is found that unstable components (that is, components that grow over time) are only those where the product of the wave number with the initial radius is less than unity (kR0 < 1). The component that grows the fastest is the one whose wave number satisfies the equation kR0 ≃ 0.697, corresponding to a wavelength of roughly nine times the stream's radius.
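Rayleigh's result translates directly into droplet-size estimates. The sketch below computes the dominant wavelength from kR0 ≃ 0.697 and then a back-of-envelope drop radius by volume conservation (one wavelength of cylinder per drop); the 1 mm stream radius is an arbitrary example value.

```python
import math

K_R0 = 0.697  # fastest-growing mode: wave number times stream radius

def fastest_wavelength(radius: float) -> float:
    """Wavelength of the dominant perturbation: 2*pi*R0/0.697 ~ 9.01*R0."""
    return 2 * math.pi * radius / K_R0

def drop_radius(radius: float) -> float:
    """Volume conservation: (4/3)*pi*r^3 = pi*R0^2*lambda per drop."""
    lam = fastest_wavelength(radius)
    return (3 * radius**2 * lam / 4) ** (1 / 3)

r0 = 1e-3                      # 1 mm stream radius (illustrative)
print(fastest_wavelength(r0))  # ~9.0 mm between pinch points
print(drop_radius(r0))         # ~1.9 mm resulting drop radius
```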
Examples
Water dripping from a faucet/tap
A special case of this is the formation of small droplets when water is dripping from a faucet/tap. When a segment of water begins to separate from the faucet, a neck is formed and then stretched. If the diameter of the faucet is big enough, the neck does not get sucked back in, and it undergoes a Plateau–Rayleigh instability and collapses into a small droplet.
Urination
Another everyday example of Plateau–Rayleigh instability occurs in urination, particularly standing male urination. The stream of urine experiences instability after about 15 cm (6 inches), breaking into droplets, which causes significant splash-back on impacting a surface. By contrast, if the stream contacts a surface while still in a stable state – such as by urinating directly against a urinal or wall – splash-back is almost completely eliminated.
Inkjet printing
Continuous inkjet printers (as opposed to drop-on-demand inkjet printers) generate a cylindrical stream of ink that breaks up into droplets prior to staining printer paper. By adjusting the size of the droplets using tunable temperature or pressure perturbations, and by imparting electrical charge to the ink, inkjet printers steer the stream of droplets using electrostatics to form specific patterns on printer paper.
Notes
External links
Plateau–Rayleigh Instability – a 3D lattice kinetic Monte Carlo simulation
Savart–Plateau–Rayleigh instability of a water column – Adaptive numerical simulation
An MIT lecture on falling fluid jets, including the Plateau–Rayleigh instability (PDF; quite mathematical)
Fluid dynamics
Fluid dynamic instabilities
Articles containing video clips | Plateau–Rayleigh instability | [
"Chemistry",
"Engineering"
] | 1,266 | [
"Piping",
"Chemical engineering",
"Fluid dynamic instabilities",
"Fluid dynamics"
] |
7,360,123 | https://en.wikipedia.org/wiki/Adaptive%20response | The adaptive response is a DNA damage response pathway prevalent across bacteria that protects DNA from damage by external agents or by errors during replication. It is initiated specifically against alkylation, particularly methylation, of guanine or thymine nucleotides or phosphate groups on the sugar-phosphate backbone of DNA. Under sustained exposure to low-level treatment with alkylating mutagens, bacteria can adapt to the presence of the mutagen, rendering subsequent treatment with high doses of the same agent less effective.
Function
Environmental influence plays a crucial role in the developmental plasticity of genotypes through the introduction of DNA-damaging agents. The defense mechanism that has evolved to protect an organism's genotype against such damage and prevent multiple phenotypes is known as the adaptive response. Because the adaptive response can prevent the emergence of different phenotypes, it allows organisms to minimize the effects of stressors and eventually develop resistance to them. The effects of various chemical, biological, and physical genotoxic agents jeopardize the genomic integrity of all organisms; however, many evolutionary defense mechanisms have developed so that stressors stimulate the adaptive response, reducing the stress to a manageable level and limiting genetic damage.
Many of these defense mechanisms have contributed to the nonspecific adaptive response by "conditioning" the affected organisms with small amounts of particular stressors, stimulating cellular conformation changes and increasing resistance when the organism is later exposed to higher concentrations of that stressor. For example, the decomposition of water produces highly reactive hydroxyl free radicals that can damage DNA, thereby stimulating DNA repair mechanisms. This up-regulation of DNA repair is part of the adaptive response, because the organism is being conditioned to protect itself against these stressors. Reactive oxygen species (ROS) are very damaging to DNA and highly associated with the adaptive response. When free radicals attack the important biomolecules that make up organisms, harmful molecular intermediates react with and damage DNA, leading to base damage or breaks in the double-stranded DNA. The adaptive response helps prevent such damage and maintain the integrity of the genome.
The E. coli Ada response
This response was first identified in E. coli. The E. coli adaptive response comprises four genes: ada, alkA, alkB, and aidB, each acting on specific residues, all regulated by the E. coli Ada protein.
The E. coli adaptive response is mediated by the Ada protein, which covalently transfers methylation damage from DNA to one of its two active methyl acceptor cysteine residues: Cys38 and Cys321. The Ada protein can repair damage by transferring methyl groups from O6-methylguanine or O4-methylthymine to Cys321 and also from methylphosphotriesters to Cys38 residue through an irreversible process. It can also convert the protein from a weak to a strong activator of transcription, increasing alkylation repair activity.
Ada
The ada gene product has closely linked regulatory and repair activities. For the regulation to occur, the Ada protein must be activated, which is a consequence of its DNA repair activity.
alkA
The alkA gene product is a glycosylase that can repair a variety of lesions, removing a base from the sugar-phosphate backbone, producing an abasic site.
aidB
The aidB product is a flavin-containing protein.
alkB
alkB is an iron-dependent oxidoreductase, and it is associated with DNA repair because this gene is able to repair lesions in phage DNA prior to infection. It has also been demonstrated that alkB is required for reactivation of single-stranded phage treated with the methylating agent methyl methanesulfonate (MMS); since there are no lesions to be removed in that case, it has been suggested that AlkB is involved in replication of damaged template DNA. Also, the fact that alkB can confer resistance to a methylating agent suggests that it functions by itself.
Mechanism
Although little is known about the mechanism of the adaptive response, it is believed that changes in gene transcription and the activation of cellular defenses are involved. It has recently been suggested that specific mechanistic pathways of the adaptive response can activate the important tumor suppressor protein p53. A key experiment revealing the underlying mechanisms involved treating Oedogonium, Chlamydomonas and Closterium cells with protein synthesis inhibitors. This experiment resulted in DNA-binding proteins being synthesized in the cells conditioned with the stressor. Furthermore, the reverse adaptive response suggests that a high conditioning dose followed by a second low dose produces roughly the same magnitude of response. This could suggest that the mechanisms work by modulating, not preventing, the cellular response to impending damage. The adaptive response is not instantaneous and takes several hours to develop; after development, however, it can last for months, provided that the stressor exposure is limited and does not overwhelm the cell. The response is dose- and time-dependent, with a maximum occurring 4 hours after an initial conditioning dose of 100 cGy (centigray) of radiation.
References
DNA repair
Gene expression | Adaptive response | [
"Chemistry",
"Biology"
] | 1,085 | [
"DNA repair",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
7,360,695 | https://en.wikipedia.org/wiki/Data%20synchronization | Data synchronization is the process of establishing consistency between source and target data stores, and the continuous harmonization of the data over time. It is fundamental to a wide variety of applications, including file synchronization and mobile device synchronization.
Data synchronization can also be useful in encryption for synchronizing public key servers.
Data synchronization is needed to update and keep multiple copies of a set of data coherent with one another, or to maintain data integrity. For example, database replication is used to keep multiple copies of data synchronized with database servers that store data in different locations.
Examples
Examples include:
File synchronization, such as syncing a hand-held MP3 player to a desktop computer;
Cluster file systems, which are file systems that maintain data or indexes in a coherent fashion across a whole computing cluster;
Cache coherency, maintaining multiple copies of data in sync across multiple caches;
RAID, where data is written in a redundant fashion across multiple disks, so that the loss of any one disk does not lead to a loss of data;
Database replication, where copies of data on a database are kept in sync, despite possible large geographical separation;
Journaling, a technique used by many modern file systems to make sure that file metadata are updated on a disk in a coherent, consistent manner.
Challenges
Some of the challenges a user may face in data synchronization include:
data formats complexity;
real-timeliness;
data security;
data quality;
performance.
Data formats complexity
Data formats tend to grow more complex with time as the organization grows and evolves. This results not only in the need to build simple interfaces between the two applications (source and target), but also in a need to transform the data while passing it to the target application. ETL (extract, transform, load) tools can be helpful at this stage for managing data format complexities.
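As a concrete illustration of such a transformation step, the sketch below maps a source record onto a target schema; every field name here is hypothetical, chosen only to show the shape of an ETL-style mapping.

```python
def transform(source_record: dict) -> dict:
    """Map one source-format record onto the target schema
    (illustrative field names only)."""
    return {
        "customer_id": int(source_record["CustID"]),
        "full_name": f'{source_record["FirstName"].strip()} '
                     f'{source_record["LastName"].strip()}',
        "updated_at": source_record["ModifiedDate"],  # may need normalization
    }

print(transform({"CustID": "42", "FirstName": " Ada ",
                 "LastName": "Lovelace", "ModifiedDate": "2024-01-02"}))
```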
Real-timeliness
In real-time systems, customers want to see the current status of their order in an e-shop, the current status of a parcel delivery (real-time parcel tracking), the current balance on their account, and so on. This shows the need for a real-time system that is updated continuously, which also enables a smooth manufacturing process in real time, e.g., ordering material when the enterprise is running out of stock, or synchronizing customer orders with the manufacturing process. In real life there are many examples where real-time processing provides a successful competitive advantage.
Data security
There are no fixed rules and policies to enforce data security; they may vary depending on the system being used. Even though security is maintained correctly in the source system that captures the data, security and information-access privileges must be enforced on the target systems as well to prevent any potential misuse of the information. This is a serious issue, particularly when it comes to handling secret, confidential and personal information. Because of the sensitivity and confidentiality, the data transfer and all in-between information must be encrypted.
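As a minimal illustration of encrypting data before transfer, the following uses the Fernet recipe from the third-party Python cryptography package; key distribution and management are deliberately omitted, and the payload is a placeholder.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, held in a managed key store
cipher = Fernet(key)

record = b'{"order_id": 42, "status": "shipped"}'  # placeholder payload
token = cipher.encrypt(record)   # this ciphertext is what crosses the wire
assert cipher.decrypt(token) == record
```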
Data quality
Data quality is another serious constraint. For better management and to maintain good quality of data, the common practice is to store the data at one location and share it with different people and different systems and/or applications from different locations. This helps prevent inconsistencies in the data.
Performance
There are five different phases involved in the data synchronization process:
data extraction from the source (or master, or main) system;
data transfer;
data transformation;
data load to the target system;
data update.
Each of these steps is critical. In case of large amounts of data, the synchronization process needs to be carefully planned and executed to avoid any negative impact on performance.
File-based solutions
There are tools available for file synchronization, version control (CVS, Subversion, etc.), distributed filesystems (Coda, etc.), and mirroring (rsync, etc.), in that all these attempt to keep sets of files synchronized. However, only version control and file synchronization tools can deal with modifications to more than one copy of the files.
File synchronization is commonly used for home backups on external hard drives or updating for transport on USB flash drives. The automatic process prevents copying already identical files, thus can save considerable time relative to a manual copy, also being faster and less error prone.
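The skip-identical-files optimisation mentioned above can be expressed in a few lines. This sketch hashes whole files in memory, which is fine for an illustration but not for very large files.

```python
import hashlib
import shutil
from pathlib import Path

def sync_file(src: Path, dst: Path) -> bool:
    """Copy src over dst only if the contents differ.
    Returns True if a copy actually happened."""
    def digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    if dst.exists() and digest(src) == digest(dst):
        return False           # already identical: skip the copy
    shutil.copy2(src, dst)     # copy data and metadata
    return True
```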
Version control tools are intended to deal with situations where more than one user attempts to simultaneously modify the same file, while file synchronizers are optimized for situations where only one copy of the file will be edited at a time. For this reason, although version control tools can be used for file synchronization, dedicated programs require less overhead.
Distributed filesystems may also be seen as ensuring multiple versions of a file are synchronized. This normally requires that the devices storing the files are always connected, but some distributed file systems like Coda allow disconnected operation followed by reconciliation. The merging facilities of a distributed file system are typically more limited than those of a version control system because most file systems do not keep a version graph.
Mirror (computing): A mirror is an exact copy of a data set. On the Internet, a mirror site is an exact copy of another Internet site. Mirror sites are most commonly used to provide multiple sources of the same information, and are of particular value as a way of providing reliable access to large downloads.
Theoretical models
Several theoretical models of data synchronization exist in the research literature, and the problem is also related to the problem of Slepian–Wolf coding in information theory. The models are classified based on how they consider the data to be synchronized.
Unordered data
The problem of synchronizing unordered data (also known as the set reconciliation problem) is modeled as an attempt to compute the symmetric difference between two remote sets A and B of b-bit numbers. Some solutions to this problem are typified by:
Wholesale transfer In this case all data is transferred to one host for a local comparison.
Timestamp synchronization In this case all changes to the data are marked with timestamps. Synchronization proceeds by transferring all data with a timestamp later than the previous synchronization.
Mathematical synchronization In this case data are treated as mathematical objects and synchronization corresponds to a mathematical process.
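The wholesale-transfer baseline above amounts to computing a symmetric difference locally. A minimal sketch, deliberately ignoring the communication-efficient protocols from the research literature:

```python
def reconcile(local: set, remote: set) -> tuple:
    """Split the symmetric difference by origin: what each host must
    send so that both end up with the union of the two sets."""
    return local - remote, remote - local

host_a = {1, 2, 3, 5}
host_b = {2, 3, 4}
to_b, to_a = reconcile(host_a, host_b)
print(to_b, to_a)  # {1, 5} {4} -- together, the symmetric difference
```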
Ordered data
In this case, two remote strings A and B need to be reconciled. Typically, it is assumed that these strings differ by up to a fixed number of edits (i.e. character insertions, deletions, or modifications). Then data synchronization is the process of reducing the edit distance between A and B, up to the ideal distance of zero. This is applied in all filesystem-based synchronizations (where the data is ordered). Many practical applications of this are discussed or referenced above.
It is sometimes possible to transform the problem to one of unordered data through a process known as shingling (splitting the strings into shingles).
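A sketch of shingling, assuming character-level shingles of a fixed length k:

```python
def shingles(s: str, k: int = 3) -> set:
    """All overlapping k-character substrings of s. Two similar strings
    share most of their shingles, so ordered data can be compared with
    unordered set-reconciliation techniques."""
    return {s[i:i + k] for i in range(len(s) - k + 1)}

print(shingles("synchronize") & shingles("synchronise"))  # shared shingles
```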
Error handling
In fault-tolerant systems, distributed databases must be able to cope with the loss or corruption of (part of) their data. The first step is usually replication, which involves making multiple copies of the data and keeping them all up to date as changes are made. However, it is then necessary to decide which copy to rely on when loss or corruption of an instance occurs.
The simplest approach is to have a single master instance that is the sole source of truth. Changes to it are replicated to other instances, and one of those instances becomes the new master when the old master fails.
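A naive single-master failover can be sketched as follows; it deliberately ignores the problem of two instances simultaneously believing they are the master, which the protocols described next exist to solve.

```python
class ReplicaSet:
    """Single source of truth with naive failover: writes go to every
    node (master first), and on master failure the next surviving
    replica is promoted."""

    def __init__(self, nodes):
        self.nodes = list(nodes)  # nodes[0] is the current master

    def write(self, key, value):
        for node in self.nodes:   # replicate the change everywhere
            node[key] = value

    def fail_over(self):
        self.nodes.pop(0)         # drop the failed master
        return self.nodes[0]      # promote the next replica

rs = ReplicaSet([{}, {}, {}])
rs.write("balance", 100)
print(rs.fail_over())  # {'balance': 100} -- data survived the failover
```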
Paxos and Raft are more complex protocols that exist to solve problems with transient effects during failover, such as two instances thinking they are the master at the same time.
Secret sharing is useful if failures of whole nodes are very common. This moves synchronization from an explicit recovery process to being part of each read, where a read of some data requires retrieving encoded data from several different nodes. If corrupt or out-of-date data may be present on some nodes, this approach may also benefit from the use of an error correction code.
DHTs and Blockchains try to solve the problem of synchronization between many nodes (hundreds to billions).
See also
SyncML, a standard mainly for calendar, contact and email synchronization
Synchronization (computer science)
References
Fault-tolerant computer systems | Data synchronization | [
"Technology",
"Engineering"
] | 1,738 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems"
] |
7,360,758 | https://en.wikipedia.org/wiki/Online%20Mendelian%20Inheritance%20in%20Animals | Online Mendelian Inheritance in Animals (OMIA) is an online database of genes, inherited disorders and traits in more than 550 animal species. It is modelled on, and is complementary to, Online Mendelian Inheritance in Man (OMIM). It aims to provide a publicly accessible catalogue of all animal phenes, excluding those in human and mouse, for which species specific resources are already available (OMIM, MLC). Authored by Professor Frank Nicholas of the University of Sydney, with some contribution from colleagues, the database contains textual information and references as well as links to relevant PubMed and Gene records at the NCBI.
OMIA is hosted by the University of Sydney, with an Entrez mirror located at the NCBI.
See also
Medical classification
Online Mendelian Inheritance in Man (OMIM)
References
OMIA (Online Mendelian Inheritance in Animals): an enhanced platform and integration into the Entrez search interface at NCBI. Nucleic Acids Res. 2006 Jan 1;34(Database issue):D599-601.
Online Mendelian Inheritance in Animals (OMIA): a comparative knowledgebase of genetic disorders and other familial traits in non-laboratory animals. Nucleic Acids Res. 2003 Jan 1;31(1):275-7.
External links
Online Mendelian Inheritance in Animals (OMIA)
OMIA mirror at NCBI
Biological databases
Genetic animal diseases
Diagnosis codes | Online Mendelian Inheritance in Animals | [
"Biology"
] | 294 | [
"Bioinformatics",
"Biological databases"
] |
7,361,176 | https://en.wikipedia.org/wiki/Cevimeline | Cevimeline (trade name Evoxac) is a synthetic analog of the natural alkaloid muscarine with a particular agonistic effect on M1 and M3 receptors. It is used in the treatment of dry mouth and Sjögren's syndrome.
Medical uses
Cevimeline is used in the treatment of xerostomia (dry mouth) and Sjögren's syndrome. It increases the production of saliva.
Side effects
Known side effects include nausea, vomiting, diarrhea, excessive sweating, rash, headache, runny nose, cough, drowsiness, hot flashes, blurred vision, and difficulty sleeping.
Contraindications include asthma and angle-closure glaucoma.
Mechanism of action
Cevimeline is a cholinergic agonist. It has a particular effect on M1 and M3 receptors. By activating the M3 receptors of the parasympathetic nervous system, cevimeline stimulates secretion by the salivary glands, thereby alleviating dry mouth.
See also
Pilocarpine — a similar parasympathomimetic medication for dry mouth (xerostomia)
Bethanechol — a similar muscarinic parasympathomimetic with longer-lasting effect
References
External links
Evoxac (cevimeline HCl hydrate capsules) Full Prescribing Information
Daiichi Sankyo
M1 receptor agonists
M3 receptor agonists
Oxathiolanes
Quinuclidines
Spiro compounds | Cevimeline | [
"Chemistry"
] | 308 | [
"Organic compounds",
"Spiro compounds"
] |
7,361,378 | https://en.wikipedia.org/wiki/Retrogression%20heat%20treatment | Retrogression heat treatment (RHT) is a heat treatment process that rapidly heat treats age-hardenable aluminum alloys, most commonly using induction heating. In the past, it was mainly applied to 6061 and 6063 aluminum alloys. RHT makes it possible to form complex shapes without creating damage such as cracks; even hard tempers (for example -T6) can be formed easily after subjecting these alloys to RHT.
References
Materials science | Retrogression heat treatment | [
"Physics",
"Materials_science",
"Engineering"
] | 97 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
7,361,848 | https://en.wikipedia.org/wiki/Black%20swan%20theory | The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on a Latin expression which presumed that black swans did not exist. The expression was used until around 1697 when Dutch mariners saw black swans living in Australia. After this, the term was reinterpreted to mean an unforeseen and consequential event.
The reinterpreted theory was developed by Nassim Nicholas Taleb, starting in 2001, to explain:
The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.
The non-computability of the probability of consequential rare events using scientific methods (owing to the very nature of small probabilities).
The psychological biases that blind people, both individually and collectively, to uncertainty and to the substantial role of rare events in historical affairs.
Taleb's "black swan theory" (which differs from the earlier philosophical versions of the problem) refers only to statistically unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences. More technically, in the scientific monograph "Silent Risk", Taleb mathematically defines the black swan problem as "stemming from the use of degenerate metaprobability".
Background
The phrase "black swan" derives from a Latin expression; its oldest known occurrence is from the 2nd-century Roman poet Juvenal's characterization in his Satire VI of something being "rara avis in terris nigroque simillima cygno" ("a bird as rare upon the earth as a black swan"). When the phrase was coined, the black swan was presumed by Romans not to exist. The importance of the metaphor lies in its analogy to the fragility of any system of thought. A set of conclusions is potentially undone once any of its fundamental postulates is disproved. In this case, the observation of a single black swan would be the undoing of the logic of any system of thought, as well as any reasoning that followed from that underlying logic.
Juvenal's phrase was a common expression in 16th-century London as a statement of impossibility. The London expression derives from the Old World presumption that all swans must be white because all historical records of swans reported that they had white feathers. In that context, a black swan was impossible or at least nonexistent.
However, in 1697, Dutch explorers led by Willem de Vlamingh became the first Europeans to see black swans, in Western Australia. The term subsequently metamorphosed to connote the idea that a perceived impossibility might later be disproved. Taleb notes that in the 19th century, John Stuart Mill used the black swan logical fallacy as a new term to identify falsification.
Black swan events were discussed by Taleb in his 2001 book Fooled by Randomness, which concerned financial events. His 2007 book The Black Swan extended the metaphor to events outside financial markets. Taleb regards almost all major scientific discoveries, historical events, and artistic accomplishments as "black swans"—undirected and unpredicted. He gives the rise of the Internet, the personal computer, World War I, the dissolution of the Soviet Union, and the September 11, 2001 attacks as examples of black swan events.
Taleb asserts:What we call here a Black Swan (and capitalize it) is an event with the following three attributes.
First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme 'impact'. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
I stop and summarize the triplet: rarity, extreme 'impact', and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.
Identifying
Based on the author's criteria:
The event is a surprise (to the observer).
The event has a major effect.
After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals.
According to Taleb, the COVID-19 pandemic was not a black swan, as it was expected with great certainty that a global pandemic would eventually take place. Instead, it is considered a white swan—such an event has a major effect, but is compatible with statistical properties.
Coping with black swans
The practical aim of Taleb's book is not to attempt to predict events which are unpredictable, but to build robustness against negative events while still exploiting positive events. Taleb contends that banks and trading firms are very vulnerable to hazardous black swan events and are exposed to unpredictable losses. On the subject of business, and quantitative finance in particular, Taleb critiques the widespread use of the normal distribution model employed in financial engineering, calling it a Great Intellectual Fraud. Taleb elaborates the robustness concept as a central topic of his later book, Antifragile: Things That Gain From Disorder.
In the second edition of The Black Swan, Taleb provides "Ten Principles for a Black-Swan-Robust Society".
Taleb states that a black swan event depends on the observer. For example, what may be a Black Swan surprise for a turkey is not a Black Swan surprise to its butcher; hence the objective should be to "avoid being the turkey" by identifying areas of vulnerability to "turn the Black Swans white".
Epistemological approach
Taleb claims that his black swan is different from the earlier philosophical versions of the problem, specifically in epistemology (as associated with David Hume, John Stuart Mill, Karl Popper, and others), as it concerns a phenomenon with specific statistical properties which he calls, "the fourth quadrant".
Taleb's problem is about epistemic limitations in some parts of the areas covered in decision making. These limitations are twofold: philosophical (mathematical) and empirical (human-known) epistemic biases. The philosophical problem is about the decrease in knowledge when it comes to rare events because these are not visible in past samples and therefore require a strong a priori (extrapolating) theory; accordingly, predictions of events depend more and more on theories when their probability is small. In the "fourth quadrant", knowledge is uncertain and consequences are large, requiring more robustness.
According to Taleb, thinkers who came before him who dealt with the notion of the improbable (such as Hume, Mill, and Popper) focused on the problem of induction in logic, specifically, that of drawing general conclusions from specific observations. The central and unique attribute of Taleb's black swan event is that it is high-impact. His claim is that almost all consequential events in history come from the unexpected – yet humans later convince themselves that these events are explainable in hindsight.
One problem, labeled the ludic fallacy by Taleb, is the belief that the unstructured randomness found in life resembles the structured randomness found in games. This stems from the assumption that the unexpected may be predicted by extrapolating from variations in statistics based on past observations, especially when these statistics are presumed to represent samples from a normal distribution. These concerns often are highly relevant in financial markets, where major players sometimes assume normal distributions when using value at risk models, although market returns typically have fat tail distributions.
Taleb said:I don't particularly care about the usual. If you want to get an idea of a friend's temperament, ethics, and personal elegance, you need to look at him under the tests of severe circumstances, not under the regular rosy glow of daily life. Can you assess the danger a criminal poses by examining only what he does on an ordinary day? Can we understand health without considering wild diseases and epidemics? Indeed the normal is often irrelevant. Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the 'normal,' particularly with 'bell curve' methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. Its nickname in this book is GIF, Great Intellectual Fraud.More generally, decision theory, which is based on a fixed universe or a model of possible outcomes, ignores and minimizes the effect of events that are "outside the model". For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987), but might not model the breakdown of markets following the September 11, 2001 attacks. Consequently, the New York Stock Exchange and Nasdaq exchange remained closed till September 17, 2001, the most protracted shutdown since the Great Depression. A fixed model considers the "known unknowns", but ignores the "unknown unknowns", made famous by a statement of Donald Rumsfeld. The term "unknown unknowns" appeared in a 1982 New Yorker article on the aerospace industry, which cites the example of metal fatigue, the cause of crashes in Comet airliners in the 1950s.
Deterministic chaotic dynamics reproducing the Black Swan Event have been researched in economics. That is in agreement with Taleb's comment regarding some distributions which are not usable with precision, but which are more descriptive, such as the fractal, power law, or scalable distributions and that awareness of these might help to temper expectations. Beyond this, Taleb emphasizes that many events simply are without precedent, undercutting the basis of this type of reasoning altogether.
Taleb also argues for the use of counterfactual reasoning when considering risk.
See also
Subjective probability
References
Bibliography
The U.S. response to NEOs - avoiding a black swan event
External links
Finance
Epistemological theories
Metatheory of science
Theory
Metaphors referring to birds
Nassim Nicholas Taleb | Black swan theory | [
"Biology"
] | 2,185 | [
"Behavior",
"Behavioral economics",
"Behaviorism"
] |
7,362,029 | https://en.wikipedia.org/wiki/Cellulin | Cellulin or cellulin granules are a type of polysaccharide found exclusively within the oomycetes of the order Leptomitales. Cellulin granules are composed of β-glucan and chitin. The experimentally determined composition of cellulin is 39% glucan (composed of beta-1,3- and beta-1,6-linked glucose units) and 60% chitin.
Research
β-cellulin is a possible treatment for repairing corneal cells. At concentrations of 0.2, 2 and 20 ng/mL, β-cellulin induced rapid repair of corneal epithelial stem cells. At these concentrations, β-cellulin promotes phosphorylation in the erk1/2 signaling pathway during corneal repair in mice. Consistent with this, mutation of erk1/2 inhibited the pathway and slowed the repair of corneal cells in mice.
Increasing the growth factors to 60 ng/mL of β-FGF and EGF and to 30 ng/mL of activin A/β-cellulin increased the production of insulin-producing cells; increasing the concentrations further had no additional effect. This study may provide insight for developing a new way to treat type-1 diabetes, which currently can only be treated with insulin injections.
Despite being produced in large quantities by pancreatic islet cells, β-cellulin, an epidermal growth factor, appears to have little relevance in regulating insulin production.
See also
β-glucan
Cellulose
Chitin
References
Polysaccharides
Water moulds | Cellulin | [
"Chemistry"
] | 347 | [
"Carbohydrates",
"Polysaccharides"
] |
7,362,064 | https://en.wikipedia.org/wiki/MARS%20%28ticket%20reservation%20system%29 | MARS, short for Magnetic-electronic Automatic Reservation System, is a train ticket reservation system used by the Japan Railways Group (JR Group) companies and travel agencies in Japan. It was developed jointly by Hitachi and the former Japanese National Railways (JNR), and inherited by Railway Information Systems Co., Ltd. (JR Systems), which is jointly owned by the seven railway companies of the JR Group: the East Japan Railway Company (JR East), Central Japan Railway Company (JR Central), West Japan Railway Company (JR West), Hokkaido Railway Company (JR Hokkaido), Shikoku Railway Company (JR Shikoku), Kyushu Railway Company (JR Kyushu), and Japan Freight Railway Company (JR Freight).
The MARS system used in JR ticket offices is Japan’s largest online real-time system, providing a year-round availability of 99.999%. It offers a range of services, including seat reservations on Shinkansen and Limited Express trains and fare calculation for basic fare tickets, commuter passes, and express tickets. It is currently connected to approximately 10,000 terminals at JR ticket offices and travel agencies, as well as to online systems run by the individual JR companies. The system is accessed about 8 million times every day, with a daily average of over 1.9 million tickets sold.
Outline
The host computer of the system was previously located in Kokubunji, Tokyo until 2013, when it was moved to an undisclosed location in the northern part of the Kantō region. The system has been managed by JR Systems since 1 April 1987, following the division and privatization of JNR.
Ticket offices at JR stations equipped with MARS terminals are called Midori no Madoguchi ("Green Window"), selling tickets for all JR Group trains as well as some highway buses, route buses and ferries. Passengers can reserve bus and train tickets from one month prior to the given trip. In the JR Central region, these are instead called by the name きっぷうりば kippu uriba, meaning "ticket sales counter".
Naming
Originally short for "Magnetic-electronic Automatic (seat) Reservation System", the backronym was later changed to "Multi Access Reservation System". It has since been reverted to its original meaning.
History
MARS-1
The MARS-1 system was created by Mamoru Hosaka, Yutaka Ohno, and others at the Japanese National Railways' R&D Institute (now the Railway Technical Research Institute), and was built in 1958. It was the world's first seat reservation system for trains, and entered service in February 1960, initially only providing bookings for the Kodama and Tsubame limited express services. The MARS-1 was capable of reserving seat positions, and was controlled by a Hitachi mainframe transistor computer with a central processing unit consisting of a thousand transistors and a magnetic drum memory unit for data storage, which was where the MARS acronym originated from.
In 2008, the MARS-1 system received a "One Step on Electro Technology -Look Back to the Future-" commemorative plaque from the Institute of Electrical Engineers of Japan.
MARS 100/200
MARS 300
MARS 500
MARS 501
Introduced in stages between 2002 and 2004, the MARS 501 introduced the concept of an Ethernet-based client–server model. Also, the ticket paper type was changed to thermal paper.
MARS 505
The latest version of MARS, the MARS 505 system, was introduced in April 2020; it expanded contactless and ticketless boarding and booking capabilities brought about by the rise of mobile apps on smartphones and tablets.
References
External links
JR Railway Information Systems
Passenger rail transport in Japan
Travel technology
Computer-related introductions in 1960
Train-related introductions in 1960
1960 software
Computer reservation systems
Route planning software
Hitachi products | MARS (ticket reservation system) | [
"Technology"
] | 751 | [
"Computer reservation systems",
"Computer systems"
] |
7,362,094 | https://en.wikipedia.org/wiki/N-Myc | N-myc proto-oncogene protein also known as N-Myc or basic helix-loop-helix protein 37 (bHLHe37), is a protein that in humans is encoded by the MYCN gene.
Function
The MYCN gene is a member of the MYC family of transcription factors and encodes a protein with a basic helix-loop-helix (bHLH) domain. This protein is located in the cell nucleus and must dimerize with another bHLH protein in order to bind DNA. N-Myc is highly expressed in the fetal brain and is critical for normal brain development.
The MYCN gene has an antisense RNA, N-cym or MYCNOS, transcribed from the opposite strand, which can be translated to form a protein product. N-Myc and MYCNOS are co-regulated both in normal development and in tumor cells, so it is possible that the two transcripts are functionally related. It has been shown that the antisense RNA encodes a protein, named NCYM, that originated de novo and is specific to humans and chimpanzees. This NCYM protein inhibits GSK3β and thus prevents MYCN degradation. Transgenic mice that harbor the human MYCN/NCYM pair often show neuroblastomas with distant metastasis, which are atypical for normal mice. Thus NCYM represents a rare example of a de novo gene that has acquired molecular function and plays a major role in oncogenesis.
Clinical significance
Amplification and overexpression of N-Myc can lead to tumorigenesis. Excess N-Myc is associated with a variety of tumors, most notably neuroblastomas, where patients with amplification of the N-Myc gene tend to have poor outcomes. MYCN can also be activated in neuroblastoma and other cancers through somatic mutation. Intriguingly, recent genome-wide H3K27ac profiling in patient-derived neuroblastoma samples revealed four distinct super-enhancer (SE)-driven epigenetic subtypes, each characterized by its own specific master regulatory network. Three of them are named after the known clinical groups: MYCN-amplified, MYCN non-amplified high-risk, and MYCN non-amplified low-risk neuroblastomas, while the fourth displays cellular features that resemble multipotent Schwann cell precursors. Interestingly, the cyclin gene CCND1 was regulated through distinct and shared SEs in the different subtypes, and, more importantly, some tumors showed signals belonging to multiple epigenetic signatures, suggesting that the epigenetic landscape is likely to contribute to intratumoral heterogeneity.
Interactions
N-Myc has been shown to interact with MAX.
N-Myc is also stabilized by Aurora A, which protects it from degradation. Drugs that target this interaction are under development, and are designed to change the conformation of Aurora A. Conformational change in Aurora A leads to release of N-Myc, which is then degraded in a ubiquitin-dependent manner.
Independently of the MYCN/MAX interaction, MYCN is also a transcriptional co-regulator of p53 in MYCN-amplified neuroblastoma. MYCN alters the transcription of p53 target genes that regulate apoptosis responses and DNA damage repair in the cell cycle. This MYCN-p53 interaction occurs through exclusive binding of MYCN to the C-terminal domains of tetrameric p53. Acting like a post-translational modification, MYCN binding to the C-terminal domains of tetrameric p53 affects p53 promoter selectivity and interferes with the binding of other cofactors to this region.
See also
Myc
References
Further reading
External links
Transcription factors
Human proteins | N-Myc | [
"Chemistry",
"Biology"
] | 790 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
7,362,109 | https://en.wikipedia.org/wiki/Long%20path%20laser | The Long Path laser was an early high-energy infrared laser at the Lawrence Livermore National Laboratory used to study inertial confinement fusion. Long Path was completed in 1972 and was the first ICF laser ever to use neodymium-doped glass as the lasing medium. It was capable of delivering about 50 joules of light at 1,062 nm onto a target in ~10 ns pulses. It did not use spatial filters to smooth the beam after amplification stages and thus had a fairly poor beam quality. Long Path was mostly used to investigate laser energy absorption in deuterated plastic targets.
See also
Laser
Lawrence Livermore National Laboratory
List of laser articles
List of laser types
External links
Inertial confinement fusion research lasers
Lawrence Livermore National Laboratory | Long path laser | [
"Physics"
] | 154 | [
"Plasma physics stubs",
"Plasma physics"
] |
7,362,120 | https://en.wikipedia.org/wiki/Orosomucoid | Orosomucoid (ORM) or alpha-1-acid glycoprotein (α1AGp, AGP or AAG) is an acute phase protein found in plasma. It is an alpha-globulin glycoprotein and is modulated by two polymorphic genes. It is synthesized primarily in hepatocytes and has a normal plasma concentration between 0.6–1.2 mg/mL (1–3% plasma protein). Plasma levels are affected by pregnancy, burns, certain drugs, and certain diseases, particularly HIV.
The only established function of ORM is to act as a carrier of basic and neutrally charged lipophilic compounds. In medicine, it is known as the primary carrier of basic (positively charged) drugs (whereas albumin carries acidic (negatively charged) and neutral drugs), steroids, and protease inhibitors. Aging causes a small decrease in plasma albumin levels; if anything, there is a small increase in alpha-1-acid glycoprotein. The effect of these changes on drug protein binding and drug delivery, however, appear to be minimal. AGP shows a complex interaction with thyroid homeostasis: AGP in low concentrations was observed to stimulate the thyrotropin (TSH) receptor and intracellular accumulation of cyclic AMP. High AGP concentrations, however, inhibited TSH signalling.
Alpha-1-acid glycoprotein has been identified as one of four potentially useful circulating biomarkers for estimating the five-year risk of all-cause mortality (the other three are albumin, very low-density lipoprotein particle size, and citrate).
Orosomucoid levels increase in obstructive jaundice, while they diminish in hepatocellular jaundice and in intestinal infections.
See also
Alpha-1 antitrypsin deficiency
Alpha-1 antitrypsin
References
External links
Acute-phase proteins
Glycoproteins
Molecular biology
Proteomics | Orosomucoid | [
"Chemistry",
"Biology"
] | 420 | [
"Glycoproteins",
"Glycobiology",
"Biochemistry",
"Molecular biology"
] |
7,362,208 | https://en.wikipedia.org/wiki/ANGIS | The Australian National Genomic Information Service (ANGIS) provided access for biologists to a comprehensive system of bioinformatics software, databases, documentation, training and support, on a subscription basis. While clearly targeted at Australian researchers, the tools ANGIS provided were available online to investigators worldwide. ANGIS was closed at the end of 2009.
Tools
BioManager was the main resource ANGIS provided. Although ANGIS/BioManager was closed on 30 November 2009, Prof Peter Reeves of the University of Sydney continues to host an implementation of BioManager on a voluntary basis. BioManager was developed from the earlier tool BioNavigator, which was originally developed by a now-defunct Australian bioinformatics company, Entigen(1). BioManager is a bioinformatics workflow management system that allows integration and use of multiple bioinformatic computer packages through a single web user interface. Data from analyses are stored on the system or used as inputs to other packages in BioManager. As provided by ANGIS, BioManager was a subscription-based service, with access made available to the Australian and New Zealand academic communities. Access from outside Australia and NZ was by enquiry. ANGIS also provided a variety of training resources and courses to help make these tools readily usable by the scientific community.
History
ANGIS began in 1990 as a project at the University of Sydney, originally the Sydney University Sequence Analysis Interface (SUSAI), a multi-disciplinary effort spearheaded by Trevor Cole, Alex Reisner and Peter Reeves. One year later, in 1991, SUSAI became ANGIS through the formation of the Australian Genomic Information Center (AGIC), a government-sanctioned research center. In March 2007, oversight for ANGIS was passed to the University of Sydney in New South Wales, Australia, within the newly formed Sydney Bioinformatics. ANGIS appears to have ended as an entity with the closing of Sydney Bioinformatics on 31 December 2010.
Teaching
ANGIS was deeply involved in post-graduate training courses and workshops. General bioinformatics application workshops, along with specialist workshops in proteomics, microarrays, database searching and phylogenetics, were held in Sydney and elsewhere in the country. In-house training courses in these areas ranged in length from one to four days.
BioManager has been used as the main bioinformatics training tool for Australian and other international universities, and academic subscriptions to ANGIS/BioManager included teaching logins for student use.
References
Notes
(1) eBioinformatics/Encompass/Entigen closed in 2001
External links
Biomanager service
Bioinformatics software | ANGIS | [
"Biology"
] | 539 | [
"Bioinformatics",
"Bioinformatics software"
] |
7,362,920 | https://en.wikipedia.org/wiki/Public%20analyst | Public Analysts are scientists in the British Isles whose principal task is to ensure the safety and correct description of food by testing for compliance with legislation. Most Public Analysts are also Agricultural Analysts who carry out similar work on animal feedingstuffs and fertilisers. Nowadays this includes checking that the food labelling is accurate. They also test drinking water, and may carry out chemical and biological tests on other consumer products. While much of the work is done by other scientists and technicians in the laboratory, the Public Analyst has legal responsibility for the accuracy of the work and the validity of any opinion expressed on the results reported. The UK-based Association of Public Analysts includes members with similar roles if different titles in other countries.
History
The office of Public Analyst was established by the Adulteration of Food and Drink Act 1860 (23 & 24 Vict. c. 84), the first three appointments being in London, Birmingham and Dublin. The first Scottish analyst was Henry Littlejohn in Edinburgh in 1862, who, with a strong medical background and a brilliant mind, established many of the critical foundations of public analysis. The Sale of Food and Drugs Act 1875 (38 & 39 Vict. c. 63) made food analysis compulsory and the Sale of Food and Drugs Act 1899 (62 & 63 Vict. c. 51) extended its scope. Sampling officers generally operated through local public health or sanitary committees. By 1894 there were 99 public analysts overseeing 237 English and Welsh districts. The City of London Corporation had three food inspectors and a wharf and warehouse inspector in 1908. Bradford employed an inspector who made 756 visits to fish and chip shops in 1915. In the 1930s the staff in Birmingham comprised three qualified assistants, a clerk and a laboratory attendant.
The Nuisances Removal Act for England 1855 (18 & 19 Vict. c. 121) and the Public Health Act 1875 (38 & 39 Vict. c. 55) gave authority for taking food samples "at all reasonable times". Inspectors, police constables and samplers were responsible for taking food samples, which were divided into three parts, for the vendors, the inspectors and the analysts and sealed into bottles. Food systems were engineered to allow inspection through portals, manholes and windows. Prosecution was not common though fines and prison sentences were not unknown. Adulteration rates fell from 13.8% of samples in 1879 to 4.8% in 1930. Inspectors were empowered to follow milk to sources outside their formal jurisdiction in checking for infection with tuberculosis. Sanitary authorities were required to register all dairies and enforce cleanliness regulations.
The Manchester Corporation (General Powers) Act 1899 (62 & 63 Vict. c. clxxxviii), as amended in 1904, contained what were known as 'milk clauses', which empowered officials to prosecute anyone who knowingly sold milk from cows with tuberculosis of the udder, to demand the isolation of infected cows and notification of any cow exhibiting signs of tuberculosis of the udder and to inspect the cows and take samples from herds which supplied milk to the city. By 1910 these provisions had been copied by 67 boroughs and 24 urban districts.
The Society of Public Analysts was established in 1874, later becoming the Society for Analytical Chemistry and joining with other societies to form the Royal Society of Chemistry in 1980.
Since the separation of the UK and Ireland, the function of the Public Analyst operates under different legislation, but the term and general duties are the same. The original work was chemical testing, and this is still a major part, but nowadays microbiological examination of food is an important activity, particularly in Scotland, where Public Analyst laboratories also carry out a statutory Food Examiner role.
UK
The primary UK legislation is the Food Safety Act 1990. All local authorities are required to appoint a Public Analyst, although there have always been fewer Public Analysts and their laboratories than local authorities, most being shared by a number of local authorities. On the UK mainland there has always been a mixture of public sector and private sector laboratories. This remains the case today - but they all provide an equivalent service, and avoidance of conflicts of interest are ensured by the statutory terms of appointment. There is a statutory qualification requirement for Public Analysts, known as the Mastership in Chemical Analysis (MChemA), awarded by the Royal Society of Chemistry. This is a specialist postgraduate qualification by examination that verifies knowledge and understanding of food and its potential defects, interpretation of food law, and the application and interpretation of chemical analysis for food law enforcement.
The Public Analysts’ laboratories must be third-party accredited to International Standard BS EN ISO/IEC 17025:2017.
In the mid-1980s there were some 40 Public Analyst Laboratories in the UK with over 100 appointed Public Analysts. By 1993 that had reduced to 34 laboratories and around 80 Public Analysts, and by 2010 the number of Public Analyst Laboratories had reduced to 22 with only about 26 Public Analysts. As of 2022 there are 15 Public Analyst laboratories remaining in the UK. In part, the reduction in the number of laboratories over the decades has been due to rationalisation and benefits from economies of scale; however, in larger part it has arisen from a lack of adequate funding. Although some of the remaining laboratories are larger than many that no longer exist, the overall capacity of the system is now far less than it used to be.
Enforcement of food law in the UK is done by local authorities, principally their environmental health officers and trading standards officers. Whilst these officers are empowered to take samples of food, the actual assessment in terms of chemical analysis or microbiological examination and subsequent interpretation that are necessary to determine whether a food complies with legislation, is carried out by Public Analysts and Food Examiners respectively, scientists whose qualifications and experience are specified by regulations.
Ireland
Public Analyst Laboratories in Cork, Dublin and Galway provide an analytical service to the Food Safety Authority.
Crown Dependencies
There is one Public Analyst Laboratory in each of Guernsey, Isle of Man and Jersey serving the needs of these islands.
Australia
There is also one Public Analyst Laboratory in Australia.
Practice
The Public Analyst runs a laboratory which will:
Analyse food:
for composition: many foods have legally defined, customary or expected compositions
for additives: which must be legally permitted and within prescribed concentrations
for contamination: chemical, microbiological
to assess the accuracy of labelling
to investigate whether complaints by the public are justified
Interpret relevant law passed by the EU and UK or Ireland:
act as expert witness in prosecutions
In addition to their central role in relation to food law enforcement, Public Analysts provide expert scientific support to local authorities and the private sector in various other areas, for example they:
analyse drinking water, bathing water (including swimming pools), industrial effluents, industrial process waters and other waters
investigate environmental products and processes including assessing land contamination, building materials and examining fuels
advise on waste management
investigate and monitor air pollution
advise on consumer safety - in particular consumer products such as toys
monitor asbestos and other hazards
carry out toxicological work to assist HM Coroners
Sampling
Sampling is largely outside the control of the Public Analyst.
Local authorities have a duty to check the safety of food and to provide adequate protection of the consumer. To achieve that, they devise sampling plans, seeking to balance their need to monitor food against limited resources and other demands on their budgets. A typical sampling plan for a local authority might include samples of the following:
samples from a particular source - a supermarket, manufacturer or caterer or country
meat products - to check the percentage of meat or fat, non-meat ingredients, additives or species
product marketing claims
undeclared ingredients in prepared foods
contaminated products
nutritional content of prepared meals
References
See also
Chartered Chemist
Food Safety Act 1990
Environmental chemistry
Food scientists
Analytical chemistry
Local government in the United Kingdom
Royal Society of Chemistry
Public health in the United Kingdom
Food safety | Public analyst | [
"Chemistry",
"Environmental_science"
] | 1,566 | [
"Environmental chemistry",
"nan",
"Royal Society of Chemistry"
] |
7,363,084 | https://en.wikipedia.org/wiki/Perkin%20triangle | A Perkin triangle is a specialized apparatus for the distillation of air-sensitive materials.
Some compounds have high boiling points and are sensitive to air. A simple vacuum distillation system can be used, whereby the vacuum is replaced with an inert gas after the distillation is complete. However, this is a less satisfactory system if one desires to collect fractions under a reduced pressure. To do this, a "pig" adapter can be added to the end of the condenser, or for better results or for very air-sensitive compounds, a Perkin triangle apparatus can be used.
The Perkin triangle uses a series of glass or Teflon taps to allow fractions to be isolated from the rest of the still, without the main body of the distillation being removed from either the vacuum or heat source, so that the reflux may continue. To do this, the sample is first isolated from the vacuum through the taps. The vacuum over the sample is then replaced with an inert gas such as nitrogen or argon. The collection vessel or still receiver can then be removed and stoppered. Finally, a fresh collection vessel can be added to the system, evacuated, and linked back to the distillation system through the taps to collect the next fraction. The process is repeated until all fractions have been collected.
Solvent drying
A Perkin triangle is also a convenient device for drying solvents. The solvent can be allowed to reflux over a drying agent housed in the still pot (shown as 2 in the figure) for a suitable time to dry solvent. The collecting tap (shown as 5 in the figure) can then be opened to collect the solvent in a Schlenk flask for storage. Depending on the boiling point of the solvent, a vacuum can be applied.
Reference textbook
External links
Royal Society of Chemistry: Classic Kit: 'Perkin's' triangle
Distillation
Laboratory equipment
Laboratory glassware | Perkin triangle | [
"Chemistry"
] | 397 | [
"Distillation",
"Separation processes"
] |
7,363,390 | https://en.wikipedia.org/wiki/Sexolog%C3%ADa%20y%20Sociedad | Sexología y Sociedad is a medical journal published in Cuba. The journal was first published in 1994, and is currently published by the Cuban National Center for Sex Education. The journal is published in both English and Spanish. The editor is Mariela Castro.
External links
Medicine in Cuba
Multilingual journals
Academic journals established in 1994
Sexology journals
Triannual journals | Sexología y Sociedad | [
"Biology"
] | 79 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
7,363,669 | https://en.wikipedia.org/wiki/Fischer%20glycosidation | Fischer glycosidation (or Fischer glycosylation) refers to the formation of a glycoside by the reaction of an aldose or ketose with an alcohol in the presence of an acid catalyst. The reaction is named after the German chemist, Emil Fischer, winner of the Nobel Prize in chemistry, 1902, who developed this method between 1893 and 1895.
Commonly, the reaction is performed using a solution or suspension of the carbohydrate in the alcohol as the solvent. The carbohydrate is usually completely unprotected. The Fischer glycosidation reaction is an equilibrium process and can lead to a mixture of ring-size isomers and anomers, plus, in some cases, small amounts of acyclic forms. With hexoses, short reaction times usually lead to furanose ring forms, and longer reaction times lead to pyranose forms. With long reaction times the most thermodynamically stable product will result, which, owing to the anomeric effect, is usually the alpha anomer.
See also
Fischer–Speier esterification - a more general reaction where an alcohol and carboxylic acid are coupled to form an ester
Helferich method - a glycosidation carried out with phenol
References
Carbohydrate chemistry
Glycosides
Substitution reactions
Organic reactions
Name reactions
Emil Fischer | Fischer glycosidation | [
"Chemistry"
] | 292 | [
"Carbohydrates",
"Glycosides",
"Organic reactions",
"Name reactions",
"Carbohydrate chemistry",
"Biomolecules",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
7,364,243 | https://en.wikipedia.org/wiki/Schlenk%20flask | A Schlenk flask, or Schlenk tube, is a reaction vessel typically used in air-sensitive chemistry, invented by Wilhelm Schlenk. It has a side arm fitted with a PTFE or ground glass stopcock, which allows the vessel to be evacuated or filled with gases (usually inert gases like nitrogen or argon). These flasks are often connected to Schlenk lines, which allow both operations to be done easily.
Schlenk flasks and Schlenk tubes, like most laboratory glassware, are made from borosilicate glass such as Pyrex.
Schlenk flasks are round-bottomed, while Schlenk tubes are elongated. They may be purchased off-the-shelf from laboratory suppliers or made from round-bottom flasks or glass tubing by a skilled glassblower.
Evacuating a Schlenk flask
Typically, before solvent or reagents are introduced into a Schlenk flask, the flask is dried and the atmosphere of the flask is exchanged with an inert gas. A common method of exchanging the atmosphere of the flask is to flush the flask out with an inert gas. The gas can be introduced through the sidearm of the flask, or via a wide bore needle (attached to a gas line). The contents of the flask exit the flask through the neck portion of the flask. The needle method has the advantage that the needle can be placed at the bottom of the flask to better flush out the atmosphere of the flask. Flushing a flask out with an inert gas can be inefficient for large flasks and is impractical for complex apparatus.
An alternative way to exchange the atmosphere of a Schlenk flask is to use one or more "vac-refill" cycles, typically using a vacuum-gas manifold, also known as a Schlenk line. This involves pumping the air out of the flask and replacing the resulting vacuum with an inert gas. For example, evacuation of the flask to 1 mmHg and then replenishing the atmosphere with inert gas leaves 0.13% of the original atmosphere (1/760). Two such vac-refill cycles leave 0.000173% ((1/760)²). Most Schlenk lines easily and quickly achieve a vacuum of 1 mmHg (~1.3 mBar).
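The residual-atmosphere figures follow from simple dilution arithmetic: each vac-refill cycle multiplies the remaining fraction of the original atmosphere by the ratio of the evacuation pressure to atmospheric pressure. A quick check of the numbers quoted above, assuming ideal mixing and refills of pure inert gas:

```python
def residual_fraction(vacuum_mmhg: float, cycles: int,
                      atmospheric_mmhg: float = 760.0) -> float:
    """Fraction of the original atmosphere left after n vac-refill cycles."""
    return (vacuum_mmhg / atmospheric_mmhg) ** cycles

print(f"{residual_fraction(1.0, 1):.2%}")  # 0.13%
print(f"{residual_fraction(1.0, 2):.6%}")  # 0.000173%
```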
Varieties
When using Schlenk systems, including flasks, the use of grease is often necessary at stopcock valves and ground glass joints to provide a gas-tight seal and prevent glass pieces from fusing. In contrast, Teflon plug valves may have a trace of oil as a lubricant but generally no grease. In the following text any "connection" is assumed to be rendered mostly air-free through a series of vac-refill cycles.
Standard Schlenk flask
The standard Schlenk flask is a round bottom, pear-shaped, or tubular flask with a ground glass joint and a side arm. The side arm contains a valve, usually a greased stopcock, used to control the flask's exposure to a manifold or the atmosphere. This allows a material to be added to a flask through the ground glass joint, which is then capped with a septum. This operation can, for example, be done in a glove box. The flask can then be removed from the glove box and taken to a Schlenk line. Once connected to the Schlenk line, the inert gas and/or vacuum can be applied to the flask as required. While the flask is connected to the line under a positive pressure of inert gas, the septum can be replaced with other apparatus, for example a reflux condenser. Once the manipulations are complete, the contents can be vacuum dried and placed under a static vacuum by closing the side arm valve. These evacuated flasks can be taken back into a glove box for further manipulation or storage of the flasks' contents.
Schlenk bomb
A "bomb" flask is subclass of Schlenk flask which includes all flasks that have only one opening accessed by opening a Teflon plug valve. This design allows a Schlenk bomb to be sealed more completely than a standard Schlenk flask even if its septum or glass cap is wired on. Schlenk bombs include structurally sound shapes such as round bottoms and heavy walled tubes. Schlenk bombs are often used to conduct reactions at elevated pressures and temperatures as a closed system. In addition, all Schlenk bombs are designed to withstand the pressure differential created by the ante-chamber when pumping solvents into a glove box.
In practice Schlenk bombs can perform many of the functions of a standard Schlenk flask. Even when the opening is used to fit a bomb to a manifold, the plug can still be removed to add or remove material from the bomb. In some situations, however, Schlenk bombs are less convenient than standard Schlenk flasks: they lack an accessible ground glass joint to attach additional apparatus; the opening provided by plug valves can be difficult to access with a spatula, and it can be much simpler to work with a septum designed to fit a ground glass joint than with a Teflon plug.
The name "bomb" is often applied to containers used under pressure such as a bomb calorimeter. While glass does not equal the pressure rating and mechanical strength of most metal containers, it does have several advantages. Glass allows visual inspection of a reaction in progress, it is inert to a wide range of reaction conditions and substrates, it is generally more compatible with common laboratory glassware, and it is more easily cleaned and checked for cleanliness.
Straus flask
A Straus flask (often misspelled "Strauss") is a subclass of "bomb" flask originally developed by Kontes Glass Company, commonly used for storing dried and degassed solvents. Straus flasks are sometimes referred to as solvent bombs — a name which applies to any Schlenk bomb dedicated to storing solvent. Straus flasks are mainly differentiated from other "bombs" by their neck structure. Two necks emerge from a round bottom flask, one larger than the other. The larger neck ends in a ground glass joint and is permanently partitioned by blown glass from direct access to the flask. The smaller neck includes the threading required for a Teflon plug to be screwed in perpendicular to the flask. The two necks are joined through a glass tube. The ground glass joint can be connected to a manifold directly or through an adapter and hosing. Once connected, the plug valve can be partially opened to allow the solvent in the Straus flask to be vacuum transferred to other vessels. Alternatively, once connected to the line, the neck can be placed under a positive pressure of inert gas and the plug valve can be fully removed. This allows direct access to the flask through a narrow glass tube now protected by a curtain of inert gas. The solvent can then be transferred through a cannula to another flask. In contrast, other bomb flask plugs are not necessarily ideally situated to protect the atmosphere of the flask from the external atmosphere.
Solvent pot
Straus flasks are distinct from "solvent pots", which are flasks that contain a solvent as well as drying agents. Solvent pots are not usually bombs, or even Schlenk flasks in the classic sense. The most common configuration of a solvent pot is a simple round bottom flask attached to a 180° adapter fitted with some form of valve. The pot can be attached to a manifold and the contents distilled or vacuum transferred to other flasks free of soluble drying agents, water, oxygen or nitrogen. The term "solvent pot" can also refer to the flask containing the drying agents in a classic solvent still system. Due to fire risks, solvent stills have largely been replaced by solvent columns in which degassed solvent is forced through an insoluble drying agent before being collected. Solvent is usually collected from solvent columns through a needle connected to the column which pierces the septum of a flask or through a ground glass joint connected to the column, as in the case of a Straus flask.
References
Further reading
Laboratory glassware
Air-free techniques
German inventions | Schlenk flask | [
"Chemistry",
"Engineering"
] | 1,764 | [
"Vacuum systems",
"Air-free techniques"
] |
7,364,325 | https://en.wikipedia.org/wiki/Erotic%20lactation | Erotic lactation is sexual arousal by sucking on a female breast. Depending on the context, the practice can also be referred to as adult suckling, adult nursing, and adult breastfeeding. Practitioners sometimes refer to themselves as being in an adult nursing relationship (ANR). Two persons in an exclusive relationship can be called a nursing couple.
Milk fetishism and lactophilia are medical, diagnostic terms for paraphilias and are used for disorders according to the precise criteria of ICD-10 and DSM-IV.
Physiology
Breasts, and especially nipples, are highly erogenous zones, for both men and women. Nipple and breast stimulation of women is a near-universal aspect of human sexuality, though men's nipples are not as sexualized. Humans are the only primates whose female members have permanently enlarged breasts after the onset of puberty; the breasts of other primate species are enlarged only during pregnancy and nursing. One hypothesis postulates that the breasts grew as a frontal counterpart to the buttocks, in order to attract mates as primates became upright, a model first developed in 1967. Other hypotheses include that by chance breasts act as a cushion for infant heads, are a signal of fertility, or elevate the infant's head in breastfeeding to prevent suffocation. Paradoxically, there is even a school that believes that they are an evolutionary flaw, and can actually suffocate a nursing infant.
The association of pleasure and nutrition holds true as well for the lips, also erogenous zones, where pleasure may have led to "kiss feeding", in which mothers chew food before passing it on to the child.
Unintended milk flow (galactorrhea) is often caused by nipple stimulation, and it is possible to reach normal milk production exclusively by suckling on the breast. Nipple stimulation of any sort has been noted to reduce the incidence of breast cancer.
Some people lose the ability to be aroused while breastfeeding, and thus would not find lactation with a sexual partner to be erotic. This can be a result of physical reasons (soreness) or psychological reasons (conflicted about their breasts being used other than for an infant).
Motivations
Because female breasts and nipples are generally regarded as an important part of sexual activity in most cultures, it is not uncommon that couples may proceed from oral stimulation of the nipples to actual breastfeeding. In its issue of March 13, 2005, the London weekly newspaper The Sunday Times reported on a scientific survey of 1,690 British men indicating that in 25 to 33% of all couples, the male partner had suckled his wife's breasts. The men regularly gave a genuine emotional need as their motive.
Erotic lactation is sometimes seen as a kink. Those who partake in it can become sexually aroused by seeing a person lactate, having sex with a lactating person or sucking on their breasts.
Social implications
The breasts have two main roles in human society: nutritive and sexual. Breastfeeding in general is considered by some to be a mild form of exhibitionism, especially in Western societies (see breastfeeding in public). Breastfeeding parents have faced legal ramifications for nursing their children into toddlerhood or in public, or for photographing themselves while nursing.
Researcher Nikki Sullivan, in her book A Critical Introduction to Queer Theory, calls erotic lactation a manifestation of "Queer." She defines Queer as an ideology; that is, as a "sort of vague and indefinable set of practices and (political) positions that has the potential to challenge normative knowledges and identities." Drawing on a statement of David Halperin, she continues "since queer is a positionality rather than an identity in the humanist sense, it is not restricted to gays and lesbians but can be taken up by anyone who feels marginalized as a result of their sexual practices." The heteronormative profile of breastfeeding assumes certain norms:
an infant up to twelve months old;
motivations of nutritional and developmental benefits for the child and physiological benefits for the mother;
possible secondary motivations of convenience and cheapness;
practice in private, domestic settings; and
breast milk-consumption exclusivity to the youngest infant
Additionally, any relevant third party is assumed to be the mother's significant other and this person is relegated to a supportive role to maximize the breastfeeding mother's success.
Varieties
Various methods are employed to practice erotic lactation. They are listed according to prevalence, in decreasing order:
Lactation games
Lactation games include any kind of sexual activity which includes a person's breast milk. Such activity is widespread, and often unintentional, in the time after someone gives birth, since many people experience a let-down reflex (releasing milk) when sexually aroused.
Lactation pornography
While lactation does appear in pornography, it is a specialty niche and is considered taboo by many because of its proximity to incest and children. Most depictions of breasts are without milk and abound in the media in an erotic way, both in and out of pornography.
Adult nursing relationship
An adult nursing relationship (ANR) involves the suckling of milk from a person's breast on a regular basis by one or more partner(s). Successful ANRs depend on a stable and long-term relationship, as otherwise it is very difficult to maintain a steady milk flow. Couples may begin an ANR by transferring regular suckling from a child to a sexual partner (e.g. spouse). Such a relationship may form as an expression of close intimacy and mutual tenderness, and may even exist without sex. Breastfeeding can have a strong stabilizing effect on the partnership. The person breastfeeding may experience orgasms or a pleasurable let-down reflex.
ANRs have also been employed in cases where a parent may desire to breastfeed their child, but has to find an alternative to inducing lactation. They may have difficulty beginning lactation, and so supplement the infant's suckling with that of a partner. Or there are cases where breastfeeding was interrupted for an extended period of time as a result of infant prematurity, infant absence, or parent's illness (taking prescription medication). In such cases, adult nursing has often caused lactation to continue until it was possible for the child to resume breastfeeding. Others may want to nurse an adopted child, so use an ANR to stimulate breast milk production before the adoption occurs. Though such scenarios do not have erotic motivations, erotic expression may be an additional aspect of the relationship.
Pumping
Some people experience sensual pleasure from using a breast pump to extract milk from their breasts or from expressing milk manually—with or without a partner. In addition to the sensual pleasure, women have reported feeling more feminine while producing milk and continue with lactation for emotional or sensual reasons after weaning a baby.
Lactation prostitution
This is the act of breastfeeding adults for pay (not to be confused with breastfeeding infants or babies for pay, i.e. wet nursing). In 2003, there was a report of a Chinese brothel that offered lactation services to its clients.
Infantilism
As a part of the sexual fetish of infantilism, the non-lactating partner assumes the role of a baby in sexual role-play. Breastfeeding might play a secondary role in this type of relationship; and being pampered by "mommy", wearing diapers, or a hidden incestuous character may be the predominant motivation in this kind of relationship.
Lactation, re-lactation and induced lactation
Erotic lactation between partners or an adult nursing relationship may develop from natural breastfeeding of a baby. During the lactation period the partner starts to suckle on the female breast, and continues after the baby is weaned off. Milk production is continually stimulated and the milk flow continues. According to the book Body parts: critical explorations in corporeality, adult nursing may occur when an "individual, usually a mother, may choose to continue lactating after weaning a child, so that she avoids the significant physical challenge that inducing lactation can entail."
However, milk production can be "artificially" and intentionally induced in an individual in the absence of any pregnancy. This is called induced lactation, while someone who has lactated before and restarts is said to relactate. This can be done by regularly sucking on the nipples (several times a day), massaging and squeezing the female breasts, or with additional help from temporary use of milk-inducing drugs, such as the dopamine antagonist domperidone. In principle—with considerable patience and perseverance—it is possible to induce lactation by sucking on the nipples alone.
It is not necessary that the individual has ever been pregnant, and they can be well into their postmenopausal period. Once established, lactation adjusts to demand. As long as there is regular breast stimulation, lactation is possible.
Adult lactation historically and culturally
Though birth is the beginning of the separation between mother and child, breastfeeding slows this process, keeping the mother and infant physically connected, sometimes for years. As a source of nourishment, the immediacy of this connection is intensified. Breastfeeding has a sexual element as a result of physiological factors. In a study conducted in 1999, approximately 33 to 50 percent of mothers found breastfeeding erotic, and among them 25 percent felt guilty because of this. This corroborated a 1949 study which found that in a few cases where the arousal was strong enough to induce orgasm, some nursing mothers abandoned breastfeeding altogether. In a 1988 questionnaire on orgasm and pregnancy published in a Dutch magazine for women, 34 percent (153 respondents) answered in the affirmative when asked "Did you experience, while breastfeeding, a sensation of sexual excitement?", and 71 percent answered in the affirmative when asked "Did you experience, while breastfeeding, pleasurable contractions in the uterine region?"
Adult lactation in history
Since the European Middle Ages, a multitude of subliminally erotic, visionary experiences of saints have been passed on in which breastfeeding plays a major role. One prominent example is the Lactatio of Saint Bernard of Clairvaux.
Roman Charity
Roman Charity (or Caritas Romana) is a story of a woman, Pero, who secretly breastfeeds her father, Cimon, after he is incarcerated and sentenced to death by starvation. She is found out by a jailer, but her act of selflessness impresses officials and wins her father's release. The story comes from the Roman writer Valerius Maximus in the years AD 14–37. In about AD 1362 the story was retold by the famous writer Giovanni Boccaccio. After Boccaccio, hundreds or possibly thousands of paintings were created which tell the story. A variant of this story can be found at the conclusion of John Steinbeck's 1939 novel The Grapes of Wrath. Primarily, the story tells of a conflict between an existing taboo (implied incest and adult breastfeeding of a woman's milk) and the saving of a life by breaking that taboo; in this respect there is no erotic focus to the story.
Valerius Maximus tells another story about a woman breastfeeding her mother, followed by the very short story, consisting of one sentence only, of a woman breastfeeding her father. Thirteen hundred years later, Boccaccio retold the (first) mother-daughter story without mentioning the father-daughter one; nevertheless, the mother-daughter version was apparently forgotten, and nearly all "Caritas Romana" oil paintings and drawings show only the father-daughter story.
Pre-industrial England
Adult suckling was used to treat ailing adults, for illnesses including eye disease and pulmonary tuberculosis. The writer Thomas Moffat recorded one physician's use of a wet nurse in a tome first published in 1655.
Islamic law
In traditional Islamic law, a child under the age of two who is suckled by a woman becomes that woman's child through a foster relationship (the woman is then called a "milk mother"), subject to many strict conditions: for example, that the suckling should be of such quantity that it could be said that the bones of the child were strengthened and the flesh allowed to grow; if that cannot be ascertained, then one full day and night of suckling, or fifteen sucklings to the child's fill, is sufficient. However, according to the jurist Abu's-Su`ud (c. 1490–1574), this only applies to sucklings under the age of two and a half years. Also, according to Ayatollah Ali Sistani, a highly regarded scholar among Shia Muslims, "The child should not have completed two years of his age". The same source states at least eight conditions that should apply before the child is considered a son or daughter of the feeding woman. (This is not considered to be an adoption, which is strictly proscribed by the Qur'an.) A modern Saudi jurist in 1983 upheld that if a man suckles from his wife, their marriage is nullified. The query remains a popular one into the 21st century, and has come up in Saudi advice columns. The Sunni cleric Sheikh Ezzat Atiya (عزت عطية), head of the Hadith Department of Egypt's al-Azhar University, issued a fatwa in 2007 encouraging women to breastfeed their male business colleagues so that the men would become symbolically related to the women, thereby precluding any sexual relations and the need for both sexes to observe modesty: "Breast feeding an adult puts an end to the problem of the private meeting." The fatwa was later denounced and declared defamatory to Islam.
China
A Beijing restaurant offered breast-milk-based dishes on its menu. In China, many websites routinely advertise membership in breastfeeding clubs where customers can pay for access to lactating women and suckle from their breasts.
In 2013 a domestic staff agency in China named Xinxinyu was reported to be providing wet nurses for the sick and other adults as well as for newborns. The agency's clients could choose to drink the breast milk directly from the breast or via a breast pump. The reports caused controversy in China, with one writer describing the practice as "adding to China's problem of treating women as consumer goods and the moral degradation of China's rich." The agency was forced to suspend its operations by the Chinese authorities for a number of reasons, one of which was that it had missed three years of annual checks.
Germany
In 1903, German philosopher Carl Buttenstedt published his marriage guidebook "Die Glücksehe – Die Offenbarung im Weibe, eine Naturstudie" (The Marriage of Happiness – The Revelation in the Woman, a study from nature), in which he described and recommended the lactational amenorrhea method (LAM) as a form of contraception and natural family planning that also deepens the relationship between wife and husband. He explicitly described erotic lactation as a source of great sexual pleasure for both partners, claiming that this is intended by nature especially on the part of the woman. This particular aspect of his broader general marriage philosophy gained a lot of attention and sparked wide debate. While some welcomed Buttenstedt's advice as inspirational for new ways to improve sexual satisfaction between marriage partners, others warned that this technique could "pathologically increase sexual sensation of both partners." Consequently, the book was banned by the Nazis in 1938.
Japan
The Bonyu Bar (Mother's Milk Bar), located in Tokyo's entertainment and red-light district of Kabukicho, employs nursing women who provide customers with breast milk in a glass for 2,000 yen (about 15 euros) or directly from the nipple for 5,000 yen (about 37.50 euros). In the latter case the women can run their fingers through the customers' hair, coo, and say their names as they suckle.
See also
Lactation
Mammary intercourse
Breast fetishism
Stimulation of nipples
Rada (fiqh), Islamic jurisprudence related to wetnursing, sometimes extended to adults
Sexual fetishism
Notes
Footnotes
References
Abdella Doumato, Eleanor (2000). Getting God's Ear: Women, Islam, and Healing in Saudi Arabia and the Gulf. Columbia University Press.
Boswell-Penc, Maia (2006). Tainted Milk: Breastmilk, Feminisms, And the Politics of Environmental Degradation. SUNY Press.
Budin, Pierre (1907). The Nursling: The Feeding and Hygiene of Premature and Full-term Infants. Translated by William Joseph; Marie Alois Maloney. Caxton, p. 48.
Elhadj, Elie (2006). The Islamic Shield: Arab Resistance to Democratic and Religious Reforms. Universal Publishers.
Forth, Christopher E.; Crozier, Ivan (2005). Body parts: critical explorations in corporeality. Lanham, Maryland: Lexington Books, pp. 133–136.
Harrison, Helen; Kositsky, Ann (1983). The Premature Baby Book: A Parents Guide to Coping and Caring in the First Years. St. Martin's Press, p. 158.
Imber, Colin (1997). Islamic Law. Edinburgh University Press.
Prior, Mary (1991). Women in English Society, 1500–1800. Routledge, p. 6.
Further reading
Lundell, T. Louisa, PhD (2006). The Lore and Lure of Mother's Milk. Trafford Publishing, pp. 19–24.
Oral eroticism
Paraphilias
Sexual fetishism
Sexual acts
Sexology
Breast | Erotic lactation | [
"Biology"
] | 3,677 | [
"Behavior",
"Sexual acts",
"Sexology",
"Behavioural sciences",
"Sexuality",
"Mating"
] |
7,364,432 | https://en.wikipedia.org/wiki/Industrial%20design%20rights%20in%20the%20European%20Union | Industrial design rights in the European Union are provided at both the Union level by virtue of the Community design and at the national level under individual national laws.
Eligible designs
A design is defined as "the appearance of the whole or a part of a product resulting from the features of, in particular, the lines, contours, colours, shape, texture and/or materials of the product itself and/or its ornamentation".
Designs may be protected if:
they are novel, that is if no design identical or differing only in immaterial details has been made available to the public;
they have individual character, that is the "informed user" would find the overall impression different from other designs which are available to the public. Where a design forms part of a more complex product, the novelty and individual character of the design are judged on the part of the design which is visible during normal use.
Designs are not protected insofar as their appearance is wholly determined by their technical function, or by the need to interconnect with other products to perform a technical function (the "must-fit" exception). However, modular systems such as Lego or Meccano may be protected.
Community design
Registered and unregistered Community designs are available under EU Regulation 6/2002, which provides a unitary right covering the European Union. Protection for a registered Community design is for up to 25 years, subject to the payment of renewal fees every five years. The unregistered Community design lasts for three years after a design is made available to the public, and infringement only occurs if the protected design has been copied.
National laws
National systems of registered designs remain in place alongside the system of Community designs: registration in a small number of countries is cheaper than Community registration, and may be more appropriate for smaller manufacturers. The Benelux countries (Belgium, Netherlands, Luxembourg) form a single area with respect to designs, administered by the Benelux Office for Intellectual Property.
National laws are harmonised by the Directive on the legal protection of designs: the criteria for eligibility and the duration of protection are the same as for registered Community designs. Many Member States also protect unregistered design rights under their national law, but these are not covered by the Directive.
International treaties
The protection of industrial design rights is required by the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS, Arts. 25 & 26), to which the European Union is a party. The Regulation on Community designs provides for the recognition of the priority date of an application for design right registration in a country which is either a member of the World Trade Organization or a party to the Paris Convention for the Protection of Industrial Property.
On 1 January 2008, the European Union became a party to the Geneva Act of the Hague Agreement Concerning the International Registration of Industrial Designs. This followed the proposal of the European Commission on 22 December 2005.
Spare parts used for repair
The protection of "component parts of complex products", in particular spare parts for cars, was left to Member States' discretion in Directive 98/71/EC, given the divergence of practices and opinions. As required by that Directive, the European Commission has conducted research on the question, which found that spare parts such as wings and bumpers were 6.4–10.3% more expensive in countries where these parts were protected by industrial design rights compared with countries where no such protection existed: it has proposed that the design right protection on these parts be abolished throughout the European Union.
See also
European Union trade mark
World Intellectual Property Organization
Japanese design law
References
Parties to the Hague Agreement Concerning the International Registration of Industrial Designs.
Proposal for a Council Decision approving the accession of the European Community to the Geneva Act of the Hague Agreement concerning the international registration of industrial designs, adopted in Geneva on 2 July 1999
Commission proposal for an amendment to the Directive 98/71/EC
External links
Designs page of the European Commission Directorate-General for the Internal Market
Office for Harmonisation in the Internal Market
www.handsoffmydesign.com e-Learning about European Union Design Protection
Agreement on Trade-Related Aspects of Intellectual Property Rights
Paris Convention for the Protection of Industrial Property
Geneva Act of the Hague Agreement Concerning the International Registration of Industrial Designs
Design Law wiki
Industrial design | Industrial design rights in the European Union | [
"Engineering"
] | 859 | [
"Industrial design",
"Design engineering",
"Design"
] |
7,364,640 | https://en.wikipedia.org/wiki/Bright%20Sparklers%20fireworks%20disaster | The Bright Sparklers fireworks disaster occurred in Sungai Buloh, Selangor, Malaysia on 7 May 1991 at 3:45 (MST). The Bright Sparklers fireworks factory in Sungai Buloh, Selangor caught fire and caused a huge explosion. Twenty six people were killed and over a hundred people were injured in the disaster. The explosion was strong enough to rip off the roofs of some local houses, and ended up damaging over 200 residential properties and was felt as far as 7-8 kilometers from the side.
Cause
The tragedy is believed to have been caused by explosive chemicals spilled during an experiment in the canteen of the factory. The chemicals touched off fires that rapidly spread to a nearby pile of large firecrackers known as "bazookas". These in turn set off the chain of explosions that ripped apart the factory and the nearby buildings, including those of Kampung Baru Sungai Buloh.
Victims
Twenty-six people were killed and 103 were injured. Victims were taken to Kuala Lumpur Hospital for further treatment.
Memorial
A small memorial in the design of a Chinese pavilion was erected at the site in 1998. Underneath it are three memorial stones, each written in Malay, Chinese and Tamil.
The site is near the Kampung Selamat MRT station.
In popular culture
TV3's documentary programme, Detik Tragik (Tragic Moments) produced an episode about the fireworks disaster.
See also
1991 Culemborg, Netherlands fireworks disaster – occurred around 3–4 months before the Bright Sparklers disaster
2000 Enschede, Netherlands fireworks disaster, a similar incident
2004 Kolding, Denmark fireworks disaster, a similar incident
2015 Tianjin explosions
2017 Tangerang, Indonesia fireworks disaster, happened in the neighbouring country of Indonesia
2020 Port of Beirut disaster
List of fires
List of industrial disasters
References
1991 in Malaysia
Explosions in 1991
Fireworks accidents and incidents
Industrial fires and explosions
1990s fires in Asia
1991 fires
1991 industrial disasters
May 1991 events in Asia
Explosions in Malaysia
1991 disasters in Malaysia | Bright Sparklers fireworks disaster | [
"Chemistry"
] | 403 | [
"Industrial fires and explosions",
"Explosions"
] |
7,364,643 | https://en.wikipedia.org/wiki/I.D.%20%28magazine%29 | I.D. (The International Design Magazine) was a magazine covering the art, business, and culture of design. It was published eight times a year by F+W Media.
History
I.D. was founded in 1954 as Industrial Design. The name was later abbreviated to an initialism; in the 1980s, the initials came to stand for International Design to reflect the magazine's broadened scope.
From 1954, the magazine published the Annual Design Review, a juried design competition curated by I.D. staff and industry practitioners.
I.D. won five National Magazine Awards: three for General Excellence (1995, 1997, 1999), one for Design (1997), and one for Special Interests (2000).
The last issue of I.D. was published in January/February 2010.
In June 2011, I.D. magazine was re-launched online in partnership with Behance. The new I.D. magazine featured user-submitted designs that were curated to offer examples of innovative work happening today.
By March 2016, the magazine's website had been shut down.
References
External links
Architecture magazines
Defunct magazines published in the United States
Design magazines
Eight times annually magazines published in the United States
Industrial design
Magazines established in 1954
Magazines disestablished in 2010
Online magazines with defunct print editions
Visual arts magazines published in the United States | I.D. (magazine) | [
"Engineering"
] | 273 | [
"Industrial design",
"Design magazines",
"Design",
"Design engineering"
] |
7,364,669 | https://en.wikipedia.org/wiki/Innexin | Innexins are transmembrane proteins that form gap junctions in invertebrates. Gap junctions are composed of membrane proteins that form a channel permeable to ions and small molecules connecting the cytoplasm of adjacent cells. Although gap junctions provide similar functions in all multicellular organisms, it was not known what proteins invertebrates used for this purpose until the late 1990s. While the connexin family of gap junction proteins was well-characterized in vertebrates, no homologues were found in non-chordates.
Innexins or related proteins are widespread among Eumetazoa, with the exception of echinoderms.
Discovery
Gap junction proteins with no sequence homology to connexins were initially identified in fruit flies. It was suggested that these proteins were gap junction proteins specific to invertebrates, and they were thus named "innexins" (invertebrate analogs of connexins). They were later identified in diverse invertebrates, and invertebrate genomes may contain more than a dozen innexin genes. Once the human genome was sequenced, innexin homologues were identified in humans and then in other vertebrates, indicating their ubiquitous distribution in the animal kingdom. These homologues were called "pannexins" (from the Greek pan - all, throughout, and Latin nexus - connection, bond). However, increasing evidence suggests that pannexins do not form gap junctions unless overexpressed in tissue and thus differ functionally from innexins.
Structure
Innexins have four transmembrane segments (TMSs) and, like the vertebrate connexin gap junction protein, innexin subunits together form a channel (an "innexon") in the plasma membrane of the cell. Two innexons in apposed plasma membranes can form a gap junction. Innexons are made from eight subunits, instead of the six subunits of connexons. Structurally, innexins and connexins are very similar, consisting of 4 transmembrane domains, 2 extracellular and 1 intracellular loop, along with intracellular N- and C-terminal tails. Despite this shared topology, the protein families do not share enough sequence similarity to confidently infer common ancestry.
Pannexins are similar to innexins and are usually considered a sub-group, but they do not participate in the formation of gap junctions and the channels have seven subunits.
Vinnexins, viral homologues of innexins, were identified in polydnaviruses that occur in obligate symbiotic associations with parasitoid wasps. It was suggested that vinnexins may function to alter gap junction proteins in infected host cells, possibly modifying cell-cell communication during encapsulation responses in parasitized insects.
Function
Innexins form the gap junctions found in invertebrates. They also form non-junctional membrane channels with properties similar to those of pannexons. N-terminally elongated innexins can act as a plug to control hemichannel closure, providing a mechanism that connects the effect of hemichannel closure directly to apoptotic signal transduction from the intracellular to the extracellular compartment.
The vertebrate homologs, the pannexins, do not form gap junctions; they only form the hemichannel "pannexons". These hemichannels can be present in plasma, ER and Golgi membranes. They transport Ca2+, ATP, inositol trisphosphate and other small molecules, and can form hemichannels with greater ease than connexin subunits.
Transport reaction
The transport reaction catalyzed by innexin gap junctions is:
Small molecules (cell 1 cytoplasm) ⇌ small molecules (cell 2 cytoplasm)
Or for hemichannels:
Small molecules (cell cytoplasm) ⇌ small molecules (out)
Examples
Caenorhabditis elegans
unc-7
unc-9
inx-3
Drosophila melanogaster
Inx2
Inx3
Inx4 (zero population growth, zpg)
Ogre
shaking-B
Hirudo medicinalis
Hm-inx1
Hm-inx2
Hm-inx3
Hm-inx6
See also
connexin
pannexin
References
Further reading
External links
Description at wustl.edu
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | Innexin | [
"Biology"
] | 925 | [
"Protein families",
"Protein classification",
"Membrane proteins"
] |
7,364,702 | https://en.wikipedia.org/wiki/DMC%20Mining%20Services | DMC Mining Services is a mining services contractor, otherwise known as a mining contractor. Its headquarters are in Vaughan, Ontario, and Salt Lake City, Utah.
The company began in 1980 as Dynatec Mining Limited. DMC Mining Services had three ownership assets in its portfolio: Ambatovy, the FNX interest, and coal-bed methane. Dynatec Mining Limited began as a privately held mining services company that specialized in shaft sinking, mine construction, lateral development for hard rock and soft rock mines, and raise development. The company's principal founders were W. Robert Dengler, William M. Shaver, and Fred Edwards. Dynatec grew to become one of the major Canadian mine contracting firms providing services to the mining industry in Canada and abroad, and was employed by many of the major mining companies.
Dynatec Mining Limited was acquired by the metallurgical services division of Sherritt International and spun out as a publicly traded company, Dynatec Corporation, on the TSX, as a dividend in kind to Sherritt shareholders. The company's focus shifted to equity ownership of mining properties through the late 1990s, until it was acquired in a friendly takeover by Sherritt during 2007.
In October 2007 Sherritt sold the mining services division of the former Dynatec Corporation to FNX Mining.
References
External links
DMC Mining Services
Mining companies of Canada
Companies based in Vaughan
Mining engineering companies | DMC Mining Services | [
"Engineering"
] | 291 | [
"Mining engineering",
"Engineering companies",
"Mining engineering companies"
] |
7,365,070 | https://en.wikipedia.org/wiki/Digital%20edition | A digital edition is an online magazine or online newspaper delivered in electronic form which is formatted identically to the print version. Digital editions are often called digital facsimiles to underline the likeness to the print version. Digital editions have the benefit of reduced cost to the publisher and reader by avoiding the time and the expense to print and deliver paper edition. This format is considered more environmentally friendly due to the reduction of paper and energy use. These editions also often feature interactive elements such as hyperlinks both within the publication itself and to other internet resources, search option and bookmarking, and can also incorporate multimedia such as video or animation to enhance articles themselves or for advertisement purposes. Some delivery methods also include animation and sound effects that replicate turning of the page to further enhance the experience of their print counterparts. Magazine publishers have traditionally relied on two revenue sources: selling ads and selling magazines. Additionally some publishers are using other electronic publication methods such as RSS to reach out to readers and inform them when new digital editions are available.
Current technologies are generally either reader-based, requiring the download of an application (such as Adobe Acrobat) and the subsequent download of each edition, or browser-based, often using Macromedia Flash and requiring no application download. Some application-based readers allow users to access editions while not connected to the internet. Dedicated hardware such as the Amazon Kindle and the iPad is also available for reading digital editions of select books, popular national magazines such as Time, The Atlantic, and Forbes, and popular national newspapers such as the New York Times, Wall Street Journal, and Washington Post.
Archives of print newspapers, in some cases dating hundreds of years back, are being digitized and made available online. Google is indexing existing digital archives produced by the newspapers themselves or by third parties.
Newspaper and magazine archival began with microform film formats, which solved the problem of efficiently storing and preserving content but lacked accessibility. Many libraries, especially state libraries in the United States, are archiving their collections digitally and converting existing microfilm to digital format. The Library of Congress provides project planning assistance, and the National Endowment for the Humanities provides funding through grants from its National Digital Newspaper Program.
Digital magazines, ezines, e-editions and emags are sometimes referred to as digital editions, however some of these formats are published only in digital format unlike digital editions which replicate a printed edition as well.
Digital magazines
Digital-replica magazines number in thousands—consumer and business publications, house magazines for associations, institutions and corporations – and conversion from print to digital was still increasing as of 2009.
A 2008 report funded by digital-replica technology providers and auditing agencies counted 1,786 digital-replica editions, with more than 7 million circulation, among business-to-business publications, of which 230 editions were audited. The same report counted 1,470 digital-replica editions of consumer magazines having 5.5 million digital circulation, of which 240 editions were audited. The report's authors estimated that by the end of 2009 there would be 8,000 digital magazines, with a combined distribution of more than 30 million people.
Surveys have shown that, while not all subscribers prefer a digital edition, some do because of the environmental benefit and also because digital magazines are searchable and may easily be passed along or linked to. One such survey funded by a digital publisher reported on inputs from more than 30,000 subscribers to business, consumer and other digital magazines.
Digital magazine business models
Reduced printing and distribution costs
The publishers' choice to save money by moving some or all subscribers from print to digital is widely accepted. Oracle magazine, which has 176,000 of its 516,000 subscribers receiving the digital edition according to its June 2009 BPA circulation statement, is said to be the most widely circulated digital edition of a business-to-business publication. Publishers who take this route need to choose whether to make some issues all-digital, move some subscribers to the digital edition, add some digital-only subscribers, or send all subscribers the digital edition.
Paid subscription revenue
In 2009, a major consumer magazine, PC Magazine, went all-digital, charging an annual subscription fee for its digital-replica edition.
Many consumer magazines and newspapers are already available in eReader formats that are sold through booksellers.
Sponsorship and advertising revenue
Digital editions often carry special "front cover" advertising, or advertising on the email message alerting the subscriber of the digital edition. Publishers also produce special digital-only inserts and rich-media ads or advertorials.
Designed-for-digital issues
Another approach is to fully replace printed issues with digital ones, or to use digital editions for extra issues that would otherwise have to be printed.
Notes
Publishing
Digital media
Online magazines
Zines
Electronic publishing | Digital edition | [
"Technology"
] | 954 | [
"Multimedia",
"Digital media"
] |
7,365,406 | https://en.wikipedia.org/wiki/Tell%20Hammeh | Tell Hammeh () is a relatively small tell in the central Jordan Valley, Hashemite Kingdom of Jordan, located where the Zarqa River valley opens into the Jordan Valley.
It is the site of some of the earliest bloomery smelting of iron, from around 930 BC.
It is close to several of the larger tells in this part of the Jordan Valley (e.g. Tell Deir 'Alla, Tell al-Sa'idiyeh) as well as to the natural resources desirable in metal production: access to water, outcrops of marly clays (see Veldhuijzen 2005b, 297), and above all the only iron ore deposit of the wider region at Mugharet al-Warda.
Excavation
The excavations at Hammeh are part of the Deir 'Alla Regional Project, a joint undertaking of Yarmouk University in Irbid, Jordan, and Leiden University in the Netherlands, in collaboration with the Jordanian Department of Antiquities.
The site's most intriguing feature is the presence of a substantial and very early iron smelting operation, as evidenced by large quantities of slag, technical ceramics, furnace remnants, etc. This activity dates to 930 BC.
Fieldwork at Tell Hammeh took place in 1996, 1997, and 2000. The first two (rescue) seasons were directed by Dr E.J. van der Steen; the third season was directed by Dr H.A. Veldhuijzen. A fourth season, planned in 2003, had to be abandoned due to the invasion of Iraq. As with the third season, the focus of new excavation would primarily be on the iron smelting evidence. A new excavation was to start in May 2009.
Research
Extensive research has been carried out on the metallurgical material from Tell Hammeh. Both excavation and archaeometric analyses were carried out by Dr H.A. Veldhuijzen, first at Leiden University, then since 2001 at the UCL Institute of Archaeology, as a part of the joint excavations conducted by Yarmouk University and Leiden University and co-directed by Prof. Dr. Zeidan Kafafi and Dr. Gerrit Van der Kooij.
Chronology and iron smelting activities
Several periods are attested at Hammeh. From bedrock upward, remains of Chalcolithic (ca. 4500-3000 BC) and Early Bronze Age (ca. 3000-2000 BC) occupation were found, followed by more substantial layers of Late Bronze Age (ca. 1600-1150 BC) material. Hammeh appears continuously settled through the Late Bronze Age and Iron Age I (ca. 1150-1000 BC), up to the moment when iron production started in the early Iron Age II (see van der Steen 2004).
At that point in time, domestic structures, at least in the excavated areas, cease to exist, and are covered, without a clear interruption, by a stratigraphically well defined phase of iron production. This phase has a complex internal layering, likely reflecting seasonal activity over an extended period of time (Veldhuijzen 2005a).
This phase consists of large quantities of various types of slag, most belonging to a bloomery iron smelting operation, and a fraction to primary smithing (i.e. bloom-smithing or bloom consolidation).
Very soon or immediately after iron production ceased, habitation of the site resumed. This later Iron Age II phase seems to form the last extensive occupation of Tell Hammeh. Based on examination of the extensive pottery finds from this post-smelting phase, it can be assumed that the iron production activities must have ended no later than 750 BC. No settlement structures contemporary to the iron smelting phase are presently known from Tell Hammeh.
See also
Hama (disambiguation)
References
External links
Information on Hammeh and iron smelting
Archaeological sites in Jordan
History of metallurgy
Prehistoric mines
Mines in Jordan | Tell Hammeh | [
"Chemistry",
"Materials_science"
] | 820 | [
"Metallurgy",
"History of metallurgy"
] |
7,365,444 | https://en.wikipedia.org/wiki/Ryanodine | Ryanodine is a poisonous diterpenoid found in the South American plant Ryania speciosa (Salicaceae). It was originally used as an insecticide.
The compound has extremely high affinity to the open-form ryanodine receptor, a group of calcium channels found in skeletal muscle, smooth muscle, and heart muscle cells. It binds with such high affinity to the receptor that it was used as a label for the first purification of that class of ion channels and gave its name to it.
At nanomolar concentrations, ryanodine locks the receptor in a half-open state, whereas it fully closes the receptor at micromolar concentrations. The effect of the nanomolar-level binding is that ryanodine causes the release of calcium from calcium stores such as the sarcoplasmic reticulum in the cytoplasm, leading to massive muscle contractions. The effect of micromolar-level binding is paralysis. This is true for both mammals and insects.
See also
Diamide insecticides, a class of insecticides with the same mechanism of action as ryanodine
Ryanodine receptor
Dihydropyridine channel
References
Further reading
Bertil Hille, Ionic Channels of Excitable Membranes, 2nd edition, Sinauer Associates, Sunderland, MA.
Insecticides
Pyrroles
Carboxylate esters
Alcohols
Cyclopentanes
Diterpene alkaloids
Isopropyl compounds
Plant toxins | Ryanodine | [
"Chemistry"
] | 300 | [
"Chemical ecology",
"Plant toxins"
] |
7,365,510 | https://en.wikipedia.org/wiki/Mindnet | MindNet is the name of several automatically acquired databases of lexico-semantic relations developed by members of the Natural Language Processing Group at Microsoft Research during the 1990s. It is considered one of the world's largest lexicons and databases that could make automatic semantic descriptions along with WordNet, FrameNet, HowNet, and Integrated Linguistic Database. It is particularly distinguished from WordNet by the way it was created automatically from a dictionary.
MindNet was designed to be continuously extended. It was first built out of the Longman Dictionary of Contemporary English (LDOCE) and later included American Heritage and the full text of Microsoft Encarta. The system can analyze linguistic representations of arbitrary text. The underlying technology is based on the same parser used in the Microsoft Word grammar checker and was deployed in the natural language query engine in Microsoft's Encarta 99 encyclopedia.
References
Lexical databases | Mindnet | [
"Technology"
] | 182 | [
"Computing stubs"
] |
7,365,812 | https://en.wikipedia.org/wiki/Babcock-Hart%20Award | The Babcock-Hart Award has been awarded since 1948 by the Institute of Food Technologists. It is given for significant contributions in food technology that resulted in public health through some aspects of nutrition. It was first named the Stephan M. Babcock Award after the agricultural chemist Stephen M. Babcock of the University of Wisconsin–Madison for his "single-grain experiment" of 1907–1911, but renamed the Babcock-Hart Award following the death of Babcock's colleague Edwin B. Hart in 1953.
Award winners receive a plaque from the International Life Sciences Institute-North America, headquartered in Washington, DC, and a USD 3,000 honorarium.
Winners
References
List of International Life Science Institute Awards, including Babcock-Hart
Food technology awards | Babcock-Hart Award | [
"Technology"
] | 156 | [
"Science and technology awards",
"Food technology awards"
] |
7,366,298 | https://en.wikipedia.org/wiki/Solid-state%20drive | A solid-state drive (SSD) is a type of solid-state storage device that uses integrated circuits to store data persistently. It is sometimes called semiconductor storage device, solid-state device, or solid-state disk.
SSDs rely on non-volatile memory, typically NAND flash, to store data in memory cells. The performance and endurance of SSDs vary depending on the number of bits stored per cell, ranging from high-performing single-level cells (SLC) to more affordable but slower quad-level cells (QLC). In addition to flash-based SSDs, other technologies such as 3D XPoint offer faster speeds and higher endurance through different data storage mechanisms.
Unlike traditional hard disk drives (HDDs), SSDs have no moving parts, allowing them to deliver faster data access speeds, reduced latency, increased resistance to physical shock, lower power consumption, and silent operation.
Often interfaced to a system in the same way as HDDs, SSDs are used in a variety of devices, including personal computers, enterprise servers, and mobile devices. However, SSDs are generally more expensive on a per-gigabyte basis and have a finite number of write cycles, which can lead to data loss over time. Despite these limitations, SSDs are increasingly replacing HDDs, especially in performance-critical applications and as primary storage in many consumer devices.
SSDs come in various form factors and interface types, including SATA, PCIe, and NVMe, each offering different levels of performance. Hybrid storage solutions, such as solid-state hybrid drives (SSHDs), combine SSD and HDD technologies to offer improved performance at a lower cost than pure SSDs.
Attributes
An SSD stores data in semiconductor cells, with its properties varying according to the number of bits stored in each cell (between 1 and 4). Single-level cells (SLC) store one bit of data per cell and provide higher performance and endurance. In contrast, multi-level cells (MLC), triple-level cells (TLC), and quad-level cells (QLC) store more data per cell but have lower performance and endurance. SSDs using 3D XPoint technology, such as Intel's Optane, store data by changing electrical resistance instead of storing electrical charges in cells, which can provide faster speeds and longer data persistence compared to conventional flash memory. SSDs based on NAND flash slowly leak charge when not powered, and heavily used consumer drives may start losing data typically after one to two years in storage. SSDs have a limited lifetime number of writes, and also slow down as they reach their full storage capacity.
SSDs also have internal parallelism that allows them to manage multiple operations simultaneously, which enhances their performance.
Unlike HDDs and similar electromechanical magnetic storage, SSDs do not have moving mechanical parts, which provides advantages such as resistance to physical shock, quieter operation, and faster access times. Their lower latency results in higher input/output rates (IOPS) than HDDs.
Some SSDs are combined with traditional hard drives in hybrid configurations, such as Intel's Hystor and Apple's Fusion Drive. These drives use both flash memory and spinning magnetic disks to improve the performance of frequently accessed data.
Traditional interfaces (e.g. SATA and SAS) and standard HDD form factors allow such SSDs to be used as drop-in replacements for HDDs in computers and other devices. Newer form factors such as mSATA, M.2, U.2, NF1/M.3/NGSFF, XFM Express (Crossover Flash Memory, form factor XT2) and EDSFF, and higher-speed interfaces such as NVM Express (NVMe) over PCI Express (PCIe), can further increase performance over HDDs.
Comparison with other technologies
Hard disk drives
Traditional HDD benchmarks tend to focus on performance characteristics such as rotational latency and seek time. As SSDs do not need to spin or seek to locate data, they are vastly superior to HDDs in such tests. However, SSDs have challenges with mixed reads and writes, and their performance may degrade over time. Therefore, SSD testing typically looks at the drive once it is full, as a new and empty drive may show much better write performance than it would after only weeks of use.
The reliability of both HDDs and SSDs varies greatly among models. Some field failure rates indicate that SSDs are significantly more reliable than HDDs. However, SSDs are sensitive to sudden power interruption, sometimes resulting in aborted writes or even cases of the complete loss of the drive.
Most of the advantages of solid-state drives over traditional hard drives are due to their ability to access data completely electronically instead of electromechanically, resulting in superior transfer speeds and mechanical ruggedness. On the other hand, hard disk drives offer significantly higher capacity for their price.
In traditional HDDs, a rewritten file will generally occupy the same location on the disk surface as the original file, whereas in SSDs the new copy will often be written to different NAND cells for the purpose of wear leveling. The wear-leveling algorithms are complex and difficult to test exhaustively. As a result, one major cause of data loss in SSDs is firmware bugs.
Memory cards
While both memory cards and most SSDs use flash memory, they have very different characteristics, including power consumption, performance, size, and reliability. Originally, solid state drives were shaped and mounted in the computer like hard drives. In contrast, memory cards (such as Secure Digital (SD), CompactFlash (CF), and many others) were originally designed for digital cameras and later found their way into cell phones, gaming devices, GPS units, etc. Most memory cards are physically smaller than SSDs, and designed to be inserted and removed repeatedly.
Failure and recovery
SSDs have different failure modes from traditional magnetic hard drives. Because solid-state drives contain no moving parts, they are generally not subject to mechanical failures. However, other types of failures can occur. For example, incomplete or failed writes due to sudden power loss may be more problematic than with HDDs, and the failure of a single chip may result in the loss of all data stored on it. Nonetheless, studies indicate that SSDs are generally reliable, often exceeding their manufacturer-stated lifespan and having lower failure rates than HDDs. However, studies also note that SSDs experience higher rates of uncorrectable errors, which can lead to data loss, compared to HDDs.
The endurance of an SSD is typically listed on its datasheet in one of two forms:
either n DW/D (n drive writes per day)
or m TBW (maximum terabytes written).
For example, a Samsung 970 EVO NVMe M.2 SSD (2018) with 1 TB of capacity has an endurance rating of 600 TBW.
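The two ratings are interconvertible given the drive's capacity and warranty period. The following minimal Python sketch illustrates the arithmetic; the 1 TB capacity and five-year warranty are assumptions for the example, not values taken from any datasheet:

# Convert between the two SSD endurance ratings.
# Assumed figures for illustration: a 1 TB drive with a 5-year warranty.
CAPACITY_TB = 1.0
WARRANTY_YEARS = 5

def dwpd_to_tbw(dwpd, capacity_tb=CAPACITY_TB, years=WARRANTY_YEARS):
    """Drive writes per day -> total terabytes written over the warranty."""
    return dwpd * capacity_tb * 365 * years

def tbw_to_dwpd(tbw, capacity_tb=CAPACITY_TB, years=WARRANTY_YEARS):
    """Total terabytes written -> equivalent drive writes per day."""
    return tbw / (capacity_tb * 365 * years)

print(round(tbw_to_dwpd(600), 2))  # 600 TBW on a 1 TB drive: ~0.33 DWPD
print(dwpd_to_tbw(1))              # 1 DWPD over 5 years: 1825 TBW

Under these assumptions the 600 TBW figure quoted above works out to roughly a third of a full drive write per day, a level typical of consumer-class drives.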
Recovering data from SSDs presents challenges due to the non-linear and complex nature of data storage in solid-state drives. The internal operations of SSDs vary by manufacturer, and commands (e.g. TRIM and ATA Secure Erase) and programs (e.g. hdparm) are able to erase and modify the bits of a deleted file.
Reliability metrics
The JEDEC Solid State Technology Association (JEDEC) has established standards for SSD reliability metrics, which include:
Unrecoverable Bit Error Ratio (UBER)
Terabytes Written (TBW) – the total number of terabytes that can be written to a drive within its warranty period
Drive Writes Per Day (DWPD) – the number of times the full capacity of the drive can be written to per day within its warranty period
Applications
In a distributed computing environment, SSDs can be used as a distributed cache layer that temporarily absorbs the large volume of user requests to slower HDD-based backend storage systems. This layer provides much higher bandwidth and lower latency than the storage system would, and can be managed in a number of forms, such as a distributed key-value database and a distributed file system. On supercomputers, this layer is typically referred to as burst buffer.
Flash-based solid-state drives can be used to create network appliances from general-purpose personal computer hardware. A write protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware.
SSDs based on an SD card with a live SD operating system are easily write-locked. Combined with a cloud computing environment or other writable medium, an OS booted from a write-locked SD card is reliable, persistent and impervious to permanent corruption.
Hard-drive cache
In 2011, Intel introduced a caching mechanism for their Z68 chipset (and mobile derivatives) called Smart Response Technology, which allows a SATA SSD to be used as a cache (configurable as write-through or write-back) for a conventional, magnetic hard disk drive. A similar technology is available on HighPoint's RocketHybrid PCIe card.
Solid-state hybrid drives (SSHDs) are based on the same principle, but integrate some amount of flash memory on board of a conventional drive instead of using a separate SSD. The flash layer in these drives can be accessed independently from the magnetic storage by the host using ATA-8 commands, allowing the operating system to manage it. For example, Microsoft's ReadyDrive technology explicitly stores portions of the hibernation file in the cache of these drives when the system hibernates, making the subsequent resume faster.
Dual-drive hybrid systems combine the use of separate SSD and HDD devices installed in the same computer, with overall performance optimization managed by the computer user or by the computer's operating system software. Examples of this type of system are bcache and dm-cache on Linux, and Apple's Fusion Drive.
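The practical difference between the write-through and write-back cache modes mentioned above can be illustrated with a simplified model. This is a sketch only, not Intel's or Apple's actual implementation; the dictionaries stand in for real block devices:

# Simplified model of SSD-cache policies for a hybrid SSD/HDD setup.
# write-through: every write also goes to the HDD immediately, so a
#   cache failure loses nothing, at the cost of HDD-speed writes.
# write-back: writes land only in the SSD cache and are flushed later;
#   faster, but dirty data is at risk until flushed.
class HybridCache:
    def __init__(self, ssd, hdd, write_back=False):
        self.ssd, self.hdd = ssd, hdd   # dicts standing in for devices
        self.write_back = write_back
        self.dirty = set()              # blocks not yet on the HDD

    def write(self, block, data):
        self.ssd[block] = data          # the cache always gets new data
        if self.write_back:
            self.dirty.add(block)       # defer the slow HDD write
        else:
            self.hdd[block] = data      # pay the HDD cost immediately

    def read(self, block):
        if block in self.ssd:           # cache hit: fast path
            return self.ssd[block]
        data = self.hdd[block]          # miss: fetch and populate cache
        self.ssd[block] = data
        return data

    def flush(self):
        for block in self.dirty:        # sync dirty blocks to the HDD
            self.hdd[block] = self.ssd[block]
        self.dirty.clear()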
Architecture and function
The primary components of an SSD are the controller and the memory used to store data. Early SSDs typically used volatile DRAM for storage, but since 2009, most SSDs have used non-volatile NAND flash memory, which retains data even when powered off. Flash memory SSDs store data in metal–oxide–semiconductor (MOS) integrated circuit chips, using non-volatile floating-gate memory cells.
Controller
Every SSD includes a controller, which manages the data flow between the NAND memory and the host computer. The controller is an embedded processor that runs firmware to optimize performance, manage data, and ensure data integrity.
Some of the primary functions performed by the controller are:
Bad block mapping
Read and write caching
Encryption
Crypto-shredding
Error detection and correction using error-correcting code (ECC), such as BCH code
Garbage collection
Read scrubbing and management of read disturb
Wear leveling
The overall performance of an SSD can scale with the number of parallel NAND chips and the efficiency of the controller. For example, controllers that enable parallel processing of NAND flash chips can improve bandwidth and reduce latency.
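One common way a controller exploits this parallelism is to stripe consecutive logical block addresses across its NAND channels, so that a large sequential transfer keeps every chip busy at once. A minimal sketch follows; the four-channel count is an assumption for illustration, not a property of any particular controller:

# Stripe logical block addresses (LBAs) across parallel NAND channels.
N_CHANNELS = 4  # assumed channel count for illustration

def channel_for(lba, n_channels=N_CHANNELS):
    """Return the channel that services a given logical block address."""
    return lba % n_channels

# A sequential 8-block transfer touches every channel twice,
# so all four channels can work concurrently:
for lba in range(8):
    print(f"LBA {lba} -> channel {channel_for(lba)}")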
Micron and Intel pioneered faster SSDs by implementing techniques such as data striping and interleaving to enhance read/write speeds. More recently, SandForce introduced controllers that incorporate data compression to reduce the amount of data written to the flash memory, potentially increasing both performance and endurance.
Wear leveling
Wear leveling is a technique used in SSDs to ensure that write and erase operations are distributed evenly across all blocks of the flash memory. Without this, specific blocks could wear out prematurely due to repeated use, reducing the overall lifespan of the SSD. The process moves data that is infrequently changed (cold data) from heavily used blocks, so that data that changes more frequently (hot data) can be written to those blocks. This helps distribute wear more evenly across the entire SSD. However, this process introduces additional writes, known as write amplification, which must be managed to balance performance and durability.
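A greatly simplified sketch of the allocation decision at the heart of wear leveling follows. Real firmware also weighs hot/cold data placement, garbage collection, and the resulting write amplification, but the core idea is to always program the least-erased free block:

# Minimal wear-leveling allocator: always pick the least-worn free block.
class WearLeveler:
    def __init__(self, n_blocks):
        self.erase_count = [0] * n_blocks  # erases seen by each block
        self.free = set(range(n_blocks))   # blocks available for writing

    def allocate(self):
        # The free block with the fewest erases receives the next write.
        block = min(self.free, key=lambda b: self.erase_count[b])
        self.free.remove(block)
        return block

    def erase(self, block):
        # Erasing a block wears it and returns it to the free pool.
        self.erase_count[block] += 1
        self.free.add(block)

wl = WearLeveler(n_blocks=8)
b = wl.allocate()       # some zero-erase block on a fresh device
wl.erase(b)             # its erase count is now 1
print(wl.allocate())    # a different block with zero erases is chosen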
Memory
Flash memory
Most SSDs use non-volatile NAND flash memory for data storage, primarily due to its cost-effectiveness and ability to retain data without a constant power supply. NAND flash-based SSDs store data in semiconductor cells, with the specific architecture influencing performance, endurance, and cost.
There are various types of NAND flash memory, categorized by the number of bits stored in each cell:
Single-Level Cell (SLC): Stores 1 bit per cell. SLC provides the highest performance, reliability, and endurance but is more expensive.
Multi-Level Cell (MLC): Stores 2 bits per cell. MLC offers a balance between cost, performance, and endurance.
Triple-Level Cell (TLC): Stores 3 bits per cell. TLC is less expensive but slower and less durable than SLC and MLC.
Quad-Level Cell (QLC): Stores 4 bits per cell. QLC is the most affordable option but has the lowest performance and endurance.
Over time, SSD controllers have improved the efficiency of NAND flash, incorporating techniques such as interleaved memory, advanced error correction, and wear leveling to optimize performance and extend the lifespan of the drive. Lower-end SSDs often use QLC or TLC memory, while higher-end drives for enterprise or performance-critical applications may use MLC or SLC.
In addition to the flat (planar) NAND structure, many SSDs now use 3D NAND (or V-NAND), where memory cells are stacked vertically, increasing storage density while improving performance and reducing costs.
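The trade-off among these cell types follows from a single relationship: a cell storing n bits must reliably distinguish 2^n distinct charge levels, so each added bit doubles capacity per cell while shrinking the margin between adjacent levels. A quick illustration:

# An n-bit flash cell must distinguish 2**n charge levels; the
# narrowing margins are why endurance and speed fall from SLC to QLC.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s) per cell, {2 ** bits} charge levels")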
DRAM and DIMM
Some SSDs use volatile DRAM instead of NAND flash, offering very high-speed data access but requiring a constant power supply to retain data. Such devices, including NVDIMMs, are typically equipped with backup power sources such as internal batteries or external AC/DC adapters. These power sources ensure data is transferred to a backup system (usually NAND flash or another storage medium) in the event of power loss, preventing data corruption or loss. Similarly, ULLtraDIMM devices use components designed for DIMM modules but contain only flash memory, behaving much like a DRAM SSD.
DRAM-based SSDs are often used for tasks where data must be accessed at high speeds with low latency, such as in high-performance computing or certain server environments.
3D XPoint
3D XPoint is a type of non-volatile memory technology developed by Intel and Micron, announced in 2015. It operates by changing the electrical resistance of materials in its cells, offering much faster access times than NAND flash. 3D XPoint-based SSDs, such as Intel’s Optane drives, provide lower latency and higher endurance than NAND-based drives, although they are more expensive per gigabyte.
Other
Drives known as hybrid drives or solid-state hybrid drives (SSHDs) use a hybrid of spinning disks and flash memory. Some SSDs use magnetoresistive random-access memory (MRAM) for storing data.
Cache and buffer
Many flash-based SSDs include a small amount of volatile DRAM as a cache, similar to the buffers in hard disk drives. This cache can temporarily hold data while it is being written to the flash memory, and it also stores metadata such as the mapping of logical blocks to physical locations on the SSD.
Some SSD controllers, like those from SandForce, achieve high performance without using an external DRAM cache. These designs rely on other mechanisms, such as on-chip SRAM, to manage data and minimize power consumption.
Additionally, some SSDs use an SLC cache mechanism to temporarily store data in single-level cell (SLC) mode, even on multi-level cell (MLC) or triple-level cell (TLC) SSDs. This improves write performance by allowing data to be written to faster SLC storage before being moved to slower, higher-capacity MLC or TLC storage.
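The effect of an SLC cache on sustained writes can be sketched with a toy model; all sizes and speeds below are made-up numbers for illustration, not measurements of any drive:

# Toy model of an SLC write cache on a TLC/QLC drive: writes are fast
# until the cache fills, then fall back to the slower native speed.
SLC_CACHE_GB = 40        # assumed cache size
SLC_SPEED_MBS = 2000     # assumed write speed inside the cache
NATIVE_SPEED_MBS = 500   # assumed direct-to-TLC write speed

def write_time_seconds(total_gb):
    """Seconds to absorb a sustained sequential write of total_gb."""
    cached = min(total_gb, SLC_CACHE_GB)
    overflow = total_gb - cached
    return cached * 1000 / SLC_SPEED_MBS + overflow * 1000 / NATIVE_SPEED_MBS

print(write_time_seconds(10))   # fits in the cache: 5.0 seconds
print(write_time_seconds(100))  # overflows: 20 s cached + 120 s native

In this model a 100 GB write takes 140 seconds rather than the 50 seconds the cached speed alone would suggest, which is why sustained-write benchmarks often show a sharp mid-transfer drop on TLC and QLC drives.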
On NVMe SSDs, Host Memory Buffer (HMB) technology allows the SSD to use a portion of the system’s DRAM instead of relying on a built-in DRAM cache, reducing costs while maintaining a high level of performance.
In certain high-end consumer and enterprise SSDs, larger amounts of DRAM are included to cache both file table mappings and written data, reducing write amplification and enhancing overall performance.
Battery and supercapacitor
Higher-performing SSDs may include a capacitor or battery, which helps preserve data integrity in the event of an unexpected power loss. The capacitor or battery provides enough power to allow the data in the cache to be written to the non-volatile memory, ensuring no data is lost.
In some SSDs that use multi-level cell (MLC) flash memory, a potential issue known as "lower page corruption" can occur if power is lost while programming an upper page. This can result in previously written data becoming corrupted. To address this, some high-end SSDs incorporate supercapacitors to ensure all data can be safely written during a sudden power loss.
Some consumer SSDs have built-in capacitors to save critical data such as the Flash Translation Layer (FTL) mapping table. Examples include the Crucial M500 and Intel 320 series. Enterprise-class SSDs, such as the Intel DC S3700 series, often come with more robust power-loss protection mechanisms like supercapacitors or batteries.
Host Interface
The host interface of an SSD refers to the physical connector and the signaling methods used to communicate between the SSD and the host system. This interface is managed by the SSD's controller and is often similar to those found in traditional hard disk drives (HDDs). Common interfaces include the following (a sketch comparing usable throughput follows the list):
Serial ATA: One of the most widely used interfaces in consumer SSDs. SATA 3.0 supports transfer speeds up to 6.0 Gbit/s.
Serial attached SCSI: Primarily used in enterprise environments, SAS interfaces are faster and more robust than SATA. SAS 3.0 offers speeds of up to 12.0 Gbit/s.
PCI Express (PCIe): A high-speed interface used in high-performance SSDs. PCIe 3.0 x4 supports transfer speeds of up to 31.5 Gbit/s.
M.2: A newer interface designed for SSDs that is more compact than SATA or PCIe, often found in laptops and high-end desktops. M.2 supports both SATA (up to 6.0 Gbit/s) and PCIe (up to 31.5 Gbit/s) interfaces.
U.2: Another interface used for enterprise-grade SSDs, providing PCIe 3.0 x4 speeds but with a more robust connector suitable for server environments.
Fibre Channel: Typically used in enterprise systems, Fibre Channel interfaces offer high data transfer speeds, with modern versions supporting up to 128 Gbit/s.
USB: Many external SSDs use the Universal Serial Bus interface, with modern versions like USB 3.1 Gen 2 supporting speeds of up to 10 Gbit/s.
Thunderbolt: Some high-end external SSDs use the Thunderbolt interface.
Parallel ATA (PATA): An older interface used in early SSDs, with speeds up to 1064 Mbit/s. PATA has largely been replaced by SATA due to higher data transfer rates and greater reliability.
Parallel SCSI: An interface primarily used in servers, with speeds ranging from 40 Mbit/s to 2560 Mbit/s. It has mostly been replaced by Serial Attached SCSI. The last SCSI-based SSD was introduced in 2004.
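The raw line rates quoted above overstate usable bandwidth, because serial links spend part of each transfer on line encoding: SATA 3.0 uses 8b/10b encoding (8 payload bits per 10 line bits), while PCIe 3.0 uses the much leaner 128b/130b. A quick comparison sketch:

# Usable payload bandwidth after line-encoding overhead.
def usable_mb_per_s(line_gbit, data_bits, line_bits):
    """Payload megabytes per second for a given raw line rate."""
    return line_gbit * 1000 * (data_bits / line_bits) / 8

print(usable_mb_per_s(6.0, 8, 10))      # SATA 3.0: 600.0 MB/s
print(usable_mb_per_s(32.0, 128, 130))  # PCIe 3.0 x4 (8 GT/s/lane): ~3938 MB/s

This is why a 6.0 Gbit/s SATA link tops out near 600 MB/s in practice, and why the 31.5 Gbit/s figure quoted above for PCIe 3.0 x4 is already the post-encoding payload rate.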
SSDs may support various logical interfaces, which define the command sets used by operating systems to communicate with the SSD; a comparison of their queueing models follows the list. Two common logical interfaces include:
Advanced Host Controller Interface (AHCI): Initially designed for HDDs, AHCI is commonly used with SATA SSDs but is less efficient for modern SSDs due to its overhead.
NVM Express (NVMe): A modern interface designed specifically for SSDs, NVMe takes full advantage of the parallelism in SSDs, providing significantly lower latency and higher throughput than AHCI.
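Much of NVMe's efficiency advantage comes from its command queueing model: AHCI offers a single queue with up to 32 outstanding commands, whereas the NVMe specification allows roughly 64K queues with roughly 64K commands each. The difference in potential outstanding parallelism is easy to quantify:

# Maximum outstanding commands under each logical interface.
ahci_outstanding = 1 * 32             # 1 queue x 32 commands
nvme_outstanding = 65_535 * 65_536    # ~64K queues x ~64K commands each

print(ahci_outstanding)                       # 32
print(nvme_outstanding)                       # 4,294,901,760
print(nvme_outstanding // ahci_outstanding)   # ~134 million times deeper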
Configurations
The size and shape of any device are largely driven by the size and shape of the components used to make that device. Traditional HDDs and optical drives are designed around the rotating platter(s) or optical disc along with the spindle motor inside. Since an SSD is made up of various interconnected integrated circuits (ICs) and an interface connector, its shape is no longer limited to the shape of rotating media drives. Some solid-state storage solutions come in a larger chassis that may even be a rack-mount form factor with numerous SSDs inside. They would all connect to a common bus inside the chassis and connect outside the box with a single connector.
For general computer use, the 2.5-inch form factor (typically found in laptops and used for most SATA SSDs) is the most popular, in three thicknesses (7.0 mm, 9.5 mm, and 14.8 or 15.0 mm; with 12.0 mm also available for some models). For desktop computers with 3.5-inch hard disk drive slots, a simple adapter plate can be used to make such a drive fit. Other types of form factors are more common in enterprise applications. An SSD can also be completely integrated in the other circuitry of the device, as in the Apple MacBook Air (starting with the fall 2010 model). mSATA and M.2 form factors have also gained popularity, primarily in laptops.
Standard HDD form factors
The benefit of using a current HDD form factor would be to take advantage of the extensive infrastructure already in place to mount and connect the drives to the host system. These traditional form factors are known by the size of the rotating media (i.e., 5.25-inch, 3.5-inch, 2.5-inch or 1.8-inch) and not the dimensions of the drive casing.
Standard card form factors
For applications where space is at a premium, like for ultrabooks or tablet computers, a few compact form factors were standardized for flash-based SSDs.
There is the mSATA form factor, which uses the PCI Express Mini Card physical layout. It remains electrically compatible with the PCI Express Mini Card interface specification while requiring an additional connection to the SATA host controller through the same connector.
The M.2 form factor, formerly known as the Next Generation Form Factor (NGFF), is a natural transition from mSATA and the physical layout it used to a more usable and more advanced form factor. While mSATA took advantage of an existing form factor and connector, M.2 was designed to maximize usage of the card space while minimizing the footprint. The M.2 standard allows both SATA and PCI Express SSDs to be fitted onto M.2 modules.
Some high-performance, high-capacity drives use the standard PCI Express add-in card form factor to house additional memory chips, permit the use of higher power levels, and allow the use of a large heat sink. There are also adapter boards that convert other form factors, especially M.2 drives with a PCIe interface, into regular add-in cards.
Disk-on-a-module form factors
A disk-on-a-module (DOM) is a flash drive with either 40/44-pin Parallel ATA (PATA) or SATA interface, intended to be plugged directly into the motherboard and used as a computer hard disk drive (HDD). DOM devices emulate a traditional hard disk drive, resulting in no need for special drivers or other specific operating system support. DOMs are usually used in embedded systems, which are often deployed in harsh environments where mechanical HDDs would simply fail, or in thin clients because of small size, low power consumption, and silent operation.
Storage capacities range from 4 MB to 128 GB, with different variations in physical layout, including vertical or horizontal orientation.
Box form factors
Many of the DRAM-based solutions use a box that is often designed to fit in a rack-mount system. The number of DRAM components required to get sufficient capacity to store the data along with the backup power supplies requires a larger space than traditional HDD form factors.
Bare-board form factors
Form factors which were more common to memory modules are now being used by SSDs to take advantage of their flexibility in laying out the components. Some of these include PCIe, mini PCIe, mini-DIMM, MO-297, and many more. The SATADIMM from Viking Technology uses an empty DDR3 DIMM slot on the motherboard to provide power to the SSD with a separate SATA connector to provide the data connection back to the computer. The result is an easy-to-install SSD with a capacity equal to drives that typically take a full 2.5-inch drive bay. At least one manufacturer, Innodisk, has produced a drive that sits directly on the SATA connector (SATADOM) on the motherboard without any need for a power cable. Some SSDs are based on the PCIe form factor and connect both the data interface and power through the PCIe connector to the host. These drives can use either direct PCIe flash controllers or a PCIe-to-SATA bridge device which then connects to SATA flash controllers.
There are also SSDs in the form of PCIe cards; these are sometimes called HHHL (half height, half length) or AIC (add-in card) SSDs.
Ball grid array form factors
In the early 2000s, a few companies introduced SSDs in Ball Grid Array (BGA) form factors, such as M-Systems' (now SanDisk) DiskOnChip and Silicon Storage Technology's NANDrive (now produced by Greenliant Systems), and Memoright's M1000 for use in embedded systems. The main benefits of BGA SSDs are their low power consumption, small chip package size to fit into compact subsystems, and that they can be soldered directly onto a system motherboard to reduce adverse effects from vibration and shock.
Such embedded drives often adhere to the eMMC and eUFS standards.
Development and history
Early SSDs using RAM and similar technology
The first devices resembling solid-state drives (SSDs) used semiconductor technology, with an early example being the 1978 StorageTek STC 4305. This device was a plug-compatible replacement for the IBM 2305 hard drive, initially using charge-coupled devices for storage and later switching to dynamic random-access memory (DRAM). The STC 4305 was significantly faster than its mechanical counterparts and cost around $400,000 for a 45 MB capacity. Though early SSD-like devices existed, they were not widely used due to their high cost and small storage capacity.
In the late 1980s, companies like Zitel began selling DRAM-based SSD products under the name "RAMDisk." These devices were primarily used in specialized systems like those made by UNIVAC and Perkin-Elmer.
SSDs using Flash
Flash memory, a key component in modern SSDs, was invented in 1980 by Fujio Masuoka at Toshiba. Flash-based SSDs were patented in 1989 by the founders of SanDisk, which released its first product in 1991: a 20 MB SSD for IBM laptops. While the storage capacity was limited and the price high (around $1,000), this marked the beginning of a transition to flash memory as an alternative to traditional hard drives.
In the 1990s, new manufacturers of flash memory drives emerged, including STEC, Inc., M-Systems, and BiTMICRO.
As the technology advanced, SSDs saw dramatic improvements in capacity, speed, and affordability. By 2016, commercially available SSDs had more capacity than the largest available HDDs. By 2018, flash-based SSDs had reached capacities of up to 100 TB in enterprise products, with consumer SSDs offering up to 16 TB. These advancements were accompanied by significant increases in read and write speeds, with some high-end consumer models reaching speeds of up to 14.5 GB/s.
In 2021, NVMe 2.0 with Zoned Namespaces (ZNS) was announced. ZNS allows data to be mapped directly to its physical location in memory, providing direct access on an SSD without a flash translation layer. In 2024, Samsung announced what it called the world's first SSD with a hybrid PCIe interface, the Samsung 990 EVO. The hybrid interface runs in either the x4 PCIe 4.0 or x2 PCIe 5.0 modes, a first for an M.2 SSD.
SSD prices have also fallen dramatically, with the cost per gigabyte decreasing from around $50,000 in 1991 to less than $0.05 by 2020.
Enterprise flash drives
Enterprise flash drives (EFDs) are designed for high-performance applications requiring fast input/output operations per second (IOPS), reliability, and energy efficiency. EFDs often have higher specifications than consumer SSDs, making them suitable for mission-critical applications. The term was first used by EMC in 2008 to describe SSDs built for enterprise environments.
One example of an EFD is the Intel DC S3700 series, launched in 2012. These drives were notable for their consistent performance, maintaining IOPS variation within a narrow range, which is crucial for enterprise environments.
Another significant product is the Toshiba PX02SS series, launched in 2016. Designed for write-intensive applications like online transaction processing, these drives achieved impressive read and write speeds and high endurance ratings.
Drives using other persistent memory technologies
In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand. Unlike NAND flash, 3D XPoint uses a different method to store data, offering higher IOPS performance, although sequential read and write speeds remain slower compared to traditional SSDs.
Consumer use
As SSD technology continues to improve, SSDs are increasingly used in ultra-mobile PCs and lightweight laptop systems. The first PC with a flash-memory SSD to become available was the Sony Vaio UX90, announced for pre-order on 27 June 2006; it began shipping in Japan on 3 July 2006 with a 16 GB flash memory drive. Another of the first mainstream releases of SSD was the XO Laptop, built as part of the One Laptop Per Child project. Mass production of these computers, built for children in developing countries, began in December 2007. By 2009, Dell, Toshiba, Asus, Apple, and Lenovo had begun producing laptops with SSDs.
By 2010, Apple's MacBook Air line began using solid-state drives as the default. In 2011, Intel's Ultrabooks became the first widely available consumer computers using SSDs aside from the MacBook Air. At present, SSD devices are widely used and distributed by a number of companies, with a small number of companies manufacturing the NAND flash devices within them.
Sales
SSD shipments were 11 million units in 2009, 17.3 million units in 2011 (for a total of US$5 billion), and 39 million units in 2012; they were expected to rise to 83 million units in 2013, 201.4 million units in 2016, and 227 million units in 2017.
Revenues for the SSD market worldwide totaled $585 million in 2008, rising over 100% from $259 million in 2007.
File-system support
The same file systems used on hard disk drives can typically also be used on solid state drives. File systems that support SSDs generally also support the TRIM command, which helps the SSD to recycle discarded data. The file system does not need to manage wear leveling or other flash memory characteristics, as they are handled internally by the SSD. Some log-structured file systems (e.g. F2FS, JFFS2) help to reduce write amplification on SSDs, especially in situations where only very small amounts of data are changed, such as when updating file-system metadata.
If an operating system does not support using TRIM on discrete swap partitions, it might be possible to use swap files inside an ordinary file system instead. For example, OS X does not support swap partitions; it only swaps to files within a file system, so it can use TRIM when, for example, swap files are deleted.
Linux
Since 2010, standard Linux drive utilities have taken care of appropriate partition alignment by default.
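Alignment can be verified with standard tools. The following is a minimal sketch, assuming a hypothetical drive at /dev/sda, that uses parted's align-check command to test whether the first partition starts on an optimal boundary:

    # check whether partition 1 of /dev/sda meets the optimal alignment criteria
    sudo parted /dev/sda align-check optimal 1
    # parted prints "1 aligned" if the partition is optimally aligned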
Kernel support for the TRIM operation was introduced in version 2.6.33 of the Linux kernel mainline, released on 24 February 2010. The ext4, Btrfs, XFS, JFS, and F2FS file systems include support for the discard (TRIM or UNMAP) function. To make use of TRIM, a file system must be mounted with the discard parameter. By default, Linux swap partitions perform discard operations when the underlying drive supports TRIM, with the possibility of turning them off. Support for queued TRIM, a SATA 3.1 feature that results in TRIM commands not disrupting the command queues, was introduced in Linux kernel 3.12, released on 2 November 2013.
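As an illustration of the discard parameter, the sketch below shows one way continuous TRIM might be enabled; it assumes an ext4 filesystem, and the UUID is a placeholder:

    # /etc/fstab entry enabling continuous TRIM (discard) on an ext4 root filesystem
    UUID=0123-4567  /  ext4  defaults,discard  0  1

    # enable discard on an already-mounted filesystem without rebooting
    sudo mount -o remount,discard /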
An alternative to the kernel-level TRIM operation is to use a user-space utility called fstrim, which goes through all of the unused blocks in a filesystem and dispatches TRIM commands for those areas. The fstrim utility is usually run by cron as a scheduled task.
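A brief sketch of this batched approach follows; the weekly timer shown is available on systemd-based distributions, and scheduling details vary by distribution:

    # trim free space on the root filesystem once, reporting how much was discarded
    sudo fstrim -v /

    # trim every mounted filesystem that supports discard (recent util-linux versions)
    sudo fstrim --all

    # or enable the packaged weekly timer on systemd-based systems
    sudo systemctl enable --now fstrim.timer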
Linux performance considerations
During installation, Linux distributions usually do not configure the installed system to use TRIM, so the /etc/fstab file requires manual modification. This is because the current Linux TRIM command implementation might not be optimal: it has been shown to cause performance degradation rather than improvement under certain circumstances, because Linux sends an individual TRIM command to each sector instead of a vectorized list defining a TRIM range, as recommended by the TRIM specification.
For performance reasons, it is often recommended to switch the I/O scheduler from the default CFQ (Completely Fair Queuing) to NOOP or Deadline. CFQ was designed for traditional magnetic media and optimizes for seek times, so many of those I/O scheduling efforts are wasted when used with SSDs. SSDs offer much greater parallelism for I/O operations, so it is preferable to leave scheduling decisions to their internal logic, especially for high-end SSDs.
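On kernels using the legacy single-queue block layer described here, the scheduler can be inspected and changed per device through sysfs. A minimal sketch, assuming a hypothetical SSD at /dev/sda (newer blk-mq kernels expose different scheduler names, such as mq-deadline and none):

    # list available schedulers; the active one appears in brackets
    cat /sys/block/sda/queue/scheduler

    # switch this device to the deadline scheduler at runtime
    echo deadline | sudo tee /sys/block/sda/queue/scheduler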
A scalable block layer for high-performance SSD storage, known as blk-multiqueue or blk-mq and developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. This leverages the performance offered by SSDs and NVMe by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), thus removing bottlenecks and allowing much higher levels of I/O parallelization. As of version 4.0 of the Linux kernel, released on 12 April 2015, VirtIO block driver, the SCSI layer (which is used by Serial ATA drivers), device mapper framework, loop device driver, unsorted block images (UBI) driver (which implements erase block management layer for flash memory devices) and RBD driver (which exports Ceph RADOS objects as block devices) have been modified to actually use this new interface; other drivers will be ported in the following releases.
macOS
Versions since Mac OS X 10.6.8 (Snow Leopard) support TRIM but only when used with an Apple-purchased SSD. TRIM is not automatically enabled for third-party drives, although it can be enabled by using third-party utilities such as Trim Enabler. The status of TRIM can be checked in the System Information application or in the system_profiler command-line tool.
Versions since OS X 10.10.4 (Yosemite) include sudo trimforce enable as a Terminal command that enables TRIM on non-Apple SSDs. There is also a technique to enable TRIM in versions earlier than Mac OS X 10.6.8, although it remains uncertain whether TRIM is actually utilized properly in those cases.
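For example, TRIM status for a SATA-attached drive can be checked from the Terminal, and enabled for third-party drives on OS X 10.10.4 or later; the grep filter below is illustrative:

    # report TRIM support for SATA drives ("TRIM Support: Yes" when active)
    system_profiler SPSerialATADataType | grep "TRIM"

    # enable TRIM for third-party SSDs; asks for confirmation, then reboots
    sudo trimforce enable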
Microsoft Windows
Prior to version 7, Microsoft Windows did not take any specific measures to support solid state drives. From Windows 7, the standard NTFS file system provides support for the TRIM command.
By default, Windows 7 and newer versions execute TRIM commands automatically if the device is detected to be a solid-state drive. However, because TRIM irreversibly resets all freed space, it may be desirable to disable support where enabling data recovery is preferred over wear leveling. Windows implements TRIM for more than just file-delete operations. The TRIM operation is fully integrated with partition- and volume-level commands such as format and delete, with file-system commands relating to truncate and compression, and with the System Restore (also known as Volume Snapshot) feature.
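Whether Windows is issuing TRIM can be checked from an elevated Command Prompt; in this brief sketch, a reported value of 0 means delete notifications (TRIM) are enabled:

    rem query TRIM state; DisableDeleteNotify = 0 means TRIM is enabled
    fsutil behavior query DisableDeleteNotify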
Defragmentation should be disabled on solid-state drives because the location of the file components on an SSD does not significantly impact its performance; moving files to make them contiguous using the Windows Defrag routine only causes unnecessary write wear, consuming some of the SSD's limited write cycles. The SuperFetch feature likewise does not materially improve performance on SSDs and causes additional overhead in the system and SSD.
Windows Vista
Windows Vista generally expects hard disk drives rather than SSDs. Windows Vista includes ReadyBoost to exploit characteristics of USB-connected flash devices, but for SSDs it only improves the default partition alignment to prevent read-modify-write operations that reduce the speed of SSDs. Most SSDs are typically split into 4 KiB sectors, while earlier systems may be based on 512 byte sectors with their default partition setups unaligned to the 4 KiB boundaries. Windows Vista does not send the TRIM command to solid-state drives, but some third-party utilities such as SSD Doctor will periodically scan the drive and TRIM the appropriate entries.
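Partition alignment on such systems can be checked by reading each partition's starting offset: an offset evenly divisible by 4096 indicates 4 KiB alignment. A minimal sketch using the wmic tool:

    rem list partition starting offsets; values divisible by 4096 are 4 KiB aligned
    wmic partition get Name, StartingOffset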
Windows 7
Windows 7 and later versions have native support for SSDs. The operating system detects the presence of an SSD and optimizes operation accordingly. For SSD devices, Windows 7 disables ReadyBoost and automatic defragmentation. Despite an initial statement by Steven Sinofsky before the release of Windows 7, however, defragmentation is not disabled entirely, even though its behavior on SSDs differs. One reason is the low performance of the Volume Shadow Copy Service on fragmented SSDs; a second is to avoid reaching the practical maximum number of file fragments that a volume can handle.
Windows 7 also includes support for the TRIM command to reduce garbage collection for data that the operating system has already determined is no longer valid.
Windows 8.1 and later
Windows 8.1 and later Windows systems also support automatic TRIM for PCI Express SSDs based on NVMe. For Windows 7, the KB2990941 update is required for this functionality and needs to be integrated into Windows Setup using DISM if Windows 7 has to be installed on the NVMe SSD. Windows 8/8.1 also supports the SCSI unmap command, an analog of SATA TRIM, for USB-attached SSDs or SATA-to-USB enclosures. It is also supported over USB Attached SCSI Protocol (UASP).
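As a sketch of that integration step, the update can be added to an offline Windows 7 image with DISM; the mount directory and package path below are illustrative placeholders:

    rem add the NVMe hotfix to a Windows image previously mounted at C:\mount
    dism /Image:C:\mount /Add-Package /PackagePath:C:\updates\Windows6.1-KB2990941-x64.msu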
While Windows 7 supported automatic TRIM for internal SATA SSDs, Windows 8.1 and Windows 10 support manual TRIM as well as automatic TRIM for SATA, NVMe and USB-attached SSDs. Disk Defragmenter in Windows 10 and 11 may execute TRIM to optimize an SSD.
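A retrim can also be requested manually; a short sketch from an elevated Command Prompt, with the drive letter as an illustrative placeholder:

    rem ask the optimizer to retrim all free space on drive C:
    defrag C: /L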
ZFS
Solaris as of version 10 Update 6 (released in October 2008), and recent versions of OpenSolaris, Solaris Express Community Edition, Illumos, Linux with ZFS on Linux, and FreeBSD all can use SSDs as a performance booster for ZFS. A low-latency SSD can be used for the ZFS Intent Log (ZIL), where it is named the SLOG. An SSD may also be used for the level 2 Adaptive Replacement Cache (L2ARC), which is used to cache data for reading.
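A hedged sketch of both uses, assuming a hypothetical pool named tank and spare SSDs at the FreeBSD-style device nodes /dev/ada1 and /dev/ada2:

    # dedicate a low-latency SSD as the separate intent log (SLOG) device
    zpool add tank log /dev/ada1

    # add an SSD as a level 2 ARC (L2ARC) read cache device
    zpool add tank cache /dev/ada2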
FreeBSD
ZFS for FreeBSD introduced support for TRIM on September 23, 2012. The Unix File System also supports the TRIM command.
Standardization organizations
The following are noted standardization organizations and bodies that work to create standards for solid-state drives (and other computer storage devices), along with organizations that promote the use of solid-state drives. This is not necessarily an exhaustive list.
See also
Board solid-state drive
List of solid-state drive manufacturers
List of flash memory controller manufacturers
Hard disk drive
RAID
Flash Core Module
RAM drive
References
Further reading
"Solid-state revolution: in-depth on how SSDs really work". Lee Hutchinson. Ars Technica. June 4, 2012.
Mai Zheng, Joseph Tucek, Feng Qin, Mark Lillibridge, "Understanding the Robustness of SSDs under Power Fault", FAST'13
Cheng Li, Philip Shilane, Fred Douglis, Hyong Shim, Stephen Smaldone, Grant Wallace, "Nitro: A Capacity-Optimized SSD Cache for Primary Storage", USENIX ATC'14
External links
JEDEC Continues SSD Standardization Efforts
Linux & NVM: File and Storage System Challenges (PDF)
Linux and SSD Optimization
Understanding the Robustness of SSDs under Power Fault (USENIX 2013, by Mai Zheng, Joseph Tucek, Feng Qin and Mark Lillibridge)
20th-century inventions
Computer storage devices
Non-volatile memory
Solid-state computer storage
Solid-state computer storage media | Solid-state drive | [
"Technology"
] | 8,704 | [
"Computer storage devices",
"Recording devices"
] |
7,367,038 | https://en.wikipedia.org/wiki/Vacuum%20packing | Vacuum packing is a method of packaging that removes air from the package prior to sealing. This method involves placing items in a plastic film package, removing air from inside and sealing the package. Shrink film is sometimes used to have a tight fit to the contents. The intent of vacuum packing is usually to remove oxygen from the container to extend the shelf life of foods and, with flexible package forms, to reduce the volume of the contents and package.
Vacuum packing reduces atmospheric oxygen, limiting the growth of aerobic bacteria or fungi, and preventing the evaporation of volatile components. It is also commonly used to store dry foods over a long period of time, such as cereals, nuts, cured meats, cheese, smoked fish, coffee, and potato chips (crisps). On a more short-term basis, vacuum packing can also be used to store fresh foods, such as vegetables, meats, and liquids, because it inhibits bacterial growth.
Vacuum packing greatly reduces the bulk of non-food items. For example, clothing and bedding can be stored in bags evacuated with a domestic vacuum cleaner or a dedicated vacuum sealer. This technique is sometimes used to compact household waste, for example where a charge is made for each full bag collected.
Vacuum packaging products, using plastic bags, canisters, bottles, or mason jars, are available for home use.
For delicate food items that might be crushed by the vacuum packing process (such as potato chips), an alternative is to replace the interior gas with nitrogen. This has the same effect of inhibiting deterioration due to the removal of oxygen.
Types
Edge, suction, and external vacuum sealers
External vacuum sealers involve a bag being attached to the vacuum-sealing machine externally. The machine will remove the air and seal the bag, which is all done outside the machine. A heat sealer is often used to seal the pack. Typically these units use a dry piston vacuum pump which is often considered a "maintenance-free" pump. For sealing dry goods only, this is the preferred method. Moist foods are known to cause internal corrosion on these dry piston pumps.
Single-chamber vacuum sealers
Single-chamber sealers require the entire product to be placed within the machine. Like external sealers, a plastic bag is typically used for packaging. Once the product is placed in the machine, the lid is closed and air is removed. A heat seal inside the chamber then seals the bag; after sealing, the chamber is refilled with air by the automatic opening of a vent to the outside. This oncoming pressure squeezes any remaining air in the bag. The lid is then opened and the product removed. Chamber sealers are typically used for low-to-medium-volume packaging. This style of vacuum machine is also capable of sealing liquids, because the equal pressure in the chamber and the bag eliminates the risk of the liquid being sucked out of the open edge of the bag.
Double-chamber vacuum sealers
Double-chamber sealers require the entire product to be placed in a plastic bag within the machine. Once the product is placed in the machine on the seal bar, the lid is closed and air is removed. A seal bar inside the chamber then seals the product in the bag; after sealing, the chamber is refilled with air by the automatic opening of a vent to the outside. This oncoming pressure squeezes any remaining air in the bag. The lid is then opened and the product removed. Double-chamber sealers are typically used for medium-volume packaging, and also have the capability to vacuum seal liquids. The lid generally swings from one side to another, increasing production speed over a single-chamber model. Double-chamber vacuum packaging machines generally have either spring-weighted lids or fully automatic lids.
Double-chamber vacuum packaging machines are commonly used for:
Fresh meat
Processed meat
Cheese (hard and soft)
Candy and chocolate
Rotary belt type vacuum sealers
A rotary belt type vacuum packaging machine, or vacuum sealer, serves the same function as the double-chamber vacuum packaging machine as a 'vacuum bag sealer', but is more convenient: the belt rotates automatically as bags are placed against the sealing bar and the vacuum sealing process completes, and the vacuumed and sealed bags are unloaded automatically.
The packaging plate of the machine is adjustable to 4 degrees, which allows the vacuum packaging of foods containing soup or other liquids.
Rotary belt type packaging machines are commonly used for:
Fresh meat
Processed meat
Seafood
Pickles
Cheese (hard and soft)
Candy and chocolate
Any other packages that need vacuum sealing, provided the package is not too large.
Automatic belt vacuum chamber machines
Automatic belt chamber sealers require the entire product to be placed in a plastic bag or flow-wrapped pouch within the machine. The product travels on the conveyor belt and is automatically positioned in the machine on the seal bar; the lid is closed and air is removed. A seal bar inside the chamber then seals the product in the bag. After sealing the bag, the chamber is refilled with air by the automatic opening of a vent to the outside. This oncoming pressure squeezes any remaining air in the bag. The lid is then opened and the product removed. Automatic belt vacuum chamber machines are typically used for high-speed packaging of large items, and also have the capability to vacuum seal liquids. The lid generally travels straight up and down.
Automatic belt vacuum chamber packaging machines are commonly used for:
Fresh meat (large portions)
Processed meat
Large sausage logs
Cheese (hard and soft)
Thermoforming HFFS vacuum packaging machines
Vacuum packaging in large production facilities can be done with thermoforming machines. These are Form-Fill-Seal style machines that form the package from rolls of packaging film (webbing). Products are loaded into the thermoformed pockets, and the top web is laid and sealed under vacuum, modified atmosphere (MAP), or skin packaging, rapidly producing packaged products. Thermoforming can greatly increase packaging production speed.
Thermoformed plastics can be customized for size, color, clarity, and shape to fit products perfectly, creating a consistent appearance. One of the most commonly used thermoformed plastics is PET, known for a high-strength barrier resistant to outside tampering and for its ease of molding into designated designs and shapes. Some common uses for thermoforming in vacuum packaging include:
Fresh and marinated meat
Sausage
Cheese
Candy and chocolate
Grain
Grab-and-go snacks (beef jerky, snack sticks)
Pharmaceutical and medical products
Coins and collectables
Food storage
Food safety
In an oxygen-depleted environment, anaerobic bacteria can proliferate, potentially causing food-safety issues. Some pathogens of concern in vacuum packed foods are spore-forming non-proteolytic Clostridium botulinum, Yersinia enterocolitica, and Listeria monocytogenes. Vacuum packing is often used in combination with other food processing techniques, such as retorting or refrigeration, to inhibit the growth of anaerobic organisms.
Shelf life
Depending on the product, atmosphere, temperature, and the barrier properties of the package, vacuum packaging extends the shelf life of many foods. The shelf life of meats can be extended by vacuum packaging, particularly when used with modified atmosphere packaging.
High barrier-chamber vacuum shrink bags
The amount of shelf life enhanced by a vacuum bag is dependent on the structure in the material. A standard vacuum bag is composed of a PA/PE structure where PA is for puncture resistance and PE is for sealing. The high barrier category includes the usage of more layers focused on the prevention of oxygen permeability, and therefore shelf life protection. There are two materials used in high barrier structures, polyvinylidene chloride (PVDC) and ethylene vinyl alcohol (EVOH). Shelf life indication can be effectively measured by how many cubic centimeters of oxygen can permeate through 1 square meter of material over a 24-hour period. A standard PA/PE bag allows on average 100 cubic centimeters, PVDC allows on average over 10, and EVOH on average 1 cubic centimeter. Multi-layer structures allow the ability to use strong oxygen-barrier materials for enhanced shelf life protection.
Freezer burn
When foods are frozen without preparation, freezer burn can occur. It happens when the surface of the food is dehydrated, and this leads to a dried and leathery appearance. Freezer burn also changes the flavor and texture of foods. Vacuum packing reduces freezer burn by preventing the food from exposure to the cold, dry air.
References
Further reading
Robertson, G.L., Food Packaging: Principles and Practice, 3rd edition, 2013,
Yam, K. L., Encyclopedia of Packaging Technology, John Wiley & Sons, 2009,
The Colonel In The Kitchen: A Surprising History Of Sous Vide at National Public Radio
Food preservation
Packaging
Vacuum
Articles containing video clips | Vacuum packing | [
"Physics"
] | 1,834 | [
"Vacuum",
"Matter"
] |
7,367,379 | https://en.wikipedia.org/wiki/Julius%20%28software%29 | Julius is a speech recognition engine, specifically a high-performance, two-pass large vocabulary continuous speech recognition (LVCSR) decoder software for speech-related researchers and developers. It can perform almost real-time computing (RTC) decoding on most current personal computers (PCs) in 60k word dictation task using word trigram (3-gram) and context-dependent Hidden Markov model (HMM). Major search methods are fully incorporated.
It is also modularized carefully to be independent from model structures, and various HMM types are supported such as shared-state triphones and tied-mixture models, with any number of mixtures, states, or phones. Standard formats are adopted to cope with other free modeling toolkit. The main platform is Linux and other Unix workstations, and it works on Windows. Julius is free and open-source software, released under a revised BSD style software license.
Julius has been developed as part of a free software toolkit for Japanese LVCSR research since 1997, and the work was continued at the Continuous Speech Recognition Consortium (CSRC), Japan, from 2000 to 2003.
From rev. 3.4, a grammar-based recognition parser named Julian has been integrated into Julius. Julian is a modified version of Julius that uses a hand-designed deterministic finite automaton (DFA) grammar, a type of finite-state machine (FSM), as its language model. It can be used to build small-vocabulary voice command systems or various spoken dialog system tasks.
About models
To run, the Julius recognizer needs a language model and an acoustic model for each language.
Julius adopts acoustic models in Hidden Markov Model Toolkit (HTK) ASCII format, pronunciation dictionary in HTK-like format, and word 3-gram language models in ARPA standard format: forward 2-gram and reverse 3-gram as trained from speech corpus with reversed word order.
Although Julius is only distributed with Japanese models, the VoxForge project is working to create English acoustic models for use with the Julius Speech Recognition Engine.
In April 2018, thanks to the effort of Mozilla foundation, a 350-hour audio corpus of spoken English was made available. The new English ENVR-v5.4 open-source speech model was released along with Polish PLPL-v7.1 models and are available from SourceForge.
See also
List of speech recognition software
References
External links
Speech recognition software
Computational linguistics
Free software projects | Julius (software) | [
"Technology"
] | 524 | [
"Natural language and computing",
"Computational linguistics"
] |
7,367,566 | https://en.wikipedia.org/wiki/CollectSPACE | collectSPACE is an online publication and community for space history enthusiasts featuring articles and photos about space artifacts and memorabilia, information on past, current, and upcoming space events, space history collecting resources, and links to other space-related websites. It also provides an array of message boards where registered members can discuss various aspects of space history and the space collecting hobby; buy, sell, or trade items; or pose "what if?" historical questions. Users often abbreviate the website's name as "cS," and members often refer to each other as "cSers."
collectSPACE, founded and edited by Robert Pearlman, has published articles and reviews by authors Andrew Chaikin (A Man on the Moon), Kris Stoever (For Spacious Skies), James Oberg (Red Star in Orbit), Frederick Ordway III (Imagining Space), Francis French (In the Shadow of the Moon), David Hitt (Homesteading Space), Russell Still (Relics of the Space Race), Colin Burgess (Into That Silent Sea), Jay Gallentine (Ambassadors From Earth) and Apollo astronaut Walt Cunningham, among others.
History
The website's intended name was spacememorabilia.com, for which a logo had been designed; however, the URL was owned (though not in use) by former Gemini and Apollo astronaut Pete Conrad. Pearlman instead bought the URL collectSPACE.com, which came online on July 20, 1999, the 30th anniversary of the Apollo 11 Moon landing (Conrad died unexpectedly July 8).
collectSpace originally contained a photo gallery, drawing on Pearlman's personal collection; "Sightings," a calendar of astronaut appearances; and a short article about Apollo 11 anniversary toys. "Sightings" was chosen to show up in Internet searches for Sightings, a TV series about UFOs. The site's original tagline was "memorabilia from the conquest of the final frontier," which became "The Source for Space History & Artifacts."
collectSPACE earned national media attention later in 1999 for its role in halting a controversial eBay auction for Space Shuttle Challenger debris. In September 1999, it first covered a space memorabilia auction—Christie's East—followed by Superior Galleries of Beverly Hills, California the following month. collectSPACE was the first to webcast space memorabilia auctions, providing live audio (and one year, video) from Superior Gallery's auction floor, as well as live hammer results (auction houses subsequently added their own webcast capabilities or partnered with eBay for live online bidding).
The site's message board went online in November 1999. Among those posting and replying to messages have been former Apollo (EECOM flight controller) Sy Liebergot; Stephen Clemmons, a member of the Apollo 1 ground support crew; Project Mercury astronaut Scott Carpenter's daughter Kris Stoever; astronaut Pete Conrad's son, Pete Conrad, III; National Air and Space Museum curator Allan Needell, space historian Dwayne A. Day, Who's Who in Space authors Michael Cassutt and Rex Hall, Kraig McNutt of "Today In Space History," and The Surfaris' former bassist Andrew Lagomarsino, among others. A number of astronauts are known to be cS readers.
collectSPACE was nominated for The Houston Chronicle's best blog in its Ultimate Houston Readers Pick for 2005.
In 2006, collectSPACE was the first to reveal the name of NASA's next planned crewed spacecraft, Orion, and publish its logo; as well as the name Altair for the next planned lunar lander.
Charitable auctions
In the wake of the 9-11 terrorist attacks, collectSPACE organized Heroes Helping Heroes, an online auction benefiting the American Red Cross. In partnership with Yahoo! Auctions, the site offered bidders the chance to have an item of their choice signed by one of 22 retired astronauts, who volunteered to participate. $12,686 was raised.
Between 2003 and 2006, collectSPACE hosted annual silent auctions benefiting the Astronaut Scholarship Foundation. The astronaut experiences and artifacts auctions have raised more than $180,000 for exceptional college students seeking degrees in science and engineering.
References
External links
Internet forums
Space organizations
American educational websites
Space advocacy organizations | CollectSPACE | [
"Astronomy"
] | 875 | [
"Space advocacy organizations",
"Astronomy organizations",
"Space organizations"
] |
7,367,581 | https://en.wikipedia.org/wiki/Onsemi | ON Semiconductor Corporation (stylized and doing business as onsemi) is an American semiconductor supplier company, based in Scottsdale, Arizona. Products include power and signal management, logic, discrete, and custom devices for automotive, communications, computing, consumer, industrial, LED lighting, medical, military/aerospace and power applications. onsemi runs a network of manufacturing facilities, sales offices and design centers in North America, Europe, and the Asia Pacific regions. Based on its 2016 revenues of $3.907 billion, onsemi ranked among the worldwide top 20 semiconductor sales leaders, and was ranked No. 483 on the 2022 Fortune 500 based on its 2021 sales.
History
onsemi was founded in 1999. The company was originally a spinoff of Motorola's Semiconductor Components Group headquartered in Phoenix, Arizona. It continues to manufacture Motorola's discrete, standard analog, and standard logic devices. On April 28, 2000, onsemi launched its initial public offering (IPO).
Steve Hanson was the first president and chief executive officer of the company until 2002. Keith Jackson from Fairchild Semiconductor replaced Hanson as the second leader for the next 20 years.
Major acquisitions added SANYO Semiconductor in 2011 and Fairchild Semiconductor in 2016, with total workforce over 30,000 and expanded its product portfolio.
In April 2019, the company signed the UN Global Compact.
In September 2020, chief executive officer Keith Jackson announced his retirement from the company. In December 2020, Hassane El-Khoury, previously the president and chief executive officer of Cypress Semiconductor, succeeded Jackson.
In February 2022, it was announced that BelGaN Group BV had completed the acquisition of all shares of ON Semiconductor Belgium BV from the onsemi group.
As of March 1, 2023, onsemi's headquarters was located in Scottsdale, Arizona.
Acquisitions
In April 2000, onsemi completed the acquisition of Cherry Semiconductor Corp. (CSC) for $250 million. CSC was founded in 1972 as Micro Components Corporation (MCC). CSC was headquartered in East Greenwich, Rhode Island, USA.
In 2003, onsemi acquired TESLA SEZAM (manufacturer of semiconductor chips) and TEROSIL (production of silicon) in the Czech Republic. Both of these companies were the successors of the former state-owned company TESLA.
In May 2006, onsemi completed the acquisition of LSI Logic Gresham, Oregon Design & Manufacturing Facility.
In January 2008, onsemi completed the acquisition of the CPU Voltage and PC Thermal Monitoring Business from Analog Devices, Inc., for $184 million.
In March 2008, onsemi completed the acquisition of AMI Semiconductor for $915 million.
On July 17, 2008, onsemi and Catalyst Semiconductor, Inc. announced the acquisition of Catalyst Semiconductor, Inc. by onsemi for $115 million. On October 9, 2008, Catalyst Semiconductor, Inc. announced the approval of the acquisition. On October 10, 2008, onsemi announced the completion of the acquisition.
In November 2009, onsemi completed the acquisition of PulseCore Semiconductor for $17 million.
In December 2009, onsemi announced the acquisition of California Micro Devices.
In June 2010, onsemi completed the acquisition of Sound Design Technologies, Ltd., for $22 million.
In January 2011, onsemi completed the acquisition of SANYO Semiconductor.
In February 2011, onsemi completed the acquisition of the CMOS Image Sensor Business Unit from Cypress Semiconductor, for $31.4 million.
In May 2014, onsemi completed the acquisition of Truesense Imaging, Inc.
In June 2014, onsemi announced a $400 million deal to acquire California-based Aptina Imaging Corp.
In July 2014, onsemi and Fujitsu Semiconductor announced a strategic partnership, including a foundry services agreement and a definitive agreement under which onsemi would become a 10% shareholder of Fujitsu's 8-inch wafer fab in Aizuwakamatsu, Japan.
In July 2015, onsemi completed the acquisition of Axsem AG.
In November 2015, onsemi announced the acquisition of Fairchild Semiconductor for $2.4 billion.
In August 2016, onsemi entered into a definitive agreement to divest its ignition IGBT business to Littelfuse, and also entered into a separate definitive agreement with Littelfuse to sell its transient voltage suppression diode and switching thyristor product lines, for a combined $104 million in cash.
In September 2016, onsemi completed the acquisition of Fairchild Semiconductor.
In March 2017, onsemi announced that it would acquire and license mmWave technology for automotive radar applications developed by IBM's Haifa, Israel, research team. It included staff, equipment, research facilities and intellectual property.
In May 2018, onsemi acquired Ireland-based company, SensL Technologies Ltd.
In June 2019, onsemi acquired Quantenna Communications for about $1 billion. In October 2021, Bloomberg News reported that onsemi was looking to sell off Quantenna's assets. After failing to find a buyer, onsemi shut down the division in 2022.
In April 2019, onsemi agreed to acquire GlobalFoundries 300mm wafer fabrication facility in East Fishkill, New York. In February 2023, it was announced the acquisition had been completed.
In August 2021 onsemi agreed to acquire GT Advanced Technologies.
In July 2024, onsemi completed the acquisition of SWIR Vision Systems.
In December 2024, onsemi announced an agreement to acquire Qorvo's silicon carbide JFET business, including its United Silicon Carbide subsidiary.
Products
onsemi manufactures products in the following areas:
Custom: ASICs; Custom Foundry Services; Custom ULP Memory; Custom CMOS Image Sensors; Integrated passive devices
Discrete: Bipolar Transistors; Diodes & Rectifiers; IGBTs & FETs; Thyristors; Silicon Carbide (SiC)
Power Management: AC/DC Controllers & Regulators; DC/DC Controllers, Converters, & Regulators; Drivers; Thermal Management; Voltage & Current Management
Logic: Clock Generation; Clock & Data Distribution; Memory; Microcontrollers; Standard Logic
Signal Management: Amplifiers & Comparators; Analog Switches; Audio/Video ASSP; Digital Potentiometers; EMI/RFI Filters; Interfaces; Optical, Image, & Touch Sensors
In 2013, the company introduced the industry's highest resolution optical image stabilization (OIS) integrated circuit (IC) for smartphone camera modules.
Corporate responsibility
Onsemi plans to achieve net-zero emissions by 2040. The industrial and automotive sectors, which are among the company's most important end markets, are responsible for more than 65% of global greenhouse gas emissions. This highlights the need for climate initiatives.
In May 2024, the company's ESG risk rating was at 20.8%.
Operations
The company has three segments:
Analog and Mixed-Signal Group (AMG)
Intelligent Sensing Group (ISG)
Power Solutions Group (PSG)
R&D
There are several Solution Engineering Centers (SEC) and Design Centers around the world. The company established the "onsemi Silicon Carbide Crystal Center" at Penn State's Materials Research Institute in 2023.
Solution engineering centers
United States: San Jose, California; Portland, Oregon; Detroit, Michigan; Nampa and Meridian, Idaho
Germany: Munich
South Korea: Seoul
China: Shanghai, Shenzhen
Taiwan: Taipei
Japan: Osaka, Tokyo
Slovakia: Piešťany
Design centers
United States: Phoenix, Arizona; Santa Clara, California; Sunnyvale, California; Longmont, Colorado; Pocatello, Idaho; Lower Gwynedd, Pennsylvania; East Greenwich, Rhode Island; Austin, Texas; Plano, Texas; Lindon, Utah; South Portland, Maine; Bedford, New Hampshire
Canada: Burlington, Waterloo
Belgium: Mechelen, Oudenaarde
Czech Republic: Brno, Rožnov pod Radhoštěm
Germany: Munich
Ireland: Limerick
Romania: Bucharest
Slovakia: Bratislava
Switzerland: Marin, Dübendorf
India: Bangalore
Israel: Haifa
Japan: Aizu, Gifu, Gunma
South Korea: Seoul, Bucheon
Taiwan: Zhubei
Manufacturing facilities
Current
Canada: Burlington
United States: Mountain Top, Pennsylvania (200 mm); Gresham, Oregon (200 mm); Nampa, Idaho (200 mm, 300 mm); East Fishkill, New York (300 mm)
Czech Republic: Rožnov pod Radhoštěm (150 mm)
China: Leshan; Shenzhen; Suzhou
Japan: Gunma; Aizu-Wakamatsu (200 mm)
Malaysia: Senawang, Negeri Sembilan (2 Plants, 150 mm)
South Korea: Bucheon (150 mm, 200 mm)
Philippines: Carmona; Tarlac City; Cebu
Vietnam: Thuan An, Binh Duong; Bien Hoa, Dong Nai
Sold
United States: Pocatello, Idaho (200 mm); South Portland, Maine (200 mm)
Belgium: Oudenaarde (150 mm)
Japan: Niigata (125 mm, 150 mm)
Closed
United States: Phoenix, Arizona (150 mm, sold); Rochester, New York (150 mm, sold); East Greenwich, Rhode Island (150 mm, sold)
Japan: Aizu (150 mm)
Awards
In 2000, onsemi won the Forbes Advertising Excellence best in category Industrial Machinery/Electrical Components.
onsemi won the Hot 100 Electronic products of 2009 and 2012 by EDN magazine.
In 2012, onsemi won the IR Magazine U.S. Awards in three fields, Best IR by a CEO or chairman for mid cap; No. 56 best company in the U.S. in terms of Investor Relations; No. 3 in Best Investor Relations in technology sector for mid/small cap companies.
In 2012 the company won the "Large Company of the Year Award" from the IEEE.
In 2016, 2017, 2018, 2019, 2020, 2021 and 2022, onsemi was named in World's Most Ethical Companies by Ethisphere Institute.
The company's subsidiary AMI Semiconductor (AMIS) has also won many awards, such as President's Award and Preferred Supplier from Rockwell Collins, Strategic Supplier Award from Emerson Rosemount, Inc., Outstanding Technical Support in New Product Development from Alliant Techsystems.
See also
Freescale Semiconductor, another Motorola semiconductor spinoff
List of semiconductor fabrication plants
References
External links
1999 establishments in Arizona
2000 initial public offerings
American brands
American companies established in 1999
Companies listed on the Nasdaq
Corporate spin-offs
Electronics companies established in 1999
Equipment semiconductor companies
Manufacturing companies based in Arizona
Manufacturing companies based in Phoenix, Arizona
Multinational companies headquartered in the United States
Power-line communication Internet access
Semiconductor companies of the United States | Onsemi | [
"Engineering"
] | 2,219 | [
"Equipment semiconductor companies",
"Semiconductor fabrication equipment"
] |
7,367,688 | https://en.wikipedia.org/wiki/VoxForge | VoxForge is a free speech corpus and acoustic model repository for open source speech recognition engines.
VoxForge was set up to collect transcribed speech to create a free GPL speech corpus for use with open source speech recognition engines. The speech audio files are 'compiled' into acoustic models for use with open source speech recognition engines such as Julius, ISIP, Sphinx, and HTK (note: HTK has distribution restrictions).
VoxForge has used LibriVox as a source of audio data since 2007.
See also
Speech recognition in Linux
List of speech recognition software
References
Sources
Deep learning for spoken language identification
VOXFORGE.ORG FREE SPEECH CORPUS (Google translate)
Tools for Collecting Speech Corpora via Mechanical-Turk
An Integrated Approach to Robust Speech Recognition for a Command and Control Application on the Motorcycle
External links
Computational linguistics
Free software projects
Speech recognition
Speech recognition software
Corpora | VoxForge | [
"Technology"
] | 184 | [
"Natural language and computing",
"Computational linguistics"
] |
7,368,244 | https://en.wikipedia.org/wiki/Hypofluorous%20acid | Hypofluorous acid, chemical formula , is the only known oxyacid of fluorine and the only known oxoacid in which the main atom gains electrons from oxygen to create a negative oxidation state. The oxidation state of the oxygen in this acid (and in the hypofluorite ion and in its salts called hypofluorites) is 0, while its valence is 2. It is also the only hypohalous acid that can be isolated as a solid. HOF is an intermediate in the oxidation of water by fluorine, which produces hydrogen fluoride, oxygen difluoride, hydrogen peroxide, ozone and oxygen. HOF is explosive at room temperature, forming HF and :
This reaction is catalyzed by water.
It was isolated in the pure form by passing fluorine gas over ice at −40 °C, rapidly collecting the HOF gas away from the ice, and condensing it: F2 + H2O → HOF + HF
The compound has been characterized in the solid phase by X-ray crystallography as a bent molecule with an angle of 101°. The O–F and O–H bond lengths are 144.2 and 96.4 picometres, respectively. The solid framework consists of chains with O–H···O linkages. The structure has also been analyzed in the gas phase, a state in which the H–O–F bond angle is slightly narrower (97.2°).
Thiophene chemists commonly call a solution of hypofluorous acid in acetonitrile (generated in situ by passing gaseous fluorine through water in acetonitrile) Rozen's reagent.
Difference from other hypohalous acids
The formal oxidation state of oxygen in hypofluorous acid and hypofluorite is 0; the same oxidation state found in molecular oxygen. In most oxygen compounds, including the other hypohalous acids, oxygen takes on a state of -2. The oxygen (0) atom is the root of hypofluorous acid's strength as an oxidizer, in contrast to the halogen (+1) atom in other hypohalic acids.
This alters the acid's chemistry. Where reduction of a general hypohalous acid reduces the halogen atom and yields the corresponding elemental halogen gas, reduction of hypofluorous acid instead reduces the oxygen atom and yields fluoride directly.
Unlike other hypohalous acids, HOF is a weaker oxidant than elemental fluorine.
Hypofluorites
Hypofluorites are formally derivatives of OF−, which is the conjugate base of hypofluorous acid. One example is trifluoromethyl hypofluorite (CF3OF), which is a trifluoromethyl ester of hypofluorous acid. The conjugate base is known in salts such as lithium hypofluorite.
See also
Hypochlorous acid, a related compound that is more technologically important but has not been obtained in pure form.
References
Halogen oxoacids
Triatomic molecules
Hypofluorites
Mineral acids | Hypofluorous acid | [
"Physics",
"Chemistry"
] | 673 | [
"Acids",
"Inorganic compounds",
"Mineral acids",
"Molecules",
"Triatomic molecules",
"Matter"
] |
7,368,301 | https://en.wikipedia.org/wiki/Letter%20of%20introduction | The letter of introduction, along with the visiting card, was an important part of polite social interaction in the 18th and 19th centuries. It remains important in formal situations, such as an ambassador presenting his or her credentials (a letter of credence), and in certain business circles.
In general, a person would not interact socially with others unless they had been properly introduced, whether in person or by letter. A person of lower social status would request a patron of higher social status to write a letter of introduction to a third party, also of higher social status than the first person. It was important to observe the niceties of etiquette in requesting, writing and presenting such letters, in such matters as the quality of the paper used, and whether it would be delivered unsealed to allow the requesting party to read it. For example, it was best practice to deliver a letter of introduction to the intended recipient with a visiting card, to allow the recipient to reciprocate by calling upon the sender the next day.
When Benjamin Franklin served as Ambassador to France (1776–1785) he was besieged by those traveling to America who desired letters of introduction, and he drafted the following letter:
See also
Letter of recommendation
References
Etiquette
Introduction, Letter Of | Letter of introduction | [
"Biology"
] | 257 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
7,368,425 | https://en.wikipedia.org/wiki/Glob%20%28comics%29 | The Glob is the name of different fictional characters appearing in American comic books published by Marvel Comics.
Publication history
The first Glob debuted in The Incredible Hulk (vol. 2) #121 (November 1969), and was created by Roy Thomas and Herb Trimpe. Roy Thomas has stated that the character was a conscious imitation of the Heap. Thomas intended to call the character the Shape, but editor Stan Lee thought that name sounded too feminine, and insisted on the name "the Glob".
The second Glob debuted in The Incredible Hulk (vol. 2) #389 (January 1992), and was created by Tom Field and Gary Barker.
Fictional character biography
Joseph "Joe" Timms
Joe Timms is a petty criminal who escaped from prison to see his dying wife, only to drown in a swamp bog. After the Hulk throws nuclear waste into the bog, Timms is revived as the Glob, a slimy monster with immense strength but little intelligence. Subsequently, the Glob battles the Hulk before being dissolved by an experimental anti-radiation fluid. He is later resurrected by the Leader before being destroyed in an explosion.
The Glob's brain later reformed into the Golden Brain. Yagzan and the Cult of Entropy used it as a weapon, but it was lost by the Entropists in an encounter with the Man-Thing. The Golden Brain psionically molded itself into an amnesiac blond-haired man. The man was then captured and mutated by Yagzan into a clay-based lifeform version of the Glob. It battled the Man-Thing and reduced itself to mud again, suffocating Yagzan and killing him.
However, it was later revealed that the Glob had been enslaved by the Collector. It eventually rebelled against him with the assistance of the Hulk and the Man-Thing.
The Glob is then taken into S.H.I.E.L.D. custody and joins the Paranormal Containment Unit.
In the Avengers: Standoff! storyline, the Glob appears as an inmate of Pleasant Hill, a gated community established by S.H.I.E.L.D.
Sumner Samuel Beckwith
Sumner Samuel Beckwith was a geneticist working for the Pantheon who transformed into a humanoid composed of bog matter after testing an experimental recreation of the Super-Soldier Serum on himself. Subsequently, he battles the Hulk, who mistakes him for the original Glob, before being incinerated by the Man-Thing.
Powers and abilities
Both Globs are monstrous creatures resembling a semi-solid mass of vegetable matter, with inhuman strength, stamina, and durability, though limited in intelligence and agility. The Globs' bodies are difficult to harm because their muddy exteriors absorb physical attacks painlessly.
Joe Timms became the first Glob as a result from exposure to toxic waste in the swamp. As the Golden Brain, it can materialize an electrically-charged duplicate of its Glob form and recreate a physically perfect human body for itself.
Sumner Beckwith became the second Glob when he injected himself with a duplicate version of the Super-Soldier Formula. Unlike the original, he could excrete slime-like material from his body to smother living beings, and could regenerate lost limbs. He had earned a Ph.D. in genetics before his transformation.
Other characters named Glob
There have been three other characters known as Glob in the Marvel Universe. These include:
The Glob, an imaginary flaming monster from Strange Tales #88.
The Glop, who was originally known as the Glob in Journey into Mystery #72.
Glob Herman, a student at the Xavier Institute.
Reception
The Glob was ranked #31 on a listing of Marvel Comics' monster characters.
References
External links
Glob at Marvel.com
Characters created by Herb Trimpe
Characters created by Roy Thomas
Comics characters introduced in 1969
Comics characters introduced in 1992
Fictional characters from Miami
Fictional geneticists
Fictional monsters
Fictional mute characters
Fictional superorganisms
Marvel Comics psychics
Marvel Comics characters with accelerated healing
Marvel Comics characters with superhuman durability or invulnerability
Marvel Comics characters with superhuman strength
Marvel Comics mutates
Marvel Comics scientists
Marvel Comics supervillains
Marvel Comics undead characters | Glob (comics) | [
"Biology"
] | 891 | [
"Superorganisms",
"Fictional superorganisms"
] |
11,996,851 | https://en.wikipedia.org/wiki/IKK2 | IKK-β also known as inhibitor of nuclear factor kappa-B kinase subunit beta is a protein that in humans is encoded by the IKBKB (inhibitor of kappa light polypeptide gene enhancer in B-cells, kinase beta) gene.
Function
IKK-β is an enzyme that serves as a protein subunit of IκB kinase, a component of the cytokine-activated intracellular signaling pathway involved in triggering immune responses. The activity of IKK causes activation of a transcription factor known as nuclear factor kappa-B (NF-κB). Activated IKK-β phosphorylates a protein called the inhibitor of NF-κB, IκB (IκBα), which binds NF-κB to inhibit its function. Phosphorylated IκB is degraded via the ubiquitination pathway, freeing NF-κB and allowing its entry into the nucleus of the cell, where it activates various genes involved in inflammation and other immune responses.
Clinical significance
IKK-β plays a significant role in brain cells following a stroke. If NF-κB activation by IKK-β is blocked, damaged cells within the brain stay alive, and according to a study performed by the University of Heidelberg and the University of Ulm, the cells even appear to make some recovery.
Inhibition of IKK and IKK-related kinases has been investigated as a therapeutic option for the treatment of inflammatory diseases and cancer. The small-molecule inhibitor of IKK2 SAR113945, developed by Sanofi-Aventis, was evaluated in patients with knee osteoarthritis.
Interactions
IKK-β (IKBKB) has been shown to interact with
HDAC9,
CDC37,
CHUK,
CTNNB1,
FANCA,
IKBKG,
IRAK1,
NFKBIA,
MAP3K14,
NFKB1,
NFKBIB,
NCOA3,
PPM1B,
TNFRSF1A, and
TRAF2.
References
See also
IκB kinase
Molecular neuroscience
Programmed cell death
Genes mutated in mice
EC 2.7.11 | IKK2 | [
"Chemistry",
"Biology"
] | 450 | [
"Signal transduction",
"Senescence",
"Molecular neuroscience",
"Molecular biology",
"Programmed cell death"
] |
11,997,299 | https://en.wikipedia.org/wiki/Nonode | A nonode is a type of thermionic valve that has nine active electrodes. The term most commonly applies to a seven-grid vacuum tube, also sometimes called an enneode. An example was the EQ80/UQ80, which was used as an FM quadrature detector. It was developed during the introduction of TV and FM radio and delivered an output voltage large enough to directly drive an end pentode while still allowing for some negative feedback. As most of the grids were tied together, even an 8-pin Rimlock base was sufficient in the case of the EQ40.
See also
References
Vacuum tubes
"Physics"
] | 144 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
11,997,933 | https://en.wikipedia.org/wiki/Tidal%20stripping | Tidal stripping occurs when a larger galaxy pulls stars and other stellar material from a smaller galaxy because of strong tidal forces.
An example of this scenario is the interacting pair of galaxies NGC 2207 and IC 2163, which are currently in the process of tidal stripping.
See also
Galactic tide
Interacting galaxy
Galactic ram pressure stripping
References
Tidal forces | Tidal stripping | [
"Astronomy"
] | 67 | [
"Galaxy stubs",
"Astronomy stubs"
] |
11,999,136 | https://en.wikipedia.org/wiki/NGC%204051 | NGC 4051 is an intermediate spiral galaxy in the constellation of Ursa Major. It was discovered on 6 February 1788 by John Herschel.
NGC 4051 contains a supermassive black hole with a mass of 1.73 million solar masses. The galaxy was studied by the 2 m telescope of the Multicolor Active Galactic Nuclei Monitoring (MAGNUM) project.
The galaxy is a Seyfert galaxy that emits bright X-rays. However, in early 1998 the BeppoSAX satellite observed that the X-ray emission had ceased; it had risen back to normal by August 1998.
NGC 4051 is a member of the Ursa Major Cluster. Its peculiar velocity is −490 ± 34 km/s, consistent with the rest of the cluster.
Supernovae
Three supernovae have been discovered in NGC 4051:
SN 1983I (type Ic, mag. 13.5) was discovered independently by J. Kielkopf et al. on 11 May 1983, and by Tsvetkov on 12 May 1983.
SN 2003ie (type II, mag. 15.2) was discovered by Ron Arbour on 19 September 2003.
SN 2010br (type Ib/c, mag. 17.7) was discovered by Vitali Nevski on 10 April 2010.
References
Notes
External links
SN 2010br located 19".5 east and 10" south of the center at 12 03 10.96 +44 31 42.8 / Wikisky DSS2 zoom-in of same region
Ursa Major Cluster
Ursa Major
Seyfert galaxies
Intermediate spiral galaxies
4051
038068 | NGC 4051 | [
"Astronomy"
] | 328 | [
"Ursa Major",
"Constellations"
] |
11,999,293 | https://en.wikipedia.org/wiki/Stacks%20%28Mac%20OS%29 | Stacks are a feature found in Apple's macOS, starting in Mac OS X Leopard. As the name implies, they "stack" files into a small organized folder on the Dock. At the WWDC07 Keynote Presentation, Steve Jobs stated that in Leopard, the user will be given a default stack called Downloads, in which all downloaded content will be placed.
In the initial release of Leopard, Stacks could be shown in two ways: as a "fan" or a "grid". The 10.5.2 update added a third "list" view, which lets folder icons display their contents in pop-out side menus. Originally, a fan too long to fit on the screen was automatically displayed as a grid; the user could force a fan stack to always display as a grid, but could not force it to fan out, for the same reason. After the update, the top item in the fan also let the user open the folder in a Finder window.
The list view also shows an Options pop-out menu which, when opened, allows users to change the display method used by the Stack (fan, grid or list), the sort order of its items (by name, date created, date modified, date added or kind), and the appearance of the Stack icon in the Dock (folder or stack). In the other views, the same options are reached by right-clicking the icon with a two-button mouse, or by holding down the Control key while clicking with a one-button mouse. Clicking and holding the primary mouse button on the icon also opens this contextual menu.
With the release of Mac OS X Snow Leopard, Stacks were further enhanced: a subfolder can be viewed without moving to a Finder window, and Stacks gained scroll bars for folders with many files.
See also
HyperCard stacks
References
Apple's official page about Mac OS X 10.5 Leopard
CNET's review of Leopard, including information about Stacks
MacOS user interface
Graphical user interface elements | Stacks (Mac OS) | [
"Technology"
] | 448 | [
"Components",
"Graphical user interface elements"
] |
11,999,323 | https://en.wikipedia.org/wiki/NGC%201350 | NGC 1350 is a spiral galaxy located 87 million light years away in the southern constellation Fornax (the Furnace). It was discovered by Scottish astronomer James Dunlop on 24 November 1826.
Characteristics
NGC 1350 measures roughly 130,000 light years across, slightly larger than our own galaxy, the Milky Way. It is classified as an Sa(r) galaxy, meaning that it is a spiral with arms wound tightly enough to form a prominent central ring. The faint outer ring (a "pseudo-ring") is sometimes reflected at the beginning of the classification by the designation "R'1". NGC 1350 is seen on the outskirts of the Fornax cluster of galaxies, but its membership is uncertain because of its distance.
Supernova
One supernova has been observed in NGC 1350: SN 1959A (type unknown, mag. 16) was discovered by H. S. Gates on 6 January 1959.
Image
The image on the right is an almost-true color composite image made with the VLT's 8.2 meter Kueyen telescope on 26 Jan 2000, at the European Southern Observatory site at Cerro Paranal, Chile. Observations were done at the following wavelengths (and assigned the following colors): B (blue) for 6 minutes, V (green) for 4 minutes, R (orange) for 3 minutes, and I (red) for 3 minutes. The image covers a region of 8.0 x 5.0 arcminutes of sky. North is to the left and East is down.
The viewing angle and the two rings make NGC 1350 look somewhat like a cosmic "eye." Another feature is the tenuous nature of the outer arms, through which a number of background galaxies can be seen. The outer region's blue tint indicates the presence of star formation.
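For readers curious how a multi-band composite like the one described above is assembled, here is a minimal sketch in Python. The input arrays, the simple min-max stretch, and the folding of the R ("orange") and I ("red") bands into a single red channel are all illustrative assumptions, not the actual ESO reduction pipeline.

```python
import numpy as np

def rgb_composite(b, v, r, i):
    """Fold calibrated, aligned B/V/R/I frames into one RGB image.

    Channel mapping loosely follows the text: B -> blue, V -> green,
    and the R and I bands are blended into the red channel.
    """
    def stretch(x):
        # Linear min-max stretch to [0, 1]; real pipelines use log/asinh scaling.
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / (span if span else 1.0)

    red = stretch(stretch(r) + stretch(i))
    green = stretch(v)
    blue = stretch(b)
    return np.dstack([red, green, blue])  # H x W x 3 array of floats in [0, 1]
```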
NGC 1316 group
NGC 1350 is a member of the NGC 1316 group (also known as LGG 94), which includes at least 20 galaxies, including IC 335, NGC 1310, NGC 1316, NGC 1317, NGC 1326A, NGC 1341, NGC 1365, NGC 1380, NGC 1381, NGC 1382, NGC 1404, and NGC 1427A.
References
External links
Fornax
Spiral galaxies
1350
013059
-06-08-023
358-013
03291-3347
18261124
Discoveries by James Dunlop | NGC 1350 | [
"Astronomy"
] | 492 | [
"Fornax",
"Constellations"
] |
11,999,467 | https://en.wikipedia.org/wiki/Telos%20Alliance | Telos Alliance is an American corporation manufacturing audio products primarily for broadcast stations. Headquartered in Cleveland, Ohio, US, the company is divided into six divisions:
Telos Systems manufactures talkshow systems, IP audio codecs and transceivers, as well as streaming audio encoders.
Omnia Audio makes audio processors for AM, FM, HD Radio, and Internet audio streaming applications.
Axia Audio builds mixing consoles and audio distribution systems based on Livewire IP networking, an audio over Ethernet protocol.
Linear Acoustic produces TV loudness controls, metering and monitoring devices, and mixing and metadata tools.
25-Seven Systems specializes in broadcast delays, time management and processing products.
Minnetonka Audio Software delivers software-based audio automation to media production infrastructures.
History and founder
Telos Alliance began as Telos Systems, a part-time project founded in 1985 by Steve Church, a radio station engineer and talk show host (WFBQ, WMMS). Its first product was a telephone hybrid, the Telos 10, which was based on digital signal processing.
Church visited Fraunhofer in Germany in the late 1980s. There, he learned of MPEG-1 Audio Layer III audio coding. Telos became the first licensee in the United States of what is now known as MP3. MP3 became part of the solution to long-distance remote broadcasts using Integrated Services Digital Network (ISDN). This became the preferred alternative to leased lines available since the 1920s and satellite links available since the 1970s.
Audio over IP (AoIP) technology called Livewire made its debut in 2003 at the NAB Show in Las Vegas. The original Livewire-capable products included mixing consoles and analog, AES, mic and GPIO nodes. As other manufacturers began making their own AoIP broadcast equipment, there was a need for AoIP gear from different manufacturers to communicate with each other. Telos, along with other manufacturers, developed the AES67 standard for AoIP interoperability.
Church received many accolades for his work over the years. In 2010, the National Association of Broadcasters (NAB) honored him with its Radio Engineering award. He stepped down as CEO of Telos in January 2011, and died on September 28, 2012, after a three-year battle with brain cancer.
In the following years, the company also expanded its product lines. Telos Systems continued to develop broadcast telephone systems, IP audio codecs and transceivers, and processing and encoding for streaming audio. Networked radio consoles, audio interfaces and routing control, networked intercom, and related software were created under the Axia Audio brand name. Audio processing, processing and encoding products for streaming audio, voice processing, analysis tools, and studio audio processing were developed under the Omnia Audio brand. The three brands operated under the larger corporate umbrella known as Telos Systems.
Growth of the company continued with the acquisition of new partners. Linear Acoustic of Lancaster, Pennsylvania, was acquired, bringing its product line of TV loudness controls, metering and monitoring devices, and mixing and metadata tools, and the corporate name was changed to The Telos Alliance. Shortly thereafter, 25-Seven came on board; this Boston-based company specializes in broadcast delays and in time-management and processing products that make radio operations more efficient and profitable. In September 2015, Minnetonka Audio Software joined the Telos Alliance through a merger of the companies. The Minnetonka, Minnesota-based company delivers a file-based software alternative to hardware program optimizers, providing audio automation to media production infrastructures.
In September 2016, Linear Acoustic and Minnetonka Audio were rebranded as The TV Solutions Group, which provides consulting and partnerships with television broadcasters seeking to transition to the latest technology.
References
Companies based in Cleveland
Broadcast engineering
Manufacturing companies established in 1985
Manufacturers of professional audio equipment
Audio equipment manufacturers of the United States
American companies established in 1985 | Telos Alliance | [
"Engineering"
] | 799 | [
"Broadcast engineering",
"Electronic engineering"
] |
12,000,232 | https://en.wikipedia.org/wiki/ExploraVision | ExploraVision is a scientific national contest held in the United States and Canada, a joint project by Toshiba Corporation and the National Science Teachers Association. Designed for K–12 students of all interest, skill and ability levels, ExploraVision encourages its participants to create and explore a vision of future technology by developing new ways to apply current science. Since 1992, more than 360,000 students from across the United States and Canada have competed.
Requirements
Each student is limited to one entry per year, and each team may have no more than four students. Students and their teachers or mentors submit a Toshiba/NSTA ExploraVision Awards entry form signed by the students, coach and mentor, together with an abstract of the project, a detailed project description, a list of currently available technology used, a bibliography, and five web page graphics that are later used to create an official web page for the project. Teams that advance to the national level face three further tasks:
1. Build a prototype demonstrating how the project would work.
2. Create a video showing what the project does and why it would be useful.
3. Build a website, based on the submitted graphics, that presents everything in the original entry.
First-place national winners each receive $10,000 in college funds, and second-place winners receive $5,000. Both teams receive an all-expenses-paid trip to Washington, D.C., where they appear on live television, receive their awards and take part in many other activities.
References
External links
Official ExploraVision Site
Past Winners from Education World
Recurring events established in 1992
1992 establishments in the United States
Science competitions
Toshiba
Science events | ExploraVision | [
"Technology"
] | 366 | [
"Science and technology awards",
"Science competitions"
] |
12,002,936 | https://en.wikipedia.org/wiki/Displacement%E2%80%93length%20ratio | The displacement–length ratio (DLR or D/L ratio) is a calculation used to express how heavy a boat is relative to its waterline length.
DLR was first published in
It is calculated by dividing a boat's displacement in long tons (2,240 pounds) by the cube of one one-hundredth of the waterline length (LWL, in feet):

DLR = (displacement in long tons) / (0.01 × LWL)³
DLR can be used to compare the relative mass of various boats no matter what their length. A DLR less than 200 is indicative of a racing boat, while a DLR greater than 300 or so is indicative of a heavy cruising boat.
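As a minimal sketch of the calculation just described (the function name and example figures are illustrative, not taken from any published table):

```python
def displacement_length_ratio(displacement_lb: float, lwl_ft: float) -> float:
    """D/L ratio: displacement in long tons over (0.01 x waterline length)^3."""
    long_tons = displacement_lb / 2240.0     # 1 long ton = 2,240 lb
    return long_tons / (0.01 * lwl_ft) ** 3

# Example: a 20,000 lb boat with a 30 ft waterline.
# 20000 / 2240 = 8.93 long tons; 8.93 / 0.30**3 = about 331 -> heavy cruiser.
print(round(displacement_length_ratio(20000, 30)))  # 331
```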
See also
Sail Area-Displacement ratio
References
Ship measurements
Nautical terminology
Engineering ratios
Naval architecture | Displacement–length ratio | [
"Mathematics",
"Engineering"
] | 143 | [
"Naval architecture",
"Metrics",
"Engineering ratios",
"Quantity",
"Marine engineering"
] |
12,003,118 | https://en.wikipedia.org/wiki/Loewner%27s%20torus%20inequality | In differential geometry, Loewner's torus inequality is an inequality due to Charles Loewner. It relates the systole and the area of an arbitrary Riemannian metric on the 2-torus.
Statement
In 1949 Charles Loewner proved that every metric on the 2-torus satisfies the optimal inequality

sys² ≤ (2/√3) · area,

where "sys" is its systole, i.e. least length of a noncontractible loop. The constant 2/√3 appearing on the right hand side is the Hermite constant γ₂ in dimension 2, so that Loewner's torus inequality can be rewritten as

sys² ≤ γ₂ · area.
The inequality was first mentioned in the literature in .
Case of equality
The boundary case of equality is attained if and only if the metric is flat and homothetic to the so-called equilateral torus, i.e. a torus whose group of deck transformations is precisely the hexagonal lattice spanned by the cube roots of unity in ℂ.
Alternative formulation
Given a doubly periodic metric on ℝ² (e.g. an imbedding in ℝ³ which is invariant by a ℤ² isometric action), there is a nonzero element g ∈ ℤ² and a point p ∈ ℝ² such that dist(p, g.p)² ≤ (2/√3) area(F), where F is a fundamental domain for the action, while dist is the Riemannian distance, namely the least length of a path joining p and g.p.
Proof of Loewner's torus inequality
Loewner's torus inequality can be proved most easily by using the computational formula for the variance,

Var(X) = E(X²) − (E(X))².
Namely, the formula is applied to the probability measure defined by the measure of the unit area flat torus in the conformal class of the given torus. For the random variable X, one takes the conformal factor of the given metric with respect to the flat one. Then the expected value E(X²) of X² expresses the total area of the given metric. Meanwhile, the expected value E(X) of X can be related to the systole by using Fubini's theorem. The variance of X can then be thought of as the isosystolic defect, analogous to the isoperimetric defect of Bonnesen's inequality. This approach therefore produces the following version of Loewner's torus inequality with isosystolic defect:

area − (√3/2) · sys² ≥ Var(f),

where f is the conformal factor of the metric with respect to a unit area flat metric in its conformal class.
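A minimal sketch of how these pieces combine, assuming the flat metric of the conformal class is normalized to unit area and writing ℓ₀ for the length of its shortest closed geodesic, so that ℓ₀² ≤ 2/√3 by Hermite's bound (with equality exactly for the hexagonal lattice):

```latex
% The metric is f^2 times the unit-area flat metric; E(.) denotes expectation
% with respect to the flat area measure.
\begin{align*}
\operatorname{area} &= E(f^2),\\
E(f) &\ge \frac{\operatorname{sys}}{\ell_0}
  \quad\text{(Fubini along the parallel closed flat geodesics of length } \ell_0\text{)},\\
0 \le \operatorname{Var}(f) &= E(f^2) - E(f)^2
  \le \operatorname{area} - \frac{\operatorname{sys}^2}{\ell_0^2}
  \le \operatorname{area} - \frac{\sqrt{3}}{2}\,\operatorname{sys}^2 .
\end{align*}
```

The last line is the defect version stated above, and discarding Var(f) ≥ 0 recovers Loewner's torus inequality.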
Higher genus
Whether or not the inequality

sys² ≤ (2/√3) · area

is satisfied by all surfaces of nonpositive Euler characteristic is unknown. For orientable surfaces of genus 2 and of genus 20 and above, the answer is affirmative; see work by Katz and Sabourau below.
See also
Pu's inequality for the real projective plane
Gromov's systolic inequality for essential manifolds
Gromov's inequality for complex projective space
Eisenstein integer (an example of a hexagonal lattice)
Systoles of surfaces
References
Riemannian geometry
Differential geometry
Geometric inequalities
Differential geometry of surfaces
Systolic geometry | Loewner's torus inequality | [
"Mathematics"
] | 608 | [
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry"
] |
12,003,373 | https://en.wikipedia.org/wiki/AG%20magazin | AG magazin is a Serbian magazine for architecture and construction, founded in 2001. The magazine was created for the purposes of revealing new information about world projects, new engineering achievements, trends in house building and environmental issues. It is based in Belgrade.
References
External links
AG magazin official website
Architecture magazines
Engineering magazines
Magazines established in 2001
Mass media in Belgrade
Magazines published in Serbia
Serbian-language magazines | AG magazin | [
"Engineering"
] | 81 | [
"Architecture stubs",
"Architecture"
] |
12,004,229 | https://en.wikipedia.org/wiki/Joseph%20Priestley%20and%20Dissent | Joseph Priestley (13 March 1733 (old style) – 8 February 1804) was a British natural philosopher, political theorist, clergyman, theologian, and educator. He was one of the most influential Dissenters of the late 18th-century.
A member of marginalized religious groups throughout his life and a proponent of what was called "rational Dissent", Priestley advocated religious toleration (challenging even William Blackstone), helped Theophilus Lindsey found the Unitarian church and promoted the repeal of the Test and Corporation Acts in the 1780s. As the foremost British expounder of providentialism, he argued for extensive civil rights, believing that individuals could bring about progress and eventually the Millennium. Priestley's religious beliefs were integral to his metaphysics as well as his politics and he was the first philosopher to "attempt to combine theism, materialism, and determinism," a project that has been called "audacious and original."
Defender of Dissenters and political philosopher
Priestley claimed throughout his life that politics did not interest him and that he did not participate in it. What appeared to others as political arguments were for Priestley always, at their root, religious arguments. Many of what we would call Priestley's political writings were aimed at supporting the repeal of the Test and Corporation Acts, a political issue that had its foundation in religion.
Between 1660 and 1665, Parliament passed a series of laws that restricted the rights of dissenters: they could not hold political office, teach school, serve in the military or attend Oxford and Cambridge unless they subscribed to the thirty-nine Articles of the Church of England. In 1689, a Toleration Act was passed that restored some of these rights, provided that dissenters subscribed to 36 of the 39 articles (Catholics and Unitarians were excluded), but not all Dissenters were willing to accept this compromise and many refused to conform. Throughout the 18th century, Dissenters were persecuted and the laws against them were erratically enforced. Dissenters continually petitioned Parliament to repeal the Test and Corporation Acts, claiming that the laws made them second-class citizens. The situation worsened in 1753 after the passage of Lord Hardwicke's Marriage Act, which stipulated that all marriages must be performed by Anglican ministers; some ministers refused to perform weddings for Dissenters at all.
Priestley's friends urged him to publish a work on the injustices borne by Dissenters, a topic to which he had already alluded in his Essay on a Course of Liberal Education for Civil and Active Life (1765). The result, published in 1768, was Priestley's Essay on the First Principles of Government, which his major modern biographer calls his "most systematic political work." The book went through three English editions and was translated into Dutch. Jeremy Bentham credited it with inspiring his "greatest happiness principle." The Essay on Government is not strictly utilitarian, however; like all of Priestley's works, it is infused with the belief that society is progressing towards perfection. Although much of the text rearticulates John Locke's arguments from his Two Treatises of Government (1689), it also makes a useful distinction between political and civil rights and argues for the protection of extensive civil rights. He distinguishes between a private and a public sphere of governmental control; education and religion, in particular, he maintains, are matters of private conscience and should not be administered by the state. As Kramnick states, "Priestley's fundamental maxim of politics was the need to limit state interference on individual liberty." For early liberals like Priestley and Jefferson, the "defining feature of liberal politics" was its emphasis on the separation of church and state. In a statement that articulates key elements of early liberalism and anticipates utilitarian arguments, Priestley wrote:
It must necessarily be understood, therefore, that all people live in society for their mutual advantage; so that the good and happiness of the members, that is the majority of the members of any state, is the great standard by which every thing relating to that state must finally be determined.
Priestley acknowledged that revolution was necessary at times but believed that Britain had already had its only necessary revolution in 1688, although his later writings would suggest otherwise. Priestley's later radicalism emerged from his belief that the British government was infringing upon individual freedom. Priestley would repeatedly return to these themes throughout his career, particularly when defending the rights of Dissenters.
Critic of William Blackstone's Commentaries
In another attempt to champion the rights of Dissenters, Priestley defended their constitutional rights against the attacks of William Blackstone, an eminent legal theorist. Blackstone's Commentaries, fast becoming the standard reference for legal interpretation, stated that dissent from the Church of England was a crime and argued that Dissenters could not be loyal subjects. Furious, Priestley lashed out with his Remarks on Dr. Blackstone's Commentaries (1769), correcting Blackstone's grammar, his history and his interpretation of the law. Blackstone, chastened, replied in a pamphlet and altered his Commentaries in subsequent editions; he rephrased the offending passages but still described Dissent as a crime.
Founder of Unitarianism
When Parliament rejected the Feather's Tavern petition in 1772, which would have released Dissenters from subscribing to the thirty-nine articles, many Dissenting ministers, as William Paley wrote, "could not afford to keep a conscience." Priestley's friend from Leeds, Theophilus Lindsey, decided to try. He gave up his church, sold his books so that he would have money to live on and established the first Unitarian chapel in London. The radical publisher Joseph Johnson helped him find a building, which became known as Essex Street Chapel. Priestley's patron at the time, Lord Shelburne, promised that he would keep the church out of legal difficulties (barrister John Lee, later Attorney-General, also helped), and Priestley and many others hurried to raise money for Lindsey.
On 17 April 1774, the chapel had its first service. Lindsey had designed his own liturgy, of which many were critical. Priestley rushed to his defense with Letter to a Layman, on the Subject of the Rev. Mr. Lindsey's Proposal for a Reformed English Church (1774), claiming that only the form of worship had been altered and attacking those who only followed religion as a fashion. Priestley attended the church regularly while living in Calne with Shelburne and even occasionally preached there. He continued to support institutionalized Unitarianism after he moved to Birmingham in 1780, encouraging the foundation of new Unitarian chapels throughout Britain and the United States. He wrote numerous letters in defence of Unitarianism, in particular against certain ministers and scholars such as Samuel Horsley, Alexander Geddes, George Horne and Thomas Howes. These letters were compiled and published (by the author) in an "annual reply" covering the years 1786-1789. He also compiled and edited a liturgy and hymnbook for the new denomination.
Religious activist
In 1787, 1789 and 1790, Dissenters again tried to repeal the Test and Corporation Acts. Although initially it looked as if they might succeed, by 1790, with the fears of the French Revolution looming in the minds of many members of Parliament, few were swayed by Charles James Fox's arguments for equal rights. Political cartoons, one of the most effective and popular media of the time, skewered the Dissenters and Priestley specifically. In the midst of these trying times, it was the betrayal of William Pitt and Edmund Burke that most angered Priestley and his friends; they had expected the two men's support and instead both argued vociferously against the repeal. Priestley wrote a series of Letters to William Pitt and Letters to Burke in an attempt to persuade them otherwise, but to no avail. These publications unfortunately also inflamed the populace against him.
In its propaganda against the "radicals," Pitt's administration argued that Priestley and other Dissenters wanted to overthrow the government. Dissenters who had supported the French revolution came under increasing suspicion as skepticism over the revolution's benefits and ideals grew. When in 1790 Richard Price, the other leading Dissenting minister in Britain at the time, gave a rousing sermon supporting the French revolutionaries and comparing them to English revolutionaries of 1688, Burke responded with his famous Reflections on the Revolution in France. Priestley rushed to the defense of his friend and of the revolutionaries, publishing one of the many responses, along with Thomas Paine and Mary Wollstonecraft, that became part of the "Revolution Controversy." Paradoxically, it is Burke, the secular statesman, who argued against science and maintained that religion should be the basis of civil society while Priestley, the Dissenting minister, argued that religion could not provide the basis for society and should be restricted to one's private life.
Political adviser to Lord Shelburne
Priestley also served as a kind of political adviser to Lord Shelburne while working for him as a tutor and librarian; he gathered information for him on parliamentary issues and served as a conduit of information for Dissenting and American interests. Priestley published several political works during these years, most of which were focused on the rights of dissenters, such as An Address to Protestant Dissenters . . . on the Approaching Election of Members of Parliament (1774). This pamphlet was published anonymously and Schofield calls it "the most outspoken of anything he ever wrote." Priestley called on Dissenters to vote against those in Parliament who had, by refusing to repeal the Test and Corporation Acts, denied them their rights. He wrote a second part dedicated to defending the rebelling American colonists at the behest of Benjamin Franklin and John Fothergill. The pamphlets created a stir throughout Britain but the results of the election did not favor Shelburne's party.
Materialist philosopher and theologian
In a series of five major metaphysical works, all written between 1774 and 1778, Priestley laid out his materialist view of the world and tried "to defend Christianity by making its metaphysical framework more intelligible," even though such a position "entailed denial of free will and the soul." The first major work to address these issues was The Examination of Dr. Reid's Inquiry ... Dr. Beattie's Essay ... and Dr. Oswald's Appeal (1774). He challenged Scottish common-sense philosophy, which claimed that "common sense" trumped reason in matters of religion. Relying on Locke and Hartley's associationism, he argued strenuously against Reid's theory of mind and maintained that ideas did not have to resemble their referents in the world; ideas for Priestley were not pictures in the mind but rather causal associations. From these arguments, Priestley concluded that "ideas and objects must be of the same substance," a radically materialist view at the time. The book was popular and readers of all persuasions read it. Charles Lamb wrote to Samuel Taylor Coleridge, recommending "that clear, strong, humorous, most entertaining piece of reasoning" and Priestley heard rumors that even Hume had read the work and "declared that the manner of the work was proper, as the argument was unanswerable."
When arguing for materialism in his Examination Priestley strongly suggested that there was no mind-body duality. Such opinions shocked and angered many of his readers and reviewers who believed that for the soul to exist, there had to be a mind-body duality. In order to clarify his position he wrote Disquisitions relating to Matter and Spirit (1777), which claimed that both "matter" and "force" are active, and therefore that objects in the world and the mind must be made of the same substance. Priestley also argued that discussing the soul was impossible because it is made of a divine substance and humanity cannot gain access to the divine. He therefore denied the materialism of the soul while simultaneously claiming its existence. Although he buttressed his arguments with familiar scholarship and ancient authorities, including scripture, he was labeled an atheist. At least a dozen hostile refutations of the work were published by 1782.
Priestley continued this series of arguments in The Doctrine of Philosophical Necessity Illustrated (1777); the text was designed as an "appendix" to the Disquisitions and "suggests that materialism and determinism are mutually supporting." Priestley explicitly stated that humans had no free will: "all things, past, present, and to come, are precisely what the Author of nature really intended them to be, and has made provision for." His notion of "philosophical necessity," which he was the first to claim was consonant with Christianity, at times resembles absolute determinism; it is based on his understanding of the natural world and theology: like the rest of nature, man's mind is subject to the laws of causation, but because a benevolent God has created these laws, Priestley argued, the world as a whole will eventually be perfected. He argued that the associations made in a person's mind were a necessary product of their lived experience because Hartley's theory of associationism was analogous to natural laws such as gravity. Priestley contends that his necessarianism can be distinguished from fatalism and predestination because it relies on natural law. Isaac Kramnick points out the paradox of Priestley's positions: as a reformer, he argued that political change was essential to human happiness and urged his readers to participate, but he also claimed in works such as Philosophical Necessity that humans have no free will. Philosophical Necessity influenced the 19th-century utilitarians John Stuart Mill and Herbert Spencer, who were drawn to its determinism. Immanuel Kant, entranced by Priestley's determinism but repelled by his reliance on observed reality, created a transcendental version of determinism that he claimed allowed liberty to the mind and soul.
In the last of his important books on metaphysics, Letters to a Philosophical Unbeliever (1780), Priestley continues to defend his thesis that materialism and determinism can be reconciled with a belief in a God. The seed for this book had been sown during his trip to Paris with Shelburne. Priestley recalled in his Memoirs:
As I chose on all occasions to appear as a Christian, I was told by some of them [philosophes], that I was the only person they had ever met with, of whose understanding they had any opinion, who professed to believe Christianity. But on interrogating them on the subject, I soon found that they had given no proper attention to it, and did not really know what Christianity was ... Having conversed so much with unbelievers at home and abroad, I thought I should be able to combat their prejudices with some advantage, and with this view I wrote ... the first part of my 'Letters to a Philosophical Unbeliever', in proof of the doctrines of a God and a providence, and ... a second part, in defence of the evidences [sic] of Christianity.
The text addresses those whose faith is shaped by books and fashion; Priestley draws an analogy between the skepticism of educated men and the credulity of the masses. He again argues for the existence of God using what Schofield calls "the classic argument from design ... leading from the necessary existence of a creator-designer to his self-comprehension, eternal existence, infinite power, omnipresence, and boundless benevolence." In the three volumes, Priestley discusses, among many other works, Baron d'Holbach's Systeme de la Nature, often called the "bible of atheism." He claimed that d'Holbach's "energy of nature," though it lacked intelligence or purpose, was really a description of God. Priestley believed that David Hume's style in the Dialogues Concerning Natural Religion (1779) was just as dangerous as its ideas; he feared the open-endedness of the Humean dialogue.
Notes
Bibliography
For a complete bibliography of Priestley's writings, see list of works by Joseph Priestley.
Fitzpatrick Martin. "Heretical Religion and Radical Political Ideas in Late Eighteenth-Century England." The Transformation of Political Culture: England and Germany in the Late Eighteenth Century. Ed. Eckhart Hellmuth. Oxford: ?, 1990.
Fitzpatrick, Martin. "Joseph Priestley and the Cause of Universal Toleration." The Price-Priestley Newsletter 1 (1977): 3–30.
Garrett, Clarke. "Joseph Priestley, the Millennium, and the French Revolution." Journal of the History of Ideas 34.1 (1973): 51–66.
Gibbs, F. W. Joseph Priestley: Adventurer in Science and Champion of Truth. London: Thomas Nelson and Sons, 1965.
Haakonssen, Knud, ed. Enlightenment and Religion: Rational Dissent in Eighteenth-Century Britain. Cambridge: Cambridge University Press, 1996.
Jackson, Joe. A World on Fire: A Heretic, An Aristocrat and the Race to Discover Oxygen. New York: Viking, 2005.
Kramnick, Isaac. "Eighteenth-Century Science and Radical Social Theory: The Case of Joseph Priestley's Scientific Liberalism." Journal of British Studies 25 (1986): 1–30.
McEvoy, John G. "Enlightenment and dissent in science: Joseph Priestley and the limits of theoretical reasoning." Enlightenment and Dissent 2 (1983): 47–68; 57–8.
McLachlan, John. Joseph Priestley Man of Science 1733–1804: An Iconography of a Great Yorkshireman. Braunton and Devon: Merlin Books Ltd., 1983.
Philip, Mark. "Rational Religion and Political Radicalism." Enlightenment and Dissent 4 (1985): 35–46.
Schofield, Robert E. The Enlightenment of Joseph Priestley: A Study of his Life and Work from 1733 to 1773. University Park: Pennsylvania State University Press, 1997.
Schofield, Robert E. The Enlightened Joseph Priestley: A Study of His Life and Work from 1773 to 1804. University Park: Pennsylvania State University Press, 2004.
Sheps, Arthur. "Joseph Priestley's Time Charts: The Use and Teaching of History by Rational Dissent in late Eighteenth-Century England." Lumen 18 (1999): 135–154.
Tapper, Alan. "Joseph Priestley." Dictionary of Literary Biography 252: British Philosophers 1500–1799. Eds. Philip B. Dematteis and Peter S. Fosl. Detroit: Gale Group, 2002.
Thorpe, T.E. Joseph Priestley. London: J. M. Dent, 1906.
Uglow, Jenny. The Lunar Men: Five Friends Whose Curiosity Changed the World. New York: Farrar, Straus and Giroux, 2002.
External links
The Joseph Priestley Society
www.josephpriestley.com – Comprehensive site which includes a bibliography, links to related sites, images, information on manuscript collections, and other helpful information.
Full-text links
A General History of the Christian Church (full text from google books)
A History of the Corruptions of Christianity (full text from google books)
The Doctrines of Heathen Philosophy compared with those of Revelation (full text from google books)
Institutes of Natural and Revealed Religion, Vol. 1 of 2 (full text from google books)
Institutes of Natural and Revealed Religion, Vol. 2 of 2 (full text from google books)
An History of Early Opinions Concerning Jesus Christ, Vol. 1 (full text from google books)
An History of Early Opinions Concerning Jesus Christ, Vol. 2 (full text from google books)
An History of Early Opinions Concerning Jesus Christ, Vol. 3 (full text from google books)
An History of Early Opinions Concerning Jesus Christ, Vol. 4 (full text from google books)
A Free Address to Protestant Dissenters (full text from google books)
English Christian theologians
Christian theological movements
Priestley, Joseph and Dissent
Determinism
English Unitarians
Materialists
Eponymous political ideologies | Joseph Priestley and Dissent | [
"Physics"
] | 4,138 | [
"Materialism",
"Matter",
"Materialists"
] |
12,004,466 | https://en.wikipedia.org/wiki/Glow%20plug%20%28model%20engine%29 | A glow plug engine, or glow engine, is a type of small internal combustion engine typically used in model aircraft, model cars and similar applications. The ignition is accomplished by a combination of heating from compression, heating from a glow plug and the catalytic effect of the platinum within the glow plug on the methanol within the fuel.
History
The American inventor Ray Arden introduced the first glow plug for model engines in 1947.
Model glow plug design
The glow plugs used in model engines are significantly different from those used in full-size diesel engines. In full-size engines, the glow plug is used only for starting. In model engines, the glow plug is an integral part of the ignition system because of the catalytic effect of the platinum wire. The working element of the plug is a durable helical wire filament, mostly platinum, recessed into the plug's tip. When an electric current runs through the plug, or when it is exposed to the heat of the combustion chamber, the filament glows, enabling it to help ignite the special fuel used by these engines. Power is applied through a special connector attached to the outside of the engine, supplied by a rechargeable battery or a DC power source.
There are at least three types of glow plugs. The standard glow plug, which comes in long/standard and short (for smaller engines) versions, in both open and idle-bar configurations, has a threaded tube that penetrates the combustion chamber to varying degrees. Because the combustion chamber is small, changing brands or styles of standard glow plug can affect the compression ratio. Turbo style (European/metric) and Nelson style (North American/English) glow plugs do not penetrate the combustion chamber. Instead they have an angled shoulder that seals against a matching surface at the bottom of the glow plug hole. As a Turbo or Nelson plug is installed and seals the combustion chamber, it creates a smooth surface inside the head. This smooth surface is very desirable for high-performance applications such as Control Line Speed events and high-revving RC cars. The design of Turbo and Nelson plugs allows switching between brands without the possibility of affecting compression. Turbo and Nelson plugs are not interchangeable with each other, as they have different threads and dimensions.
Fuel
Glow fuel generally consists of methanol with varying degrees of nitromethane content as an oxidizer for greater power, generally between 5% and 30% of the total blend. These volatiles are suspended in a base oil of castor oil, synthetic oil or a blend of both for lubrication and heat control. The lubrication system is a "total loss" type, meaning that the oil is expelled from the exhaust after circulating through the engine. The fuel ignites when it comes in contact with the heating element of the glow plug. Between strokes of the engine, the wire remains hot, continuing to glow partly due to thermal inertia, but largely due to the catalytic combustion reaction of methanol remaining on the platinum filament. This keeps the filament hot, allowing it to ignite the next charge, thus sustaining the power cycle.
Some aircraft engines are designed to run on fuel with no nitromethane content whatsoever. Glow fuel of this type is referred to as "FAI fuel" after the aeronautical governing body of the same name, which requires such fuel in some competitions.
Starting
To start a glow engine, a direct current of around 3 amps at 1.5 volts is applied to the plug from a "glow plug igniter" or "glow driver", powered by a high-current single-cell rechargeable battery, or by a purpose-built "power panel" running on a 12 VDC source. The current heats the platinum filament, causing it to glow red hot, hence the name. The engine is then spun from the outside using a manual crank, a built-in rope-based recoil starter, a spring-loaded motor or a purpose-built electric motor, or by hand, to introduce fuel to the chamber. Once the fuel has ignited and the engine is running, the electrical connection is no longer needed and can be removed. Each combustion keeps the glow plug filament hot, which, along with the catalysis of methanol oxidation by the platinum, allows the ignition of the next charge in a self-sustaining power cycle.
The rechargeable battery may be of NiMH, NiCd, Li-ion, or lead-acid type. The higher fully-charged voltages of lead-acid (2.0 V) and Li-ion (4.2 V) cells, if applied directly to a regular 1.5 volt glow plug, will cause it to burn out instantaneously, so either a resistor of the proper value and wattage, or a high-power germanium transistor's base/emitter junction (in series with one of the plug's terminals), can be used to limit the current through the plug to an appropriate level. Even with an appropriate power input, glow plugs can burn out at any time, and hobbyists are encouraged to carry spares.
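A small sketch of the resistor sizing implied above; the 1.5 V and 3 A figures are the nominal values quoted in this article, and real plugs vary, so treat all numbers as assumptions.

```python
def glow_series_resistor(v_source: float, v_plug: float = 1.5, i_plug: float = 3.0):
    """Series resistor needed to drop a higher cell voltage to glow-plug levels."""
    r_ohms = (v_source - v_plug) / i_plug  # Ohm's law across the dropped voltage
    p_watts = i_plug ** 2 * r_ohms         # dissipation sets the wattage rating
    return r_ohms, p_watts

# Fully charged Li-ion cell (4.2 V): about 0.9 ohm, dissipating about 8.1 W.
print(glow_series_resistor(4.2))
# Lead-acid cell (2.0 V): about 0.17 ohm, dissipating about 1.5 W.
print(glow_series_resistor(2.0))
```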
Technically a glow plug engine is fairly similar to a diesel engine and hot bulb engine in that it uses internal heat to ignite the fuel, but since the ignition timing is not controlled by fuel injection (as in an ordinary diesel engine), or electrically (as in a spark ignition engine), it must be adjusted by changing fuel/air mixture and plug/coil design (usually through adjusting various inlets and controls on the engine itself.) A richer mixture will tend to cool the filament and so retard ignition, slowing the engine. A leaner mixture produces more power, but the engine is less well lubricated, which can cause overheating and detonation. This "configuration" can also be adjusted by using varying plug designs for a more exact thermal control. Of all internal combustion engine types, the glow plug engine most resembles the hot bulb engine, since on both types the ignition occurs due to a "hot spot" within the engine combustion chamber.
Glow plug engines can be designed for two-cycle operation (ignition every rotation) or four-cycle operation (ignition every two rotations). The two-cycle (or two-stroke) version produces more power, but the four-cycle engines have more low-end torque, are less noisy and have a lower-pitched, more realistic sound.
Considerations when using glow plugs
A glow plug engine must be operated with the correct glow plug temperature. Large engines can operate with lower temperatures, while smaller engines radiate heat to the air more quickly and require a hotter glow plug to maintain the correct temperature for ignition. The ambient temperature also dictates the best glow plug temperature; in cold weather, hotter plugs are needed. Since glow plug engines are air-cooled, an engine that "runs hot" can sometimes benefit from a lower plug temperature, although this may cause rougher idling and difficulty in tuning. The operating speed of the engine must also be considered; if the engine is to run at consistently high RPM, such as with an airplane or a car on a mostly straight track, a lower plug temperature is more efficient. If the engine is to operate at lower RPM, combustion will not heat the engine as much, and a hotter plug is required.
The fuel type and the fuel/air mixture must also be considered. The greater the nitromethane content in the fuel, the hotter the fuel will burn; high "nitro" fuels require cooler glow plugs. Lean mixtures (low fuel-to-air ratio) burn hotter than rich mixtures (higher fuel-to-air ratio) and operating temperatures can be raised to levels that can prematurely destroy the glow plug if too lean a mixture is used ("over-leaning").
If the engine slows down ("sags") when the battery power is removed, the plug temperature or the nitromethane content of the fuel should be increased, as the engine is not sufficiently hot. If the engine backfires when it is hand-cranked, it is operating too hot and the glow plug temperature or "nitro" content should be lowered.
Glow plugs have a limited lifetime and users are advised to have several replacement plugs on hand. Replacement plugs must be the correct type; plugs for turbo engines are not compatible with plugs for standard engines. The plugs should be tightened a quarter-turn past a snug fit to avoid over-tightening. An operating glow plug is extremely hot and should never be removed until it has cooled; likewise, care must be taken when fueling, because a hot glow plug can ignite fuel. Overheating of the battery can also be dangerous, and only well-made connectors should be used.
Technical specifications
Turbo Glow Plug
Overall length: 17 mm (0.67")
Diameter: 9 mm (0.35")
Thread size: M8 × 0.75 mm
Normal Glow Plug
Length: 20.3 mm (0.8")
Diameter: 6.35 mm (0.25")
Threads: 1/4-32 UNEF (the thread specification most often used for model engines)
See also
Nitro engine
References
External links
All about glow plugs
How to Choose the Right Glow Plug
Model engines
Model aircraft | Glow plug (model engine) | [
"Technology"
] | 1,904 | [
"Model engines",
"Engines"
] |
12,004,499 | https://en.wikipedia.org/wiki/Priority%20%28biology%29 | Priority is a principle in biological taxonomy by which a valid scientific name is established based on the oldest available name. It is a decisive rule in botanical and zoological nomenclature to recognise the first binomial name (also called binominal name in zoology) given to an organism as the correct and acceptable name. The purpose is to select one scientific name as a stable one out of two or more alternate names that often exist for a single species.
The International Code of Nomenclature for algae, fungi, and plants (ICN) defines it as: "A right to precedence established by the date of valid publication of a legitimate name or of an earlier homonym, or by the date of designation of a type." Basically, it is a scientific procedure to eliminate duplicate or multiple names for a species, which is why Lucien Marcus Underwood called it "the principle of outlaw in nomenclature".
History
The principle of priority has not always been in place. When Carl Linnaeus laid the foundations of modern nomenclature, he offered no recognition of prior names. The botanists who followed him were just as willing to overturn Linnaeus's names. The first sign of recognition of priority came in 1813, when A. P. de Candolle laid out some principles of good nomenclatural practice. He favoured retaining prior names, but left wide scope for overturning poor prior names.
In botany
During the 19th century, the principle gradually came to be accepted by almost all botanists, but debate continued to rage over the conditions under which the principle might be ignored. Botanists on one side of the debate argued that priority should be universal and without exception. This would have meant a one-off major disruption as countless names in current usage were overturned in favour of archaic prior names. In 1891, Otto Kuntze, one of the most vocal proponents of this position, did just that, publishing over 30,000 new combinations in his Revisio Generum Plantarum. He then followed with further such publications in 1893, 1898 and 1903. His efforts, however, were so disruptive that they appear to have benefited his opponents. By the 1900s, the need for a mechanism for the conservation of names was widely accepted, and details of such a mechanism were under discussion. The current system of "modified priority" was essentially put in place at the Cambridge Congress of 1930.
In zoology
By the 19th century, the Linnaean binomial system had been generally adopted by zoologists. In doing so, many zoologists tried to dig up the oldest possible scientific names, with the result that proper and consistent names prevailing at the time, including those used by eminent zoologists such as Louis Agassiz, Georges Cuvier, Charles Darwin, Thomas Huxley and Richard Owen, came to be challenged. Scientific organisations tried to establish practical rules for changing names, but not a uniform system.
The first zoological code with a priority rule was formulated in 1842 by a committee appointed by the British Association, comprising Charles Darwin, John Stevens Henslow, Leonard Jenyns, William Ogilby, John O. Westwood, John Phillips, Ralph Richardson and Hugh Edwin Strickland. The first meeting was at Darwin's house in London. The committee's report, written by Strickland, was implemented as the Rules of Zoological Nomenclature, popularly known as the Stricklandian Code. It was not endorsed by all zoologists, as it allowed naming, renaming and reclassifying with relative ease, as Science reported: "The worst feature of this abuse is not so much the bestowal of unknown names of well-known creatures as the transfer of one to another."
Principle
In zoology, the principle of priority is defined by the International Code of Zoological Nomenclature (4th edition, 1999) in its article 23: "The valid name of a taxon is the oldest available name applied to it, unless that name has been invalidated or another name is given precedence by any provision of the Code or by any ruling of the Commission [the International Commission on Zoological Nomenclature]. For this reason priority applies to the validity of synonyms [Art. 23.3], to the relative precedence of homonyms [Arts. 53-60], the correctness or otherwise of spellings [Arts. 24, 32], and to the validity of nomenclatural acts (such as acts taken under the Principle of the First Reviser [Art. 24.2] and the fixation of name-bearing types [Arts. 68, 69, 74.1.3, 75.4])." There are exceptions: another name may be given precedence by any provision of the Code or by any ruling of the Commission. According to the ICZN preamble: "Priority of publication is a basic principle of zoological nomenclature; however, under conditions prescribed in the Code its application may be modified to conserve a long-accepted name in its accustomed meaning. When stability of nomenclature is threatened in an individual case, the strict application of the Code may under specified conditions be suspended by the International Commission on Zoological Nomenclature." In botany, the principle is defined by the Shenzhen Code (the 2017 International Code of Nomenclature for algae, fungi, and plants) in its article 11: "Each family or lower-ranked taxon with a particular circumscription, position, and rank can bear only one correct name. Special exceptions are made for nine families and one subfamily for which alternative names are permitted (see Art. 18.5 and 19.8). The use of separate names is allowed for fossil-taxa that represent different parts, life-history stages, or preservational states of what may have been a single organismal taxon or even a single individual (Art. 1.2)."
Concept
Priority has two aspects:
The first formal scientific name published for a plant or animal taxon shall be the name that is to be used, called the valid name in zoology and correct name in botany (principle of synonymy).
Once a name has been used, no subsequent publication of that name for another taxon shall be valid (zoology) or validly published (botany) (principle of homonymy).
Note that nomenclature for botany and zoology is independent, and the rules of priority regarding homonyms operate within each discipline but not between them. Thus, an animal and a plant can bear the same name, which is then called a hemihomonym.
There are formal provisions for making exceptions to the principle of priority under each of the Codes. If an archaic or obscure prior name is discovered for an established taxon, the current name can be declared a nomen conservandum (botany) or conserved name (zoology), and so conserved against the prior name. Conservation may be avoided entirely in zoology as these names may fall in the formal category of nomen oblitum. Similarly, if the current name for a taxon is found to have an archaic or obscure prior homonym, the current name can be declared a nomen protectum (zoology) or the older name suppressed (nomen rejiciendum, botany).
Application
In botany and horticulture, the principle of priority applies to names at the rank of family and below. When moves are made to another genus or from one species to another, the "final epithet" of the name is combined with the new genus name, with any adjustments necessary for Latin grammar, for example:
When Festuca subgenus Schedonorus was moved to the genus Lolium, its name became Lolium subgenus Schedonorus.
Xiphion danfordiae Baker was moved to Juno danfordiae (Baker) Klatt, Iridodictyum danfordiae (Baker) Nothdurft and Iris danfordiae (Baker) Boiss. The name enclosed in parentheses cites the author who published the specific epithet, and the name after the parentheses cites the author who published the new combination of the specific epithet with the generic name.
Orthocarpus castillejoides var. humboldtiensis D.D. Keck was moved to Castilleja ambigua var. humboldtiensis (D.D. Keck) J.M. Egger.
When Caladenia alata was moved to the genus Petalochilus, the grammatical gender of the Latin words required a change in ending of the species epithet to the masculine form, Petalochilus alatus.
In zoology, the principle of priority applies to names between the rank of superfamily and subspecies (not to varieties, which are below the rank of subspecies). Also unlike in botany, the authorship of new combinations is not tracked, and only the original authority is ever cited. Example:
A.A. Girault published a description of a wasp, as Epentastichus fuscus, on 10 December 1913, and on 29 December 1913, he published a description of a related species, as Neomphaloides fusca. Eventually, both of these species were later transferred to the same genus, Aprostocetus, at which point they both would have become Aprostocetus fuscus (Girault, 1913), except that the one published 19 days later was the junior homonym, and its name was replaced with Aprostocetus fuscosus Bouček, 1988.
Examples
In 1855, John Edward Gray published the name Antilocapra anteflexa for a new species of pronghorn, based on a pair of horns. However, it is now thought that his specimen belonged to an unusual individual of an existing species, Antilocapra americana, with a name published by George Ord in 1815. The older name, published by Ord, takes priority, with Antilocapra anteflexa becoming a junior synonym.
In 1856, Johann Jakob Kaup published the name Leptocephalus brevirostris for a new species of eel. However, it was realized in 1893 that the organism described by Kaup was in fact the juvenile form of the European eel (see eel life history for the full story). The European eel was named Muraena anguilla by Carl Linnaeus in 1758. So Muraena anguilla is the name to be used for the species, and Leptocephalus brevirostris must be considered as a junior synonym and not be used. Today the European eel is classified in the genus Anguilla (Garsault, 1764), so its currently used name is Anguilla anguilla (Linnaeus, 1758).
See also
Kew Rule
References
Scientific nomenclature
Botanical nomenclature
Zoological nomenclature
Taxonomy (biology) | Priority (biology) | [
"Biology"
] | 2,158 | [
"Zoological nomenclature",
"Botanical nomenclature",
"Botanical terminology",
"Biological nomenclature",
"Taxonomy (biology)"
] |
12,004,602 | https://en.wikipedia.org/wiki/Available%20name | In zoological nomenclature, an available name is a scientific name for a taxon of animals that has been published after 1757 and conforms to all the mandatory provisions of the International Code of Zoological Nomenclature for the establishment of a zoological name. In contrast, an unavailable name is a name that does not conform to the rules of that code and that therefore is not available for use as a valid name for a taxon. Such a name does not fulfil the requirements in Articles 10 through 20 of the Code, or is excluded under Article 1.3.
Requirements
For a name to be available, in addition to meeting certain criteria for publication, there are a number of general requirements it must fulfill: it must include a description or definition of the taxon, must use only the Latin alphabet, must be formulated within the binomial nomenclature framework, must be newly-proposed (not a redescription under the same name of a taxon previously made available) and originally used as a valid name rather than as a synonym, must not be for a hybrid or hypothetical taxon, must not be for a taxon below the rank of subspecies, etc. In some rare cases, a name which does not meet these requirements may nevertheless be available, for historical reasons, as the criteria for availability have become more stringent with successive Code editions. For example, a name originally appearing along with an illustration but no formal description may be an available name, but only if the illustration was published prior to 1930 (under Article 12.2.7).
All available names must refer to a type, even if one was not provided at the time the name was first proposed. For species-level names, the type is usually a single specimen (a holotype, lectotype, or neotype); for generic-level names, the type is a single species; for family-level names, the type is a single genus. This hierarchical system of typification provides a concrete empirical anchor for all zoological names.
An available name is not necessarily a valid name, because an available name may be a homonym or subsequently be placed into synonymy. However, a valid name must always be an available one.
Unavailable names
Unavailable names include names that have not been published, such as "Oryzomys hypenemus" and "Ubirajara jubatus", names without an accompanying description (nomina nuda), such as the subgeneric name Micronectomys proposed for the Nicaraguan rice rat, names proposed with a rank below that of subspecies (infrasubspecific names), such as Sorex isodon princeps montanus for a form of the taiga shrew, and various other categories.
Contrary to a common assumption, an unavailable name is not necessarily a nomen nudum. A good example of this is the case of the unavailable dinosaur name "Ubirajara jubatus", which was widely assumed to be a nomen nudum before a detailed analysis of its nomenclatural status showed otherwise.
Contrast to botany
Under the International Code of Nomenclature for algae, fungi, and plants, this term is not used. In botany, the corresponding term is validly published name. The botanical equivalent of zoology's term "valid name" is correct name.
References
Bibliography
Hershkovitz, P. 1970. Supplementary notes on Neotropical Oryzomys dimidiatus and Oryzomys hammondi (Cricetinae). Journal of Mammalogy 51(4): 789-794.
Hutterer, R. & Zaitsev, M.V. 2004. Cases of homonymy in some Palaearctic and Nearctic taxa of the genus Sorex L. (Mammalia: Soricidae). Mammal Study 29:89-91.
International Commission for Zoological Nomenclature. 1999. International Code of Zoological Nomenclature, 4th edition. London: The International Trust for Zoological Nomenclature. Available online at https://web.archive.org/web/20090524144249/http://www.iczn.org/iczn/index.jsp. Accessed September 27, 2009.
Zoological nomenclature | Available name | [
"Biology"
] | 867 | [
"Zoological nomenclature",
"Biological nomenclature"
] |
12,004,717 | https://en.wikipedia.org/wiki/Pu%27s%20inequality | In differential geometry, Pu's inequality, proved by Pao Ming Pu, relates the area of an arbitrary Riemannian surface homeomorphic to the real projective plane with the lengths of the closed curves contained in it.
Statement
A student of Charles Loewner, Pu proved in his 1950 thesis that every Riemannian surface $M$ homeomorphic to the real projective plane satisfies the inequality
$$\operatorname{area}(M) \geq \frac{2}{\pi} \operatorname{sys}(M)^2,$$
where $\operatorname{sys}(M)$ is the systole of $M$.
The equality is attained precisely when the metric has constant Gaussian curvature.
In other words, if all noncontractible loops in $M$ have length at least $L$, then $\operatorname{area}(M) \geq \frac{2}{\pi} L^2$, and the equality holds if and only if $M$ is obtained from a Euclidean sphere of radius $r = L/\pi$ by identifying each point with its antipodal point.
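As a quick check of the equality case (a verification sketch, not part of the original statement), for the sphere of radius $r$ with antipodal points identified:
$$\operatorname{area}(M) = \tfrac{1}{2}\,(4\pi r^2) = 2\pi r^2, \qquad \operatorname{sys}(M) = \pi r, \qquad \frac{2}{\pi}\operatorname{sys}(M)^2 = \frac{2}{\pi}\,\pi^2 r^2 = 2\pi r^2,$$
so the two sides agree exactly for the round metric.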
Pu's paper also stated for the first time Loewner's inequality, a similar result for Riemannian metrics on the torus.
Proof
Pu's original proof relies on the uniformization theorem and employs an averaging argument, as follows.
By uniformization, the Riemannian surface $(M, g)$ is conformally diffeomorphic to a round projective plane. This means that we may assume that the surface $M$ is obtained from the Euclidean unit sphere $S^2$ by identifying antipodal points, and the Riemannian length element at each point $x$ is
$$ds = f(x)\, ds_{\mathrm{eucl}},$$
where $ds_{\mathrm{eucl}}$ is the Euclidean length element and the function $f \colon S^2 \to (0, \infty)$, called the conformal factor, satisfies $f(-x) = f(x)$.
More precisely, the universal cover of $M$ is $S^2$, a loop $\gamma \subset M$ is noncontractible if and only if its lift $\tilde\gamma \subset S^2$ goes from one point to its opposite, and the length of each curve $\gamma$ is
$$\operatorname{length}(\gamma) = \int_{\tilde\gamma} f\, ds_{\mathrm{eucl}}.$$
Subject to the restriction that each of these lengths is at least $\pi$, we want to find an $f$ that minimizes the area
$$\operatorname{area}(f) = \int_{S^2_+} f(x)^2\, d\operatorname{area}_{\mathrm{eucl}}(x),$$
where $S^2_+$ is the upper half of the sphere.
A key observation is that if we average several different $f_i$ that satisfy the length restriction and have the same area $A$, then we obtain a better conformal factor $f_{\mathrm{new}} = \frac{1}{n} \sum_i f_i$, that also satisfies the length restriction and has
$$\operatorname{area}(f_{\mathrm{new}}) \leq A,$$
and the inequality is strict unless the functions $f_i$ are equal.
A way to improve any non-constant $f$ is to obtain the different functions $f_i$ from $f$ using rotations of the sphere $R_i \in SO(3)$, defining $f_i(x) = f(R_i(x))$. If we average over all possible rotations, then we get an $f_{\mathrm{new}}$ that is constant over all the sphere. We can further reduce this constant to the minimum value allowed by the length restriction. Then we obtain the unique metric that attains the minimum area $2\pi$.
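The averaging step rests on the convexity of $t \mapsto t^2$; a minimal worked version for two conformal factors, in the notation above (a sketch, not taken from Pu's paper):
$$\operatorname{area}\!\left(\frac{f_1 + f_2}{2}\right) = \int_{S^2_+} \left(\frac{f_1 + f_2}{2}\right)^{\!2} d\operatorname{area}_{\mathrm{eucl}} \;\leq\; \int_{S^2_+} \frac{f_1^2 + f_2^2}{2}\, d\operatorname{area}_{\mathrm{eucl}} = A,$$
with equality if and only if $f_1 = f_2$, while lengths average linearly, $\int_{\tilde\gamma} \frac{f_1 + f_2}{2}\, ds_{\mathrm{eucl}} \geq \pi$, so the length restriction is preserved.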
Reformulation
Alternatively, every metric on the sphere $S^2$ invariant under the antipodal map admits a pair of opposite points $p, -p \in S^2$ at Riemannian distance $d = d(p, -p)$ satisfying
$$d^2 \leq \frac{\pi}{4} \operatorname{area}(S^2).$$
A more detailed explanation of this viewpoint may be found at the page Introduction to systolic geometry.
Filling area conjecture
An alternative formulation of Pu's inequality is the following. Of all possible fillings of the Riemannian circle of length $2\pi$ by a $2$-dimensional disk with the strongly isometric property, the round hemisphere has the least area.
To explain this formulation, we start with the observation that the equatorial circle of the unit $2$-sphere $S^2$ is a Riemannian circle $S^1$ of length $2\pi$. More precisely, the Riemannian distance function of $S^1$ is induced from the ambient Riemannian distance on the sphere. Note that this property is not satisfied by the standard imbedding of the unit circle in the Euclidean plane. Indeed, the Euclidean distance between a pair of opposite points of the circle is only $2$, whereas in the Riemannian circle it is $\pi$.
We consider all fillings of $S^1$ by a $2$-dimensional disk, such that the metric induced by the inclusion of the circle as the boundary of the disk is the Riemannian metric of a circle of length $2\pi$. The inclusion of the circle as the boundary is then called a strongly isometric imbedding of the circle.
Gromov conjectured that the round hemisphere gives the "best" way of filling the circle even when the filling surface is allowed to have positive genus.
Isoperimetric inequality
Pu's inequality bears a curious resemblance to the classical isoperimetric inequality
$$L^2 \geq 4\pi A$$
for Jordan curves in the plane, where $L$ is the length of the curve while $A$ is the area of the region it bounds. Namely, in both cases a 2-dimensional quantity (area) is bounded by (the square of) a 1-dimensional quantity (length). However, the inequality goes in the opposite direction. Thus, Pu's inequality can be thought of as an "opposite" isoperimetric inequality.
See also
Filling area conjecture
Gromov's systolic inequality for essential manifolds
Gromov's inequality for complex projective space
Loewner's torus inequality
Systolic geometry
Systoles of surfaces
References
Riemannian geometry
Geometric inequalities
Differential geometry of surfaces
Systolic geometry | Pu's inequality | [
"Mathematics"
] | 928 | [
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry"
] |
12,005,097 | https://en.wikipedia.org/wiki/WindowProc | In Win32 application programming, WindowProc (or window procedure), also known as WndProc is a user-defined callback function that processes messages sent to a window. This function is specified when an application registers its window class and can be named anything (not necessarily WindowProc).
Message handling
The window procedure is responsible for handling all messages that are sent to a window. The function prototype of WindowProc is given by:
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
hwnd is a handle to the window to which the message was sent and uMsg identifies the actual message by its identifier, as specified in winuser.h.
wParam and lParam are parameters whose meaning depends on the message. An application should identify the message and take the required action.
Default processing
Hundreds of different messages are produced as a result of various events taking place in the system, and typically, an application processes only a small fraction of these messages. In order to ensure that all messages are processed, Windows provides a default window procedure called DefWindowProc that provides default processing for messages that the application itself does not process.
An application usually calls DefWindowProc at the end of its own WindowProc function, so that unprocessed messages can be passed down to the default procedure.
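For illustration, a minimal window procedure might look as follows (a sketch, not taken from the article; the particular messages handled are arbitrary):

```c
#include <windows.h>

/* A minimal window procedure: handle a few messages explicitly and
   forward everything else to DefWindowProc for default processing. */
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch (uMsg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        /* ... drawing code would go here ... */
        EndPaint(hwnd, &ps);
        return 0;  /* message handled */
    }
    case WM_DESTROY:
        PostQuitMessage(0);  /* ask the message loop to exit */
        return 0;
    default:
        /* Unprocessed messages fall through to the default handler. */
        return DefWindowProc(hwnd, uMsg, wParam, lParam);
    }
}
```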
See also
Event loop
Desktop Window Manager
External links
"Writing the Window Procedure" at Microsoft Learn
DefWindowProc at Microsoft Learn
Events (computing)
Microsoft application programming interfaces | WindowProc | [
"Technology"
] | 332 | [
"Information systems",
"Events (computing)"
] |
12,005,759 | https://en.wikipedia.org/wiki/Page%20address%20register | A page address register (PAR) contains the physical addresses of pages currently held in the main memory of a computer system. PARs are used in order to avoid excessive use of an address table in some operating systems. A PAR may check a page's number against all entries in the PAR simultaneously, allowing it to retrieve the page's physical address quickly. A PAR is used by a single process and is only used for pages which are frequently referenced (though these pages may change as the process's behaviour changes in accordance with the principle of locality). An example computer which made use of PARs is the Atlas.
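A software sketch of the associative lookup a PAR performs; in hardware all comparisons happen simultaneously, and the structure and function names below are invented for the example:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAR_ENTRIES 16  /* illustrative size */

/* One entry: a virtual page number and the physical address of that page. */
struct par_entry {
    bool     valid;
    uint32_t page_number;
    uint32_t physical_address;
};

static struct par_entry par[PAR_ENTRIES];

/* Return the physical address for a page, or false on a miss
   (a miss would fall back to the full address table). */
bool par_lookup(uint32_t page_number, uint32_t *physical_address)
{
    for (int i = 0; i < PAR_ENTRIES; i++) {   /* parallel in hardware */
        if (par[i].valid && par[i].page_number == page_number) {
            *physical_address = par[i].physical_address;
            return true;
        }
    }
    return false;
}
```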
See also
Translation Lookaside Buffer (TLB)
References
Virtual memory
Computer memory | Page address register | [
"Technology"
] | 141 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
12,006,607 | https://en.wikipedia.org/wiki/Detection%20of%20genetically%20modified%20organisms | The detection of genetically modified organisms in food or feed is possible by biochemical means. It can either be qualitative, showing which genetically modified organism (GMO) is present, or quantitative, measuring in which amount a certain GMO is present. Being able to detect a GMO is an important part of GMO labeling, as without detection methods the traceability of GMOs would rely solely on documentation.
Polymerase chain reaction (PCR)
The polymerase chain reaction (PCR) is a biochemistry and molecular biology technique for isolating and exponentially amplifying a fragment of DNA, via enzymatic replication, without using a living organism. It enables the detection of specific strands of DNA by making millions of copies of a target genetic sequence. The target sequence is essentially photocopied at an exponential rate, and simple visualisation techniques can make the millions of copies easy to see.
The method works by pairing the targeted genetic sequence with custom-designed complementary bits of DNA called primers. In the presence of the target sequence, the primers match with it and trigger a chain reaction. DNA replication enzymes use the primers as docking points and start doubling the target sequences. The process is repeated over and over again by sequential heating and cooling until doubling and redoubling has multiplied the target sequence several million-fold. The millions of identical fragments are then purified in a slab of gel, dyed, and can be seen with UV light. Because of this extreme sensitivity, the method is prone to contamination. Irrespective of the variety of methods used for DNA analysis, only PCR in its different formats has been widely applied in GMO detection/analysis and generally accepted for regulatory compliance purposes. Detection methods based on DNA rely on the complementarity of two strands of DNA double helix that hybridize in a sequence-specific manner. The DNA of GMO consists of several elements that govern its functioning. The elements are promoter sequence, structural gene and stop sequence for the gene.
Quantitative detection
Quantitative PCR (Q-PCR) is used to measure the quantity of a PCR product (preferably real-time, QRT-PCR). It is the method of choice to quantitatively measure amounts of transgene DNA in a food or feed sample. Q-PCR is commonly used to determine whether a DNA sequence is present in a sample and the number of its copies in the sample. The method with currently the highest level of accuracy is quantitative real-time PCR. QRT-PCR methods use fluorescent dyes, such as SYBR Green, or fluorophore-containing DNA probes, such as TaqMan, to measure the amount of amplified product in real time. If the targeted genetic sequence is unique to a certain GMO, a positive PCR test proves that the GMO is present in the sample.
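A sketch of the arithmetic behind standard-curve quantification in Q-PCR (the function name, slope and intercept below are invented for illustration; real curves are fitted to a dilution series of certified reference material):

```c
#include <math.h>
#include <stdio.h>

/* Estimate the initial copy number of a target sequence from its
   quantification cycle (Cq), using a linear standard curve
   Cq = slope * log10(copies) + intercept fitted to known dilutions. */
double copies_from_cq(double cq, double slope, double intercept)
{
    return pow(10.0, (cq - intercept) / slope);
}

int main(void)
{
    /* Illustrative calibration: a slope of about -3.32 corresponds to
       ~100% PCR efficiency (one doubling per cycle). */
    double slope = -3.32, intercept = 38.0;

    double transgene = copies_from_cq(27.5, slope, intercept);
    double reference = copies_from_cq(24.2, slope, intercept);

    /* GMO content is usually reported as the ratio of transgene copies
       to copies of a species-specific reference gene. */
    printf("GMO content: %.2f%%\n", 100.0 * transgene / reference);
    return 0;
}
```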
Qualitative detection
Whether or not a GMO is present in a sample can be tested by Q-PCR, but also by multiplex PCR. Multiplex PCR uses multiple, unique primer sets within a single PCR reaction to produce amplicons of varying sizes specific to different DNA sequences, i.e. different transgenes. By targeting multiple genes at once, additional information may be gained from a single test run that otherwise would require several times the reagents and more time to perform. Annealing temperatures for each of the primer sets must be optimized to work correctly within a single reaction, and amplicon sizes, i.e., their base pair length, should be different enough to form distinct bands when visualized by gel electrophoresis.
Event-specific vs. construct-specific detection
When producers, importers or authorities test a sample for the unintended presence of GMOs, they usually do not know which GMO to expect. While EU authorities prefer an event-specific approach to this problem, US authorities rely on construct-specific test schemes.
Event-specific detection
An event-specific detection searches for the presence of a DNA sequence unique to a certain GMO, usually the junction between the transgene and the organism's original DNA. This approach is ideal to precisely identify a GMO, yet highly similar GMOs will pass completely unnoticed. Event-specific detection is PCR-based.
Construct-specific detection
The construct-specific detection methods can either be DNA or protein based. DNA based detection looks for a part of the foreign DNA inserted in a GMO. For technical reasons, certain DNA sequences are shared by several GMOs. Protein-based methods detect the product of the transgene, for example the Bt toxin. Since different GMOs may produce the same protein, construct-specific detection can test a sample for several GMOs in one step, but is unable to tell precisely which of the similar GMOs are present. Especially in the USA, protein-based detection is used for the construct-specific approach.
Shortcomings of current detection methods
Currently, it is highly unlikely that the presence of unexpected or even unknown GMOs will be detected, since either the DNA sequence of the transgene or its product, the protein, must be known for detection. In addition, even testing for known GMOs is time-consuming and costly, as current reliable detection methods can test for only one GMO at a time. Therefore, research programmes such as Co-Extra are developing improved and alternative testing methods, for example DNA microarrays.
Alternative detection methods
Improving PCR based detection
Improving PCR based detection of GMOs is a further goal of the European research programme Co-Extra. Research is now underway to develop multiplex PCR methods that can simultaneously detect many different transgenic lines. Another major challenge is the increasing prevalence of transgenic crops with stacked traits. This refers to transgenic cultivars derived from crosses between transgenic parent lines, combining the transgenic traits of both parents. One GM maize variety now awaiting a decision by the European Commission, MON863 x MON810 x NK603, has three stacked traits. It is resistant to an herbicide and to two different kinds of insect pests. Some combined testing methods could report triple the actual GM content of a sample containing this GMO, since each of its three transgenes would be counted separately.
Detecting unknown GMOs
Almost all transgenic plants contain a few common building blocks that make unknown GMOs easier to find. Even though detecting a novel gene in a GMO can be like finding a needle in a haystack, the fact that the needles are usually similar makes it much easier. To trigger gene expression, scientists couple the gene they want to add with what is known as a transcription promoter. The high-performing 35S promoter is a common feature to many GMOs. In addition, the stop signal for gene transcription in most GMOs is often the same: the NOS terminator. Researchers now compile a set of genetic sequences characteristic of GMOs. After genetic elements characteristic of GMOs are selected, methods and tools are developed for detecting them in test samples. Approaches being considered include microarrays and anchor PCR profiling.
Near infrared fluorescence (NIR)
Near infrared fluorescence (NIR) detection is a method that can reveal what kinds of chemicals are present in a sample based on their physical properties. By hitting a sample with near infrared light, chemical bonds in the sample vibrate and re-release the light energy at a wavelength characteristic for a specific molecule or chemical bond. It is not yet known if the differences between GMOs and conventional plants are large enough to detect with NIR imaging. Although the technique would require advanced machinery and data processing tools, a non-chemical approach could have some advantages such as lower costs and enhanced speed and mobility.
Controls by country
European Union
Switzerland
The Cantons of Switzerland perform tests to assess the presence of genetically modified organisms in foodstuffs. In 2008, 3% of the tested samples contained detectable amounts of GMOs. In 2012, 12% of the samples analysed contained detectable amounts of GMOs (including 2.4% of GMOs forbidden in Switzerland). All but one of the tested samples contained less than 0.9% GMOs, the threshold above which labelling indicating the presence of GMOs is required.
See also
StarLink corn recall
References
External links
Co-Extra: Research on co-existence and traceability investigates new and improved detection methods
European Network of GMO Laboratories develops and standardises detection methods
Institute for Reference Materials and Measurements provides reference material for GMO detection
GMO Detection Methods Database the Institute for Health and Consumer Protection (IHCP) provides validated GMO Detection Methods
Biochemistry methods
Genetically modified organisms in agriculture | Detection of genetically modified organisms | [
"Chemistry",
"Biology"
] | 1,756 | [
"Biochemistry methods",
"Biochemistry"
] |
12,006,784 | https://en.wikipedia.org/wiki/Steam%20accumulator | A steam accumulator is an insulated steel pressure tank containing hot water and steam under pressure. It is a type of energy storage device. It can be used to smooth out peaks and troughs in demand for steam. Steam accumulators may take on a significance for energy storage in solar thermal energy projects. An example is the PS10 solar power plant near Seville, Spain and one planned for the "solar steam train" project in Sacramento, California.
History
It was invented in 1874 by the Scottish engineer Andrew Betts Brown.
Charge
The tank is about half-filled with cold water and steam is blown in from a boiler via a perforated pipe near the bottom of the drum. Some of the steam condenses and heats the water. The remainder fills the space above the water level. When the accumulator is fully charged the condensed steam will have raised the water level in the drum to about three-quarters full and the temperature and pressure will also have risen.
Discharge
Steam can be drawn off as required, either for driving a steam turbine or for process purposes (e.g. in chemical engineering), by opening a steam valve on top of the drum. The pressure in the drum will fall but the reduced pressure causes more water to boil and the accumulator can go on supplying steam (while gradually reducing pressure and temperature) for some time before it has to be re-charged.
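The fraction of the stored water that flashes to steam for a given pressure drop follows from an enthalpy balance (a standard steam-table calculation, sketched here with round textbook values rather than figures for any particular installation):
$$x = \frac{h_{f,1} - h_{f,2}}{h_{fg,2}},$$
where state 1 is the accumulator charge pressure, state 2 the lower discharge pressure, $h_f$ the saturated-water enthalpy and $h_{fg}$ the latent heat of evaporation. Dropping from 10 bar absolute ($h_f \approx 763$ kJ/kg) to 5 bar absolute ($h_f \approx 640$ kJ/kg, $h_{fg} \approx 2109$ kJ/kg) gives $x \approx 0.058$, i.e. roughly 5.8% of the water mass flashes off as steam.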
Pressure and temperature
In a boiler or steam accumulator the water and steam are at saturation, so the pressure fixes the temperature (the steam table relationship): representative saturated-steam values are about 100 °C at 1 bar absolute, about 152 °C at 5 bar absolute, and about 180 °C at 10 bar absolute.
Absolute pressure = gauge pressure + atmospheric pressure
See also
Fireless locomotive
References
Sources
Everyman's Encyclopaedia 1931, volume 2, page 543
External links
Steam Accumulators A complete overview of the need for steam storage to meet peak load demands in specific industries, including the design, construction and operation of a steam accumulator, with calculations - Spirax Sarco
Boilers
Energy storage | Steam accumulator | [
"Chemistry"
] | 391 | [
"Boilers",
"Pressure vessels"
] |
12,007,423 | https://en.wikipedia.org/wiki/Transplastomic%20plant | A transplastomic plant is a genetically modified plant in which genes are inactivated, modified or new foreign genes are inserted into the DNA of plastids like the chloroplast instead of nuclear DNA.
Currently, the majority of transplastomic plants are a result of chloroplast manipulation due to poor expression in other plastids. However, the technique has been successfully applied to the chromoplasts of tomatoes.
Chloroplasts in plants are thought to have originated from an engulfing event of a photosynthetic bacteria (cyanobacterial ancestor) by a eukaryote. There are many advantages to chloroplast DNA manipulation because of its bacterial origin. For example, the ability to introduce multiple genes (operons) in a single step instead of many steps and the simultaneous expression of many genes with its bacterial gene expression system. Other advantages include the ability to obtain organic products like proteins at a high concentration and the fact that production of these products will not be affected by epigenetic regulation.
The reason for product synthesis at high concentrations is that a single plant cell can potentially carry up to 100 chloroplasts. If all these plastids are transformed, all of them can express the introduced foreign genes. This may be advantageous compared to transformation of the nucleus, because the nucleus typically contains only one or two copies of the gene.
The advantages provided by chloroplast DNA manipulation have generated growing interest in this field of research and development, particularly in agricultural and pharmaceutical applications. However, there are some limitations in chloroplast DNA manipulation, such as the inability to manipulate cereal crop DNA material and poor expression of foreign DNA in non-green plastids as mentioned before. In addition, the lack of post-translational modification capability like glycosylation in plastids may make some human-related protein expression difficult. Nevertheless, much progress has been made in plant transplastomics, for example, the production of edible vaccines for tetanus by using a transplastomic tobacco plant.
Transformation and selection procedure
Gene construct
The first requirement for transplastomic plant generation is to have a suitable gene construct that can be introduced into a plastid like a chloroplast in the form of an E. coli plasmid vector. There are several key features of a suitable gene cassette including but not limited to (1) selectable marker (2) flanking sequences (3) gene of interest (4) promoter sequences (5) 5' UTR (6) 3' UTR (7) intercistronic elements. The selectable marker typically tends to be an antibiotic resistance gene, which would give the plant cell the ability to tolerate being grown on antibiotic-containing agar plates. Flanking sequences are crucial for introduction of the gene construct at precise predetermined points of the plastid genome through homologous recombination. The genes of interest introduced have many different applications and can range from pest resistance genes to vaccine antigen production. Intercistronic elements (IEE) are important for facilitating high levels of gene expression if multiple genes are introduced in the form of an operon. Finally, the 5' UTR and 3' UTR enhance ribosomal binding and increase transcript stability respectively.
Transformation and selection
The most common method for plastid transformations is biolistics: small gold or tungsten particles are coated with the plasmid vector and shot into young plant cells or plant embryos, penetrating multiple cell layers and into the plastid. There will then be a homologous recombination event between the shot plasmid vector and the plastid's genome, hopefully resulting in a stable insertion of the gene cassette into the plastid. Whilst the transformation efficiency is lower than in agrobacterial mediated transformation, which is also common in plant genetic engineering, particle bombardment is especially suitable for chloroplast transformation. Other transformation methods include the use of polyethylene glycol (PEG)-mediated transformation, which involves the removal of the plant cell wall in order to expose the "naked" plant cell to the foreign genetic material for transformation in the presence of PEG. PEG-mediated transformation, however, is notoriously time-consuming, very technical and labor-intensive, as it requires the removal of the cell wall, which is a key protective structural component of the plant cell. Interestingly, a paper released in 2018 described a successful plastid transformation of the chloroplast from the microalgae species N. oceanica and C. reinhardtii through electroporation. Whilst no study has yet attempted plastid transformation of higher plants using electroporation, this could be an interesting area of study for the future.
In order to persist and be stably maintained in the cell, a plasmid DNA molecule must contain an origin of replication, which allows it to be replicated in the cell independently of the chromosome. When foreign DNA is first introduced to the plant tissue, not all chloroplasts will have successfully integrated the introduced genetic material. There will be a mixture of normal and transformed chloroplasts within the plant cells. This mix of normal and transformed chloroplasts is defined to be a "heteroplasmic" chloroplast population. Stable gene expression of the introduced gene requires a "homoplasmic" population of transformed chloroplasts in the plant cells, where all the chloroplasts in the plant cell have successfully integrated the foreign genetic material. Typically, homoplasmicity can be achieved and identified through multiple rounds of selection on antibiotics, where the transformed plant tissue is grown repeatedly on agar plates that contain antibiotics like spectinomycin. Only plant cells that have successfully integrated the gene cassette as shown above will be able to express the antibiotic resistance selectable marker and therefore grow normally on agar plates containing antibiotics. Plant tissue that does not grow normally will have a bleached appearance, as the spectinomycin antibiotic inhibits the ribosomes in the plastids of the plant cell, thereby preventing maintenance of the chloroplast. However, as heteroplasmic populations of chloroplasts may still be able to grow on agar plates effectively, many rounds of antibiotic selection and regrowth are required to cultivate a plant tissue that is homoplasmic and stable. Generation of homoplasmic plant tissue is considered to be a major difficulty in transplastomics and incredibly time-consuming.
Grafting
Some plant species such as Nicotiana tabacum are more receptive to transplastomics compared to members of the same genus such as Nicotiana glauca and Nicotiana benthamiana. An experiment conducted in 2012 highlighted the possibility of facilitating transplastomics for difficult plant species using grafting. Grafting occurs when two different plants are joined and continue to grow; this technique has been widely employed in agricultural applications and can even occur naturally in the wild. A transplastomic N. tabacum plant was engineered to have spectinomycin resistance and GFP fluorescence, whilst the nuclear transgenic plants N. benthamiana and N. glauca were engineered to have kanamycin antibiotic resistance and YFP fluorescence. The transplastomic plant and the nuclear transgenic plants were then grafted onto each other and the grafted tissues were then analysed. Fluorescence microscopy and antibiotic selection on agar plates with both kanamycin and spectinomycin revealed that the grafted plant tissue had both transplastomic and nuclear transgenic DNA material. This was further confirmed through PCR analysis. This study highlighted that plastids like the chloroplast are able to pass between cells across graft junctions and result in the transfer of genetic material between two different plant cell lines. This finding is significant as it provides an alternative pathway for generation of transplastomic plants for species that are not as easily transformed using the current experimental methodology described above.
Optimizing transgene expression
Inducible expression systems such as riboswitches and pentatricopeptide repeat proteins have been widely studied in an effort to control and modulate expression of transgene products in transplastomic plants. One major advantage of using inducible expression systems is the ability to optimize the concentration and timing of transgene protein production. For example, young plants need to devote energy and resources into growth and development to become mature plants. Constitutive expression of the transgene would therefore be detrimental for plant growth and development, as it takes away valuable energy and resources to express the foreign gene construct instead. This would result in a poorly developed transplastomic plant with low product yield. Inducible expression of the transgene would overcome this limitation and allow the plant to mature fully like a normal wildtype plant before it is induced chemically to begin production of the transgene product, which can then be harvested.
Biological containment and agricultural coexistence
Genetically modified plants must be safe for the environment and suitable for coexistence with conventional and organic crops. A major hurdle for traditional nuclear genetically modified crops is posed by the potential outcrossing of the transgene via pollen movement. Initially it was thought that plastid transformation, which yields transplastomic plants in which the pollen does not contain the transgene, not only increases biosafety, but also facilitates the coexistence of genetically modified, conventional and organic agriculture. Therefore, developing such crops was a major goal of research projects such as Co-Extra and Transcontainer.
However, a study conducted on the tobacco plant in 2007 showed that this containment is not absolute. Led by Ralph Bock from the Max Planck Institute of Molecular Plant Physiology in Germany, researchers studied genetically modified tobacco in which the transgene was integrated in chloroplasts. A transplastomic tobacco plant generated through chloroplast mediated transformation was bred with plants that were male sterile and had an untouched chloroplast. The transplastomic plants were engineered to have resistance to the antibiotic spectinomycin and to produce a green fluorescent protein molecule (GFP). Therefore, it was hypothesized that any offspring produced from these two lines of tobacco plant should not be able to grow on spectinomycin or be fluorescent, as the genetic material in the chloroplast should not be able to transfer via pollen. However, it was found that some of the seeds were resistant to the antibiotic and could germinate on spectinomycin agar plates. Calculations showed that 1 out of every million pollen grains contained plastid genetic material, which would be significant in an agricultural farm setting. Because tobacco has a strong tendency towards self-fertilisation, the reliability of transplastomic plants is assumed to be even higher under field conditions. Therefore, the researchers believe that only one in 100,000,000 GM tobacco plants actually would transmit the transgene via pollen. Such values are more than satisfactory to ensure coexistence. However, for GM crops used in the production of pharmaceuticals, or in other cases in which absolutely no outcrossing is permitted, the researchers recommend the combination of chloroplast transformation with other biological containment methods, such as cytoplasmic male sterility or transgene mitigation strategies. This study showed that whilst transplastomic plants do not have absolute gene containment, the level of containment is extremely high and would allow for coexistence of conventional and genetically modified agricultural crops.
There are public concerns regarding a possible transmission of antibiotic resistant genes to unwanted targets including bacteria and weeds. As a result of this, technologies have been developed to remove the selectable antibiotic resistance gene marker. One such technology that has been implemented is the Cre/lox system, where the nuclear encoded Cre recombinase can be placed under control of an inducible promoter to remove the antibiotic resistant gene once homoplasmicity has been achieved from the transformation process.
Examples and the future
A recent example of transplastomics in agricultural applications was conferring potato plants protection against the Colorado potato beetle. This beetle is dubbed a "super-pest" internationally because it has gained resistance against many insecticides and is an extremely voracious feeder. The beetle is estimated to cause up to US$1.4 million in crop damage annually in Michigan alone. A study conducted in 2015 by Zhang utilized transplastomics to introduce double-stranded RNA producing transgenes into the plastid genome. The double-stranded RNA confers protection to the transgenic potato plant via an RNA interference methodology, where consumption of the plant tissue by the potato beetle results in silencing of key genes required by the beetle for survival. A high level of protection was conferred: the leaves of the transplastomic potato plant were mostly unconsumed when exposed to the adult beetles and larvae. The investigation also revealed an 83% killing efficacy for larvae that consumed the leaves of the transplastomic plant. This study highlights that as pests gain resistance to traditional chemical insecticides, the use of transplastomics to deliver RNAi-mediated crop protection strategies could become increasingly viable in the future.
Another notable transplastomics based approach is the production of artemisinic acid through transplastomic tobacco plants, artemisinic acid being the precursor molecule that can be used to produce artemisinin. Artemisinin-based combination therapy is the preferred and recommended treatment of choice by the WHO (World Health Organization) against malaria. Artemisinin is naturally derived from the plant Artemisia annua; however, only low concentrations of artemisinin in the plant can be harvested naturally and there is currently an insufficient supply for the global demand. A study conducted in 2016 led by Fuentes managed to introduce the artemisinic acid production pathway into the chloroplast of N. tabacum through a biolistics approach before using their novel synthetic biology tool COSTREL (combinatorial supertransformation of transplastomic recipient lines) to generate a transplastomic N. tabacum plant that had a very high artemisinic acid yield. This study illustrates the potential benefits of transplastomics for bio-pharmaceutical applications in the future.
Despite transplastomics being non-viable for non-green plastids at the moment, plant transplastomics work done on the chloroplast genome has proved extremely valuable. The applications for chloroplast transformation include, but are not limited to, agriculture, bio-fuel and bio-pharmaceuticals. This is because of a few factors, which include ease of multiple transgene expression in the form of operons and high copy number expression. The study of transplastomics still remains a work in progress. More research and development is still required in areas such as transplastomics in non-green plastids, the inability to transform cereal crops through transplastomics, and ways to circumvent the lack of glycosylation capability in the chloroplast. Further improvements in this field of study would provide a robust biotechnological route for many applications important in our day-to-day lives.
References
External links
Co-Extra Research on the co-existence and traceability of genetically modified plants
Transcontainer Developing biological containment systems for genetically modified plants
Genetic engineering
Genetically modified organisms in agriculture | Transplastomic plant | [
"Chemistry",
"Engineering",
"Biology"
] | 3,216 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
12,008,116 | https://en.wikipedia.org/wiki/Alpha%20recursion%20theory | In recursion theory, α recursion theory is a generalisation of recursion theory to subsets of admissible ordinals $\alpha$. An admissible set is closed under $\Sigma_1(L_\alpha)$ functions, where $L_\xi$ denotes a rank of Gödel's constructible hierarchy. $\alpha$ is an admissible ordinal if $L_\alpha$ is a model of Kripke–Platek set theory. In what follows $\alpha$ is considered to be fixed.
Definitions
The objects of study in $\alpha$ recursion are subsets of $\alpha$. These sets are said to have some properties:
A set $A \subseteq \alpha$ is said to be $\alpha$-recursively-enumerable if it is $\Sigma_1$ definable over $L_\alpha$, possibly with parameters from $L_\alpha$ in the definition.
A is $\alpha$-recursive if both A and $\alpha \setminus A$ (its relative complement in $\alpha$) are $\alpha$-recursively-enumerable. It's of note that $\alpha$-recursive sets are members of $L_{\alpha^+}$ by definition of $L$.
Members of $L_\alpha$ are called $\alpha$-finite and play a similar role to the finite numbers in classical recursion theory.
Members of $L_{\alpha^+}$ are called $\alpha$-arithmetic.
There are also some similar definitions for functions mapping $\alpha$ to $\alpha$:
A partial function from $\alpha$ to $\alpha$ is $\alpha$-recursively-enumerable, or $\alpha$-partial recursive, iff its graph is $\Sigma_1$-definable on $(L_\alpha, \in)$.
A partial function from $\alpha$ to $\alpha$ is $\alpha$-recursive iff its graph is $\Delta_1$-definable on $(L_\alpha, \in)$. Like in the case of classical recursion theory, any total $\alpha$-recursively-enumerable function is $\alpha$-recursive.
Additionally, a partial function from $\alpha$ to $\alpha$ is $\alpha$-arithmetical iff there exists some $n \in \omega$ such that the function's graph is $\Sigma_n$-definable on $(L_\alpha, \in)$.
Additional connections between recursion theory and α recursion theory can be drawn, although explicit definitions may not have yet been written to formalize them:
The functions $\Delta_0$-definable in $(L_\alpha, \in)$ play a role similar to those of the primitive recursive functions.
We say R is a reduction procedure if it is $\alpha$ recursively enumerable and every member of R is of the form $\langle H, J, K \rangle$ where H, J, K are all α-finite.
A is said to be α-recursive in B if there exist reduction procedures $R_0, R_1$ such that:
$$K \subseteq A \leftrightarrow \exists H\, \exists J\, [\langle H, J, K \rangle \in R_0 \wedge H \subseteq B \wedge J \subseteq \alpha \setminus B],$$
$$K \subseteq \alpha \setminus A \leftrightarrow \exists H\, \exists J\, [\langle H, J, K \rangle \in R_1 \wedge H \subseteq B \wedge J \subseteq \alpha \setminus B].$$
If A is recursive in B this is written $A \leq_\alpha B$. By this definition A is recursive in $\varnothing$ (the empty set) if and only if A is recursive. However A being recursive in B is not equivalent to A being $\Sigma_1(L_\alpha[B])$.
We say A is regular if $\forall \beta < \alpha : A \cap \beta \in L_\alpha$, or in other words if every initial portion of A is α-finite.
Work in α recursion
Shore's splitting theorem: Let A be $\alpha$ recursively enumerable and regular. There exist $\alpha$ recursively enumerable $B_0, B_1$ such that $A = B_0 \cup B_1$, $B_0 \cap B_1 = \varnothing$ and $A \not\leq_\alpha B_i$ for $i < 2$.
Shore's density theorem: Let A, C be α-regular recursively enumerable sets such that $A <_\alpha C$; then there exists a regular α-recursively enumerable set B such that $A <_\alpha B <_\alpha C$.
Barwise has proved that the sets $\Sigma_1$-definable on $L_{\alpha^+}$ are exactly the sets $\Pi_1^1$-definable on $L_\alpha$, where $\alpha^+$ denotes the next admissible ordinal above $\alpha$, and $\Sigma$ is from the Levy hierarchy.
There is a generalization of limit computability to partial functions.
A computational interpretation of $\alpha$-recursion exists, using "$\alpha$-Turing machines" with a two-symbol tape of length $\alpha$, that at limit computation steps take the limit inferior of cell contents, state, and head position. For admissible $\alpha$, a set $A \subseteq \alpha$ is $\alpha$-recursive iff it is computable by an $\alpha$-Turing machine, and is $\alpha$-recursively-enumerable iff $A$ is the range of a function computable by an $\alpha$-Turing machine.
A problem in α-recursion theory which is open (as of 2019) is the embedding conjecture for admissible ordinals, which is whether for all admissible $\alpha$, the automorphisms of the $\alpha$-enumeration degrees embed into the automorphisms of the $\alpha$-enumeration degrees.
Relationship to analysis
Some results in $\alpha$-recursion can be translated into similar results about second-order arithmetic. This is because of the relationship $L$ has with the ramified analytic hierarchy, an analog of $L$ for the language of second-order arithmetic, that consists of sets of integers.
In fact, when dealing with first-order logic only, the correspondence can be close enough that for some results on $L_\omega = \mathrm{HF}$, the arithmetical and Levy hierarchies can become interchangeable. For example, a set of natural numbers is definable by a $\Sigma^0_1$ formula iff it's $\Sigma_1$-definable on $L_\omega$, where $\Sigma_1$ is a level of the Levy hierarchy. More generally, definability of a subset of ω over HF with a $\Sigma_n$ formula coincides with its arithmetical definability using a $\Sigma_n$ formula.
References
Gerald Sacks, Higher recursion theory, Springer Verlag, 1990 https://projecteuclid.org/euclid.pl/1235422631
Robert Soare, Recursively Enumerable Sets and Degrees, Springer Verlag, 1987 https://projecteuclid.org/euclid.bams/1183541465
Keith J. Devlin, An introduction to the fine structure of the constructible hierarchy (p.38), North-Holland Publishing, 1974
J. Barwise, Admissible Sets and Structures. 1975
Inline references
Computability theory | Alpha recursion theory | [
"Mathematics"
] | 1,083 | [
"Computability theory",
"Mathematical logic"
] |
3,094,328 | https://en.wikipedia.org/wiki/Tight%20binding | In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations.
Introduction
The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited.
Though the mathematical formulation of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix elements, which would simply be called the bond energies by a chemist.
In general there are a number of atomic energy levels and atomic orbitals involved in the model. This can lead to complicated band structures because the orbitals belong to different point-group representations. The reciprocal lattice and the Brillouin zone often belong to a different space group than the crystal of the solid. High-symmetry points in the Brillouin zone belong to different point-group representations. When simple systems like the lattices of elements or simple compounds are studied it is often not very difficult to calculate eigenstates in high-symmetry points analytically. So the tight-binding model can provide nice examples for those who want to learn more about group theory.
The tight-binding model has a long history and has been applied in many ways and with many different purposes and different outcomes. The model doesn't stand on its own. Parts of the model can be filled in or extended by other kinds of calculations and models like the nearly-free electron model. The model itself, or parts of it, can serve as the basis for other calculations. In the study of conductive polymers, organic semiconductors and molecular electronics, for example, tight-binding-like models are applied in which the role of the atoms in the original concept is replaced by the molecular orbitals of conjugated systems and where the interatomic matrix elements are replaced by inter- or intramolecular hopping and tunneling parameters. These conductors nearly all have very anisotropic properties and sometimes are almost perfectly one-dimensional.
Historical background
By 1928, the idea of a molecular orbital had been advanced by Robert Mulliken, who was influenced considerably by the work of Friedrich Hund. The LCAO method for approximating molecular orbitals was introduced in 1928 by B. N. Finklestein and G. E. Horowitz, while the LCAO method for solids was developed by Felix Bloch, as part of his doctoral dissertation in 1928, concurrently with and independent of the LCAO-MO approach. A much simpler interpolation scheme for approximating the electronic band structure, especially for the d-bands of transition metals, is the parameterized tight-binding method conceived in 1954 by John Clarke Slater and George Fred Koster, sometimes referred to as the SK tight-binding method. With the SK tight-binding method, electronic band structure calculations on a solid need not be carried out with full rigor as in the original Bloch's theorem but, rather, first-principles calculations are carried out only at high-symmetry points and the band structure is interpolated over the remainder of the Brillouin zone between these points.
In this approach, interactions between different atomic sites are considered as perturbations. There exist several kinds of interactions we must consider. The crystal Hamiltonian is only approximately a sum of atomic Hamiltonians located at different sites and atomic wave functions overlap adjacent atomic sites in the crystal, and so are not accurate representations of the exact wave function. There are further explanations in the next section with some mathematical expressions.
In recent research on strongly correlated materials, the tight binding approach is a basic approximation, because highly localized electrons like 3d transition metal electrons sometimes display strongly correlated behavior. In this case, the role of electron-electron interaction must be considered using the many-body physics description.
The tight-binding model is typically used for calculations of electronic band structure and band gaps in the static regime. However, in combination with other methods such as the random phase approximation (RPA) model, the dynamic response of systems may also be studied. In 2019, Bannwarth et al. introduced the GFN2-xTB method, primarily for the calculation of structures and non-covalent interaction energies.
Mathematical formulation
We introduce the atomic orbitals $\varphi_m(\mathbf{r})$, which are eigenfunctions of the Hamiltonian $H_{\mathrm{at}}$ of a single isolated atom. When the atom is placed in a crystal, this atomic wave function overlaps adjacent atomic sites, and so is not a true eigenfunction of the crystal Hamiltonian. The overlap is less when electrons are tightly bound, which is the source of the descriptor "tight-binding". Any corrections to the atomic potential $\Delta U$ required to obtain the true Hamiltonian $H$ of the system are assumed small:
$$H(\mathbf{r}) = H_{\mathrm{at}}(\mathbf{r}) + \Delta U(\mathbf{r}), \qquad \Delta U(\mathbf{r}) = \sum_{\mathbf{R}_n \neq \mathbf{0}} U_{\mathrm{at}}(\mathbf{r} - \mathbf{R}_n),$$
where $U_{\mathrm{at}}(\mathbf{r} - \mathbf{R}_n)$ denotes the atomic potential of one atom located at site $\mathbf{R}_n$ in the crystal lattice. A solution $\psi(\mathbf{r})$ to the time-independent single electron Schrödinger equation is then approximated as a linear combination of atomic orbitals $\varphi_m(\mathbf{r} - \mathbf{R}_n)$:
$$\psi(\mathbf{r}) = \sum_{m, \mathbf{R}_n} b_m(\mathbf{R}_n)\, \varphi_m(\mathbf{r} - \mathbf{R}_n),$$
where $m$ refers to the m-th atomic energy level.
Translational symmetry and normalization
The Bloch theorem states that the wave function in a crystal can change under translation only by a phase factor:
$$\psi(\mathbf{r} + \mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell}\, \psi(\mathbf{r}),$$
where $\mathbf{k}$ is the wave vector of the wave function. Consequently, the coefficients satisfy
$$\sum_{m, \mathbf{R}_n} b_m(\mathbf{R}_n)\, \varphi_m(\mathbf{r} - \mathbf{R}_n + \mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell} \sum_{m, \mathbf{R}_n} b_m(\mathbf{R}_n)\, \varphi_m(\mathbf{r} - \mathbf{R}_n).$$
By substituting $\mathbf{R}_p = \mathbf{R}_n - \mathbf{R}_\ell$, we find
$$b_m(\mathbf{R}_p + \mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell}\, b_m(\mathbf{R}_p)$$
(where in RHS we have replaced the dummy index $\mathbf{R}_n$ with $\mathbf{R}_p$)
or
$$b_m(\mathbf{R}_\ell) = e^{i \mathbf{k} \cdot \mathbf{R}_\ell}\, b_m(\mathbf{0}).$$
Normalizing the wave function to unity:
$$\int d^3 r\; \psi^*(\mathbf{r})\, \psi(\mathbf{r}) = 1,$$
so the normalization sets $b(\mathbf{0})$ as
$$b^*(\mathbf{0})\, b(\mathbf{0}) = \frac{1}{N} \cdot \frac{1}{1 + \sum_{\mathbf{R}_p \neq \mathbf{0}} e^{-i \mathbf{k} \cdot \mathbf{R}_p}\, \alpha_m(\mathbf{R}_p)},$$
where $\alpha_m(\mathbf{R}_p) = \int \varphi_m^*(\mathbf{r})\, \varphi_m(\mathbf{r} - \mathbf{R}_p)\, d^3 r$ are the atomic overlap integrals, which frequently are neglected resulting in
$$b_m(\mathbf{0}) \approx \frac{1}{\sqrt{N}}$$
and
$$\psi(\mathbf{r}) \approx \frac{1}{\sqrt{N}} \sum_{m, \mathbf{R}_n} e^{i \mathbf{k} \cdot \mathbf{R}_n}\, \varphi_m(\mathbf{r} - \mathbf{R}_n).$$
The tight binding Hamiltonian
Using the tight binding form for the wave function, and assuming only the m-th atomic energy level is important for the m-th energy band, the Bloch energies are of the form
$$\varepsilon_m(\mathbf{k}) = E_m - \beta_m - \frac{\sum_{\mathbf{R}_n \neq \mathbf{0}} \gamma_m(\mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n}}{1 + \sum_{\mathbf{R}_n \neq \mathbf{0}} \alpha_m(\mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n}}.$$
Here in the last step it was assumed that the overlap integral is zero and thus $\alpha_m = 0$. The energy then becomes
$$\varepsilon_m(\mathbf{k}) = E_m - \beta_m - \sum_{\mathbf{R}_n \neq \mathbf{0}} \gamma_m(\mathbf{R}_n)\, e^{i \mathbf{k} \cdot \mathbf{R}_n},$$
where Em is the energy of the m-th atomic level, and $\beta_m$, $\gamma_m$ and $\alpha_m$ are the tight binding matrix elements discussed below.
The tight binding matrix elements
The elements $\beta_m = -\langle \varphi_m | \Delta U | \varphi_m \rangle$ are the atomic energy shift due to the potential on neighboring atoms. This term is relatively small in most cases. If it is large it means that potentials on neighboring atoms have a large influence on the energy of the central atom.
The next class of terms $\gamma_{m,l}(\mathbf{R}_n) = -\langle \varphi_m(\mathbf{r}) | \Delta U(\mathbf{r}) | \varphi_l(\mathbf{r} - \mathbf{R}_n) \rangle$ is the interatomic matrix element between the atomic orbitals m and l on adjacent atoms. It is also called the bond energy or two center integral and it is the dominant term in the tight binding model.
The last class of terms $\alpha_{m,l}(\mathbf{R}_n) = \langle \varphi_m(\mathbf{r}) | \varphi_l(\mathbf{r} - \mathbf{R}_n) \rangle$ denote the overlap integrals between the atomic orbitals m and l on adjacent atoms. These, too, are typically small; if not, then Pauli repulsion has a non-negligible influence on the energy of the central atom.
Evaluation of the matrix elements
As mentioned before the values of the $\beta_m$-matrix elements are not so large in comparison with the ionization energy because the potentials of neighboring atoms on the central atom are limited. If $\beta_m$ is not relatively small it means that the potential of the neighboring atom on the central atom is not small either. In that case it is an indication that the tight binding model is not a very good model for the description of the band structure for some reason. The interatomic distances can be too small or the charges on the atoms or ions in the lattice are wrong, for example.
The interatomic matrix elements $\gamma_{m,l}$ can be calculated directly if the atomic wave functions and the potentials are known in detail. Most often this is not the case. There are numerous ways to get parameters for these matrix elements. Parameters can be obtained from chemical bond energy data. Energies and eigenstates on some high symmetry points in the Brillouin zone can be evaluated and the values of the integrals in the matrix elements can be matched with band structure data from other sources.
The interatomic overlap matrix elements should be rather small or neglectable. If they are large it is again an indication that the tight binding model is of limited value for some purposes. Large overlap is an indication for too short interatomic distance for example. In metals and transition metals the broad s-band or sp-band can be fitted better to an existing band structure calculation by the introduction of next-nearest-neighbor matrix elements and overlap integrals but fits like that don't yield a very useful model for the electronic wave function of a metal. Broad bands in dense materials are better described by a nearly free electron model.
The tight binding model works particularly well in cases where the band width is small and the electrons are strongly localized, like in the case of d-bands and f-bands. The model also gives good results in the case of open crystal structures, like diamond or silicon, where the number of neighbors is small. The model can easily be combined with a nearly free electron model in a hybrid NFE-TB model.
Connection to Wannier functions
Bloch functions describe the electronic states in a periodic crystal lattice. Bloch functions can be represented as a Fourier series
$$\psi_m(\mathbf{k}, \mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{n} a_m(\mathbf{R}_n, \mathbf{r})\, e^{i \mathbf{k} \cdot \mathbf{R}_n},$$
where $\mathbf{R}_n$ denotes an atomic site in a periodic crystal lattice, $\mathbf{k}$ is the wave vector of the Bloch's function, $\mathbf{r}$ is the electron position, $m$ is the band index, and the sum is over all $N$ atomic sites. The Bloch's function is an exact eigensolution for the wave function of an electron in a periodic crystal potential corresponding to an energy $E_m(\mathbf{k})$, and is spread over the entire crystal volume.
Using the Fourier transform analysis, a spatially localized wave function for the m-th energy band can be constructed from multiple Bloch's functions:
$$a_m(\mathbf{R}_n, \mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{\mathbf{k}} e^{-i \mathbf{k} \cdot \mathbf{R}_n}\, \psi_m(\mathbf{k}, \mathbf{r}).$$
These real space wave functions $a_m(\mathbf{R}_n, \mathbf{r})$ are called Wannier functions, and are fairly closely localized to the atomic site $\mathbf{R}_n$. Of course, if we have exact Wannier functions, the exact Bloch functions can be derived using the inverse Fourier transform.
Second quantization
Modern explanations of electronic structure like the t-J model and the Hubbard model are based on the tight binding model. Tight binding can be understood by working under a second quantization formalism.
Using the atomic orbital as a basis state, the second quantization Hamiltonian operator in the tight binding framework can be written as:
,
- creation and annihilation operators
- spin polarization
- hopping integral
- nearest neighbor index
- the hermitian conjugate of the other term(s)
Here, hopping integral corresponds to the transfer integral in tight binding model. Considering extreme cases of , it is impossible for an electron to hop into neighboring sites. This case is the isolated atomic system. If the hopping term is turned on () electrons can stay in both sites lowering their kinetic energy.
In the strongly correlated electron system, it is necessary to consider the electron-electron interaction. This term can be written as
This interaction Hamiltonian includes the direct Coulomb interaction energy and the exchange interaction energy between electrons. Several novel physical phenomena are induced by this electron-electron interaction energy, such as metal-insulator transitions (MIT), high-temperature superconductivity, and several quantum phase transitions.
Example: one-dimensional s-band
Here the tight binding model is illustrated with an s-band model for a string of atoms with a single s-orbital in a straight line with spacing $a$ and σ bonds between atomic sites.
To find approximate eigenstates of the Hamiltonian, we can use a linear combination of the atomic orbitals
$$|k\rangle = \frac{1}{\sqrt{N}} \sum_{n=1}^{N} e^{inka} |n\rangle\,,$$
where N = total number of sites and $k$ is a real parameter with $-\frac{\pi}{a} \le k \le \frac{\pi}{a}$. (This wave function is normalized to unity by the leading factor 1/√N provided overlap of atomic wave functions is ignored.) Assuming only nearest neighbor overlap, the only non-zero matrix elements of the Hamiltonian can be expressed as
The energy Ei is the ionization energy corresponding to the chosen atomic orbital and U is the energy shift of the orbital as a result of the potential of neighboring atoms. The $\langle n \pm 1|H|n \rangle$ elements, which are the Slater and Koster interatomic matrix elements, are the bond energies $E_{i,j}$. In this one dimensional s-band model we only have σ-bonds between the s-orbitals with bond energy $E_{s,s} = V_{ss\sigma}$. The overlap between states on neighboring atoms is S. We can derive the energy of the state $|k\rangle$ using the above equation:
where, for example,
and
Thus the energy of this state can be represented in the familiar form of the energy dispersion:
$$E(k) = E_i + U + 2 E_{s,s} \cos(ka)\,,$$
where the overlap S has been neglected and $E_{s,s} = V_{ss\sigma} < 0$ for σ-bonded s-orbitals.
For $k = 0$ the energy is $E = E_i + U + 2E_{s,s}$ and the state consists of a sum of all atomic orbitals. This state can be viewed as a chain of bonding orbitals.
For $k = \pi/(2a)$ the energy is $E = E_i + U$ and the state consists of a sum of atomic orbitals which are a factor $e^{i\pi/2}$ out of phase. This state can be viewed as a chain of non-bonding orbitals.
Finally for $k = \pi/a$ the energy is $E = E_i + U - 2E_{s,s}$ and the state consists of an alternating sum of atomic orbitals. This state can be viewed as a chain of anti-bonding orbitals.
This example is readily extended to three dimensions, for example, to a body-centered cubic or face-centered cubic lattice by introducing the nearest neighbor vector locations in place of simply n a. Likewise, the method can be extended to multiple bands using multiple different atomic orbitals at each site. The general formulation above shows how these extensions can be accomplished.
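The cosine dispersion can be checked against exact diagonalization. The sketch below assumes the same no-overlap convention as above, with the on-site energy Ei + U taken as the energy zero and an illustrative bond energy:

```python
import numpy as np

# Tight binding chain of N sites with periodic boundary conditions.
onsite = 0.0     # E_i + U, taken as the energy zero
Ess = -1.0       # sigma bond energy between neighboring s-orbitals (assumed)
a, N = 1.0, 64

H = np.zeros((N, N))
for i in range(N):
    H[i, i] = onsite
    H[i, (i + 1) % N] = Ess     # hop to the right neighbor
    H[(i + 1) % N, i] = Ess     # hop to the left neighbor

exact = np.sort(np.linalg.eigvalsh(H))
ks = 2 * np.pi * np.arange(N) / (N * a)             # allowed wave vectors
analytic = np.sort(onsite + 2 * Ess * np.cos(ks * a))

print(np.allclose(exact, analytic))   # True: E(k) = onsite + 2*Ess*cos(ka)
```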
Table of interatomic matrix elements
In 1954 J.C. Slater and G.F. Koster published, mainly for the calculation of transition metal d-bands, a table of interatomic matrix elements
which can also be derived from the cubic harmonic orbitals straightforwardly. The table expresses the matrix elements as functions of LCAO two-centre bond integrals between two cubic harmonic orbitals, i and j, on adjacent atoms. The bond integrals are for example the $V_{ss\sigma}$, $V_{pp\pi}$ and $V_{dd\delta}$ for sigma, pi and delta bonds. (Notice that these integrals should also depend on the distance between the atoms, i.e. they are a function of $d$, even though it is not explicitly stated every time.)
The interatomic vector is expressed as
$$\mathbf{d} = d\,(l, m, n)\,,$$
where d is the distance between the atoms and l, m and n are the direction cosines to the neighboring atom.
Not all interatomic matrix elements are listed explicitly. Matrix elements that are not listed in this table can be constructed by permutation of indices and cosine directions of other matrix elements in the table. Note that swapping orbital indices amounts to taking $(l,m,n) \to (-l,-m,-n)$, i.e. $E_{\beta,\alpha}(l,m,n) = E_{\alpha,\beta}(-l,-m,-n)$. For example, $E_{x,s}(l,m,n) = E_{s,x}(-l,-m,-n) = -l\,V_{sp\sigma}$.
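A few of the standard Slater–Koster expressions for s- and p-orbitals are easy to evaluate in code. The sketch below uses arbitrary illustrative values for the bond integrals, whose distance dependence is left implicit as in the text, and computes matrix elements from the direction cosines:

```python
import numpy as np

# A handful of standard Slater-Koster table entries for s and p
# orbitals as functions of the direction cosines (l, m, n) and the
# two-centre bond integrals (sigma and pi).
def sk_elements(l, m, n, Vsss, Vsps, Vpps, Vppp):
    return {
        ("s", "s"): Vsss,
        ("s", "x"): l * Vsps,
        ("x", "x"): l**2 * Vpps + (1 - l**2) * Vppp,
        ("x", "y"): l * m * (Vpps - Vppp),
        ("x", "z"): l * n * (Vpps - Vppp),
    }

# Direction cosines for a neighbor along the (1, 1, 0) direction.
d = np.array([1.0, 1.0, 0.0])
l, m, n = d / np.linalg.norm(d)

# Bond integral values below are purely illustrative.
print(sk_elements(l, m, n, Vsss=-1.4, Vsps=1.8, Vpps=3.2, Vppp=-0.8))
```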
See also
Electronic band structure
Nearly-free electron model
Bloch's theorems
Kronig-Penney model
Fermi surface
Wannier function
Hubbard model
t-J model
Effective mass
Anderson's rule
Dynamical theory of diffraction
Solid state physics
Linear combination of atomic orbitals molecular orbital method (LCAO)
Holstein–Herring method
Peierls substitution
Hückel method
References
N. W. Ashcroft and N. D. Mermin, Solid State Physics (Thomson Learning, Toronto, 1976).
Stephen Blundell, Magnetism in Condensed Matter (Oxford, 2001).
S. Maekawa et al., Physics of Transition Metal Oxides (Springer-Verlag Berlin Heidelberg, 2004).
John Singleton, Band Theory and Electronic Properties of Solids (Oxford, 2001).
Further reading
External links
Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012,
Tight-Binding Studio: A Technical Software Package to Find the Parameters of Tight-Binding Hamiltonian
Electronic structure methods
Electronic band structures | Tight binding | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,471 | [
"Electron",
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Electronic band structures",
"Computational chemistry",
"Condensed matter physics"
] |
3,094,450 | https://en.wikipedia.org/wiki/Degree%20of%20a%20continuous%20mapping | In topology, the degree of a continuous mapping between two compact oriented manifolds of the same dimension is a number that represents the number of times that the domain manifold wraps around the range manifold under the mapping. The degree is always an integer, but may be positive or negative depending on the orientations.
The degree of a map between general manifolds was first defined by Brouwer, who showed that the degree is homotopy invariant and used it to prove the Brouwer fixed point theorem. Less general forms of the concept existed before Brouwer, such as the winding number and the Kronecker characteristic (or Kronecker integral).
In modern mathematics, the degree of a map plays an important role in topology and geometry. In physics, the degree of a continuous map (for instance a map from space to some order parameter set) is one example of a topological quantum number.
Definitions of the degree
From Sn to Sn
The simplest and most important case is the degree of a continuous map from the $n$-sphere $S^n$ to itself (in the case $n = 1$, this is called the winding number):
Let $f \colon S^n \to S^n$ be a continuous map. Then $f$ induces a pushforward homomorphism $f_* \colon H_n(S^n) \to H_n(S^n)$, where $H_n$ is the $n$th homology group. Considering the fact that $H_n(S^n) \cong \mathbb{Z}$, we see that $f_*$ must be of the form $f_* \colon x \mapsto \alpha x$ for some fixed $\alpha \in \mathbb{Z}$.
This $\alpha$ is then called the degree of $f$.
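For $n = 1$ the degree is the classical winding number, which can be estimated numerically. The following sketch (a hypothetical helper, not part of any standard library) accumulates the phase change of a circle map around one full turn:

```python
import numpy as np

# Winding number of a map f: S^1 -> S^1, given as a function of the
# angle returning points on the unit circle in the complex plane.
def winding_number(f, samples=10000):
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    phase = np.angle(f(theta))
    # signed phase increments, wrapped back into (-pi, pi]
    dphi = np.diff(np.concatenate([phase, phase[:1]]))
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
    return int(round(dphi.sum() / (2.0 * np.pi)))

print(winding_number(lambda t: np.exp(3j * t)))    # degree  3
print(winding_number(lambda t: np.exp(-2j * t)))   # degree -2
```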
Between manifolds
Algebraic topology
Let X and Y be closed connected oriented m-dimensional manifolds. Poincaré duality implies that the manifold's top homology group is isomorphic to Z. Choosing an orientation means choosing a generator of the top homology group.
A continuous map f : X → Y induces a homomorphism f∗ from Hm(X) to Hm(Y). Let [X], resp. [Y] be the chosen generator of Hm(X), resp. Hm(Y) (or the fundamental class of X, Y). Then the degree of f is defined to be f∗([X]). In other words,
$$f_*([X]) = \deg(f)\, [Y]\,.$$
If y is in Y and f −1(y) is a finite set, the degree of f can be computed by considering the m-th local homology groups of X at each point in f −1(y). Namely, if $f^{-1}(y) = \{x_1, \dots, x_n\}$, then the degree of f is the sum of the local degrees of f at the points $x_i$.
Differential topology
In the language of differential topology, the degree of a smooth map can be defined as follows: If f is a smooth map whose domain is a compact manifold and p is a regular value of f, consider the finite set
$$f^{-1}(p) = \{x_1, x_2, \ldots, x_n\}\,.$$
By p being a regular value, in a neighborhood of each xi the map f is a local diffeomorphism. Diffeomorphisms can be either orientation preserving or orientation reversing. Let r be the number of points xi at which f is orientation preserving and s be the number at which f is orientation reversing. When the codomain of f is connected, the number r − s is independent of the choice of p (though n is not!) and one defines the degree of f to be r − s. This definition coincides with the algebraic topological definition above.
The same definition works for compact manifolds with boundary but then f should send the boundary of X to the boundary of Y.
One can also define degree modulo 2 (deg2(f)) the same way as before but taking the fundamental class in Z2 homology. In this case deg2(f) is an element of Z2 (the field with two elements), the manifolds need not be orientable and if n is the number of preimages of p as before then deg2(f) is n modulo 2.
Integration of differential forms gives a pairing between (C∞-)singular homology and de Rham cohomology: $\langle c, \omega \rangle = \int_c \omega$, where $c$ is a homology class represented by a cycle and $\omega$ a closed form representing a de Rham cohomology class. For a smooth map f: X → Y between orientable m-manifolds, one has
$$\langle f_* c, \omega \rangle = \langle c, f^* \omega \rangle\,,$$
where f∗ and f∗ are induced maps on chains and forms respectively. Since f∗[X] = deg f · [Y], we have
$$\deg f \int_Y \omega = \int_X f^* \omega$$
for any m-form ω on Y.
Maps from closed region
If $\Omega \subset \mathbb{R}^n$ is a bounded region, $f \colon \bar{\Omega} \to \mathbb{R}^n$ smooth, $p$ a regular value of $f$ and $p \notin f(\partial\Omega)$, then the degree $\deg(f, \Omega, p)$ is defined by the formula
$$\deg(f, \Omega, p) = \sum_{y \in f^{-1}(p)} \operatorname{sign} \det Df(y)\,,$$
where $Df(y)$ is the Jacobian matrix of $f$ in $y$.
This definition of the degree may be naturally extended for non-regular values $p$ such that $\deg(f, \Omega, p) = \deg(f, \Omega, p')$, where $p'$ is a point close to $p$. The topological degree can also be calculated using a surface integral over the boundary of $\Omega$, and if $\Omega$ is a connected n-polytope, then the degree can be expressed as a sum of determinants over a certain subdivision of its facets.
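The sign-of-Jacobian formula is easy to evaluate when the preimages of a regular value are known. In the illustrative sketch below (assuming the formula above), f is the complex squaring map z → z² written as a map of the plane; the regular value (0.25, 0) has the two preimages ±0.5, each with positive Jacobian determinant, so the degree is 2:

```python
import numpy as np

# f(x, y) = (x^2 - y^2, 2xy), i.e. the complex map z -> z^2.
def jacobian(x, y):
    return np.array([[2 * x, -2 * y],
                     [2 * y,  2 * x]])

# Preimages of the regular value p = (0.25, 0) inside the unit disc.
preimages = [(0.5, 0.0), (-0.5, 0.0)]
degree = sum(int(np.sign(np.linalg.det(jacobian(x, y))))
             for x, y in preimages)
print(degree)   # 2
```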
The degree satisfies the following properties:
If $\deg(f, \Omega, p) \neq 0$, then there exists $x \in \Omega$ such that $f(x) = p$.
$\deg(\operatorname{id}, \Omega, y) = 1$ for all $y \in \Omega$.
Decomposition property: $\deg(f, \Omega, y) = \deg(f, \Omega_1, y) + \deg(f, \Omega_2, y)$ if $\Omega_1$ and $\Omega_2$ are disjoint parts of $\Omega = \Omega_1 \cup \Omega_2$ and $y \notin f(\overline{\Omega} \setminus (\Omega_1 \cup \Omega_2))$.
Homotopy invariance: If $f$ and $g$ are homotopy equivalent via a homotopy $H(t)$ such that $H(0) = f$ and $H(1) = g$, and $p \notin H(t)(\partial\Omega)$, then $\deg(f, \Omega, p) = \deg(g, \Omega, p)$.
The function $p \mapsto \deg(f, \Omega, p)$ is locally constant on $\mathbb{R}^n \setminus f(\partial\Omega)$.
These properties characterise the degree uniquely and the degree may be defined by them in an axiomatic way.
In a similar way, we could define the degree of a map between compact oriented manifolds with boundary.
Properties
The degree of a map is a homotopy invariant; moreover for continuous maps from the sphere to itself it is a complete homotopy invariant, i.e. two maps $f, g \colon S^n \to S^n$ are homotopic if and only if $\deg(f) = \deg(g)$.
In other words, degree is an isomorphism between $[S^n, S^n] = \pi_n(S^n)$ and $\mathbb{Z}$.
Moreover, the Hopf theorem states that for any $n$-dimensional closed oriented manifold M, two maps $f, g \colon M \to S^n$ are homotopic if and only if $\deg(f) = \deg(g)$.
A self-map $f \colon S^n \to S^n$ of the n-sphere is extendable to a map $F \colon B_{n+1} \to S^n$ from the (n+1)-ball to the n-sphere if and only if $\deg(f) = 0$. (Here the function F extends f in the sense that f is the restriction of F to $S^n$.)
Calculating the degree
There is an algorithm for calculating the topological degree deg(f, B, 0) of a continuous function f from an n-dimensional box B (a product of n intervals) to , where f is given in the form of arithmetical expressions. An implementation of the algorithm is available in TopDeg - a software tool for computing the degree (LGPL-3).
See also
Covering number, a similarly named term. Note that it does not generalize the winding number but describes covers of a set by balls
Density (polytope), a polyhedral analog
Topological degree theory
Notes
References
External links
Let's get acquainted with the mapping degree, by Rade T. Zivaljevic.
Algebraic topology
Differential topology
Theory of continuous functions | Degree of a continuous mapping | [
"Mathematics"
] | 1,364 | [
"Theory of continuous functions",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Differential topology"
] |
3,094,527 | https://en.wikipedia.org/wiki/Castelnuovo%E2%80%93de%20Franchis%20theorem | In mathematics, the Castelnuovo–de Franchis theorem is a classical result on complex algebraic surfaces. Let X be such a surface, projective and non-singular, and let
ω1 and ω2
be two differentials of the first kind on X which are linearly independent but with wedge product 0. Then this data can be represented as a pullback of an algebraic curve: there is a non-singular algebraic curve C, a morphism
φ: X → C,
and differentials of the first kind ω′1 and ω′2 on C such that
φ*(ω′1) = ω1 and φ*(ω′2) = ω2.
This result is due to Guido Castelnuovo and Michele de Franchis (1875–1946).
The converse, that two such pullbacks would have wedge 0, is immediate.
See also
de Franchis theorem
References
Algebraic surfaces
Theorems in geometry | Castelnuovo–de Franchis theorem | [
"Mathematics"
] | 188 | [
"Mathematical theorems",
"Mathematical problems",
"Geometry",
"Theorems in geometry"
] |
3,094,621 | https://en.wikipedia.org/wiki/Charge%20conservation | In physics, charge conservation is the principle, of experimental nature, that the total electric charge in an isolated system never changes. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always conserved. Charge conservation, considered as a physical conservation law, implies that the change in the amount of electric charge in any volume of space is exactly equal to the amount of charge flowing into the volume minus the amount of charge flowing out of the volume. In essence, charge conservation is an accounting relationship between the amount of charge in a region and the flow of charge into and out of that region, given by a continuity equation between charge density and current density .
This does not mean that individual positive and negative charges cannot be created or destroyed. Electric charge is carried by subatomic particles such as electrons and protons. Charged particles can be created and destroyed in elementary particle reactions. In particle physics, charge conservation means that in reactions that create charged particles, equal numbers of positive and negative particles are always created, keeping the net amount of charge unchanged. Similarly, when particles are destroyed, equal numbers of positive and negative charges are destroyed. This property is supported without exception by all empirical observations so far.
Although conservation of charge requires that the total quantity of charge in the universe is constant, it leaves open the question of what that quantity is. Most evidence indicates that the net charge in the universe is zero; that is, there are equal quantities of positive and negative charge.
History
Charge conservation was first proposed by British scientist William Watson in 1746 and American statesman and scientist Benjamin Franklin in 1747, although the first convincing proof was given by Michael Faraday in 1843.
Formal statement of the law
Mathematically, we can state the law of charge conservation as a continuity equation:
$$\frac{\mathrm{d}q}{\mathrm{d}t} = \dot{q}_{\text{in}} - \dot{q}_{\text{out}}\,,$$
where $\mathrm{d}q/\mathrm{d}t$ is the electric charge accumulation rate in a specific volume at time $t$, $\dot{q}_{\text{in}}$ is the amount of charge flowing into the volume and $\dot{q}_{\text{out}}$ is the amount of charge flowing out of the volume; both amounts are regarded as generic functions of time.
The integrated continuity equation between two time values reads:
$$q(t_2) = q(t_1) + \int_{t_1}^{t_2} \left( \dot{q}_{\text{in}}(\tau) - \dot{q}_{\text{out}}(\tau) \right) \mathrm{d}\tau\,.$$
The general solution is obtained by fixing the initial condition time $t_0$, leading to the integral equation:
$$q(t) = q(t_0) + \int_{t_0}^{t} \left( \dot{q}_{\text{in}}(\tau) - \dot{q}_{\text{out}}(\tau) \right) \mathrm{d}\tau\,.$$
The condition $q(t) = q(t_0)$ corresponds to the absence of charge quantity change in the control volume: the system has reached a steady state. From the above condition, the following must hold true:
$$\int_{t_0}^{t} \left( \dot{q}_{\text{in}}(\tau) - \dot{q}_{\text{out}}(\tau) \right) \mathrm{d}\tau = 0\,;$$
therefore, $\dot{q}_{\text{in}}$ and $\dot{q}_{\text{out}}$ are equal (not necessarily constant) over time, and the overall charge inside the control volume does not change. This deduction could be derived directly from the continuity equation, since at steady state $\mathrm{d}q/\mathrm{d}t = 0$ holds, which implies $\dot{q}_{\text{in}} = \dot{q}_{\text{out}}$.
In electromagnetic field theory, vector calculus can be used to express the law in terms of charge density $\rho$ (in coulombs per cubic meter) and electric current density $\mathbf{J}$ (in amperes per square meter). This is called the charge density continuity equation
$$\frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{J}\,.$$
The term on the left is the rate of change of the charge density at a point. The term on the right is the divergence of the current density at the same point. The equation equates these two factors, which says that the only way for the charge density at a point to change is for a current of charge to flow into or out of the point. This statement is equivalent to a conservation of four-current.
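The accounting character of the continuity equation can be illustrated with a small finite-difference sketch (a toy 1D model with an assumed drift current, not a physical simulation): updating the charge density from the divergence of the current leaves the total charge unchanged.

```python
import numpy as np

# 1D periodic grid: d(rho)/dt = -dJ/dx conserves the total charge,
# since the discrete divergence of a periodic current sums to zero.
N, dx, dt = 200, 0.05, 0.001
x = np.arange(N) * dx
rho = np.exp(-((x - 5.0) ** 2))          # initial charge blob
q0 = rho.sum() * dx                      # initial total charge

for _ in range(1000):
    J = 0.3 * rho                        # simple drift current J = v * rho
    divJ = (np.roll(J, -1) - np.roll(J, 1)) / (2 * dx)
    rho = rho - dt * divJ                # continuity equation update

print(np.isclose(rho.sum() * dx, q0))    # True: charge is conserved
```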
Mathematical derivation
The net current into a volume is
$$I = -\oint_S \mathbf{J} \cdot \mathrm{d}\mathbf{S}\,,$$
where $S = \partial V$ is the boundary of $V$ oriented by outward-pointing normals, and $\mathrm{d}\mathbf{S}$ is shorthand for $\mathbf{N}\, \mathrm{d}S$, the outward pointing normal of the boundary $\partial V$. Here $\mathbf{J}$ is the current density (charge per unit area per unit time) at the surface of the volume. The vector points in the direction of the current.
From the divergence theorem this can be written
$$I = -\iiint_V \left( \nabla \cdot \mathbf{J} \right) \mathrm{d}V\,.$$
Charge conservation requires that the net current into a volume must necessarily equal the net change in charge within the volume:
$$\frac{\mathrm{d}q}{\mathrm{d}t} = I\,. \qquad (1)$$
The total charge q in volume V is the integral (sum) of the charge density in V:
$$q = \iiint_V \rho\, \mathrm{d}V\,.$$
So, by the Leibniz integral rule
$$\frac{\mathrm{d}q}{\mathrm{d}t} = \iiint_V \frac{\partial \rho}{\partial t}\, \mathrm{d}V\,. \qquad (2)$$
Equating (1) and (2) gives
$$0 = \iiint_V \left( \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} \right) \mathrm{d}V\,.$$
Since this is true for every volume, we have in general
$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0\,.$$
Derivation from Maxwell's Laws
The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the modified Ampère's law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:
$$0 = \nabla \cdot \left( \nabla \times \mathbf{B} \right) = \mu_0 \left( \nabla \cdot \mathbf{J} + \varepsilon_0 \frac{\partial}{\partial t} \nabla \cdot \mathbf{E} \right) = \mu_0 \left( \nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t} \right),$$
i.e.,
$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0\,.$$
By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:
In particular, in an isolated system the total charge is conserved.
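The vector identity that drives this derivation, div(curl B) = 0, holds for any smooth field and can be verified symbolically. A minimal check with sympy:

```python
import sympy as sp

# Verify that the divergence of a curl vanishes identically, which is
# what forces charge conservation on the right-hand side of the
# modified Ampere's law.
x, y, z = sp.symbols('x y z')
Bx, By, Bz = [sp.Function(name)(x, y, z) for name in ('Bx', 'By', 'Bz')]

curl = (sp.diff(Bz, y) - sp.diff(By, z),
        sp.diff(Bx, z) - sp.diff(Bz, x),
        sp.diff(By, x) - sp.diff(Bx, y))
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)

print(sp.simplify(div_curl))   # 0, for arbitrary smooth Bx, By, Bz
```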
Connection to gauge invariance
Charge conservation can also be understood as a consequence of symmetry through Noether's theorem, a central result in theoretical physics that asserts that each conservation law is associated with a symmetry of the underlying physics. The symmetry that is associated with charge conservation is the global gauge invariance of the electromagnetic field. This is related to the fact that the electric and magnetic fields are not changed by different choices of the value representing the zero point of electrostatic potential $\varphi$. However the full symmetry is more complicated, and also involves the vector potential $\mathbf{A}$. The full statement of gauge invariance is that the physics of an electromagnetic field are unchanged when the scalar and vector potential are shifted by the gradient of an arbitrary scalar field $\chi$:
$$\varphi' = \varphi - \frac{\partial \chi}{\partial t}\,, \qquad \mathbf{A}' = \mathbf{A} + \nabla \chi\,.$$
In quantum mechanics the scalar field is equivalent to a phase shift in the wavefunction of the charged particle:
$$\psi' = e^{iq\chi/\hbar}\, \psi\,,$$
so gauge invariance is equivalent to the well known fact that changes in the overall phase of a wavefunction are unobservable, and only changes in the magnitude of the wavefunction result in changes to the probability function $|\psi|^2$.
Gauge invariance is a very important, well established property of the electromagnetic field and has many testable consequences. The theoretical justification for charge conservation is greatly strengthened by being linked to this symmetry. For example, gauge invariance also requires that the photon be massless, so the good experimental evidence that the photon has zero mass is also strong evidence that charge is conserved. Gauge invariance also implies quantization of hypothetical magnetic charges.
Even if gauge symmetry is exact, however, there might be apparent electric charge non-conservation if charge could leak from our normal 3-dimensional space into hidden extra dimensions.
Experimental evidence
Simple arguments rule out some types of charge nonconservation. For example, the magnitude of the elementary charge on positive and negative particles must be extremely close to equal, differing by no more than a factor of 10−21 for the case of protons and electrons. Ordinary matter contains equal numbers of positive and negative particles, protons and electrons, in enormous quantities. If the elementary charge on the electron and proton were even slightly different, all matter would have a large electric charge and would be mutually repulsive.
The best experimental tests of electric charge conservation are searches for particle decays that would be allowed if electric charge is not always conserved. No such decays have ever been seen.
The best experimental test comes from searches for the energetic photon from an electron decaying into a neutrino and a single photon:
e− → ν + γ,
but there are theoretical arguments that such single-photon decays will never occur even if charge is not conserved.
Charge disappearance tests are sensitive to decays without energetic photons, other unusual charge violating processes such as an electron spontaneously changing into a positron,
and to electric charge moving into other dimensions.
The best experimental bounds on charge disappearance are:
See also
Capacitance
Charge invariance
Conservation Laws and Symmetry
Introduction to gauge theory – includes further discussion of gauge invariance and charge conservation
Kirchhoff's circuit laws – application of charge conservation to electric circuits
Maxwell's equations
Relative charge density
Franklin's electrostatic machine
Notes
Further reading
Electromagnetism
Conservation laws | Charge conservation | [
"Physics"
] | 1,576 | [
"Physical phenomena",
"Electromagnetism",
"Equations of physics",
"Conservation laws",
"Fundamental interactions",
"Symmetry",
"Physics theorems"
] |
3,094,697 | https://en.wikipedia.org/wiki/De%20Franchis%20theorem | In mathematics, the de Franchis theorem is one of a number of closely related statements applying to compact Riemann surfaces, or, more generally, algebraic curves, X and Y, in the case of genus g > 1. The simplest is that the automorphism group of X is finite (see though Hurwitz's automorphisms theorem). More generally,
the set of non-constant morphisms from X to Y is finite;
fixing X, for all but a finite number of such Y, there is no non-constant morphism from X to Y.
These results are named for Michele de Franchis (1875–1946). It is sometimes referenced as the De Franchis–Severi theorem. It was used in an important way by Gerd Faltings to prove the Mordell conjecture.
See also
Castelnuovo–de Franchis theorem
References
M. De Franchis: Un teorema sulle involuzioni irrazionali, Rend. Circ. Mat Palermo 36 (1913), 368
Algebraic curves
Riemann surfaces
Theorems in algebraic geometry
Theorems in algebraic topology | De Franchis theorem | [
"Mathematics"
] | 233 | [
"Theorems in algebraic geometry",
"Theorems in algebraic topology",
"Theorems in geometry",
"Theorems in topology"
] |
3,094,833 | https://en.wikipedia.org/wiki/Vascular%20tissue | Vascular tissue is a complex conducting tissue, formed of more than one cell type, found in vascular plants. The primary components of vascular tissue are the xylem and phloem. These two tissues transport fluid and nutrients internally. There are also two meristems associated with vascular tissue: the vascular cambium and the cork cambium. All the vascular tissues within a particular plant together constitute the vascular tissue system of that plant.
The cells in vascular tissue are typically long and slender. Since the xylem and phloem function in the conduction of water, minerals, and nutrients throughout the plant, it is not surprising that their form should be similar to pipes. The individual cells of phloem are connected end-to-end, just as the sections of a pipe might be. As the plant grows, new vascular tissue differentiates in the growing tips of the plant. The new tissue is aligned with existing vascular tissue, maintaining its connection throughout the plant. The vascular tissue in plants is arranged in long, discrete strands called vascular bundles. These bundles include both xylem and phloem, as well as supporting and protective cells. In stems and roots, the xylem typically lies closer to the interior of the stem with phloem towards the exterior of the stem. In the stems of some Asterales dicots, there may be phloem located inwardly from the xylem as well.
Between the xylem and phloem is a meristem called the vascular cambium. This tissue divides off cells that will become additional xylem and phloem. This growth increases the girth of the plant, rather than its length. As long as the vascular cambium continues to produce new cells, the plant will continue to grow more stout. In trees and other plants that develop wood, the vascular cambium allows the expansion of vascular tissue that produces woody growth. Because this growth ruptures the epidermis of the stem, woody plants also have a cork cambium that develops among the phloem. The cork cambium gives rise to thickened cork cells to protect the surface of the plant and reduce water loss. Both the production of wood and the production of cork are forms of secondary growth.
In leaves, the vascular bundles are located among the spongy mesophyll. The xylem is oriented toward the adaxial surface of the leaf (usually the upper side), and phloem is oriented toward the abaxial surface of the leaf. This is why aphids are typically found on the undersides of leaves rather than on top: the phloem, which transports sugars manufactured by the plant, is closer to the lower surface.
See also
Xylem
Phloem
Cork cambium
Vascular cambium
Vascular plant
Stele (biology)
Circulatory system
External links
Intro to Plant Structure Contains diagrams of the plant tissues, listed as an outline.
Plant anatomy
Plant physiology
Tissues (biology) | Vascular tissue | [
"Biology"
] | 617 | [
"Plant physiology",
"Plants"
] |
3,094,935 | https://en.wikipedia.org/wiki/Reimer%E2%80%93Tiemann%20reaction | The Reimer–Tiemann reaction is a chemical reaction used for the ortho-formylation of phenols.
with the simplest example being the conversion of phenol to salicylaldehyde. The reaction was first reported by Karl Reimer and Ferdinand Tiemann.
Reaction mechanism
Chloroform (1) is deprotonated by a strong base (normally hydroxide) to form the chloroform carbanion (2) which will quickly alpha-eliminate to give dichlorocarbene (3); this is the principal reactive species. The hydroxide will also deprotonate the phenol (4) to give a negatively charged phenoxide (5). The negative charge is delocalised into the aromatic ring, making it far more nucleophilic. Nucleophilic attack on the dichlorocarbene gives an intermediate dichloromethyl substituted phenol (7). After basic hydrolysis, the desired product (9) is formed.
Selectivity
By virtue of its two electron-withdrawing chlorine groups, the carbene (3) is highly electron deficient and is attracted to the electron rich phenoxide (5). This interaction favors selective ortho-formylation, consistent with other electrophilic aromatic substitution reactions.
Reaction conditions
Hydroxides are not readily soluble in chloroform, thus the reaction is generally carried out in a biphasic solvent system. In the simplest sense this consists of an aqueous hydroxide solution and an organic phase containing the chloroform. Therefore, the two reagents are separated and must be brought together for the reaction to take place. This can be achieved by rapid mixing, phase-transfer catalysts, or an emulsifying agent such as 1,4-dioxane as solvent.
The reaction typically needs to be heated to initiate the process; however, once started, the Reimer–Tiemann reaction can be highly exothermic. This combination of properties makes it prone to thermal runaway.
Scope
The Reimer–Tiemann reaction is effective for other hydroxy-aromatic compounds, such as naphthols. Electron rich heterocycles such as pyrroles and indoles are also known to react.
Dichlorocarbenes can react with alkenes and amines to form dichlorocyclopropanes and isocyanides respectively. As such the Reimer–Tiemann reaction may be unsuitable for substrates bearing these functional groups. In addition, many compounds can not withstand being heated with hydroxide.
Comparison to other methods
The direct formylation of aromatic compounds can be accomplished by various methods such as the Gattermann reaction, Gattermann–Koch reaction, Vilsmeier–Haack reaction, or Duff reaction; however, in terms of ease and safety of operations, the Reimer–Tiemann reaction is often the most advantageous route chosen in chemical synthesis. Of the reactions mentioned before, the Reimer–Tiemann reaction is the only route not requiring acidic and/or anhydrous conditions. Additionally the Gattermann-Koch reaction is not applicable to phenol substrates.
Variations
Using carbon tetrachloride instead of chloroform gives a carboxylic acid product instead of an aldehyde. For example, this reaction variant with phenol would yield salicylic acid.
Historical references
Reimer and Tiemann published several papers on the subject.
The early work has been reviewed.
References
Addition reactions
Carbon-carbon bond forming reactions
Name reactions
Formylation reactions | Reimer–Tiemann reaction | [
"Chemistry"
] | 756 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
3,094,946 | https://en.wikipedia.org/wiki/Tate%20twist | In number theory and algebraic geometry, the Tate twist, named after John Tate, is an operation on Galois modules.
For example, if K is a field, GK is its absolute Galois group, and ρ : GK → AutQp(V) is a representation of GK on a finite-dimensional vector space V over the field Qp of p-adic numbers, then the Tate twist of V, denoted V(1), is the representation on the tensor product V⊗Qp(1), where Qp(1) is the p-adic cyclotomic character (i.e. the Tate module of the group of roots of unity in the separable closure Ks of K). More generally, if m is a positive integer, the mth Tate twist of V, denoted V(m), is the tensor product of V with the m-fold tensor product of Qp(1). Denoting by Qp(−1) the dual representation of Qp(1), the −mth Tate twist of V can be defined as the tensor product of V with the m-fold tensor product of Qp(−1).
References
Number theory
Algebraic geometry | Tate twist | [
"Mathematics"
] | 233 | [
"Fields of abstract algebra",
"Discrete mathematics",
"Number theory",
"Algebraic geometry"
] |
3,095,332 | https://en.wikipedia.org/wiki/Explicit%20parallelism | In computer programming, explicit parallelism is the representation of concurrent computations using primitives in the form of operators, function calls or special-purpose directives. Most parallel primitives are related to process synchronization, communication and process partitioning. As they seldom contribute to actually carry out the intended computation of the program but, rather, structure it, their computational cost is often considered as overhead.
The advantage of explicit parallel programming is increased programmer control over the computation. A skilled parallel programmer may take advantage of explicit parallelism to produce efficient code for a given target computation environment. However, programming with explicit parallelism is often difficult, especially for non-computing specialists, because of the extra work and skill involved in developing it.
In some instances, explicit parallelism may be avoided with the use of an optimizing compiler or runtime that automatically deduces the parallelism inherent to computations, known as implicit parallelism.
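As a hedged illustration of what such primitives look like in practice, the sketch below uses Python's multiprocessing module (one possible choice; the languages listed below expose analogous primitives): process creation, data partitioning, communication and synchronization are all written out explicitly by the programmer, and none of them perform the actual computation.

```python
from multiprocessing import Process, Queue

def worker(chunk, out):
    out.put(sum(x * x for x in chunk))          # the actual computation

if __name__ == "__main__":
    data = list(range(1_000_000))
    out = Queue()
    chunks = [data[i::4] for i in range(4)]     # explicit partitioning
    procs = [Process(target=worker, args=(c, out)) for c in chunks]
    for p in procs:
        p.start()                               # explicit process creation
    total = sum(out.get() for _ in procs)       # explicit communication
    for p in procs:
        p.join()                                # explicit synchronization
    print(total == sum(x * x for x in data))    # True
```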
Programming languages that support explicit parallelism
Some of the programming languages that support explicit parallelism are:
Ada
Ease
Erlang
Java
JavaSpaces
Message Passing Interface
Occam
Parallel Virtual Machine
References
Parallel computing | Explicit parallelism | [
"Technology"
] | 229 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
3,095,525 | https://en.wikipedia.org/wiki/Noether%27s%20theorem%20on%20rationality%20for%20surfaces | In mathematics, Noether's theorem on rationality for surfaces is a classical result of Max Noether on complex algebraic surfaces, giving a criterion for a rational surface. Let S be an algebraic surface that is non-singular and projective. Suppose there is a morphism φ from S to the projective line, with general fibre also a projective line. Then the theorem states that S is rational.
See also
Hirzebruch surface
List of complex and algebraic surfaces
References
Castelnuovo’s Theorem
Notes
Algebraic surfaces
Theorems in algebraic geometry | Noether's theorem on rationality for surfaces | [
"Mathematics"
] | 114 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
3,095,831 | https://en.wikipedia.org/wiki/Axiality%20and%20rhombicity | In physics and mathematics, axiality and rhombicity are two characteristics of a symmetric second-rank tensor in three-dimensional Euclidean space, describing its directional asymmetry.
Let A denote a second-rank tensor in R3, which can be represented by a 3-by-3 matrix. We assume that A is symmetric. This implies that A has three real eigenvalues, which we denote by $\lambda_1$, $\lambda_2$ and $\lambda_3$. We assume that they are ordered such that $\lambda_1 \le \lambda_2 \le \lambda_3$.
The axiality of A is defined by
$$\Delta = 2\lambda_3 - \left( \lambda_1 + \lambda_2 \right)\,.$$
The rhombicity is the difference between the smallest and the second-smallest eigenvalue:
$$\delta = \lambda_2 - \lambda_1\,.$$
Other definitions of axiality and rhombicity differ from the ones given above by constant factors which depend on the context. For example, when using them as parameters in the irreducible spherical tensor expansion, it is most convenient to divide the above definition of axiality by and that of rhombicity by .
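A short numerical sketch, using the eigenvalue conventions assumed above (ascending order, axiality 2λ3 − (λ1 + λ2), rhombicity λ2 − λ1) with an arbitrary example tensor:

```python
import numpy as np

# Axiality and rhombicity of a symmetric second-rank tensor.
A = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.1],
              [0.0, 0.1, 4.0]])

l1, l2, l3 = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
axiality = 2 * l3 - (l1 + l2)
rhombicity = l2 - l1

print(axiality, rhombicity)
```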
Applications
The description of physical interactions in terms of axiality and rhombicity is frequently encountered in spin dynamics and, in particular, in spin relaxation theory, where many traceless bilinear interaction Hamiltonians, having the (eigenframe) form
(hats denote spin projection operators) may be conveniently rotated using rank 2 irreducible spherical tensor operators:
where are Wigner functions, are Euler angles, and the expressions for the rank 2 irreducible spherical tensor operators are:
Defining Hamiltonian rotations in this way (axiality, rhombicity, three angles) significantly simplifies calculations, since the properties of Wigner functions are well understood.
References
D.M. Brink and G.R. Satchler, Angular momentum, 3rd edition, 1993, Oxford: Clarendon Press.
D.A. Varshalovich, A.N. Moskalev, V.K. Khersonski, Quantum theory of angular momentum: irreducible tensors, spherical harmonics, vector coupling coefficients, 3nj symbols, 1988, Singapore: World Scientific Publications.
I. Kuprov, N. Wagner-Rundell, P.J. Hore, J. Magn. Reson., 2007 (184) 196-206. Article
Tensors | Axiality and rhombicity | [
"Engineering"
] | 454 | [
"Tensors"
] |
3,095,897 | https://en.wikipedia.org/wiki/Celebratory%20gunfire | Celebratory gunfire is the shooting of a firearm into the air in celebration. Notable incidents have occurred throughout the world, even in countries where the practice is illegal.
Common occasions for celebratory gunfire include New Year's Day as well as religious holidays. The practice sometimes results in random death and injury from stray bullets. Property damage is another result of celebratory gunfire; shattered windows and damaged roofs are sometimes found after such celebrations.
Injuries
Depending on the angle at which it is fired, the speed of a falling bullet changes. A bullet fired nearly vertically will lose the most speed, usually falling at terminal velocity, which is much lower than its muzzle velocity. Despite this, people can still be injured or killed by bullets falling at this speed. If a bullet is fired at other angles, it maintains its angular ballistic trajectory and is far less likely to engage in tumbling motion; it therefore travels at speeds much higher than a bullet in free fall. Dense, small bullets achieve higher terminal velocities than lighter, larger bullets.
Between 1918 and 1920, United States Army Ordnance Corps officer Julian Hatcher conducted experiments to determine the velocity of falling bullets, and calculated that .30 caliber rounds reach terminal velocities of 90 m/s (300 feet per second or about 200 miles per hour). According to computer models, 9 mm handgun rounds reach terminal velocities of between 45 and 75 m/s (150 and 250 feet per second or 100 and 170 miles per hour). A bullet traveling at only 61 m/s (200 feet per second or 135 miles per hour) to 100 m/s (330 feet per second or 225 miles per hour) can penetrate human skin.
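For a rough sense of where figures like Hatcher's 90 m/s come from, terminal velocity can be estimated from the drag balance mg = ½ρCdAv². The sketch below uses illustrative values for a tumbling .30 caliber bullet; in particular, the drag coefficient is an assumption, not measured data:

```python
import math

m = 0.0097       # bullet mass in kg (~150 grain, assumed)
d = 0.00782      # bullet diameter in m (.30 caliber)
Cd = 0.4         # effective drag coefficient (assumed, tumbling bullet)
rho = 1.225      # air density in kg/m^3 at sea level
g = 9.81         # gravitational acceleration in m/s^2

A = math.pi * (d / 2) ** 2                   # cross-sectional area
v_t = math.sqrt(2 * m * g / (rho * Cd * A))  # terminal velocity

print(round(v_t))   # ~90 m/s, the order of Hatcher's measured value
```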
Any gunfire can damage hearing of those nearby without ear protection, and blank rounds fired in an unsafe direction can cause injuries or death from muzzle blast at close range, as in the case of actor Jon-Erik Hexum. Birdshot fired from a shotgun disperses and loses energy much faster than slugs, buckshot, or bullets fired from rifles and pistols. Although potentially lethal for many yards at a low angle, fired at a high angle, the main risk of injury from falling "shot rain" is shot landing in the eyes and causing scratches, particularly to persons looking upwards without eye protection.
A Morbidity and Mortality Weekly Report by the U.S. Centers for Disease Control and Prevention (CDC) found that 80% of celebratory gunfire-related injuries in Puerto Rico, on New Year's Eve 2003 were to the head, feet, and shoulders. In Puerto Rico, about seven people have died from celebratory gunfire on New Year's Eve in the last 20 years. The last one was in 2012. Between the years 1985 and 1992, doctors at the King/Drew Medical Center in Los Angeles, California, treated some 118 people for random falling-bullet injuries. Thirty-eight of them died.
In 2005, the International Action Network on Small Arms (IANSA) ran education campaigns on the dangers of celebratory gunfire in Serbia and Montenegro.
In Serbia, the campaign slogan was "every bullet that is fired up must come down."
Trends
Philippine Health Secretary Francisco Duque III noted the drop in stray bullet injuries, in that country, during the 2005 year-end holiday period – from 33 cases to 19.
The number of complaints regarding random shooting in Dallas, Texas, on New Year's Eve declined from approximately 1,000 in 1999 to 800 each in 2001 and 2002.
In early 2008, increased partisanship in Lebanon led to the practice of firing celebratory gunfire in support of politicians appearing on local television, leading to multiple deaths and to calls from these leaders to end the practice.
Notable incidents
Europe
On January 7, 2008, at about 9:30 pm, a Montenegro Airlines Fokker 100 (4O-AOK) was shot at while landing at Podgorica Airport. A routine inspection of the aircraft led to the discovery of a bullet hole in the aircraft's tail. The aircraft was carrying 20 passengers, but no one was injured. The reason for the incident is unknown; however, reports indicate that it may have been an inadvertent result of guns being fired during celebrations for Orthodox Christmas.
January 1, 2005: A stray bullet hit a young girl during New Year celebrations in the central square of downtown Skopje, North Macedonia. She died two days later. This incident led to the 2006 IANSA awareness campaign.
October 12, 2003: Wedding guests in Belgrade, Serbia mistakenly shot down a small aircraft.
Middle East
December 12, 2024: At least six Syrian civilians were reported to be killed as a result of celebratory gunfire following the fall of the Assad regime.
September 4, 2021: At least 17 were killed and 41 were injured in Kabul, Afghanistan, by Taliban militants celebrating their takeover of the Panjshir Valley during the 2021 Taliban offensive.
January 2, 2021: Several parked Middle East Airlines Airbus A320neo airliners at Rafic Hariri International Airport were damaged by falling bullets from celebratory gunfire in Beirut, Lebanon, with an additional death reported in the form of a Syrian refugee who was shot in the head by a stray bullet.
April 6, 2014: A 20-year-old pregnant mother of two, Wadia Baidawi, was struck in the head and killed by a stray bullet from her neighbor's wedding in Sidon, Lebanon.
November 21, 2012: Following a cease-fire ending fighting with Israel, celebratory gunfire in the Gaza Strip killed a man and wounded three others.
October 30, 2012: Twenty-three people were electrocuted after celebratory gunfire brought down a power cable during a wedding party in eastern Saudi Arabia.
August 2010: two people were killed and 13 were injured in Jordan, as part of the yearly celebration of the announcement of the result of Tawjihi.
July 29, 2007: At least four people were killed and 17 others wounded by celebratory gunfire in the capital city of Baghdad, Iraq, following the victory of the national football team in the AFC Asian Cup. Celebratory gunfire occurred despite warnings issued by Iraqi security forces and the country's leading Shiite cleric, Grand Ayatollah Ali al-Sistani, who forbade the gunfire with a religious fatwā.
July 22, 2003: More than 20 people were killed in Iraq from celebratory gunfire following the deaths of Saddam Hussein's sons Uday and Qusay in 2003.
South America
December 25, 2012: A stray bullet believed to be related to Christmas celebrations killed a three-year-old girl in Asunción, Paraguay.
South Asia
November 16, 2016: A self-proclaimed godwoman and her private guards went on a celebratory shooting spree at a wedding in Haryana's Karnal town in India, killing the groom's aunt and leaving three of his relatives critically wounded.
June 6, 2013: a 42-year-old Pakistani woman was killed by a stray bullet from celebratory gunfire. The gunfire was attributed to celebrations for the election of Pakistan's prime minister Newaz Sharif. Her 19-year-old niece was also hit, and rushed to hospital in critical condition.
February 25, 2007: Five people were killed by stray bullets fired at a kite festival in Lahore, Pakistan, including a six-year-old schoolboy who was struck in the head near his home in the city's Mazang area.
December 1859: An autopsy showed that a native in India, who suddenly fell dead for no apparent reason, was mortally wounded from a bullet fired from a distance too far for the shot to be heard. The falling bullet had sufficient energy to pass through the victim's shoulder, a rib, a lung, his heart and his diaphragm.
Southeast Asia
On December 26, 1819, several Bugis people fired celebratory gunshots to mark the wedding of their chief Arung Belawa in Tanjungpinang, Penyengat Island (present-day Indonesia). However, Dutch officials led by recently elected Resident G.E. Königsdorffer detained some of these men, which escalated to the murder of five men including a chief named Raja Ronggik; his death led to an all-out war between the Bugis and the Dutch in January 1820, with many Bugis civilians fleeing to Singapore in the ensuing aftermath.
United States
December 31, 2023: 3-year-old Brayden Smith was with his family New Year's Eve when a bullet passed through their Memphis apartment window, striking the toddler during what police believe was "celebratory gunfire." Brayden was rushed to the hospital, but died around 6 am Jan. 3.
January 1, 2023: Two people, a 40-year-old man and 35-year-old man, died after celebratory gunfire was discharged at a party in Lawrence Township, Michigan. A 62-year-old man was arrested at the scene.
December 31, 2021, and January 1, 2022: Multiple people in Durham, North Carolina, were struck by celebratory bullets, including one woman who was killed. In Canton, Ohio a man firing celebratory bullets was shot and killed through his wooden fence by police.
January 1, 2020: A patron who was eating dinner at The Big Catch restaurant in St. Petersburg, Florida, on New Year's Day was struck by a celebratory bullet.
December 31, 2019: A 61-year-old Texas nurse, Philippa Ashford, was shot to death on New Year's Eve, likely by celebratory gunfire, according to police.
January 1, 2017: Armando Martinez, a Texas state Representative, was wounded in the head by a stray bullet during a New Year's celebration.
January 1, 2015: A 43-year-old man, Javier Suarez Rivera, was struck in his head and killed while watching fireworks with his family in Houston.
July 4, 2013: A 7-year-old boy, Brendon Mackey, was struck in the top of his head and killed while walking with his father shortly before 9 p.m. amid a large crowd prior to the fireworks display over the Swift Creek Reservoir, outside Richmond, Virginia.
January 1, 2013: A 10-year-old girl, Aaliyah Boyer, collapsed after being struck in the back of the head while watching the neighborhood fireworks in Elkton, Maryland. She died two days later of her injuries.
July 4, 2012: A 34-year-old woman, Michelle Packard, was struck in the head and killed while watching the fireworks with her family. The police believe the shot could have come from a mile away.
January 1, 2010: A four-year-old boy, Marquel Peters, was struck by a bullet and killed inside his church The Church of God of Prophecy in Decatur, GA. It is presumed the bullet may have penetrated the roof of the church around 12:20AM.
In March 2008, Chef Paul Prudhomme was grazed by a .22-caliber stray bullet while catering the Zurich Classic of New Orleans golf tournament. He at first thought a bee had stung his arm, required no serious medical attention, and within five minutes was back to cooking for the golf tournament. It was thought to have been a falling bullet.
December 28, 2005: A 23-year-old U.S. Army private on leave after basic training fired a 9mm pistol into the air in celebration with friends; according to police, one of the bullets came through a fifth-floor apartment window in the New York City borough of Queens, striking a 28-year-old mother of two in the eye. Her husband found her lifeless body moments later. The shooter had been drinking the night before and turned himself in to police the next morning when he heard the news. He was charged with second-degree manslaughter and weapons-related crimes, and was later found guilty and sentenced to 4 to 12 years in prison.
June 14, 1999: Arizona, A 14-year-old girl, Shannon Smith, was struck on the top of her head by a bullet and killed while in the backyard of her home. This incident resulted in Arizona enacting "Shannon's Law" in 2000, that made the discharge of a firearm into the air illegal.
January 1, 1999: Joseph Jaskolka of Wilmington was visiting family members in Philadelphia for New Years when he was struck in the head by a stray bullet as he walked with family members on Fernon Street headed to festivities on South 2nd Street in South Philadelphia. The incident is believed to be from gunfire celebrating the New Year. The bullet remains lodged in Jaskolka's brainstem and he was left paralyzed on the right side of his body due to his injury.
December 31, 1994: Amy Silberman, a tourist from Boston, was killed by a falling bullet from celebratory firing while walking on the Riverwalk in the French Quarter of New Orleans, Louisiana. The Police Department there has been striving to educate the public on the danger since then, frequently making arrests for firing into the air.
July 4, 1950: Bernard Doyle was killed in his seat while attending a New York Giants game at the Polo Grounds. The bullet was determined to have been fired by Robert Peebles, a juvenile, from an apartment building some distance away on Coogan's Bluff, presumably in celebration of Independence Day.
Penalties
In North Macedonia, a person found guilty of firing off a gun during celebrations faces a jail sentence of up to ten years.
In Italy, under art.703 of the Penal Code (Dangerous ignitions and explosions), a person found guilty of firing a gun without permission in an inhabited place or nearby, is sentenced to a fine of up to €103, while if they commit the act in a crowded place the sentence may go as high as up to a month in prison. The law also applies to fireworks, rockets, hot air balloons and, in general, "dangerous ignitions and explosions".
In Pakistan, section 144 of the Pakistan Penal Code is imposed to prevent aerial firing during celebrations if harm is caused, and an FIR may be registered against a person who does so. However, many cases of aerial firing go unreported.
In the United States, crime classifications vary from a misdemeanor to a felony in different states:
In Arizona, firing a gun into the air was raised from a misdemeanor to a felony by Shannon's law, in response to the death of a 14-year-old from a stray bullet in 1999.
In California, discharging a firearm into the air is a felony punishable by three years in state prison. If the stray bullet kills someone, the shooter can be charged with murder.
In Minnesota, it is illegal to discharge a firearm over a cemetery, or at or in a public transit vehicle. Additionally, local governments may regulate the discharge of a weapon within their jurisdictions.
In Ohio, discharging a firearm or a deadly weapon in a public place is classified as disorderly conduct, a Class B misdemeanor, punishable by up to 180 days in jail and a fine of up to $2,000.
In Texas, random gunfire is a Class A misdemeanor, punishable by a maximum one year in jail and $4,000 fine. Anyone who injures or kills someone with a stray bullet could face more serious felony charges.
In Wisconsin, criminal charges for this type of offense range from "endangering safety by use of a dangerous weapon" to "reckless homicide" in the event of a death, with penalties ranging from nine months to 25 years in prison.
Cultural references
The non-fiction U.S. cable television program MythBusters on the Discovery Channel covered this topic in Episode 50: "Bullets Fired Up" (original airdate: April 19, 2006). Special-effects experts Adam Savage and Jamie Hyneman conducted a series of experiments to answer the question: "Can celebratory gunfire kill when the bullets fall back to earth?"
Using pig carcasses, they worked out the terminal velocity of a falling bullet and had a mixed result, answering the question with all three of the show's possible outcomes: Confirmed, Plausible and Busted. They tested falling bullets by firing them from both a handgun and a rifle, by firing them from an air gun designed to propel them at terminal velocity, and by dropping them in the desert from an instrumented balloon.
They found that while bullets traveling on a perfectly vertical trajectory tumble on the way down, creating turbulence that reduces terminal velocity below that which would kill, it was very difficult to fire a bullet in this near-ideal vertical trajectory. In practice, bullets were likely to remain spin-stabilized on a ballistic trajectory and fall at a potentially lethal terminal velocity. They also verified cases of actual deaths from falling bullets.
See also
Stray bullet
Warning shot
Feu de joie
21-gun salute
References
Further reading
"Falling bullets: terminal velocities and penetration studies", by L. C. Haag, Wound Ballistics Conference, April 1994, Sacramento, California.
External links
UN Development Programme activity report
Can a bullet fired into the air kill someone when it comes down? The Straight Dope
Celebratory Gunfire: Good Idea or Not?
'Celebratory' shot kills groom
Spreading the Word About Dangers of Celebratory Gunfire: Henry Louis Adams
Minister Fighting to End Celebratory Gunfire
Traditions
Military life
Ballistics
Police culture | Celebratory gunfire | [
"Physics"
] | 3,595 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
3,095,929 | https://en.wikipedia.org/wiki/Fano%20variety | In algebraic geometry, a Fano variety, introduced by Gino Fano , is an algebraic variety that generalizes certain aspects of complete intersections of algebraic hypersurfaces whose sum of degrees is at most the total dimension of the ambient projective space. Such complete intersections have important applications in geometry and number theory, because they typically admit rational points, an elementary case of which is the Chevalley–Warning theorem. Fano varieties provide an abstract generalization of these basic examples for which rationality questions are often still tractable.
Formally, a Fano variety is a complete variety X whose anticanonical bundle KX* is ample. In this definition, one could assume that X is smooth over a field, but the minimal model program has also led to the study of Fano varieties with various types of singularities, such as terminal or klt singularities. Recently techniques in differential geometry have been applied to the study of Fano varieties over the complex numbers, and success has been found in constructing moduli spaces of Fano varieties and proving the existence of Kähler–Einstein metrics on them through the study of K-stability of Fano varieties.
Examples
The fundamental example of Fano varieties are the projective spaces: the anticanonical line bundle of Pn over a field k is O(n+1), which is very ample (over the complex numbers, its curvature is n+1 times the Fubini–Study symplectic form).
Let D be a smooth codimension-1 subvariety in Pn. The adjunction formula implies that KD = (KX + D)|D = (−(n+1)H + deg(D)H)|D, where H is the class of a hyperplane. The hypersurface D is therefore Fano if and only if deg(D) < n+1.
More generally, a smooth complete intersection of hypersurfaces in n-dimensional projective space is Fano if and only if the sum of their degrees is at most n.
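The numerical criterion just stated is simple enough to encode directly; the toy helper below (purely illustrative) checks it for a few standard examples:

```python
# A smooth complete intersection of hypersurfaces of degrees
# d_1, ..., d_k in P^n is Fano if and only if d_1 + ... + d_k <= n.
def is_fano_complete_intersection(degrees, n):
    return sum(degrees) <= n

print(is_fano_complete_intersection([3], 4))     # cubic 3-fold in P^4: True
print(is_fano_complete_intersection([2, 2], 5))  # two quadrics in P^5: True
print(is_fano_complete_intersection([5], 4))     # quintic 3-fold in P^4: False
```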
Weighted projective space P(a0,...,an) is a singular (klt) Fano variety. This is the projective scheme associated to a graded polynomial ring whose generators have degrees a0,...,an. If this is well formed, in the sense that no n of the numbers a have a common factor greater than 1, then any complete intersection of hypersurfaces such that the sum of their degrees is less than a0+...+an is a Fano variety.
Every projective variety in characteristic zero that is homogeneous under a linear algebraic group is Fano.
Some properties
The existence of some ample line bundle on X is equivalent to X being a projective variety, so a Fano variety is always projective. For a Fano variety X over the complex numbers, the Kodaira vanishing theorem implies that the sheaf cohomology groups Hi(X, OX) of the structure sheaf vanish for i > 0. In particular, the Todd genus automatically equals 1. The i = 1 and i = 2 cases of this vanishing statement also tell us that the first Chern class induces an isomorphism Pic(X) ≅ H2(X, Z).
By Yau's solution of the Calabi conjecture, a smooth complex variety admits Kähler metrics of positive
Ricci curvature if and only if it is Fano. Myers' theorem therefore tells us that the universal cover of a Fano manifold is compact, and so can only be a finite covering. However, we have just seen that the Todd genus of a Fano manifold must equal 1. Since this would also apply to the manifold's universal cover, and since the Todd genus is multiplicative under finite covers, it follows that any Fano manifold is simply connected.
A much easier fact is that every Fano variety has Kodaira dimension −∞.
Campana and Kollár–Miyaoka–Mori showed that a smooth Fano variety over an algebraically closed field is rationally chain connected; that is, any two closed points can be connected by a chain of rational curves.
Kollár–Miyaoka–Mori also showed that the smooth Fano varieties of a given dimension over an algebraically closed field of characteristic zero form a bounded family, meaning that they are classified by the points of finitely many algebraic varieties. In particular, there are only finitely many deformation classes of Fano varieties of each dimension. In this sense, Fano varieties are much more special than other classes of varieties such as varieties of general type.
Classification in small dimensions
The following discussion concerns smooth Fano varieties over the complex numbers.
A Fano curve is isomorphic to the projective line.
A Fano surface is also called a del Pezzo surface. Every del Pezzo surface is isomorphic to either P1 × P1 or to the projective plane blown up in at most eight points, which must be in general position. As a result, they are all rational.
In dimension 3, there are smooth complex Fano varieties which are not rational, for example cubic 3-folds in P4 (by Clemens–Griffiths) and quartic 3-folds in P4 (by Iskovskikh–Manin). Iskovskikh classified the smooth Fano 3-folds with second Betti number 1 into 17 classes, and Mori and Mukai classified the smooth ones with second Betti number at least 2, finding 88 deformation classes. A detailed summary of the classification of smooth Fano 3-folds is given in .
See also
Periodic table of shapes a project to classify all Fano varieties in three, four and five dimensions.
Notes
External links
Fanography - A tool to visually study the classification of threedimensional Fano varieties.
References
Algebraic geometry
3-folds | Fano variety | [
"Mathematics"
] | 1,156 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
3,096,030 | https://en.wikipedia.org/wiki/Tolman%20surface%20brightness%20test | The Tolman surface brightness test is one out of six cosmological tests that were conceived in the 1930s to check the viability of and compare new cosmological models. Tolman's test compares the surface brightness of galaxies as a function of their redshift (measured as z). Such a comparison was first proposed in 1930 by Richard C. Tolman as a test of whether the universe is expanding or static. It is a unique test of cosmology, as it is independent of dark energy, dark matter and Hubble constant parameters, testing purely for whether Cosmological Redshift is caused by an expanding universe or not.
In a simple (static and flat) universe, the light received from an object drops proportional to the square of its distance and the apparent area of the object also drops proportional to the square of the distance, so the surface brightness (light received per surface area) would be constant, independent of the distance. In an expanding universe, however, there are two effects that change this relation. First, the rate at which photons are received is reduced because each photon has to travel a little farther than the one before. Second, the energy of each photon observed is reduced by the redshift. At the same time, distant objects appear larger than they really are because the photons observed were emitted at a time when the object was closer. Adding these effects together, the surface brightness in a simple expanding universe (flat geometry and uniform expansion over the range of redshifts observed) should decrease with the fourth power of (1 + z).
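Counting the factors makes the prediction easy to tabulate: one power of (1 + z) for the photon arrival rate, one for the photon energy, and two for the enlarged apparent area. A short sketch of the predicted bolometric dimming relative to a static universe:

```python
import numpy as np

# Tolman dimming: bolometric surface brightness scales as (1+z)^-4 in
# an expanding universe, versus constant in a static one.
z = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
dimming = (1 + z) ** -4

print(np.round(dimming, 4))   # fraction of the z = 0 surface brightness
```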
One of the earliest and most comprehensive studies was published in 1996, as observational requirements had limited the practicality of the test until then. This study found consistency with an expanding universe, although the authors noted qualifications to that conclusion.
A later paper reviewing this study removed the assumed expansion cosmology from the surface-brightness calculation, to make for a fair test, and found that the 1996 results, once this correction was made, did not rule out a static universe.
To date, the most thorough investigation of the relationship between surface brightness and redshift was carried out using the 10 m Keck telescope to measure nearly a thousand galaxies' redshifts and the 2.4 m Hubble Space Telescope to measure those galaxies' surface brightness. The exponent found is not 4 as expected in the simplest expanding model, but 2.6 or 3.4, depending on the frequency band; the authors attribute the shortfall from 4 to the evolution of the galaxies' luminosities.
Some subsequent work has pointed out that the analysis tested one possible static cosmology (analogous to Einstein–de Sitter), and that static models with different angular size–distance relationships can pass this test. The predicted difference between the static and expanding cases diverges dramatically towards higher redshifts; however, accounting for galaxy evolution becomes increasingly uncertain there. The broadest test done to date, out to z = 5, found results consistent with a static universe, but was unable to rule out expansion, as it tested only a single model of galaxy size evolution. Static tired-light models remain in conflict with observations of supernovae, as these models do not predict cosmological time dilation.
See also
Source counts
Tired light
Time dilation
Footnotes
Physical cosmology | Tolman surface brightness test | [
"Physics",
"Astronomy"
] | 660 | [
"Astrophysics",
"Theoretical physics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
3,096,137 | https://en.wikipedia.org/wiki/The%20Number%20Devil | The Number Devil: A Mathematical Adventure (German: Der Zahlenteufel) is a book for children and young adults that explores mathematics. It was originally written in 1997 in German by Hans Magnus Enzensberger and illustrated by Rotraut Susanne Berner. The book follows a young boy named Robert, who is taught mathematics by a sly "number devil" called Teplotaxl over the course of twelve dreams.
The book was met with mostly positive reviews from critics, approving its description of math while praising its simplicity. Its colorful use of fictional mathematical terms and its creative descriptions of concepts have made it a suggested book for both children and adults troubled with math. The Number Devil was a bestseller in Europe, and has been translated into English by Michael Henry Heim.
Plot
Robert is a young boy who suffers from mathematical anxiety due to his boredom in school. His mother is Mrs. Wilson. He also experiences recurring dreams—including falling down an endless slide or being eaten by a giant fish—but is interrupted from this sleep habit one night by a small devil creature who introduces himself as the Number Devil. Although there are many Number Devils (from Number Heaven), Robert only knows him as the Number Devil before learning of his actual name, Teplotaxl, later in the story.
Over the course of twelve dreams, the Number Devil teaches Robert mathematical principles. On the first night, the Number Devil appears to Robert in an oversized world and introduces the number one. The next night, the Number Devil emerges in a forest of trees shaped like "ones" and explains the necessity of the number zero, negative numbers, and introduces hopping, a fictional term to describe exponentiation. On the third night, the Number Devil brings Robert to a cave and reveals how prima-donna numbers (prime numbers) can only be divided by themselves and one without a remainder. Later, on the fourth night, the Number Devil teaches Robert about rutabagas, another fictional term to depict square roots, at a beach.
For a time after the fourth night, Robert cannot find the Number Devil in his dreams; later, however, on the fifth night, Robert finds himself at a desert where the Number Devil teaches him about triangular numbers through the use of coconuts. On the sixth night, the Number Devil teaches Robert about the natural occurrence of Fibonacci numbers, which the Number Devil shortens to Bonacci numbers, by counting brown and white rabbits as they reproduce multiple times. By this dream, Robert's mother has noticed a visible change in Robert's mathematical interest, and Robert begins going to sleep earlier to encounter the Number Devil. The seventh night brings Robert to a bare, white room, where the Number Devil presents Pascal's triangle and the patterns that the triangular array displays. On the eighth night, Robert is brought to his classroom at school. The Number Devil arranges Robert's classmates in multiple ways, teaches him about permutations, and what the Number Devil calls vroom numbers (factorials).
On the ninth night, Robert dreams he is in bed, suffering from the flu, when the Number Devil appears next to him. The Number Devil teaches Robert about natural numbers, which the Number Devil calls garden-variety numbers, the unusual characteristics of infinity, and infinite series. On the tenth night, Robert finds himself at the North Pole, where the Number Devil introduces irrational numbers (unreasonable numbers), as well as aspects of Euclidean geometry, such as vertices (dots) and edges (lines). By the eleventh night, Robert has shown considerably increased interest in mathematics, but questions its validity, to which the Number Devil introduces the concept of mathematical proofs, ending with the Number Devil showing Robert a complicated proof of basic arithmetic. On the twelfth night, Robert and the Number Devil receive an invitation (which names the Number Devil as Teplotaxl) to Number Heaven, as Robert's time with the Number Devil has finished. At Number Heaven, Robert learns of imaginary numbers, which Teplotaxl describes as imaginative numbers, as well as the Klein bottle. Walking through Number Heaven, Teplotaxl introduces Robert to various famous mathematicians, such as Fibonacci, whom Teplotaxl calls Bonacci, and Georg Cantor, or Professor Singer. The book ends with Robert in class using his newfound mathematical knowledge.
History
Enzensberger fostered a passion for mathematics and numbers, although he was not a mathematician by trade. In 1998, he delivered a speech at the International Congress of Mathematicians criticizing the isolation of mathematics from popular culture. The Number Devil was ultimately written at the suggestion of Enzensberger's eleven-year-old daughter Theresia. Because he was displeased with the way mathematics was taught to students at school, the German author decided to pen a book that teaches mathematics in an innovative way. German illustrator Rotraut Susanne Berner provided many full-page illustrations, as well as smaller drawings, for the book. The Number Devil was first published in German in 1997. The Number Devil has been noted for its unorthodox abandonment of standard notation; instead, Enzensberger created a variety of fictional terms to help describe mathematical concepts. For instance, exponentiation takes the term hopping, and the fictional term unreasonable numbers was coined for irrational numbers. The UCLA Professor of Slavic Languages Michael Henry Heim translated the book from German to English. The translation was particularly difficult in that it required special attention to the numerical aspect of the book. He was also challenged by the necessity to use simple English words appropriate for the target audience of The Number Devil—that is, children aged eleven to fourteen. The mathematics book was then published in English in 1998.
The book was a hit across Europe, becoming a best seller in at least Spain, Germany, the Netherlands, and Italy. The Number Devil also had considerable success in Japan. After the success of The Number Devil, Enzensberger wrote a follow-up, called Where Were You, Robert?, a children's book focusing on history rather than math. The German author has since stated he will not write any more young adult's books, but instead direct his effort towards poetry. Viva Media later published an educational computer game, similarly titled The Number Devil: A Mathematical Adventure, based on the book. An audiobook was also released for The Number Devil.
Reception
The Number Devil received mostly positive reviews from critics. Mathematics professor John Allen Paulos of Temple University wrote an article for The New York Review of Books, praising The Number Devil as a "charming numerical fairy tale for children." Likewise, mathematics writer Martin Gardner of the Los Angeles Times applauded Enzensberger's introduction of mathematics "in such an entertaining way." In a book review for The Baltimore Sun, Michael Pakenham approved of the book's simplicity, writing, "it's not incomprehensible. Not for a minute."
Not all reviews were positive, however. The American Mathematical Society's Deborah Loewenberg Ball and Hyman Bass reviewed the book from a mathematical perspective. Although they praised its "attractive and imaginative fantasy," the two mathematicians found several issues. Ball and Bass were concerned with The Number Devil's negative characterization of math teachers, its apparent presentation of mathematics as magical rather than factual, and a number of other contentions. Ted Dewan, writing for the Times Educational Supplement, believed it to be "far more compelling than a standard text," but found it less adventurous than he had hoped. He also criticized its use in mathematics education, stating "I suspect this is the sort of book that well-meaning adults will mistakenly thrust upon children because it will be good for them."
References
1997 fiction books
1997 children's books
Mathematics fiction books
Fictional demons
Fiction about dreams
Henry Holt and Company books
Works by Hans Magnus Enzensberger | The Number Devil | [
"Mathematics"
] | 1,588 | [
"Recreational mathematics",
"Mathematics fiction books"
] |
3,096,183 | https://en.wikipedia.org/wiki/List%20of%20large%20cardinal%20properties | This page includes a list of large cardinal properties in the mathematical field of set theory. It is arranged roughly in order of the consistency strength of the axiom asserting the existence of cardinals with the given property. Existence of a cardinal number κ of a given type implies the existence of cardinals of most of the types listed above that type, and for most listed cardinal descriptions φ of lesser consistency strength, Vκ satisfies "there is an unbounded class of cardinals satisfying φ".
The following table usually arranges cardinals in order of consistency strength, with size of the cardinal used as a tiebreaker. In a few cases (such as strongly compact cardinals) the exact consistency strength is not known and the table uses the current best guess.
"Small" cardinals: 0, 1, 2, ..., ,..., , ... (see Aleph number)
worldly cardinals
weakly and strongly inaccessible, α-inaccessible, and hyper inaccessible cardinals
weakly and strongly Mahlo, α-Mahlo, and hyper Mahlo cardinals.
reflecting cardinals
weakly compact (= Π¹₁-indescribable), Πᵐₙ-indescribable, totally indescribable cardinals
λ-unfoldable, unfoldable cardinals, ν-indescribable cardinals and λ-shrewd, shrewd cardinals (not clear how these relate to each other).
ethereal cardinals, subtle cardinals
almost ineffable, ineffable, n-ineffable, totally ineffable cardinals
remarkable cardinals
α-Erdős cardinals (for countable α), 0# (not a cardinal), γ-iterable, γ-Erdős cardinals (for uncountable γ)
almost Ramsey, Jónsson, Rowbottom, Ramsey, ineffably Ramsey, completely Ramsey, strongly Ramsey, super Ramsey cardinals
measurable cardinals, 0†
λ-strong, strong cardinals, tall cardinals
Woodin, weakly hyper-Woodin, Shelah, hyper-Woodin cardinals
superstrong cardinals (=1-superstrong; for n-superstrong for n≥2 see further down.)
subcompact, strongly compact (Woodin < strongly compact ≤ supercompact), supercompact, hypercompact cardinals
η-extendible, extendible cardinals
Vopěnka cardinals, Shelah for supercompactness, high jump cardinals
n-superstrong (n≥2), n-almost huge, n-super almost huge, n-huge, n-superhuge cardinals (1-huge=huge, etc.)
Wholeness axiom, rank-into-rank (Axioms I3, I2, I1, and I0)
The following even stronger large cardinal properties are not consistent with the axiom of choice, but their existence has not yet been refuted in ZF alone (that is, without use of the axiom of choice).
Reinhardt cardinal, Berkeley cardinal
References
External links
Cantor's attic
some diagrams of large cardinal properties
Large cardinals | List of large cardinal properties | [
"Mathematics"
] | 640 | [
"Large cardinals",
"Mathematical objects",
"Infinity"
] |
3,096,353 | https://en.wikipedia.org/wiki/Zirconium%20alloys | Zirconium alloys are solid solutions of zirconium or other metals, a common subgroup having the trade mark Zircaloy. Zirconium has very low absorption cross-section of thermal neutrons, high hardness, ductility and corrosion resistance. One of the main uses of zirconium alloys is in nuclear technology, as cladding of fuel rods in nuclear reactors, especially water reactors. A typical composition of nuclear-grade zirconium alloys is more than 95 weight percent zirconium and less than 2% of tin, niobium, iron, chromium, nickel and other metals, which are added to improve mechanical properties and corrosion resistance.
The water cooling of reactor zirconium alloys raises the requirement for their resistance to oxidation-related nodular corrosion. Furthermore, the oxidative reaction of zirconium with water releases hydrogen gas, which partly diffuses into the alloy and forms zirconium hydrides. The hydrides are less dense and are weaker mechanically than the alloy; their formation results in blistering and cracking of the cladding – a phenomenon known as hydrogen embrittlement.
Production and properties
Commercial non-nuclear grade zirconium typically contains 1–5% of hafnium, whose neutron absorption cross-section is 600 times that of zirconium. Hafnium must therefore be almost entirely removed (reduced to < 0.02% of the alloy) for reactor applications.
Nuclear-grade zirconium alloys contain more than 95% Zr, and therefore most of their properties are similar to those of pure zirconium. The absorption cross section for thermal neutrons is 0.18 barn for zirconium, which is much lower than that for such common metals as iron (2.4 barn) and nickel (4.5 barn). The composition and the main applications of common reactor-grade alloys are summarized below. These alloys contain less than 0.3% of iron and chromium and 0.1–0.14% oxygen.
*ZIRLO stands for zirconium low oxidation.
Microstructure
At temperatures below 1100 K, zirconium alloys have a hexagonal close-packed (HCP) crystal structure. The microstructure, revealed by chemical attack, shows needle-like grains typical of a Widmanstätten pattern. Upon annealing below the phase transition temperature (α-Zr to β-Zr), the grains are equiaxed, with sizes varying from 3 to 5 μm.
Development
Zircaloy-1 was developed after zirconium was selected by Admiral H. G. Rickover as the structural material for high-flux-zone reactor components and cladding for fuel pellet tube bundles in prototype submarine reactors in the late 1940s. The choice was owing to a combination of strength, low neutron cross-section and corrosion resistance. Zircaloy-2 was developed inadvertently by melting Zircaloy-1 in a crucible previously used for stainless steel. Newer alloys are Ni-free, including Zircaloy-4, ZIRLO and M5 (with 1% niobium).
Oxidation of zirconium alloy
Zirconium alloys readily react with oxygen, forming a nanometer-thin passivation layer. The corrosion resistance of the alloys may degrade significantly when some impurities (e.g. more than 40 ppm of carbon or more than 300 ppm of nitrogen) are present. Corrosion resistance of zirconium alloys is enhanced by intentional development of thicker passivation layer of black lustrous zirconium oxide. Nitride coatings might also be used.
Whereas there is no consensus on whether zirconium and its alloys have the same oxidation rate, Zircaloys 2 and 4 do behave very similarly in this respect. Oxidation occurs at the same rate in air or in water and proceeds in ambient conditions or in high vacuum. A sub-micrometre-thick layer of zirconium dioxide rapidly forms on the surface and stops the further diffusion of oxygen to the bulk and hence the subsequent oxidation. The dependence of the oxidation rate R on temperature and pressure can be expressed as
R = 13.9 · P^(1/6) · exp(−1.47/(kB·T))
The oxidation rate R is here expressed in g/(cm²·s); P is the pressure in atmospheres, so that the factor P^(1/6) = 1 at ambient pressure; the activation energy is 1.47 eV; kB is the Boltzmann constant (8.617×10⁻⁵ eV/K) and T is the absolute temperature in kelvins.
Thus the oxidation rate R is about 10⁻²⁰ g per 1 m² area per second at 0 °C, 6 µg m⁻² s⁻¹ at 300 °C, 5.4 mg m⁻² s⁻¹ at 700 °C and 300 mg m⁻² s⁻¹ at 1000 °C. Whereas there is no clear threshold of oxidation, it becomes noticeable at macroscopic scales at temperatures of several hundred °C.
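A quick numerical check of this rate law (a sketch in plain Python; the rounded constants above mean the output reproduces only the order of magnitude of the quoted figures):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def oxidation_rate(T_kelvin, P_atm=1.0):
    """R = 13.9 * P^(1/6) * exp(-1.47 eV / (kB*T)), in g/(cm^2*s)."""
    return 13.9 * P_atm ** (1.0 / 6.0) * math.exp(-1.47 / (K_B * T_kelvin))

for t_c in (0, 300, 700, 1000):
    r = oxidation_rate(t_c + 273.15) * 1e4  # convert per cm^2 to per m^2
    print(f"{t_c:5d} °C: R ≈ {r:.2e} g m^-2 s^-1")
```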
Oxidation of zirconium by steam
One disadvantage of metallic zirconium shows in the case of a loss-of-coolant accident in a nuclear reactor: zirconium cladding rapidly reacts with water steam at high temperature (above roughly 1,500 K, or about 1,200 °C).
Oxidation of zirconium by water is accompanied by release of hydrogen gas. This oxidation is accelerated at high temperatures, e.g. inside a reactor core if the fuel assemblies are no longer completely covered by liquid water and insufficiently cooled. Metallic zirconium is then oxidized by the protons of water to form hydrogen gas according to the following redox reaction:
Zr + 2 H2O → ZrO2 + 2 H2
Zirconium cladding in the presence of deuterium oxide (D2O), which is frequently used as the moderator and coolant in pressurized heavy-water reactors such as the CANDU design, undergoes the same oxidation on exposure to deuterium oxide steam:
Zr + 2 D2O → ZrO2 + 2 D2
This exothermic reaction, although only occurring at high temperature, is similar to that of alkali metals (such as sodium or potassium) with water. It also closely resembles the anaerobic oxidation of iron by water (reaction used at high temperature by Antoine Lavoisier to produce hydrogen for his experiments).
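The stoichiometry of the reaction fixes the hydrogen yield: two moles of H2 per mole of Zr. A back-of-the-envelope sketch (Python; standard molar mass and ideal-gas molar volume; the 1 kg figure is illustrative):

```python
M_ZR = 91.224     # molar mass of zirconium, g/mol
V_MOLAR = 22.414  # molar volume of an ideal gas at 0 °C and 1 atm, L/mol

def hydrogen_yield(mass_zr_kg):
    """Moles and STP volume of H2 from full oxidation: Zr + 2 H2O -> ZrO2 + 2 H2."""
    mol_zr = mass_zr_kg * 1000.0 / M_ZR
    mol_h2 = 2.0 * mol_zr  # 2 mol H2 per mol Zr
    return mol_h2, mol_h2 * V_MOLAR

mol_h2, litres = hydrogen_yield(1.0)
print(f"1 kg of Zr yields {mol_h2:.1f} mol of H2, about {litres:.0f} L at STP")
```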
This reaction was responsible for a small hydrogen explosion, first observed inside the reactor building of the Three Mile Island Nuclear Generating Station in 1979, which did not damage the containment building. This same reaction occurred in boiling water reactors 1, 2 and 3 of the Fukushima Daiichi Nuclear Power Plant (Japan) after reactor cooling was interrupted by the earthquake and tsunami of March 11, 2011, leading to the Fukushima Daiichi nuclear disaster. Hydrogen gas was vented into the reactor maintenance halls, and the resulting explosive mixture of hydrogen with the oxygen of the air detonated. The explosions severely damaged external buildings and at least one containment building. The reaction also occurred during the Chernobyl accident, when steam from the reactor began to escape. Many water-cooled reactor containment buildings have catalyst-based passive autocatalytic recombiner units installed to convert hydrogen and oxygen into water at room temperature before the explosive limit is reached.
Formation of hydrides and hydrogen embrittlement
In the above oxidation scenario, 5–20% of the released hydrogen diffuses into the zirconium alloy cladding, forming zirconium hydrides. The hydrogen production process also mechanically weakens the rod cladding, because the hydrides have lower ductility and density than zirconium or its alloys, so blisters and cracks form upon hydrogen accumulation. This process is also known as hydrogen embrittlement. It has been reported that the concentration of hydrogen within hydrides also depends on the nucleation site of the precipitates.
In case of loss-of-coolant accident (LOCA) in a damaged nuclear reactor, hydrogen embrittlement accelerates the degradation of the zirconium alloy cladding of the fuel rods exposed to high temperature steam.
Deformation
Zirconium alloys are used in the nuclear industry as fuel rod cladding due to zirconium's high strength and low neutron absorption cross-section. The cladding can be subject to high-strain-rate loading conditions during forming and in the case of a reactor accident. In this context, the relationship between strain-rate-dependent mechanical properties, crystallographic texture and deformation modes, such as slip and deformation twinning, is of particular interest.
Slip
Zirconium has a hexagonal close-packed (HCP) crystal structure at room temperature, in which 〈𝑎〉 prismatic slip has the lowest critical resolved shear stress. 〈𝑎〉 slip is orthogonal to the unit cell 〈𝑐〉 axis and, therefore, cannot accommodate deformation along 〈𝑐〉. To make up the five independent deformation modes needed for arbitrary deformation in a polycrystal, secondary deformation systems such as twinning along pyramidal planes and 〈𝑐 + 𝑎〉 slip on either 1st-order or 2nd-order pyramidal planes play an important role in Zr polycrystal deformation. Therefore, the relative activity of deformation slip and twinning modes as a function of texture and strain rate is critical in understanding deformation behaviour. Anisotropic deformation during processing affects the texture of the final Zr part; understanding the relative predominance of deformation twinning and slip is important for texture control in processing and for predicting likely failure modes in service.
The known deformation systems in Zr are shown in Figure 1. The preferred room-temperature slip system with the lowest critical resolved shear stress (CRSS) in dilute Zr alloys is 〈𝑎〉 prismatic slip. The CRSS of 〈𝑎〉 prismatic slip increases with interstitial content, notably oxygen, carbon and nitrogen, and decreases with increasing temperature. 〈𝑎〉 basal slip in high-purity single-crystal Zr deformed at a low strain rate of 10⁻⁴ s⁻¹ was only seen at temperatures above 550 °C. At room temperature, basal slip is seen to occur in small amounts as a secondary slip system to 〈𝑎〉 prismatic slip, and is promoted during high-strain-rate loading. In room-temperature deformation studies of Zr, 〈𝑎〉 basal slip is sometimes ignored, and has been shown not to affect the macroscopic stress–strain response at room temperature. However, single-crystal room-temperature microcantilever tests in commercial-purity Zr show that 〈𝑎〉 basal slip has only 1.3 times higher CRSS than 〈𝑎〉 prismatic slip, which would imply significant activation in polycrystal deformation given a favourable stress state. 1st-order 〈𝑐 + 𝑎〉 pyramidal slip has a 3.5 times higher CRSS than 〈𝑎〉 prismatic slip. Slip on 2nd-order pyramidal planes is rarely seen in Zr alloys, but 〈𝑐 + 𝑎〉 1st-order pyramidal slip is commonly observed. Jensen and Backofen observed localised shear bands with 〈𝑐 + 𝑎〉 dislocations on {112̅4} planes during 〈𝑐〉-axis loading, which led to ductile fracture at room temperature, but this is not the slip plane, as 〈𝑐 + 𝑎〉 vectors do not lie in {112̅4} planes.
Deformation twinning
Deformation twinning produces a coordinated shear transformation in a crystalline material. Twin types can be classed as either contraction (C1, C2) or extension (T1, T2) twins, which accommodate strain either to contract or extend the <𝑐> axis of the hexagonal close-packed (HCP) unit cell. Twinning is crystallographically defined by its twin plane 𝑲𝟏, the mirror plane in the twin and parent material, and 𝜼𝟏, which is the twinning shear direction. Deformation twins in Zr are generally lenticular in shape, lengthening in the 𝜼𝟏 direction and thickening along the 𝑲𝟏 plane normal.
The twin plane, shear direction, and shear plane form the basis vectors of an orthogonal set. The axis-angle misorientation relationship between the parent and twin is a rotation of angle 𝜉 about the shear plane's normal direction 𝑷.
More generally, twinning can be described as a 180° rotation about an axis (the 𝜼𝟏 or 𝑲𝟏 normal direction), or a mirror reflection in a plane (the 𝑲𝟏 plane or the 𝜼𝟏 normal plane). The predominant twin type in zirconium is 𝑲𝟏 = {101̅2}, 𝜼𝟏 = <101̅1> (T1) twinning, and for this {101̅2}<101̅1> twin there is no distinction between the four transformations, as they are equivalent.
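The 180° rotation description can be made concrete with Rodrigues' rotation formula. A generic sketch (NumPy; the axis here is an arbitrary unit vector, not a specific Zr twin axis) verifying that a 180° rotation is an involution:

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    """Rodrigues' formula: R = I + sin(t)*K + (1 - cos(t))*K^2, K = [axis]x."""
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

R = rotation_matrix([1.0, 2.0, 2.0], np.pi)  # 180° about an arbitrary axis
assert np.allclose(R @ R, np.eye(3))         # applying it twice gives the identity
```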
Due to symmetry in the HCP crystal structure, six crystallographically equivalent twin variants exist for each type. Different twin variants of the same type in a grain cannot be distinguished by their axis-angle disorientation to the parent, which is the same for all variants of a twin type. Still, they can be distinguished using their absolute orientations with respect to the loading axis and, in some cases (depending on the sectioning plane), by the twin boundary trace.
The primary twin type formed in any sample depends on the strain state and rate, temperature and crystal orientation. In macroscopic samples, this is typically influenced strongly by the crystallographic texture, grain size, and competing deformation modes (i.e., dislocation slip), combined with the loading axis and direction. The T1 twin type dominates at room temperature and quasi-static strain rates. Twin types present at liquid nitrogen temperature are {112̅2}〈112̅3̅〉 (C1 twinning) and {101̅2}〈101̅1〉 (T1 twinning). Secondary twins of another type may form inside the primary twins as the crystal is reoriented with respect to the loading axis. The C2 compressive twin system {101̅1}〈1̅012〉 is only active at high temperatures, and is activated in preference to basal slip during deformation at 550 °C.
Influence of loading conditions on deformation modes
Kaschner and Gray observe that yield stress increases with increasing strain rate in the range 0.001 s⁻¹ to 3500 s⁻¹, and that the strain-rate sensitivity of the yield stress is higher when uniaxially compressing along texture components with predominantly prismatic planes than along those with predominantly basal planes. They conclude that the rate sensitivity of the flow stress is consistent with Peierls forces inhibiting dislocation motion in low-symmetry metals during slip-dominated deformation. This is valid in the early stages of room-temperature deformation, which in Zr is usually slip-dominated.
Samples compressed along texture components with predominantly prismatic planes yield at lower stresses than texture components with predominantly basal planes, consistent with the higher critical resolved shear stress for <𝑐 + 𝑎> pyramidal slip compared to <𝑎> prismatic slip. In a transmission electron microscopy study of room-temperature-deformed zirconium, McCabe et al. observed only <𝑎> dislocations in samples with prismatic texture, which were presumed to lie on prismatic planes. Both <𝑎> (prismatic) and <112̅3̅> <𝑐 + 𝑎> ({101̅1} pyramidal) slip were observed in samples with basal texture at room temperature, but only <𝑎> dislocations were observed in the same sample at liquid nitrogen temperature.
At quasi-static strain rates, McCabe et al. only observed T1 twinning in samples compressed along a plate direction with a prismatic texture component along the loading axis. They did not observe T1 twinning in samples compressed along basal textures to 25% strain. Kaschner and Gray observe that deformation at high strain rates (3000 s⁻¹) produces more twins than at quasi-static strain rates, but the twin types activated were not identified.
Capolungo et al. studied twinning as a function of grain orientation within a sample. They calculated a global Schmid factor using the macroscopic applied stress direction, thereby obtaining the resolved shear stress on any grain without considering local intergranular interactions, which may alter the stress state. They found that although the majority of twins occur in grains favourably oriented for twinning according to the global Schmid factor, around 30% of grains which were unfavourably oriented for twinning still contained twins. Likewise, the twins present were not always of the highest global Schmid factor variant, with only 60% twinning on the highest Schmid factor variant. This can be attributed to a strong dependence on the local stress conditions in grains or at grain boundaries, which are difficult to measure experimentally, particularly at high strain rates. Knezevic et al. fitted experimental data for high-purity polycrystalline Zr to a self-consistent viscoplastic model to study the rate and temperature sensitivity of the slip and twinning systems. They found that T1 twinning was the dominant twinning system at room temperature for strain rates between 10⁻³ and 10³ s⁻¹. Basal slip did not contribute to deformation below 400 °C. Twinning was found to be rate-insensitive, and the rate sensitivity of slip could explain changes in twinning behaviour as a function of strain rate.
T1 twinning occurs during both quasi-static and high-rate loading. T2 twinning occurs only at high rate loading. Similar area fractions of T1 and T2 twinning are activated at a high strain rate, but T2 twinning carries more plastic deformation due to its higher twinning shear. T1 twins tend to thicken with incoherent boundary traces in preference to lengthening along the twinning plane, and in some cases, nearly consume the entire parent grain. Several variants of T1 twins can nucleate in the same grain, and the twin tips are pinched at grain interiors. On the other hand, T2 twins preferentially lengthen instead of thicken, and tend to nucleate in parallel rows of the same variant extending from boundary to boundary.
For commercially pure zirconium (CP-Zr) of 97.0% purity, basal, 〈𝑎〉 pyramidal, and 〈𝑐 + 𝑎〉 pyramidal slip systems dominate room-temperature compression along the normal direction (ND) at both quasi-static and high strain rates, which is not seen in high-purity polycrystalline and single-crystal Zr. In 〈𝑎〉-axis transverse direction (TD) deformation, 〈𝑎〉 prismatic and 〈𝑎〉 pyramidal slip systems are dominant. 〈𝑎〉 pyramidal and basal slip systems are more prevalent than currently reported in the literature, though this may be because conventional analysis routes do not easily identify 〈𝑎〉 pyramidal slip. Basal slip systems are promoted, and 〈𝑎〉 prismatic slip is suppressed, at high strain rate (HR) compared to quasi-static strain rate (QS) loading. This is independent of loading-axis texture (ND/TD).
Applications
Zirconium alloys are corrosion resistant and biocompatible, and therefore can be used for body implants. In one particular application, a Zr-2.5Nb alloy is formed into a knee or hip implant and then oxidized to produce a hard ceramic surface for use in bearing against a polyethylene component. This oxidized zirconium alloy material provides the beneficial surface properties of a ceramic (reduced friction and increased abrasion resistance), while retaining the beneficial bulk properties of the underlying metal (manufacturability, fracture toughness, and ductility), providing a good solution for these medical implant applications.
Zr702 and Zr705 are zirconium alloys known for their high corrosion resistance. Zr702 is a commercially pure grade, widely used for its high corrosion resistance and low neutron absorption, particularly in the nuclear and chemical industries. Zr705, alloyed with 2–3% niobium, shows enhanced strength and crack resistance and is used for high-stress applications such as demanding chemical processing environments and medical implants.
Reduction of zirconium demand in Russia due to nuclear demilitarization after the end of the Cold War resulted in the exotic production of household zirconium items such as vodka shot glasses.
References
See also
Google books search results for the dedicated conference named "Zirconium in the nuclear industry"
Construction of the Fukushima nuclear power plants
Google books search results Stith, Tai. Science, Submarines & Secrets: The Incredible Early Years of the Albany Research Center. United States, Owl Room Press ISBN 9781735136646.
Zirconium alloys
Nuclear materials | Zirconium alloys | [
"Physics",
"Chemistry"
] | 4,235 | [
"Materials",
"Nuclear materials",
"Alloys",
"Zirconium alloys",
"Matter"
] |
3,096,395 | https://en.wikipedia.org/wiki/Rotation%20around%20a%20fixed%20axis | Rotation around a fixed axis or axial rotation is a special case of rotational motion around an axis of rotation fixed, stationary, or static in three-dimensional space. This type of motion excludes the possibility of the instantaneous axis of rotation changing its orientation and cannot describe such phenomena as wobbling or precession. According to Euler's rotation theorem, simultaneous rotation along a number of stationary axes at the same time is impossible; if two rotations are forced at the same time, a new axis of rotation will result.
This concept assumes that the rotation is also stable, such that no torque is required to keep it going. The kinematics and dynamics of rotation around a fixed axis of a rigid body are mathematically much simpler than those for free rotation of a rigid body; they are entirely analogous to those of linear motion along a single fixed direction, which is not true for free rotation of a rigid body. The expressions for the kinetic energy of the object, and for the forces on the parts of the object, are also simpler for rotation around a fixed axis, than for general rotational motion. For these reasons, rotation around a fixed axis is typically taught in introductory physics courses after students have mastered linear motion; the full generality of rotational motion is not usually taught in introductory physics classes.
Translation and rotation
A rigid body is an object of a finite extent in which all the distances between the component particles are constant. No truly rigid body exists; external forces can deform any solid. For our purposes, then, a rigid body is a solid which requires large forces to deform it appreciably.
A change in the position of a particle in three-dimensional space can be completely specified by three coordinates. A change in the position of a rigid body is more complicated to describe. It can be regarded as a combination of two distinct types of motion: translational motion and circular motion.
Purely translational motion occurs when every particle of the body has the same instantaneous velocity as every other particle; then the path traced out by any particle is exactly parallel to the path traced out by every other particle in the body. Under translational motion, the change in the position of a rigid body is specified completely by three coordinates such as x, y, and z giving the displacement of any point, such as the center of mass, fixed to the rigid body.
Purely rotational motion occurs if every particle in the body moves in a circle about a single line. This line is called the axis of rotation. Then the radius vectors from the axis to all particles undergo the same angular displacement at the same time. The axis of rotation need not go through the body. In general, any rotation can be specified completely by the three angular displacements with respect to the rectangular-coordinate axes x, y, and z. Any change in the position of the rigid body is thus completely described by three translational and three rotational coordinates.
Any displacement of a rigid body may be arrived at by first subjecting the body to a displacement followed by a rotation, or conversely, to a rotation followed by a displacement. We already know that for any collection of particles (whether at rest with respect to one another, as in a rigid body, or in relative motion, like the exploding fragments of a shell) the acceleration of the center of mass is given by
∑Fext = M acm
where M is the total mass of the system and acm is the acceleration of the center of mass. There remains the matter of describing the rotation of the body about the center of mass and relating it to the external forces acting on the body. The kinematics and dynamics of rotational motion around a single axis resemble the kinematics and dynamics of translational motion; rotational motion around a single axis even has a work-energy theorem analogous to that of particle dynamics.
Kinematics
Angular displacement
Given a particle that moves along the circumference of a circle of radius r, having moved an arc length s, its angular position is θ relative to its initial position, where θ = s/r.
In mathematics and physics it is conventional to treat the radian, a unit of plane angle, as 1, often omitting it. Units are converted as follows: 360° = 2π rad, and 1 rad ≈ 57.3°.
An angular displacement is a change in angular position:
Δθ = θ₂ − θ₁
where Δθ is the angular displacement, θ₁ is the initial angular position and θ₂ is the final angular position.
Angular velocity
Change in angular displacement per unit time is called angular velocity, with direction along the axis of rotation. The symbol for angular velocity is ω and the units are typically rad s⁻¹. Angular speed is the magnitude of angular velocity.
The instantaneous angular velocity is given by
ω(t) = dθ/dt
Using the formula for angular position and letting v = ds/dt, we have also
ω = v/r
where v is the translational speed of the particle.
Angular velocity and frequency are related by
ω = 2πf
Angular acceleration
A changing angular velocity indicates the presence of an angular acceleration in a rigid body, typically measured in rad s⁻². The average angular acceleration over a time interval Δt is given by
ᾱ = Δω/Δt
The instantaneous angular acceleration α(t) is given by
α(t) = dω/dt = d²θ/dt²
Thus, the angular acceleration is the rate of change of the angular velocity, just as acceleration is the rate of change of velocity.
The translational acceleration of a point on the rotating object is given by
at = rα
where r is the radius or distance from the axis of rotation. This is also the tangential component of acceleration: it is tangential to the direction of motion of the point. If this component is 0, the motion is uniform circular motion, and the velocity changes in direction only.
The radial acceleration (perpendicular to the direction of motion) is given by
ar = v²/r = ω²r
It is directed towards the center of the rotational motion, and is often called the centripetal acceleration.
The angular acceleration is caused by the torque, which can have a positive or negative value in accordance with the convention of positive and negative angular frequency. The relationship between torque and angular acceleration (how difficult it is to start, stop, or otherwise change rotation) is given by the moment of inertia: τ = Iα.
Equations of kinematics
When the angular acceleration is constant, the five quantities angular displacement θ, initial angular velocity ω₀, final angular velocity ω, angular acceleration α, and time t can be related by four equations of kinematics:
ω = ω₀ + αt
θ = ω₀t + ½αt²
ω² = ω₀² + 2αθ
θ = ½(ω₀ + ω)t
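For instance, spinning a body up from rest at constant angular acceleration (a minimal sketch in Python; the numbers are illustrative), with the time-independent relation used as a cross-check:

```python
omega0 = 0.0  # initial angular velocity, rad/s
alpha = 2.0   # constant angular acceleration, rad/s^2 (illustrative)
t = 5.0       # elapsed time, s

omega = omega0 + alpha * t                 # final angular velocity
theta = omega0 * t + 0.5 * alpha * t ** 2  # angular displacement

# Cross-check: omega^2 = omega0^2 + 2*alpha*theta
assert abs(omega ** 2 - (omega0 ** 2 + 2 * alpha * theta)) < 1e-9
print(f"omega = {omega} rad/s, theta = {theta} rad")  # 10.0 rad/s, 25.0 rad
```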
Dynamics
Moment of inertia
The moment of inertia of an object, symbolized by I, is a measure of the object's resistance to changes to its rotation. The moment of inertia is measured in kilogram metre² (kg m2). It depends on the object's mass: increasing the mass of an object increases the moment of inertia. It also depends on the distribution of the mass: distributing the mass further from the center of rotation increases the moment of inertia by a greater degree. For a single particle of mass m a distance r from the axis of rotation, the moment of inertia is given by
I = mr²
Torque
Torque is the twisting effect of a force F applied to a rotating object which is at position r from its axis of rotation. Mathematically,
τ = r × F
where × denotes the cross product. A net torque acting upon an object will produce an angular acceleration of the object according to
τ = Iα
just as F = ma in linear dynamics.
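A numeric illustration of these two relations (NumPy; the lever arm, force, and moment of inertia are made-up values):

```python
import numpy as np

r = np.array([0.5, 0.0, 0.0])   # lever arm from the axis, m
F = np.array([0.0, 10.0, 0.0])  # applied force, N
I = 2.0                         # moment of inertia about the z axis, kg m^2

tau = np.cross(r, F)            # torque vector: [0, 0, 5] N m, along the axis
alpha = tau[2] / I              # angular acceleration about z: 2.5 rad/s^2
print(tau, alpha)
```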
The work done by a torque acting on an object equals the magnitude of the torque times the angle through which the torque is applied:
W = τθ
The power of a torque is equal to the work done by the torque per unit time, hence:
P = τω
Angular momentum
The angular momentum L is a measure of the difficulty of bringing a rotating object to rest. It is given by
L = ∑ rᵢ × pᵢ
where the sum is taken over all particles in the object.
Angular momentum is the product of moment of inertia and angular velocity:
L = Iω
just as p = mv in linear dynamics.
The analog of linear momentum in rotational motion is angular momentum. The greater the angular momentum of the spinning object such as a top, the greater its tendency to continue to spin.
The angular momentum of a rotating body is proportional to its mass and to how rapidly it is turning. In addition, the angular momentum depends on how the mass is distributed relative to the axis of rotation: the further away the mass is located from the axis of rotation, the greater the angular momentum. A flat disk such as a record turntable has less angular momentum than a hollow cylinder of the same mass and velocity of rotation.
Like linear momentum, angular momentum is a vector quantity, and its conservation implies that the direction of the spin axis tends to remain unchanged. For this reason, a spinning top remains upright whereas a stationary one falls over immediately.
The angular momentum equation can be used to relate the moment of the resultant force on a body about an axis (sometimes called torque) and the rate of rotation about that axis.
Torque and angular momentum are related according to
τ = dL/dt
just as F = dp/dt in linear dynamics. In the absence of an external torque, the angular momentum of a body remains constant. The conservation of angular momentum is notably demonstrated in figure skating: when pulling the arms closer to the body during a spin, the moment of inertia is decreased, and so the angular velocity is increased.
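The skater example in numbers (a sketch; the moments of inertia are illustrative):

```python
# Conservation of angular momentum: I1*omega1 = I2*omega2 with no external torque.
I1, omega1 = 4.0, 3.0  # arms out: kg m^2, rad/s (illustrative values)
I2 = 1.6               # arms pulled in: kg m^2

omega2 = I1 * omega1 / I2
print(f"spin rate rises from {omega1} to {omega2} rad/s")  # 3.0 -> 7.5

# The rotational kinetic energy 0.5*I*omega^2 increases (18 J -> 45 J);
# the skater's muscles supply the difference.
K1, K2 = 0.5 * I1 * omega1 ** 2, 0.5 * I2 * omega2 ** 2
print(K1, K2)
```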
Kinetic energy
The kinetic energy Krot due to the rotation of the body is given by
Krot = ½Iω²
just as Ktrans = ½mv² in linear dynamics.
Kinetic energy is the energy of motion. The amount of translational kinetic energy depends on two variables: the mass of the object (m) and the speed of the object (v), as the equation above shows. Kinetic energy must always be either zero or a positive value; while velocity can have either a positive or negative value, velocity squared is always positive.
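The dependence on mass distribution noted earlier can be checked directly. A sketch comparing a solid disk with a thin-walled hollow cylinder of the same mass, radius, and spin rate (standard moments of inertia ½MR² and MR²; the numbers are illustrative):

```python
M, R, omega = 2.0, 0.3, 10.0  # mass (kg), radius (m), angular velocity (rad/s)

bodies = {
    "solid disk": 0.5 * M * R ** 2,  # I = (1/2) M R^2
    "hollow cylinder": M * R ** 2,   # I = M R^2 (thin-walled)
}
for name, I in bodies.items():
    L = I * omega             # angular momentum, kg m^2/s
    K = 0.5 * I * omega ** 2  # rotational kinetic energy, J
    print(f"{name}: I = {I:.3f} kg m^2, L = {L:.2f}, K = {K:.2f} J")
```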
Vector expression
The above development is a special case of general rotational motion. In the general case, angular displacement, angular velocity, angular acceleration, and torque are considered to be vectors.
An angular displacement is considered to be a vector, pointing along the axis, of magnitude equal to that of the rotation angle Δθ. A right-hand rule is used to find which way it points along the axis; if the fingers of the right hand are curled to point in the way that the object has rotated, then the thumb of the right hand points in the direction of the vector.
The angular velocity vector also points along the axis of rotation in the same way as the angular displacements it causes. If a disk spins counterclockwise as seen from above, its angular velocity vector points upwards. Similarly, the angular acceleration vector points along the axis of rotation in the same direction that the angular velocity would point if the angular acceleration were maintained for a long time.
The torque vector points along the axis around which the torque tends to cause rotation. To maintain rotation around a fixed axis, the total torque vector has to be along the axis, so that it only changes the magnitude and not the direction of the angular velocity vector. In the case of a hinge, only the component of the torque vector along the axis has an effect on the rotation, other forces and torques are compensated by the structure.
Mathematical representation
Examples and applications
Constant angular speed
The simplest case of rotation around a fixed axis is that of constant angular speed. Then the total torque is zero. For the example of the Earth rotating around its axis, there is very little friction. For a fan, the motor applies a torque to compensate for friction. Similar to the fan, equipment found in the mass-production manufacturing industry demonstrates rotation around a fixed axis effectively. For example, a multi-spindle lathe is used to rotate the material on its axis to effectively increase the productivity of cutting, deformation and turning operations. The angle of rotation is a linear function of time, which modulo 360° is a periodic function.
An example of this is the two-body problem with circular orbits.
Centripetal force
Internal tensile stress provides the centripetal force that keeps a spinning object together. A rigid body model neglects the accompanying strain. If the body is not rigid this strain will cause it to change shape. This is expressed as the object changing shape due to the "centrifugal force".
Celestial bodies rotating about each other often have elliptic orbits. The special case of circular orbits is an example of a rotation around a fixed axis: this axis is the line through the center of mass perpendicular to the plane of motion. The centripetal force is provided by gravity, see also two-body problem. This usually also applies for a spinning celestial body, so it need not be solid to keep together unless the angular speed is too high in relation to its density. (It will, however, tend to become oblate.) For example, a spinning celestial body of water must take at least 3 hours and 18 minutes to rotate, regardless of size, or the water will separate. If the density of the fluid is higher the time can be less. See orbital period.
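The quoted 3 hours and 18 minutes follows from balancing gravity against the centripetal requirement at the equator of a sphere of density ρ, which gives the standard critical period T = √(3π/(Gρ)). A check in Python (G and the density of water are standard values):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
rho = 1000.0   # density of water, kg/m^3

T = math.sqrt(3.0 * math.pi / (G * rho))    # critical rotation period, s
print(f"T ≈ {T:.0f} s ≈ {T / 3600:.2f} h")  # ≈ 11884 s ≈ 3.30 h (3 h 18 min)
```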
Plane of rotation
See also
Anatomical terms of motion
Artificial gravity by rotation
Axle
Axial precession
Axial tilt
Axis–angle representation
Carousel, Ferris wheel
Center pin
Centrifugal force
Centrifuge
Centripetal force
Circular motion
Coriolis effect
Fictitious force
Flywheel
Gyration
Instant centre of rotation
Linear-rotational analogs
Optical axis
Revolutions per minute
Revolving door
Rigid body angular momentum
Rotation matrix
Rotational speed
Rotational symmetry
Run-out
References
Fundamentals of Physics Extended 7th Edition by Halliday, Resnick and Walker.
Concepts of Physics Volume 1, by H. C. Verma, 1st edition,
Celestial mechanics
Euclidean symmetries
Rotation | Rotation around a fixed axis | [
"Physics",
"Mathematics"
] | 2,683 | [
"Physical phenomena",
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Classical mechanics",
"Astrophysics",
"Rotation",
"Motion (physics)",
"Mathematical relations",
"Celestial mechanics",
"Symmetry"
] |